AI Legal Authority - Artificial Intelligence and Law Reference Network Member
Artificial intelligence intersects with law across a rapidly expanding front — touching liability frameworks, intellectual property ownership, employment regulation, consumer protection statutes, and constitutional due process guarantees. This page defines the scope of the AI Legal Authority reference network, explains how the network's 107 member properties are organized, describes the legal scenarios each jurisdiction-specific resource addresses, and maps the boundaries between state-law coverage and federal regulatory frameworks. The network serves as a structured reference point for understanding how AI-related legal questions arise at the state and federal levels across the United States.
Definition and scope
The AI Legal Authority network is a reference hub connecting 107 state and subject-matter legal authority sites to address the legal dimensions of artificial intelligence in the United States. The network operates under the broader National Legal Authority index, which coordinates reference-grade coverage across major practice areas and jurisdictions.
Artificial intelligence law, as a field, encompasses at least four distinct legal domains:
- Liability and tort law — questions of who bears responsibility when an AI system causes physical, financial, or reputational harm.
- Intellectual property law — disputes over copyright, patent, and trade secret ownership of AI-generated outputs and AI-training datasets.
- Employment and labor law — regulations governing AI-driven hiring, termination, surveillance, and wage-setting systems.
- Administrative and regulatory law — federal and state agency rules that impose disclosure, audit, or impact-assessment requirements on AI systems used in consequential decisions.
The Federal Trade Commission (FTC) has issued guidance under Section 5 of the FTC Act (15 U.S.C. § 45) identifying deceptive and unfair AI practices as enforcement priorities. The Equal Employment Opportunity Commission (EEOC) released technical guidance in 2023 addressing adverse impact liability under Title VII of the Civil Rights Act when employers use algorithmic screening tools (EEOC, "Artificial Intelligence and Algorithmic Fairness Initiative"). The National Institute of Standards and Technology (NIST) published the AI Risk Management Framework (AI RMF 1.0) in January 2023 (NIST AI RMF), which establishes a voluntary governance structure that regulators and courts have begun to reference as an industry standard of care.
Understanding the terminology that underlies all of these domains is essential; the U.S. Legal System Terminology and Definitions reference page provides definitions for foundational concepts that recur throughout AI-related litigation and rulemaking.
How it works
The AI Legal Authority network functions by mapping AI-related legal questions to the specific jurisdictions and practice areas where those questions arise, then connecting users to authoritative reference resources for each jurisdiction. The conceptual overview of how the U.S. legal system works explains the federal-state structure that determines which court, agency, or legislature holds authority over any given AI legal dispute.
The network's state-level members cover the full 50-state geography. Each state authority site documents local statutes, administrative regulations, and notable court decisions affecting AI in that jurisdiction. The framework for navigating these resources tracks three operational layers:
- Federal baseline — Constitutional provisions (First, Fourth, and Fourteenth Amendments), federal statutes (Title VII, the FCRA, the ADA) alongside proposed CCPA-analog federal legislation, and federal agency rules that set minimum standards applicable nationwide.
- State statutory layer — State legislatures enacting AI-specific bills or applying existing consumer protection, privacy, and tort statutes to AI conduct. Illinois's Biometric Information Privacy Act (BIPA, 740 ILCS 14) and California's AB 2013 (2024) represent the most litigated state-level frameworks as of 2024.
- Common law development — Courts adapting negligence, products liability, defamation, and contract doctrine to AI facts without statutory guidance.
The regulatory context for the U.S. legal system page details how federal agency rulemaking interacts with state authority across industries where AI deployment is accelerating — including financial services, healthcare, and transportation.
State-by-state member network
The 50-state coverage of this network provides jurisdiction-specific reference on the AI-adjacent legal questions a researcher, policy analyst, or legal professional is most likely to encounter.
Alabama Legal Services Authority covers state tort, contract, and regulatory frameworks applicable to AI-related disputes arising in Alabama courts, including reference to the Alabama Deceptive Trade Practices Act. Alaska Legal Services Authority addresses the unique jurisdictional considerations in Alaska, where tribal law, federal land authority, and state consumer protections intersect with emerging AI deployment in resource industries.
Arizona Legal Services Authority documents Arizona's sandbox regulatory program — one of the first state-level legal innovation sandboxes in the country — which has direct implications for AI-powered legal services companies operating under A.R.S. § 7-901. Arkansas Legal Services Authority provides reference on Arkansas tort law and the Arkansas Deceptive Trade Practices Act as applied to algorithmic consumer interactions.
California Legal Services Authority is the most heavily referenced state node in the network, given California's role as the primary source of AI statutory activity — including the California Consumer Privacy Act (CCPA, Cal. Civ. Code § 1798.100 et seq.), the Automated Decision Systems Accountability Act proposals, and CPPA rulemaking on automated decision-making technology. Colorado Legal Services Authority addresses Colorado's SB 24-205, the Colorado Artificial Intelligence Act signed in 2024, which imposes impact assessment requirements on high-risk AI systems — making Colorado the first state to enact comprehensive AI accountability legislation.
Connecticut Legal Services Authority covers Connecticut's data privacy law (Public Act 22-15) and its interaction with AI profiling and automated decision-making disclosures. Delaware Legal Services Authority is an essential resource for AI and corporate law questions because Delaware's Court of Chancery adjudicates the majority of U.S. corporate governance disputes, including fiduciary duty claims arising from board-level AI adoption decisions. The Delaware Contractor Authority provides complementary reference on contractor and procurement obligations relevant to AI service agreements under Delaware law.
Florida Legal Services Authority documents Florida's HB 1459 (2024) on government use of AI and the Florida Digital Bill of Rights framework affecting commercial AI data practices. Georgia Legal Services Authority covers Georgia tort law and the state's growing fintech and AI industry regulatory environment, including Georgia Department of Banking and Finance guidance.
Hawaii Legal Services Authority addresses Hawaii's unique Pacific jurisdiction considerations and the state legislature's active AI task force recommendations affecting public-sector AI procurement. Idaho Legal Services Authority covers Idaho's relatively sparse AI-specific statutory framework and how common law negligence fills the gap in AI harm cases before Idaho courts.
Illinois Legal Services Authority is a critical reference node because Illinois hosts BIPA (740 ILCS 14), the most litigated biometric data statute in the United States, which applies directly to facial recognition and voice AI systems. Indiana Legal Services Authority covers Indiana's consumer data protection law (IC 24-15) and its automated decision-making provisions effective in 2026.
Iowa Legal Services Authority addresses Iowa's agricultural and financial sector AI deployments and the Iowa Consumer Fraud Act's application to algorithmic pricing systems. Kansas Legal Services Authority covers Kansas administrative law and the Kansas Consumer Protection Act as they apply to AI-generated commercial communications.
Kentucky Legal Services Authority documents Kentucky tort doctrine and the state's approach to AI in medical and insurance contexts under KRS Chapter 304. Louisiana Legal Services Authority addresses the civil law tradition in Louisiana — the only U.S. jurisdiction with a Civil Code system derived from Napoleonic law — and how that framework produces distinct AI liability analysis compared to common law states.
Maine Legal Services Authority covers Maine's data privacy statutes and the state's "opt-in" consent standard, which is stricter than the opt-out model in most other states and has direct AI data-training implications. Maryland Legal Services Authority addresses Maryland's Online Data Privacy Act (2024) and its provisions on profiling and automated decision-making affecting Maryland residents.
Massachusetts Legal Services Authority covers Massachusetts's active legislative environment around AI — including proposed bills on algorithmic employment decisions and facial recognition moratoriums — and the Massachusetts Consumer Protection Act (G.L. c. 93A) as applied to AI. Michigan Legal Services Authority documents Michigan's approach to AI in automotive systems, reflecting the state's dominant role in autonomous vehicle regulation and the Michigan Vehicle Code amendments addressing automated driving systems.
Minnesota Legal Services Authority covers Minnesota's Consumer Data Privacy Act (effective July 31, 2025) and the specific AI profiling rights it grants Minnesota consumers. Mississippi Legal Services Authority addresses Mississippi tort law and the
References
- Federal Trade Commission – AI Guidance and Enforcement
- 15 U.S.C. § 45 – FTC Act, Section 5 (Unfair or Deceptive Acts or Practices)
- Equal Employment Opportunity Commission – Artificial Intelligence and Algorithmic Fairness Initiative
- Title VII of the Civil Rights Act of 1964 – 42 U.S.C. § 2000e
- NIST AI Risk Management Framework (AI RMF 1.0)
- NIST – Artificial Intelligence Resource Center
- U.S. Copyright Office – Copyright and Artificial Intelligence
- Executive Order 14110 – Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Office of the Federal Register)
- Office of Management and Budget – Memorandum M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence
- Federal Trade Commission – 16 C.F.R. Part 255 (Endorsements and Testimonials, including AI-generated)