Scarinci Hollenbeck, LLC

Firm Insights

Will AI Companies Lose Key Liability Protections Under New Regulations?

Author: Christopher D. Warren

Date: January 21, 2025


The rapid growth of artificial intelligence (AI) has regulators scrambling to craft new laws to govern the technology. Businesses that develop AI solutions, as well as the companies that deploy them, should keep a close eye on regulatory developments.

Under a recent bipartisan proposal, AI companies could lose key protections under Section 230 of the Communications Decency Act that other Internet companies enjoy. Separately, the attorneys general of several states are calling for a regulatory framework that addresses the risks associated with AI without hampering the development of trustworthy applications.

Federal AI Legislation

The proposed “No Section 230 Immunity for AI Act” seeks to clarify that Section 230 immunity will not apply to claims based on generative AI. The bipartisan legislation was introduced on June 14, 2023, by Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.), the Ranking Member and the Chair, respectively, of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law.

“AI companies should be forced to take responsibility for business decisions as they’re developing products — without any Section 230 legal shield,” Sen. Blumenthal said in a statement. “This legislation is the first step in our effort to write the rules of AI and establish safeguards as we enter this new era. AI platform accountability is a key principle of a framework for regulation that targets risk and protects the public.”

As discussed in greater detail in prior articles, Section 230 provides:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In basic terms, the far-reaching law shields online platforms from being sued for content posted by a third-party user.

The “No Section 230 Immunity for AI Act” would amend the statute to add the following:

Nothing in this section (other than subsection (c)(2)(A)) shall be construed to impair or limit any claim in a civil action or charge in a criminal prosecution brought under Federal or State law against the provider of an interactive computer service if the conduct underlying the claim or charge involves the use or provision of generative artificial intelligence by the interactive computer service.

The legislation defines the term “generative artificial intelligence” as “an artificial intelligence system that is capable of generating novel text, video, images, audio, and other media based on prompts or other forms of data provided by a person.” ChatGPT, DALL-E, and Bard are currently among the most well-known generative AI interfaces.

State AGs Call for AI Regulation

On June 12, 2023, a coalition of 23 attorneys general (AGs) wrote a letter to the chief counsel for the National Telecommunications and Information Administration (NTIA), calling for the creation of a risk-based regulatory framework for AI. The letter was drafted by the AGs of Colorado, Connecticut, Tennessee, and Virginia and joined by their colleagues from other states, including California, New York, and New Jersey.

The AGs emphasized the importance of fostering the proper development of dynamic and trustworthy tools without hampering innovation. “This means, for example, that a prescriptive regulatory regime may not be best suited to this challenge,” the letter states. “By contrast, commitments to robust transparency, reliable testing and assessment requirements, and after-the-fact enforcement is a very promising approach.”

The AGs recommended a risk-based approach to regulation, highlighting that some AI use cases (e.g., routes for package delivery) present lower risks compared to others (e.g., health care delivery options). They also emphasized the need for nuanced evaluation of risks in the context of AI.

The AGs specifically called for NTIA to establish independent standards for transparency, including:

1. Testing

2. Assessments

3. Audits of AI solutions

Additionally, the AGs argued for states to enjoy concurrent enforcement authority in a federal AI regulatory regime:

“Significantly, State AG authority can enable more effective enforcement to redress possible harms. Consumers already turn to state Attorneys General offices to raise concerns and complaints, positioning our offices as trusted intermediaries that can elevate concerns and take action on smaller cases.”

Updates Since the Bipartisan Push to Regulate the AI Industry

Since the introduction of the “No Section 230 Immunity for AI Act,” the debate has intensified.

Support from Consumer Advocacy Groups: Many consumer advocacy organizations support limiting Section 230 protections for AI companies, arguing it is necessary to hold companies accountable for the risks posed by AI-generated content, including misinformation and bias. According to John Davison, director of the Center for AI Accountability:

The lack of accountability for AI outputs risks leaving consumers unprotected from harmful or misleading information. This legislation is an important step in closing that gap.

Industry Pushback: AI companies argue that stripping Section 230 protections could stifle innovation and lead to an avalanche of litigation, potentially hampering the development of new AI applications. OpenAI’s CEO Sam Altman recently stated:

While accountability is important, overly broad legislation risks discouraging startups and smaller players from entering the field. This is a space where thoughtful, balanced regulation is critical.

Broader AI Regulation Proposals: In addition to federal legislation, various states are exploring AI-specific regulatory frameworks, further complicating the compliance landscape for businesses. For example, California’s proposed “AI Accountability Act” would require AI developers to publicly disclose the datasets used to train their systems and submit regular bias audit reports.

Key Takeaways

Like most emerging technologies, AI carries both enormous risks and enormous benefits. There is a clear need for regulatory frameworks at both the federal and state levels, but how regulators will strike a balance between protecting the public and fostering innovation without stifling it remains to be seen.

No aspect of this advertisement has been approved by the Supreme Court. Results may vary depending on your particular facts and legal circumstances.
