AI Policy Already Exists, We Just Don’t Call It That: Generally Applicable Law and New Technology

October 13, 2025

Jennifer Huddleston and Christopher Gardner

Recent debates around a potential moratorium on state-level artificial intelligence (AI) laws have raised questions about what might happen without federal action on AI. Opponents of the moratorium often express concerns about what might happen if there is a gap between such a moratorium and the establishment of a federal AI framework. However, many concerns about discrimination, fraud, or other abuses are already addressed by existing legal frameworks, which means the legal and regulatory environment in which a new technology like AI operates is far from the Wild West, particularly in industries that are already regulated. How might generally applicable law play out when it comes to AI, and what does this mean both for laws that might be preempted and for the concerns that might already be resolved?

Generally Applicable Law in the AI Context

The US has been a culture of fast-paced innovation and technological advancement since its founding, so it is only natural that our existing common law-focused legal system has, in many ways, been able to adapt to quickly changing technologies without new regulatory bodies or laws. This inherent adaptability stems primarily from generally applicable laws that focus on particular harms rather than regulating a specific technology.

In the context of new and developing general-purpose technologies, such as AI, a generally applicable law should not unfairly favor one form of technology over another. Such an approach focuses on the bad actor or the harm rather than on the technology used. This is typically achieved through legislation that reflects the functions and values citizens are expected to embody in their interactions with one another. This is also advantageous from a legislative or regulatory perspective, as it is more adaptable given the static nature of law compared to the dynamic nature of technology and innovation.

Trust in these existing generally applicable guardrails stems from a recognition that we all need to make: AI is not the first, nor will it be the last, new technology that society and our existing governance have adapted to.

Current Examples of Generally Applicable Law in a US Federal Context

The role of the US as a global leader in technological innovation has been made possible by the light-touch regulation associated with the reliance on generally applicable laws. This historical precedent is particularly important to keep in mind when considering our lack of understanding of the full potential use cases of generative AI. Generative AI has been around for only a few years, but it has already impacted nearly every aspect of our lives. Its role as an evolutionary technology augmenting the productivity of our work means that even industry-specific laws can be generally applicable when it comes to AI.

A prime example of this is FINRA’s (Financial Industry Regulatory Authority) Regulatory Notice 24-09. This notice did not introduce any new regulatory obligations but simply referred member firms back to the tech-neutral, industry-specific regulations that already govern member firms’ behavior in a highly competitive environment.

Other agencies have also stated their intention to either reconsider problematic, generally applicable law that can deter innovation or use their existing standards and regulations to resolve these concerns. For example, in 2023, Rohit Chopra, then director of the Consumer Financial Protection Bureau, said, “There is no AI exemption to the laws on the books.” In this regard, existing laws likely already address many of the high-risk concerns, such as the use of AI in the financial services sector, as the use of AI does not absolve the deployer of wrongdoing.

However, this principle extends beyond precise regulations and requirements, such as those under the CFPB. Existing standards around professional conduct and common law may also already cover many of the underlying concerns about AI. For example, courts have responded with sanctions or other appropriate professional misconduct steps when attorneys have filed briefs or other documents containing AI “hallucinations”: citations to authorities that turn out to be erroneous or nonexistent. Such actions allow professional norms to continue to be enforced, without specific changes due to the emergence of new technology.

Generally Applicable Law in the Face of a Potential AI Moratorium

In 2025, Congress considered, but initially rejected, a potential moratorium on state-level AI regulation as part of the “One Big Beautiful Bill.” However, in September, Senator Ted Cruz (R‑TX) introduced a new state AI policy moratorium as separate legislation.

One grievance expressed by critics of the moratorium is that it would prevent states from trying to protect their citizens from the potential harms of AI, such as discriminatory applications or risks to child safety. However, under the moratorium proposals, the “primary purpose” of a preempted law must be to regulate AI, thus allowing states to continue to pass generally applicable laws that focus on potential harm.

Generally applicable laws would allow states to respond to concerns such as data privacy, fraud, or discrimination, provided that such laws are applied in a technologically neutral manner and not merely to AI or its applications. A potential improvement to current moratorium debates might be to clarify the process of updating existing generally applicable laws. Specifically, merely applying or updating an existing law to include violations committed via AI should not render that law’s “primary purpose” the regulation of AI in violation of the moratorium. This would allow states to confirm the applicability of existing general-purpose laws grounded in technologically neutral harms, in ways that could improve clarity and resolve concerns for developers, deployers, and consumers, while still limiting the potential to engage in model-level or other AI-specific regulation.

The Risk of Potential Over-Application of Generally Applicable Law to AI and How to Minimize It

The impact of generally applicable law on AI products is not without its own risks of overregulation. As with the internet before it, there may be cases where AI illustrates that existing regulations are suboptimal or ill-fitted.

For example, many states have, or are considering, general-purpose technology laws around issues like data privacy or youth online safety. These have their own consequences and would significantly impact AI, its application, and its development, yet they would likely be considered general-purpose laws rather than laws directly targeting AI.

Another risk of generally applicable law is that agencies could engage in overzealous interpretations of their own regulations in ways that hinder AI deployment. While in some cases such deterrence may prevent potential harm, in other cases it could also prevent significant beneficial applications. This is why, for example, the Department of Transportation has considered how its own generally applicable regulations may need to be amended or reconsidered to enable autonomous vehicles and other AI-driven transportation innovations.

One potential way to overcome some of the risks of poorly applying generally applicable law to AI is to consider the use of regulatory sandboxes. Sandboxes allow state or federal regulators to waive certain regulations or requirements for participants, usually for a limited period of time and often with alternative requirements in place. This approach can be an intermediary step toward more significant deregulation and allows testing of whether current regulations still reflect an appropriate risk profile. Ideally, sandboxes are not an end step but part of a broader analysis of whether regulations are still necessary, and they can allow new technologies, such as AI, to impact and improve previously regulated industries.

Conclusion

Fears of an AI moratorium or the slow pace of federal AI regulation are often based on the idea that there is nothing to handle the problems that arise. However, many of the risks and concerns about AI are not unique to the technology itself and may be addressed by existing laws. In fact, there may be times when we need to consider not only whether existing laws sufficiently cover AI, but also whether they no longer serve their intended purpose.


Copyright © 2025 ElonsVision. All Rights Reserved.