
First Impressions of the AI Order’s Impact on Fintech

by Jack Solowey
November 3, 2023

Jack Solowey, policy analyst at the Cato Institute’s Center for Monetary and Financial Alternatives.

This week, the Biden administration issued a long-anticipated Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “EO”). Given the breadth of the nearly 20,000-word document’s whole-of-government approach to AI—addressing the technology’s intersection with issues ranging from biosecurity to the labor force to government hiring—it unsurprisingly contains several provisions that address financial policy specifically.

Notably, the EO names financial services as one of several “critical fields” where the stakes of AI policy are particularly high. Nonetheless, because the EO provides no clear framework for financial regulators to validate the existence of heightened or novel risks from AI, or to weigh the benefits lost to intervention, it risks inviting agency overreach.

As a general matter, the EO largely calls on a host of administrative agencies to work on reports, collaborations, and strategic plans related to AI risks and capabilities. But the EO also orders the Secretary of Commerce to establish reporting mandates for those developing or providing access to AI models of certain capabilities. Under those mandates, developers of so-called “dual-use foundation models”—those meeting certain technical specifications and posing a “serious risk” to security and the public—must report their activities to the federal government.

In addition, those providing computing infrastructure of a certain capability must submit Know-Your-Customer reports to the federal government regarding foreign persons who use that infrastructure to train large AI models “that could be used in malicious cyber-enabled activity.”

While it’s conceivable that these general-purpose reporting provisions could impact the financial services sector where financial companies develop or engage with covered advanced models, the provisions most relevant to fintech today are found elsewhere in the EO.

Where financial regulators are concerned, the EO requires varying degrees of study and action. As for studies, the Treasury Department must issue a report on AI-specific cybersecurity best practices for financial institutions. More concretely, the Secretary of Housing and Urban Development is tasked with issuing additional guidance on whether the use of technologies like tenant screening systems and algorithmic advertising is covered by, or in violation of, federal laws on fair credit reporting and equal credit opportunity.

But the EO puts most financial regulators in a gray middle ground between the “study” and “act” ends of the spectrum, providing that agencies are “encouraged” to “consider” using their authorities “as they deem appropriate” to weigh in on a variety of financial AI policy issues. The Federal Housing Finance Agency and Consumer Financial Protection Bureau, for instance, are encouraged to consider requiring regulated entities to evaluate certain models (e.g., for underwriting and appraisal) for bias. More expansively, independent agencies generally—which would include the Federal Reserve and Securities and Exchange Commission—are encouraged to consider rulemaking and/or guidance to protect Americans from fraud, discrimination, and threats to privacy, as well as from (supposed) financial stability risks due to AI in particular.

The wisdom—or lack thereof—of these instructions can hinge on how the agencies interpret them. On the one hand, agencies should first ask whether existing authorities are relevant to AI issues—so as not to exceed those authorities. Similarly, agencies should ask whether applying those authorities to AI issues is appropriate—as opposed to blindly assuming AI presents heightened or novel risks requiring new rules without validating those assumptions.

On the other hand, to the extent agencies interpret the EO’s instructions as some version of “don’t just stand there, do something (or at least make it look like you are),” the order could become the very catalyst for misapplied authorities or excessive rules. Because the EO does not offer financial regulators a clear framework for confirming the presence of elevated or new risks from AI, or for minimizing the costs of intervention, it risks being read more as a call for financial regulators to hurry up and regulate than to deliberate thoughtfully. In so doing, the EO risks undercutting its own stated goal of “[h]arnessing AI for good and realizing its myriad benefits” while mitigating its risks.

For a chance to deliberate about financial AI policy questions, join the Cato Institute’s Center for Monetary and Financial Alternatives on November 16 for a virtual panel: “Being Predictive: Financial AI and the Regulatory Future.”
