Update 917: AI Regulatory Vacuum
Trump’s EO and the Need for Rules

AI has quickly entered the workplace and come to occupy a central place in capital markets, with the tech sector leading the year’s major gains. Last Thursday, Trump signed an Executive Order to shut down state regulation of AI, despite many Americans’ worries about the lack of rules as they face a flood of AI content on social media and data centers that have driven up retail electric bills by as much as 267 percent.

Today, we dive into the economic implications of AI for industry, jobs, and America’s position in the world. Federal regulatory negligence in this policy area poses a broad range of macro- and household-level economic risks, which we detail below.

Best, 

Dana


Last week, Trump signed an Executive Order that would prevent states from enacting and enforcing their own AI regulations. It represents the most substantial action on artificial intelligence of the second Trump administration. Trump’s stated goal is to prevent a patchwork of regulations across 50 states, but he failed to acknowledge that varying state regulations have emerged precisely because the lack of federal regulation has put the onus on individual states to pass their own.

In the midst of an AI investment boom, many Americans are growing concerned about the economic and other risks posed by the AI revolution. Below, we examine the concerns raised by the AI boom, the government actions taken to address them (or lack thereof), and the issues an AI regulatory regime designed by Congress must address.

The AI Boom and Growing Anxiety

Artificial intelligence is the story of the American economy right now, as 20/20 Vision covered back in October. Tech companies have invested an estimated $1.6 trillion in artificial intelligence since 2013, with a forecasted $375 billion of that coming in 2025 alone. According to Bank of America, just four of the biggest AI companies (Microsoft, Amazon, Alphabet, and Meta) made $344 billion in capital expenditures this year, roughly 1.1 percent of annual US GDP, mostly on AI infrastructure. While the revenue generated by AI remains insignificant compared to the money being invested in it, surveys show that CEOs plan to spend even more on AI in 2026, fueling debate over whether the current pace of spending is sustainable and whether a recession will follow once it dies down.

Source: Reuters

AI has also become a crucial part of geopolitics, especially concerning the rivalry between the US and China. Both countries are racing to take the lead on global AI development, knowing that if the technology becomes as transformative as the internet, it could give them a sizable advantage in the competition for global influence. The current consensus is that the US has the most powerful AI models and better access to the most advanced computer chips, but China has a significant advantage in its electric grid and other infrastructure needed to support AI adoption. Access to advanced AI hardware has become a major point of interest in bilateral trade negotiations between the two countries in the wake of Trump’s decision to allow China to buy advanced H200 chips from Nvidia.

With AI enthusiasm reaching a fever pitch on Wall Street and in Silicon Valley, it should come as no surprise that there are growing concerns about what the AI revolution could mean for everyday Americans’ job security. Like any transformational technology, AI could displace a large number of workers. While past waves of automation offset displacement by creating new, higher-skilled jobs, AI is unique in that high-skilled, white-collar jobs are now the most at risk. The number of jobs AI has created so far remains small as a share of the broader labor market, so whether AI will create enough jobs to offset those it eliminates remains an open question.

Another issue of significant concern is AI-powered scams and fraud. According to analysts at JPMorgan, losses from cyber threats and fraud scams rose 33 percent to $16.6 billion in 2024, driven largely by AI. Deepfakes and other AI-powered tools have made it easier for scammers to impersonate trusted figures and swindle their victims. In one particularly elaborate scam, fraudsters tricked a Hong Kong finance worker into believing he was on a video conference with his company’s CFO and induced him to wire $25 million of the company’s funds. Tools to identify deepfakes and other impersonations have not caught up with the technology, and cybercriminals are making the most of the moment to steal billions of dollars annually.

AI Regulation at the Federal and State Levels

Legislators can no longer avoid stepping in to address the potential dangers of AI. According to a Gallup poll released in September, Americans prioritize AI safety and data security over rapid AI development by a 71-percentage-point margin. Despite the growing demand, the federal government has not rushed to deliver a nationwide standard. Last year, a bipartisan Senate AI working group released a blueprint for AI policy, but Congress has followed up with only limited legislation focused on a narrow set of issues, not an overarching regulatory framework. In 2023, President Biden issued executive orders to establish guardrails for AI development, but the Trump administration repealed them immediately upon taking office. Existing federal regulations do very little to address the specific issues raised by AI.

The lack of federal action has created a regulatory vacuum, leaving state governments to fill the void. According to the National Conference of State Legislatures, as of July, 38 states had already adopted or enacted 100 AI regulatory measures. These measures cover a wide range of issues, with some of the more notable trends being laws to: 

  • protect data privacy, 
  • protect intellectual property,
  • combat the spread of misinformation, and
  • combat AI-fueled criminal activity. 

This is a bipartisan issue, with legislatures from North Dakota to California stepping up to pass AI regulations.

With states leading on regulation, AI companies face a patchwork of differing regulations and little regulatory clarity. Trump and the GOP’s solution to this disparity is to clamp down on state-level AI regulation that conflicts with their industry-supported agenda of rapid AI development. Last week, after two failed attempts to pass a moratorium on state AI regulation through Congress, Trump signed an executive order to:

  • Empower the attorney general to sue states that pass laws which endanger “the United States’ global AI dominance.”
  • Direct federal regulators to withhold federal funds for broadband and other projects if states do not comply.
  • Call on Trump’s Special Advisor for AI and Crypto, David Sacks, to create a “legislative recommendation” for a uniform Federal AI policy framework that would serve as the basis for congressional action.

The bottom line is that the EO would curtail state-level AI regulations and encourage Congress to act on the matter, but would not institute any federal regulations in the interim to fill the gap while Congress proceeds through the legislative process.

While tech CEOs praised the executive order, it drew immediate bipartisan backlash. For a party that has long valued states’ rights, an executive order to limit states’ ability to pass their own laws on AI has rubbed a lot of Republicans the wrong way. Florida Governor Ron DeSantis criticized the push for a state AI regulation moratorium by stating that “denying the people the ability to channel these technologies in a productive way via self-government constitutes federal government overreach and lets technology companies run wild.” Trump’s former campaign advisor Steve Bannon called the EO “entirely unenforceable” and blamed tech CEOs for pushing Trump to enact a policy that, according to Bannon, would hurt Trump’s MAGA base while enriching Big Tech.

GOP criticism of Trump’s EO has been far more muted on the Hill, but many GOP lawmakers are still pushing for their concerns to be addressed in a prospective bill establishing a federal framework for AI. Senator Marsha Blackburn (R-TN), for example, has pushed hard for stronger protections for children from AI-powered surveillance and related threats. Given Congress’s inaction on AI to date, passing such legislation in 2026, ahead of the mid-term elections, seems a tall order. If Congress fails to pass anything, the EO could prevent states from enacting more regulations, and punish those that do not comply, without providing the federal regulatory regime needed to replace them.

The Case for Guardrails on AI Development and Use

AI will continue to develop and could be a net benefit to society, but not without regulation. Congress and the administration need to pick up the slack and provide federal safeguards against the already prominent harms of AI.

Fraud

Laws to combat AI-powered fraud and scams would be a good place to start. Beyond enhancing penalties for using AI tools to commit financial crimes, lawmakers should also allocate resources to the DOJ and federal regulators to develop their own tools to better identify and prevent deepfakes and other forms of impersonation. There should also be greater federal protections in place to prohibit people’s voices and likenesses from being used without their permission and allow individuals to sue fraudsters who do so anyway.

A good example of legislation to combat AI-powered scams is the Artificial Intelligence Scam Prevention Act. The bill, introduced by Senators Amy Klobuchar (D-MN) and Shelley Moore Capito (R-WV), would explicitly prohibit the use of AI to impersonate any person with the intent to defraud.

Transparency and AI Governance

Clear rules and guidelines for AI governance are needed, particularly around transparency and accountability in AI-powered decision-making. As AI models grow more advanced and their processes more complex, there is a real risk that firms using AI to inform their decisions will not fully understand how or why their models reach conclusions on essential issues. For example, some have raised concerns that using AI in mortgage underwriting could lead to minority applicants being denied loans at a higher rate than white borrowers with similar credit histories, because the algorithm relied on proxies for racial identity in its training data, such as zip codes, without the underwriter even realizing it.

Firms could start relying on AI to help them make critical business decisions, such as how to invest their money, without fully understanding the reasoning behind those decisions. Lawmakers can help by developing standards for reporting and disclosure of AI use. They can also require that human professionals, such as lawyers, remain in the loop to oversee AI decision-making and ensure that AI handles personal data in compliance with federal regulations.

One example of legislation to address this topic is Colorado’s AI Act, which requires firms to identify when they use AI as a “substantial factor” in “consequential decisions,” such as when they choose whether or not to underwrite a loan, while also mandating transparency and accountability for developers and deployers. 

Data Protection

Lawmakers and regulators need to enact safeguards to ensure that people’s private data is not used without their permission, especially for illicit activities. The explosion of AI makes stronger data privacy protections even more critical, with companies and individuals already making money from the stolen data, images, and likenesses of people, including children. Regulations should allow individuals to refuse AI firms’ use of their sensitive data, such as medical records, to train models, and to block companies from using their image and likeness for unauthorized commercial purposes. These protections should be especially strong for children’s data, images, and likenesses, as generative AI tools are already being exploited by online predators.

One good example of state data protection laws is the Maryland Online Data Privacy Act of 2024, which gives Maryland residents new tools to confirm how their data is used by online data holders and to opt out of that use if they choose.

The Way Forward

Business leaders and politicians want to seize on the potential of AI, and the technology could greatly transform the global economy, for better and for worse. Congress’s inaction has allowed the administration to set federal AI policy by banning state-level regulation, which does little more than eliminate what few guardrails exist against AI harms and prevent further guardrails from being established. Even light-touch regulation that, for example, updates existing laws to better cover criminals’ use of AI while placing few burdens on AI developers would be better than nothing.

The federal government’s job is to lead the nation through advancements like this, not to prevent states from doing so in its absence. Congress and the administration need to make enacting a federal AI regulatory regime a priority and ensure it addresses the issues that could cause the most economic and societal harm.