Update 803: Economic Implications of the AI Revolution

Update 803 – The Coming AI Revolution:
Economic Opportunities, Policy Challenges

The big news today comes with the release of the July CPI report, which showed that inflation as measured by CPI has fallen below 3.0 percent for the first time since March 2021, the fourth consecutive monthly decline. With inflation headed toward the Fed’s two percent target and the labor market notably cooling, the Fed is now poised to initiate interest rate cuts in September. We will review the data in further detail in our Friday update.

In today’s update, we explore the revolution in artificial intelligence (AI). AI has been a subject of considerable public debate and will undoubtedly spark even more as its impact on the economy and society grows. The influence that AI may have is broad: many people are concerned about what AI will mean for them economically, with worries ranging from job security and access to credit to how the emerging industry should be regulated. Below, we drill down into the current state of AI and outline some of its potential impact on the economy and society.

Best,

Dana


Few technologies in recent memory have simultaneously inspired as much awe and dread as artificial intelligence. Indeed, AI has as much potential to transform society as the internet did before it, if not more. From finance and art to healthcare and scientific discovery, AI’s innovations could alter many aspects of our lives. It also comes with great risks, and balancing those risks against its rewards will be difficult for policymakers. Creating guardrails for an emergent technology is a challenge, but it is exactly what legislators and regulators will need to do. This update takes a snapshot of the developments that led to the current movement in AI, explores the scope of its possible impact on the economy and some of its risks, and highlights the decisions policymakers will confront on the issue.

Developmental Landscape of AI

Before getting into the details, it is worth taking a step back to focus on definitions. Artificial intelligence refers to the ability of machines and computers to perform tasks that would normally require human intelligence. Principal subcategories include:

  • Machine learning (ML) – ML refers to a subfield of AI in which computers “learn” without being explicitly programmed with a set of rules. Instead of programmers providing complex, detailed instructions for a computer to carry out a task, ML allows computers to infer patterns and relationships directly from data. ML models can solve problems and perform complex tasks, ranging from driving autonomous vehicles to curating a social media feed.
  • Generative AI (GenAI) – GenAI takes an ML model and goes further by allowing the program to create new content based on inputs. GenAI has a wide range of potential uses and is capable of generating text, images, music, videos, and much more.

AI developers are exploring how to use the technology in such disparate fields as healthcare, finance, education, and security. Many people have already encountered AI through an AI-powered customer service assistant, AI-generated images on social media, or an AI model used to do grunt work in their workplace. The future of what AI could accomplish is promising across many applications, from helping companies engage in quantitative trading to facilitating space exploration.

Economic Impact of AI

AI is expected to have a massive impact on the global economy. An April 2023 report from Goldman Sachs indicates that GenAI uses could drive a 7 percent (or $7 trillion) increase in global GDP over the following ten years. Similarly, PwC estimates that AI will drive $15.7 trillion of economic growth by 2030, with $3.7 trillion of that growth in the US alone. Some experts claim that AI could even power GDP growth of as much as 30 percent a year, though such estimates may be overly optimistic.

The chief way that AI is expected to spur economic growth is by increasing productivity. Goldman Sachs’s report projected that AI would lift productivity growth by 1.5 percentage points over the same 10-year period. Another report estimated that GenAI could increase labor productivity by 0.1 to 0.6 percent annually through 2040. AI is now capable of taking over “grunt work” normally reserved for humans, freeing employees to focus on higher-skilled tasks that are beyond AI’s current capabilities. AI is also making it substantially easier to analyze very large amounts of data rapidly, helping humans make decisions based on that data, or evaluate decisions made by an AI model itself, far more easily. These and other uses allow machines and humans to complement each other in the workplace, enabling existing workers to do significantly more work, and to do it more efficiently.
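To get a feel for what those annual productivity figures imply cumulatively, the short sketch below compounds the reported 0.1 to 0.6 percent annual range over an assumed 16-year horizon (roughly 2024 through 2040). The horizon length and the simple compound-growth formula are illustrative assumptions of ours, not taken from the cited reports.

```python
# Illustrative sketch: compound an annual productivity uplift over a multi-year
# horizon. The 16-year horizon is an assumption (roughly 2024 through 2040).

def cumulative_gain(annual_rate: float, years: int) -> float:
    """Cumulative fractional gain from compounding annual_rate for years."""
    return (1 + annual_rate) ** years - 1

YEARS = 16  # assumed horizon through 2040

low = cumulative_gain(0.001, YEARS)   # 0.1 percent per year
high = cumulative_gain(0.006, YEARS)  # 0.6 percent per year

print(f"Low estimate:  {low:.1%} cumulative labor productivity gain")
print(f"High estimate: {high:.1%} cumulative labor productivity gain")
```

Under these assumptions, the low end compounds to under 2 percent cumulatively, while the high end compounds to roughly 10 percent, which is why seemingly small annual figures can still matter for long-run living standards.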

AI could also have other effects on the global economy, including:

  • Increased productivity making it cheaper for businesses to operate, driving down the costs of goods and services.
  • Greater integration into manufacturing, allowing goods to be manufactured with higher efficiency at a lower cost.
  • Broader innovation, enhancing the development of new products and services while increasing capabilities for existing ones.
  • Enhanced ability to make better, data-informed decisions, allowing better assessment of market trends and risks.

Several major factors and considerations will likely affect the depth and scale of AI’s impact, including:

  • Cost and Resources – AI is very expensive and requires considerable human investment, energy, water, and other resources to train, build, and use. The sheer cost of developing and maintaining AI means that it could take many years for it to produce the changes many predict are on the horizon.
  • Bottlenecks – A report from the Kellogg School at Northwestern University argues that another important factor will be how useful AI proves in addressing productivity “bottlenecks” in industries where productivity has slowed despite groundbreaking technological innovations. One example of a bottleneck is the amount of time and effort it takes to travel long distances, which, despite advances in technology, has not gotten much more efficient or less expensive. If AI can ameliorate these bottlenecks and increase productivity in sectors that have not seen serious productivity growth in decades, the effect on the economy would be considerable.
  • Human Labor – The report argues that if AI merely performs tasks slightly more cheaply than human workers can, it could replace employees without producing enough growth to offset lost employment with gains in living standards. On the flip side, if AI can do tasks many times more effectively and efficiently than human labor can, the resulting economic growth would raise standards of living across the board.

Risks Associated with AI

AI does carry potential risks. A frequent concern over AI adoption is its potential for workforce displacement. This is not solely a theoretical concern: one survey of 750 business leaders found that 37 percent of respondents said AI had replaced workers in 2023, while 44 percent predicted there would be layoffs in 2024 due to AI efficiency. According to a report from the IMF, about 60 percent of jobs in advanced economies may be affected by AI. The report estimates that about half of the exposed jobs will benefit from AI integration, while the other half will face lower demand for their skills.

Source: IMF

That said, experts have argued that AI will have a transformative impact on the workforce rather than replacing it outright. Past automation and technological advancement did eliminate some jobs, but ultimately replaced them with other, higher-skilled jobs. If AI follows the same path, workers may be able to adapt by training in skills that are complementary to the new workplace, such as the ability to train and monitor AI.

But AI also increases the ability of bad actors to commit fraud. The rise of GenAI has allowed scammers to use personal information to create more personally targeted scams with ease, even imitating a victim’s voice and identity. These more sophisticated methods allow criminals to bypass conventional red flags, making their scams harder to spot. The damage is already considerable. For example, earlier this year, an employee at a Hong Kong multinational company transferred $25.5 million to a criminal who used an AI-generated deepfake to impersonate the company’s CFO. This is not an isolated incident either: in 2023, Americans reported losing a record $10 billion to scams according to the FTC, with unreported losses undoubtedly much higher.

Source: The Wall Street Journal

In a similar vein, bad actors’ use of AI brings considerable risks to cybersecurity. AI can make hacking easier and more potent, accelerate the spread of misinformation, and even disrupt elections. That said, AI also gives firms and governments greater tools to enhance cybersecurity. AI can make systems more secure by finding vulnerabilities and analyzing the extent to which sensitive information is at risk. It can also accelerate detection of and response to cyberattacks, identifying and mitigating potential threats faster than humans can with conventional methods. It is no wonder, then, that cybersecurity firms increasingly view AI as an essential component of their work on both offense and defense.

On the flip side, AI has also given financial firms and the government better tools to identify and combat fraud. For example, the Treasury Department launched fraud detection tools powered by AI in 2022, and in 2023 alone these tools recovered $375 million in stolen funds.

If not used properly, AI runs the risk of perpetuating discrimination and bias in a variety of harmful ways. The problem is that AI is only as good as the data it is given: the training data used to build models inevitably reflects the biases of the society in which it originates, thus leading to biased or discriminatory outcomes.

An example of an area in which AI could reinforce financial bias is lending. Banks and other financial institutions have long used algorithmic models to determine creditworthiness, and those models increasingly incorporate AI and ML. An investigation by The Markup found that mortgage approval algorithms were 80 percent more likely to deny applications from Black applicants than from white applicants. To combat this potential for bias and discrimination, organizations such as the ACLU, UnidosUS, the Anti-Defamation League, and others have pushed the federal government to take decisive action to curb potential civil rights violations perpetrated by the misuse of AI.

Furthermore, the growing use of pricing algorithms raises concerns that third-party AI tools may enable companies to engage in price fixing in contravention of existing antitrust laws to the detriment of competition and consumers.

Regulatory Policy Response to AI

The risks attending the growth of AI have put increasing pressure on lawmakers to facilitate its development while curtailing its potential harms. Earlier this month, the EU passed the AI Act, a comprehensive regulatory framework for the use of AI across its member states. The act categorizes AI systems according to the risks they pose to society, with Minimal Risk as the lowest and Unacceptable Risk as the highest, and regulates each category accordingly. The AI Act primarily impacts US tech companies, which remain today the world’s largest developers of AI systems. China, on the other hand, has focused less on regulating AI and more on enhancing its development and adoption so it can catch up with the US.

The Biden-Harris administration has taken the lead in pushing for higher standards in the government’s use of AI. In October 2022, the administration released its Blueprint for an AI Bill of Rights, including provisions for data privacy, algorithmic discrimination protections, and safe and effective AI systems, among other things. In October 2023, President Biden issued an executive order that established new standards for AI safety and security, protections for Americans’ privacy, and protections for civil rights, while still seeking to promote American leadership in the development of AI technologies. In March 2024, the Biden-Harris administration announced new guidance on how government agencies could use AI, including calls for safeguards and greater transparency.

Congress, on the other hand, has been much more hesitant to act on AI, with significant disagreement between Democrats and Republicans on how to approach the issue. In brief, Republicans have been much more reluctant than Democrats to enact regulations that they believe would stifle innovation. In May, Senate Majority Leader Chuck Schumer and a bipartisan group of senators released a roadmap for AI policy, calling on the government to invest billions in AI research and development and on Congress to come up with guardrails against AI’s biggest risks, such as discrimination and job displacement. On the heels of the roadmap’s announcement, the Senate Rules Committee advanced three bills that would counter threats AI poses to elections, such as the spread of misinformation.

Without action from Congress, the states have begun to pick up the slack on AI regulation. For example, California is currently considering SB 1047, a bill that would force companies to test powerful AI for safety before releasing it to the public and would empower the state’s attorney general to sue companies if their AI systems cause serious harm. 

The issue of AI regulation has made its way into the 2024 election discourse. The GOP’s party platform promised to repeal President Biden’s “dangerous” executive order on artificial intelligence, instead supporting AI development “rooted in free speech and human flourishing.” In a similar vein, Donald Trump’s allies are drafting an AI executive order that would launch a series of “Manhattan Projects” to develop military technology and likely deregulate AI development. While Kamala Harris has not yet come out with her own platform on AI, as Vice President, she has spoken on the need to ensure that AI is “adopted and advanced in a way that protects the public from potential harms and that ensures that everyone can enjoy its benefits.”

Rules of the Road Ahead for AI

It is undeniable that AI carries many potential benefits for economic growth and innovation. At the same time, it is equally undeniable that AI carries many risks that could do harm if the technology is implemented irresponsibly. Policymakers will need to thread the needle to allow the benefits of AI to be distributed throughout society while limiting its potential drawbacks.

The time is ripe for Congress and other policymakers to take the initiative and enact sensible AI legislation while the technology’s impact on society is evolving. While President Biden and Senator Schumer’s actions are a step in the right direction, America needs concrete legislative action to address AI. If handled correctly, AI could have an immense positive impact on society, and that goal should be what policymakers strive to achieve.