
The Scoop
Artificial intelligence startup Anthropic is launching an effort to address the economic consequences of rapidly advancing AI technology, after CEO Dario Amodei made dire predictions about job losses and a massive spike in unemployment.
On Friday, the company will kick off the Anthropic Economic Futures Program, which will bolster research on the impacts of AI and encourage new proposals on how to mitigate the downsides.
Amodei has been outspoken about the potential fallout of AI, including the risks to humanity. More recently, he predicted AI would eliminate 50% of entry-level white-collar jobs within five years, a forecast that has upset the Trump administration, given its focus on incentivizing innovation to compete against China.
“We just really want to catalyze more people to be thinking about this, studying this,” said Sarah Heck, Anthropic’s head of policy programs and partnerships. “This is really a wide call for people thinking creatively about how we can study the broad effects of AI.”
Large tech companies like Google and Microsoft now say about a quarter of their in-house code is produced by AI models, and some companies have begun to hire fewer software developers, in part because their existing developers, aided by AI, can do more. Anthropic’s AI models are a favorite among software developers and power AI coding startups such as Cursor and Replit.
The goal of the Economic Futures Program is to glean more information on how AI is affecting the broader global economy and labor market. An in-house team of economic experts at Anthropic plans to award 20 to 50 global research grants of up to $50,000 each, and to provide free access to Anthropic’s AI products to assist in the analysis.
Heck said a few areas that researchers might look at are AI’s effect on gross domestic product, how quickly the technology is being adopted, and whether the labor market is already changing as a result.
The program will also establish new forums for researchers and policymakers to develop ideas on how to prepare for the eventual onslaught of AI. This fall, Anthropic plans to host conferences in Washington, DC, and Europe to present and discuss those plans.
Know More
In February, Anthropic launched its own “Economic Index” to glean insights into how AI is changing the economy based on how people are using Claude, the company’s chatbot. Amodei predicted in March that AI might handle almost all coding tasks within a year.
The new Economic Futures Program will expand the Economic Index to include new data, both from new research and from public sources, giving a fuller picture of the changes on the ground.
Anthropic has woven the perils of AI into the mission of the company, a public benefit corporation that has committed to putting humanity before investors. The company was founded in 2021 by several former OpenAI employees who wanted to focus more on AI safety. Those founders, including Amodei, have said the best way to ensure powerful models remain safe is to develop them first, with safety in mind.
The company has advocated for regulations at the federal and state level aimed at preventing dangerous outcomes of AI development, such as the proliferation of biological weapons.
Critics have accused Anthropic of using AI safety and regulation as a way to consolidate power and increase the valuation of the company.
Recently, Anthropic ran afoul of Trump administration officials by lobbying against a provision of a Republican tax bill that would impose a moratorium on state AI regulations.
Anthropic’s new endeavor aims to fund researchers and economists from a diverse set of ideological backgrounds based all over the world. “I would, as much as possible, like us to be politically agnostic,” Heck said.

Reed’s view
People who work in tech have been thinking for years about the economic implications of AI. The solution most commonly floated is universal basic income, which pays people a regular income regardless of whether they work. It’s time to think of some new ideas.
What’s notable about Anthropic’s move here is that it’s essentially saying, “We don’t know exactly what to do, but we’d like smart people to think about it more.” Amodei’s splashy predictions about how fast AI is going to displace workers or software developers make for great headlines and have gotten the company some attention, but funding actual research will do a lot more to prepare us for what’s coming. I wouldn’t expect any black-and-white answers, though. The trajectory of AI capabilities, and how quickly they might be adopted, is guesswork.
Heck referenced datasets from the Bureau of Labor Statistics, LinkedIn, and others as source material for studying the effects of AI. I asked her, though, how current statistics could really tell us much about the effects of future AI tools.
She argued that AI is being adopted widely enough in everyday life that there may already be data out there on the effects, if researchers look hard enough. And she made a good point that part of the research could look at creating frameworks for studying AI’s impact before Pandora’s economic box has been opened.

Room for Disagreement
In an interview earlier this week on the Hard Fork podcast, OpenAI chief operating officer Brad Lightcap took issue with Amodei’s dire predictions about how AI might affect labor.
“We’ve seen no evidence of this,” Lightcap said. “Dario is a scientist, and I would hope that he takes an evidence-based approach to these types of things.”
“We work with every business under the sun,” he said. “We look at the problem and opportunity of deploying AI into every company on earth, and we have yet to see any evidence that people are kind of wholesale replacing entry-level jobs.”

Notable
- Researchers are racing to quantify the “cognitive cost” large language models impose on regular users’ creative capacities, The New Yorker’s Kyle Chayka writes.
- Proposals to stop states from regulating AI independent of the federal government have turned into a parliamentary and political minefield for Republicans, as they haggle over their megabill.