what’s at stake
As Congress weighs whether and how to pass legislation regulating artificial intelligence, tech executives have become fixtures on Capitol Hill.
Beyond meeting with US lawmakers, big AI companies like Meta, Anthropic, and OpenAI are also getting into the business of political spending to influence AI rules.
OpenAI’s Sam Altman took these overt influence efforts a step further earlier this week by publishing a blueprint, first reported by Axios, for how the government should regulate AI to minimize risks and address the economic impact of the technology.
Altman billed the proposals as “people-first policy ideas,” but they also trained a spotlight on the AI industry’s public advocacy.
who’s making the case
Jeff Hauser, founder and executive director of the Revolving Door Project, argued that lawmakers should not allow the AI industry to heavily influence regulations they draft:
“Since Milton Friedman’s vision of ‘shareholder capitalism’ (that all corporations exist to do is enrich their shareholders) became pervasive, any claim that companies headquartered in the United States are in any meaningful sense ‘patriotically American’ or public interest-minded has been rendered null and void. And that goes double for an industry in which the largest shareholders are often foreign sovereign wealth funds or idiosyncratically authoritarian oligarchs, foreign or domestic.
“Obviously, the United States government should collect as much information as possible from corporations — no one is advocating that the government literally ignore all statements from the private sector. But as the AI industry proclaims that it will inevitably reorder all of human existence, AI companies should be the subjects of regulation, rather than its author.”
Vivek Chilukuri, who leads the Center for a New American Security’s technology and national security program, argued that the problem is not AI companies influencing regulations, but a lack of know-how among policymakers:
“Governments today are so behind in AI expertise that they often don’t know what questions to ask or problems to solve — let alone anticipate. For instance, industry has distinct knowledge about AI’s emergent capabilities and risks, the efficacy of evaluations, and innovations in model training and alignment. Failure to account for these fast-changing nuances would produce regulations that, at best, don’t work, and could actively cause harm.
“Narrowing the asymmetry of AI expertise between the public and private sectors is therefore paramount, but it will take time. This persistent imbalance, combined with the dizzying pace of AI progress, points to two truths: Government must act urgently to regulate AI, and it needs industry more than ever to do this well. But our broken campaign finance system and revolving door political culture have undermined the very trust in public-private collaboration that is essential for a successful AI transition. We’re running out of time to get this right.”
Nathan Leamer, executive director of Build American AI, a group affiliated with the pro-AI super PAC Leading the Future, argued that those who say they can write rules around AI without consulting the industry are peddling a “dangerous fantasy”:
“We have a long and successful history of emerging industries playing an important role in discerning relevant policy questions. It is a dangerous fantasy to believe we can regulate a technology as transformative as AI while keeping the very people who build it at arm’s length. Excluding industry from the room doesn’t make for ‘independent’ policy but sets the table for ignorant policy. The innovators on the front lines have the most granular understanding of these models, and ignoring their technical expertise is a recipe for a regulatory regime that is obsolete before the ink is even dry.
“However, with that said, effective governance isn’t a solo act; it’s a concert where academics, analysts, government, technical experts, and voters all play a necessary role in establishing basic safety standards that enable innovation. We need a national framework that balances the private sector’s deep-bench knowledge with the appropriate level of public oversight to ensure America continues to lead the world in innovation.”
notable
- Leading the Future backed five House Democrats this week, a sign that the group “plans to spend its considerable war chest on both sides of the aisle,” Politico reported.
- People close to Altman question whether he can be trusted, according to the New Yorker.