The Scoop
Last month, NBC News broke the story that the artificial intelligence behemoth OpenAI had quietly issued a series of legal threats against nonprofit organizations that have criticized it.
OpenAI reacted with frustration, not at the reporting itself, but at who paid for it.
The company privately complained to NBC News that the author of the story had been paid by the Tarbell Center for AI Journalism, as part of a fellowship program that embeds journalists within news organizations for several months to write stories about artificial intelligence and its growing impact.
OpenAI’s representatives pointed out that Tarbell, which has also supported reporting at Bloomberg, Time, The Verge, and The Los Angeles Times, is funded in part by one of the groups mentioned in the story, the Future of Life Institute, which is dedicated to warning about AI risks. The news organization quietly appended a disclosure to the piece noting the connection.
Now the Tarbell fellowship itself has become the latest front in a widening battle between accelerationists, including the aggressively pro-development technologists and investors championed by OpenAI and the Trump administration, and skeptics of guardrail-free AI development, including adherents of the loose movement known as effective altruism. Increasingly, that ideological debate has played out in the media, as OpenAI and its allies attempt to brighten the dark portrait of their industry that’s become common in coverage.
Know More
The behind-the-scenes media war over AI reflects a deep split inside the tech community over whether the private companies driving development are taking the technology’s risks seriously enough, and whether they should be bound by new state or federal regulations. Groups like those that fund Tarbell worry about consequences ranging from disinformation to the end of the human race. Investors and technologists on the other side dismiss some of those concerns as science fiction, and argue that the private sector is doing a good job of balancing safety with the tech race against China.
The debate has played out in OpenAI’s boardroom, where the effective altruists who helped found the organization tried unsuccessfully to oust CEO Sam Altman several years ago, and in the Oval Office, where President Donald Trump replaced the Biden administration’s skepticism of AI with industry-friendly policies.
Now, the two sides are increasingly also battling for control of the narrative in the news media, which has grown wary of AI amid stories about its real-world harms.
Over the last year, OpenAI and its allies have been seen as taking a more aggressive approach to critics and to the media; one close media observer noted the shift in tone between Altman’s cerebral, reflective 2021 interview with New York Times columnist Ezra Klein and his more combative onstage appearance on the Times’ Hard Fork podcast earlier this year. The company has staffed up with seasoned political professionals, including high-level Democrats focused on lobbying in California, an important battleground in state AI regulation, and Republicans focused on the federal government in Washington.
Allies of the accelerationists went to work in the media. They sought to point out where, they believed, the Biden administration and Democratic staffers in Congress were at times unwittingly influenced by AI skeptics and effective altruists aligned with Anthropic, the AI company that has taken a safety-conscious approach to development. Politico reported in 2023 that the Horizon Institute for Public Service — a group that was “effectively created” by Open Philanthropy, a foundation with ties to Anthropic — had funded fellows working on AI in Senate Democratic offices.
The pro-safety crowd has also reacted to the growing tensions.
Open Philanthropy, which was founded to manage the fortune of Facebook co-founder Dustin Moskovitz and his wife, Cari Tuna, has been bothered by media coverage of it, in particular a series of critical stories from Politico. In recent months, the organization brought in additional experienced political communications staff and renamed itself Coefficient Giving.
Coefficient Giving is one of several nonprofits backing Tarbell. Launched in 2022, the center says in its mission statement that it funds work by early-career tech journalists focused on helping society “navigate the emergence of increasingly advanced AI.” The organization is “committed to supporting independent journalism that demystifies the debates around this technology, holds the companies and people building it to account, and fosters the discourse necessary to chart a path that benefits society,” it says.
That mission statement fit comfortably with the goals of leading news organizations, which in recent years have welcomed Tarbell fellows into their newsrooms. Over the last several years, Time magazine has had four fellows on staff. Bloomberg had a fellow on staff covering AI and technology. The Verge has had a series of fellows, who are largely treated like junior employees in the newsroom, as have The Los Angeles Times and The Information. (Earlier this year, Semafor briefly had a Tarbell fellow on staff, but ended the agreement after several weeks without publishing any of their work.)
The fellows, generally reporters at the beginning of their careers, have complete editorial independence. People familiar with the arrangements from Bloomberg, NBC News, and Vox Media, the parent company of The Verge, emphasized that the fellows operate under the guidance of editors within their respective publications, not Tarbell, but declined to discuss them on the record.
Still, the arrangement has raised questions among accelerationists and their allies, who believe news organizations are effectively accepting free labor and putting their thumb on the scale for a particular ideology.
Asked about the fellowship and the company’s complaints about NBC’s October article, OpenAI declined to comment. But people critical of the effective altruism movement have increasingly singled out the organization in unsubtle ways. Around the same time Semafor began asking about Tarbell, a critical piece about the fellowship and its ties to donors and investors appeared in the conservative Washington Examiner.
For its part, the Tarbell Center said attempts to discredit the work of its journalists proved why the organization’s funding was so important.
“The Tarbell Center exists to support rigorous and independent accountability journalism. We maintain a strict firewall between our funding and our fellows’ editorial output. Our fellows and their host newsrooms possess total autonomy; Tarbell never directs coverage, assignments, or angles,” Cillian Crosson, the center’s executive director, told Semafor in a statement.
“It’s telling that OpenAI is attempting to discredit independent reporting instead of addressing the factual findings of that reporting. As AI companies become some of the most powerful entities on the planet, investigating their activities — whether those of OpenAI, or anyone else — is exactly what journalists should be doing. We are proud to support that work.”
“I do think this is just OpenAI looking for a conspiracy, instead of inward,” one media executive at an outlet that has worked with Tarbell said.
In a statement to Semafor, Naina Bajekal, Coefficient Giving’s director of communications and the former executive editor of Time, said the organization’s work is “informed by the belief that AI has enormous potential to accelerate science and fuel economic growth, but that it could also pose some unprecedented risks — a common-sense view shared by leaders across the political spectrum.”
“We take editorial independence seriously: We have no involvement in coverage decisions and have every reason to believe that Tarbell and the newsrooms where its fellows work adhere to the editorial standards that lead to fair, balanced journalism,” Bajekal said.
Max’s view
At a time when many media organizations are facing steep financial challenges, the opportunity to bring in hungry, smart young journalists interested in covering the biggest story of the decade has been hard to resist. And over the last several years, Tarbell fellows have produced interesting work and legitimately newsworthy scoops, some of which have been positive about AI or critical of both OpenAI and Anthropic. OpenAI has rapidly become one of the most important companies in the world, and of course deserves strong journalistic scrutiny as it attempts to reshape modern life.
Still, at a moment of record low trust in media, editors may want to think twice before welcoming journalists paid by an outside organization with an ideological point of view on one of the biggest stories in the world.
Perhaps this is the new reality of a more modest news media in 2025. As the economics of the news business have become trickier, media companies have grown more accustomed to loosening some of the old boundaries between the editorial and revenue sides. Many take cuts of sales of products they recommend, while independent news creators often sell their own ads while writing about companies, services, and products. Podcast hosts who read advertisements themselves can charge higher rates than standard spots because listeners trust them.
Indeed, I emailed a leading thinker on nonprofit media ethics to ask what they thought of the arrangement. They declined to comment, acknowledging that they were “conflicted out” by ties to one of the organizations involved and could not weigh in objectively.