November 21, 2023
Semafor

Technology
Louise Matsakis and Reed Albergotti

The AI industry turns against its favorite philosophy

THE SCENE

Skype co-founder Jaan Tallinn, one of the most prominent backers of the “effective altruism” movement at the heart of the ongoing turmoil at OpenAI, told Semafor he is now questioning the merits of running companies based on the philosophy.

“The OpenAI governance crisis highlights the fragility of voluntary EA-motivated governance schemes,” said Tallinn, who has poured millions into effective altruism-linked nonprofits and AI startups. “So the world should not rely on such governance working as intended.”

His comments are part of a growing backlash against effective altruism and its arguments about the risks AI poses to humanity, which has snowballed over the last few days into the movement’s second major crisis in a year.

The first was caused by the downfall of convicted crypto fraudster Sam Bankman-Fried, who was once among the leading figures of EA, an ideology that emerged in the elite corridors of Silicon Valley and Oxford University in the 2010s offering an alternative, utilitarian-infused approach to charitable giving.

EA then played a role in the meltdown at OpenAI when its nonprofit board of directors — tasked solely with ensuring the company’s artificial intelligence models are “broadly beneficial” to all of humanity — abruptly fired CEO Sam Altman on Friday, creating a standoff that now threatens the company’s very existence.

Three of the six seats on OpenAI’s board are occupied by people with deep ties to effective altruism: think tank researcher Helen Toner, Quora CEO Adam D’Angelo, and RAND scientist Tasha McCauley. A fourth member, OpenAI co-founder and chief scientist Ilya Sutskever, also holds views on AI that are generally sympathetic to EA.

Until a few days ago, OpenAI and its biggest corporate backers didn’t seem to think there was anything worrisome about this unusual governance structure. The president of Microsoft, which has invested $13 billion in OpenAI, argued earlier this month that the ChatGPT maker’s status as a nonprofit was what made it more trustworthy than competitors like Meta.

“People get hung up on structure,” Vinod Khosla, whose venture capital firm was among the first to invest in OpenAI’s for-profit subsidiary in 2019, said at an AI conference last week. “If you’re talking about changing the world, who freaking cares?”

Less than a week later, Khosla and other OpenAI investors are left with shares of uncertain value. So far, it looks like Microsoft and Altman — a billionaire serial entrepreneur — are successfully outmaneuvering the effective altruists on the board. Nearly all of OpenAI’s employees have threatened to quit if the directors don’t resign, saying they will instead join their former boss on a new team at Microsoft.


KNOW MORE

Effective altruism emerged over a decade ago with a new way to think about helping the world. Instead of donating to causes they found personally compelling, its leaders urged adherents to calculate metrics like “expected value,” which they said could be used to objectively determine where their impact would be greatest.

EAs initially focused mostly on issues like animal welfare and global poverty, but over time, worries about an AI-fueled apocalypse moved to the center of the movement. With funding from deep-pocketed donors like billionaire Facebook co-founder Dustin Moskovitz, they built their own insular universe for studying AI safety, including a web of nonprofits and research organizations, forecasting centers, conferences, and web forums.

Toner and McCauley are both leaders of major EA groups backed by Moskovitz. McCauley sits on the board of Effective Ventures, one of the movement’s most important institutions. Earlier this year, Oxford philosophy professor William MacAskill, perhaps the movement’s most famous figure, named her as one of a small group of “senior figures.” Toner is the director of strategy at the Center for Security and Emerging Technology (CSET), a think tank at Georgetown University funded by Moskovitz’s grant-making organization Open Philanthropy.

Before joining CSET, Toner worked at Open Philanthropy and several other EA-linked organizations, and she and McCauley also sit on the board of another, the Centre for the Governance of AI. D’Angelo, meanwhile, worked with Moskovitz at Facebook in the early aughts and sits on the board of his software company Asana. He has repeatedly echoed concerns about AI similar to those espoused by EA.

Despite these connections, OpenAI told a journalist in September that “none of our board members are effective altruists.” It argued that their interactions with the EA movement were largely “focused on topics related to AI safety.”

LOUISE’S VIEW

Among the issues that the OpenAI saga revealed is EA’s strange detachment from practical expertise. As this week painfully showed, its acolytes seem to be more effective on message boards than in boardrooms.

The entire “AI safety” field is interwoven with effective altruism, with some concepts borrowed from related niches like the rationalist subculture. Since its inception, the field has been divorced from the broader, preexisting world of civil society groups thinking about potential harms from automated systems, which frequently focus on gender and racial bias, invasions of privacy, and other present-day ills.

I’ve spent much of the last few months trying to understand the arguments made by AI safety researchers and their growing influence around the world, particularly among government regulators. What I found was how isolated EA-linked groups are from everyone else, frequently to their own detriment.

For example, many AI safety proponents focus on the possibility of AI-fueled bioterrorism, a concern also parroted by OpenAI. But they rarely draw on the wealth of existing expertise about how biosecurity threats have been handled in the past.

An early report indicated that OpenAI’s board didn’t think it needed the advice of lawyers or communications professionals before it ousted Altman, a decision that has now backfired spectacularly. Before his arrest last year, Bankman-Fried likewise surrounded himself with fellow effective altruists, repeatedly failing to follow widespread, established norms for evaluating the risks of volatile investments.

For a long time, the inward-looking nature of AI safety and effective altruism benefited OpenAI and its investors. The ideology allowed the startup’s own researchers to help shape the debate about which risks mattered most, without needing to worry about dissenting opinions from outside its bubble. And paying lip service to safety concerns provided a layer of public relations cover for the fast-moving tech startup.

Many EA figures are once again turning to their own methods to make sense of what has transpired, guided by sources like a prediction market website backed by Bankman-Fried and other EA donors. “I remain confused but I note that the market now predicts that this was bad for AI risk indeed,” Tallinn said, citing one of its polls.
