Apr 28, 2023, 1:04pm EDT
tech

The co-founder of Skype invested in some of AI’s hottest startups — but he thinks he failed


The Scene

Jaan Tallinn used the fortune he made selling Skype in 2009 to invest in AI companies like Anthropic and DeepMind — not because he was excited about the future of artificial intelligence, but because he believed the technology was a threat.

By funneling more than $100 million into more than 100 startups, the billionaire hoped he could steer the technology’s development toward human safety.

“My philosophy has been that I want to displace money that doesn’t care,” he said in an interview, describing his strategy, which he now believes was doomed.


“Plan A failed. There is a dissonance between privately being concerned and then publicly trying to avoid any steps that would address the issue.”

Tallinn, the 51-year-old computer programmer who lives in Tallinn, Estonia, said in the interview, conducted over Skype, that he was disappointed that Anthropic and other AI labs he has funded didn’t sign a recent open letter imploring the artificial intelligence industry to take a six-month pause on new research. The letter was organized by the Future of Life Institute, which Tallinn co-founded, and included prominent signatories like Elon Musk.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.


Anthropic co-founder Jack Clark said the company, which recently received a $300 million investment from Google, does not sign petitions as a matter of policy. “We think it’s helpful that people are beginning to debate different approaches to increasing the safety of AI development and deployment,” the company said in a statement.

Still, of all the firms at the forefront of AI development, Tallinn believes Anthropic is the most safety-conscious, creating breakthrough guardrails such as “Constitutional AI,” which constrains AI models with strict operating instructions.
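
Anthropic has described Constitutional AI publicly; the sketch below is only a rough, hypothetical illustration of the general idea, not Anthropic’s implementation: a model drafts an answer, critiques the draft against a list of written principles, and revises it before responding. The generate() function and the example principles are placeholders invented for this illustration.

```python
# Conceptual sketch of a "constitutional" critique-and-revise loop.
# This is not Anthropic's code; generate() stands in for a call to a language
# model, and the principles below are invented examples.

CONSTITUTION = [
    "Do not provide instructions that could facilitate physical harm.",
    "Avoid deceptive or manipulative language.",
]

def generate(prompt: str) -> str:
    """Placeholder for a language-model call; returns a stub response."""
    return f"[model output for: {prompt[:60]}...]"

def constitutional_respond(user_prompt: str) -> str:
    """Draft an answer, then critique and revise it against each principle."""
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique the response below against this principle: {principle}\n"
            f"Response: {draft}"
        )
        draft = generate(
            f"Revise the response so it satisfies the principle: {principle}\n"
            f"Critique: {critique}\nOriginal response: {draft}"
        )
    return draft

if __name__ == "__main__":
    print(constitutional_respond("Explain how AI guardrails work."))
```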

Tallinn said Anthropic could have released its chatbot, Claude, much earlier but decided to wait to address safety concerns. Anthropic has also supported the idea of government oversight of the AI industry.


But it and other major players like OpenAI are advancing the technology so quickly that Tallinn believes even conscientious companies have lost the ability to keep AI from spiraling out of control.


Know More

Tallinn was one of the first deep-pocketed technologists to embrace the idea that AI posed an existential threat to humanity.

At first, he was a skeptic, he said in a recent discussion. That is, until he sat down with Eliezer Yudkowsky for four hours at a San Francisco airport restaurant in March 2009. Yudkowsky, the founder of the Machine Intelligence Research Institute, is one of the leading voices sounding the alarm on the technology.

Through MIRI, Tallinn was introduced to many of the brightest minds in the industry, including one of Anthropic’s founders, Dario Amodei. Like many in the AI safety movement, both also believe in Effective Altruism, which argues the best way to do good in the world is to embrace an ambitious approach guided by reason.

Tallinn began investing in companies that would go on to shape the field of AI. He was an early financial supporter and board member of U.K.-based DeepMind, which was founded in 2010 and acquired by Google in 2014. It was recently merged with Google Brain.

Amodei went on to work on artificial intelligence at Google and later joined OpenAI, the nonprofit started by Musk and other luminaries like Reid Hoffman. Amodei became a key figure in the direction of OpenAI, according to people familiar with the matter, pushing forward the large language models that ultimately became ChatGPT.

Tallinn offered to financially support OpenAI’s safety research and met regularly with Amodei and others at the organization, but ultimately didn’t put money into OpenAI.

Then, during the pandemic, Tallinn heard Amodei was leaving OpenAI to start his own company. Tallinn reached out immediately. He declined to say what the two discussed, but according to people familiar with the matter, Amodei and others were starting Anthropic in part because they did not believe OpenAI was focused enough on AI safety. Tallinn ultimately led a $124 million Series A fundraising round for Anthropic.

He was of two minds: “On the one hand, it’s great to have this safety-focused thing. On the other hand, this is proliferation,” he said, because creating Anthropic might add to the competitive landscape, thus speeding development.

Rather than take a board seat himself, Tallinn said he wanted someone he trusted to join Anthropic’s board, and argued for Luke Muehlhauser, an artificial intelligence safety researcher who was once executive director of MIRI and now works at Open Philanthropy, an organization that makes charitable grants based on the principles of Effective Altruism.

“I was looking for someone I felt was a good representative of the things that I’m worried about,” Tallinn said.

Despite all the measures and good intentions, he said the powerful new AI models being pioneered by Anthropic are simply too risky. He now believes the best thing governments can do to limit AI is to cap the amount of computing power that can be used to train new models.

“Anthropic is way more safety conscious than any of the labs that I’ve seen,” he said. “But that doesn’t change the fact that they’re dealing with dangerous stuff and I’m not sure if they should be. I’m not sure if anyone should be.”


Reed’s view

It’s hardly a guarantee that the large language models being developed by OpenAI, Anthropic and others will lead to world-killing superintelligence.

But there’s pretty good evidence that, even if that were the case, we’d be unable to stop it with any kind of government regulation. Generating killer robots may, in the near future, require nothing more than a laptop. How are you going to stop that?

AI would be easier to control if the U.S. government were at the forefront of its development. To do that, Uncle Sam would need to hire top AI talent to work at national labs. Those are muscles that the government lost when the Cold War ended, but there’s no reason it couldn’t get them back.

The golden age of more responsible technological innovation in the U.S. came after World War II, when very smart people in government worked hand-in-hand with very smart people in the private sector. It lasted for half a century.

That’s a secret sauce worth recreating, and a government without a profit motive might be more effective at controlling AI than politicians trying to pass laws.


Room for Disagreement

Alondra Nelson, who helped author the Blueprint for an AI Bill of Rights, argues here that there is a lot that the government has already done and could do in the future to address all the risks associated with AI: “It will require asking for real accountability from companies, including transparency into data collection methods, training data, and model parameters. And it will also require more public participation — not simply as consumers being brought in as experimental subjects for new generative AI releases but, rather, creating pathways for meaningful engagement in the development process prerelease,” she wrote.


Notable

  • In February 2020, tensions at OpenAI were spilling out into the open, as this profile details.