Updated Sep 22, 2023, 1:59pm EDT

Author of ‘pause AI’ letter reflects on its impact


The Scene

Six months ago, Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and other tech luminaries called for a pause on certain developments of artificial intelligence. Since then, companies and consumers have only accelerated their adoption of the technology.

MIT professor Max Tegmark helped lead the effort to call for a pause as president of the Future of Life Institute, which aims to protect humanity from an AI apocalypse. In the edited conversation below, we talked to Tegmark about his real goal in issuing the letter and whether another call to action is needed.


The View From Max Tegmark

Q: Take us back six months. What were you trying to accomplish with the letter?

A: The progress in AI had gone a lot faster than many had expected. Three years ago, most experts were predicting we were decades away from artificial general intelligence that could master language and common knowledge to the point of fooling a human; then GPT-4 gave us that. We felt that there was a huge amount of pent-up anxiety, but people were afraid to articulate it for fear of being branded Luddite scaremongers. My key goal with the pause letter was to legitimize this, to make people feel safe having the conversation about whether we really should continue full steam ahead or whether we needed to slow some things down.

Q: Obviously, there wasn’t a pause. But what impact has the letter had on the conversation and the development of the technology?

A: I was never expecting there to actually be an immediate pause. So I was overwhelmed by the success of the letter in bringing about this sorely needed conversation. It was amazing how it exploded into the public sphere. You started getting hearings in the Senate. All sorts of politicians suddenly felt comfortable asking questions about this. And people from far outside of AI started weighing in, saying, ‘Hey, the future shouldn’t be decided by tech bros.’

Q: Over the last six months, has the technology progressed faster or slower than you expected?

A: About as expected. The latest prediction [for the advent of AGI] is 2026. And superintelligence, which completely blows away human intelligence, is predicted by Metaculus.com to happen within a year of artificial general intelligence. People who used to call these things long-term risks are starting to stop calling them long term.

Q: Do you believe that it’s going to happen that fast?

A: I’m very humble about the fact that I don’t know when things are going to happen, but what’s frightening is that when I talk to leaders in these companies and when I talk to leading AI research colleagues, I see a huge shift in timelines, away from decades in the future to two years from now, one year from now. It’s quite likely that we’re very close to it. Figuring out human language and learning common knowledge from human data might have been the biggest stumbling blocks holding us back.

[Computer scientist] Yoshua Bengio is on record arguing that we have now passed it. And if you go read Alan Turing, he said in 1951 that when we get there and machines outsmart us, you should expect humans to lose control to machines. After that it’s not our planet as much as it’s the machines’ planet. And no one has really convincingly refuted that argument of Alan Turing. People didn’t worry about it so much before because they figured we have decades to sort that out.

Q: If that’s going to happen, what needs to be done in the next three years to prevent it?

A: The problem is the technology is moving much faster than the policymaking. And that’s why we need to put safety standards in place now that will slow down the riskiest developments to give AI governance a chance to catch up. Politicians just aren’t as fast as the tech industry, and it’s pretty straightforward how to do it.

We’ve done it before in fields like biotech. A biotech company isn’t just allowed to sell its latest drugs and medicines in supermarkets without first convincing experts at the FDA that they are safe and the benefits outweigh the risks. We need an FDA for AI, first in individual countries and then at an international level, to make sure that if it takes companies a little longer to demonstrate that certain things are safe, then they wait a little longer.

Q: In that analogy, if the drug is artificial intelligence, once you have something to test to see whether it’s safe, hasn’t it already been created, and is it therefore too late?

A: That’s why we have biosafety labs. If someone is doing gain-of-function research on a new super bird flu that’s airborne and kills 90% of everyone who gets it, even the research is heavily regulated and can only be done, if at all, with very careful government oversight.

Q: And yet we still had a lab leak and lots of people died. If you look at that model, the regulation didn’t really work.

A: I’d say it’s the other way around. It means you need even stronger regulations. In AI, we have none at all in the United States. Anyone can do anything they want. It’s just totally the Wild West. So if we just start by upping our game and copying what biologists have done, that’s already a step in the right direction. And it will lead to a pause on some of the things that are happening now, so people can meet those safety standards, and it will incentivize companies to spend more money now on safety. Biotech companies actually spend a lot of money on clinical trials, and that’s a good thing.

Q: People like [Skype co-founder] Jaan Tallinn have argued we should put a limit on the compute power that you’re allowed to have.

A: That’s one aspect. One of the most effective ways to regulate nuclear weapons proliferation is to limit people’s access to plutonium and enriched uranium. For AI, the easiest kind of hardware to regulate is access to these massive GPU farms. There are very few companies that produce these, and they are very large facilities using megawatts of power. You can detect them from space. Having insight into this and requiring a know-your-customer regime for these companies and their big buyers is a great place to start. Another complementary thing to do is clarify liability law and make clear that if a company develops an AI system that kills a lot of people or causes massive harm, it is actually liable for it.

Q: Isn’t it too late once a lawsuit is filed?

A: For human-extinction-level things, yes. But between now and then, there will be a lot of lesser disasters. Having a liability regime and oversight now to prevent those lesser problems will help instill a culture of safety. It’ll help slow things down and give us more time to prepare for the even bigger challenges. Right now, everybody’s driving full speed toward this cliff because there’s a lot of money right off the edge of the cliff.

Many of them have the idea that once they get to the cliff, they’re going to get so much power from the technology that they can turn around and use that power to stop all the others who are right behind them. To make it even worse, the whole cliff is covered in thick fog right now, so nobody knows exactly where the edge is.

Q: How do you deal with international regulation? You might be able to get the West to do a pause, but how do you keep China, Russia, and other countries from welcoming in these giant GPU farms?

A: I think this US-China problem is overhyped. It’s a very effective technique used by tech lobbyists to derail regulation attempts. The fact is that China has cracked down more on its own tech companies than America has. And the reason is obvious: The Chinese Communist Party really wants to be in control and is terrified of some Chinese company building a superintelligence and overthrowing it. So we can simply play to the self-interest of all the different world governments to incentivize them not to let anyone in their own country do something really reckless like that.

Q: So would you have to create an international agency that could do inspections and make sure this isn’t happening?

A: First, you want to make sure each government has the ability to prevent its own companies and researchers from doing crazy stuff that could overthrow that government or otherwise mess up that country. Because most of the research is happening in private industry, both in China and in the West.

After that, you get the governments talking to each other. It becomes more like when the Soviet Union and the U.S. sat down to try to avoid a nuclear war. All sides realize that if anyone builds out-of-control superintelligence, we all go extinct. It doesn’t matter whether you’re American or Chinese once you’re extinct.

And on the flip side, we also know that if that can be prevented, both the West and China are going to be spectacularly richer and better off by having this opulence of goods and services produced by AI. So there’s a very strong incentive for collaboration.

Q: Are you encouraged by the trend to make these models smaller and more efficient to run on small computers?

A: I think that makes everything even scarier. It’s like, if people figure out how to make miniaturized nukes that fit in a suitcase, does that make you feel safer?

Q: But aren’t they then less powerful?

A: They’re not that much less powerful. And in fact, we know that it’s physically possible to make AI that is as powerful as GPT-4 but uses vastly less power and vastly less hardware, because your own brain can do a lot of stuff that you can’t do with GPT-4 using about 20 watts of electricity, like a light bulb.

Q: Are you saying that the human brain works the same way as a large language model?

A: No, but I’m saying that the human brain is proof that you can do comparably difficult stuff with something that is way smaller, and uses less power. So eventually, we’re going to figure out a way of doing it in silicon also. Maybe not the human way, but maybe an even more efficient way. There’s basically a hump now where we’re compensating for our incredibly lousy software by using way more hardware than is actually necessary.

And then once we’ve actually figured out how to build AGI, the AI itself can figure out ways of doing that same thing with way less hardware. Or how to be way smarter with the amount of hardware it has. This is also where the idea of an intelligence explosion comes from. Once we get to artificial general intelligence, where machines can do AI research better than we can, we should expect we will very quickly see enormous further improvements.

Q: So should we stop all automation research?

A: We didn’t call for a pause on everything. We called for a pause on these ever more gigantic models so that policymaking and safety standards have a chance to catch up.

There is a disturbing defeatism that I feel is propagating in the public space. People say things like, ‘Oh, it’s inevitable that superintelligence is coming and humans are going to be replaced.’ It’s not inevitable. This is exactly how you do psychological warfare against a country: you convince people that it’s inevitable that they’re going to lose the war. Right now, it’s still humans in charge, and let’s keep it that way.

Q: Does there need to be another letter soon?

A: We and other organizations have already put out very specific descriptions of what we need. There’s a conference happening in the UK in November, the first-ever international summit on this. So the ideas are there. What needs to happen now is political action from policymakers. All they need to do is listen to public opinion. Another thing that has really changed since the letter is the opinion polls. There was one out just the other day again showing that two-thirds of Americans want some kind of pause.

Only a small minority are against it. Most people around the world feel that full steam ahead, uncontrolled AI development is a bad thing. A small number of corporate lobbyists feel it’s a good thing. And if politicians start listening to their voters, rather than the lobbyists, then there will be safety standards. And they will cause there to be a pause.
