Microsoft superintelligence team promises to keep humans in charge

Reed Albergotti
Tech Editor, Semafor
Nov 6, 2025, 9:00am EST
Technology

The News

Microsoft is joining the race for superintelligence, but with a caveat: It will prioritize human control over the technology at the expense of maximum capability.

“We cannot just accelerate at all costs. That would just be a crazy suicide mission,” Microsoft AI CEO Mustafa Suleyman, who will head the company’s new superintelligence team, told Semafor this week. “We have to find ways of simultaneously having humanist superintelligence, which delivers the benefits that we’re all chasing for humans, and also accelerating technology at the same time.”

As the capabilities of AI have progressed, the ability of humans to understand and control it has lagged behind. As a result, the largest, most powerful AI “foundation models,” like the ones that power ChatGPT, sometimes act unpredictably, and computer scientists can’t explain why.

Some of Microsoft AI’s key leaders, like chief scientist Karén Simonyan, will move over to the superintelligence team. Suleyman will continue to lead Microsoft AI, where he helps shape products like Copilot, Edge, Bing and Microsoft Advertising.


Know More

Suleyman, along with Demis Hassabis and Shane Legg, helped pioneer artificial intelligence when he co-founded DeepMind, which was acquired by Google in 2014. That acquisition in many ways kicked off today’s AI race and accelerated the conversation around AI safety.

While his life’s work has been advancing AI, Suleyman said we’ve reached a new threshold where capability should no longer be the most important metric.

Humans, for instance, need to retain the ability to control AI models in language they understand — and humans can’t communicate in vector math, the language of today’s most powerful AI models, which converts words and images into columns of numbers.


Suleyman called himself an accelerationist who wants to go as fast as possible, but said it will be necessary to give up some level of capability to ensure humans are able to remain in control.

“That’s a very tough tradeoff because in the history of humanity, we haven’t had to do that,” he said. “The story of our species has been infinitely unlocking capability in science and technology and just inventing more and more and more, and putting it out there without restriction and without guardrails.”


Reed’s view

If AI capability keeps growing, safety experts worry that humans could lose control of it. But that isn’t just a safety threat. It’s a business problem that has slowed adoption. The unpredictability makes the software incredibly powerful in some areas, where mistakes and unplanned outcomes aren’t important. But in mission-critical applications, the technology still needs a human to oversee it, limiting its potential.


Getting AI to do useful tasks inside businesses requires hefty investments and human labor to customize the software, build guardrails around it, and maintain its functionality. Microsoft, which has a dominant position in enterprise software and focuses largely on business customers, wants to build AI capabilities into Office and Windows that can automate complex tasks out of the box. That will require the kind of capability more akin to what Suleyman is now tasked with building: AI that is fully under the control of humans.

Suleyman frames the idea of “humanist” superintelligence mostly in terms of safety risks. And I think that concern is genuine. He was talking about this long before ChatGPT amplified the debate around existential dangers.

But if you read between the lines, Suleyman’s description of humanist superintelligence shows how the business challenges of foundation model companies are fundamentally similar to the concerns of AI safety advocates.

Microsoft’s customers aren’t actually asking for superintelligence. They would be happy with average intelligence with superhuman reliability. But this is not a simple request. The unpredictability of today’s most advanced foundation models is a feature. If it were predictable, it wouldn’t be powerful.


Room for Disagreement

Suleyman is a self-described accelerationist, but there are more extreme strains of that ideology that would take issue with the safety-for-capability tradeoff he believes is necessary. Here’s a passage from the Andreessen Horowitz techno-optimist manifesto:

We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.
We believe in Augmented Intelligence just as much as we believe in Artificial Intelligence. Intelligent machines augment intelligent humans, driving a geometric expansion of what humans can do.

Notable

While Microsoft’s humanist version of superintelligence is aimed at building a safer AI, just using the term “superintelligence” is sure to raise eyebrows in the AI safety world. Last month, hundreds of prominent people, including Apple co-founder Steve Wozniak, called for a ban on the development of superintelligence.
