September 22, 2023


Louise Matsakis

Hi, and welcome back to Semafor Tech. It’s been 10 months since ChatGPT was released, and we’re now beginning to see how lawmakers might approach regulating advanced artificial intelligence. One thing I’m closely watching: who elected officials are listening to on this issue.

Today, I have a scoop about a provision the White House is considering putting in its upcoming executive order on AI, which would require cloud computing giants to share information about their customers with the government. It’s an idea that has been proposed by OpenAI and Microsoft, as well as organizations concerned about what they say are “existential risks” posed by the technology.

The story indicates that people like Sam Altman and Microsoft President Brad Smith are having a big impact in Washington. Read on below for more details about what the provision would mean, and why I think it has the potential to backfire.

Plus, Reed has an interview with the president of the Future of Life Institute, discussing what has happened in the six months since he published a viral letter calling for a pause on advanced AI development.

Move Fast/Break Things

➚ MOVE FAST: Gaming. The $75 billion Microsoft-Activision deal just moved up a level. After initially rejecting the tie-up, U.K. authorities said today that a revised merger pact addressed most of their antitrust concerns, clearing a major hurdle for the transaction to move forward.

➘ BREAK THINGS: Gambling. The hackers who targeted MGM Resorts and Caesars essentially read the room. Reuters reported that the cyber criminals were unusually sophisticated and convincingly posed as employees when calling company IT help desks to obtain supposedly lost login information.

Reuters/Bridget Bennett
Louise Matsakis

White House could force cloud companies to disclose AI customers


The White House is considering requiring cloud computing firms to report some information about their customers to the U.S. government, according to people familiar with an upcoming executive order on artificial intelligence.

The provision would direct the Commerce Department to write rules forcing cloud companies like Microsoft, Google, and Amazon to disclose when a customer purchases computing resources beyond a certain threshold. The order hasn’t been finalized and specifics of it could still change.

Similar “know-your-customer” policies already exist in the banking sector to prevent money laundering and other illegal activities, such as the law mandating firms to report cash transactions exceeding $10,000.

In this case, the rules are intended to create a system that would allow the U.S. government to identify potential AI threats ahead of time, particularly those coming from entities in foreign countries. If a company in the Middle East began building a powerful large language model using Amazon Web Services, for example, the reporting requirement would theoretically give American authorities an early warning about it.
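At its core, the reporting mechanism described above reduces to a simple threshold check. The sketch below is purely illustrative — the threshold value, units, and function names are assumptions, since no public draft of the order specifies them:

```python
# Hypothetical sketch of the "know-your-customer" reporting rule described
# above. The threshold value and field names are illustrative assumptions,
# not anything specified in the draft executive order.

# Assumed example threshold: total compute purchased, measured in FLOPs
REPORTING_THRESHOLD_FLOPS = 1e26

def must_report(customer_compute_flops: float) -> bool:
    """Return True if a customer's compute purchase would trigger a report."""
    return customer_compute_flops >= REPORTING_THRESHOLD_FLOPS

# A cloud provider would flag only the customers above the line:
customers = {
    "small-startup": 1e22,
    "frontier-lab": 3e26,
}
flagged = [name for name, flops in customers.items() if must_report(flops)]
print(flagged)  # ['frontier-lab']
```

As the story notes below, the hard part is not the check itself but picking a threshold that stays meaningful as training efficiency improves.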

The policy proposal represents a potential step toward treating computing power — or the technical capacity AI systems need to perform tasks — like a national resource. Mining Bitcoin, developing video games, and running AI models like ChatGPT all require large amounts of compute.

If the measure is finalized, it would be a win for organizations like OpenAI and the RAND Corporation think tank, which have been advocating for similar know-your-customer mechanisms in recent months. Others argue it could amount to a surveillance program if not implemented carefully.

“The details are really going to matter here,” said Klon Kitchen, a nonresident senior fellow at the American Enterprise Institute, where he focuses on national security and emerging technology. “I understand why the administration is trying to get at this issue. We’re going to need a strategic understanding of adversarial development of these models.”

The White House declined to comment. The Department of Commerce directed questions to the White House.

Unsplash/Tabrez Syed


One major challenge for this approach: the amount of computing power it takes to build powerful models like ChatGPT is rapidly falling, thanks to improvements in the algorithms used to train them. By the time the Commerce Department decides on a reporting threshold, it could already be out of date, and trying to make effective updates will be like chasing a moving target.

Instead, Commerce could find other, more qualitative indicators to determine whether an organization’s computing usage is cause for alarm. But that would require cloud firms to extensively spy on their customers, with whom they often have conflicts of interest.

Microsoft, for example, is a major investor in OpenAI. If a promising startup began buying computing resources from Azure to build a ChatGPT competitor, Microsoft would have to report that activity to U.S. authorities under this provision.

Sayash Kapoor, a researcher at Princeton University who studies the societal impacts of AI, noted that this policy would also only apply to one kind of technology: large language models. Other AI tools that have been used for harmful purposes, such as facial recognition algorithms, require far less compute to build and run, meaning they likely wouldn’t meet the threshold. “If we’re looking at it from a harms perspective, I think this is very shortsighted,” Kapoor said.


For Room for Disagreement and the rest of the story, read here. →



Our friends at The Neuron track what’s going on in AI so you don’t have to. Catch up on the latest trends, news, and research impacting your work in just 3 minutes a day. Join 150,000 leaders from forward-thinking organizations like Atlassian, Electronic Arts, Salesforce and more. Sign up for The Neuron here.

Wikimedia Commons

MIT professor Max Tegmark is president of the Future of Life Institute, which aims to protect humanity from an AI apocalypse. Six months ago, his organization got Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and other tech luminaries to sign a letter calling for a pause on certain developments of artificial intelligence.

Q: Obviously, there wasn’t a pause. But what impact has the letter had on the conversation and the development of the technology?

A: I was never expecting there to actually be an immediate pause. So I was overwhelmed by the success of the letter in bringing about this sorely needed conversation. It was amazing how it exploded into the public sphere. You started getting hearings in the Senate. All sorts of politicians suddenly feel so comfortable asking questions about this. And people from far outside of AI started weighing in saying, ‘hey, the future shouldn’t be decided by tech bros.’

Q: Over the last six months, has the technology progressed faster or slower than you expected?

A: About as expected. The latest prediction [for the advent of AGI] is 2026. And superintelligence, which completely blows away human intelligence, is predicted by Metaculus.com to happen within a year of artificial general intelligence. People who used to call these things long-term risks are starting to stop calling them long term.

Q: Do you believe that it’s going to happen that fast?

A: I’m very humble about the fact that I don’t know when things are going to happen, but what’s frightening is that when I talk to leaders in these companies and when I talk to leading AI research colleagues, I see a huge shift in timelines, away from decades into the future to two years from now, one year from now. It’s quite likely that we’re very close to it. Figuring out human language and learning common knowledge from human data might have been the biggest stumbling blocks holding us back.

[Computer scientist] Yoshua Bengio is on record arguing that we have now passed it. And if you go read Alan Turing, he said in 1951 that when we get there and machines outsmart us, you should expect humans to lose control to machines. After that it’s not our planet as much as it’s the machines’ planet. And no one has really convincingly refuted that argument of Alan Turing. People didn’t worry about it so much before because they figured we have decades to sort that out.

Q: How do you deal with international regulation? You might be able to get the West to do a pause, but how do you stop China, Russia, and other countries from welcoming in these giant GPU farms?

A: I think this US-China problem is overhyped. It’s a very effective technique used by the tech lobbyists to derail regulation attempts. The fact is that China has cracked down more on their own tech companies than America has. And the reason is obvious. The Chinese Communist Party really wants to be in control and is terrified of some Chinese company building a superintelligence and overthrowing them. So we can simply play to the self-interest of all the different world governments to incentivize them to not let anyone in their own country do something really reckless like that.


For the rest of the conversation, read here. →



This week, I attended Amazon’s big product unveiling at its new headquarters in Virginia and then went to Microsoft’s event the following day in New York. It was fun to go to these events back-to-back to get a feel for how these two companies see the world and the competitive landscape. Here are my three takeaways.

— Reed

  • Antitrust lawsuits are ineffective at pushing big companies to innovate. New, disruptive technology is what actually forces their hands. When OpenAI unveiled ChatGPT, the large companies had to respond. On Wednesday, Amazon demonstrated how it’s undergone a complete transition from its old AI models to the kind of large language models that power ChatGPT. Amazon shareholders didn’t want this: Profits come from making the same thing cheaper and improving margins.

    But it’s a good thing for consumers. It’s also healthy for companies and the ambitious people who work for them. As one Amazon employee told me, this is “war time.” And that means less corporate politics and more building. “I haven’t been this excited since 2014,” Rohit Prasad, Amazon’s chief scientist, told me in an interview.
  • Product development of any kind needs to be re-imagined in the age of AI. At the Microsoft event, I asked an executive whether the new Office 365 was capable of a certain task. He told me he didn’t know yet, because the nature of large language models is that you don’t really know what they can and can’t do until you try. In the past, every capability in any software program had to be deliberately coded. Now, they can just “emerge” out of the ether.

    I think a new breed of coder will emerge in the post-ChatGPT era. User interface will still matter, but it’s almost like there are two users now: humans and large language models. You’ll need to build one UI for the humans and another UI underneath the hood that allows large language models to easily navigate it, opening up more potential capabilities.
  • There are different philosophies on hardware’s new role. When Amazon unveiled its new products, it was clear that the company views them more like conduits that connect customers with the cloud. Microsoft, on the other hand, showed off how powerful graphics cards in a consumer laptop could locally run an open-source large language model.

    Aside from gamers, the vast majority of consumers haven’t really cared about how powerful their computers are for quite some time. And like almost all software today, most AI will just run on the cloud, making the average consumer’s CPU and GPU performance largely irrelevant.

    But it’s becoming clear that to make large language models really powerful, they’ll need to be personalized, or fine-tuned, on all of your data. That can be done by sending it all to clouds owned by Microsoft, Amazon, or Google. Yet we might see a bifurcation, where some information goes to the cloud, and some stays local. That might mean more powerful graphics cards for non-gamers.

    “Both things will happen. I think we’re going to live in a cloud client world like we do today,” Yusuf Mehdi, Microsoft’s consumer chief marketing officer, told me. But he said there will be cases where people will want to run things locally, for performance or, perhaps, privacy. “We’ll have to figure out a back and forth, when’s the right time to do it,” he said.

“It was an exercise in foolishness.”

— Reid Hoffman on the Future of Life Institute letter calling for a pause on AI development. He was speaking at a conference yesterday held by the Special Competitive Studies Project, a subsidiary of The Eric & Wendy Schmidt Fund for Strategic Innovation.

Hot On Semafor
  • Israeli officials are working to persuade U.S. leaders that a peace deal between Israel and Saudi Arabia could strengthen the U.S. well beyond the Middle East. One top Netanyahu aide said it could be a “reverse 9/11.”
  • The UN’s climate summit was a bust. The plans world leaders laid out are “like trying to put out an inferno with a leaking hose.”
  • An NSFW chatbot app has surged in popularity, but its sexual content spurred OpenAI to crack down. Now, it’s pitching investors on building its own large language model.
Semafor, Inc. 228 Park Ave S, PMB 59081, New York, NY, 10003-1502, USA