Hi, and welcome to Semafor Tech, a twice-weekly newsletter from Louise Matsakis and me that gives an inside look at the struggle for the future of the tech industry.
Humans love to creep themselves out, whether by watching horror movies or communicating with dead relatives via a Ouija board. It turns out AI chatbots like ChatGPT and the more advanced one being tested in Microsoft Bing can fulfill that desire, too. We’ve seen myriad examples posted to Twitter and published in newspapers of creepy responses to AI prompts, but as I explain below, those answers aren’t much more meaningful than what you get back from the Ouija board, and focusing on them is a distraction from real issues around AI.
Also, Louise highlights the Chinese surveillance equipment hidden in plain sight ... outside China. And she chats with the person at DeepMind, Google’s AI research outfit, who’s been thinking about how new breakthroughs can affect climate change.
We are also welcoming a new member of the Semafor newsletter family next week. Sign up for Semafor Security from Jay Solomon — showcasing the personalities, hot spots, and money flows driving global instability and conflict.
Are you enjoying Semafor Tech? Help us spread the word!
Move Fast/Break Things
➚ MOVE FAST: Rivalry. More tech giants are joining the race to develop artificial intelligence products, hoping to steal some thunder from Microsoft and its buzzy new search service. Snap released a conversation bot for Snapchat+ subscribers, and Amazon Web Services said it was partnering with the AI startup Hugging Face to help developers create tools on its cloud.
➘ BREAK THINGS: Loyalty. Elon Musk let go of roughly 200 more Twitter workers, just months after he said there were no plans for additional staff cuts. This round impacted several employees who had publicly demonstrated their loyalty to Musk, including product executive Esther Crawford, who was seen sleeping on the floor of Twitter’s offices.
The number of smartphones shipped to Europe last year, representing a 17% decline from 2021 and the lowest figure since 2012, according to Counterpoint Research. Given Russia’s war in Ukraine and a worsening global economy, it’s not surprising that fewer Europeans are excited these days about getting the newest iPhone or Samsung device.
How I learned to love our slightly creepy new AI overlords
Microsoft is moving full speed ahead on its AI efforts, with dozens of product announcements planned in coming months, according to people familiar with the matter, despite a backlash against its Bing Chat, whose responses have gotten, at times, a bit… creepy.
Microsoft plans to further integrate the AI models into other products, like possibly adding chat-like capabilities to Office 365 programs such as Outlook and Word.
Microsoft has instituted new safeguards and limitations after the chatbot-assisted search engine threatened to hack people, told journalist Ben Thompson that he was a “bad researcher,” and professed its love for Kevin Roose of The New York Times.
These instances are “hallucinations,” the term used to describe when large language models get confused and confidently generate false or unhinged text patterned on what real people have posted on the internet. The hallucinations sound so human and convincing that they have prompted some people to call for government regulation of the technology and others to describe it as “scary.”
The wacky hallucinations are a distraction from the real issues. The chatbots could be used for good or evil, and might even require regulation — but not because you can make them write you into your very own science fiction story.
At worst, the new misconceptions about Bing and OpenAI’s ChatGPT are amplifying the mythology, spread by some Silicon Valley technologists, that this advancement is on a path toward sentience, or “Artificial General Intelligence.”
The current AI models are impressive, but the technological breakthrough required to train a computer to think like a human hasn’t happened yet. It may never happen.
We should be having national and global conversations about how to deal with potential abuses of this technology, from using it to emotionally manipulate people to whether it violates intellectual property laws.
The nefarious uses will probably involve more behind-the-scenes, focused efforts. For example, advertising companies and nation-states could more efficiently generate content meant to manipulate online audiences. NordVPN, a security provider, said in a recent report that hackers on the “dark web” have been discussing ways to leverage the power of ChatGPT to craft phishing attacks and create malware.
Yes, people have taken great pains to elicit responses from these services that sound like lines from Fatal Attraction or Terminator, and may have been drawn from them, but those responses don’t represent anything more than a math program arranging a bunch of letters based on context clues.
These chatbots, despite being trained on vast swaths of the internet, aren’t capable of distinguishing right answers from wrong ones. This widely noted limitation illustrates how far these models are from true “general intelligence.”
While powerful, ChatGPT and Bing’s AI are, at their core, a new way of organizing information on the internet. These chatbots can’t infect your computer with a virus, or publicly discredit you. A human would need to do that.
As guest columnists Russell Wald and Jennifer King argued in Semafor last week, it’s important we put this technology under a microscope to better understand its strengths, weaknesses, and risks.
Right now, a lot of the media coverage about AI chatbots is doing a bad job of framing the issue. That’s in part because of muscle memory developed in the wake of the 2016 election, when misinformation and disinformation became the focus of technology coverage.
In hindsight, the hysteria over that issue turned out to be overblown. It’s an even bigger mistake to paint chatbot hallucinations as the latest Big Tech panic.
"Evil-looking computer in the style of Salvador Dalí." DALL-E
The biggest and most world-changing uses for the latest AI models behind products like ChatGPT and Stable Diffusion will be inside businesses — not directly in the consumer market.
OpenAI, for instance, sells access to its models to ventures that can use the underlying technology to power products they sell to other businesses.
Jasper, which sells AI tools for the marketing industry, has already taken off. And computer programmers are using AI tools like Replit to build software in record time.
Microsoft has spent billions of dollars building out its Azure servers with custom architecture meant to run AI models, not because it wants to build a better search engine, but because CEO Satya Nadella sees an opportunity to be the go-to place for businesses to spin up AI-enabled services.
When companies use these models, they’ll likely add additional “layers” atop the software. For instance, Walmart could hypothetically use its vast collection of customer data to create a chatbot limited to questions relevant to its business, which may ensure a level of accuracy that’s not possible with chatbots allowed to answer any question.
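To make that layering idea concrete, here is a minimal sketch of how a business-scoped chatbot might sit atop a general model: a topic filter refuses off-topic questions, and a retrieval step grounds answers in the company’s own data. Everything below — the topic list, the document store, and the stubbed `call_model` function — is hypothetical, not any real product’s API:

```python
# Hypothetical sketch of a "scoped" chatbot layer: limit questions to known
# business topics and ground answers in the company's own documents, rather
# than letting a general-purpose model answer anything.

ALLOWED_TOPICS = {"orders", "returns", "store hours"}

# Stand-in for a retailer's internal knowledge base.
DOCUMENTS = {
    "returns": "Items can be returned within 90 days with a receipt.",
    "store hours": "Most stores are open 6 a.m. to 11 p.m.",
}

def call_model(prompt: str) -> str:
    # Stub for the underlying large language model.
    return f"[model answer based on: {prompt}]"

def answer_question(question: str) -> str:
    # Layer 1: refuse anything outside the business's scope.
    topic = next((t for t in ALLOWED_TOPICS if t in question.lower()), None)
    if topic is None:
        return "Sorry, I can only answer questions about our store."
    # Layer 2: feed the model retrieved company data, not the open web.
    context = DOCUMENTS.get(topic, "")
    return call_model(f"Context: {context}\nQuestion: {question}")

print(answer_question("What is your returns policy?"))
print(answer_question("Write me a science fiction story."))
```

The point of the design is that accuracy comes from narrowing scope: the model only ever sees questions the business can actually answer, paired with the data needed to answer them.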
General-purpose chatbots will likely never be accurate or reliable enough for their output to be trusted outright, and users will probably always have to verify responses against search results.
The best consumer use case for AI is likely as a productivity tool. When it is incorporated into email, word processors, and communication tools, it will be like a search engine for your life. Instead of trying to find an old email, you’ll just describe it and ask an AI to find it for you. Trying to figure out the right spreadsheet formula will be a thing of the past.
These models will probably never be a good conversation partner you can count on for emotional support. That’s another job still reserved for humans.
ROOM FOR DISAGREEMENT
Computer scientist Timnit Gebru, founder and executive director at the Distributed AI Research Institute, argues AI chatbots should not be used for internet searches.
The promise that an AI chatbot can become so intelligent that it can answer all questions, including medical ones, is “dystopian,” she said.
Instead, the technology should be used for “well-scoped, well-defined products used for a specific thing.”
THE VIEW FROM CHINA
As Louise recently explained, one of the reasons these newer, more advanced AI chatbots were designed in the U.S. first, and not in China, where AI chatbots have for years been more popular, is censorship.
Bing’s chatbot never would have been released in China if there were a possibility it might say something it shouldn’t say.
By taking that reputational risk, Microsoft has now gathered invaluable data on how to control its AI and will make further advances with the feedback.
In China, it will take much longer for companies to get that data.
Despite government bans, Chinese surveillance equipment is still being sold and used in places like Taiwan and Australia, highlighting how challenging it can be to rid global supply chains of sensitive tech from the People’s Republic.
In some cases, Chinese-made devices are simply hiding in plain sight. Earlier this month, an audit by Australia’s Minister for Cyber Security found the government had installed hundreds of products manufactured by Hikvision and Dahua, two major Chinese surveillance firms the U.S. government has accused of aiding Beijing’s brutal crackdown on Uyghurs and other Muslim minorities in the western region of Xinjiang.
Other times, Chinese surveillance equipment is disguised as homegrown. When reporters from the Taiwanese publication CommonWealth Magazine took apart a digital video recorder (DVR) sold under the local brand name Benelink last year, they found it had software and hardware identical to another product made by Hikvision. CommonWealth concluded the DVR had “Taiwan skin, mainland bones.”
When Elon Musk acquired Twitter last year, some users said they were fleeing the service to competitor Mastodon and shared links to their Mastodon accounts in their Twitter bios. (Musk eventually banned those links and later un-banned them.)
When this first happened, Mastodon seemed like anything but a major challenger to Twitter. Now, it’s looking like Mastodon is more significant than it seemed, but less because of the Musk-owned platform and more because of how it may affect the internet writ large.
On Tuesday, the news app Flipboard announced it was diving head-first into Mastodon. Flipboard users will now be able to sign up for and post to Mastodon directly from Flipboard, and vice versa.
The move comes about a month after publishing service Medium did something similar with Mastodon. Other services have also indicated they might be next. What’s important about these changes isn’t Mastodon but the open protocol that it runs on, called ActivityPub.
Nikhil Iyengar/Wikimedia Commons
ActivityPub is meant to enable decentralized digital social networks. Services like Twitter, Facebook, and TikTok are controlled by companies in a largely closed system where users of Facebook Messenger, for instance, can’t send a message to someone’s TikTok account.
ActivityPub is like the World Wide Web of social networks. Nobody actually owns it. In fact, it was developed by the World Wide Web Consortium, which sets web standards. The content built around ActivityPub is referred to as the “Fediverse” and it is accessible via services like Mastodon.
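To see what that openness looks like on the wire, here is a hedged sketch of a single post. ActivityPub servers exchange JSON documents using the ActivityStreams vocabulary; the example below builds a `Create` activity wrapping a `Note` — the general shape Mastodon uses for a status — with actor and object URLs invented purely for illustration:

```python
import json

# A minimal ActivityPub "Create" activity wrapping a "Note" -- roughly the
# JSON one Fediverse server delivers to another's inbox when a user posts.
# The example.social URLs are made up for illustration.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://example.social/users/alice/statuses/1/activity",
    "type": "Create",
    "actor": "https://example.social/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "id": "https://example.social/users/alice/statuses/1",
        "type": "Note",
        "attributedTo": "https://example.social/users/alice",
        "content": "Hello, Fediverse!",
    },
}

# Any ActivityPub-speaking service -- Mastodon, or apps like Flipboard and
# Medium that adopt the protocol -- can parse this same document, which is
# what makes the network interoperable without a single owner.
print(json.dumps(activity, indent=2))
```

Because the format is a shared standard rather than a private API, no one company can cut off access to it the way a platform can revoke an API key.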
When ActivityPub became an official standard five years ago, it didn’t make much of a splash. At the time, people loved their closed social networks like Facebook.
It took Musk buying Twitter to make ActivityPub attractive to some big players in tech.
Flipboard relies on Twitter’s API, which gives outside companies a way to incorporate Twitter content, but Musk has been charging for API access. Flipboard CEO Mike McCue, who once served on Twitter’s board, told Semafor the uncertainty of Twitter’s API was a driving factor in his company’s move to the Fediverse.
If the Fediverse can get some momentum, with major tech firepower developing software, it will become much more than a Twitter replacement. It could usher in a new wave of tech innovation and entrepreneurship that has been somewhat stifled by the walled gardens that dominate Web 2.0.
A Chinese state newspaper asked whether Elon Musk was biting the hand that feeds him. In a WeChat post, the Global Times criticized the Tesla CEO for tweets endorsing the theory that Covid-19 originated in a Chinese lab. “His comments have been continuously used by American right-wing and anti-China media who are hostile to China,” the post read. China is Tesla’s second-largest market and the location of one of its most important manufacturing plants.
Are you enjoying Semafor Tech? The more people read us, the better we’ll get. So please share it with your family, friends and colleagues to get those network effects rolling.
And hey, we can’t inform you on what’s happening in tech from inside your spam folder. Be sure to add firstname.lastname@example.org (you can always reach me by replying to these emails) and email@example.com to your contacts. In Gmail, drag this newsletter over to your ‘Primary’ tab.
Thanks for reading.
Want more Semafor? Explore all our newsletters at semafor.com/newsletters