November 1, 2023
Semafor

Technology

Reed Albergotti

Hi, and welcome back to Semafor Tech.

We have a scoop about the new capabilities Microsoft has been able to pack into its tiny AI model called Phi 1.5. But this isn’t a product announcement scoop. We’re not suggesting you download Phi 1.5 and start running it on your laptop (although let us know if you do).

The topic of small AI models fits into a theme that we’ve been covering in the newsletter. The ingenuity of AI researchers has moved faster than the increases in available compute power. They know how to build AI models that far surpass the ones that the general public can access. But they don’t have the infrastructure to roll them out to everyone.

So for now, the ingenuity that really matters in a practical sense is on the infrastructure side: How do we get AI models to scale? One way is to build more GPUs to slot into ever-growing data centers. But then you run into energy constraints. Another is to fit more capability into smaller AI models. And that’s where some of the best AI researchers are focusing their attention these days.

Microsoft Research gave us a peek under the hood for this story, and it’s fascinating. But there’s another side of this that goes beyond infrastructure. When researchers figure out how to make small models better, they’re also making some fundamental discoveries about how AI models really learn. And that has implications not just for the practicalities of running AI models, but also the frontiers of AI research. We thought this was so interesting that we’ll tell you more about it in Friday’s newsletter.

Move Fast/Break Things

➚ MOVE FAST: Chips. The semiconductor industry is seeing the light at the end of the tunnel after a supply glut hurt sales. The AI boom is helping to spur overall demand, lifting the outlook for Intel, TSMC, Samsung, and others.

➘ BREAK THINGS: Chairs. WeWork shares are in free fall today after the Wall Street Journal reported the firm could soon file for bankruptcy. But that may be good news for people looking for a deal on used office furniture.

Artificial Flavor

A “startup” called Del Complex announced earlier this week that it would build massive floating compute clusters in international waters, ensuring that advanced AI models can be trained and deployed “free from the constraints of regulatory bodies.” The project captured the techno-utopian fantasies of AI optimists, while simultaneously borrowing imagery and language from activist groups concerned that AI could one day kill humanity. In other words, it perfectly encapsulated both sides of the current discourse around AI, which was the entire point.

It turns out Del Complex is an elaborate stunt by an artist named Sterling Crispin that includes AI-generated LinkedIn profiles, custom merchandise, hidden easter eggs, and of course, an NFT collection. Who knew the rise of AI would make crypto cool again?

Reed Albergotti

Microsoft’s big leap on small AI models

THE SCOOP

Microsoft’s research division has added a major new capability to one of its smaller large language models, a big step that shows less expensive AI technology can have some of the same features as OpenAI’s massive GPT-4.

In an exclusive interview, Microsoft researchers shared that the model, Phi 1.5, is now “multimodal,” meaning it can view and interpret images. The new skill added only a negligible amount to the model’s already diminutive size, they said, offering a roadmap that could help democratize access to AI technology and help ease shortages in graphics processors used to run software like ChatGPT.

GPT-4, which powers ChatGPT, also recently became multimodal, but it requires vastly more energy and processing power. Phi 1.5 is open source, meaning anyone can run it for free.

“This is one of the big updates that OpenAI made to ChatGPT,” said Sebastien Bubeck, who leads the Machine Learning Foundations group at Microsoft Research. “When we saw that, there was the question: Is this a capability of only the most humongous models or could we do something like that with our tiny Phi 1.5? And, to our amazement, yes, we can do it.”

GPT-4 has about 1.7 trillion parameters, or software knobs and dials used to make predictions. More parameters mean more calculations that must be made for each token (or set of letters) produced by the model. For comparison, Phi 1.5 has 1.3 billion parameters. If parameters were expressed in distance, GPT-4 would be the height of the Empire State Building and Phi 1.5 would be a footlong sub sandwich.
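The size comparison checks out as back-of-the-envelope arithmetic. Here is a quick sketch using the parameter counts from the story; the building and sandwich lengths are rough assumed figures for illustration only:

```python
# Parameter counts as reported: GPT-4 ~1.7 trillion, Phi 1.5 ~1.3 billion.
gpt4_params = 1.7e12
phi15_params = 1.3e9

# GPT-4 is roughly 1,300 times larger.
ratio = gpt4_params / phi15_params
print(f"GPT-4 is ~{ratio:,.0f}x the size of Phi 1.5")

# Scale the Empire State Building (~443 m to its tip, an assumed figure)
# down by that same ratio.
empire_state_m = 443
scaled_m = empire_state_m / ratio
print(f"At that scale, Phi 1.5 is ~{scaled_m:.2f} m tall — about a footlong sub (~0.30 m)")
```

Dividing 443 meters by the roughly 1,300x parameter gap lands at about a third of a meter, so the footlong-sub image is in the right ballpark.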


The pursuit of small AI models endowed with the powers of much larger ones is more than just an academic exercise. While OpenAI’s GPT-4 and other massive foundation models are impressive, they are also expensive to run. “Sticker shock is definitely a possibility,” said Jed Dougherty, vice president of platform strategy for Dataiku, which helps companies deploy AI technology.

While individuals tend to ask ChatGPT to draft an email, companies often want it to ingest large amounts of corporate data in order to respond to a prompt. Those requests can be costly. The maximum price for a single GPT-4 prompt is $5 and other providers are in a similar range, Dougherty said. Typically, companies pay about $100 per 1,000 prompts. “When you apply LLMs to large datasets, or allow many people in parallel to run prompts … you’ll want to make sure you’re taking pricing into account,” he said.
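To see how those figures compound at company scale, here is a hypothetical cost sketch built only from the ballpark numbers quoted above (about $100 per 1,000 prompts typically, up to $5 for a single large GPT-4 prompt); the team size and workload are assumed for illustration:

```python
# Ballpark figures from the story.
TYPICAL_COST_PER_PROMPT = 100 / 1000  # $0.10 per prompt ($100 per 1,000)
MAX_COST_PER_PROMPT = 5.00            # worst case for one large GPT-4 prompt

def monthly_cost(prompts_per_day: int,
                 cost_per_prompt: float = TYPICAL_COST_PER_PROMPT,
                 days: int = 30) -> float:
    """Estimate monthly LLM spend for a given daily prompt volume."""
    return prompts_per_day * cost_per_prompt * days

# Assumed workload: a 50-person team, each running 40 prompts a day.
daily_prompts = 50 * 40
print(f"Typical:    ${monthly_cost(daily_prompts):,.0f}/month")
print(f"Worst case: ${monthly_cost(daily_prompts, MAX_COST_PER_PROMPT):,.0f}/month")
```

Even at the typical rate, 2,000 prompts a day works out to thousands of dollars a month, and the worst-case rate is 50 times that, which is why smaller, cheaper models matter commercially.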

For Reed's view and the rest of the story, read here. →

Semafor Stat

Amount that China’s state-backed semiconductor fund recently invested in a memory chip startup called Changxin Xinqiao, according to a public filing seen by Reuters. China is currently trying to raise about $41 billion (300 billion yuan) for a new state chips fund, which will be its third since 2014. The money is part of a wider government effort to develop China’s domestic chips market as U.S. authorities choke off Beijing’s access to foreign manufacturers.

Plug

Patent Drop is a newsletter that makes it easy to keep up with what the hottest tech companies are currently working on. Their twice-weekly digest, based on submissions to the U.S. Patent and Trademark Office, reveals the latest innovations from Nvidia, Meta, Apple, and many other companies. Sign up for Patent Drop here.

Watchdogs

There’s been an avalanche of AI regulatory news as regions compete and cooperate over governance of the rapidly advancing technology. After the White House issued an executive order this week that would put guardrails on AI development, Vice President Kamala Harris today announced the new U.S. Artificial Intelligence Safety Institute, which will lead the government’s efforts on AI safety and trust.

During a speech in London, where she is attending the UK AI Safety Summit, Harris said that while AI has the potential for profound good, “it also has the potential to cause profound harm, from AI-enabled cyberattacks at a scale beyond anything we have seen before to AI-formulated bioweapons that could endanger the lives of millions.”


Meanwhile, U.K. Deputy Prime Minister Oliver Dowden told Bloomberg Television that it’s not appropriate for Chinese officials to attend some of the summit’s sessions designed for “like-minded countries,” while simultaneously touting Beijing’s presence at the event. The Chinese Communist Party’s Global Times blasted the politicization of China’s attendance, but it also praised the White House executive order. A senior Beijing official said at the summit that China wanted to talk to all sides and contribute to a global AI governance framework.

On that front, the U.K. announced that nearly 30 countries and regions signed on to the Bletchley Declaration, which calls for identifying shared AI safety risks and creating policies, like testing tools, to address them. It remains to be seen whether all that talk will turn into collective action or fragment along each government’s national interests, as broader tech regulation has.

What We’re Tracking
  • Apple warned at least 20 people in India, including opposition politicians and journalists, that they were the victims of state-sponsored cyberattacks. Apple didn’t specify what country may have been responsible for the intrusions, but public records show that India’s domestic intelligence agency has previously purchased equipment from NSO Group, an Israeli company whose spyware has been used to target human rights activists and journalists around the world.
  • European AI startup Mistral is trying to raise $300 million from investors just months after raising a $113 million seed round, The Information reported. The “OpenAI of Europe” has taken a distinctly different approach from its San Francisco competitor, choosing to fully open source its models and release them with few safety guardrails by default.
Hot On Semafor
  • Some of the Hamas militants who attacked Israel on Oct. 7 were fueled by a synthetic amphetamine. U.S. and Israeli officials believe it was used to suppress fear and anxiety during the rampage, and stimulate the willingness to kill civilians.
  • Inside the last-chance effort to sell Republicans on anyone but Trump.
  • An up to $1 billion fine may end the scandal that has captivated Wall Street.