Feb 16, 2024, 12:46pm EST

Tech companies go dark about AI advances. That’s a problem for innovation.

The Scene

Jack Clark, co-founder of AI startup Anthropic, recently traveled to Washington to warn lawmakers that academia, the original innovator in artificial intelligence, is in danger of being left behind unless billions are invested in resources for its researchers, and that if the problem isn't fixed, the overall AI industry will suffer.

“If we don’t find a way to fund large-scale, public compute, we are going to be in a really bad time, because we will have a really narrow set of actors with a bunch of insights that are hard to publish,” he told Semafor. “It’s a real issue, but we can mitigate this problem.”

Advances in artificial intelligence are rapidly accelerating, but that is mostly taking place in the private sector. In just the past week, Meta published a powerful new AI framework called V-JEPA, Google DeepMind released its latest multimodal large language model, Gemini 1.5, and OpenAI offered a sneak peek at Sora, a text-to-video service.

Those releases have only heightened concerns among some AI researchers, because the amount of information companies share about such achievements is shrinking, potentially hurting future innovation.

While tech companies are often secretive about their work, AI had been an outlier. For the past decade, big technology companies lured academics over to the private sector, not just with larger pay packages but with the promise that they would continue to publish their breakthroughs.

Those advances were then filtered back into academia, creating a virtuous cycle of innovation.

The release of ChatGPT, which turbocharged the race to commercialize cutting-edge AI research, has changed the calculus for tech companies. At the top AI firms, employees are now being asked to keep their accomplishments secret and, in some cases, to stop publishing research papers altogether, upsetting the symbiotic relationship between the pursuit of profits and scientific discovery.

Daniel Zhang, the senior manager for policy initiatives at the Stanford Institute for Human-Centered Artificial Intelligence, helped establish the Foundation Model Transparency Index “to highlight the increasingly opaque nature of AI systems.”

Zhang said there are still a lot of AI research papers being published by big tech companies, but they are less likely to reveal how the most advanced AI models work, making resources for academia even more important.

“One of the things we hear about most frequently from Stanford researchers is they don’t have enough compute for their PhD thesis,” he said. “And this is at Stanford, which is pretty well resourced.”

Know More

For decades, theories about deep learning and neural networks — the AI techniques driving the current wave of advances — percolated and evolved in academia. It was not until the social media era, when cloud computing and massive data collection provided a proving ground for AI research, that tech companies started luring the best minds in the space away from academia.

Even as rock-star researchers like Geoffrey Hinton, formerly of Google, and Meta's Yann LeCun left their university posts, the knowledge they gained with the practically infinite resources of Big Tech still found its way back to academic researchers in the form of published papers.

Google’s Attention is All You Need is one example. It laid out the theory behind transformer architecture, and its wide dissemination supercharged the development of large language models, including OpenAI’s GPT.

But as competition has heated up, Google has shared little about the inner workings of its most advanced family of AI models, called Gemini. OpenAI, which began as a nonprofit AI research lab, now releases very little about its most advanced work.

Meta is the lone outlier. It has made its research public and partially open-sourced its family of LLMs, called Llama, for almost anyone to use.

Anthropic, which makes some of the most advanced frontier models, says it is still committed to publishing all of its safety research, but holds back on its capabilities findings for competitive purposes.

“The lack of openness has definitely been noticeable over the past year and it’s put a damper on academic investigations,” said Josh Albrecht, chief technology officer at AI startup Imbue. “If you look at most of the largest, most successful AI companies right now, they’re really building off stuff that was developed at Google or in academia.”

One possible countermeasure is providing expensive compute resources to academic researchers, which would help level the playing field between the private sector and academia.

Last month, some of the biggest names in tech, from Microsoft to Nvidia, pledged to donate resources to the government’s National AI Research Resource, which would give academics and startups access to free compute power to test and run new AI models.

But there’s still a sense in both academia and the private sector that it isn’t enough. Deep Ganguli, an AI research scientist at Anthropic who has spent time working in academia and nonprofits, said it’s likely that the next big breakthroughs will come from researchers and not the private sector — but only if academic researchers have the necessary compute resources and access to adequate data sets.

“We want them to be competitive,” he said. “I think that’s a better world, if we can all be playing together. And we’re not there yet.”

Reed’s view

In a recent meeting with the CEO of a major AI company, I asked whether the firm planned to publish research papers on its discoveries. The CEO looked at me like I was crazy. In other words, no way.

It struck me how much times have changed from just a year or two ago, when tech companies seemed to want to show off their scientific chops. We’re now solidly in the commercial phase, where all that hard research starts to pay off in the form of profits.

The side effect is that commercialization tends to push scientific research aside to make way for production of the technology.

But since the end of World War II, this yin and yang of science and capitalism has been one of the big secret sauces of American innovation.

I love the story of the reduced instruction-set computing, or RISC, processor. The technology was conceived by IBM in 1975, but it didn't serve a business purpose, so it was set aside.

Five years later, researchers at Berkeley found out about it and, with funding from government agencies, spent years developing the technology. RISC-based processors now run in practically every smartphone.

When it comes to the development of AI, it is going to be difficult for the science side to make meaningful contributions if universities don’t have the taxpayer-funded resources to push the boundaries of the technology.

One person I interviewed compared the funds we are devoting to public AI research to the Apollo program. In 1966, its $45 billion inflation-adjusted budget consumed 4.4% of the overall federal pot. We’re spending nowhere near that to boost the development of AI technology, which could be just as consequential as landing on the moon.

Room for Disagreement

Open research in AI has its downsides and detractors, who argue the risks may outweigh the benefits. David Evan Harris, a researcher at Berkeley, makes that argument in this article: “The threat posed by unsecured AI systems lies in the ease of misuse. They are particularly dangerous in the hands of sophisticated threat actors, who could easily download the original versions of these AI systems and disable their safety features, then make their own custom versions and abuse them for a wide variety of tasks.”

Notable

  • Here’s a good, brief history of deep learning, which developed in academia and then was revived by big tech companies.