February 16, 2024
Technology

Reed Albergotti

Hi, and welcome back to Semafor Tech.

Even for highly trained computer scientists, it can be difficult to get a job at a company like Google or Meta. But there’s another post that may be even more competitive: university professor.

That has always seemed odd to me; universities can hire brilliant AI researchers at a fraction of the salary that major tech companies are willing to pay. It’s also a great deal for taxpayers, who largely foot the bill for basic research in academia. The amount of economic growth that has come from discoveries made in university labs is incalculable.

And for years, Big Tech has benefited from some of academia’s great AI minds going to work in the private sector. The one consolation has been that many of those people have continued to publish papers that have helped push the science forward.

But as the commercial value of the technology has become more apparent, the amount of knowledge being publicly disseminated from AI researchers has dropped. That was readily apparent Thursday when OpenAI launched its new text-to-video tool, Sora, offering few details on how it was made. I wanted to explore that issue and what to do about it. I’d love to hear your thoughts on the article below.

Move Fast/Break Things

➚ MOVE FAST: Too Late. Apple is continuing to play catch-up in the AI race. It’s looking at an AI-enabled coding assistant to rival Microsoft’s GitHub Copilot. It’s a belated but smart move to keep developers from fleeing Apple’s programming language.

➘ BREAK THINGS: Too Little. Clubhouse, the app that surged during the pandemic and fizzled when it ended, is hoping it can go viral once again with a pivot to new AI tools, including one that turns texts into voice messages.

Artificial Flavor


The rush to capitalize on AI hype extends to academia. The University of Pennsylvania said it will become the first Ivy League school to offer an undergraduate degree in artificial intelligence.

It said the education will provide “the mathematical and algorithmic foundations of AI techniques, along with hands-on experience in programming as well as using AI tools and foundation models.” Courses include Introduction to Artificial Intelligence, Principles of Deep Learning, and Trustworthy AI.

University officials focused on the positive possibilities of the technology, saying students earning the degree will help develop AI “in service to humanity.”

Reed Albergotti

The AI industry’s innovation problem

THE SCENE

Jack Clark, co-founder of AI startup Anthropic, recently traveled to Washington to warn lawmakers that the original innovators in artificial intelligence — academia — were in danger of being left behind unless billions of dollars are invested in resources for them. If that isn’t fixed, he argued, the overall AI industry will suffer.

“If we don’t find a way to fund large-scale, public compute, we are going to be in a really bad time, because we will have a really narrow set of actors with a bunch of insights that are hard to publish,” he told Semafor. “It’s a real issue, but we can mitigate this problem.”

Advances in artificial intelligence are rapidly accelerating, but that is mostly taking place in the private sector. In just the past week, Meta published a powerful new AI framework called V-JEPA, Google DeepMind released its latest multimodal large language model, Gemini 1.5, and OpenAI offered a sneak peek at Sora, a text-to-video service.

That has only heightened concerns among some AI researchers because the amount of information companies are sharing about these achievements is shrinking, potentially hurting future innovation.

While tech companies are often secretive about their work, AI had been an outlier. For the past decade, big technology companies lured academics over to the private sector, not just with larger pay packages but with the promise that they would continue to publish their breakthroughs.

Those advances were then filtered back into academia, creating a virtuous cycle of innovation.

The release of ChatGPT, which turbocharged the race to commercialize cutting-edge AI research, has changed the calculus for tech companies. At the top AI firms, employees are now being asked to keep their accomplishments secret and, in some cases, stop publishing research papers altogether, upsetting the symbiotic relationship between the pursuit of profits and scientific discovery.

Daniel Zhang, the senior manager for policy initiatives at the Stanford Institute for Human-Centered Artificial Intelligence, helped establish the Foundation Model Transparency Index “to highlight the increasingly opaque nature of AI systems.”

Zhang said there are still a lot of AI research papers being published by big tech companies, but they are less likely to reveal how the most advanced AI models work, making resources for academia even more important.

“One of the things we hear about most frequently from Stanford researchers is they don’t have enough compute for their PhD thesis,” he said. “And this is at Stanford, which is pretty well resourced.”


REED’S VIEW

In a recent meeting with the CEO of a major AI company, I asked whether the firm planned to publish research papers on its discoveries. The CEO looked at me like I was crazy. In other words, no way.

It struck me how much times have changed from just a year or two ago, when tech companies seemed to want to show off their scientific chops. We’re now solidly in the commercial phase, where all that hard research starts to pay off in the form of profits.

The side effect is that the commercial phase tends to push scientific research aside to make way for turning the technology into products.

But since the end of World War II, this yin and yang of science and capitalism has been one of the big secret sauces of American innovation.

I love the story of reduced instruction-set computing, or RISC. The technology was conceived at IBM in 1975, but it didn’t serve a business purpose, so it was set aside.

Five years later, researchers at Berkeley found out about it and, with funding from government agencies, spent years developing the technology. RISC-based processors now run in practically every smartphone.

When it comes to the development of AI, it is going to be difficult for the science side to make meaningful contributions if universities don’t have the taxpayer-funded resources to push the boundaries of the technology.

One person I interviewed compared the funds we are devoting to public AI research to the Apollo program. In 1966, its $45 billion inflation-adjusted budget consumed 4.4% of the overall federal pot. We’re spending nowhere near that to boost the development of AI technology, which could be just as consequential as landing on the moon.

Read here about how the U.S. government and tech companies are trying to address this problem. →

Semafor Stat

The value of Nvidia’s U.S. equity holdings as of the end of December, according to an SEC filing this week. Many of its investments are in companies that use AI to make advancements in fields outside of that technology, like Recursion Pharmaceuticals, which is looking to shake up the way drugs are discovered. That portfolio serves as a cheat sheet for other investors to see what startups could be the next big thing.

What We’re Tracking

The Elders, an organization founded by the late Nelson Mandela, teamed up with the Future of Life Institute to pen a letter warning of the long-term, existential risks of AI. Future of Life is the same organization that wrote another letter last year calling for a pause in the development of frontier AI models. That missive had the support of some well-known names, such as Elon Musk.

The most recent letter’s signatories include several notable former world leaders, such as former U.N. Secretary-General Ban Ki-moon, and celebrities like Annie Lennox and Cate Blanchett. It calls on world leaders to “think beyond short-term political cycles and deliver solutions for both current and future generations.”

It’s difficult to argue with that kind of sentiment, but it may not make any meaningful difference. The other “pause” letter got a lot of attention, yet advances have continued at a brisk pace.

Obsessions

OpenAI’s snappy new text-to-video product, Sora, overshadowed a new research paper from Meta that might, in the long term, be more consequential. Called V-JEPA, for Video Joint Embedding Predictive Architecture, it’s the latest step in Meta’s attempt to reach artificial general intelligence.

Rather than just make AI models bigger, Meta’s chief scientist for AI, Yann LeCun, believes a whole new approach is needed. And it involves thinking like a baby.

The idea is that babies can learn much faster than current AI architectures simply by observing the physical world. Once a baby sees a cat or two, it can recognize pretty much any cat and know more or less how it will behave. Today’s AI models need boatloads of data and compute power to do the same thing.

The problem might be that today’s AI algorithms are just too detail-oriented. Computers, unlike humans, can dissect the world pixel-by-pixel. That’s not how people think. When we’re swimming in the ocean, we don’t count the number of water droplets to determine whether a wave is about to crash on us.

With the V-JEPA method, Meta researchers have found a way to, in a sense, ask less of the algorithm. They don’t want the software to find the patterns in every moving pixel of a video. Instead, they remove a key section of the scene and ask the software to guess, in general terms, what’s missing.

This makes intuitive sense. If I removed a flying plane from a picture of the sky and asked you what was missing, you would just say “the plane.” You wouldn’t count the number of clouds and patches of blue sky.

What ends up happening with this method, according to the paper, is that as the AI learns to predict what should go in the blank space, it gets a big-picture sense of the world rather than a pixel-by-pixel one. LeCun calls this a “world model.”
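To make the general idea concrete, here is a minimal, hypothetical sketch in PyTorch of masked prediction in latent space: hide part of an input, then train a predictor to match the hidden region’s embedding from a target encoder, rather than reconstructing raw pixels. This is not Meta’s actual V-JEPA code; the module names (TinyEncoder, predictor), sizes, and training loop are illustrative assumptions only.

```python
# Toy sketch of JEPA-style training: predict missing content in embedding
# space instead of pixel space. All sizes and names are illustrative.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Maps a flattened 'frame' to an embedding (stand-in for a real video backbone)."""
    def __init__(self, dim_in=1024, dim_emb=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(), nn.Linear(256, dim_emb))

    def forward(self, x):
        return self.net(x)

context_encoder = TinyEncoder()   # sees only the unmasked part of the frame
target_encoder = TinyEncoder()    # sees the full frame; in the real method this is
                                  # a slow-moving copy of the context encoder (frozen here for brevity)
predictor = nn.Linear(128, 128)   # guesses the full frame's embedding from the masked view

opt = torch.optim.Adam(
    list(context_encoder.parameters()) + list(predictor.parameters()), lr=1e-3
)

frames = torch.randn(32, 1024)    # toy batch of flattened video frames
mask = torch.ones(1024)
mask[400:600] = 0.0               # hide a contiguous chunk of every frame

for step in range(100):
    with torch.no_grad():
        target = target_encoder(frames)               # embedding of the complete frame
    pred = predictor(context_encoder(frames * mask))  # predict it from the masked view
    loss = nn.functional.mse_loss(pred, target)       # compare in latent space, not pixel space
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The design choice this toy loop illustrates is the one described above: because the loss is computed on compact embeddings rather than on every pixel, the model is pushed toward a rough, big-picture guess about what is missing instead of an exact reconstruction.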

The next step will be mixing in audio. If LeCun is right, this could be the first move toward computers really understanding things much faster and with fewer resources.
