The new Hollywood: Runway’s AI models edge closer to simulating reality

Reed Albergotti
Tech Editor, Semafor
Aug 22, 2025, 12:57pm EDT
Technology

The News

When Runway ML started in 2018, the company’s founders envisioned using AI to aid in the creation of art. The company’s AI models were among the first to generate synthetic video for movies like Everything Everywhere All at Once.

But as Runway’s models have improved, they’ve begun to do something unexpected: model the laws of physics simply by observing two-dimensional video.

“The more we put compute and data behind scaling those models, the more capable they become at simulating reality,” said Runway co-founder and CTO Anastasis Germanidis in a wide-ranging interview with Semafor.

Germanidis said Runway’s customers already include most of the top film studios and ad agencies, but the ability to simulate complex phenomena like fluid dynamics has attracted a new kind of customer: robotics companies that want to use the technology to create realistic, three-dimensional depictions of the world on which to train their own machines.


“Simulation is the big bottleneck that’s going to be required to solve challenges that are in robotics and self-driving, or any real-world problems that one might care about,” he said.


Step Back

Early AI-generated video models were full of obvious flaws. Even impressive ones like the original version of OpenAI’s Sora could barely run 15 seconds before displaying some glitch that ruined the magic trick.

Newer models like Runway’s Gen-4 and Google’s Veo 3 can produce videos nearly indistinguishable from reality. Part of the realism comes from the models’ ability to predict the movement of the physical world, including the way light reflects off objects and how water behaves.


Because AI models are “black boxes” whose inner workings remain largely a mystery, there’s no way to explain exactly how they replicate real life so well.

For tasks like complex engineering and traditional weather forecasting, scientists use systems known as physics models to make decisions. Those models require huge amounts of computation to make very narrow predictions.

Neural networks can be used as a kind of shortcut to approximate the work of physics models. Recently, companies have made huge progress in using neural networks to predict parts of the physical world, like the weather and the operations of machines.


But AI models used in video creation aren’t trained specifically to model physics. They do it as a byproduct of ingesting huge amounts of video.

There’s some evidence that video models are doing more than just mimicking the countless videos they’re trained on, Germanidis says.

Runway tests its models by asking them to generate videos of simple things, like a ball hanging from the ceiling, and then comparing the trajectory of that generated object to a real-world version that acts as ground truth.
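A hypothetical sketch of that kind of evaluation (the setup and tolerance here are invented for illustration, not taken from Runway): track the object’s position in the generated video, compute the ground-truth trajectory from physics, and score the gap between the two.

```python
import numpy as np

# Illustrative evaluation: compare the trajectory of an object in a
# generated video against a ground-truth physical trajectory. Here the
# "ball hanging from the ceiling" is a pendulum; the ground truth is
# the small-angle analytic solution, and the "generated" trajectory is
# a stand-in with a deliberate 2% frequency error.

g, length = 9.81, 1.0                  # gravity (m/s^2), string length (m)
omega = np.sqrt(g / length)            # pendulum angular frequency
t = np.linspace(0.0, 4.0, 200)         # 4 seconds of video at 50 fps
theta0 = 0.1                           # initial swing angle, radians

ground_truth = theta0 * np.cos(omega * t)
# Stand-in for angles tracked frame-by-frame from a generated video:
generated = theta0 * np.cos(omega * t * 1.02)

# Root-mean-square error between the two trajectories, in radians.
rmse = np.sqrt(np.mean((generated - ground_truth) ** 2))
physically_plausible = rmse < 0.05 * theta0   # arbitrary tolerance
```

In practice the tracked trajectory would come from a vision pipeline rather than a formula, but the scoring step — generated motion versus physical ground truth — is the same.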

The models still make mistakes and are far from being able to replace traditional physics modeling. But Germanidis said they haven’t come close to hitting a plateau in capabilities. “We still haven’t trained models on the vast majority of the observations of the world, like the outcomes of scientific experiments and the sensor data we have available to us,” he said. “The moment we do that, we’re going to build more powerful simulators of the world.”


Reed’s view

Germanidis also described how Runway’s products are creating a new kind of human actor.

The company’s newest product, Aleph, isn’t a video generator. It’s built on top of its Gen-4 model, allowing customers to edit real-world footage with AI-generated special effects. One of the tool’s use cases is transforming the way actors look, like a digital makeup and costume department.

Today, a movie star’s appearance is an essential part of the job. Some actors are incredibly talented but lack a certain look. Already, Germanidis said, Aleph is opening new creative roles for actors who may be particularly well-suited for an AI-generated skin. “One way of putting it is that you have a bit of unbundling of those skills. So if you’re amazing at the performance itself, you can just focus on that.”

Rather than replace actors outright, AI might instead challenge the definition of what it means to be a good actor, much as the advent of sound did to films in the 1920s.

Hollywood might need to change the way it holds auditions so that it can isolate the underlying acting ability necessary for a specific role, rather than the right look.

It’s possible AI models will get so good that they’ll replace humans altogether. But for high-quality art, the subtle emotion conveyed in an actor’s voice, facial expressions, and movement will be difficult to replicate with AI alone.


Room for Disagreement

One view is that it doesn’t matter if AI is “just a tool” for creatives. The tools themselves matter.

From a deep dive by The Hollywood Reporter’s Steven Zeitchik: “The compact of cinema for its roughly 125 years of existence is that we accept all the trickery onscreen because we know it was created by humans standing behind it — that whether the Death Star is being blown up or Chow Yun-fat and Michelle Yeoh are flying through the air in a sword fight, those born of flesh-bound mother and in possession of human brain came together, puzzled over a problem and figured out its solution to give us the art that we now see. Whatever didn’t happen involved a lot of people to make happen.”


Notable

  • Wired’s Zoë Schiffer argues the rise of AI in Hollywood is already here.
  • The Wrap predicts AI in filmmaking “could be the key to reinvigorating a Hollywood” that’s relied for too long “on safe blockbusters.”