

Updated Feb 2, 2024, 1:56pm EST
tech

Meta is bucking just about every AI trend, including the ‘boys club’



The Scene

In the high stakes, brutally competitive AI race, Mark Zuckerberg’s Meta has been an outlier. While most of the big foundation model players tightly guard their methods and charge fees for using their service, Meta’s offerings under the Llama umbrella are free and mostly open source, so almost anybody can experiment with them.

That strategy has helped Meta quickly catch up to an early lead gained by companies like OpenAI, Anthropic and Cohere, and established it as a guiding light for those in the industry who feel open research is the right path for the development of AI.

There’s another contributing factor to Meta’s against-the-grain approach, one that has gone mostly unnoticed, even inside the company: Its Fundamental AI Research lab, responsible for Llama and other breakthroughs, is made up largely of women. Around 60% of its leadership team are women, and some reporting chains, according to interviews with people inside the organization, are female from top to bottom.

AI has helped revive the company’s business image after Zuckerberg’s turn to the metaverse. On Friday, the company reported 25% revenue growth in the fourth quarter of 2023, and issued its first dividend.

Beyond its open approach to AI, the gender diversity inside FAIR, as the lab is called, is also an industry anomaly. For instance, Time Magazine’s list of the top 25 leaders in AI includes only six women. A December article in the New York Times titled “Who’s Who Behind the Dawn of the Modern Artificial Intelligence Movement” does not mention a single woman.

In a field in which counterintuitive insights have led to some of the greatest discoveries, a diverse set of researchers can be a superpower. “When you have a more diverse group of people, it’s not that they come up with totally different solutions, but they ask different questions,” said Joelle Pineau, who leads FAIR after becoming a prominent AI researcher and professor at McGill University. “Especially in research, the question you ask is really, in many ways, the most important thing.”

It has also helped shape the company’s approach to building artificial general intelligence, or AGI, which Zuckerberg recently indicated is a central focus of the company.

Pineau says the FAIR team looks at AGI differently than other research labs, and while she can’t be entirely sure, it may be connected to the team’s more diverse makeup and ability to ask different questions.

For instance, many of the tech companies striving to create AGI talk about it as if the ultimate result is a singular entity with intelligence beyond that of any human.

But part of the AGI agenda at Meta is focused on leveraging the effects of collaboration between multiple AI agents, none of which have superintelligence, but can collectively accomplish quite a lot.

“If you look at humans and animals, a lot of our intelligence doesn’t reside in a single individual. It resides in a community — a collection of individuals and how we work together to solve harder problems and complex problems,” said Pineau, who played viola in the Ottawa Symphony Orchestra before studying engineering at the University of Waterloo and earning a PhD in robotics from Carnegie Mellon.

The multiple agent approach to AGI has other benefits, she said, because it would give humans more control over how the AGI operates by giving each agent limited abilities and siloing information so that no agent has all of it.

“It flips around this conception that General Intelligence needs to be achieved in a single box, which has all of the world’s information and has control over all the world’s levers,” she said. “We don’t want that in companies. We don’t want that in governments, in individuals or in any of our institutions. So why would we want that in our AI system?”

Know More

FAIR was founded in 2013 by Yann LeCun, now Meta’s chief scientist in charge of AI. Often referred to as one of the “godfathers of AI,” LeCun helped pioneer deep learning while working in academia and joined Facebook that year to run its AI efforts.

FAIR made an almost immediate impact on the field of AI by publishing cutting-edge research in areas like generative adversarial networks — a crucial component of many AI image generators — computer vision, and language translation. It also created PyTorch, an open source machine learning framework that is now ubiquitous in the field.

The research conducted at FAIR, while academically inspired, also helped Meta when AI techniques built in the lab became useful tools for various teams throughout the sprawling company.

But in November 2022, when OpenAI released ChatGPT, generative AI tools suddenly became central to the business models of big tech giants. While Google, OpenAI, Anthropic and other leaders in large foundation models have treated their techniques as closely guarded secrets, Meta’s FAIR group has stuck to its guns, continuing to openly publish work in a way that other companies might view as giving away the farm.

In July, FAIR released Llama 2, a family of free, open-source AI models that have similar functionality to top-tier versions. Llama, according to conversations with dozens of people who work on implementing AI models at companies, has spread like wildfire and is the fastest growing model in the corporate world.

It has also quickly become a foundational tool, allowing other AI companies to use it to build their own, customized models. Heavily funded French startup Mistral revealed earlier this week that it, too, had used Meta’s work to develop models for its customers.

Kim Hazelwood, director of infrastructure at FAIR, said the fact that Meta has remained open has made it an inviting place, and that has helped it recruit sought-after researchers who want their work to see the light of day. Hazelwood says that culture of openness is connected to the diversity of FAIR. “When you’re doing closed research, that is exclusive work,” she said. “We’re not going to draw a bunch of boundaries, we’re not going to put up a bunch of walls. And that really just comes out when you get together a group of people that have really had to make their life’s work taking down these walls that have been put up artificially.”

When Nikhila Ravi, a research engineering manager, joined the group about six years ago, she had never had a female engineering manager before.

At FAIR, her entire reporting chain all the way up to Pineau was sometimes composed of women. Ravi said the diversity has been an asset. For example, when the team achieved a major breakthrough in computer vision called Segment Anything, the emphasis was on the research paper and all the nitty-gritty technical details, not the demo showing how it worked.

Ravi thought that since the technology could be useful in so many fields, making the demo appealing to the masses was crucial. She had taken classes in web development and used those skills to make a public-facing product that allowed anyone to upload a photo and see Segment Anything in action. She says it was a hit and has now become the default way of showcasing new breakthroughs. “My manager really created a space for me to be able to do that,” she said.

Reed’s view

Until recently, I had no idea Meta’s FAIR team was such a unique place in the AI field when it comes to diversity. After someone at the company pointed it out to me, I searched for articles or information about it and couldn’t find anything.

It’s something that appears to have happened organically, without people really thinking much about it. And Meta has not promoted this fact. When I reached out about it, Meta’s communications team said I was the first reporter to ask about it.

I agree with Pineau and others that the diversity in that group is an asset. But I also think it’s a symptom of creating an organization that seems to be traveling down a path all by itself.

That starts with LeCun, who created it. This is somebody who seems unbothered by prevailing theories or conventional wisdom. He spent years working on deep learning and neural networks when that field was the laughing stock of computer science departments. In hindsight, the field he helped pioneer has really taken over the world.

In 2016, he helped create The Partnership on AI, an organization focused on AI safety, ethics and alignment. This was back before AI safety was the cool thing to care about.

But when his former colleagues Geoffrey Hinton and Yoshua Bengio became media darlings after raising alarm bells about killer AI, LeCun took the unpopular position that it wasn’t really a thing we need to worry about.

LeCun is comfortable in his convictions, even when they’re unpopular, so it’s not surprising that FAIR stands out among other major AI research shops.

Zuckerberg has moved FAIR closer to product development teams at the company, he said in a recent interview. Having the attention of a CEO known for “moving fast” will only accelerate what’s happening there.

Now that the research lab’s stock has risen within Meta, we’ll likely see more talent gravitate there and more public, open source research come out of it. A certain percentage of that work will become the industry standard, like PyTorch. And that will be good for Meta’s bottom line.

The View From Other AI Shops

While Meta is an outlier on diversity, it’s worth flagging that other foundation model companies have women in leadership positions. OpenAI has CTO Mira Murati and head of safety Lilian Weng, among others. And Anthropic has a female co-founder, Daniela Amodei.

Those companies have also published research and open sourced some models.

The View From Europe

Led by French startup Mistral, Europe seems to be leaning into the open source, open research philosophy when it comes to AI. From a competitive standpoint, the philosophy makes sense. If everyone develops AI models in-house, the likely outcome is that the U.S.-based tech giants will dominate the AI boom that’s about to happen.
