Mark Surman, president and executive director of the nonprofit Mozilla Foundation behind the Firefox browser, has spent the last three years promoting what his organization calls “trustworthy” artificial intelligence. As generative AI tools like ChatGPT take off, he’s accelerating that work, focusing on a new venture capital fund and an open-source AI startup called Mozilla.ai.
Surman talked to Semafor about what companies Mozilla is investing in, the dangers of consolidation in the AI industry, and why small tech matters. The conversation has been edited for length and clarity.
The View From Mark Surman
Q: What does it actually mean for AI to be safe or “trustworthy,” as Mozilla puts it?
A: Trustworthy AI comes down to two simple things: agency and accountability. There are conscious design decisions that give people agency. An old-school example would be a recommender engine on YouTube, and being able to say, “I want to see less content like this,” or “I don’t want to see stuff that’s violent.” Being able to train the AI back and tell it what you want is probably the simplest way to think about agency.
Accountability means that if an AI system causes harm — say it makes a biased decision about a loan, a job interview, or a government benefit program — or if it has some broader social effect, like spreading misinformation or undermining elections, the people who deploy that system should be accountable for that harm. That accountability needs to be there to push people to think about AI safety and about the impacts of the systems they develop when they deploy them.
Q: In 2020, you pivoted your activism to focus entirely on AI, a decision that I think some people will view as early. How has the explosion in generative AI over the last few months changed your approach?
A: What we and other people have been seeing is that automated systems and data-driven computing are the technologies that will define how things get built, like the web was 25 years ago. It’s not just chatbots, it’s the core ingredient of this era of computing. As an organization that cares about whether computing is good for humanity, we thought focusing on where AI goes is like focusing on where the web went 20 years ago — it just felt important.
AI has come into people’s consciousness because of things like ChatGPT, but it’s also just all around us and in everything. And in that sense, not much has changed, right? The big players are still in control and using the fact that they’re in control to consolidate things.
We’re still in move-fast-and-break-things mode, and rolling stuff out without enough consideration for questions about agency and accountability. And it’s harder for smaller players, in particular, players who want to do more trustworthy things, to really break through.
Q: Mozilla Ventures, the fund you announced last year, is investing $35 million in early-stage startups that share Mozilla’s values. Are there any companies you’re really excited about?
A: There’s Secure AI Labs, and their whole thing is to be a relationship broker between communities that struggle with health issues, and pharmaceutical or health researchers who want to solve health issues. It’s about delivering value to people and unlocking things in research with privacy at the core of how the AI works. Another company is at the forefront of AI safety, making it easier to test models for questions of fairness and responsibility as they go through the deployment cycle.
And then there’s Lelapa AI, which is a South African company focused on AI for Africans, by Africans. Their first play is conversational AI in African languages. I think that one, in particular, is interesting because a lot of this gets framed as an arms race between Google and Microsoft. But how does AI respond to people who aren’t at the core of how big tech companies are building these things? Smaller use cases are actually a big part of building a richer, human-centric AI ecosystem.
Q: The Information reported earlier this month that Mozilla was planning to prominently feature a chatbot in Firefox to give users a more conversational search experience. How do you think Firefox’s approach differs from that of the tech giants doing something similar right now?
A: We’re really early on in figuring this out. People are starting to look for different things in search and in the ways they interact with the digital world. Chatbots give them some of those different things in terms of summarized answers, or a quick way to know something. So if that’s what people are looking for on the internet, Firefox wants to give it to them, and we’ll do that in a way that we do everything: with care and with privacy at the core.
Q: What are you worried about with AI right now?
A: I’m both worried and excited. This feels like when I was doing internet stuff in 1994 and the first graphical web browsers came out. It’s that moment at the beginning of the web when something cool is possible. I can play with it, you can play with it. I don’t know what I can make with it yet.
On the flip side, my worries are kind of boring, which are that we don’t wean ourselves off the move-fast-and-break-things mentality, and roll stuff out to hundreds of millions or billions of people without thinking about what the side effects could be. And we saw what social media wrought in terms of misinformation, in terms of addiction, in terms of the misaligned incentives between people and platforms.
The imbalance between people’s interests and the interests of platforms remains the big, big thing to be worried about, much more than the fear of god-like AI. Yes, these things will be powerful, but they’re controlled by a set of companies, and that’s too much power in the hands of too few players. That is the immediate concern.