

Silicon Valley veteran Nikesh Arora on tech’s role in government, and the future of AI

Updated Nov 20, 2024, 1:34pm EST
Palo Alto Networks’ CEO Nikesh Arora
Courtesy of Palo Alto Networks

The Scene

While Palo Alto Networks CEO Nikesh Arora and his firm are more under the radar than Silicon Valley’s consumer companies, his experience has turned him into a valuable voice on the industry, including the AI revolution.

He was an early executive at Google who rose to become one of its most powerful leaders when the company was transforming the internet. He then left to become president of SoftBank, which upended the venture capital industry.

And since he joined Palo Alto Networks in 2018, he has steadily grown the company’s market value from around $18 billion to $127 billion today as he tries to consolidate the fragmented world of cybersecurity into a single platform to rule them all.


He spoke to Semafor about the tech industry’s changing relationship with the US government, how AI is going to change everything, and the future of cybersecurity.


The View From Nikesh Arora

Reed Albergotti: A lot of tech people see [Trump’s reelection] as an opportunity to reform government. What should tech’s role be in this administration?

Nikesh Arora: We have the FedRAMP process, which is designed to make sure things are tested and the tires are kicked. But over time, these things take longer and longer, and you know how quickly technology moves nowadays. Two years ago, everyone talked about ChatGPT, and today we’re talking about $300 million AI clusters.


If we apply the traditional FedRAMP process to AI clusters, it won’t get approved for another three years. So the question is: Does that mean we don’t deploy AI across the government to make things efficient and faster? I think we do. How can we adapt the processes while keeping the principles alive of making sure it’s secure, making sure it’s manageable, yet deliver the benefits of technology to the government? I think you’re going to see a lot more of that.

People in the tech industry would have to divest their stock, and there are other things that make it challenging for them to serve in government. Is it too difficult?

I don’t know. That’s possibly true for some people, but at the end of the day, why do people leave companies and go to different companies? Because they find more challenging work, and equal opportunity from an economic perspective, and a set of people around them who are motivated to make change happen. It’s unfair to tar everyone with the same brush. There’s so many more people who are mission-driven today who would want to go make a difference. The better question is: Can we make it an exciting place to be where we’re actually having meaningful impact? Then we can figure out the divesting question and who goes there.


Palo Alto Networks works with government agencies. Are there things we need to do to change the way cybersecurity works in this country, to help protect businesses and individuals better?

It’s important to understand, despite all of our points of view, that the United States government still has one of the best offenses and much better defense than most other countries in the world. Let’s make sure we don’t throw the baby out with the bath water. We’ve done a reasonably good job getting there.

Now, could we do better? Of course, but every company can do better. It’s possibly opportune timing that as we go into a new inflection point with a whole new wave of technology, that there’s a bunch of changemakers who want to get involved and see how we can adopt technology faster. That’s a good sign.

At least the intention seems to be in the right direction. That’s why I said on CNBC that we are more likely to get an AI-positive administration than we might have had in the past. That doesn’t mean it’s going to be smooth sailing, because there are a lot of things we’ve figured out, and a lot that still need to be figured out.

You mentioned offensive hacking. Right now AI isn’t a huge problem there, at least until capabilities improve. But do you think we hit a point where AI just becomes too dangerous to make widely available?

Right now, we’re just creating a smarter and smarter brain. Today, it speaks every language. It understands every language. It knows all the data in the world out there, but it’s still developing a sense of right versus wrong and good versus bad, because the internet is sans judgment. You can ask a question, and all these guardrails are being put in by model developers, but AI itself doesn’t have its own guardrails. Now, we’re all waiting for the big reveal of when it goes from just being able to tell you what it knows to being able to be smart enough to infer things it doesn’t know. Can AI be Albert Einstein? Not yet. Can it be Marie Curie? Not yet. But the moment AI starts building curiosity, that’s the next step.

Then the question is, who’s going to put the guardrails on this brain, and who’s going to have access to the brain? That’s also more worrisome than where we are today, but not as scary as it could be. Now, let’s take the next step.

In the case of [self-driving car company] Waymo, we let the scary brain take control. That’s the biggest fear. If you let AI take control, how do you know it’ll always do the right thing? How do you know that Waymo won’t lock the doors and keep driving and take you to Sacramento just because someone commanded it to? Those are the things we have to think hard about. How do you make sure when you get this super intelligence that it is only used for good? Who has access to it? Then the question is, when do we give control to super intelligence that is only used for good, and do we have the ability to manage it in such a way that we can sometimes at least have guidance control?

I know security is a cat and mouse game. But what if you had an AI model that could think or reason, and it could write code, basically like Stuxnet [the malicious worm that targeted Iran’s nuclear program] with the ability to adapt and think once it’s in the system. How would you combat that?

We’re already going in that direction. We’re trying to build models of what is normal behavior because all the Stuxnets of the world come and think they’re going to have to do something out of the norm to be able to breach us.

And typically, there’s some abnormal behavior that happens. Like, when Nikesh logged in this morning, he tried to download five terabytes of data onto his personal server. That doesn’t sound normal. The problem is, today, we don’t have a good sense of what is normal, what is abnormal, and what should I do when it’s abnormal. There’s so much noise in the system that nobody actually has a clear sense of what is noise and what is signal.
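To make the idea concrete, here is a minimal sketch of the kind of baseline check Arora is describing, where activity far outside a user’s normal range gets flagged. The thresholds, the baseline data, and the function name are hypothetical, invented for illustration; this is not Palo Alto Networks’ actual detection logic.

```python
from statistics import mean, stdev

# Hypothetical per-user baseline: bytes downloaded per day over recent history.
baseline_bytes = [2.1e9, 1.8e9, 2.4e9, 1.9e9, 2.2e9, 2.0e9, 1.7e9, 2.3e9]

def is_anomalous(todays_bytes: float, history: list[float], z_threshold: float = 4.0) -> bool:
    """Flag activity that sits far outside the user's normal range (illustrative only)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return todays_bytes > mu * 10
    z_score = (todays_bytes - mu) / sigma
    return z_score > z_threshold

# "Nikesh tried to download five terabytes onto his personal server this morning."
print(is_anomalous(5e12, baseline_bytes))  # True -> raise an alert for analysts
```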

I’ll give you an example. A few years ago we had the SolarWinds incident.

SolarWinds was a hack where a nation state decided, why bother hacking one company at a time? Let’s go hack a piece of hardware [and] everybody who has it will be fair game. Now, this piece of hardware technically sits in most companies. But we discovered through our user behavior analysis that this thing never talks to anything outside. But today it’s trying to, so we stopped it.

We stopped a zero-day attack. And then we looked and said, ‘Wow, what’s going on here?’ So we actually called the vendor and said, ‘Guys, what happened here?’ They replied, ‘Nothing’s wrong, [it] must be in the infrastructure.’

We had to hire a third party to come in and do an investigation, and eventually found out that they had been hacked. That’s an example of how once you have clear signal, you can separate the noise from signal. With good signal, you can put remediation into place.
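The detection Arora describes amounts to a very simple kind of check: record which external destinations each internal device has historically contacted, and alert when a device that has never talked to the outside world suddenly does. The sketch below is an illustrative reconstruction with invented names and data, not the vendor’s actual analytics.

```python
# Hypothetical record of outbound destinations each device has been seen contacting.
known_egress: dict[str, set[str]] = {
    "orion-server-01": set(),          # historically never talks to the outside world
    "build-agent-07": {"github.com"},  # normally only reaches source control
}

def check_connection(device: str, destination: str) -> str:
    """Decide what to do with a new outbound connection, based on the device's history."""
    seen = known_egress.get(device, set())
    if destination in seen:
        return "allow"
    if not seen:
        # A device with no history of outbound traffic suddenly calling out is the
        # strongest signal in this toy model -- block first, investigate after.
        return "block_and_investigate"
    return "alert"  # new destination for a device that does talk outside

print(check_connection("orion-server-01", "203.0.113.50"))  # block_and_investigate
```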

I guess the probabilistic nature of AI kind of makes that interesting. Because now you can see across this big area, and the worst that can happen is you get a false positive, right?

In our business, false positives are dangerous things. In case of a false negative, I’m breached. In case of a false positive, I’ve disrupted a business process while thinking it was a bad thing. In both cases, I have a challenge, so I have to be as accurate as possible. We actually have patented the term “precision AI.” [Unlike generative AI], precision AI is not allowed to hallucinate, because any hallucination can cost a life. Imagine if your Waymo hallucinated. ‘Sorry, didn’t realize that was the bay!’
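Arora’s point about the two error types can be stated numerically: a false negative is a breach, a false positive is a disrupted business process, and a detector is only useful if it keeps both low. The counts below are made up purely to show how precision and recall capture that trade-off.

```python
def precision_recall(true_pos: int, false_pos: int, false_neg: int) -> tuple[float, float]:
    """Precision: of the alerts raised, how many were real threats.
    Recall: of the real threats, how many were caught."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall

# Made-up numbers: 90 real detections, 30 false alarms (disrupted processes),
# 10 missed threats (breaches).
p, r = precision_recall(90, 30, 10)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.75, recall=0.90
```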

Today there’s a huge debate on agentic AI. That is the most fascinating thing ever. It’s kind of like the Holy Grail, because the only way you can give AI control is through agentic AI. It’s like your Waymo has an AI agent taking control of the car. After all the inferencing and figuring out, it controls you. Tomorrow, we’re going to have agents controlling everything, which we can let AI manage through some freakish moment.

That is a huge buzzword now, this agentic thing. But you’re right, it’s an incredibly powerful concept. What are these systems going to look like in the age of agentic AI?

In the past, those things were called automated tasks. They happen around you. When you leave the house, and if you have some fancy new auto alarm system, it sends the doors down for five seconds and if nobody in the house is moving, it turns on the alarm. That sounds like an alarm agent. It’s a piece of code. Or I can take my Tesla app and I can make it turn the temperature up. So when it’s really, really cold, it’ll turn on.

With agentic AI, I can be like, ‘I’m flying to New York. Book me an airline ticket. When I land there, get me an Uber, make sure I get a reservation for dinner using OpenTable. And tomorrow night, I don’t want to go out. Just have DoorDash drop me food.’ You just named five apps, what’s the big deal? But I’m going to empower an agent to do all those five tasks for me. So now it looks like a bunch of automated workflows. There’s going to be a whole bunch of automation. There are going to be a bunch of API calls, a bunch of brokerages, negotiations between my agent and those agents. It’s going to be a fascinating new world.
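As a rough sketch, “empowering an agent to do all five tasks” boils down to orchestrating a sequence of delegated calls. The functions below are stand-ins invented for illustration; real integrations would go through each provider’s API and authentication.

```python
# Hypothetical task functions standing in for real service integrations.
def book_flight(destination: str) -> str: return f"flight to {destination} booked"
def book_ride(pickup: str) -> str: return f"ride arranged from {pickup}"
def reserve_dinner(city: str) -> str: return f"dinner reserved in {city}"
def order_delivery(address: str) -> str: return f"food ordered to {address}"

def run_trip_agent(destination: str, hotel: str) -> list[str]:
    """Carry out the user's stated intent as a sequence of delegated tasks (toy example)."""
    return [
        book_flight(destination),
        book_ride(f"{destination} airport"),
        reserve_dinner(destination),
        order_delivery(hotel),  # "tomorrow night, just drop me food"
    ]

for step in run_trip_agent("New York", "hotel in Midtown"):
    print(step)
```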

From a security perspective, I imagine the telemetry will look different.

There’s going to be a little bit of settling in the industry. Who’s the super agent? Who’s the task agent? All that stuff has to be figured out.

Somebody has to figure that out in the enterprise case, because I have role-based access control for everything. Now in security, funnily enough, we have a product called XSOAR. We’re the market share leaders. It does security orchestration, automation, and response. When I find a flaw in your security infrastructure, I have an automated script that runs and solves the problem. I already have an agentic, automated workflow backbone in my SIEM.
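A stripped-down illustration of that orchestration-and-response pattern: a finding comes in, and a matching remediation runs automatically. This is a generic sketch with invented names, not the XSOAR product’s actual API.

```python
# Hypothetical remediation actions; a real SOAR platform would pull findings from
# detection tools and run vetted playbooks with approvals and audit logging.
def disable_account(user: str) -> None: print(f"disabled account {user}")
def quarantine_host(host: str) -> None: print(f"quarantined host {host}")

PLAYBOOKS = {
    "credential_theft": lambda finding: disable_account(finding["user"]),
    "malware_detected": lambda finding: quarantine_host(finding["host"]),
}

def remediate(finding: dict) -> None:
    """Route a security finding to the matching automated playbook, if one exists."""
    action = PLAYBOOKS.get(finding["type"])
    if action:
        action(finding)
    else:
        print(f"no playbook for {finding['type']}; escalating to an analyst")

remediate({"type": "malware_detected", "host": "build-agent-07"})
```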

I think the biggest part of the technology puzzle that will have to be solved from a security perspective, as agentic AI becomes more prevalent, is that there’ll be a lot more of what I call “agent brokerage” needed, or an agent clearing house, [asking] ‘Who are you and who allowed you to talk to me?’

Is it almost like a re-imagining of the API, in a sense?

Some of it will be API access. I don’t know if you saw Claude and Google are trying to create desktop agents, which is good for every app that is accessed from the desktop. If I can have a desktop agent, it’ll talk to DoorDash or talk to Spotify as you.

I started to use the Claude one, and I wanted to figure out a way to make it do my expenses for me, create my own little Expensify. But then it was just too scary. I was like, I can’t do this. It’s too much of a risk. But I’m sure some people will.

In this case, you’re going past the UI. You actually are going to a logical interface, as opposed to the UI interface. If there are six parameters you are going to pass to me, it doesn’t matter where on the screen they are. I can pass these parameters to you, it’s no longer a UI issue. I think the real challenge is going to be, how do we do a handshake? The Claude agent will have to eventually handshake with the expense software, and that’s where the debate is going to happen. That’s a security issue.

That’s the better way to do it, right?

Yes, but because your expense software has not been built to interact with agents, only with humans, it has to figure out an API authentication architecture. How do I authenticate that your agent has the same security clearance as you do? That’s a whole security debate. There’s authentication to get it technically working, and then there is permissioning, and then there is the emotion of the app owner.
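One way the “does your agent have the same clearance as you” question could be handled is with scoped delegation: the human grants an agent a subset of their own permissions, and the app checks both the delegation and the user’s underlying rights. This is a conceptual sketch with invented fields and names, not a description of any existing product’s handshake.

```python
from dataclasses import dataclass, field

@dataclass
class DelegationToken:
    """Hypothetical token a user issues to an agent acting on their behalf."""
    user: str
    agent: str
    scopes: set[str] = field(default_factory=set)  # subset of the user's permissions

USER_PERMISSIONS = {"nikesh": {"expenses:read", "expenses:submit", "payroll:read"}}

def authorize(token: DelegationToken, required_scope: str) -> bool:
    """The agent may only do what the user can do AND what the user delegated."""
    user_perms = USER_PERMISSIONS.get(token.user, set())
    return required_scope in token.scopes and required_scope in user_perms

expense_agent = DelegationToken(user="nikesh", agent="desktop-agent",
                                scopes={"expenses:read", "expenses:submit"})
print(authorize(expense_agent, "expenses:submit"))  # True
print(authorize(expense_agent, "payroll:read"))     # False -- never delegated
```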

With all the deepfakes and voice cloning out there, how do you guard against that? All the social engineering attacks?

That’s a whole different kettle of fish.

How do you stop it? I wonder if the phone companies need to do something, because that’s where all this stuff is traveling through.

The phone companies cannot really regulate phone calls. What they can do, which they don’t do, is if you are a corporate customer, I can stop you from going to a bad IP address. If all the traffic from a telco went through that kind of security control, could we stop consumers going through the [bad] website? If that’s true, then the phone companies should be doing that. The reason they don’t is because there’s a 30% cost overhead of inspecting everything in traffic, so you have to have a 30% bigger network. That’s expensive.
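The control he describes for corporate customers is essentially destination filtering: check outbound connections against a list of known-bad addresses and drop the ones that match. A toy version, with a made-up blocklist and assuming the inspection happens inline:

```python
import ipaddress

# Made-up blocklist; a real deployment would pull live threat-intelligence feeds.
BAD_NETWORKS = [ipaddress.ip_network("203.0.113.0/24"),
                ipaddress.ip_network("198.51.100.0/24")]

def allow_connection(dest_ip: str) -> bool:
    """Drop traffic headed to a known-bad address range (illustrative only)."""
    addr = ipaddress.ip_address(dest_ip)
    return not any(addr in net for net in BAD_NETWORKS)

print(allow_connection("203.0.113.17"))  # False -- blocked
print(allow_connection("8.8.8.8"))       # True  -- allowed
```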

I wonder if there’s some regulation that would prevent them from doing that.

I don’t think so. There’s no regulation preventing you from inspecting traffic for malware.

This is more on the nation state level, but there are ways to just hack into someone’s phone without them doing anything.

The problem is that a typical hack, generally, is an app-level hack, not a network-level hack. You can’t block Instagram downloading the update.
