March 29, 2023

Semafor Technology

Reed Albergotti

Hi, and welcome to Semafor Tech, a twice-weekly newsletter from Louise Matsakis and me that gives an inside look at the struggle for the future of the tech industry.

I recently stayed up late one night preparing to interview Vinod Khosla, reading everything he’s written on AI and other topics. He was so specific about some of his predictions that I gained a new kind of respect for the way his brain works.

It also made me a little more excited about the future, and a little more fearful. If he’s right about where we’ll be in a decade, there will be new technologies that will reshape humanity for the better. But we’ll also have to find a way to deal with change so rapid that it will no doubt be destabilizing.

On another AI front, Gina Chua, our executive editor, has been a great sounding board on the latest technology. Lately we’ve been going back and forth about how large language models might change spreadsheets. It resulted in an intriguing experiment she conducted to test the models’ ability to handle the chore of crunching numbers.

And Ben Smith, our editor-in-chief, tried out a new group chat feature that brings an AI chatbot into the conversation. Your group chats are about to get weird.

I spent an hour Tuesday hosting a Twitter Spaces conversation with Replit cofounder Amjad Masad. I hope to do more of these and encourage you all to join next time and ask questions. Follow me so you’ll be notified when they happen.

Housekeeping: This newsletter will send from a new email address starting next week. To make sure we make it to your inbox, add technology@semafor.com to your contacts.

Are you enjoying Semafor Tech? Help us spread the word!

Move Fast/Break Things

Reuters/Clay McLachlan

➚ MOVE FAST: AI partnerships. Google announced it has struck a deal with Replit, a software startup that uses AI to write code (the topic of Semafor’s first Twitter Spaces). Replit’s new Google-powered service will compete with a similar offering from Microsoft called Copilot, which was developed through its partnership with OpenAI.

➘ BREAK THINGS: AI porn. It’s easier than ever for people to generate non-consensual deepfake porn featuring a real person’s face with an AI-generated body. The material is advertised on Google and Discord and can be paid for with a Visa or Mastercard, reports NBC News.

Semafor Stat

6

The number of independent businesses Alibaba said it would divide itself into as part of a major restructuring. The new units will each have their own CEO and include cloud computing, Chinese retail, overseas retail, on-demand services such as food delivery, logistics, and media. The historic shakeup was announced after Alibaba co-founder Jack Ma appeared in China for the first time in a year.

Q&A

Sam Altman and Vinod Khosla

At nearly 70, Vinod Khosla is an elder statesman of Silicon Valley, but he has stayed on top of the most cutting-edge trends, including artificial intelligence.

When ChatGPT-maker OpenAI decided to switch from a nonprofit to a private enterprise in 2019, Khosla was the first venture capital investor, jumping at the opportunity to back the company that, as we reported last week, Elon Musk thought was going nowhere at the time. Now it’s the hottest company in the tech industry.

If you go back and read what Khosla wrote about artificial intelligence a decade ago, it sounds remarkably — even eerily — like what people are saying today.

That kind of foresight is why he has had staying power in the tech industry. He co-founded Sun Microsystems in 1982 (the company behind the Java programming language, which is still widely used today) and joined venture capital firm Kleiner Perkins in 1986, where he was an early backer of AMD, Juniper Networks, and Excite. He launched Khosla Ventures in 2004, where he has been a leader in cleantech investing and has scored home runs on Impossible Foods, Instacart, and DoorDash.

I spoke to him about a wide range of topics, including how he seemed to see our AI future so long ago. The following is an edited excerpt of our conversation.

Q: In 2014, you were talking about AI helping you with creativity and other things that seem obvious now, but at the time sounded a little crazy.

A: In January 2012 — I know because I tore my ACL skiing the day before Christmas — I started writing because I had nothing else to do. I wrote a blog called “Do We Need Doctors?” I wrote another one called “Do We Need Teachers?” That was my 25-year forecast and I still stick with it. I think it’ll happen a lot faster.

By 2014, it was clear to me most media would be generated by an AI. In our 2017 fundraising deck, we used the term “synthetic media” — which is now DALL-E.

Five years ago we invested in this company called Splash in Australia. This sounds ridiculous, especially five years ago, but the founder said to me, ‘I want a top 10 music hit, produced by me, composed by AI, the instruments by AI, sung by an AI. No humans touching the music.’

It sounds inconceivable, but I could see why AI would generate media. There was a term called style transfer. You take a photograph. Can I turn it into Picasso’s style? Could you do my portrait in the Mona Lisa style? This idea of transferring the style from an artist or a painter, it was highly probable it would evolve to more and more capability.

Q: Transformer models (the models used to create DALL-E and ChatGPT) hadn’t even been invented yet.

A: But new models were being invented all the time. Which ones would be a quantum jump? I didn’t know, but I did know the following: The best talent out of every university was going into AI. And AI was making quantum leaps.

Q: You were the first venture investor in OpenAI. You talk a lot about judging founders by how much they change and their ability to adapt and grow quickly. How has Sam Altman changed since you invested?

A: There’s a part of Sam people don’t know. He’s working on AI. He’s also working on fusion in a company called Helion. I’m working on Commonwealth Fusion, a competitor to Helion. We both agree and we’ve talked about it. AI is really, really important to the world and fusion’s really important.

The common part is we are working on important problems that would make the world a lot better if they worked. So in some sense, the probability of success almost doesn’t matter because if it’s successful, it’s transformative to the world in a very Nassim [Nicholas] Taleb [author of The Black Swan: The Impact of the Highly Improbable] sense. It’s 1x your money downside and huge upside that is societally transformative. And that’s what Sam cares about and that’s what I care about.

Q: But it was a non-profit right before that. What do you think of the criticism from some people like Elon Musk?

A: Sam was looking for other ways. He cared about the mission and what AI could do for humanity.

It was clear it was going to become expensive and you needed a lot more money. Google could afford to do it. And the Chinese could afford to do it.

Q: In other words, you saw this as a geopolitical issue, too?

A: I’ve always thought it’s a huge geopolitical issue. In 25 years, 80% of all jobs will be capable of being done by an AI.

This large transformation is the opportunity to free humanity from the need to work. People will work when they want to work on what they want to work on. That’s a utopian vision. But getting from here to that utopia is really disruptive and it is terrible to be the disrupted one. So you have to have empathy for whoever’s being disrupted. And the transition is very messy. It hurts people, hurts lives, destroys lives.

Reuters/Kevin Lamarque

Q: If China develops AI first, what is the world that we end up living in?

A: Who wins the technology race in 20 years is up for grabs. The Chinese Communist Party’s most recent five-year plan commits to dominance in AI.

These are asymmetric technologies. A country like Rwanda can’t afford to have their own AI. Even Brazil can’t afford to have their own AI. Whether Western values win the technology race and hence the economic race will determine what political philosophy is dominant on the planet.

It’s higher stakes than a war or a cyber war, and that bothers me. I do want us to be sensitive to the fact that a couple of these technologies — AI being a dominant one and I think fusion is like that — will determine whether in 2050 we are looking at Western values increasing in the world or Chinese values increasing.

They have a very different political philosophy. I’m not critiquing their philosophy. I just don’t want it to win.

Q: That’s a critique in itself, I guess. You’ve also talked about the need for fundamental research. Typically, that’s a government role. What role do VCs play versus the government in this race?

A: Fundamental research is important. Germany has some of the best research. Cambridge in the UK is great at research. In Japan, there’s really good research. They’ve not been able to commercialize it and turn it into societal impact [at the same rate that you see in the U.S.].

Pat Brown [Founder of Impossible Foods] took a bicycle from Stanford, came to our office, and said: “I want to change animal husbandry on the planet.” We worked with him on starting the whole thing that is now called plant proteins. That is the venture community’s traditional role.

Q: I sometimes wonder why people in Silicon Valley are not bigger cheerleaders of government-funded research. Instead, there’s sort of this anti-government vibe.

A: There’s a vocal constituency that is anti-government. It’s not the majority of tech. Just like Trump voters aren’t the majority of voters. But they’re louder. And the press likes to amplify either Trump or AOC [Alexandria Ocasio-Cortez], but not Mark Warner, who’s sitting in the middle sort of being sensible.

Fundamental research is key. Bob Mumgaard, who’s the founder and CEO of Commonwealth Fusion Systems, wasn’t an entrepreneur when I met him. He was a senior research fellow at the MIT Plasma Science and Fusion Center. This is why fundamental research is so critical — so people like me can come along and say, boy, I’d like to fund that. Let’s try and start a company.

Q: You’ve talked a lot in the past about what makes a good venture capitalist. It seems we got to really see what VCs were made of recently when Silicon Valley Bank was collapsing and a lot of companies didn’t know whether they’d be able to make payroll. Any takeaways?

A: Unless you have been through the experience, you have no empathy for founders. And you don’t know how to advise an entrepreneur because it’s almost like academic advice, if you don’t know the reality of being an entrepreneur and all the internal pressures.

Bring it to the [run on SVB] and the number of VCs who hid and said nothing.

If you had empathy, the answer was obvious. It didn’t cost you that much to be supportive versus, ‘oh, I can now get more, put more money in the last round’s price,’ which is so common. Jefferies coming out saying, we’ll buy your bank balance for 60 cents on the dollar. You know, that just galled the hell out of me.

Gina Chua

How math could revolutionize the world of chatbots

Nikolas Kokovlis/NurPhoto via Getty Images

THE NEWS

Microsoft recently announced it was integrating generative AI into its Office 365 suite of software. Putting it into Excel may be a game changer.

Such AI systems — large language models, or LLMs — are built to handle and manipulate language; they don’t, as a general rule, have calculators built into them, and that’s why they routinely botch simple math questions. That limits their capabilities: They can draft emails, summarize reports, and create presentations, but can’t take on tasks that require 100% accurate addition or subtraction — adding up a list of employee salaries, say.

Integrating GPT-4 into a spreadsheet program like Excel could change all of that. It would give the chatbot access to a system built entirely around manipulating numbers and other forms of structured data, like names, locations and so on, and which produces consistent, accurate results. 

Imagine if you could, in plain English, get Excel to create a budget for your organization and then ask it to explore what-if scenarios; or get it to build a spreadsheet of contacts and ask it to list only the people with addresses in a certain city; or ask it to look for patterns in the data — say, how salaries in a company for the same job title vary by location, gender, or ethnicity. None of that is something an LLM can really do on its own right now.
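Here is a minimal sketch of that division of labor in Python, with pandas standing in for the spreadsheet engine and a hypothetical ask_llm() helper standing in for a GPT-4 call. The function name, the prompt, and the canned translation are illustrative assumptions, not Microsoft's actual Copilot-in-Excel interface. The model's only job is to turn a plain-English question into a structured query; the calculation engine then does the arithmetic exactly.

```python
# Sketch: the language model only *translates* a plain-English request into a
# structured query; a deterministic engine (pandas, standing in for Excel)
# does the math. ask_llm() is a hypothetical stand-in for a GPT-4 call.
import pandas as pd

contacts = pd.DataFrame({
    "name":   ["Ana", "Bo", "Cy", "Di"],
    "city":   ["Lagos", "Dubai", "Lagos", "Beijing"],
    "salary": [95_000, 88_000, 102_000, 91_000],
})

def ask_llm(question: str, columns: list[str]) -> str:
    """Hypothetical GPT-4 call: translate a plain-English question about the
    table into a pandas expression. A real system would send the question and
    the schema to the model; here a canned answer is returned so the sketch
    runs on its own."""
    return "df[df['city'] == 'Lagos']['salary'].mean()"

question = "What is the average salary of everyone based in Lagos?"
expression = ask_llm(question, list(contacts.columns))

# The engine, not the model, evaluates the query, so the result is exact.
# (A real integration would validate or sandbox the generated expression
# rather than eval it directly.)
result = eval(expression, {"df": contacts})
print(expression)  # df[df['city'] == 'Lagos']['salary'].mean()
print(result)      # 98500.0
```

The pattern generalizes to the budget, contact-list, and salary examples above: the model never has to add or divide anything itself, so the usual LLM arithmetic errors drop out.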

I tried an exercise, albeit with a simple problem about dividing the check fairly at a large meal, and while Google’s Bard and Anthropic’s Claude both bombed, GPT-4 came back with some impressive results. And if that can translate to a spreadsheet, it could unlock a host of new and powerful capabilities.

One Good Chat

Wavelength is a new ChatGPT-powered group chat app with novel features like intuitive threading. Semafor Editor-in-Chief Ben Smith used Wavelength to message with the app’s creators about how they’re thinking about life after social media — and why we need to include AI bots in our group chats.

Parameters

Elon Musk, Apple co-founder Steve Wozniak, and a number of prominent artificial intelligence researchers signed an open letter calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” The signatories say the drastic measure is necessary to give AI labs time to “implement a set of shared safety protocols for advanced AI design.”

The letter was officially unveiled Wednesday, but had leaked online the night before. People began adding the names of other famous tech leaders, including Bill Gates and Sam Altman, which then quickly disappeared. Someone even added “Rahul Ligma,” the fictional fired Twitter engineer. Yann LeCun, the chief AI scientist at Meta, publicly denied signing the letter after his name appeared on it.

The organization behind the letter, The Future of Life Institute, says its goal is to “steer transformative technologies away from extreme, large-scale risks and towards benefiting life.” The “longtermist” non-profit was co-founded by Jaan Tallinn, a wealthy technologist who helped develop Skype.

While the letter contains some good ideas, it’s unlikely to accomplish much. The biggest and most powerful tech companies in the world are already locked in a race to deploy new AI tech as fast as possible, and that probably won’t change any time soon.

— Louise

How Are We Doing?

Are you enjoying Semafor Tech? The more people read us, the better we’ll get. So please share it with your family, friends and colleagues to get those network effects rolling.

And hey, we can’t inform you on what’s happening in tech from inside your spam folder. Be sure to add reed.albergotti@semafor.com (you can always reach me by replying to these emails) and lmatsakis@semafor.com to your contacts. In Gmail, drag this newsletter over to your ‘Primary’ tab.

Thanks for reading.

Want more Semafor? Explore all our newsletters at semafor.com/newsletters

— Reed and Louise
