Semafor Technology
February 22, 2023

Louise Matsakis

Hi, and welcome to Semafor Tech, a twice-weekly newsletter from Reed Albergotti and me. Artificial intelligence is having a moment, but the majority of the resources in the sector are being hogged by Big Tech, argue two Stanford researchers in today’s guest column.

Reed is on vacation this week, which means I get to share with you a long-distance kissing device from China, an unsolved ChatGPT mystery, and my take on Meta’s new subscription service. Plus, I text with a law professor about an important case underway in the U.S. Supreme Court.

And lastly, we would be grateful if you told us what you think about this newsletter in our new reader survey. Your feedback will help us focus on what you want to read about the most.

Move Fast/Break Things

➚ MOVE FAST: Remote surgery. A woman in Xinjiang became the first person in China to have her gallbladder removed via remote surgery. The procedure was carried out by a doctor over 2,000 miles away using “a four-arm laparoscopic robot,” demonstrating how high-speed internet and robotics can help bring healthcare to underserved areas.

➘ BREAK THINGS: Remote kissing. An inventor from a university in China’s Jiangsu province unveiled a strange silicone device he claims can facilitate remote kissing for long-distance couples. Not only is the smartphone accessory weird, it’s also unoriginal: London researchers debuted a similarly wacky “Kissenger gadget” back in 2016.

Semafor Stat

The number of betting and lending apps the Indian government recently banned, largely over their links to China. Since 2020, New Delhi has blocked access to hundreds of Chinese-owned smartphone apps, most notably TikTok. This time, however, several Indian companies got caught in the fray, and the Ministry of Electronics and Information Technology reportedly had to ask Google to unblock them again.

Guest Column

ChatGPT shows the U.S. government needs to step up on AI

Russell Wald is Managing Director for Policy and Society, and Jennifer King is the Privacy and Data Policy Fellow, both at the Stanford Institute for Human-Centered AI (HAI). They are co-authors of Building a National AI Research Resource: A Blueprint for the National Research Cloud.

We’re in the midst of a public awakening to the power of generative artificial intelligence.

In what feels like the span of a few weeks, conversations about the transformative technology of ChatGPT and other generative AI applications have moved from conference rooms to dining rooms. Already, it is upending some of our most basic institutions and causing whole sectors like education to consider how to regulate its use.

Frankly, we’re concerned.

In the face of this, it’s time for the U.S. government to take the necessary steps to secure our nation and ensure we’re building AI responsibly and in ways that can benefit all people.

Even though the average person now has access to these powerful tools, the advancement itself really represents Big Tech’s AI moment. For academia, the sector that invented AI and gave society the internet, a similar moment is impossible under the current circumstances. That’s because, at present, only the biggest commercial players have access to the computing power and datasets necessary to conduct the research and development that will advance AI.

In the past few years, the scale and scope of AI models have achieved such immense complexity that they require a level of computing power inaccessible to the public sector. If we want generative AI applications to advance fairly, equitably, and in a manner that positively impacts society, we need academia and civil society to have a seat at the table.

Democratizing access to AI is a good thing. We support making AI systems more transparent and increasing public access. But such a step must be taken responsibly and safely, and be driven by more than a few large players in private industry, or worse yet, by hostile nations.

That’s why this is the U.S. government’s moment to step up, govern, and invest in creating an infrastructure that expands access to the tools necessary to perform R&D beyond Big Tech.

To be clear, Washington has taken important first steps to advance the use of AI: increased funding for AI R&D, support for American manufacturing through the CHIPS Act, and coordination of AI policy through the National Artificial Intelligence Initiative. The White House also released an AI Bill of Rights last year, but it lacks the force of law.

While these are laudable steps, they are simply not enough.


Just as it has invested in transformative technologies like particle accelerators and supercomputers, the government needs to take an active role in shaping the future of AI and its impact on our nation and our allies. Short of this, the economic, cultural, and physical security of our nation will be subject to the whims of other nations, quite possibly ones that do not share our democratic values. Ultimately, the American people will be left woefully unprepared for the reality — or the alternative reality powered by misinformation — that these technologies are creating.

Among the risks to our national security is the race by nations around the world to accelerate the development and use of AI. The CHIPS Act was a step in the right direction, limiting authoritarian countries’ access to certain integral hardware, but that’s only part of the answer. We must also focus on accelerating our own abilities and vastly expanding our nation’s R&D capabilities.

The good news is there is now a roadmap for how to do so. The National AI Research Resource (NAIRR) Task Force, a federal advisory committee established by the National AI Initiative Act of 2020 and made up of members from government, academia, and private organizations, has released its final report.

In the report, the task force outlines how to create critical new national research infrastructure that will make essential resources available to AI researchers and students. This includes access to computing power, high-quality government data, educational tools, and user support, all of which can usher in an era in which America explores and manages the possibilities of AI unencumbered by short timelines and a focus on profit.

We believe researchers should have access to government data, but in a tiered system dependent on how sensitive the data is. For example, National Oceanic and Atmospheric Administration data on hurricane analysis would be on the low end, while military veterans’ health data would require more vetting.
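As a rough illustration, here is a minimal Python sketch of what tiered access could look like in practice; the tier names, dataset labels, and access rule are our own hypothetical stand-ins, not a design specified in the NAIRR report.

from enum import IntEnum

class Tier(IntEnum):
    # Higher values denote more sensitive data requiring more vetting.
    OPEN = 0        # e.g., NOAA hurricane analysis data
    CONTROLLED = 1  # requires institutional sign-off
    RESTRICTED = 2  # e.g., veterans' health data; full vetting required

# Hypothetical catalog mapping each dataset to its sensitivity tier.
DATASETS = {
    "noaa_hurricane_tracks": Tier.OPEN,
    "veterans_health_records": Tier.RESTRICTED,
}

def can_access(clearance: Tier, dataset: str) -> bool:
    # A researcher's vetted clearance must meet or exceed the dataset's tier.
    return clearance >= DATASETS[dataset]

# A newly registered researcher can pull open data but not restricted data.
assert can_access(Tier.OPEN, "noaa_hurricane_tracks")
assert not can_access(Tier.OPEN, "veterans_health_records")

The point of the tiers is simply that vetting scales with sensitivity: open data flows freely, while the most sensitive records demand the most review.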

How exactly researchers will access this resource remains an open question. In a separate report we jointly authored, we advocate a hybrid model that relies on net-new government computing power and subsidized cloud computing options from industry.

We are urging Congress to pass legislation and appropriate the $2.6B over six years recommended by the NAIRR Task Force.

We think of the life-changing research that scientists in healthcare can pursue, and its impact on humanity. We think of the insights we can all gain on everything from potential cures for diseases to natural disaster mitigation. And we think of the protection we can offer our nation by staying at the forefront of AI’s potential.

One Good Text

The U.S. Supreme Court began hearing oral arguments yesterday in Gonzalez v. Google, a high-stakes case that could put Section 230, the landmark 1996 provision shielding internet firms from liability for what their users post, in jeopardy.

So far, the justices appear to be less than convinced by the arguments for why Google should be held responsible for ISIS propaganda shared on YouTube. To make sense of the hearing, we contacted Evelyn Douek, a scholar studying the private and public regulation of online speech.

Douek is an assistant professor at Stanford Law School, a senior research fellow at Columbia University’s Knight First Amendment Institute, and the host of the Moderated Content podcast.

China Window

There’s a major mystery about ChatGPT that remains unsolved. Despite being trained largely on English-language material and human feedback, the chatbot is somehow exceptionally good at producing text in Chinese. A resident of the Chinese city of Hangzhou, for example, recently used ChatGPT to generate a phony government press release convincing enough to go viral.

For now, it will likely be difficult for outside researchers to figure out how the model achieved that feat, since ChatGPT’s creator, OpenAI, hasn’t publicly released the training data it used to build the program. But even its own employees, including Jan Leike, the leader of OpenAI’s alignment team, can’t figure out what’s going on.

Louise

Coming Up

Journalist Eric Newcomer picked an opportune time to host his first conference next month, the Cerebral Valley AI Summit. Speakers include Stability AI CEO Emad Mostaque, Jasper AI President Shane Orlick, and Quora CEO Adam D’Angelo. Newcomer, who struck out on his own after leaving Bloomberg a couple of years ago, said he thought of organizing the event while planning his bachelor party (as one does). About 200 participants are expected at the San Francisco venue.

— Reed

Obsessions


Meta has declared that not only are its users the product, they now get to pay for it, too. The social media firm unveiled a new subscription service, Meta Verified, which gives users a blue checkmark and direct access to customer service representatives for $11.99 a month (or $14.99 on iOS devices, thanks to Apple’s 30% fee for in-app purchases). On one hand, finding alternative revenue streams is a good idea for any business.

But Meta’s subscription service is peculiar for a few reasons. The people most likely to benefit from it are influencers and celebrities who are worried about impersonation or want to raise their public profiles. For over a decade, companies like Meta have been trying to court this group of people by paying them — not the other way around. The content they post is what keeps people coming back to Facebook and Instagram, and keeps them viewing the ads in their feeds.

It’s also interesting that Meta chose to monetize verification and customer service in particular, two areas where creators have long complained the company has fallen short. But instead of fixing these issues in earnest, Meta has rebranded the solutions as premium features. Elon Musk is trying a similar strategy with Twitter, and so far, the Twitter Blue subscription service hasn’t been a big success.

But there’s a chance that Meta, with its much larger user base, may be able to attract more subscribers. The risk is that by turning social media into a pay-to-play game, Meta could wind up alienating the loyal creators who have long been sharing content for free.

Louise

How Are We Doing?

Are you enjoying Semafor Tech? The more people read us, the better we’ll get. So please share it with your family, friends and colleagues to get those network effects rolling.

And hey, we can’t inform you on what’s happening in tech from inside your spam folder. Be sure to add reed.albergotti@semafor.com (you can always reach me by replying to these emails) and lmatsakis@semafor.com to your contacts. In Gmail, drag this newsletter over to your ‘Primary’ tab.

Thanks for reading.

Want more Semafor? Explore all our newsletters at semafor.com/newsletters

— Reed and Louise
