October 20, 2023
Semafor Technology

Louise Matsakis

Hi, and welcome back to Semafor Tech. Happy Friday! It’s been a big week for artificial intelligence policy. The United Kingdom is gearing up to host a high-profile summit on AI safety in two weeks that is reportedly being snubbed by some European leaders, though officials from China are still expected to attend.

Speaking of Beijing, authorities there put out a strikingly detailed list of draft guidelines outlining the government’s views on the safety of generative AI models. (Hint: It wants to ensure they’re sufficiently censored.) Here in the United States, I tracked the rise of new think tanks trying to persuade lawmakers of what they say are grave dangers posed by AI.

The groups have ties to effective altruism, a charity movement that has poured hundreds of millions of dollars into preventing an AI-related catastrophe. I was struck by how many of these organizations, which have confusingly similar acronyms, had seemingly popped up overnight. For more, check out the story below.

Move Fast/Break Things

Corbis via Getty Images/Stephane Cardinale

➚ MOVE FAST: Big screen. The highly anticipated Apple-backed film Killers of the Flower Moon, directed by Martin Scorsese, opens today. It could help distract from the news that Jon Stewart’s Apple show has abruptly ended, reportedly after his team clashed with the tech giant over content related to China and artificial intelligence.

➘ BREAK THINGS: Small screen. Apple CEO Tim Cook is making his second trip to China in about seven months as iPhone sales slow there. On top of the tepid Chinese economy, Apple is also facing growing regulatory and political pressure amid U.S.-China tensions.

Artificial Flavor

Unsplash/Ross Sneddon

When Hollywood actors went on strike in July, some began participating in an “emotion study” to help build AI training datasets for Meta and a London-based company called Realeyes. For $150 an hour, the actors were asked to do things like “share a sad story” or “tell us something that makes you angry,” according to MIT Technology Review.

A job listing for the study noted it didn’t count as “struck work,” meaning it didn’t directly interfere with the strike, but it still highlights how actors are increasingly being recruited to help create AI tools that might one day replace them. Background actors, for example, are now being asked to participate in body scans that could later be used to depict crowd scenes. How studios should use AI remains a point of contention as the actors’ strike moves into its fourth month.

Louise Matsakis

The new think tanks influencing AI policy in Washington

THE SCOOP

Several well-monied think tanks focusing on artificial intelligence policy have sprung up in Washington, D.C. in recent months, with most linked to the billionaire-backed effective altruism (EA) movement that has made preventing an AI apocalypse one of its top priorities.

Funded by people like Facebook co-founder Dustin Moskovitz, their goal is to influence how U.S. lawmakers regulate AI, which has become one of the hottest topics on Capitol Hill since the release of ChatGPT last year. Some of the groups are pushing for limits on the development of advanced AI models or increased restrictions on semiconductor exports to China.

One previously unreported group was co-founded by Eric Gastfriend, an entrepreneur who, together with his father, runs a telehealth startup for addiction treatment. Americans for Responsible Innovation (ARI) plans to “become one of the major players influencing AI policy,” according to a job listing. Gastfriend told Semafor he is entirely self-funding the project.

Another organization, the Institute for AI Policy and Strategy (IAPS), began publishing research late last month and aims to reduce risks “related to the development & deployment of frontier AI systems.” It’s being funded by Rethink Priorities, an effective altruism-linked think tank that received $2.7 million last year to study AI governance.

That money came from Open Philanthropy, a prolific grant-making organization primarily funded by Moskovitz and his wife Cari Tuna. Open Philanthropy has spent more than $330 million to prevent harms from future AI models, making it one of the most prominent financial backers of technology policy work in Washington and elsewhere. The Center for AI Safety (CAIS), another group funded by Open Philanthropy, recently registered its first federal lobbyist, according to a public filing.

IAPS has been coordinating with at least one organization that has also received funding from Open Philanthropy, the prominent think tank Center for a New American Security (CNAS), according to a person familiar with the matter. CNAS and IAPS did not return requests for comment.

Semafor/Al Lucca

IAPS is not to be confused with the similarly named Artificial Intelligence Policy Institute (AIPI), an organization launched in August by 30-year-old serial entrepreneur Daniel Colson. The group is also aiming to find “political solutions” to avoid potential catastrophic risks from AI, according to its website.

AIPI said it’s already met with two dozen lawmakers and is planning to expand into formal lobbying soon. Over the last two months, research and polling published by the group have been picked up by a plethora of news outlets, including Axios and Vox.

Colson, who previously founded a cryptocurrency startup as well as a company for finding personal assistants, said that AIPI was initially funded by anonymous donors from the tech and finance industries and is continuing to raise money.

“The center of our focus is on what AI lab leaders call the development of superintelligence,” Colson told Semafor in August. “What happens when you take GPT-4 and scale it up by a billion?” That kind of powerful AI model, he argued, could destabilize the world if not managed carefully.

For the View From China and the rest of the story, read here.  →

Evidence

One of the biggest debates in AI right now is whether companies should make their models open source, or fully available to the public for free. But even when tech giants go the open route, they often leave out crucial information, like what data they used to train their programs. The Foundation Model Transparency Index is a new project from researchers at MIT, Stanford, and Princeton that measures how transparent AI companies are really being. It looks at 100 different indicators, such as whether companies disclose the wages they paid to their AI trainers or the computing resources used to develop a model. The results are then compiled into a score from 0 to 100.

Stanford CRFM
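
To make the compilation concrete, here’s a minimal sketch of how an indicator-based score like this could be computed. The indicator names and the unweighted percentage scoring are illustrative assumptions, not the index’s actual rubric, which is more detailed.

```python
# Hypothetical sketch of an indicator-based transparency score.
# The indicator names and equal weighting below are assumptions for
# illustration, not the Foundation Model Transparency Index's rubric.

INDICATORS = [
    "training_data_disclosed",
    "labor_wages_disclosed",
    "compute_disclosed",
    # ...the real index defines 100 such indicators
]

def transparency_score(disclosures: dict[str, bool]) -> float:
    """Return the percentage of indicators satisfied, from 0 to 100."""
    met = sum(1 for name in INDICATORS if disclosures.get(name, False))
    return 100 * met / len(INDICATORS)

# A company that discloses wages and compute, but not training data,
# scores about 66.7 on this toy three-indicator version.
print(transparency_score({
    "training_data_disclosed": False,
    "labor_wages_disclosed": True,
    "compute_disclosed": True,
}))
```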
Quotable

“The debate on existential risk [from AI] is very premature until we have a design for a system that can even rival a cat in terms of learning capabilities, which we don’t have at the moment.”

— Meta Chief AI Scientist Yann LeCun to the Financial Times

What We’re Tracking

Korean Central News Agency via Reuters
  • U.S. companies unwittingly funneled millions of dollars to North Korea’s ballistic missile program through wages they paid to thousands of purported remote IT workers, according to the FBI and Justice Department. North Koreans allegedly used false identities to obtain the contract roles, and sometimes paid Americans to use their Wi-Fi connections to make it seem like they were in the U.S.
  • Universal Music Group and two other music publishers sued Anthropic in a Tennessee court for allegedly using copyrighted song lyrics to train its chatbot Claude. The case is part of a wave of similar lawsuits filed against OpenAI, Stability AI, and Midjourney. If judges in these cases reach conflicting rulings on whether training AI models counts as fair use, the issue could ultimately be decided by the Supreme Court.
Obsessions

A big question looming over generative AI is what, exactly, the application layer will look like. I read Satya Nadella’s annual letter to shareholders, published yesterday on LinkedIn, and I think we have an idea.

In the corporate setting, it will look a lot like Microsoft. The company is weaving generative AI into everything from Office to Teams to Bing. This isn’t a secret, and we’ve been following the changes all year. But when you see them all listed in one place, it’s kind of astonishing. The tools Microsoft is building will be enough AI for a lot of people.

That raises another important question: Is there any AI oxygen left for new entrants, for disruption? I think it will happen, but it might be a while. Foundation models are so big that even Microsoft has to work in overdrive just to meet the demand on its data centers. That’s a monumental effort, and monumental efforts usually benefit the big players.

At the same time, two things are happening. First, AI researchers all over the world are learning from these gigantic models and figuring out ways to make them smaller, more customized, and more efficient. Second, chip makers are working on ways to run those models even faster. Some of these efforts are happening at Microsoft itself.

It’s those kinds of pushes that will really put the power of AI into anyone’s hands. It may not happen for a year or two. But that’s when a college kid will be able to start an AI-enabled company with just a laptop.

— Reed

Hot On Semafor
  • A crackdown on greenwashing is coming. Top officials in both the U.K. and Australia separately told Semafor that they were readying new legal frameworks and punishments for companies found guilty of greenwashing.
  • Six months after a panic that killed four of them and threatened others, regional banks still aren’t in the clear — and their problems are coming for giant lenders.
  • Israeli and American officials are alarmed that China hasn’t condemned Hamas, seeing in it an attempt by Beijing to use the conflict to isolate the U.S. from its Arab and regional allies.