 
February 9, 2024

Technology

 
Reed Albergotti

Hi, and welcome back to Semafor Tech.

The AI industry, and those who want to regulate it, made some big moves this week. Google launched the much-anticipated Gemini Ultra while the U.S. Commerce Department unveiled a new AI safety consortium.

But for all the excitement, large language models still haven’t gone from the experimental phase to the mission-critical phase, where businesses can’t get by without them. That will likely happen; the question is when.

What’s becoming clear to a lot of people I talk to is that it won’t happen without some new ideas on how to improve the reasoning ability of AI models. Right now, LLMs are knowledgeable, but they aren’t intelligent. Therefore, we can’t trust them to do much of anything that’s useful.

At the same time, regulators seem to be looking beyond the current capabilities, too. They’re asking companies employing AI tools to offer ways to peer into the models so they can be tested for things like bias and toxicity. But the technological infrastructure for that doesn’t yet exist.

Today, I highlighted one company that’s trying to build some of that foundation in a pretty novel way, by employing cryptography and the blockchain. While that fact alone is enough for some people to stop reading, I’ve found it interesting to see crypto technology become effective during its supposed “winter.” Read below for more.

Move Fast/Break Things

[Photo: Beata Zawrzel/NurPhoto via Getty Images]

➚ MOVE FAST: Rebrand. Google retired the name of its AI chatbot, Bard, and replaced it with Gemini, the name of the underlying model. It comes with a $19.99-a-month subscription to compete with OpenAI. One Google advantage is its Android phones, which will offer Gemini as the default assistant on the devices.

➘ BREAK THINGS: Repeat. U.S. lawmakers are once again trying to rein in TikTok, this time by urging the Commerce Department to place parent company ByteDance on an export blacklist. After four years of government efforts to ban the video platform or force its owners to sell, TikTok has only grown in popularity while Washington continues to go back to failed playbooks.

Artificial Flavor

We’ve covered how AI is shaking up drug discovery. A new study shows how that technology could help millions of people who suffer from one of the most common mental health issues: depression.

Patient reactions to antidepressant medication vary widely, which means it can take a while to find a treatment that works. The research, by Amsterdam UMC and Radboud University Medical Center, used AI algorithms that combined multimodal MRI data with clinical information to better predict whether a given antidepressant would work for a patient.

That sped up the evaluation of how well a treatment was working by up to eight weeks, compared with traditional methods like brain scans. The study’s results could help create ways to personalize depression medications better, and faster.
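The study’s exact models aren’t detailed here, but the pattern it describes, fusing imaging-derived features with clinical variables to predict treatment response, is a standard one. Below is a minimal sketch; the feature counts, synthetic data, and logistic-regression choice are hypothetical stand-ins, not the researchers’ actual pipeline.

```python
# Minimal sketch of multimodal fusion for treatment-response prediction.
# NOT the Amsterdam UMC / Radboud pipeline -- just the generic pattern:
# concatenate MRI-derived features with clinical features and train a
# classifier. All data below is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients = 200

# Hypothetical features extracted from structural/functional MRI scans.
mri_features = rng.normal(size=(n_patients, 50))
# Hypothetical clinical features (age, symptom scores, history, ...).
clinical_features = rng.normal(size=(n_patients, 10))
# Label: did the patient respond to the antidepressant? (synthetic)
responded = rng.integers(0, 2, size=n_patients)

# "Early fusion": concatenate the modalities into one feature vector.
X = np.hstack([mri_features, clinical_features])

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, responded, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f}")
```

Early fusion by simple concatenation is only one design; an alternative is to train a model per modality and combine their predictions.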

Reed Albergotti

Accenture tests tool for LLM compliance

THE SCOOP

As global regulators increasingly scrutinize artificial intelligence, massive consulting firm Accenture is testing a startup’s technology in what could become a standard method of complying with rules to ensure responsible and safe innovation.

Los Angeles-based EQTY Lab created a new method, employing cryptography and blockchain, to track the origins and characteristics of large language models, and provide transparency on their inner workings, so companies and regulators can easily inspect them. The idea is to automatically examine the model as it is being created, rather than focus on its output.

“What we’re doing is creating certainty,” that a model works the way it was intended, said EQTY Lab co-founder Jonathan Dotan. “Right now, we’ve done the first step, which is proving this is possible.”

EQTY Lab’s AI Integrity Suite is being evaluated in Accenture’s AI lab in Brussels to see if the software could be scaled to serve the firm’s thousands of clients, many of whom are in the Fortune 100.

The work is being done as countries propose ways to address the promise and risks of AI. On Thursday, the U.S. Commerce Department announced a consortium of more than 200 tech companies, academics and researchers that will advise the new government AI Safety Institute, which will develop “red team” testing standards and other guidelines directed by a White House executive order on AI last year.

“Responsible AI is absolutely critical. It’s on top of everyone’s mind,” said Bryan Rich, senior managing director for Accenture’s Global AI Practice. “But how do you go from talking about responsible AI to actually delivering it?”

[Image: EQTY Lab]

REED’S VIEW

The White House AI executive order and other regulations, including those proposed in Europe, make it seem like watchdogs see AI models as the kinds of products that can be inspected and stamped as either safe or unsafe, like a car or a consumer gadget.

In reality, large language models are like a perpetual stew, with ingredients from many places constantly thrown in together.

Also, the idea of using just one model is becoming antiquated. Increasingly, a single AI product employs multiple models, as developers glom together specially trained ones to carry out specific tasks.

We’re likely already approaching a point where it will be difficult and time-consuming for companies to vet every AI model they use.

That’s why, in theory, EQTY’s idea makes sense: A cryptographic signature would allow developers to retain trade secrets while simultaneously offering some transparency into how the models were put together.
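EQTY hasn’t published its implementation details here, so treat the following as a generic illustration of the primitive, not the company’s method: hash each artifact that went into a model, record the hashes in a manifest, and sign the manifest. Anyone holding the public key can verify what the developer committed to, without the developer ever publishing the training data itself. The artifact names are hypothetical; the sketch uses the open-source `cryptography` library.

```python
# Minimal sketch of a signed model-provenance manifest. A generic
# illustration, not EQTY Lab's actual implementation; artifact names
# are hypothetical.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Pretend these bytes are the real artifacts on disk.
artifacts = {
    "training_data.tar": b"...training corpus...",
    "train_config.yaml": b"...hyperparameters...",
    "weights.safetensors": b"...model weights...",
}

# The manifest commits to each artifact by hash: the developer shows
# *that* something specific was used, and can later prove what, without
# publishing the artifact itself.
manifest = {
    "model": "example-llm-v1",
    "artifacts": {name: sha256_hex(blob) for name, blob in artifacts.items()},
}
manifest_bytes = json.dumps(manifest, sort_keys=True).encode()

private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(manifest_bytes)

# A regulator (or customer) holding the public key can verify that the
# manifest is authentic; verify() raises InvalidSignature otherwise.
public_key = private_key.public_key()
public_key.verify(signature, manifest_bytes)
print("manifest verified:", manifest["artifacts"])
```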

For instance, Meta’s Llama 2 model does not disclose the contents of the data that was used to train it. That’s led to tension as the company faces lawsuits alleging it violated copyright law by including protected work in its training data. Let’s say that Meta, in a purely hypothetical scenario, wanted to prove that a specific set of copyrighted work was not included in the data. EQTY says it is developing a way that Meta could prove that without having to divulge the entire training set.
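EQTY hasn’t said how such a proof would work. One standard building block is a Merkle-tree commitment: publish only a single root hash over the training set, then reveal individual documents, with short proofs, on demand. A true proof of non-inclusion needs more structure, such as a tree sorted by document hash or a zero-knowledge proof; the hypothetical sketch below shows just the commitment and a membership proof.

```python
# Hypothetical sketch: commit to a training set with a Merkle tree so a
# single document's inclusion can later be proven without revealing the
# rest of the corpus. (Real non-inclusion proofs need a sorted tree or
# similar; this shows only commitment + membership proof.)
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Sibling hashes from leaf to root; bool = sibling is on the right."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib > index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    node = h(leaf)
    for sibling, is_right in proof:
        node = h(node + sibling) if is_right else h(sibling + node)
    return node == root

corpus = [b"doc-a", b"doc-b", b"doc-c", b"doc-d"]  # hypothetical documents
root = merkle_root(corpus)                          # the only thing published
proof = merkle_proof(corpus, 2)
print(verify(b"doc-c", proof, root))                # True: doc-c was in the set
```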

Read here for a Room for Disagreement on whether tracking AI models makes sense. →

Friends of Semafor

Garbage Day, a Webby Award-winning newsletter from tech reporter Ryan Broderick, delivers a curated dose of internet gems three times a week. Memes, viral videos, internet drama, AI updates — it’s all packed into one email. Make your inbox a little less professional and remember what it was like to have fun online again. You can sign up here.

Semafor Stat

$2 billion: the annualized revenue OpenAI was generating as of December, the Financial Times reported. The startup believes it can more than double that figure next year. Still, that pales in comparison to the trillions CEO Sam Altman is seeking for an initiative to expand global chip production capacity.

What We’re Tracking

Yesterday, U.S. Commerce Secretary Gina Raimondo told Semafor’s Morgan Chalfant that she is concerned about the risks of AI being used to disrupt U.S. elections this year. She made those remarks after unveiling a new consortium that will advise the agency on AI standards.

It’s an issue that’s on the minds of AI companies, too. Midjourney is considering banning users from creating political images, particularly ones of Joe Biden or Donald Trump. Earlier this week, Meta said it would label AI-generated images made with technology from Google, OpenAI, Microsoft, Adobe, Midjourney, and others. Content created with Meta’s own services already comes with an “Imagined with AI” tag.

Meanwhile, Microsoft reported this week that, for the first time, it had detected an Iranian influence operation in which AI played a key role in the messaging. It said that in December, Iran interrupted streaming TV services reaching audiences in the UAE, the UK, and Canada, replacing programming with a fake video featuring an AI-generated news anchor talking about the fighting in Gaza.

[Screenshot via Microsoft]

With dozens of elections taking place across the globe this year and geopolitical tensions rising on multiple fronts, AI will be an easy target for blame, whether it was the actual culprit or not. That could lead to overreactions and a game of whack-a-mole to solve problems that deserve more thoughtful deliberation.
