Lieu says the AI industry can be divided into two big categories: things Congress needs to regulate and things that fall outside its purview. The former includes self-driving cars and AI-powered nuclear weapons, while glitchy consumer gadgets are of less concern. Lieu and Republican Rep. Jay Obernolte, who spoke together at the event, are among the handful of members of Congress with technical expertise in AI. Both California lawmakers have made the technology a focus of their legislative agendas over the past year: Lieu is pushing Congress to rein in law enforcement’s use of facial recognition, while Obernolte is working on a long-anticipated national privacy law.

Walker was responding to a question about OpenAI’s board shakeup last month. Central to the drama was Microsoft’s reliance on the startup for its most capable AI models. Google has invested in OpenAI competitor Anthropic, but that’s a different situation, led by the company’s cloud division: the contract requires much of the investment to flow back to Google in the form of Anthropic’s spending on cloud computing. That’s good for the cloud unit’s top line, but the real benefit is that training foundation models will help Google improve its in-house chips, called Tensor Processing Units. It’s clear that Google DeepMind is going to power the company’s advanced AI products, which allowed Walker to gloat a bit. If this were the NFL, Team Microsoft would put Walker’s quote up in the locker room.

Allen-Kyle was among several speakers at the event who encouraged caution on AI, given that algorithms have repeatedly been shown to be biased against people of color and women in criminal justice outcomes, access to loans, and other areas. Dr. Namandjé Bumpus, chief scientist for the U.S. Food and Drug Administration, noted that while AI can help advance drug discovery, it can also amplify the effects of inadequate data, leading to shortfalls in treatment for certain people.

Anthony Aguirre, executive director of the Future of Life Institute, outlined an extreme scenario in which humans have allowed AI to decide whether to deploy nuclear weapons. His group organized a public letter earlier this year, signed by Elon Musk and other tech luminaries, calling for a pause in AI research. Aguirre said he stands by that view and has been heartened by the reaction from policymakers.

Himes’ remarks come amid a battle in Congress over how much lawmakers should restrict U.S. investment in China, with hawks calling for a sweeping approach that goes beyond a recent White House executive order on the issue. Himes argues that AI ventures in China will get funded with or without U.S. money, so American investments that come with board seats and other influence at startups there would actually be helpful rather than harmful. His may be a minority view, though Republican Congressman Patrick McHenry has expressed similar thoughts. Himes is in the majority in railing against the Pentagon’s slow-moving acquisition process, which he says hurts American interests as AI plays a growing role in national security offense and defense. “Everything is software now, and none of these entities have moved into a world where they need to understand … that developing software is a radically different proposition than building a tank,” Himes said.

The transition to generative AI, happening across all industries, is going to lead to massive increases in data center energy usage. But it’s not all bad news. As Dr.
Richmond noted, artificial intelligence is also helping to mitigate climate change. The technology can lead to discoveries of new materials that could help build better batteries and solar panels, and potentially even advance nuclear fusion. The Department of Energy is in the middle of all of those efforts.

Companies are racing to adopt AI, and Centoni says the effort is often being driven by CEOs and other top executives. But when Cisco recently surveyed 8,000 business leaders, only a small fraction said they felt fully ready to introduce AI-powered tools into their organizations. That’s because many forms of practical AI software are only now entering the market, and it will take time for companies to figure out the most effective ways to incorporate them. Over the last year, OpenAI and other firms have demonstrated that general-purpose large language models have tremendous potential, but it will be up to the rest of the tech industry to determine how best to apply them.