Feb 22, 2023, 12:00pm EST

Russell Wald and Jennifer King: ChatGPT shows the U.S. government needs to step up on AI



The Writers

Russell Wald is Managing Director for Policy and Society and Jennifer King is the Privacy and Data Policy Fellow, both at the Stanford Institute for Human-Centered AI (HAI). They are co-authors of Building a National AI Research Resource: A Blueprint for the National Research Cloud.


Guest Column

We’re in the midst of a public awakening to the power of generative artificial intelligence.

In what feels like the span of a few weeks, conversations about the transformative technology of ChatGPT and other generative AI applications have moved from conference rooms to dining rooms. Already, it is upending some of our most basic institutions and causing whole sectors like education to consider how to regulate its use.

Frankly, we’re concerned.

In the face of this, it’s time for the U.S. government to take the necessary steps to secure our nation and ensure we’re building AI responsibly and in ways that can benefit all people.


Although the average person now has access to these powerful tools, the advancement itself represents Big Tech's AI moment. For academia, the sector that invented AI and gave society the internet, a similar moment is impossible under current circumstances, because only the biggest commercial players have access to the computing power and datasets necessary to conduct the research and development that will advance AI.

In the past few years, the scale and scope of AI models have achieved such immense complexity that they require a level of computing power inaccessible to the public sector. If we want generative AI applications to advance fairly, equitably, and in a manner that positively impacts society, we need academia and civil society to have a seat at the table.

Democratizing access to AI is a good thing. We support making AI systems more transparent and increasing public access. But such a step must be taken responsibly and safely, and be driven by more than a few large players in private industry, or worse yet, by hostile nations.

That’s why this is the U.S. government’s moment to step up, govern and invest in creating an infrastructure to expand access to the tools necessary to perform R&D beyond Big Tech.

To be clear, Washington has taken important first steps to advance the use of AI: increased funding for AI R&D, support for American manufacturing through the CHIPS Act, and coordination of AI policy through the National Artificial Intelligence Initiative. The White House also released an AI Bill of Rights last year, but it lacks the power of law.


While these are laudable steps, they are simply not enough.


Just as it has invested in other transformative technologies, like particle accelerators and supercomputers, the government needs to take an active role in shaping the future of AI and its impact on our nation and our allies. Short of this, the economic, cultural, and physical security of our nation will be subject to other nations’ whims, quite possibly those of nations that do not share our democratic values. Ultimately, the American people will be left woefully unprepared for the reality — or alternative reality powered by misinformation — these technologies are creating.

Among the risks to our national security is the global race to accelerate the development and use of AI. The CHIPS Act was a step in the right direction, limiting authoritarian countries' access to certain integral hardware, but that’s only part of the answer. We must also focus on accelerating our own abilities and vastly expanding our nation’s R&D capabilities.

The good news is there is now a roadmap for how to do so. The National AI Research Resource (NAIRR) Task Force, a federal advisory committee established by the National AI Initiative Act of 2020 and made up of members from government, academia, and private organizations, has released its final report.

In the report, the task force outlines how to create critical new national research infrastructure that will make essential resources available to AI researchers and students. This includes access to computing power, high-quality government data, educational tools, and user support, all of which can usher in an era where America explores and manages the possibilities of AI unencumbered by short timelines and a focus on profit.


We believe researchers should have access to government data, but in a tiered system dependent on how sensitive the data is. For example, National Oceanic and Atmospheric Administration data on hurricane analysis would be on the low end, while military veterans’ health data would require more vetting.

How exactly researchers will access this resource remains an open question. In a separate report we jointly authored, we advocate a hybrid model that relies on net-new government computing power and subsidized cloud computing options from industry.

We are urging Congress to pass legislation and appropriate the $2.6 billion over six years recommended by the NAIRR Task Force.

We think of the life-changing research that scientists in healthcare can pursue, and its impact on humanity. We think of the insights we can all gain on everything from potential cures for diseases to natural disaster mitigation. And we think of the protection we can offer our nation by staying at the forefront of AI’s potential.