President Joe Biden signs an executive order on artificial intelligence in the East Room of the White House, October 30, 2023, in Washington, as Vice President Kamala Harris looks on. (Evan Vucci/AP Photo)
“To realize the promise of AI and avoid the risk, we need to govern this technology.” With that statement, Joe Biden signed an executive order on Monday outlining the federal government’s approach to regulating artificial intelligence.
The executive order’s breadth reflects the White House’s recognition that the use of artificial intelligence across the economy is not just a tech sector marketing pitch. The document acknowledges that AI’s adoption could deliver immense benefits to the public and productivity gains across the economy, while unchecked AI tools could just as easily exacerbate existing concerns over cybersecurity, data privacy, discrimination, and the exploitation of consumers. That balancing act—how to maximize the promise of the technology while minimizing the risk—is at the heart of the administration’s strategy.
The biggest impediment to such an approach is actually knowing what the tech companies have in mind. So the administration turned to a novel application of existing law to make sure that the AI era doesn’t feature the usual move-fast-and-break-things ethos of Silicon Valley, where forgiveness is sought instead of permission.
Using powers under the Defense Production Act, the government will require any company building an AI model with “national security, national economic security, or national public health and safety” implications to notify the federal government and turn over the results of all risk assessments and safety tests. That’s broad enough to encompass virtually any large-scale AI model.
Those assessments, known as “red-teaming,” involve tests to identify flaws and vulnerabilities in AI models. They are usually closely guarded corporate secrets. The Prospect wrote last month about how autonomous-vehicle companies operating in California refused to release safety information to the public via public records requests. The state Department of Motor Vehicles subsequently suspended one company, Cruise, from operating on California roads after it withheld video of a pedestrian crash.
Under the executive order, within 90 days, that risk and safety information will have to be presented to the federal government in advance. The provision firmly designates who holds regulatory power over AI, and subordinates the tech companies to the regulators. Tech companies were not told about the provision in advance, according to published reports.
The National Institute of Standards and Technology (NIST), located within the Commerce Department, will establish standards AI systems must meet before being publicly released. That includes standards for red-team testing and the availability of so-called “testbeds” to support those practices. The standard-setting also includes expansions of NIST’s existing risk management and software development guidelines, as well as guidance for how to audit AI capabilities.
A relatively obscure agency, NIST will take on new significance with this responsibility. NIST has 270 days to develop these standards, in coordination with the secretary of commerce and the Departments of Energy and Homeland Security.
THE ORDER NOTES THAT THE COMMERCE DEPARTMENT is tasked with developing guidance for authenticating content, labeling it through a watermark, and detecting inauthentic content. The timeline for this is long—more than a year and a half to put the guidance into practice. But if effectively implemented, it could help assuage concerns over “deepfakes,” fabricated audio and video content. Deepfakes can be used for jokes and satire, but they could also be used to circumvent security protections on consumers’ finances, or even for mass manipulation through information channels.
The White House recognizes that AI can replicate existing racial discrimination and biases at an accelerated rate. To address this, the Department of Justice and “Federal civil rights offices” will provide “best practices for investigating and prosecuting civil rights violations related to AI.” The administration also expects law enforcement departments across the country to adopt AI technology in their operations. Within one year, the Civil Rights Division of the Justice Department must make recommendations on best practices for AI in sentencing, parole and probation, pretrial release and detention, surveillance, predictive policing, and more.
Additionally, the order announces the creation of an AI Safety and Security Board, an advisory board to the Department of Homeland Security. The board will be made up of “AI experts from the private sector, academia, and government, as appropriate.” It will be worth watching whether the board’s makeup mirrors that of the first two “AI insight forums” convened by Senate Majority Leader Chuck Schumer (D-NY), which have been criticized for disproportionately representing the tech industry rather than the public interest.
Schumer responded to the executive order by calling it a “crucial step,” but added that “all executive orders are limited in what they can do, so it is now on Congress to augment, expand, and cement this massive start with legislation.” He has previously said that any legislative package would not be ready until next year.
Inside the federal government, agencies will have to strengthen privacy guidance on any commercially available information they collect, to account for “AI risks.” The language is dry, but it underscores that personally identifiable information already in the hands of data brokers can be exploited, which is why the president is “call[ing] on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids.” A fact sheet about the executive order continues: “AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems.”
There’s also mention of the application of artificial intelligence in the U.S. military, as well as efforts to “counter adversaries’ military use of AI.”
Recognizing how the technology could reshape labor markets, the order has a section titled “Supporting Workers,” which requests a report on “AI-related workforce disruptions” and options for what federal support for workers could look like if they lose their jobs to AI.
Critically, this recognizes that AI’s ability to streamline tasks across the economy doesn’t have to be a zero-sum game where workers are left out. As the fact sheet states: “AI is changing America’s jobs and workplaces, offering both the promise of improved productivity but also the dangers of increased workplace surveillance, bias, and job displacement.”
Taken together, the executive order adds weight to the voluntary commitments Biden secured from Amazon, Google, Meta, Microsoft, OpenAI, and other tech companies this past summer. Going forward, attention turns to Vice President Kamala Harris’s visit to the United Kingdom for an AI summit.