Amid pitched discussions of whether artificial intelligence–powered technologies will one day transform art, journalism, and work, a more immediate impact of AI is drawing less attention: The technology has already helped create barriers for people seeking to access our nation’s social safety net.
Across the country, state and local governments have turned to algorithmic tools to automate decisions about who gets critical assistance. In principle, the move makes sense: At their best, these technologies can help long-understaffed and underfunded agencies quickly process huge amounts of data and respond with greater speed to the needs of constituents.
But careless—or intentionally tightfisted and punitive—design and implementation of these new high-tech tools have often prevented benefits from reaching the people they’re intended to help.
When Indiana attempted to automate its eligibility processes, the state denied one million public assistance applications over three years—a 54 percent increase. Those who lost benefits included a six-year-old with cerebral palsy and a woman who missed an appointment with her caseworker because she was in the hospital with terminal cancer. In 2016, an Arkansas assessment algorithm cut thousands of Medicaid-funded home health care hours from people with disabilities. And in Michigan, a new automated system for detecting fraud in unemployment insurance claims identified five times as much fraud as the older system—causing some 40,000 people to be wrongly accused of unemployment insurance fraud.
These cases are the modern face of an old problem. The United States has long put up administrative burdens to deter the poorest applicants from making use of government help. The New Deal–era distinction between “earned” social benefits—like Social Security—aimed at working white men, and social assistance programs for the poor has translated over time to especially high barriers for the Americans most in need. AI now offers tools to turbocharge this restrictive approach, as the scholar Virginia Eubanks’s prescient work has documented. Well before the release of ChatGPT, Eubanks chronicled how government agencies administering child welfare, social assistance, and law enforcement were using faulty and biased automated systems to make decisions affecting vulnerable communities. The result is a public assistance system increasingly plagued not by oversubscription, but by chronic underuse.
Recognizing this trend, the Biden-Harris administration pursued policies to make sure AI would strengthen, not undermine, the social safety net. Last year, President Joe Biden directed two of the largest safety net agencies to work with states and localities to root out bias in AI tools and increase transparency and human oversight of these systems.
Policy often delivers messages that politicians don’t say out loud. Biden’s approach acknowledged that the problem with AI’s use in the social safety net is not the technology itself but the underlying logic of punitive benefit administration. In other words: Tech shouldn’t be used to restrict qualified people from accessing services, from nutritional assistance to Medicaid, but instead should be a tool for maximizing access to these programs for those who qualify.
The next president has a powerful opportunity to build on this commitment and strengthen the social safety net in the age of AI. On day one, a new administration could bring all benefits programs across the federal government—including those serving people seeking housing assistance, unemployment or disability insurance, student aid, or disaster relief—under the Biden-Harris paradigm. But protecting the most vulnerable in an increasingly algorithmic world requires a broader effort.
First, we must address the take-up problem—low use of social benefits by qualified people in need—in the AI age. As American society speeds toward wider adoption of powerful data-driven tools, we need to know much more about the role AI and algorithmic technologies are playing in blocking—or expanding—the reach of benefits programs to qualified applicants. Building on the Biden-Harris effort to reduce burdens in access to social programs, social safety net agencies should regularly publish estimates not just of improper payments to ineligible individuals, but also of how many eligible people fail to take up benefits, especially when automated decision-making systems are the reason.
Second, the people who rely on safety net programs must have a more influential voice in decisions about deploying these technologies. Data-driven tools are evolving more quickly than traditional public input processes can keep pace with. A new administration can set up councils, composed of representatives from communities most affected by safety net programs, to seek more dynamic input on proposed changes to benefit systems, including the use of AI and automated systems, and to gauge their on-the-ground impacts. Building on Biden’s progress in expanding public participation, agencies should compensate those individuals for their time, as the Department of Health and Human Services has begun to do. Organized labor, representing the frontline local, state, and federal agency workers administering these programs, should be key partners in this effort.
Finally, it must be easier for people to challenge a bad determination made by or with AI. A new administration should create clearer pathways for people to push back when they suspect incorrect decisions, whether they are made by a human or a machine, and connect with attorneys for support. One forum for contestation could be the U.S. government’s equal opportunity offices, an existing infrastructure for ensuring legal protections for people who use federal programs. These efforts could be supplemented by collaborating with civil society organizations—including groups like TechTonic Justice and the Benefits Tech Advocacy Hub that are already leading this work—to set up a national legal aid network for people who may have been affected by AI-driven harms, including unjust denial of their benefits.
Presidential leadership, of course, has limitations. Congress will also have a vital role to play in making these changes durable by adding safeguards to the statutes that govern public benefits programs, as well as by devoting the resources local, state, and federal agencies need to deliver benefits in a timely and equitable manner. But these early steps can help put the U.S. on the path to protecting the social safety net in the age of AI.
Nine in ten Americans will rely on public assistance at some point in their lives. Technology should not be another hurdle in our most difficult hour. When that day comes, every American should know: Help is on the way.