Photo: The Phoenix VA Health Care Center in Phoenix, in an April 28, 2014, file photo. Fake appointments, unofficial logs kept on the sly, and appointments made without telling the patient are among the tricks used to disguise delays in seeing and treating veterans at Veterans Affairs hospitals and clinics. They're not a new phenomenon; VA officials, veteran service organizations, and members of Congress have known about them for years. (AP Photo/Ross D. Franklin, File)
Corruption has been a part of American government since its inception. In principle, computers and electronic record-keeping promise greater transparency and honesty. E-government tools are now part of the most central tasks of citizenship, including voting, registering births and deaths, and paying taxes.
Successful examples range from the utterly mundane (E-ZPass provides more effective traffic control and simpler toll payment for drivers, and collects real-time data about the use of public infrastructure) to the absolutely extraordinary (a telecenter in Uganda's Bwindi Impenetrable National Park protects mountain gorillas, monitors human-to-gorilla disease transmission, and creates local economic growth). If well designed and implemented, these tools can increase government efficiency, transparency, and accountability; facilitate agency communication; improve service quality; and promote public participation.
Yet digital government is only as honest as the people who design, implement, and operate it. A vivid example of the use of computers to promote dishonesty is the still-unfolding scandal at the Phoenix Health Care System of the U.S. Department of Veterans Affairs.
On May 28, V.A. Acting Inspector General Richard J. Griffin released an interim report chronicling "gross mismanagement of V.A. resources" and criminal misconduct at the Phoenix Health Care System (HCS). The report focuses on irregularities in the electronic wait list that resulted in significant delays in care (veterans waited an average of 115 days for their first primary care appointment) while allowing the Phoenix HCS to report wait times of only 14 days and qualify for performance bonuses. When the inspector general's investigation began, only 1,400 of the 3,100 veterans seeking care at Phoenix HCS were entered into the electronic scheduling system; many of the rest were languishing on "alternative" wait lists used to falsely suppress wait-time data.
As early as 2005, a report from the V.A. Office of the Inspector General showed that inaccurate recording of veterans' wait times, multiple waiting lists, and irregular outpatient scheduling procedures were a problem nationwide. In one of the most egregious examples, a veteran visited a primary care clinic at the Atlanta V.A. Medical Center on August 3, 2003, and received a referral to an eye clinic. The patient requested an immediate appointment, but instead of entering "next available," a scheduler looked forward in the system to find the next opening in the calendar, not until June 21, 2004, and entered that the patient wanted an appointment on June 15, 2004. The wait time was logged as only 6 days (June 15-21), rather than the 321 days that actually elapsed between the August 3 visit and the June 21 appointment.
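To make the arithmetic of that example concrete, here is a minimal sketch in Python of how a wait-time metric computed from a back-entered "desired date" can diverge from the delay the patient actually experiences. The function and field names are assumptions for illustration only; this is not the V.A.'s scheduling software.

```python
from datetime import date

# Hypothetical illustration; the function and field names are assumptions,
# not the V.A.'s actual scheduling system.

def reported_wait_days(desired: date, appointment: date) -> int:
    """Wait time as the system logs it: appointment date minus the recorded 'desired' date."""
    return (appointment - desired).days

def actual_wait_days(requested: date, appointment: date) -> int:
    """Wait time as the veteran experiences it: appointment date minus the day care was requested."""
    return (appointment - requested).days

requested = date(2003, 8, 3)         # the day the veteran asked for the next available appointment
desired_entered = date(2004, 6, 15)  # the "desired date" the scheduler back-entered near the first opening
appointment = date(2004, 6, 21)      # the first actual opening on the calendar

print(reported_wait_days(desired_entered, appointment))  # 6 -- the figure that was logged
print(actual_wait_days(requested, appointment))          # the months-long wait the veteran lived through
```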
By 2010, a memo from William Schoenhard, the agency's deputy under secretary for health operations and management, was circulating internally and had been leaked to a watchdog website. It catalogued seventeen common "scheduling practices to avoid," and called for immediate action to eliminate misuses of the electronic wait lists. In the memo, Schoenhard wrote, "Workarounds have the potential to compromise the reliability of the data as well as the integrity and honesty of our work. Workarounds may mask the symptoms of poor access and, although they may aid in meeting performance measures, they do not serve our veterans."
In the Phoenix case, schedulers bowed to pressure from supervisors to "fix" records exceeding the 14-day wait-time goal. They kept alternate wait lists, set desired appointment dates within a few days of the next available appointment, deleted or canceled consults, and overwrote old appointments with new ones to "reset the clock." According to Griffin, a direct consequence of these practices was that the Phoenix HCS leadership significantly understated wait times in their performance reports, "one of the factors considered for awards and salary increases."
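The "alternative" lists work the same way in miniature: the performance metric is computed only over patients entered into the official electronic system, so anyone parked on an off-system list simply disappears from the average. The sketch below uses invented dates and data structures to illustrate the mechanism; it is not drawn from actual V.A. records.

```python
from datetime import date
from statistics import mean

# Invented example data; each entry is
# (date care was requested, date of the appointment actually given).
official_electronic_list = [
    (date(2014, 4, 1), date(2014, 4, 12)),
    (date(2014, 4, 3), date(2014, 4, 16)),
]
alternative_paper_list = [  # patients kept off-system, invisible to the performance report
    (date(2014, 1, 6), date(2014, 5, 20)),
    (date(2014, 1, 15), date(2014, 6, 2)),
]

def average_wait_days(entries):
    """Mean days between a care request and the appointment actually given."""
    return mean((appointment - requested).days for requested, appointment in entries)

print(average_wait_days(official_electronic_list))                           # what gets reported
print(average_wait_days(official_electronic_list + alternative_paper_list))  # what veterans experience
```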
The usual understanding of corruption involves the use of political office for personal gain: selling government assets, getting your brother-in-law a cushy job, or taking a bribe in exchange for delivering a sweetheart contract. Malfeasance in e-government may look different, but the objective is the same: Public servants exploit the shortcomings of political systems for advancement and financial rewards. Is intentional fudging of government statistics, what we might call "gaming the state," the new face of government corruption in the 21st century?
Emily Shaw, policy manager at the Sunlight Foundation, which uses technology to create more open, accountable governments, is quick to point out that we shouldn't blame computerized record keeping for such finagling. "The [Phoenix HCS] system may have been gamed by people who misclassified things intentionally, but any system can be gamed if there's no oversight," she says. Similarly, Sharon Dawes, a senior fellow at the Center for Technology in Government, a public-private research partnership based at the University at Albany, SUNY, points out that e-government systems are influenced by policy, management, and data quality. The Phoenix V.A. case was a perfect storm of management failures, bad data, and policy that linked single performance indicators, like appointment wait times, directly to rewards.
But technology did play an enabling role. When you get out a paper appointment book in front of a suffering veteran and flip through more than three hundred pages to the next available appointment, it makes problems in the system tangible, insistent. When you deliberately enter data in a scheduling program, it's easier to obscure the human cost of inaction.
Phoenix is hardly a unique case, nor is the problem limited to any single domain of government. In New York City in the mid-1990s, law enforcement moved to a computerized statistics system (CompStat), which evaluated precinct performance by two major indicators: whether "quality of life" arrests were increasing and whether violent crime was dropping. ("Quality of life crimes" is a term that refers to "relatively minor, nonviolent, illegal behaviors that collectively undermine people's sense of well-being and public safety in an area," according to a brief by the Minnesota House of Representatives Research Department.) In a case now made famous by a Village Voice exposé, police officers in the Bedford-Stuyvesant neighborhood of Brooklyn used stop-and-frisk tactics to increase their quality-of-life citations while miscategorizing violent and major crimes like rape as lesser crimes like trespassing. This made their performance indicators look good at precinct accountability meetings, but it didn't make Bed-Stuy safer. Yet the fact that the system was computerized gave it credibility, even when the reality was GIGO: garbage in, garbage out.
In Indiana a decade later, a changing food-assistance policy, a nationwide recession, and natural disasters overwhelmed a new $1.4 billion automated welfare-eligibility system. To meet contract targets for timely decisions, call center workers used a catchall reason, "failure to cooperate," to deny benefits to thousands of Hoosiers. In the most shocking case, a woman was kicked off Medicaid for missing an appointment with a caseworker; she missed it because she was in the hospital with congestive heart failure. The "failure to cooperate" workaround made the public-private partnership running the new system (IBM was the private contractor) look like it was meeting performance targets, but it deprived citizens of access to needed medical care, food, and childcare.
In all three cases, public employees and private companies reaped rewards (police officers and precinct commanders were promoted, IBM received half a billion dollars on its contract, and V.A. administrators got raises) while citizens were denied safety, justice, and lifesaving social support. All with the click of a button.
Analog corruption requires face-to-face contact, linking action and consequence, and a certain amount of moral clarity. Folding a bill around your driver's license to get out of a speeding ticket is a clear ethical choice and an explicit bribe. Under digital government, the moral choices are murkier, the impacts muted by distance and abstraction.
So, how do we make government technology less corruptible? Sharon Dawes and Emily Shaw offer a few lessons for maximizing efficiency and accountability while minimizing the likelihood of finagling:
- Beware single, quantitative indicators of performance. Campbell's Law holds that the narrower the indicator used for social decision-making, "the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor." Judging a policy's success by wait times, test scores, or rising quality-of-life arrest rates leaves it open to attempts to game the system: multiple wait lists, cheating scandals, stop-and-frisks.
- Openness of e-government information should be proportional to power. Emily Shaw argues that "the more power you have, the more we need to make your decision-making transparent. Conversely, the less power you have, the more attention we need to pay to whether your privacy rights are being protected." Sharon Dawes points out that, ironically, giving patients or families access to wait-time data on a site like data.gov might have identified the problem in Phoenix earlier. The people closest to the problem would have seen that the wait times they were experiencing were far out of line with the wait times being reported. Instead, this lifesaving information was uncovered by old-fashioned audits, hotlines, and whistleblowers.
- Don't try to use technology to solve problems that aren't primarily technical. The real problem in the Phoenix case is that the Veterans Administration has too few doctors and too many patients to meet two-week wait-time targets. In a May 30 national access audit, the Veterans Health Administration wrote: "Meeting a 14 day wait-time performance target for new appointments was simply not attainable given the ongoing challenge of finding sufficient provider slots to accommodate a growing demand for services."
In the New York Times, Richard A. Oppel Jr. and Abby Goodnough wrote, "At the heart of the falsified data in Phoenix, and possibly many other veterans hospitals, is an acute shortage of doctors, particularly primary care ones, to handle a patient population swelled both by aging veterans from the Vietnam War and younger ones who served in Iraq and Afghanistan." Political decisions, not scheduling problems, rob veterans of the care they deserve.