Every day, without even knowing it, you share intimate personal details about your life with people you’ve never met. The medical symptoms you search online follow you: first to the pharmacy where you pick up a prescription, then to a database of specialists looking to add you as a patient, or to an insurance company creating a risk pool. The car you’ve researched on the Web has been broadcast to your local dealerships before you’ve even left the house. When you walk in the door, the salesman already knows which color you want—as well as your salary and driving history—and pulls the shiny new car of your dreams around front.
Americans worship privacy, railing when our favorite websites alter their terms of service to collect just a bit more information about us. Yet from the moment you swipe your rewards card at CVS or update Facebook, there are companies you don’t even know exist—often referred to as “data brokers”—watching, taking notes, and connecting the dots between the virtual you and the real one, using sophisticated technology to create vast and detailed personal profiles of hundreds of millions of American consumers. These databases are available to law enforcement and welfare agencies, to marketers, to banks and insurance companies, to employers looking for background information on their employees, to anyone with a credit card. There is no one watching the data brokers, no one verifying the information they hold and sell is accurate. Worse, there is no way for you to know what their dossiers contain about you, and no easy way to remove yourself from the databases of the hundreds of companies regularly engaged in buying and selling your personal details.
“We have a completely unfettered market in the sale of personal information,” says Chris Calabrese, legislative counsel for the ACLU and an expert on data brokers. “It’s like the wild, wild West out there to buy Social Security numbers, names, my preferences, the magazines I subscribe to, the places I go to online, my tax records, the amount of money I make, my medical issues.”
The largest data broker is a billion-dollar-a-year business with a cunningly bland corporate name. Arkansas’s Acxiom Corporation stores up to 1,500 relevant data points for each of its 500 million consumer profiles across a network of 23,000 computer servers, processing a staggering 50 trillion data transactions per year, according to a report by Natasha Singer for The New York Times. The company got its start 40 years ago as Demographics Inc., combing through phonebooks to compile voting lists and consumer data for direct marketers. Today, data brokers legally purchase your information from nearly every site you browse, through loyalty card programs at the grocery stores, from the DMV and other government agencies, from payroll exchanges—the list is ever growing—combining what they buy with every public record they can get their hands on. This is nothing new: As far back as 1899, credit bureaus were compiling information about potential clients—information on sexual proclivities, political leanings, and religious affiliations. But it wasn’t until the tech boom of the late 1990s, amid a potent mixture of cheap digital storage and growing Internet use, that data brokering exploded.
“It’s like a data gold rush,” says Ashkan Soltani, an independent privacy and security researcher formerly of the Federal Trade Commission. “After we move our communication from analog to digital, we have logs or data trails associated with all this activity. Not to sound paranoid—and I try to curb the rhetoric—but generally everything we do is now recorded and stored for some purpose.”
The law has failed to keep up with the rapid expansion of data harvesting, even though it stepped in long ago to protect other sensitive information like credit reports—and even though many data brokers were born from the credit bureaus. There was a time not long ago, before the Fair Credit Reporting Act (FCRA) passed in 1970, when consumer credit files were riddled with erroneous data and speculation—and when Americans had no legal recourse to correct these mistakes.
The FCRA demanded credit bureaus be accountable for the financial information they collected and gave consumers free access to their credit-report information, along with the power to contest any errors. As Senator William Proxmire told The New York Times in 1971, “Unfounded rumors and innuendoes from neighbors that sometimes creep in [to credit reports] can be weeded out. From now on if you are refused credit … you will be able to find out why.” Most important perhaps, the FCRA guaranteed the protection of that sensitive data—consumers would now have to consent before a bank or anyone else peered into their personal records. If a bad credit report was grounds for being denied a loan or not being hired for a job, the credit holder would have the right to know.
The now-unemployed credit-bureau workers took up new roles as digital eavesdroppers in the brave new world of data brokerage. The bureaus either sold off their lucrative market-research firms or transformed them into subsidiaries, continuing to collect consumer profiles but largely sidestepping the law. “Marketing escapes all legal barriers for information sharing,” says David Jacobs, consumer protection counsel at the Electronic Privacy Information Center (EPIC). “There are also affiliate sharing rules that allow one company that has a corporate relationship with another to freely share information,” which could even include data such as Social Security numbers. Beth Givens, the executive director of The Privacy Rights Clearinghouse, an advocacy group that tracks data brokers and their policies, calls it “a gigantic loophole.”
The business’s unregulated nature leads to misuse and abuse. So-called people search engines—websites like Spokeo and US Search and White Pages—often advertise background-checking services, even though official background-check companies are highly regulated under the FCRA. Background-check companies must follow the same rules as the credit bureaus: Namely, they must obtain authorization from the subject of a background check before the check is run and provide the results of those checks once a year free of charge to anyone who asks, a valuable consumer protection against wanton abuse and fraud.
BeenVerified.com, which promotes itself as “Your people search and background check answer,” is a common example of how data brokers may skirt the law. A prominently displayed link, “Background check FREE,” allows anyone to search a name, recovering last known addresses, names of parents, criminal records, property-ownership records, telephone numbers, bankruptcy, lien, and divorce records, social-networking accounts—all this without a hint of prior consent from the subject of the search. To pull this off, a disclaimer across the bottom of the page reads, “BeenVerified does not provide private investigator services and this information should not be used for employment, tenant screening, or any FCRA related purposes.” Translation? If you use a service marketed as “Background check FREE” for a bona fide background check, you are in direct violation of federal law. In June of 2012, the data broker Spokeo agreed to an $800,000 settlement with the FTC for exactly this—targeting advertisements directly at hiring managers and human-resources departments, urging personnel to “Explore Beyond the Resume,” according to FTC documents. The agency alleged that Spokeo failed to “make sure that the information it sold would be used only for legally permissible purposes” and further failed to ensure the information was even accurate.
The implications are troubling: With a quick online search, hiring managers or landlords can find bankruptcy documents and arrest records, even if those public documents have been officially expunged. (Although government agencies and credit bureaus are required to seal certain types of files—juvenile arrest records and personal bankruptcy filings after seven years, among others—there is nothing stopping data brokers from gathering this information and holding it in their systems indefinitely.) “We think there is a lot of noncompliance,” Givens told me, referring to employers checking up on potential employees without their consent.
There are also implications for public safety. “We’ve heard from police officers, from stalking and domestic-violence victims,” says Givens. “It’s particularly upsetting to people who must protect their personal safety and keep their home address private. But you don’t have to be a stalking victim to be upset about the fact that all of this information is available to basically anyone.” And available to anyone it is. In 2005, ChoicePoint, a leading data broker that was subsequently bought by the information giant LexisNexis, inadvertently sold 145,000 private records to an identity-fraud ring. That lapse resulted in at least 800 confirmed cases of identity theft and drew an unprecedented $15 million fine from the FTC. At the time, most people had no idea how their personal details had even made it to ChoicePoint’s servers and were stunned to find that their identities were being bought and sold on an open market without their knowledge or consent.
Politicians have begun to take notice. Representative Ed Markey, the Massachusetts Democrat, led an inquiry into the secretive methods of data collection during the last House session. The boilerplate responses he received fell short of actually explaining in any detail how data brokers obtain the targeted consumer details that they do. An Acxiom spokesperson later told me that the company gathers data primarily from three sources: the public domain, surveys, and “approved third parties,” which it “carefully vet[s].”
There is also the question of whether state and federal agencies should be contracting with unregulated data brokers to cross-reference applicants for welfare programs such as TANF and Medicaid—which they do. According to a 2009 report by the Medicaid Institute, case studies indicated that while some states simply matched application information to tax records and other federal statistics to ensure eligibility, others relied on data brokers to verify critical eligibility requirements. The state of Georgia contracted with ChoicePoint, at a cost of roughly $3 million a year, to run nightly background checks on new applicants “to improve the accuracy of eligibility determinations and simplify the income and resource verification process for Medicaid eligibility workers.” In doing so, “The data broker provided new sources of income and asset information previously unavailable to Medicaid eligibility staff,” according to the report. The data points deemed relevant included applicants’ possible roommates and relatives, bankruptcies, liens and judgments against them, and child-support payment history. Texas, in its employee guidelines for the use of data brokers posted to the state’s Health and Human Services website, instructs its welfare eligibility workers to use data brokers to hunt for “clues to unreported income, resources and living arrangements.”
Welfare agencies need all the help they can get; they are often underfunded and overburdened, and legitimate means of weeding out those who would seek to take advantage of the system can only help those truly in need. But the dilemma lies in a reliance on unregulated businesses with zero effective transparency to aid the government in determining that eligibility. Because there’s no way for an individual to check if the information contained in a data-broker profile is even accurate, the potential for a transference of cost—in the form of time, energy, and money—from the state to the needy is high.
When data brokers provide information to government agencies, it also gets them around laws designed to make the government a safeguard for your sensitive data. The Driver’s Privacy Protection Act, for example, exists explicitly to forbid the sale of driver’s-license records to private companies. “You would think that this would mean you don’t have driver’s license information in these databases,” Calabrese at the ACLU told me, “yet you find that you have tons and tons of driver’s license information in these databases, and it’s puzzling at first—you know there’s a law that prohibits this. And then you read the law and there is a whole laundry list of permissible uses.” Calabrese finds the language of the bill and reads it back to me: “For use by any government agency including any court or law enforcement agency, in carrying out its functions, or any private person or entity acting on behalf of an agency in carrying out its functions.”
The most worrisome outcome of the data revolution—and the hardest to quantify—is the discrimination that follows in this treasure trove of information’s wake. We’re seeing it already. Staples sells the same Swingline stapler for about a dollar less to online shoppers based on their locations, The Wall Street Journal reported late last year: “Areas that tended to see the discounted prices had a higher average income than areas that tended to see higher prices.” Acxiom rates consumers on a 70-point scale, essentially creating a caste system in which individuals deemed unworthy never see certain offers, while being disproportionately targeted for others. “It seems like that isn’t that big of a deal, but it is,” says Calabrese. “If you start to make decisions about people based on their location, etc.—how is that any different than insurance red-lining?” Marketers have not always been the best stewards of personal information, of course. In the past, advertisers have infamously compiled “sucker lists”: groups of gambling addicts, gullible seniors, and other demographics that will buy whatever you throw at them. “What’s at stake,” wrote Singer, “is the risk that wholesale data collection creates an algorithmic system that assigns some people better offers like low interest rates while using an invisible scoring system to prevent others from getting loans, insurance or jobs. The risk is discrimination by statistical inference.”
Marketing, in a sense, is statistical discrimination. It’s about pinpointing where and for whom advertising dollars will have the greatest impact. But when the balance of power is as skewed as the relationship between consumers and data brokers, when an asymmetry exists that will only grow as more data becomes available, marketing becomes predatory. “At the end, it’s trying to differentiate between which customers are more valuable,” says Ashkan Soltani. “And that can mean different things. If you’re an insurance company, the person that gets sick less is much more valuable.” But it can also mean providing income data to debt collectors—or to credit card companies; the potential result is a world where your credit is frozen the moment you lose your job.
“The takeaway is that this isn’t some natural law, this is just bad law,” says Calabrese. “We could choose to control personal information and the way it’s used; there are data protection laws in Europe. We have laws that work very well around the Census and taxes. What we don’t have right now is the political will to create those kinds of laws to protect consumer information.”
Slowly but surely, the political will is growing. Early last year, President Barack Obama introduced a “Privacy Bill of Rights,” a blueprint that the White House says will improve consumers’ privacy protections and give them greater control over what information is gathered about them, a gesture heralded by privacy advocates. The FTC has also aggressively stepped up its prosecution of data brokers who abuse their sensitive positions, either by selling information for illegitimate uses or through false marketing. Politicians such as Senator Jay Rockefeller and Representative Ed Markey have launched inquiries, hoping to bring light to the shadows in which data brokers do business.
Providing ready access to this information—like the centralized database credit agencies offer for credit reporting—would be a good start. At the very least, it would bring a degree of transparency to an opaque system that is largely unknown to the American public. But what would that system look like? Unlike credit reports, data-broker profiles can be used for marketing, and allowing consumers to correct false information would just make their marketing lists more accurate.
The real solution may be the rethinking of what public and private mean in the Internet age, what freedom of speech means in a world of corporate surveillance. In light of the tragedy in Boston, security experts and civil-liberty advocates have thrust the specter of video cameras on every street corner back into the national conversation. The debate is a stark one: Is the benefit of constant public scrutiny worth the cost to our privacy? But just last month, Facebook announced a marketing deal with four major data brokers that would allow those companies access to your private Facebook profile for the purpose of targeted advertising, combining that data with all the other information they gather. This frontier—extremely personal information posted to an ostensibly public website—blurs privacy lines completely, and it’s much more insidious than a CCTV camera staring you in the face. The founders, after all, “did not foresee a time when public records would become electronic,” when they would be aggregated and combined into meticulous personal profiles to be bought and sold, says Beth Givens. “Their purpose was to enable citizens to monitor their government, not to monitor each other.”