People are rightly concerned about the potential for new technology to foster discrimination. You can load parameters into an algorithm, or allow artificial intelligence to make lending decisions based on a statistical analysis of the creditworthiness of people with certain characteristics, and you’ve created a kind of digital redlining. No human hand would dictate the racial or gender prejudice; it would just happen through the bot.
Regulators are attuned to this. “We have tried to make abundantly clear that there is no AI carve-out in the nation’s consumer financial protection laws,” Consumer Financial Protection Bureau director Rohit Chopra told me in a recent interview. But an order from Chopra’s agency yesterday shows that innovation in racial discrimination can simply be a matter of whom to discriminate against.
In an enforcement action yesterday, the CFPB accused Citibank—not some fly-by-night operation but the nation’s third-largest bank—of violating the Equal Credit Opportunity Act from at least 2015 to 2021, by deliberately denying credit cards to … Armenians. The bank had apparently decided Armenians were all criminals prone to “bust outs,” who would rack up charges and then leave the country. The CFPB had records of employees referring to “Armenian bad guys” or the “Southern California Armenian Mafia.”
So how did Citi pull off this novel racism? Was it AI? Was it machine learning? No. They pulled any application for a credit card or an increased line of credit where the applicant’s surname ended in -ian or -yan, or where the application came from around Glendale, California, home to about 15 percent of all Armenian Americans. And then they would just deny credit to those people, or place holds on their accounts, or send the applications to the fraud prevention unit, or demand income and asset verification that they wouldn’t require of anyone else.
Supervisors and trainers instructed line-level workers to hide this blatantly illegal conduct, “including by telling Respondent employees not to discuss it in writing or on recorded phone lines.” They were then told to concoct fake reasons for denying credit to Armenians. Employees who didn’t flag Armenian names or Glendale-area residents would be reprimanded.
It turns out that racism has a history that predates AI! You don’t actually need a robot to read the last three letters of a last name and throw the ones ending in -ian or -yan in the reject pile. Sure, AI might discriminate marginally faster, but I don’t think a robot scanning the applicant’s last name or address offers much of an efficiency gain over a human doing the same.
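To underline just how little technology this kind of scheme requires, here is a minimal sketch of what such a crude filter amounts to. To be clear, this is not Citi’s actual code (the CFPB order doesn’t reproduce any); the function, the sample applicants, and the assumption that Glendale ZIP codes begin with 912 are all illustrative.

```python
# A hypothetical sketch of the kind of trivial filter described in the
# CFPB order: no AI, no machine learning, just string matching.
# All names, fields, and criteria here are illustrative assumptions.

ARMENIAN_SUFFIXES = ("ian", "yan")    # surname endings targeted per the order
TARGETED_ZIP_PREFIXES = ("912",)      # assumption: Glendale, CA ZIPs start with 912

def flag_application(last_name: str, zip_code: str) -> bool:
    """Return True if an application matches the discriminatory criteria."""
    name_match = last_name.lower().endswith(ARMENIAN_SUFFIXES)
    geo_match = zip_code.startswith(TARGETED_ZIP_PREFIXES)
    return name_match or geo_match

# A few made-up applicants to show how blunt the filter is
for name, zip_code in [("Petrosyan", "10001"), ("Smith", "91205"), ("Jones", "60614")]:
    print(name, zip_code, "flagged" if flag_application(name, zip_code) else "ok")
```

A dozen lines of the most basic string matching, no statistical model required, which is the point.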
Given that there are about half a million Americans of Armenian descent, and not all of them bank with Citi (and none of them will after this), the penalty is relatively small. Citi agreed to pay $1.4 million to consumers harmed by these practices, plus a $24.5 million fine. This is peanuts to a bank with $1.7 trillion in assets.
But the lesson here is that Citi is big enough that they could have buried this “do not sell to Armenians” directive in lines of computer code. That they went old-school and just searched for suffixes suggests that there wasn’t much to be gained from the whiz-bang algorithm. Our zeal to regulate new and exciting developments in rip-offs and lies should not overlook the old standby of random stereotyping. Some things never go out of style.