I have not tried ChatGPT, nor have I felt any urge to do so. If I believed the drumbeat of media reports over the past several months, my abstinence would be particularly foolish. This new technology, I’m told, represents a sword hanging over my head and the heads of my fellow workers in the knowledge economy. Highly refined artificial intelligence and large language models are coming for my job and the job of anyone who thinks for a living, and we’d all better figure out how to use this tool to complement our work rather than be run over by it.
The casualties have begun to mount, in fact. About a month ago, Chegg, a subscription-based tutoring service for high school and college students, watched its stock fall by half after revealing in an earnings report that its growth had been stunted by clients moving to AI. The stock has mostly stayed there since; it’s hard to see a future for a company that employs humans to perform a task tailor-made for a computer. It’s never good when your CEO has to say, “This is not a sky-is-falling thing,” as Chegg chief executive Dan Rosensweig reassured us in May.
Chegg has scrambled to build its own AI competitor called CheggMate, using the advanced GPT-4 platform. But given that anyone can sidle up to Google or Bing, or run a GPT engine on their phone, it seems unlikely that many will pay Chegg for the privilege of using its version. Other education technology companies are putting on a brave face, but they clearly fear an AI-driven apocalypse for their business, and they probably should.
But should we really mourn the demise of Chegg, a company that exhibited some of the most notorious hallmarks of late capitalism, including outsourcing, data insecurity, and outright deception? A runaway media narrative has become so caught up in the dangers of AI that the dangers of ordinary fraudulent business practices have faded into the background.
JUST TO BE CLEAR, Chegg is a slightly higher-tech way for kids to engage in the time-worn strategy of cheating at school. The investigative reporters at The Capitol Forum have been documenting the inner (and outer) workings of this company since 2019, when they found that students were using Chegg “to do their homework and even cheat on tests, sometimes in the middle of an exam.” This violates not only the company’s stated policy, but university honor codes and state laws.
Chegg offers an “Expert Answer” service, connecting students with one of tens of thousands of subject matter specialists based in India, who simply supply the answers to homework questions. The Indian subcontractors make between $1 and $3 per answer. Students have often used the service during exams, asking urgently for help with very specific questions, in most cases uploading a picture of the exam sheet with the question on it. One online ad unearthed by The Capitol Forum asked students to “Post A Pic of Your Accounting Homework.”
The answers are then kept in a searchable database of millions of homework and exam questions that students can peruse. Request volume would spike at the end of each semester (that is, during exam season), a clear signal that this was simply a cheating site. Several teachers have articulated the same fear about AI, but it has rather obviously been happening for years already.
Some schools sent takedown notices when their exam questions showed up on Chegg; others turned to proctoring software that flags students pulling out phones during exams, or to anti-plagiarism apps. But Chegg rolled along for years as a publicly traded company feted by Wall Street. The pandemic was a particular high point for the company, as distance learning made cheating even easier.
Chegg put out an “Honor Shield” program that purported to reject exam questions, but of course this was a solution to a problem the company itself created, and it required professors to hand Chegg their questions in order to block students from accessing answers to them. That helped Chegg’s business by giving the company more data and test materials to use with other students. Chegg eventually stopped cooperating with schools on cheating investigations; in an earnings call in early 2021, Rosensweig blamed the cheating epidemic on teachers recycling test questions.
Pearson Education, the textbook publisher, sued Chegg in 2021 for copyright infringement, alleging that Chegg published answers to Pearson’s textbook questions that students could cut and paste into their homework. The case remains active, unlike other copyright cases Chegg has settled.
YOU SHOULDN’T BE SURPRISED to learn that a company whose business model enables cheating was also alleged to have engaged in some cheating itself. Students pay a subscription fee of between $15 and $20 per month to access Chegg; many complained that they were still billed after canceling their accounts. Others have said that Chegg offers “suspensions” when school is out of session instead of cancellations, but then charges users during the suspension period. Chegg often issues refunds when called on this behavior, but that relies on users spotting the problems in the first place.
The Better Business Bureau site for Chegg remains littered with hundreds of complaints like these; one alleges that the answers Chegg provided “were wrong, illogical, not in proper English.” A class action lawsuit filed last December also charges that Chegg uses an auto-renewal process to keep payments coming without authorization, gives students no easy way to cancel, and fails to provide clear subscription terms, instead burying them in fine print. The plaintiff purchased an eTextbook from Chegg in August 2022 for $19.99, and that October was charged another $19.99 for a “Study Pack” she did not want.
To pile injury upon injury, Chegg has exposed its users to four separate data breaches since 2017, revealing information on 40 million customers. Data was not encrypted and was accessible to third-party contractors. Three of the breaches involved Chegg employees falling for phishing attacks. The Federal Trade Commission took action against Chegg, ultimately forcing the company to set up better security features, limit the data it collects, and allow users to have their data deleted.
Chegg experienced a post-pandemic lull in growth, and the AI revolution has brought it ever closer to extinction. I think this review of the corporate history should leave everyone asking: So what?
Companies built for cheating, which exploit workers, exploit customers, and neglect security in ways that leave customers open to abuse, should not be protected from the march of technology. And the endurance of such companies may pose greater risks than those infused with artificial intelligence.
Of course, many of the problems Chegg raises will only get worse with AI at everyone’s fingertips. Cheating could accelerate, with answers delivered in real time rather than after a wait for an Indian subcontractor to complete them. But a tech-enabled academic fraud network, powered and cheered on by investor capital, was already with us. And the response should not be more fraud detection but an education system that puts a premium on developing actual reasoning rather than rote memorization.
More broadly, the issues AI raises (data security risks, impersonation fraud, intellectual-property violations, theft, and increasing market power) have been among the top challenges facing regulators for many years. What should concern us is their relative failure to deal with these problems, not the newness of the AI threat.
Fortunately, we have a reinvigorated FTC and other agencies that have been taking on the abuses of old industries and can bring the same scrutiny to new ones. The key is understanding their mission: to protect consumers, workers, and business partners from rip-offs and fraud. I get that AI can be nerve-wracking. But focusing on the actual harms of corporate America is what it will take to make AI more of a utility and less of an existential threat.