Though Americans are coming to grips with the impact of artificial intelligence on their lives, there’s still a significant amount of distrust. A 2025 PDK Poll, an annual survey of American attitudes toward the public schools, finds diminishing confidence in AI where it intersects with K-12 education.
Over the past year, the poll found declining support for AI in the classroom across a range of areas, from tutoring and standardized test preparation to homework assignments. The sharpest decline involved lesson plans. In 2024, 62 percent of respondents supported teachers using AI to “review and use” in preparing lesson plans. This year, only 49 percent did.
As for teachers, a recent informal Education Week poll of 700 instructors found that while many accept AI use in the classroom, others have doubts. One reason for concern: as long as students have access to their phones, there’s a good chance that homework or another assignment might be AI-generated.
Students, of course, are all in. The earliest adopters and heaviest users of AI chatbots have been middle and high school students: Usage of OpenAI’s ChatGPT drops noticeably on weekends and during the summer months, when students are out of the classroom.
State education officials have long lagged behind tech developments, and now they’re playing catch-up on guidelines for AI use in K-12 classrooms. The 2025 State Educational Technology Directors Association (SETDA) report finds that AI is the top priority of state education tech leaders. Sixty percent of the officials surveyed said their states already have professional training efforts focused on AI, but funding remains the biggest obstacle.
More than half the states have issued AI guidelines for teaching and learning. Most of those guidelines are voluntary and provide similar recommendations: Some suggest spending more time teaching students about online safety, while others emphasize professional development for teachers on state-of-the-art technologies. Ohio has the most detailed set of guidelines, which serve as a road map for educators to develop their own AI policies—something every school in the state will have to do next year.
Nearly every set of state guidelines touches on AI literacy. Last year, California lawmakers unanimously passed a bill incorporating AI literacy into K-12 curricula, defining it as the “knowledge, skills, and attitudes associated with how artificial intelligence works.” Utah has provided basic AI training to more than 4,500 teachers in partnership with Intermountain Health, the largest health care company in the state, while Wyoming, New Mexico, and North Carolina have joined a multistate initiative to develop their own AI literacy trainings. Massachusetts offers a self-paced course for educators that covers the basics of AI, from how it works to protecting student data.
Randi Weingarten, the American Federation of Teachers president, told the Prospect that her biggest concerns about AI are “safety, privacy, disinformation, misinformation, the substitution of these companion chatbots for kids.” Nearly every state addresses privacy concerns, but only North Carolina offers a parent guide that explicitly mentions “the potential harmful use of AI companions by children” and strongly recommends keeping kids away from chatbot companions.
Over 20 states have taken up the massive challenge of AI-enabled cheating, one of the factors that has led to smartphone bans in classrooms. Jeff Wensing, president of the Ohio Education Association, says that even before the release of ChatGPT, “there were certain apps available … if you were to enter your calculus problem, the app would do it for you.”
Today’s apps, like Google’s Photomath, let students scan and solve their math problems in seconds, making cheating difficult for teachers to catch. Startups are racing to make it worse, with one company boasting that its software lets users “cheat on everything.” Three-quarters of education officials surveyed by SETDA reported that their state already had, or had plans to consider, phone bans.
Instead of working through difficult assignments, some students go straight to AI. One high school senior writing in The Atlantic describes her peers snapping photos of their readings or algebra problems, and scribbling down whatever the chatbot spits out. Even extracurricular activities like debate tournaments have been cheapened by AI, with some competitors using chatbots to draft counterarguments in real time.
Victor Lee, an associate professor at Stanford’s Graduate School of Education, points out that “levels of cheating have always been high,” and, he adds, “we’re seeing more that the methods are changing rather than the amount.” In a 2024 study that Lee co-authored, high school students self-reported roughly the same levels of cheating before and after the release of ChatGPT: Between 60 and 70 percent admitted to cheating. Now, Lee says, more students are using AI to cheat, and fewer are copying off each other.
But there’s a gray area between using AI for help and using it to cheat. Many teacher-approved tools, such as Grammarly, are built on large language models. Other cases Lee described, like asking an AI to work out a difficult problem, blur the line between completing an assignment independently and having an app do it instead.
West Virginia has an outright ban on using generative AI on assignments without approval. Its guidelines state that “teachers must be clear about when and how AI tools may be used to complete assignments.” Some states leave most of that work up to teachers. Ohio’s guidelines provide a few examples of student-teacher agreements on AI usage, while California suggests that students and teachers collaborate on building AI guidelines: A spokesperson for the California Department of Education said in a statement to the Prospect that schools should develop their own policies to “meet the needs of their local context.”
Some tech companies bill their AI detection software as the way to stop cheating. But these tools are unreliable, so much so that OpenAI shut down its own detection software. Beyond the high error rate, students can re-engineer prompts to evade detection, asking AI to write like a high school student or to include more sophisticated language.
Students who are falsely accused of using AI—as well as those who get caught—often “further disengage” from the classroom, Lee says. He argues that trying to figure out ways to preserve old assessment structures is a missed opportunity “to look for new approaches.” AI forces teachers to re-examine the goals behind the assignments. The take-home essay may be obsolete, but some teachers have embraced research projects as a better way to encourage critical thinking.
“I look at AI like the printing press, or like the calculator,” says AFT’s Weingarten. “At the end of the day, human beings, not machines, should be in charge,” she says. Wensing agrees: “AI in no way, shape, or form should ever take the place of an experienced professional educator.”
How states design AI education frameworks takes on new importance after House Republicans failed to advance a decade-long moratorium on state and local AI regulation. Weingarten noted that the ban was “one of the few things we were able to defeat” in Trump’s mega-budget bill.
But as states race to craft AI policies and experiment with new ways to teach, funding is one of the biggest hurdles. Once COVID-19 stimulus funding runs out, the debilitating federal education budget cuts on tap will decimate state budgets and force edtech officials to make difficult choices about technology education. Building AI literacy, developing and complying with guidelines, training teachers, and creating classroom-friendly AI software all demand long-term investments that state education officials may be ill-equipped to provide as they wrestle with preserving the basics.

