Most US students now use AI. Meet the ones who are just saying no.

When OpenAI released ChatGPT in 2022, it ignited a storm among educators. Here was a tool that, with a few prompts, could gather reams of information, compose human-sounding sentences, and spit out an answer to seemingly any question. Students, they figured, would surely use it to cheat.
As the popularity of artificial intelligence chatbots has exploded, so has concern over their potential misuse. In March, The Wall Street Journal warned parents: “There’s a good chance your child is using AI to cheat.” New York Magazine declared that “everyone is cheating their way through college.”
For many students, these headlines ring true. But not for everyone.
Why we wrote this
As artificial intelligence creeps into everyday life, some students are turning back the clock. Their reasons range from profound to practical, and speak to preserving a sense of community – and humanity.
“What’s the point of going to college if you’re just going to rely on this thing to give you the right answers?” said Marie Norkett, a student at St. John’s College in Santa Fe, New Mexico. “You are not improving your mental abilities.”
Ms. Norkett is part of a group of students who are choosing not to use AI in their studies. Their reasons are both profound and practical. Ms. Norkett, for example, worries not only that shortcuts might blunt her critical thinking skills, but also about the accuracy of what AI chatbots – which ingest vast amounts of information from the internet to mimic human cognition – produce.
These students are a minority on campus. In a September survey of college students by Copyleaks, the maker of an AI-powered plagiarism detector, 90% of respondents said they use AI for their schoolwork. Of course, not all of them were using it to cheat: the most frequently reported uses were brainstorming (57%) and outlining (50%).
Yet, like many educators, some AI abstainers fear that chatbots will make cheating easier. In an internal OpenAI report on ChatGPT usage, about a quarter of 18- to 24-year-olds – the most active segment of the bot’s more than 700 million weekly users – said they use it for “exam answers.” A September report from Discovery Education found that 40% of middle and high school students have used AI without their teacher’s permission, and nearly two-thirds of middle and high school teachers say they have caught students using chatbots to cheat.
The true scale of the cheating problem remains a matter of debate. Victor Lee, an associate professor of education at Stanford University, says decades of research have put the cheating rate at between 60% and 80% – a figure that has “remained pretty stable” since ChatGPT came on the scene.
Regardless, it’s clear that students are using the technology – often. That use reflects a range of pressures: Students feel immense pressure to succeed academically as they juggle school, extracurricular activities, work, and social commitments.
“There are also situations where [students] just don’t know what the line is between acceptable and not acceptable,” adds Professor Lee.
Yet some students have resisted peer pressure to use AI – whether legitimately or illicitly. They have charted a path toward a more old-fashioned education that, for them, is fulfilling, meaningful, and decidedly humane.
“The full expression of a human being is not a robot. It’s a creative, interactive force,” says Caleb Langenbrunner, another St. John’s student. Simply accepting the answers AI provides, he says, “doesn’t seem to really understand what it means to be human.”
Maintaining a sense of community
Unlike students on many college campuses, St. John’s students say they rarely see their classmates using AI. That may be due to the school’s unusual teaching model. It offers only one degree, in liberal arts, and its entire curriculum is built around a four-year reading list of what the college calls the “greatest books” in history – tomes like Plato’s “Republic” and Aristotle’s “Politics.”
Yet St. John’s students aren’t the only ones who see their peers’ overreliance on AI as a problem. Ashanty Rosario, a high school student from New York, says she doesn’t use AI, and she wishes her classmates didn’t either.
“I think we lose the sense of community in the classroom if we don’t actively engage in the work assigned to us,” she says. When students use AI instead of looking to their peers, “it harms not only the person using it, but also others who may well gain a different perspective that enhances their learning.”
The meteoric rise of AI-generated writing and art has also heightened concerns about the future of the humanities. Technology has entered the scene at a turbulent time for creative disciplines. The number of students graduating from college in the humanities fell 24% between 2012 and 2022, according to the American Academy of Arts and Sciences.
“Much of the humanities and arts is based on original thinking [and] creativity,” says Ms. Rosario. “It’s something that can’t be replicated, especially by a machine. So I think to maintain this cycle – of art and the dissemination of culture – it has to come from within.”
Question of credibility
Abera Hettinga, a philosophy and psychology student at the University of New Mexico, says he doesn’t use AI because it would “do a disservice” to his future self. He also took a course in logic and critical thinking that shaped his perspective. Students in the class, he says, studied the accuracy of ChatGPT’s answers to different questions, and the chatbot didn’t impress him.
Sometimes, when ChatGPT gave him a questionable answer, he would push the bot to explain its logic. Mr. Hettinga found that the chatbot “often predicted what you would like it to say.”
OpenAI has acknowledged that older models tended to tell users what they wanted to hear, even if that meant providing incorrect information. “That’s what shaped my confidence in its credibility,” Mr. Hettinga says. OpenAI says it has since updated ChatGPT’s software to combat “sycophancy.”
Through his work at the University of New Mexico’s Center for Teaching and Learning, Mr. Hettinga has seen firsthand how overreliance on chatbots can rob students of learning how to craft a compelling argument.
“[AI] takes away the ability to structure an argument,” he says. “You lose that crucial ability – to think, to organize a paper, to know where to present your arguments, how to formulate a thesis, [and] other crucial writing skills.”
Stanford’s Professor Lee says charting a path toward more sustainable use of AI could start with how schools approach the tools – although he acknowledges it can be difficult for educators to juggle the learning needs of dozens of students. Some teachers have already turned to old-fashioned testing methods, like asking students to put pen to paper and write essays by hand in class.
Another strategy “is to develop students’ knowledge of AI to help them learn how to use it responsibly, as well as its capabilities and limitations,” he explains.
Student surveys suggest AI chatbots have beneficial uses, too. For example, they can be a useful starting point for research because they quickly compile and summarize large amounts of information.
Ultimately, Mr. Langenbrunner, of St. John’s, says he enjoys learning and finding answers on his own – and doesn’t want to miss out on the fun.
“You know, I think [AI is] pretty boring,” he says with a laugh. “If I had to use AI to write all my papers, it would take all the fun out of it.”
