No Artificial Ingredients: Higher Education Grapples with AI
(Illustration/Isabel Hou)
By Sophia Petros and Pranav Mehta
It was nearly 10:00 PM on a Thursday in late May. Exams had ended the week before, and I was packing for my five-year college reunion when an email popped up with the subject line “URGENT: policy memo.”
Dear all,
I am very sorry but 68% of your policy memo appears to have been written (not edited) by AI. Please confirm.
There is no witch hunt. I can assure you that there will be no other academic consequences other than getting a somewhat lower grade in the policy memo (0 points in the non-AI grade component).
I am just looking for honesty here and I don't want it to escalate. If you want to contest it, please give me access to the file history of your google doc.
I would need to clarify this as soon as possible. So sorry for the stress. I still like your group very much.
Many thanks,
Shocked, I immediately messaged our homework group chat. Three out of the four of us were awake. Within minutes we drafted a reply, shared our edit history, and attached screenshots to show the work was ours.
I was not nervous. One teammate had written the first draft, which I had then almost entirely re-written. Over the week, we refined language, added in a technical annex, made a few more topical edits, and then turned it in.
If anything, I was offended that my writing sounded so much like AI. The flagged sentences were my own painstakingly edited words. I even made a joke to this effect in my email. Then I hit send, turned on music, and finished cleaning my apartment.
-
What once required a jumble of notebooks, drafts, and late-night revisions can now unfold in a single exchange: an algorithm suggesting a structure, refining a sentence, offering a citation, even shaping the rhythm of your prose. The process feels intimate. Ideas arrive faster than you can evaluate them, sentences appear already balanced, and the mechanical act of writing begins to resemble collaboration.
Yet when creativity becomes a dialogue with a model trained on millions of voices, whose voice remains at the end of the page: yours, or the machine’s composite memory of everyone else? From Lehman to libraries around the world, the familiar white screen asking “How can I help you today?” is ubiquitous.
The late-2022 debut of OpenAI’s ChatGPT sent shockwaves through schools and universities worldwide. Educators suddenly faced a simple but vexing question: will generative AI tools enhance student learning, or will they hollow out the very meaning of education?
Early reactions tended toward stark extremes. Some faculty saw AI-generated work as an existential threat to academic integrity and scrambled to “ChatGPT-proof” their courses. Assignments once done as take-home essays were redesigned as in-class handwritten tasks, oral exams, or group projects that a chatbot couldn’t easily handle. A Northern Michigan University professor, alarmed after catching a student’s AI-written essay, vowed to require that first drafts be proctored. Even a few top universities initially flirted with bans. In the UK, for example, Oxford and Cambridge reportedly prohibited ChatGPT in exams at first, fearing a plagiarism epidemic.
But outright bans proved tricky in higher education. Many university leaders doubted that a blanket prohibition would be effective, given that students could still access AI at home. As the University of Florida’s provost put it early on, “This isn’t going to be the last innovation we have to deal with,” so better to adapt pedagogy than attempt a futile injunction.
A fraught atmosphere prevailed in these early months: suspicion, rapid rule-making, and experimental policing of AI often outpaced clarity or fairness. Faculty-student trust was strained by the feeling of an “arms race” between ever-smarter cheats and ever-flimsier detectors. Amidst the chaos, one thing was clear: higher education would never be the same.
-
At 11:30pm, I put down the vacuum. Another email pinged:
From what I can tell, you all started with 0% AI. Then, on April 14 at 8:02pm you switched to 43% AI.
He pasted direct quotes of my writing, flagged as AI with high, medium, and low confidence. Then he added:
I have however seen that you all started from original work. Unless you find some other convincing evidence, I can give you 50 (rtaher [sic] than 0) in the non-AI portion that counts for 15% of the memo grade.
Odd. The professor seemed to ignore the original document’s edit history—the week of rewrites, the comments showing where we had rephrased. Instead, he was working off the clean copy we had made on deadline day.
What stung more was seeing my own writing spat back at me as if it weren’t mine. One “high confidence” flag landed on a sentence where I remembered reaching for a thesaurus to find the right word. For others, I remembered the satisfaction of improving the tone and clarity of my partner’s original writing, sanding down the rough draft into our final product.
I messaged the group. Only one teammate was awake. Together, we drafted replies, pointing to specific edits and screenshots. But as I wrote the email, I realized how shaky our proof was. Edit history can only prove originality to an extent—who’s to say a student didn’t paste in AI-generated text line by line? And if, like me, they’ve never logged into ChatGPT, then there’s no record of AI usage. I had no access to dispositive evidence. Sliding into this unhappy train of thought, I sent my email off at 12:18am and went to bed.
-
At the outset, quite a few students saw ChatGPT as simply a convenient new tool, or an aid for low-stakes tasks: brainstorming ideas, generating outlines, translating or rephrasing text, even debugging code. One Brown University student argued it was “the research equivalent of Grammarly,” and that labeling its use as cheating was “absurd… like saying using the internet to conduct research is unethical.”
But soon, it became obvious that AI could generate an entire assignment: a student could simply input the prompt and sources and ask the software to write a history paper for them. The fear of blatant plagiarism reigned across classrooms, and many professors assumed that this was the primary use, and danger, of the technology. While some students use AI in this way, this kind of black-and-white plagiarism is no longer the norm.
As students experimented more with AI, their relationship with the tool progressed. A recent New Yorker article delved into how often, and in what ways, students are using AI in their studies, citing data from a prominent AI company showing that more than 50% of user interactions were “collaborative,” with the user engaging in a back-and-forth with the software.
Student testimony backs up the data. Matt Perricone, a second-year SIPA student, said he uses ChatGPT as an “interactive way to practice for exams”: he uploads slides and study materials, and then asks it to produce questions to test him on the material. Last time, he got lucky: one of the questions appeared almost verbatim on the exam. Jack, another second-year, said he uses the program to proofread for grammar and fluency when “the flow is not flowing,” but he never copies and pastes directly into his document.
Using AI as an amalgam of assistant, brainstorming partner, editor, and occasional emotional support is not even a particularly new idea, despite all the current fervor about it. Vera Nabokov, the wife of famed author Vladimir Nabokov, was her husband’s agent, typist, translator, muse, and teaching assistant, among other things. She is not an anomaly. The list of authors with similar arrangements spans from William Wordsworth to Dan Brown; no one questions their authenticity. Indeed, the tech industry underscored this dynamic of the unseen, often female, helper when it gave its flagship digital assistants, like Siri and Alexa, female voices by default. The difference with AI, perhaps, is that anyone can use it and everyone knows that, but still no one can definitively detect it.
AI has begun creeping not only into students’ written assignments, but also into oral contributions and social interactions. In one troubling scene, Jack, the second-year SIPA student, was sitting in the back of a class as a TA when he saw a student in front of him recording the lecture into ChatGPT. At one point, another student asked the professor a question, which she also recorded. Then, blithely ignoring the professor’s response, she asked ChatGPT to answer her peer’s question. Despite being physically present, this student’s entire classroom experience was filtered through artificial intelligence. And, Jack added, “I’m sure she had no one’s consent for any of this.”
Even more shockingly, students and their peers are increasingly using AI to enhance ordinary conversation in social settings where nothing is graded. For example, at orientation, I conversed with a peer who engaged me in a debate while seamlessly pulling up ChatGPT on his phone, asking it a question, and then integrating its output into his argument. Had I not happened to see his phone screen, I would never have known what he was doing. When I expressed surprise at his pirating AI’s ideas and passing them off as his own thoughts, he asked me, “Well, what even is original anymore?”
Perhaps this blurry line between the real and the artificial is less surprising given the controversial “AI friend,” whose ads have been blanketing the New York subway this October. Despite a cacophony of misgivings about the rapid integration of AI into everyday life, this trend doesn’t seem to be slowing down any time soon.
Ultimately, even if we make every possible effort to avoid AI, we cannot escape inhaling its second-hand smoke from our peers, in slightly regurgitated written or oral form. At some point, we, like machines, will learn from these outputs and channel their thought patterns into our own outputs. And how on earth is a professor supposed to monitor that?
-
The next morning, our group had two updates. First, the teammate who had written the initial draft said he sometimes used ChatGPT—but only as a starting point, and he later edited the text himself. It was unclear whether he used it for this assignment or not, but either way, it was irrelevant here: all of the flagged phrases were my writing, not his.
Second, another teammate explained he had used a translator for some phrases in the final draft, since he is not a native English speaker. Just before submission, he had run a number of phrases through a tool to make them sound more “American.” He explained this to the professor in a follow-up email.
Both cases raised interesting dilemmas about using AI in school. Three of our four group members were not native English speakers. Translators like DeepL and Google Translate are now powered by AI. Does using one to smooth phrasing count as a violation of an AI policy? In a French language class, clearly yes. But in statistics? Economics?
Then there’s drafting. If a student uses AI to generate an outline or entirely rewrites a shoddy first draft, is that “using AI”? Our rubric had a single line: “Evidence of original work (no AI allowed) (15%).” Every Google search brings AI summaries; word processors quietly suggest rewrites. It is now nearly impossible to completely avoid AI.
In his email, the professor said: “68% of your policy memo appears to have been written (not edited) by AI.” By that standard, we should have passed. The writing was original. It was mine. But the accusation foregrounded the central tension: how does one acknowledge the omnipresence of AI while still preserving the integrity of the learning process?
-
By mid-2023, these murky boundaries surfaced in global discussions on AI: the narrative shifted from “never use AI” to “use AI, but responsibly.” Educators and policymakers around the globe began acknowledging that generative AI will be a permanent fixture in students’ lives – and that schools must develop new norms of use rather than cling to prohibition. In Singapore, for example, the Ministry of Education issued teachers advice on how to harness AI for lesson planning and student support while reinforcing existing rules on plagiarism.
Concrete policy changes reflect this new ethos. Many universities now permit students to use AI-based tools for certain tasks, so long as they disclose it. At Columbia, for instance, newly published guidelines instruct that generative AI should not be used on assignments or exams unless an instructor explicitly allows it, and any AI assistance must be cited or acknowledged. Some SIPA instructors now explicitly ask students to detail any AI tools used and to provide commentary on how AI shaped (or didn’t shape) their thinking.
This puts AI usage in the same category as getting help from a person: it’s not inherently wrong, but doing so behind the instructor’s back violates the honor code. Cambridge University took a similar stance: students may employ AI for research or study support, but not to compose any part of graded coursework without permission.
What we see across these examples is a coalescing principle: transparency. In just a few years, the norm has shifted from outright prohibition to openness. Using ChatGPT to jump-start your ideas or polish your grammar is not necessarily cheating, provided you are upfront about it. The bright line of “don’t use it at all” has given way to a more nuanced expectation of disclosure.
In tandem with disclosure rules, educators are redesigning assessments to encourage smart AI use and higher-order thinking. As one AI ethics lecturer quipped, if ChatGPT can do a student’s entire assignment, that might say more about the assignment than about the student. Professors report leaning into assignments that require reflection on process, not just product. They might ask students to submit an appendix showing their brainstorming notes, drafts, or even the ChatGPT prompts they tried, in order to make the learning process visible.
Used carefully, AI can be a partner in learning rather than a temptation for shortcutting. SIPA students Varun Thotli and Matt Perricone both agreed that the best approach they’ve encountered comes from professors who acknowledge AI as a tool, stress transparency, and intentionally incorporate it into assignments. “There’s a fine line,” Matt said. “AI can enhance your writing, but it shouldn’t be doing your work for you.”
-
On the subway back from my Friday morning run, I checked my phone. Another email from the professor had arrived:
I am so sorry for the experience.
It is very difficult for me to be transitive [sic] to all students, but I also very much want to give the benefit of the doubt to all students. Your case is also super hard because you have two google docs.
Let's go backwards. I give you the portions of your submitted version that have been written by AI according to the detector and you tell me with screenshots where the non-AI original version originated from your two google docs. We also need to work on this expectedly [sic] because I need to submit the grades asap. YOu [sic] have time until 12pm today.
The disbelief was frustrating, as was the deadline. We had turned in this memo in mid-April; it was now late May, after the final exam for this class.
Soon after, my other group members woke up. They rushed to match each flagged sentence to earlier drafts, showing multiple rounds of edits. They sent a reply. I boarded my train.
-
Ironically, the very tools meant to shore up trust have often ended up undermining it. A Stanford study in mid-2023 found that popular GPT detectors falsely flagged over half of non-native English speakers’ essays as “AI-generated,” while correctly passing essays by native speakers. Because such detectors tend to flag simpler, more predictable prose, students writing in plainer English, including many ESL students, were disproportionately accused of letting a bot do their work. An article in Inside Higher Ed last year came with the sobering tagline, “Mixed performance by AI-detector tools leaves academics with no clear answers.”
Columbia University’s own Generative AI Policy reflects this skepticism, noting that “As with any form of detection software, there are risks of misidentification [with AI detection tools], which can have consequences in the classroom,” and ultimately concluding that “AI detection should be treated as a guideline and not a grading metric.” This leaves a tricky conundrum for professors, who struggle to know if their students are mastering the material.
One response is to avoid the question entirely and, in the words of Seattle University Professor Thomas Mann, “go back to the Medieval ages” by shifting to fully in-person assignments. While the quality of writing might not match that of a take-home paper, at least this way professors feel that they are teaching students how to think and grapple with the material. Others, like SIPA Professor Stephen Biddle, have eliminated writing assignments altogether and moved instead to a 90-minute oral examination.
Another widely reported extreme, favored by clever but punitive-minded professors, is inserting “Trojan horses” into writing assignments: instructions hidden in white text, such as “mention Finland” or “broccoli,” that a chatbot will dutifully work into its output. But again, this will only catch the flagrant cheaters who neglect to read their paper over once before turning it in.
Perhaps the best method is something in the middle. Most professors don’t want to be AI police; they just want students to learn. And realistically, those students will soon turn into professionals who use AI in the workplace. There is still value, however, in learning the underlying thought process and method of producing written and oral arguments before deciding how and when to incorporate AI into that process. Written assignments that combine in-class drafts with at-home revisions could be a step in the right direction, as could pairing written materials with oral presentations, while still flagging outright plagiarists.
As higher education experiments with how best to integrate AI into the learning system, students and educators alike are the guinea pigs. “We’re all in this project together,” said Anya Schiffrin, co-director of the Technology Policy & Innovation concentration at SIPA, “and we have not figured out the way ahead.”
-
Not an hour after I left the station, I received a response. It read:
Thank you so much for doing this. I very much appreciate this.
I took a look at what you showed me, but several of the original parts appear to have also been AI generated.
I think that it is impossible to be objective here (but I very much understand what you all have been telling me, and I believe you).
I however feel that I do not have enough information to eliminate this slight penalty (50 rather than 96 on the no AI requirement) as I have been consistent with what I did with other groups.
I very much respect your hard work on the memo, because all the evidence shows that you did.
This is a lesson for me as well.
Here, after a strongly worded email response on my part, the story ended. Looking back, I can empathize more with the professor; just as we inherently trust ChatGPT’s confident tone, it’s hard to question a detector that presents its verdict in percentages and probabilities. The harder question, then, is how to break out of this feedback loop of misplaced certainty and mutual distrust.
Gnarly disputes surrounding the use and monitoring of artificial intelligence will continue to arise between students and professors. Everyone is in a tough position. If a professor suspects a student of illicit AI use, grading them normally feels unfair to students who followed the rules. Yet in all but the most blatant cases, definitive proof is impossible.
These flashpoints, however, distract from the larger question. In the age of AI, how do we ensure learning at university? When asked for permission to quote the emails in this article, my former professor told me that over the last two years, students’ average homework grades have significantly improved while their exam grades have markedly declined. The efficiency enabled by AI could be leading to a significant decline in academic learning.
But as the experiences catalogued in this article demonstrate, AI can be used to augment learning just as much as it can be used for its detriment. We must learn to write, think, and teach alongside our algorithms, not through them. And to do so, students and educators will have to redesign the structure of higher education—together.