Happy end-of-semester, for those of you who have reached that point. I know it’s rough out there, so it may be hard to celebrate. But I hope the next few months bring you some small joys before things start to ramp up again in August.
In the coming weeks, I’ll be working on a few summer projects. The biggest one is preparing for The Grading Conference, an event for which I serve on the organizing committee. This annual virtual gathering for alternative graders has historically been STEM-focused, but last year it opened its doors to all disciplines. It’s a great (and affordable) place to get practical inspiration for new grading systems, and I hope you’ll register to join us. The conference takes place online June 11-13.
My more immediate project is co-facilitating an AI Institute for instructors at my own university in late May. It’s directed by a colleague whose Substack, Rhetorica, is the best place to keep up with new developments in generative AI and how they affect education. Day 1 of the Institute, the day I’m most involved with, is focused on “Practical Strategies for Curbing AI Misuse.” For this and other reasons, I’ve been reading a lot of recent pieces about AI in the classroom.
AI misuse: yes, it’s a structural problem
This week, we (and by “we,” I mean everyone on Bluesky) are all talking about a piece by James D. Walsh for NY Mag’s Intelligencer: “Everyone Is Cheating Their Way Through College.” It paints a bleak picture, and I don’t really recommend it if you’re already on the brink of panic or despair after a rough semester.
My unscientific impression is that Walsh exaggerates the number of students who are intentionally or unintentionally cheating themselves out of an education. Note the title’s suggestion that “everyone” is doing this, a clickbait-y claim that the article itself does little to contradict. In my own experience, more students than we think are genuinely trying and genuinely want to learn.
But of course, the longer these students see their peers skating through college getting As and Bs for work they didn’t do, the harder it will be for them to resist temptation—especially if maintaining their integrity results in lower grades, forces them to sacrifice their mental health, or limits their ability to live their lives outside of school.
What I like about the piece, though, is that it pretty clearly points us toward the real problem, which is not generative AI per se. Here’s my favorite passage:
The ideal of college as a place of intellectual growth, where students engage with deep, profound ideas, was gone long before ChatGPT. The combination of high costs and a winner-takes-all economy had already made it feel transactional, a means to an end…In a way, the speed and ease with which AI proved itself able to do college-level work simply exposed the rot at the core. “How can we expect them to grasp what education means when we, as educators, haven’t begun to undo the years of cognitive and spiritual damage inflicted by a society that treats schooling as a means to a high-paying job, maybe some social status, but nothing more?” [Troy] Jollimore wrote in a recent essay. “Or, worse, to see it as bearing no value at all, as if it were a kind of confidence trick, an elaborate sham?”
I’m getting tired of saying it, but: there is no way out of this that does not involve students understanding the value of the work we ask them to do and actually wanting to do it. Unfortunately, as the article suggests, the biggest obstacle to this goal is structural. It’s the fact that students have been conditioned to see education as a transaction, a series of tokens to be exchanged for a credential, which can then be exchanged for a high-paying job—in an economy where such jobs are harder and harder to come by.
What’s to be done? Structural problems require structural solutions. Many of us know by now that the only real and sustainable way to solve this problem is a wholesale restructuring of society, and of higher education. Smaller classes, in which students and teachers can form deeper connections and have real conversations about the purpose of an education. More training, time, and support for teachers. No grades, or at least a fundamental shift in how they’re thought of and awarded. Changes to how college is marketed to students and to the messages they receive about it at orientation. More public funding for higher ed, to enable all of this to happen!
The fact that none of this has even been floated by anyone with the power to enact it suggests that actual learning (as opposed to degree completion, I guess) is very low on our list of priorities. In the predictable absence of strong leadership here, what can we do—besides, of course, advocating for structural change?
Some strategies that do not involve the wholesale restructuring of higher education
I think the best place to start is attempting to “undo the years of cognitive and spiritual damage inflicted by a society that treats schooling as a means to a high-paying job,” as Jollimore suggests. “Oh, is that all?” I can hear you saying.
I’m not sure whether I ought to be ashamed or proud of this, but undoing damage around schooling is basically the one true goal of my first-year writing class, the thing I spend the most cognitive energy on. The writing is kind of incidental. I used to feel guilty about that, but I don’t anymore. Students can’t learn anything, including how to write, until they adopt new mindsets about their education.
The second, and related, thing I focus on is helping them find purpose and satisfaction in the work we do. Again: they have to understand why learning has value. They don’t have to enjoy it all the time, but they fundamentally have to want to do it. That means, of course, I have to put a lot of effort into relationships, motivation, engagement, etc.
What does all that look like?
Removing grades, to the extent possible
Evaluating students, when needed, based on process and growth, not just products and performance
Having frequent, explicit, and honest dialogue with students about their experiences in school and suggesting new ways of thinking about it
Having frequent, explicit, and honest dialogue about AI, what it’s good for, what it’s not good for, how it could support their development, how it could impede their development
Having frequent, explicit, and honest dialogue about the reasons people learn to write and what they get out of it
Overexplaining the purpose of every task and how engaging in it benefits students now, in their everyday lives, rather than in some indeterminate future
Asking students to devise projects that have audiences and purposes beyond the classroom, things they might actually want to disseminate
Making as many personal connections with and among students as possible
Promoting trust wherever possible to ensure that students will come to me with any issues they’re having before turning to AI
Providing as much student choice and autonomy as possible
Engaging in democratic decision-making processes as a class, particularly around AI use policies and consequences for violating those policies
Requiring metacognitive reflection on writing but also on students’ development as learners and as human beings
This genuinely works in my classes, for the most part. I don’t think people believe me when I say it, but it’s true.
I’m not naive enough to think, however, that such strategies are enough in every case. For one thing, it’s more difficult to implement them effectively in large classes or when instructors are low on time. And even when students are entirely bought in, they can still find it difficult to resist the temptations of generative AI during moments of overwhelm, underconfidence, or desperation. Moreover, I think as AI use becomes more widespread, relying solely on these strategies may become less effective.
Some strategies to augment the backbreaking work of creating student buy-in
So, what safeguards or accountability measures can we put in place to ensure that students aren’t using AI? Or maybe I should say, what can we put in place without guilt? I could easily resort to assessing students exclusively through blue books, in-class essays, and oral exams. I could also use anti-cheating technologies and plagiarism or AI detectors.
The problem is that I don’t want to use these things. They feel artificial—when was the last time you did any timed writing outside of school? They prioritize skills I don’t particularly care about, like the ability to perform under pressure. They often exacerbate inequities. They feel like surveillance and punishment, not like learning. I don’t necessarily blame instructors who employ them, and I can see some situations in which in-class assessments, specifically, can be useful. But none of this feels aligned with my personal values as an educator or my goals for student learning.
Here’s what I have done—or am considering—instead:
Committing to AI-free or AI-integrated experiences
This is something I hatched last November and am planning to implement, in some way, for my fall class. The basic idea is that students will choose, at the beginning of the semester, whether they want to commit to an AI-free experience or an AI-integrated experience. In the former case, students will work with me to develop potential consequences for breaking their commitment. In the latter case, students will complete assignments using AI in targeted ways, submit their AI chatlogs along with their assignments, and compose brief reflections on the experience of writing with AI.
An occasional and revisable in-class assessment
Last year, I asked students to do regular rhetorical analysis exercises in the first half of the semester, something I’ve written about before. These were all done out of class. My idea for next semester is that students will do regular practice out of class (and get feedback on their practice attempts). But this practice will lead up to an in-class assessment in which they’re presented with a short article they haven’t yet read and have 75 minutes to complete a rhetorical analysis worksheet on it. Importantly, they’ll be able to revise their work, after more feedback from me, for inclusion in a final portfolio.
More in-class writing time
Abandoning long-form writing and research would be abdicating my responsibilities as a writing teacher. So, students will still write an argument paper, intended to be crafted mostly outside of class. But I’ll also provide lots of class time to write and to ask questions when they get stuck. I think if they get started on the work in class, they’re less likely to turn to ChatGPT outside of class.
Process reflection (not tracking)
There’s been some recent debate in my social media circles about process tracking: i.e., monitoring the process by which students write their essays to ensure that there is a process rather than just a copy-and-paste. You can do this kind of thing with certain Grammarly tools, I believe, or simply by consulting the “Version History” of a Google doc. I don’t love the idea of surveilling the process students use to write—it feels like an invasion of privacy. I do love the idea of asking students to access their own version histories and use those histories to reflect on and narrate their process to me. Is this fully AI-proof? No. Can I discover more or less instantly if students are being dishonest about their process? Very likely.
Body doubling
This is kind of a unique one. I first learned about body doubling, a productivity strategy for people with ADHD, from my colleague Liz Norell. The idea is that if you’re struggling to focus on a task, having another person in the room alongside you completing a similar task can help you minimize distractions and get work done. I think a lot of students turn to AI the night before an assignment is due because they’ve procrastinated, they can’t focus, and they’re in a bind. Offering opportunities to practice body doubling—say, by inviting students to come work alongside me during office hours or pairing them up for out-of-class work sessions—might serve as a good accountability measure. And it might be a good solution for students who don’t think they can resist the allure of AI when left to their own devices.
The thing is: I don’t think any of these safeguards work without all the other buy-in and relationship stuff. If we jump straight to “securing assessments,” we immediately signal our distrust of students and put ourselves in an adversarial relationship from the start. As soon as students perceive that we are their enemies, we’re back in the academic integrity arms race: we devise ever-more complex mechanisms to catch student cheating, and they devise ever-more ingenious workarounds, over and over.
We have got to break this cycle. For our sakes, for our students’ sakes, for the future of higher education, and the future of society at large.
Seeing the big picture
So now that I’ve written all this, I think I’ve figured out what’s been bugging me about our AI discourse right now. It’s that everybody is focused on one category of solutions to a complex problem. Some people are insisting that students are fundamentally uninterested and dishonest, and that instituting an anti-cheating surveillance state in our classrooms (or students’ dorms, I guess) is the only way forward. Some people are arguing that if we only got better at motivating students and designing AI-proof assignments, the problem would be solved. Some people are suggesting that this is a structural problem that cannot be addressed in any meaningful way by individual teachers.
Every picture presented here, however, is incomplete. Here’s what I think:
We should lean heavily into authenticity, relationships, and student motivation as a first step to addressing the problem in our individual classrooms. We should explore additional accountability measures, if they’re needed, with students, keeping in mind that they have a right to privacy and should be given the opportunity to show what they know in ways that work for them. And above all of this, we should engage students and colleagues in conversation about the bigger issues and advocate hard for structural change, because the suggestions above may alleviate the symptom of AI misuse, but they won’t cure the underlying disease.
We need a wholesale transformation of higher ed. We need to improve and adapt our teaching methods now that AI is in the world. We need (equitable, non-punitive, anti-surveillance) safeguards and accountability measures for when student motivation is simply not enough. All these things can be true. And they all need to happen in tandem if we’re going to survive this catastrophic era in higher ed.
After some time away from the classroom (a stretch dating back to before smartphones), I was asked a couple of years ago to teach first-year composition at a small local liberal-arts university, and my experience led me to the same conclusion you've come to: there are structural problems that have created an educational environment in which AI is like gas added to a fire (or maybe like acid added to stone). My response has been to experiment with many of the same strategies you mention, including encouraging skepticism about grades and the grading process. The current hold of the transactional mindset may be well illustrated by a student's response to my discussion of grading: "Professor Wiley, you might not like grades, but we students do."
Of course, the transactional mindset is nothing new. It was well in place when I started teaching in the mid-1980s, and it has complex origins that aren't, strictly speaking, technological; it needs broader discussion outside the current "AI" debates. The blindness of academic leadership to the structural issues and the reluctance to discuss them are very frustrating and bode ill for the future of higher ed.
I can suggest three readings for those interested.
First, "What are You Going to Do With That?: The Future of College in the Asset Economy" (Harper's Magazine 9/24) by Erik Baker, an instructor at Harvard who addresses the history of the transactional mindset and where it may lead.
Second, "The End of Education: Redefining the Value of School" by Neil Postman (Vintage, 1995) that warns about the problems of what Postman calls "The God of Economic Utility" as the core purpose of education. He proposes some interesting alternative purposes/gods that are provocative.
Third, "Teaching as a Subversive Activity" (Delta, 1969) by Neil Postman and Charles Weingartner is still well worth reading.
I love this breakdown and how empathetic it is.
I’m also letting students design their own tracks for AI-free and AI-friendly assignments, though I give them more room to take a middle ground and to pivot throughout the semester if they want.