I’ve been pleasantly surprised by the relatively low levels of AI misuse I’ve encountered this semester. That’s not to say there haven’t been any problems; I have had a couple of cases that took quite a bit of time and patience to resolve. Also, the semester’s not over yet, so I’m knocking on wood. But even still: given the ready availability of these tools and how easily they can churn out passable attempts at my assignments, it’s a wonder that more students aren’t employing them.
I attribute this to (among other things) the integrity of my students themselves and the fact that my grading system prioritizes process over product, allowing students to revise and resubmit their work multiple times and evaluating them based on their growth as much as their performance. With these grade guardrails in place, it seems that only students who are deeply disinvested in their education, or in otherwise desperate circumstances, are willing to cheat with AI.
That said: I’m concerned that if and when use of these tools becomes more widespread, more students will be tempted to outsource their work to AI—and will get better at doing so. I want to be prepared for that contingency, so I’ve been thinking a bit more deeply about how I want to approach AI in future classes.
I’m reluctant to ban AI use, since I think experimenting with, and critically examining, AI tools will be important for students’ future as writers. If they write at all after leaving my class, chances are some will want to incorporate AI into their practice. I’d like them to take a thoughtful approach to it when they do.
I’m equally reluctant, however, to require use of AI. Many (perhaps most) first-year students still need to develop some fundamental reading, writing, and thinking skills that extensive use of AI would only impede. Students also need to understand their own writing process before they can understand how AI affects that process. While some students have reached a point where they can engage in this kind of metacognitive work, others have not. And of course this is to say nothing of the fact that some students simply don’t want to use AI or have concerns (as I do) about intellectual property, the environment, or a whole host of other ethical issues raised by this technology.
Because of these differing needs and desires, I’ve been letting students make their own choices about AI use, with the help of some guidelines that we created as a class at the beginning of the semester. For the most part, this is working fine. But I’ve noticed that many students’ decisions about AI aren’t very intentional, and they don’t reflect on those decisions on their own. Whether students use AI or not, there’s little critical exploration happening.
So, the problem is this: how can I help individual students make informed and intentional decisions about their use of AI and, if they do use it, how can I help them reflect critically on their use?
Here’s one possible solution:
What if I asked each student, at the beginning of the semester, to commit to one of two AI “tracks”? In track 1, AI use is strictly off limits: this is a fully AI-free learning experience and any use of AI would result in a potentially uncomfortable conversation with me. In track 2, AI use is integrated, with the requirement that students transparently share the details of their use and regularly reflect on it.
This idea was inspired by something I read recently in the R3 newsletter. In a September post, its author featured an academic article on student motivation: “Choosing to learn: The importance of student autonomy in higher education,” by Simon Cullen and Daniel Oppenheimer. The article contains two studies:
In the first study, students were given the choice to opt in to a mandatory course attendance policy, under which they would earn a 3% grade bonus for missing no more than three discussion sections and incur a 3% grade penalty for missing more than three. If they chose not to opt in, attendance would not affect their grade.
In the second study, students were offered the choice between highly demanding problem sets or less demanding essay questions as homework. They could switch from one to the other at any time up until the midterm.
The results? Researchers found that most students (more than 85%!) chose the more rigorous options. More importantly, compared to control groups who were automatically stuck with a mandatory attendance policy and highly demanding homework, students who had the chance to opt in to these conditions attended class more reliably, spent more time on their homework, and improved the quality of their work more consistently.
The power of choice, right? When students were offered the opportunity to make an intentional decision about an action that would benefit their learning, they chose well—and they were more committed to that choice than students who didn’t have a say.
What if we did the same thing with AI? If the results of the two studies above are any indication, I think students who formally opted in to an AI-free experience would be more committed to that experience than if we made that decision for them.
For students who opted to integrate AI into their learning, I could require more transparency and offer more structured pathways for critical engagement. For example, I might ask that they…
Vary their use of AI for each assignment to encourage experimentation. If they use it to generate an outline for one assignment, they might use it to edit/proofread a subsequent one. If they use ChatGPT on one assignment, they might use Grammarly’s writing assistant on another.
Disclose the details of each AI use, either by including links to their chatlogs or by writing a detailed description of their methods.
Write a brief reflection after each assignment on how AI supported or impeded their learning for that assignment.
Share their findings with the class at the end of the semester.
The main benefit of this system would be that students have both autonomy and structure, two conditions that might help them make more intentional choices about AI use and commit to those choices more firmly. It would also allow me to provide clearer guidelines around AI use and to set higher expectations for the reflection attached to it, since students would know ahead of time what they’re getting into. Reading those reflections, in turn, would give me a better sense of how students are using AI and lead to more in-depth, critical conversations with the students who are interested in having them.
One potential drawback might be that some students who opt in to the AI track haven’t quite built the reading, writing, or metacognitive skills required to usefully engage with it. However, based on my experience so far, I doubt that many students would choose the AI option—especially since it would require some extra reflective work and potentially even a presentation to the class about what they learned from using AI.
What do you think? Does this choice of tracks sound like a good way to encourage more intentional decisions about, and critical reflection on, AI? What problems have I failed to anticipate? And how are you handling AI in your own classes? I’d love to hear from you in the comments below.
In the K12 world, students’ still-developing content skills are the biggest hurdle to effectively employing similar concepts. The challenge is that there’s a real need to do so, since students are already integrating AI tools into their writing, which makes those instances all the more obvious when they occur. Effectively engaging students in those conversations is tricky business, since the reflective capacity required just isn’t there yet. It’s an evolving dynamic, and adolescent development remains the biggest obstacle to engaging K12 students in the conversation. But we’re working on ways to do so! I am, anyway...