In the last Unmaking the Grade post, I was thrilled to share Noël’s reflection on how she employed “AI Tracks” in her spring course. I had suggested a similar approach in a post last fall but hadn’t had a chance to try it myself. I’m grateful that Noël and others were willing to test drive the idea before me so I could learn from their experiences! And I learned a lot.
In case you missed these earlier posts, here was my basic idea: I would ask students, at the beginning of the semester, to select a specific AI track. One track would be AI Free, meaning that students would commit to employing no AI tools in their work for the course. The other would be AI Integrated (or, as Noël termed it, “AI Friendly”), meaning that students could employ AI in limited ways and were required to disclose and reflect on their use of it.
Read more about the original idea here:
Promoting Student Autonomy and Academic Integrity in AI Use
Noël employed this approach in an asynchronous online course called “Design Thinking and Creativity.” Most students, interestingly, wanted to be AI Free at the beginning of the course, citing a variety of reasons for their choice. Unfortunately, this desire was short-lived. Soon, Noël was getting a barrage of emails from students wanting to know if particular tools or usages would “count” as generative AI.
But more than that, she found that what constitutes AI use, or even an AI tool, is a lot more slippery than it used to be. “Using AI” is no longer a simple matter of navigating to ChatGPT or another external site to prompt the chatbot and copy-and-paste its responses. Instead, AI assistance is being built, seamlessly, into applications that students use every day: Canvas,[1] Google Docs, Grammarly, Canva, library databases, etc. Sometimes, students in Noël’s class didn’t even know that they were using AI.
All these factors complicated the “AI Tracks” approach, and Noël decided she wouldn’t be using it again in the fall. Read more about Noël’s experience here:
Off the Rails: Reflections on a Semester with “AI Tracks” and Rethinking Student AI Agency
Hearing about Noël’s class, and following the ongoing work of my colleague Marc, has made me question the viability of AI Tracks. Foolishly, I think I’ll try it anyway. But I’ll go into the semester with the benefit of others’ ideas and experiences. Here are a few things I’m planning to do to try to make the approach work:

Talk with students about the proliferation of AI.
This is probably something we should do anyway, given the rapid and ongoing developments in the AI landscape. I’m not willing to accept AI as an invisible, ever-present, and irresistible part of our lives—and I don’t think students should be either. They should be aware of what applications like Google Docs are doing when they offer to “Help me write” or what’s happening when ProQuest’s Ebook Central provides a “Research Assistant” with “insights” on the book they’re reading. They should know that these applications incorporate AI, they should know how it works, and they should know where the “assistance” is coming from.
They should know these things so that they can make their own choices about when, where, and how they allow AI into their lives. I’m not saying students should necessarily refuse these tools in all circumstances. (I, personally, refuse them, but I have my own reasons.) I’m just saying students should have the knowledge to be able to make their own decisions rather than, as our tech overlords would prefer, unknowingly or passively accepting whatever kinds of AI are foisted on them, in every possible aspect of their social and academic experience.
I will ask students to refuse this kind of AI assistance, to the extent that they’re aware of it, if they choose the AI Free track. I don’t expect this will be perfect. I suspect that some students who commit to going AI Free will give in to the temptations of embedded AI from time to time. I also suspect that students will encounter AI unwittingly, and consume it uncritically, more than once over the course of the semester. Making students aware of AI’s intrusion into their lives will probably require more than a one-time, beginning-of-semester conversation.
And honestly, there’s only so much we can do. But I think we should try.
Ask students to make a commitment and stick with it.
Noël allowed her students to switch tracks throughout the semester if they preferred, rather than locking them into a specific pathway. A couple of others with whom I shared this idea have made that suggestion as well. I totally get this. Student choice is incredibly important to me. And allowing students to make their own choices about AI, on a case-by-case basis, might better prepare them for the kinds of decisions they will face regarding AI use in other classes and in situations beyond school.
However, part of the reasoning behind my desire to employ AI Tracks in the first place was to help students make more intentional decisions about AI—and to give them a pre-commitment device that could help them avoid temptation.
One thing I found in previous classes was that students didn’t really have good reasons or firm convictions behind their choice to use or refuse AI. Many students were concerned about the ways AI might negatively affect their learning and didn’t really want to use it—but they nevertheless turned to ChatGPT when they were stuck or up against a deadline. Most of the students who used AI prolifically weren’t able to articulate how it benefited their learning or affected their writing process, beyond some vague sense that it helped them write faster and use bigger words. All in all, it seemed like students were being controlled by generative AI rather than the other way around.
My reason for wanting to employ AI Tracks is, in part, to combat this lack of agency. I want to give students who are concerned about AI a pre-commitment device that holds them accountable to their AI refusal throughout the semester, even when the going gets tough. I want to give students who are interested in AI opportunities to use it intentionally and to learn from their use rather than employing it uncritically, in whatever way speeds up the process. I want to give students the satisfaction of setting a goal for their engagement with AI and sticking to it until the end of the semester.
So, I guess, paradoxically, I’m hoping that limiting students’ autonomy in switching between AI tracks actually enhances their autonomy as intentional users or refusers of AI, at least for the duration of the class. I’m not sure if this makes sense. But I hope it works.
Limit the type and extent of AI use even for students on the AI Friendly track.
I’m going to carefully prescribe ways that students on the AI Friendly track can use AI. In fact, I’m planning to make a list of specific tasks and prompts students are permitted to use on specific assignments. I know—this could very well be a fool’s errand. But hear me out.
I’m doing this because my first-year students don’t always understand the goals of the tasks I ask them to do and can’t always tell how AI might support or impede their progress toward those goals. Confusingly, goals can also change from assignment to assignment. I wouldn’t, for example, want students to use an AI reading assistant on our Rhetorical Analysis exercises, because the goal of those exercises is to improve their skills in reading comprehension, summary, and basic analysis—things a reading assistant would pretty much do for them. I wouldn’t mind, however, if they used a reading assistant to break down a complex scholarly article they wanted to cite in their Researched Argument paper. The goal of that activity is not for students to fully comprehend and analyze articles meant for experts in the field; it’s to evaluate and integrate sources to support their own arguments. While a reading assistant would impede their learning in a Rhetorical Analysis exercise, it could actually enhance their learning (by reducing extraneous cognitive load) in a Researched Argument paper.
Because of this, I’ve determined that I need to be specific about how students can or can’t use AI at different points in the course. So, that’s what I’ll try to do.
I’m aware, again, that this won’t be perfect. It’s likely that some students won’t abide by the guidelines I provide, and a set of rules alone will not persuade them to use AI in ways that support their learning. If this approach is to have any hope of success, it will have to be combined with a clear sense of the purpose and value of our assignments; a grading system that prioritizes process over product and encourages revision; a strong relationship with individual students; and probably many other pedagogical practices I haven’t thought of.
And even with all those things in place, some students will still choose to use AI in ways that shortcut their learning, often knowingly. Some may coast through the class breaking rules left and right and attempting to elude accountability. A few may even succeed. But when have students not done this? I firmly believe that most students want to learn and that we can create the conditions that help them tap into their best selves. We have a lot of control here. But we can’t make every student learn in every circumstance.
If it turns out that more students than I expect abuse the AI guidelines we have in place, I’ll reevaluate. But this didn’t happen last year, and I don’t expect it to happen this year. I know. Famous last words.
“Foreground an ethos of affiliation.”
This was one of my favorite concepts from Noël’s reflection. She writes,
This fall, I plan to foreground an ethos of affiliation, asking students to think often and deeply about how their relationships with both humans and nonhumans shape their creative work. In doing so, I align myself with scholars in writing studies like Nancy Ami, Natalie Boldt, Sara Humphreys, and Erin Kelly’s work on peer review, “No One Writes Alone,” and Marilyn M. Cooper’s ecological approach. I hope this approach will both encompass and extend beyond a GenAI literacy focus by prompting students to become aware of the relational nature of their work, whether or not they use GenAI as part of their process. I also hope to emphasize the ethical dimension of affiliation, asking students, “To whom and what do you wish to be connected? Why?”
I think this will be an important concept for my students on the AI Free and AI Friendly tracks alike. The questions of “how [students’] relationships with both humans and nonhumans shape their creative work” and “to whom and what [they] wish to be connected” are what I’m really trying to get at when I ask them to make intentional choices about AI. I suspect they won’t have thought much about this coming into the class, but I hope, by the time they leave, they can answer such questions with clarity and confidence.
So, that’s what I’m planning for the fall. It’s possible the AI Tracks idea will blow up in my face, and I will regret not sufficiently heeding the warnings of Noël’s experience and Marc’s guidance. But I hope not. I will report back when the semester’s over—as long as the outcome isn’t too embarrassing.
I’m particularly excited for next week’s post here on Unmaking the Grade. It begins a multi-part series on a topic that’s been generating a lot of interest in the alternative grading community—and it’s written with a surprise co-author. Stay tuned!
[1] Given the news about Canvas this week, I am, for once, happy to be on a Blackboard campus. Although Blackboard has its own AI issues.
I really like the idea of prohibiting specific AI use when it will impede the learning objectives of the task. That seems like a sustainable and defensible practice for most of the parties involved. Not for nothing, it might also encourage more faculty to articulate clear learning objectives to students.
I admire the thought you put into this, but there is no such thing as an "AI Free" track unless you want to prevent students from using Google. With the introduction of AI Search mode and the default AI summaries returned every time you put in a query, students are "using AI" every time they surf the web. A lot of what you are describing here I tried to get at in my most recent post: even the students who aren't "cheating" are still using AI in ways they simply do not consider to be cheating. No AI vs. some AI vs. a lot of AI is really, really tricky. I hope it goes well.
https://fitzyhistory.substack.com/p/what-about-the-students-who-dont