Off the Rails: Reflections on a Semester with “AI Tracks” and Rethinking Student AI Agency
A guest post by Noël Ingram
I’m excited to share something a little different today. Regular readers may recall that late last year, I floated an untested idea for “AI Tracks” in the writing classroom:
Students would commit to one of two tracks at the beginning of the semester. “In track 1,” I wrote, “AI use is strictly off limits…In track 2, AI use is integrated, with the requirement that students transparently share the details of their use and regularly reflect on it.”
Because I didn’t teach in the spring, I didn’t have a chance to try out the idea. But I heard from a couple of educators who did!
One of those educators was Noël Ingram. Noël is a teacher-scholar specializing in feminist rhetorics, literacy studies, and digital pedagogies. She works as the Digital Teaching Programs Administrator at Boston College’s Center for Digital Innovation in Learning and is a PhD candidate in the English Department.
I reached out to ask Noël if she would be willing to share some reflections on her semester trying “AI tracks,” and she was generous enough to agree. So, without further ado, here’s Noël:
When I decided to experiment with the approach Emily Pitts Donahoe outlined in her post “Promoting Student Autonomy and Academic Integrity in AI Use: An (untested) idea” with my online, asynchronous Design Thinking and Creativity class, I wasn’t quite sure what to expect.
I was drawn to this idea because of how easily I could incorporate it into many practices I already use in the classroom. As a former K-12 educator trained in Project-Based Learning, I center “student voice and choice” in my curriculum development. I always have students set personal goals for their learning in my class at the start of each semester, detailing what they want to learn, how they will personally define success, and how they hope to grow and change as a result of our learning together. Choosing a “track” for their AI engagement, I thought, seemed like a natural extension of this preliminary intention-setting assignment. Asking students who used GenAI to reflect on their use, including evidence, also aligned nicely with reflective and metacognitive practices I’ve been using for years.
I added a question asking students to choose if they wanted to be “AI Free” or “AI Friendly” in their work for this class and provide a short explanation for why they made that choice. Those who committed to being “AI Friendly” would also commit to submitting a reflection on how they used GenAI, including evidence, for each assignment. My framing of the “tracks” borrows language from work we’ve done at Boston College’s Center for Digital Innovation in Learning (CDIL) around “zones of engagement” with GenAI. To date, our work with faculty has primarily focused on discussing GenAI at the assignment level, with any framing driven by faculty. I wanted to explore what it might look like to give students, rather than faculty, that agency.
I was curious not only to see what students would choose but also to understand their rationale for the choices they made. Clickbait dominates much of what has been published about students’ use of GenAI, with headlines like “Everyone is Cheating Their Way Through College” or “There’s a Good Chance Your Kid Uses AI to Cheat.” These perspectives don’t offer much for me as an educator. Sure, there are going to be people who game the system, but that’s nothing new. Overwhelmingly, my experience with students at every level I’ve taught has been that they ultimately want to learn and are trying their best in an imperfect system while balancing their coursework with their lives outside the classroom as whole human beings. I eagerly awaited submissions of this first assignment, expecting a range of choices and reasons for those choices.
I was hoping to gain a deeper understanding of the diverse and nuanced ways students are currently engaging with GenAI by introducing these “tracks.” At that point, I’d had limited insight based only on what a few students felt comfortable sharing with me or in cases of fairly egregious misuse. I was thus surprised and a little disappointed that all but one student in my class decided to be “AI Free” for the course. Students cited concerns over environmental impacts, learning loss, and their motivations for signing up for the course in the first place as reasons for their decision to remain “AI Free.” Several students reflected on how they thought it would be antithetical to use GenAI in a course centered on creativity.
“Does This Count?”: Developing Metacognitive Abilities
I tried to underscore to students that they were not “stuck” on any “track.” They could (and I would welcome them to!) switch their orientation to GenAI in my class at any time. However, in practice, this resulted in an absolutely flooded inbox of emails from students that all pretty much boiled down to one question: “Does [this] count as GenAI?” I also started noticing bits of language that felt as if they were AI-generated due to mismatches and inconsistencies in writing voice across types of communication, such as what was written in an email versus what was submitted as an assignment.
When I left comments on these students’ work, noting this, I often received responses expressing confusion, paired with effusive apologies. With the exception of one student who consistently submitted work that, for example, reflected on assignments or other aspects of the class that were completely fabricated, I do believe that these students were completing their work in good faith. I don’t think they were trying to “cheat,” and having to reassure my students and clarify how I would like them to share their creative process with me revealed that I had unintentionally internalized a mindset of GenAI use that I believe is unsustainable given the pace of technological advancement and ultimately harmful to learning.
I realized that when I conceptualized these tracks, I was assuming the kind of GenAI use in which students would navigate to an external platform, such as ChatGPT, and engage in a back-and-forth dialogue before ultimately lifting or paraphrasing a portion of the generated text and incorporating it into their final work. This is the paradigm at the center of academic integrity concerns and can also be seen in citation guidelines, such as those of the MLA.
As I learned, the reality is much more slippery. I suspect it will become even more so as time goes on. GenAI tools have been quickly incorporated into many of the platforms we use on a daily basis, such as Zoom, Canva, Canvas, Grammarly, and Google, to name a few. As I draft this post, I’ve been repeatedly prompted within the Google Doc to open and engage with “Help me write.” The other week, after I updated my macOS, I noticed something new: summaries of my text threads. As with most of these tools, there were (quite comical) errors, such as the AI failing to grasp that “Darling” in my thread with one friend wasn’t an endearment but the name of her cat. The incorporation of Apple Intelligence in a higher education landscape where many students come to our classes with new MacBooks purchased under the Apple for Education discount program means that even if students aren’t navigating to ChatGPT, Claude, or another GenAI tool, their work and communications might already be shaped by the presence of these tools, perhaps without their knowledge or understanding.
My students’ confusion about the boundaries of GenAI in their work is not unique to my classroom or institutional context. A recent article in The Chronicle of Higher Education quotes students saying they aren’t sure what “counts as unauthorized assistance.” This confusion is by design. These tools are meant not only to be efficient but also to be invisible, removing as much friction as possible from one’s work. This sense of confusion, I found, was even more heightened among my so-called “nontraditional students,” who often come to school from professional careers, military service, or at a different life stage than first-time undergraduates. These students often struggle with understanding different file types or accessing Canvas. One student with whom I talked in office hours, for example, had never heard of GenAI. He thought that the “help” he was seeing in his Google Docs was something that the University provides for students because it cares about their success.
Students are obsessed with product over process, and it’s not their fault!
In addition to confusion about whether or not they were using GenAI, students expressed deep concern over whether or not they were doing the work “correctly.” This, of course, is the product of a school system that has trained students to prioritize outcomes over the process of learning. Elise McDowell, in her piece for the Guardian, points to structural stressors such as the rising cost of tuition and the high-stakes exam culture in England. While we don’t have standardized exams in the same way at the college level in the US, we do have grades, GPAs, class rankings, and other pressures to perform that lead students to believe that we care more about polish than we do about the substance of their ideas or their learning process.
Conclusions and Moving Forward
Mid-semester, I was unhappy with the level of confusion students were expressing, as well as their continued focus on product over process. I made a few changes to my course. Most notably, I reintroduced a practice I’ve long used in my classes—process reflections. Using questions inspired by Jody Shipka’s mediated activity-based multimodal framework described in Toward a Composition Made Whole, I asked students to submit a “project pitch” statement for all remaining work in the class, where they answered the following questions:
What do you want to do?
Why are you doing it?
Who/what is involved in doing it?
How and when will you do it?
What might go wrong?
How will you know your project met its goals?
Why you? Why this project?
Then, when they submitted their “final” work for each project, I asked them to revisit their original project pitch and reflect on what had changed since they first outlined their ideas. Students, I’ve found, particularly struggle to answer the question of “who/what is involved in doing it,” often answering simply “me” or occasionally, “me and my roommate.”
I referred students back to my syllabus statement about GenAI, which says “we are always using technology when we write,” referencing Dennis Baron’s A Better Pencil, and then referred them to Pessimists’ Archive for a history of moral panics around technology. Slowly, their reflections became more detailed and nuanced.
Moving forward, I don’t plan to continue using the “tracks” idea in my classroom. This fall, I plan to foreground an ethos of affiliation, asking students to think often and deeply about how their relationships with both humans and nonhumans shape their creative work. In doing so, I align myself with work in writing studies like Nancy Ami, Natalie Boldt, Sara Humphreys, and Erin Kelly’s “No One Writes Alone” on peer review and Marilyn M. Cooper’s ecological approach. I hope this approach will both encompass and extend beyond a GenAI literacy focus by prompting students to become aware of the relational nature of their work, whether or not they use GenAI as part of their process. I also hope to emphasize the ethical dimension of affiliation, asking students, “To whom and what do you wish to be connected? Why?”
"All but one student in my class decided to be 'AI Free' for the course. Students cited concerns over environmental impacts, learning loss, and their motivations for signing up for the course in the first place as reasons for their decision to remain 'AI Free.' Several students reflected on how they thought it would be antithetical to use GenAI in a course centered on creativity."
Good for your students!
I appreciate your reflections on the difficulty students face in avoiding AI when companies are making it harder and harder to do so in word processing programs, search engines, etc.
In my experience teaching writing at a community college, students are trying to balance the pressure to pass my class (to pass the next class, to get their degree, to get a job...) with their desire for an experience that values their voice and centers their learning. Which, from what I've seen, is what most of them really want and will choose when they feel that the choice is there for them.
To me, the test is to design my class in a way that encourages students to do the work themselves and supports them in doing so.
Noël and Emily, thank you, thank you wholeheartedly for sharing your experiences. Truth be told, I was anxious for an update, as I will be attempting a similar choose-your-learning-adventure model, although for a four-credit first-year required course.
And I very much appreciate that you made a pivot at the halfway point--that's what I keep telling myself: there is always an emergency brake.
Truth be told, I'm nervous. Folks on campus are pretty pessimistic and, honestly, expect the exact opposite of your experience--that every student will choose AI.
All to say, thank you, thank you again for all your transparency and honesty. Here goes nothing--as my dear friend recently shared, at least you're getting in the ring.