I really like the idea of prohibiting specific AI use when it will impede the learning objectives of the task. That seems like a sustainable and defensible practice for most of the parties involved. Not for nothing, it might also encourage more faculty to articulate clear learning objectives to students.
Yes, it's been helpful to me for sure!
I admire the thought you put into this, but there is no such thing as an "AI Free" track unless you want to prevent students from using Google. With the introduction of AI Search mode and the default AI summaries returned every time you put in a query, students are "using AI" every time they surf the web. A lot of what you are describing here I tried to get at in my most recent post - even the students who aren't "cheating" are still using AI in ways they simply do not consider to be cheating. No AI vs. some AI vs. a lot of AI is really, really tricky. I hope it goes well.
https://fitzyhistory.substack.com/p/what-about-the-students-who-dont
I’m aware that one can’t go purely or perfectly “AI free” given the intrusion of these tools into our lives. And of course I have no intention of policing students’ Google use. I *do* think we can try to make the intrusions of AI visible for students and give them the opportunity to refuse AI, insofar as is possible, if they wish. I think lots of students are concerned about how even uses of AI that are not strictly considered “cheating” might impact their development, and I can help these students by providing some accountability around AI refusal.
I read the AI search results on Google sometimes and ignore them sometimes. I mostly ignore autocomplete suggestions, “help me write” features, and “AI insights” toolbars. I see no reason why students who don’t want to use these tools can’t commit to doing the same, if we give them the proper education and support.
I agree with the plan to not have them shift tracks. Providing the AI tracks as an option builds agency, but it also builds a life and work skill: we can't always make the changes that would be ideal for us after the fact. That said, even though I support not letting them shift tracks, I would build in some metacognitive reflection, maybe twice in the semester, to document why they chose the track, what they learned about themselves, what they need to stay the course, and what they would do differently and why. The practice of reflecting periodically on such a major educational choice in the context of the course will likely have long-term benefits for when they find themselves in a situation where they need to make a choice with limited or no opportunities to change it. Looking forward to hearing how this goes. I don't teach writing, but this is something I would try if I did.
Thanks, Horane! This kind of metacognitive reflection is built into my grading system (through a series of self-assessments)--I appreciate the reminder to add specific questions about AI use.
It's a year old, but I just found this paper, which seems very relevant to your tracks approach to AI: https://www.science.org/doi/full/10.1126/sciadv.ado6759
Jayme, would you believe this is the very paper that inspired the approach? 😂 I wrote about it here: https://emilypittsdonahoe.substack.com/p/promoting-student-autonomy-and-academic
Only me being a year late to the game 😂
Who can keep up!!
I've been encouraging my students to learn to use AI responsibly and ethically, on the grounds that their careers are all likely going to see them using AI agents. In the small classes where I get to know students, I've had resistance, but relatively low dishonesty. From Jan to Apr, though, I had a large 1st-yr class, and gave them a two-track option: 1) must-use AI & say how, and 2) either don't use, or use minimally and acknowledge it. With the first assignment, I had a dozen students claim only to have used Grammarly to improve their style/grammar, but the whole thing was clearly by ChatGPT et al. I ended up reporting eight students to the Registrar for dishonesty, and most of them I allowed to re-do the assignment. I told the class what had happened, and that our university policy is that I don't need proof, only "balance of probability." On the second assignment, I looked over their rough drafts, saw the same thing happening, and announced a limited-time amnesty for people to change tracks to the must-use AI. Another dozen students took me up on it, and I had only one dishonesty report to make on the second one. But I was disheartened.
This summer, I have a large 2nd-yr class. Half their grade is self-assessed (a lot of my courses are partially ungraded and some of them completely so), and I made the major self-assessed project must-use AI: they had to fill in a writing log, with some stages designated as human first, then AI support; others as AI research, then human checking; and a final stage of reflection questions. They're handing in their revisions of those this week, and I'm going through them to check that their self-assigned grades aren't too far above or below what I would give them, and to see how they've made use of AI and whether anyone has done the whole thing with AI. So far I've only got one student who gave themself an A and did almost none of their own work--the rest that I've looked at have really gotten into the personalized projects and done well with experimenting with AIs they haven't tried before. I'm feeling less disheartened about it.
Thank you for writing about your experiences, Emily! I both enjoy reading your posts and get a lot of inspiration from them (and I've done the Ungrading Conference online).
Not to be cynical, but doesn't all of this completely depend on your students not deliberately lying about their use of AI? Like, couldn't someone choose the no AI track and then just use AI the whole time? How would you know? What would you do if you suspected? I teach creative writing and as far as I can tell my students haven't been guilty of this, but my partner teaches a humanities elective and his problem is students just using AI constantly without admitting it, often even when it's super obvious and he confronts them. If your students genuinely care about the learning goals of the tasks you assign them, that's great, but that doesn't seem like the most common situation that professors are struggling to navigate.
Yes, this relies on students being invested in the class and honest about their AI use. Of course, it's difficult in any class, especially required gen ed ones (mine included), to get students to buy into the learning goals and tasks. But I work super hard at this, especially in the first weeks of the course. I'm lucky enough to have small classes, where I cultivate individual relationships with students, which helps. We talk *a lot* about why we're doing what we're doing and how I think it will benefit them right now, in the real world (rather than in some imagined future workplace). I make a deal with them that I'll do my very best to only assign authentic(ish) tasks and no busy work if they'll do their very best to engage in good faith. And I employ a grading system that provides extensive opportunities for revision, rewards engagement in the process alongside the quality of the final products, and puts students pretty much in control of their grades--so they don't feel like they have to cheat to get a high grade in the class.
I take some small "assessment security" measures, like an occasional in-class assessment, and students write on Google Docs, so I can see in their version history if they've copied and pasted large chunks of text. We also do a lot of in-class work time, where I can get a glimpse of students' writing processes.
This has mostly worked for me so far--we'll see if things change. As far as addressing suspected AI misuse: I always ask students in the first weeks of class what they think we should do in these cases. Each time, we've decided that I will require the student to come to my office, give them a chance to show me how they used AI, ask them questions about their writing process and their paper, and then ask them to rewrite if I determine they've misused AI or can't answer the questions in a satisfactory way.
I have had a couple of cases where it seemed obvious to me that students were using AI, but they wouldn't admit to it and I couldn't prove it. That happens, and it's not my favorite. But I'm not sure that those students would be getting anything out of the class even without ChatGPT. For all I know, they were using paper mills before. So, I try not to worry about it too much.
Sorry, that response was longer than I meant it to be 😂
That all makes a lot of sense! Having one-on-one relationships with students is key to all this, I think -- my class is creative writing, so maybe that's just more fun to do in general, but imo a big part of what's kept them honest about this is that we have required conferences every two weeks where they have to talk to me directly about their process, which builds in accountability. My partner's class sizes are bigger and don't have this requirement (which is good, because he couldn't meet with 60+ students one at a time without losing his mind, but also bad for obvious reasons).
I also think you have a fair point about how deliberately cheating students probably wouldn't get a ton out of the class even without AI... but they are getting credit, and that's not nothing. I know hiring someone to write your paper has always been a thing; however, it costs money and requires lead time. So I can't imagine it's as common as firing up an LLM on your own computer to instantly produce bespoke text. Unscientifically, I'd speculate that in the past, a college student who never wrote their own papers would probably ultimately either drop or flunk out, whereas now that person can probably coast through. Teachers shouldn't have to be cops, and I'm not faulting your attitude of letting it slide, because what is the viable alternative? But it does change the meaning of not just the grade but the degree.
Interesting approach! I'm very curious to hear how the semester unfolds for you.
One suggestion: Build in a mid-semester Switch Tracks opportunity for students to choose a different track. At the beginning of the semester, before engaging in the work, it can be hard to know which track is the right one for each person. Knowing a switch point is coming could keep students more committed to staying in their track until then, and it can provide a great opportunity for deep reflection as they consider whether to switch mid-semester.
Interesting! You make a good point about the difficulty of making these decisions at the beginning of the semester before they know what the work will be like.
The work we do before and after midterm is sufficiently different that I'm not sure a midterm switch would accomplish what I want it to. However, I have considered letting students on the AI Free track do some experimentation with AI when putting together their final portfolios, after they've already done most of the writing for the course. I'll think about it--and maybe ask my students for their input!