Since I’m not teaching an ungraded course this semester, I’ve been taking some time to reflect on last fall’s teaching experiences and consider future directions for my ungrading practice. Today, I’ll share what my fall Writing 101 students had to say about one final topic: generative artificial intelligence.
I’ve been thinking about AI in the writing classroom pretty much nonstop for the last year and a half. I have a lot to say, but because this blog is devoted to ungrading, I haven’t written about it very much. Happily, I think there are a lot of connections between student use of AI and the way we grade. I’ll try to stay on topic here—but no promises.
First, a little background:
Introducing AI in my writing course
I started last semester from the assumption that students were likely to encounter generative AI at some point in their college careers and that they needed some level of AI literacy in order to make good decisions about whether, when, and how to use it. My colleague talks about AI literacy a lot, and I’ve learned from him how important it is for students and instructors to understand and critically examine the use of these tools within our disciplines.
I also started from the assumption that students genuinely want to learn (even though they encounter many barriers to learning both in and outside the classroom) and that under the right conditions, very few would willingly choose to cheat. My goal was to create an environment in which students and I could explore AI together, with integrity, and make decisions about its use that prioritized learning.
That meant transparency was key. I began the second week of class with a frank conversation about AI. We started by coming to a basic understanding of what LLMs are and how they work. We touched on some of the general ethical questions around LLMs before moving to a discussion of how such a technology might benefit or harm writers. I also did some live demos with ChatGPT using assignment instructions from the class. We evaluated ChatGPT’s output—what was it good at? what was it not good at?—and played around with various prompts to refine the output.
The most important part of the conversation came when I asked them to consider how use of these technologies might affect their learning. Opinions varied. Many students noted that using AI-powered tools like Grammarly had helped them not only submit grammatically correct assignments but also understand which grammatical errors they made frequently and correct them in future work. Some related how ChatGPT had helped them clarify assignment instructions in other classes or brainstorm potential paper topics. Others were more dubious and shared thoughts about how use of AI could impede the learning process. Everyone agreed that while use of AI could be acceptable in some cases, generating entire essays, or even paragraphs, with AI and copying them wholesale into an essay would likely short-circuit some important parts of the learning process.
Based on this conversation, I asked students to help me crowdsource AI guidelines for our course. I compiled their suggestions into this document. I also added sections on “Student Responsibilities” and “Instructor Responsibilities” for use of AI, along with links to some AI tools students might wish to explore. (Please note: the document is already out of date!) Students ratified the guidelines in our next class.
The Results
I think these activities went a long way toward creating an environment of trust and transparency in the class. But AI turned out to be a much smaller problem than I suspected it might be, and as a result, I didn’t really have to refer back to the guidelines at all.
I was quoted in a Chronicle piece late last year saying that “I have not once this semester suspected a student of passing off AI-generated material as their own work.” I shared some thoughts about why that might be on Bluesky and Twitter. Obviously, it’s a credit to the students themselves and also a result of the fact that I teach under almost ideal conditions: small class sizes, one class per semester, with lots of support. I can work on cultivating relationships with students and getting to know their writing on an individual level.
But I think one of the biggest reasons students didn’t use AI inappropriately in my class was because of the grading system and the course policies related to it. Students were evaluated on growth alongside the quality of their work, on process alongside product, and had plenty of opportunities to revise—so they didn’t feel pressure to get assignments done perfectly the first time. They also had some flexibility in submitting late work, so if they were in a time crunch, they didn’t feel the need to churn out an assignment quickly. Under these conditions, using AI to cheat just didn’t make much sense.
Student Perspectives
My students seemed to agree. By the time we got to October, I was a little surprised that I hadn’t noticed more AI use among students. I included some questions about it in my midterm survey, but I also simply brought it up during class.
When I asked who had experimented with AI, only two or three hands went up. When I asked how they employed it, they said they used it to clarify some of my assignment instructions (they really had trouble with the concept of a “rhetorical analysis” paper) and to brainstorm ideas. When I asked those who didn’t use AI why they chose not to, the answer was more or less a collective shrug. “I just don’t think about it, I forget it’s even there,” said one student. “I don’t trust it,” said another. No one seemed particularly enthused about its capabilities.
Given the collective panic we’ve seen about ChatGPT, I think these responses would surprise a lot of people—and maybe this class was unusual. But for my students at least, the response to generative AI was a resounding “meh.”
The responses on both the midterm and final survey for the course confirmed these general impressions. Here are the questions I asked:
Have you used AI as a learning aid in this course?
Yes
No
Other:
If you used AI, what tools did you use? Check all that apply.
ChatGPT
Elicit
Wordtune
Grammarly
Other:
If you used AI, which of the following best describes your use? Check all that apply.
To clarify assignment instructions
To brainstorm or get ideas for my work
To generate outlines for my work
To generate sentences or paragraphs I used in my work
To correct my grammar
To edit or proofread my work
Other:
If you used AI, did you feel it supported your learning? Why or why not?
If you did not use AI, why not?
Most students indicated that they didn’t use AI at all. The number who said they did use AI went up from six to eight between the midterm and final surveys. But that number also includes students who used AI-powered tools like Grammarly. In fact, ChatGPT and Grammarly were the only tools students reported using at all. The most popular use of AI that students cited was brainstorming, followed closely by clarifying assignment instructions. Others used it for proofreading or correcting grammar. No one in either survey reported using it to generate sentences or paragraphs that they then copied into their papers. Obviously, I can’t prove this is the case, but I can say that I didn’t notice much, if any, evidently AI-generated material in their work.
Here I’ll share (with students’ permission) some common themes from their responses to the open-ended questions.
Students who liked AI
AI can help improve grammar.
I feel like it did support my learning. I used it to proofread and correct my grammar in my papers. It was the easiest platform to use.
[AI did] not really [support my learning] because I only used it to correct grammatical and punctuational mistakes.
AI can help students get unstuck.
It helped me think of new ideas for whenever I just stuck.
It supported my learning because when i was confused i used it to clear up any of my misunderstandings
If I was ever stuck or needed guidance on my work AI was there to help. For example, I would put the prompt into the generator and allow it better explain it.
I feel as though it started my learning process, I wouldn’t say I learned much from AI itself though.
AI can help build confidence.
It helped clarify some things in areas that I felt I was hopeless in learning and it gave pretty good and accurate feedback when it came to editing.
When I feel stuck on an assignment it tends to leave me discouraged. By using AI I can get ideas and assurance on my work that then allowed me to feel positive about my paper.
I’m struck here by the nuanced ways students are thinking about the relationship between AI use and learning. For example, one student who used AI to improve their grammar noted that it was a good learning tool for them, but for another student, having AI correct their grammar was not a learning experience. This gets to something I tried to drive home for students early on: what constitutes a learning-focused use of AI for one person in one context may not constitute a learning-focused use for someone else.
I’m also thinking a lot about AI as a tool that may not support learning itself but may, as one student notes, help to “start the learning process.” The blank page is a barrier for so many students, and the feelings of dejection it creates can really impede their progress. If ChatGPT can help students clear that first hurdle, I don’t have a problem with their use of it to do so—as long as they’re making decisions that prioritize learning.
Students who disliked or didn’t use AI
AI just doesn’t come up.
When doing assignments, it isn’t a thing that pops into my head as a resource and by the time it does cross my mind, it’s too late and I’ve already completed the assignments.
I’ve never used it so I didn’t see a point in starting now.
AI can impede learning.
I’ve just never used it. I feel like no matter how its used, AI is a crutch that impedes growth as a writer, so why would I willingly do that to myself?
I find that when using AI sometimes it will put ideas in my head that I now feel I should include even if it may not have been the best idea, it feels like an easy eay out. This could lead to me arguing weaker points or sometimes things that are unrelated that AI made me believe could be related to my writing.
AI use might be considered cheating.
I was never allowed to use AI in high school so I just don’t think about using it now. I was always told if I used it I would receive a zero so now even though we are allowed to use the system I just don’t to be on the safe side of things.
I didn’t use AI because all through high school I was told; if I used it, I would be cheating. I choose not to use it in this class because it just feels like I would be cheating myself.
AI isn’t helpful.
I don’t see the point of it. Never used it, don’t think i will during this course.
I felt like I didn’t need it. My paper was so specific and personal that I felt like AI would not help my paper at all. It felt useless or like a waste of time.
When I used AI, I used it to help brainstorm ideas for my research argument. That was really the only time I’d thought to use it and it didn’t really help so I just never went back to it.
I didn’t use ChatGPT or generative AI like that because I felt like I didn’t need to. My first topic was extremely specific, so AI wouldn’t help with my paper much because of how broad AI is. And my second paper, I felt like there was so much research available, along with my prior knowledge to the topic, that I didn’t see much use for its help.
AI isn’t trustworthy.
I just never think to use it and I also don’t really trust it
I never really used it in the past just because I don’t really trust it and it’s easier to learn and work through mistakes if its my own work
AI isn’t as smart as students.
I already feel confident in my writing so I don’t need a crutch.
I think I can do it better.
I hate ai. I am smarter then robots.
This last student comment was my favorite. Despite the incorrect use of “then,” I agree with the student: they are smarter than robots simply by virtue of the fact that they can think, if for no other reason. And I’m pleased with the number of students who believed in themselves enough to declare that they could “outwrite the AI” (as I named a previous assignment).
As I noted, I like to think that this attitude was enabled in part by the grading system of the course: students truly believed that with enough revision they could produce strong final products, and they weren’t penalized for the mistakes they made along the way. Plus, they recognized that their own minds and the research they did for their work were more valuable sources of expertise than a predictive text machine could ever be.
I was also pleased that students recognized the ways AI might impede their learning and committed themselves to that learning, even when things got difficult. Some students were particularly thoughtful in pinpointing the exact ways in which use of AI could hamper their development—for example, the student who noted that AI might make poor suggestions about crafting a good argument that they might feel compelled to include anyway, to the detriment of their process.
Of course, this is only one class, and it’s still early days for generative AI. This technology is advancing at lightning speed, and students (and I) still don’t fully know what it’s capable of. I don’t expect student attitudes to remain stable. But last semester gave me confidence that the challenges of AI in teaching and learning are not insurmountable—and that new approaches to grading can help.
There were two other student comments that I haven’t stopped thinking about but that didn’t seem to fit into any of the categories above. One was a comment about what AI is good for and what it’s not:
to me, this is only for busy work like defining vocabulary terms, taking notes, making study guides, etc
I’m interested in how this student has been using AI in their other classes, and I think they’re right: AI could be helpful for the kind of things they list. I also just think that this is indicative of students’ general attitudes toward their schoolwork. If it feels like busywork, they’re just not gonna do it. This was increasingly true before ChatGPT came on the scene, and it’s even more true now. We would be wise to take a good, hard look at what we’re asking students to do. If our assignments feel, even a little, like useless tasks undertaken primarily to earn points, we should reevaluate—or at the very least go out of our way to help students understand why those tasks are valuable.
The other comment was from a student who seemed to use AI a bit more extensively than their classmates and who found it helpful:
I think it did [support my learning] as sometimes you have an idea for something in your head but you just can’t get it into words.
I confess that, as a writing teacher, I’m somewhat conflicted about this comment. On the one hand, I know the feeling. And actually some of my favorite teachers have been those who perform this function well: they take the garbled mess of what you were trying to say and repeat it back to you in ways that sound super smart, like what you meant to say all along, only much better. If AI can serve that function, maybe that’s not a bad thing.
On the other hand, an important part of the writing process is taking that idea you have and going through the struggle of putting it into words, and then rewording it, and then refining it, and then rewording it again until you’re finally satisfied (or, more likely, until you’ve overrun your deadline). Often, this process helps you figure out not only how to say something but also what, precisely, you want to say. Writing, as others remind us, is thinking. I’m concerned that students won’t always know where the line is between, on the one hand, knowing what you want to say and using AI to find some words for it and, on the other hand, not knowing what you want to say and using words to find out for yourself. Because I’m not sure I could tell the difference.
This is what we should be most focused on going forward: not wringing our hands about cheating but thinking deeply about how to help students understand their own learning. What are the unproductive struggles that students encounter during their schoolwork that might be eased using AI or other tools? What are the productive struggles through which real learning happens and for which use of AI or other tools would be counterproductive? Which roadblocks are “desirable difficulties” and which are just plain difficulties? And how does use of AI affect learning for different students in different contexts?
Answering these questions is the task ahead of us, and of our students.
Thank you for sharing this - it seems like you've been having really constructive conversations with your students!
What I've seen with my community college students so far has been similar to what you describe. When we've talked about ChatGPT et al, students' expressed responses have generally ranged from "meh" to "blech", and I've seen very little evidence that they're using it in the stuff they hand in. Actually, a significant number of my students prefer to do much of their writing on paper. (Some do use grammar checkers and translation tools, both of which I'm generally ok with.)
My sense is that there are a number of things going on:
1. I am primarily teaching in person, so I'm working with students who have generally chosen a less technologically mediated experience.
2. Some of my students are dealing with barriers in terms of technology access and literacy.
3. Many of my students have had bad experiences with ed tech in the past, and I suspect that this informs their response to ChatGPT et al as relates to their studies.
4. Some of my students appear to view using ChatGPT et al as some combination of trashy/untrustworthy/inauthentic/dishonest/cheating. Others are just not interested. (I do get a few AI enthusiasts, but not a lot.)
5. My classes give students a lot of space to decide what they want to do and how they want to do it, and I make a point of encouraging them to do things in a way that involves genuine thinking and learning *for them*.
6. Moving toward ungrading seems to be leading to higher engagement and lower anxiety.
I hear different things from colleagues, especially for courses that are fully online, but this is what I've been seeing myself.