It’s been a busy June! I’m on the organizing committee for The Grading Conference, which, as of the release of this blog post, is about halfway over. My head is spinning with all the ideas and approaches to grading we’ve discussed so far, and it has only been a day and a half. I’m looking forward to sharing more thoughts about the conference soon.
I’ve also done two podcast interviews in the past few weeks: one with my colleague
(and others) for Intentional Teaching and one with my Grading Conference co-organizers and Robert Bosley for The Grading Podcast. Both episodes should be out soon, and I’m excited to share them with you.

A huge part of these conversations was generative AI and its impact on teaching and learning. While I was recording the episodes, I was also thinking about my course for the fall, so AI has been top of mind for me recently. And despite the productive chaos of The Grading Conference, I found a bit of time to set down some half-baked thoughts about it. Please do take them as half-baked—AI is one of those topics on which my thinking develops so quickly that I can hardly keep up with it.
As we continue to explore the impacts of AI on teaching and learning, some things have become a lot clearer to me while others have become a lot muddier. The muddiest thing for me, right now, is what the most important parts of the writing process are and how generative AI might variously affect students’ experiences with those different parts.
I have always recognized that writing means different things in different disciplines and that asking students to write may have a number of different functions across courses. That means, of course, that the appropriateness of using AI, and the ways it might support or impede learning, vary widely across fields and even assignments.
But I’m becoming increasingly conscious of the fact that even when we teach in the same disciplines, have similar goals for student learning, and assign similar learning tasks to reach those goals, we might still have totally divergent ideas about how writing works for our students—and thus the ways in which AI helps or hurts them.
I’m thinking here about the writing classroom, specifically, and debates about which parts of the writing process students should or should not outsource to, or augment with, generative AI. At this point, I think most of us agree that spelling, punctuation, and grammar are not learning priorities. If spellcheck/Grammarly/AI can help students improve their work in these areas, fine.
But what about, for instance, brainstorming and idea generation? I’ve heard a lot of instructors say that it’s okay for students to use AI at these early stages of the writing process, and I’ve suggested this as a possible use of AI to students myself. Leon Furze, however, takes a different view. On Teaching in Higher Ed last week, he suggested that the initial idea work of writing is “far more important than the finished product itself,” and that AI may be more useful at later stages of the process, like shaping the first draft.
Others might disagree with this view on the grounds that writing is, at its core, revision. For some, getting down those initial ideas is just brain dumping, and the real work comes with taking that jumble of thoughts and making it coherent. In that case, allowing AI to shape a first draft may cut off an important avenue for students’ development as writers.
So, everybody has a different idea about what the important part of the writing process—the part you can’t outsource—really is. My sense is that the key piece, for all of us, is the piece where the thinking happens. That’s where we most want students to engage. The problem is that the thinking happens at different stages for different people.
For me personally, brainstorming and idea generation are not fruitful thinking zones. Coming up with a paper topic was always the very worst part of writing a seminar paper in graduate school. It seemed like no matter how much I tried to bounce ideas around in my head or fiddle away in my notes documents, nothing really good or interesting ever happened until I sat down to put my sketchy ideas into a coherent draft for an imagined audience. I often began that draft fairly uninterested in what I was writing, with an idea that was not very strong. But the more I wrote on something, the more interested I got. The more interested I got, the more deep thinking I did. And the more thinking and writing I did, the clearer and stronger my idea became.
I do get a lot out of revision, too. But the main place my thinking happens is in the initial drafting stage. I know other writers who spend tons of time at the front end ruminating on their ideas, often mostly in their heads, and then write a first draft as if they were simply transcribing those ideas. For them, all the thinking is in the brainstorming. I know others who write reams and reams of words on a first draft and then go back and substantially rewrite those words as they shape the piece. For them, all the thinking is in the revision.
These are, of course, oversimplifications. Obviously, some kind of thinking happens for everyone at every stage of the writing process, and that process rarely happens in the kind of linear way I’ve described. But I believe it’s true that all of us work differently as writers.
A case in point: in the podcast episode I mentioned, Leon Furze says that he sometimes dictates the first drafts of entire articles by recording voice memos on his phone while he’s out for a run. This would never work for me, and not just because I’m a fairly poor runner. But it clearly works for Leon! That’s the thing that matters.
All of this is to say: because the most meaningful or important work of writing occurs at different stages of the process for different people, it’s very difficult to make hard-and-fast rules about where AI supports or hinders learning, even within one assignment. Some individuals may benefit immensely from using a chatbot as a tool for brainstorming or to get over the initial hump of the blank page so that they can get on with the real learning. Others may find that using AI in this way outsources a part of the process that is essential to their development. Some may find that using an AI tool to shape or organize their ideas is a helpful way to consolidate their thinking or to spur further thought. Others may find that such an activity takes thinking out of the equation entirely.
These variances are very cool! But they also present a problem for instructors. If all of the above is true, then we can’t really tell individual students how AI will affect their learning; they have to tell us. Sure, I can make some informed guesses. I can look at an assignment and make some suggestions about what AI may or may not do for students. I can look at a student’s work and try to identify where AI may have supported them or gotten in their way. All of this can be helpful. But it does not fundamentally change the fact that use of AI—even the same use, on the same assignment, for the same purpose—will affect different students in different ways.
The first implication of this is that it’s difficult to create a set of rules around AI use in my class. That makes things hard, but it’s something I can deal with.
The implication that really interests me, though, is this: in order for students to make informed choices about AI, they have to be knowledgeable not only about AI tools themselves but also about their own writing processes. Unfortunately, most students enter my class without a well-defined, well-developed, or personalized process, much less a deep awareness of that process. They have to cultivate their own process and a reflective stance on that process at the same time that they’re trying to figure out how AI might affect their growth as writers.
This strikes me as an exceedingly difficult task. What can help?
Well, unsurprisingly, I think the way we evaluate students plays a big role. The more space we can give students to practice their own writing process and then examine it, the better off they’ll be. That means we have to actually value process and reflection work in our grading: we can ask students to engage in this work all we want, but if their efforts aren’t recognized, they simply won’t do it. Moreover, we have to teach and model for students how to reflect. This is a foreign activity for them, and sometimes for us—but I’m becoming increasingly convinced that it’s one whose importance can’t be overstated.
I think we also have to encourage experimentation and play. For me, this means asking students who want to incorporate AI into their writing process to do so in different ways and to reflect on how their different uses affected their work. Last year, most of my students were uninterested in using AI for their writing. I think that’s likely to change in the future. One thing I’m considering is asking AI-curious students to use these tools differently for each assignment they submit—perhaps once for brainstorming, once for drafting, once for revising—and to talk about which use they found most helpful and why.
Going forward, students will need to know not only what their writing process looks like but how AI affects that process. The skill to discern this is one of the most important things they can leave my class with.
Thanks for sharing these thoughts! In my first-year writing class last spring, students and I created a policy that said they could use AI in whatever way they thought would help their learning, and they needed to cite it and write about their use in a reflection (that they turn in anyway with all "final" drafts). I also told them that if I thought their use of AI was hurting their learning, we would talk about it. I was hoping for helpful feedback, and I got a few experimenters. But I think most students either didn't use it or forgot/failed to cite it and include it in their reflection. I would definitely try the experiment again, though. I'm interested to hear what others learn!
Thanks for this post, Emily. I've been feeling the same thing about "where" AI fits in teaching writing (if it does at all), and your post fleshes out the nebulous ideas I've had, too. I'm in the process of revising my fall course, which does incorporate AI for two assignments. Both call upon students to approach the applications as objects of scrutiny as well as possible tools to use. Last fall, nearly all of my students decided that AI/LLM tools were not helpful for their learning, even though they presented useful information they could use (with the usual cautions). Now, as I pull together my plans, I am wondering whether students' histories with ChatGPT and the like will have changed (diminished?) their ability to distance themselves from the AI/LLM enough to consider the applications as objects of scrutiny.
(I do this kind of hand-wringing as I plan.)