5 Comments
Jun 14 · Liked by Emily Pitts Donahoe

Thanks for sharing these thoughts! In my first-year writing class last spring, students and I created a policy saying they could use AI in whatever way they thought would help their learning, as long as they cited it and wrote about their use in a reflection (which they turn in anyway with all "final" drafts). I also told them that if I thought their use of AI was hurting their learning, we would talk about it. I was hoping for helpful feedback, and I got a few experimenters, but I think most students either didn't use it or forgot/failed to cite it and include it in their reflection. I would definitely try the experiment again, though. I'm interested to hear what others learn!


I love the idea of turning in reflections with every piece.

Jun 14 · Liked by Emily Pitts Donahoe

Thanks for this post, Emily. I've been feeling the same thing about "where" AI fits in teaching writing (if it does at all), and your post fleshes out the nebulous ideas I've had, too. I'm in the process of revising my fall course, which incorporates AI in two assignments. Both call on students to approach the applications as objects of scrutiny as well as possible tools to use. Last fall, nearly all of my students decided that AI/LLM tools were not helpful for their learning, even though the tools presented information they could use (with the usual cautions). Now, as I pull together my plans, I'm wondering whether students' histories with ChatGPT and the like will have changed (diminished?) their ability to distance themselves from the AI/LLM enough to consider the applications as objects of scrutiny.

(I do this kind of hand-wringing as I plan.)


I’m glad to be finding more and more people interested in ungrading!! Great article, by the way.


Great points, Emily!

I engaged in some similar processes this year. I asked my students to use AI in very specific ways (via rubrics), then evaluated their use of AI and gave them feedback and a letter grade. This was uncomfortable at first, but in the end they appreciated it and we all learned a ton.

They learned what kinds of strategies worked for them. They told me whether or not they thought prompt engineering was even worthwhile. The principles I laid out for them were all broad and writing-based: give the AI context, be specific with instructions, and use your imagination to come up with creative places to take the conversation or creative tasks for the AI to help you with.

They also reflected on the effectiveness of the bot and, in one case, on its impact on their overall work. I call this the What-Why-How approach.

I think I lose people when I talk about giving a letter grade to student interactions with AI. But by doing that, I was able to communicate to students beforehand that I cared about their interactions a lot. So much so, in fact, that I was willing to devalue the final product of the project and increase the value of the interaction in the grading weights.

This is key: assessments are a communication mechanism. We tell students what we care about most when we design an assessment, and they pick up on that in both direct and subtle ways. If we DON'T grade our students' interactions with AI in 2024-25, what will we be communicating to them? This counterfactual usually turns teachers' heads; they realize it is in fact quite dangerous not to evaluate student interactions for a grade.

In any case, I agree with a lot of what you said. There are so many paradoxes arising from the LLM advancements. I actually think they are opening windows for us to teach metacognition, reflection, and creativity in ways we could never have imagined, but not in the ways that tech and EdTech companies are selling to us. In fact, what they're selling is almost exactly the opposite.

I appreciate this thoughtful post. Thanks for sharing!
