This blog post is part of an ongoing series of reflections completed during my Spring 2023 writing course. While the post is being published in June of 2023, it was originally written on February 17, 2023.
In the last week or so, my students and I have been preparing for the submission of their first major assignment. As part of that process, I asked them to help me co-create the rubrics we’ll use to evaluate their work.
Students tend to find this a very foreign activity, but I think that when done well it has some advantages: 1) it brings their voices and perspectives into the assessment process; 2) it can help them get better and more confident at assessing their own writing; and 3) it can get all of us on the same page about the assignment criteria and let students know how I’m thinking about their work.
To create a rubric for the “Outwrite the AI” assignment, I started out with the questions, “What would you like to be assessed on in your writing? What does good writing look like to you?” That garnered mostly crickets, so I modified the question slightly. “Okay, what have you been assessed on in past writing assignments?”
They had a lot more to say about this. One student offered that they had been assessed on whether or not they had included two quotations in every paragraph. “Okay,” I said. “That’s a rule; what do you think was the guiding principle or reasoning behind that rule?” Students pretty quickly settled on the concept of evidence: quoting from sources indicated that you had good evidence for your argument.
Great. I wrote “Evidence” on the board, clarifying that for this assignment students could decide what kind of evidence to provide, whether or not to directly quote their sources, and where those quotations should go. Those decisions would depend on their specific writing contexts. But yes: every good argument had to provide good evidence. We talked a bit more about the qualities of a good argument.
Another student offered that they had been assessed on whether or not their assignments hit a specific word count and included the right number of paragraphs. Again, I asked about the principle behind that. Students agreed that it was important to make their argument substantive, long enough to get their point across effectively.
Another student, a bit uncertain, raised the concept of “flow.” From there, we worked our way back to the idea of “organization”: good argumentative writing follows a logical train of thought. We also talked about persuasiveness, and how that was dependent on the chosen audience. And I prompted them to think about appropriate use of AI for the assignment.
I took a picture of the notes we made on the board and brought them back to my office, where I distilled and fleshed out our conversation into the following rubric:
In the next class, we used this rubric as a jumping-off point for the “Share Your Story” rubric (students have the option to complete either of these assignments for their first submission). I asked what categories they would eliminate for the new rubric: obviously, the category for engagement with AI would go away, and probably the one related to argument, since not all stories would have an argument.
They were stumped when I asked what we should add. But when we returned to the question after our class discussion of some narrative-focused readings, they decided that special attention to audience and purpose would be important. This is what I distilled from our second rubric conversation:
So, that’s how we made the rubrics. My plan is to ask students to evaluate themselves on these metrics for the assignments they submit and to mark and return the rubrics myself for each submission as well. I’ll leave extensive written feedback and also highlight where I think the submission is at (developing, proficient, or excellent) for each category.
I’m going to be totally, completely honest with y’all right out here on Al Gore’s internet: I hate these things.
I like the idea of them. It’s important to talk about assessment criteria with students, and I think it’s helpful to give them a rubric that indicates plainly whether I think their work has hit the target learning goals or still has a ways to go. It also gives them a tool to assess their own work.
But at this point in the semester at least, students didn’t have many of their own opinions about what good writing might look like, so the activity was really more of an exercise in transparency than in actual co-creation. Moreover, I found the rubrics very difficult to write. For me, all the categories blend together—how do you differentiate between “strength and quality of argument” and “rhetoric and persuasion”? And I’m worried that the language is opaque and unhelpful. What even is “critical reflection”? What qualifies as “specific, thorough, and apt” annotations? What defines “appropriate” in these contexts?
Part of the problem is that answering these questions in the limited space of the rubric risks creating the kind of rules and constraints that my students have said they dislike and that I want to remove. Providing more guidance on what qualifies as “thorough” or “critical” on the document itself might mean creating arbitrary restrictions (“at least 6 annotations,” “at least 5 sources,” “a specific, debatable thesis sentence at the end of an introductory paragraph”) and circumscribing decisions that I want students to make for themselves. Even when these restrictions are framed as guidelines (“5-7 annotations”), students still often take them as arbitrary rules and focus on obeying those rules rather than making smart writing choices informed by specific writing contexts.
I will, of course, do my best to offer additional guidance on these metrics in class—but I’m torn between the impulse to provide more specific, measurable assessment criteria and my feeling that students will learn better if they have some freedom of choice in these matters. And I have mixed feelings about what is “measurable” in this context. If we’re going to be assessing student learning, we have to have ways to measure that learning and clear metrics so that students can know whether or not they are on track. But as we all know, there are many things about learning that just aren’t measurable.
So, really, we’re back to the tension that gives rise to my ambivalence about rubrics: how can we create clear, specific, and measurable assessment criteria without hampering student creativity, independence, and learning? I still haven’t hit on an approach to this problem that satisfies me.
Of course, I’ll also have students doing self- and peer-assessment. And I anticipate that my comments on their work will be more valuable than whatever I mark on a rubric. But I’d still like them to have some indication of whether or not they are fulfilling the course learning goals and how they are progressing toward them. I don’t know. Maybe I’ll rethink the rubrics entirely next time.
I’d love to hear your thoughts about the tension I’ve outlined here. Please share your ideas below, and stay tuned for the first assignment submission next week!
Your rubrics are some of the better ones I’ve seen. I really appreciate the co-creative aspect. Also, thanks for that link to keeping receipts! Very much along the lines of CYA, but more student-oriented.
When I've co-created a rubric with students, it's been for a particular kind of work (e.g. infographics, podcast episodes) where I can share examples with students. We'll discuss the examples (sometimes professionally produced, sometimes produced by students) and use those to generate elements of the rubric. Rubrics are pretty abstract, and starting with concrete examples seems to help.