Teaching Reading and Analysis with…Standards-Based Collaborative Grading?
A report from the field at midterm
I’m in the middle of midterm conferences with students in my first-year writing course, so I’ve barely had time to blog. But I’m buzzing with thoughts about how collaborative grading has been going this semester, so I’ve decided to take a break from spotlighting grading research to set some of those thoughts down here.
The main thing I’m reflecting on is a revision I’ve made to the early units of my course, one I have mixed feelings about but that seems to be bearing fruit in my students’ self-assessments and midterm conferences. For these early units (though perhaps not for later ones) I seem to have arrived at some kind of hybrid standards-based/collaborative grading model.
But before I get into that, I’ll share some background on the problem I was trying to solve and the solution I devised.
The Problem: The Rhetorical Analysis Paper
(I’m about to get a little in the weeds of writing pedagogy here; if you’re in another field, you might wish to skip ahead to the next section.)
I’ve never been happy with the way my rhetorical analysis units have gone in the past. Rhetorical analysis is a required component of first-year writing in many departments, including mine. It’s often one of the first things we teach. And usually, students demonstrate their learning through a paper of some kind—after all, this is a writing class.
One problem I have with this is that it’s difficult to make such an assignment authentic. I almost never see a rhetorical analysis “in the wild,” so finding real sample materials for students to use as models is almost impossible. One of my main goals as a teacher is to assign only writing tasks that might have life and relevance beyond the walls of our classroom. A rhetorical analysis paper rarely seems to fit the bill.
Another problem is the way these papers are assessed. It seems to me that the primary purpose of writing a rhetorical analysis is to practice a specific kind of critical thinking—not to write a pretty paper. Sometimes students, and I, get caught up in the “paper” part and forget that what we’re really trying to do is practice and assess analytical skills.
Additionally, these assignments are made more difficult by the fact that many students struggle with basic reading comprehension skills. Much has been said about this in the past few months. Some of it I agree with and some of it I don’t, but I won’t belabor that here. Suffice it to say that without the ability to identify, at a basic level, what the text is saying, it’s virtually impossible to do a rhetorical analysis. Attempting to write a paper under these conditions, even if you have the opportunity to revise later on, is a recipe for disaster.
So, I needed a way to help students practice skills in reading comprehension and basic analysis, get feedback on their thinking, and try again. I needed to help them, and myself, focus on sharpening their analytical ability, undistracted by the demands of writing a good paper (at least for the moment).
The Solution: Rhetorical Analysis Exercises
Instead of asking students to write and revise a Rhetorical Analysis paper over the course of a three-week unit, I’m now asking them to do three shorter rhetorical analysis exercises, due every Friday. (Here’s my template, if you’re interested.)
In the early part of each week, we practice rhetorical analysis together through social annotation on Perusall and in-class activities. We’ve looked at texts across a variety of media: op-eds about sports, popular culture, social media, and education; the written platforms of the Democratic and Republican parties, as well as the campaign sites of students’ congressional representatives; and marketing materials from my own university.
At the end of the week, students complete and submit a rhetorical analysis exercise on their own. In these exercises, they choose a specific text to analyze (from a menu of options I’ve selected) and then answer the following questions about it:
Context & Argument
What is the context of this text? What larger conversations or controversies might it be intervening in?
What’s the implicit or explicit argument of this text? What is it trying to sell you? Where and how is the pitch presented?
Audience & Purpose
Who is the audience of this text (and who is not)? How do you know?
What is the purpose of this text? What do its creators want it to accomplish, and how do you know?
Research, Evidence, & Persuasion
What research or evidence do the creators of the text provide (or not provide) to support their argument or sell you on their pitch? What form does that persuasion take?
Style, Conventions, & Mechanics
What stylistic choices have the creators made (or not made) to appeal to their audience, accomplish their purpose, or strengthen their points?
The five categories above are mapped directly onto the learning outcomes set by my department for this course. Students’ own writing will be evaluated on criteria in these same categories later in the semester.
I provide feedback on each of these assignments (yes, this takes a lot of time) by writing comments on student work and by sharing whether I think their answers in each category are “developing,” “proficient,” or “excellent.” In other words, I provide both qualitative feedback and a series of “marks,” if you like that word. Students use the comments to inform their subsequent rhetorical analysis exercises, but I also offer them the chance to revise their answers if they wish. They’ll have an opportunity to submit their best work in a final portfolio at the end of the semester.
Some Midterm Revelations
Revelation #1 is simply that students needed to spend this time on the basics of reading and analysis. Some are just now picking up on fundamentals like understanding the argument of a text or identifying its audience. Others are just starting to get what “analysis” is, and it has taken three attempts to get there. These are foundational skills for the class, and many students, unfortunately, haven’t been properly equipped with them. (I blame standardized testing, but that’s another post.) While three-plus weeks is a long time for a first-year composition course to spend on tasks other than essays, my tentative sense is that it will be worth it when we turn to students’ own writing.
But my other revelation is that I have inadvertently created a kind of hybrid grading system for the first half of the class, combining elements of collaborative grading and standards-based grading. At least on a micro-level, these rhetorical analysis exercises seem to function like a standards-based assignment:
Students are trying to reach specific standards like identifying an argument and its context, analyzing an author’s stylistic choices, etc.
Student performance is evaluated separately for each standard.
Students have multiple attempts to reach the highest level of achievement.
Initially, I had mixed feelings about the system, due to the “dead frog problem”—a colorful concept I first encountered in David Clark and Robert Talbert’s Grading for Growth:
“If a person attempts to understand a frog by dissecting it and examining each piece separately, then they have also destroyed the cohesive being. Likewise, if you dissect a course into many separate standards, you risk losing sight of the interconnected whole” (60).
Writing is such a holistic activity, and all the “standards” above are interconnected. It’s hard to talk about argument without talking about purpose; you can’t separate choices about style from choices about venue and audience. I didn’t want students to get lost in these details and fail to see the pieces they were analyzing as a whole. And I didn’t want them to get the idea that these categories represented the Five Immutable Pillars of Rhetorical Analysis, or whatever.
I do see slight evidence that this may have happened for some students, especially when they say things like, “In this class, I have learned what a rhetorical analysis is and how to do one.”
But here is what’s winning me over: the midterm conversations I’m having with students are much more relevant, robust, and specific than the conversations I had last year.
In previous iterations of the course, I asked students to look at their work more holistically, as I did, and to talk about where they were or weren’t reaching the course goals. Students really struggled to identify the particulars of their strengths and weaknesses or to talk about the quality of their work in nuanced terms.
This semester, it’s different. I’m hearing things like…
“I want to get better at understanding the implicit arguments of a text and finding the deeper meaning.”
“I feel like I understand audience a lot more now, and that will help me when I write my own argument paper.”
“I still have a hard time figuring out the context of an argument.”
“At first, I couldn’t understand what ‘style’ was, but now I know what things I can look for when I’m thinking about an author’s style.”
These kinds of jumping-off points lead to much richer conversations between me and the students. I think breaking things down into pieces has helped them better understand where, and why, they’re succeeding or struggling.
Confession: I’m almost a little rankled that this is working so well. It feels a little too…systematic. And writing is not at all a systematic activity. In fact, I wonder if, in some ways, this method replicates teaching practices associated with standardized testing. It also feels somewhat foreign to my own training as a humanities educator. Case in point:
I was talking to a group of humanities faculty recently about AI. I noted that writing papers involves so many different kinds of cognitive work. And I suggested that if they mostly wanted students to practice one or two main cognitive tasks, they might prioritize teaching and assessing those specific tasks and allow students to use AI assistance for the others. For example, if they mostly cared about students’ ability to construct an original argument about a primary text, perhaps they could allow students to use an AI research assistant to identify relevant secondary sources or a reading assistant to help them break down the complex arguments of those sources.
A couple of faculty members objected, however, that all the parts of writing are interconnected. How can you separate creating an original argument from reading the arguments of others? Where are the lines between “brainstorming” and writing a first draft?
I don’t really disagree. But I wonder if the process is different for our students. These skills are all connected for us because we’ve already mastered them. But when you’re just learning how to read, analyze, and write complex arguments, maybe it’s helpful to break the process down into discrete pieces that can be practiced and refined one at a time.
Am I way off base here? What do we gain or lose by breaking rhetorical analysis into discrete skills and assessing it through questionnaires rather than papers? Writing instructors: have I trampled over all best practices in the teaching of writing? Alternative graders: am I wrong in thinking that this way of assessing analysis veers into something that looks like standards-based grading?
I should add that I would still characterize what I’m doing as collaborative grading. At the end of the day, students propose their own grades and those grades are based not only on the marks I provide but also on a variety of other factors. Even so, this part of the course feels more standards-y to me.
This is very much an experiment, so I’d love to hear your thoughts. How are you teaching reading, writing, or analysis? What kinds of hybrids have you (intentionally or unintentionally) created in your grading system?
I like the idea of moving rhetorical analysis away from a formal "paper" structure and breaking the process down into smaller writing exercises. I have had similar feelings about my own rhetorical analysis paper assignment being inauthentic, and this is how I will be handling it this year:
1. I have found that doing group exercises using advertisements in class is a fun and less tedious way to practice analytical skills--students can still get feedback from me and each other, but I'm not sitting at my computer for hours providing it. I feel like they are still learning how the process works this way.
2. I am also thinking about making rhetorical analysis a part of the peer review process when students evaluate each other's writing. This way, they are practicing this skill in a meaningful context, but it's not necessarily something I have to "grade" and respond to.
3. I am a fan of John Warner's writing assignments in his book The Writer's Practice. He has several analytical writing assignments that feel a bit more genuine and interesting than the typical rhetorical analysis paper. One asks students to analyze a commercial and identify the subtext, and another asks students to identify what makes a particular work of humor funny. While students still struggle with these projects, I think they make the activity feel less "academic" and more intrinsically motivating, especially if you let them pick the commercial and/or work of humor. I also think asking students to write a review of a movie, product, TV show, video game, etc. requires them to use analytical skills in a more real-world context, and I give my students the option to do this as well.
Overall, your process sounds very useful and meaningful, though it also sounds like a lot of work! :-)
I’m currently using the TQE Method to teach my students how to read and analyze text.
https://open.substack.com/pub/adrianneibauer/p/the-power-of-the-short-story?r=gtvg8&utm_medium=ios
For writing, I’ve adapted the work of John Warner, giving my students more opportunities to think on paper.
https://adrianneibauer.substack.com/p/changing-the-way-i-teach-writing?r=gtvg8