In December, I wrapped up the second ungraded class I’ve taught at the University of Mississippi. I won’t be teaching undergraduates again this semester, so I’m not immediately designing another ungraded course. But I thought I should reflect on my takeaways from fall while they’re still fresh. I hope that those of you who are about to begin a new ungraded course will find this quick reflection helpful.
Here are some things I learned from last semester:
Tracking progress was useful—but students needed help to do it.
I wrote a whole series of posts on the progress tracker I developed for Writing 101.
I won’t rehash those posts here, but I will say that I think the document helped the students and me align expectations throughout the semester and gave both of us more confidence about final grade designations. It was also a really useful tool for student reflection and self-assessment.
That said, students would not have used the progress tracker if I hadn’t provided time and space to actually work with the document. I anticipated, at the beginning of the semester, that students wouldn’t keep up with the worksheets on their own, and so I built in class time to fill them out. But even with that class time built in, some students—knowing I would never see the tracker—didn’t put in much effort. And the more I considered it, the more I thought it would be helpful for me to see their notes as the semester went along.
Next time, I may roll the progress tracker and the periodic self-assessments into one document/activity. I could create a shared document for every student, ask them to keep up with the tracker regularly, and designate specific times during the semester when I would take a look at each tracker and leave comments for students about their progress. I think this might make progress-tracking/self-assessment simpler and more efficient for all of us.
More structure for late work and revision = a better experience for everyone.
In my first ungraded course at UM last spring, I used an assignment extension form for late work, and I let students set their own revision schedules. I wouldn’t say this was a bad idea, and I think it would work well in other contexts. But it didn’t work as well as I had hoped for my first-year students.
In the fall, I switched to a late token system that put a (generous) limit on the number of assignments students could submit late and (in theory) gave them only a few days past the original assignment deadline to submit their work. Because there was some grace in the grading system to allow for occasional missed assignments and provide multiple opportunities to submit major assignment revisions, the late tokens worked much better. Giving students solid target dates for revisions, rather than asking them to set their own revision schedules, did require me to sacrifice some flexibility. But because students could continue working on any major assignment until the submission of their final portfolios, I think the system worked well.
Lesson learned: for the first-year students I teach, more structure is better.
Modeling was key.
I find that I can never do enough modeling. I haven’t yet hit a point of diminishing returns on it.
Most of the modeling I’ve done in the past has been giving students samples of the writing assignments I ask them to undertake and discussing those samples with them in class. Or showing them my own writing process whenever possible. But this semester, I started providing more models for metacognition and self-assessment as well.
For example, when I asked students to write their extended self-assessment at the end of the semester, I gave them two examples to review: an anonymized self-assessment from a former student (shared with permission) and a self-assessment generated by ChatGPT using my assignment prompt. I think discussing these two examples in class led to more robust self-assessments from students at the end of the course. This is definitely something I’ll do again.
Generative AI was not a big issue—and I think ungrading helped.
I am quoted in this recent Chronicle piece on generative AI in the classroom saying, “I have not once this semester suspected a student of passing off AI-generated material as their own work or otherwise using AI inappropriately.” It’s true.
I wrote about some of the factors that may have helped on the platform formerly known as Twitter and on Bluesky. I’ll also have more to say in a later post about my students’ perspectives on generative AI.
But I think one of the biggest reasons my students were so uninterested in using generative AI was ungrading itself. Since the course was ungraded, students had many, many opportunities to improve their work based on my feedback, so they didn’t feel pressured to get it exactly right the first time. I think ungrading also helped me build healthy relationships with students and cultivate transparency and habits of open and honest communication. They seemed to trust me to be fair about their grades and just didn’t seem to feel there was a need to outsource their work and learning.
In case you missed it, I collaborated last May with students from my spring course to write a piece about the relationship between alternative grading and academic honesty in the age of AI.
Everything in the piece still stands. There is no one solution to the academic integrity problems introduced by new technologies. But changing our grading systems is one important piece of the puzzle.
Best of luck to all of you who are preparing for new courses! I’ll be back in two weeks’ time with a series of posts that share student comments from my fall course—starting with student perspectives on ungrading. Stay tuned!