The academic world is aflutter with discussions of ChatGPT’s impact on course design, especially the design of writing assignments. I, too, have been thinking about this quite a bit and will have some more specific thoughts about ChatGPT in the future. For now, though, I wanted to discuss a couple of approaches I have taken (or am taking) toward writing assignment design that have nothing directly to do with ChatGPT. However, as I’ll suggest, these approaches happen to also be helpful in managing some of the potential problems raised by ChatGPT. The upshot, then, is that course and assignment design aimed at the traditional goals of education - student learning, developing thinking and writing skills, etc. - can also help us manage the supposed menace of things like ChatGPT.
I will focus on three main elements of my course design:
Regular in-class informal writing
Regular out-of-class exploratory writing
The use of Specifications Grading for formal writing
In-Class Informal Writing
In the past, I have often started a class session with a writing prompt related to past ideas from the class. And I have often ended a class session with a writing prompt that encouraged review and synthesis of ideas from the session. Both of these techniques are well supported in the teaching and learning literature. However, I just prompted students to do it and gave them time. They recorded their thoughts in their notebooks or laptops or phones, or just tuned out for a few minutes. There was no accountability. Rather, I just tried to persuade them of the benefit of the task and told them their learning was up to them. Additionally, although I told them to respond to the prompt without reviewing notes or readings, I did not forbid it, and so many just looked back to record their thoughts (or perhaps looked for the first time).
I’m sure many readers can predict the results of this approach. A fair number of students would regularly arrive to class 5-10 minutes late, since from their perspective the priming activity wasn’t “important” (it is worth noting that late arrival is a general problem at my university, so I don’t actually know how much of it was specifically related to this activity). Often, these were the students who struggled the most and so would most benefit from the activity. Even among students who were present, those struggling the most typically didn’t engage in the activity. Similarly for end-of-class prompts: although no student would just leave early, a number would simply use the time to pack up, finish their online shopping, or whatever. And, again, most of the students who didn’t take my offer were precisely the ones who most needed it.
So, the change this semester is twofold: incorporate these sorts of writing activities into the grading system of the course; and incorporate related writing prompts throughout the class session, rather than just at the beginning and/or end. Beyond the obvious benefits of this - providing accountability, incentivizing performance, etc. - there are additional benefits discussed in the literature on informal writing.
First, prompts that begin a session by having students connect course ideas to their own lives or otherwise allow them to explore an issue from their own perspective have been shown to increase buy-in to the class and content. Ideally, this means more students are curious about the stuff we will then discuss in class and this curiosity will prompt attention and focus. The importance of provoking curiosity, and its connection to attention, are discussed at length by James Lang in Distracted.
Second, prompts throughout the session can break up class activities (especially lecture) in a way that also tends to promote greater attention. And, of course, these prompts are opportunities for the students to do something with the information they have just learned, which is key to making it stick. They can also provide shyer students an opportunity to collect their thoughts before a small-group or full-class discussion.
Finally, the writing prompts - no matter when they show up in a class session - can be oriented toward having a student struggle with an important question or issue that I might expect them to later grapple with in more formal writing. In this way, we are modeling the writing-as-thinking process. Students need to engage in some brainstorming, false starts, etc. with ideas before they can really settle on the ideas they’d want to share outward. But often courses aren’t set up to allow that. These sorts of in-class writing prompts provide such an opportunity.
I’ll say more about how all these approaches work together to respond to ChatGPT concerns below, but one of the core ideas with these assignments is that by prompting low-stakes brainstorming and thinking on ideas that will show up in formal essays, when students get to those formal essays they won’t feel as out of their depth or otherwise anxious about the assignment. Those sorts of feelings are one common cause of cheating.
Out-of-Class Exploratory Writing
I am also assigning, for the first time, out-of-class informal and exploratory writing. In my Engineering Ethics courses, I am calling these “Weekly Wonderings” and in my Philosophy of Law course I am calling them “Wonder-full Writings”. In both cases, I present them to students as an opportunity to wonder precisely because (following Plato & Aristotle) philosophy begins in wonder.
In many ways, these assignments are similar to the in-class writing. But they should provoke a greater amount of writing (somewhere between 250 and 500 words) and a greater depth of writing. In particular, the “Weekly Wonderings” in Engineering Ethics involve looking back to something from the previous week and digging into it more. So, in this way, students will have already had the ideas from class and now get to explore them. The Philosophy of Law course is different, as there the idea is to get students wondering about a reading before we discuss it in class. But, as an upper-division course, the aim there is to guide students in formulating critical philosophical questions on their own.
Like the in-class writing, though, these assignments are low stakes. The “old college try” is really the guide for grading. But also like the in-class writing, I can design the prompts to encourage students to start thinking and writing about issues that I will later ask them to discuss more formally. And so, once again, part of the aim is to increase student confidence in a way that makes them feel much more prepared to successfully complete formal writing assignments. In the case of Philosophy of Law, the writing should also help students prepare to be actively engaged in class, much like a 1-minute paper would, but with more depth.
One key thing with this exploratory writing is that the “old college try” idea is really about demonstrating a meaningful attempt to grapple with the issue and think about it. It isn’t about the writing quality, it isn’t about being accurate, and it isn’t about communicating clearly to others. As such, many of the things about writing assignments that scare students the most are eliminated and, with them, much of the urge to cheat (with ChatGPT or otherwise).
Specifications Grading for Formal Writing
Specifications Grading has been gaining ground in academia for a while now. And I started using it early in my academic career. But as I started seeing the concerns other faculty were raising about ChatGPT, I continually came back to “well, if you use specifications grading, that won’t really be an issue”. So, more directly than the previous two design techniques, the use of “Specs” grading on formal essays can combat the use of ChatGPT.
You can find plenty of discussions online about Specs grading as a whole, but two things are relevant here: a set of ‘specifications’ that an essay must meet to “pass”; and an opportunity to revise essays that do “not yet” meet those specifications. The specifications function like a “one level rubric”: rather than assigning points or whatever for each element, an essay must meet all specs to pass. Importantly, the common recommendation is to set these specifications such that a paper that meets them would (at a minimum) be something like a ‘B’ or ‘B+’ paper in a more traditional system. I don’t think that is the greatest guidance, since it assumes there is some “form” of a “B” that all our systems partake in, but it can be a useful heuristic.
So, my formal essays have a set of “specs”. These are provided to students in advance, which allows them to more or less use the criteria as a ‘checklist’. Some of the specs are obvious - a student referring to them and checking their own work would definitely know whether they met them or not. Others are more judgment-based. But the benefit of the judgment-based ones, if students use the specs list as a ‘checklist’, is that doing so helps students develop the ability to assess their own work. And then they will get to compare their assessment to an expert’s when their essay is returned. So, one of the major learning benefits here is just that: developing self-regulated learners who can assess the quality of their own work.
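The “one level rubric” logic is simple enough to sketch in code: an essay passes only if every specification is met; otherwise the result is “not yet” along with the list of specs still to address in revision. The spec names and checks below are invented for illustration (real judgment-based specs obviously can’t be automated), but the sketch shows the all-or-nothing structure of the evaluation.

```python
# A minimal sketch of one-level ("specs") grading. Spec names and the
# crude automated checks are hypothetical stand-ins for a real rubric.

SPECS = {
    # essay has at least three paragraphs (separated by blank lines)
    "uses_paragraphs": lambda essay: essay.count("\n\n") >= 2,
    # essay is at least 250 words long
    "meets_length": lambda essay: len(essay.split()) >= 250,
    # essay includes an outline section
    "includes_outline": lambda essay: "Outline:" in essay,
}

def grade(essay: str) -> tuple[str, list[str]]:
    """Return ("pass", []) if every spec is met,
    else ("not yet", [names of unmet specs])."""
    unmet = [name for name, check in SPECS.items() if not check(essay)]
    return ("pass", []) if not unmet else ("not yet", unmet)
```

The key design point is that there is no partial credit: `grade` never returns a score, only “pass” or “not yet” plus a to-do list, which is exactly what makes revision (rather than point-haggling) the natural next step for the student.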
The other learning benefit of specs grading, which Linda Nilson identifies in her book presenting the system, is that it promotes “rigor”. Now, I agree with many of the ungrading advocates who see “rigor” as a bad word in the way it is often used as a cudgel. But there is some good and useful idea hidden in there. Basically, by not allowing “half-assed” work to receive any credit at all, you effectively eliminate any incentive for doing such work. The result is that students take the assignment more seriously from the start and, as a result, produce better work. This is because, as noted above, the ‘passing’ level is something like a ‘B’ or ‘B+’ quality paper.
As an illustration of the power of this: before I started doing specs grading in my Engineering Ethics course, many of the papers I received were a single paragraph over 2-3 pages. My eyes bled. Once I started using specs grading and included “uses paragraphs” as a specification, lo and behold, every student magically knew how to use paragraphs! It wasn’t that some simply didn’t know before, it was that they knew they could still do well enough without worrying about it.
Finally, the “pass” or “not yet” with revision system greatly enhances student learning. Having to review their own work, review the feedback, and revise in light of it is something many students have never been asked (or, more precisely, forced) to do. But so much deep learning can happen there. Indeed, many of my students have reported, over the years, how much better they became at writing as a result of the revision process.
Fighting ChatGPT without the Cop-Shit
One way of responding to concerns about ChatGPT is to double-down on the authoritarian red-queen style arms race that has guided much of the academy’s response to new technologies. I don’t like that approach for many reasons, some of which I’ll discuss in later posts. For now, I just want to suggest that the design elements I have discussed above can all be understood as means of fighting ChatGPT, even though I didn’t have that in mind when I decided to use any of them.
Writing as a (Thinking) Process
The first way these techniques can fight the desire to use ChatGPT is that they make visible the fact that writing is a process. In particular, it is a thinking process. When I ask students to write even a formal essay, I don’t see it as the “end of learning” where the whole goal is just to show me what they know. There is some of that, of course, but I also see it as part of the overall learning process. If we can get students to understand the contribution writing makes to their thinking and learning, it can help them see why cheating really is cheating themselves.
Now, importantly, to really get student buy-in on this, the idea of writing as a process has to apply not just to informal writing that may contribute to formal writing. It also has to apply to the formal writing itself. Hence the “pass”/”not yet” approach to evaluation.
Increasing Student Confidence
I’ve already discussed this briefly above, but the other way these approaches can help is by improving student confidence. To become a better writer, you have to practice writing. To become a better thinker, you have to practice thinking. But so often our courses are designed in ways that don’t provide observable opportunities to think (or write). Thus, the informal writing here goes a long way to correcting this common deficiency. Additionally, the ability to revise formal essays, especially when combined with positive feedback, can reduce anxiety and improve confidence. And, once again, a lack of confidence in one’s abilities is a major cause of cheating.
Regular informal and exploratory writing doesn’t only improve student confidence in their writing. It should also improve their confidence in their understanding, which will carry over to the formal essays. If students feel like they understand things well, then they won’t feel (as much of) a need to seek out inappropriate assistance. When a student sits down to write a formal essay, now they will have already been thinking about the issues to some degree through the informal writing, and they will have access to the informal writing from which to pull ideas.
Specifications against ChatGPT
A final check comes with the formal essays. Obviously, the real hope is that through the informal writing students will feel confident in their abilities and have a desire to keep digging into the issues such that they won’t want to resort to ChatGPT or other forms of cheating for the formal essays. But, just in case, the specifications approach functions as a way of making it simply not worth it to use ChatGPT.
I noted earlier that “half-assing” a paper stopped being ‘worth it’ when I switched to specs grading. The same could apply to ChatGPT. Most of the faculty discussions about ChatGPT that I have seen suggest that the ChatGPT output represents a “C” or perhaps “B-” quality essay. Following the specs grading heuristic, that wouldn’t pass for me. More broadly, faculty are regularly noting what ChatGPT does well and what it seems to struggle with. To the extent that something it struggles with, like taking a position and defending it, is a relevant learning outcome for your students, one way to check against the use of ChatGPT is to ensure that that sort of thing shows up in your specifications.
To add to this, the fact that students must revise in light of feedback if they do not pass also means that a student who attempted to use ChatGPT to pass is now in a potentially poor situation for revision. They don’t really know what was said in the first place, and so if I tell them they need to revise to (for instance) justify the claims being made or to establish a position on the issue, doing so will require really thinking about what the assignment is asking for. So it isn’t as if the revisions will be pro forma. Even if a student ends up still using much of the text from ChatGPT, the thinking they will have to go through to convert it into something that passes will likely mean they have learned quite a bit. Alternatively, they will have to put so much work into the revision that they will learn it didn’t save them any real time at all to use ChatGPT in the first place.
Finally, there are two new things I am doing this semester with my formal essays that also work against the use of ChatGPT. First, one of the specs for the formal essays is the inclusion of an outline. Now, I know that ChatGPT can produce outlines, etc., and so I don’t think this is a super great check all on its own. But it is part of my broader philosophical aim of making visible the writing process and emphasizing to students the importance of going through that process. Second, I am requiring any student who plans to revise an essay they did not pass to visit our university Writing Center or meet with one of my student assistants (in Engineering Ethics, I have “Engineering Peer Teachers” who help students with papers; unfortunately, they aren’t TAs and cannot grade them). Again, I am not doing either of these things because of ChatGPT. In both cases, it is about deepening learning and reinforcing the process of writing. But both can potentially check against the use of ChatGPT. Neither is perfect, but both do something.
Conclusion
One of the core philosophical ideas I have for thinking about when the use of AI is appropriate emphasizes the purpose of the activity and, in particular, whether the process is essential to the purpose or whether the purpose is exhausted by the product. I’ll talk about this in more detail later. For now, I mention it to indicate how much of what I have said above emphasizes the process of writing and how I use writing assignments to prompt a thinking process in the students. This means that if we want to achieve the purpose of the writing assignments, then the students and their minds are irreplaceable. The process is the point; the product is almost incidental, as it is really just a way of prompting the process and providing observable evidence of it.
I think if we work to emphasize that more - and it is something that many writing scholars have already been advocating for decades - then we can simultaneously cut against the (inappropriate) use of ChatGPT and enhance student learning and trust.