Welcome to the latest entry in my ongoing pedagogical experiment with the celebrated/notorious AI text generator, ChatGPT. Each week, I ask the program the same questions I ask the students in my liberal arts course at the University of Michigan, ‘Music & Meaning In Our Lives’. Last week was exciting because ChatGPT received an update that made its work more impressive. Nevertheless, and especially in the context of this course, it struggles to excel, and, often, makes egregious errors.
This week, I will zoom out a little bit, both because I continue to experience technical issues accessing the program, and because ‘Music & Meaning In Our Lives’ is in a bit of a lull as we close out the term’s first unit and move to the second, ‘Music & Identity’ (something I am very excited to write about).
To what end?
I want to continue some of the commentary on the broader discourse about the program’s impact on teaching and education that I included at the end of my last post in the series. To start, I will reflect on my positionality in relation to this topic, because educators in different circumstances will probably feel differently than I do. After this, I will discuss more about ChatGPT’s ostensible utility as a cheating tool and the misunderstandings that I see motivating the outcry about ChatGPT’s threat to education (at least at the college level).
I am fortunate to teach courses with 25 or fewer students. I am also an adjunct faculty member, which means teaching is not my full-time job, and I am not compensated enough by my employer to spend all of my work time each week reviewing, grading, and designing new assignments. Additionally, my biggest pedagogical influence is bell hooks: I am not interested in the ‘banking method’ of education, nor in dominating my students through my instruction. So, as I have made clear throughout this series, the assignments of mine that ChatGPT is well-suited to complete are simply not a high priority to me and my broader work as an educator.
Coming from this perspective, I believe work that asks students to regurgitate information, the kind of thing ChatGPT can do with some success, is antithetical to actual learning. I think this TikTok video that describes differences in students’ ‘learning’ and ‘performance’ mindsets does a good job of explaining what I mean here: these kinds of assessments reinforce behavior that does not support real learning, and, as a result, they do not measure students’ learning very well. Yesterday, this tweet about ChatGPT supposedly ‘passing’ the “Wharton MBA Exam” (there is no sourcing, so this very well could be made up entirely) went viral:
This user sees this (alleged) development as evidence that education must dramatically change, but I would say, if this actually happened, all this outcome indicates is that the exam in question is an ornament and does not assess learning very well at all (and, who graded it? What rubric did they use? I have a lot of questions about this).
The perspective advanced by the above tweet is that ChatGPT represents a threat to education, which is now vulnerable to fraud due to this software’s apparent prowess. But, what might motivate students to use this tool in the first place? Many other instructors pursue the kind of liberatory pedagogy I learned from bell hooks (a concept originating in the work of Paulo Freire), which devalues the kind of assessments that ChatGPT is most capable of completing. In addition to this, there is another broader trend in collegiate pedagogy called ‘ungrading’. Among many aims, ‘ungrading’ argues that the stipulated requirements of traditional assigned work are unjust and should be approached as flexibly as possible, if not altogether eliminated. This means due dates are less meaningful and grading criteria are much more individual, to the extent that students can design their own assessment parameters.
If a course is taught by an instructor who embraces ‘ungrading’, or treats deadlines flexibly, and/or teaches in a manner that either avoids or lowers the ultimate stakes of the sort of assignments that are vulnerable to ChatGPT’s writing abilities, then students may not consider this software very valuable. This is especially true if accessing ChatGPT continues to be as inconsistent as what I have experienced this week. College students have other ways to cut corners in their coursework, and it is possible that whatever more reliable systems pre-date ChatGPT in this arena will continue to be relevant despite the apparent ease of use and verisimilitude of AI-generated text.
With this said, I want to emphasize that the issue here is not totally pedagogical. Any insinuation that instructors alone must change how they work to parry the ostensible lunge of ChatGPT into classrooms overlooks the real-world burdens that impinge on students’ time, such as high tuition, student debt, food insecurity, and unaffordable housing. These are problems that universities have the power and resources to relieve, but, generally speaking, do not. As a result, students may simultaneously enroll in as many courses as possible (to complete their degree quickly and incur less debt) while also working as much as they can outside of school (to defray the costs of housing, tuition, and life). In this case, ChatGPT is (and should be) incredibly appealing, as, in contrast to other extant modes of cheating (like paying someone to write a paper), it is free and very fast to use.