Welcome to the second entry in my experiment asking ChatGPT the same questions I present to my students in my ‘Music & Meaning In Our Lives’ course at the University of Michigan. The first post introduced the series and centered on the question, ‘what is music?’. I went back and forth with the AI text-generator to explore the implications of the initial definition it provided me. ChatGPT did fine (I gave it a ‘C’), but not without some serious concerns.
This week, I have much more specific subject matter to discuss with ChatGPT: the components of my course’s framework for analyzing music, dubbed “The Parameters of Music”. The software handles these prompts very differently and the results are very interesting, so let’s get right into it.
Remember to subscribe below so you follow along on this pedagogical journey for the rest of the semester!
“The Parameters of Music”
Although ‘Music & Meaning In Our Lives’ is a liberal arts course, music analysis is a huge component of what I do with my students. There are many reasons why I emphasize this, but one of the most important is that this is a new skill my students can learn, practice, and take with them at the end of the semester. I want my students to begin/continue self-actualizing as listeners, and teaching them techniques that enable them to articulate how and why they find certain musical materials meaningful is a critical step in that process.
The style of analysis we use is very different from traditional music theory courses because musical training, performance experience, and the ability to read notation are not required. Our approach is entirely aural, descriptive, and relies on only five technical terms. To be clear, I did not invent this scheme: I learned it in 2016 when I began my nonsequential five-semester stint teaching a handful of music theory courses at Western Michigan University in Kalamazoo, MI. I loved teaching in WMU’s music theory curriculum, which was designed by two amazing composers on faculty, Christopher Biggs and Lisa Renee Coons (I’m a composer, too, and I tend to like the non-dogmatic, pragmatic way many of us approach music theory). To clarify further, the analytical framework I’m about to describe is also not Chris and Lisa’s idea, but it was completely new to me when I first encountered it.
The form & analysis course I taught at WMU employed ‘The Parameters of Music’, which are specific categories of musical sound, to analyze structure through the expression of some or all of these specific components over the course of a piece or given passage. I found this method rather innovative, particularly in the way it enabled me to practice the same analytical process with my students regardless of the music’s style or aesthetics. This exact characteristic has proven very advantageous to my ‘Music and Meaning in Our Lives’ course because we can listen to any music someone wants to bring to the class and not worry about whether it will fit well with our techniques.
The Parameters of Music are: timbre, melody, texture, rhythm, and harmony. The biggest difference between teaching this approach to music majors at WMU and liberal arts students at the University of Michigan is that, now, I never bring sheet music into the classroom. As a result, harmony comes up much less frequently with my ‘Music and Meaning in Our Lives’ students, except occasionally in terms of harmonic rhythm. But, everything else works well, especially timbre, texture, and rhythm, which all benefit from being both aurally salient and relatively easy to describe without additional technical terminology and considerations.
ChatGPT defines “The Parameters of Music”
For this entry in my ChatGPT series, I’m going to ask the program to explain each parameter and assess the quality of its responses. I will present all its definitions in succession and then comment on them, beginning with ‘timbre’:
Now, ‘texture’:
Now, ‘melody’:
Now, ‘rhythm’:
Finally, ‘harmony’:
To begin, there is some strong content here. The definitions for ‘timbre’ and ‘melody’ are impressive, and the first paragraph of the response for ‘texture’ is also very good. The ‘melody’ response is probably my favorite. I like the way ChatGPT immediately discusses melody as a distinct musical object in terms of texture and form. Melodies must stand out, even in the densest counterpoint (this is, after all, part of the allure of excellently composed polyphony). I also think it is very good that the ‘timbre’ definition discusses the overtone series (well, “spectrum of overtones”, which conflates the real terms ‘overtone series’ and ‘frequency spectrum’). Another interesting characteristic of the ‘timbre’ response is that, despite receiving essentially the same prompt, ChatGPT produced only one paragraph when it generated three for each of the others.
I vacillate about the quality of the answer for ‘harmony’. That ChatGPT immediately discusses the concept of harmonic progression and motion is interesting and unexpected. Emphasizing the linearity of our experience with harmony seemingly prioritizes the listener’s experience of harmony as an unfolding sequence of sounds. This contrasts with the way harmony is often presented in traditional music theory instruction, which focuses on identifying isolated chords (thus, emphasizing their verticality as stacks of notes) before anything else. Beyond this, however, the definition ChatGPT provided is rather vague. The more I read, the more it seems to teeter between lacking specificity in order to stifle bias and lacking specificity because it is unsure what to say. In one respect, this generality enables ChatGPT to avoid invoking specific styles, or using the way harmony operates in classical music as the foundation of its response, which is common in textbooks and broader discourse. But, I think this virtue is accidental.
The other problems I find in ChatGPT’s offerings are both explicit and implicit. To continue with ‘harmony’, there is word salad in the second paragraph: “tension”, “resolution”, and “dissonance” are all concepts related to harmony, but the sentence structure here is odd. Much like something I noticed in this series’ first post, the highlighted sentence reads almost like a translation error:
You may also have noticed the small but glaring error in the program’s misuse of “effect” when “affect” may be more appropriate (it is a little hard to determine, given that “tension, resolution, and dissonance” are neither effects nor affects). This same mistake occurs more clearly in ChatGPT’s definition of ‘texture’:
These two moments illustrate not only a recurring error in the software’s language selection, but also one of multiple obvious parallelisms that are present across different responses. This invocation of e/affect occurs in the second paragraph of both the ‘harmony’ and ‘texture’ definitions, and the entries for ‘melody’ and ‘rhythm’ also refer to, “a wide range”, of things in their second paragraphs, echoing form and content from the other passages I’ve examined.
The phenomenon of ChatGPT generating similar language in response to separate prompts appears more concerningly in other interactions I’ve had with the program that I will discuss later in this series. For now, this tendency, which may be a bug that the software works out over time, undermines the program’s utility as a device for cheating. I think panicked instructors can relax. An even more obvious problem with ChatGPT’s utility to mischievous students is that it produces the same response if you pose a prompt more than once. I am not sure if this occurs across different users or is only contained within one user’s (my) interactions, but this characteristic ostensibly makes cheating with ChatGPT impossible.
Final Grade: F
It may surprise you that ChatGPT has failed this week. After all, I think there is some good in its responses (I even characterized some as, “strong”!). But, I can identify problems with almost all of the definitions I’ve discussed to this point, and we haven’t fully addressed ‘rhythm’, which is pretty bad. Moreover, my students in ‘Music and Meaning In Our Lives’ do much more with The Parameters of Music than simply produce a good definition for each component. If ChatGPT can’t define these concepts without error, I don’t expect it will be able to apply them at all.
The program’s definition of ‘rhythm’ (reproduced below) is, in my view, completely unacceptable, despite the fact that some parts are appropriate. From the outset, it is clear that its training on this topic is totally unsatisfactory:
Yes, rhythms often take the form of patterns, but is “silence” important here? That is not a critical component of rhythm, even though rests can take on a rhythmic role in some musical examples. Additionally, the second sentence is far too general: what it says is not necessarily erroneous, but the definition’s failure to be meaningfully specific at this point represents a major weakness. The ancillary terminology ChatGPT introduces in the second paragraph (“tempo”, “meter”, and “accent”) is, generally speaking, relevant to this topic, and its definitions are accurate, if brief. But, by contrast, the adroit specificity here emphasizes the mushy bricolage that is the rest of the response. Finally, as I noted at the end of the previous section, we can see the repetitive use of “wide range” in the second and third paragraphs.
In my view, a good definition of rhythm would talk more directly about the concept of musical time. I tell my students that, “rhythm is the performance of duration”, and we also talk about the way rhythms are structured in an abstract sense (i.e. short/faster v. slower/longer) and in the context of meter. Rhythm, meter, and syncopation tend to be the technical musical concepts my students engage with most frequently and successfully in this course, probably because they are very aurally salient, and there is so much consistency in how rhythm is expressed in the music they tend to listen to (pop, rock, and hip-hop). However, ChatGPT really seems not to understand rhythm, which seems like a reflection of the materials it learned from; perhaps this definition will improve in the future.
It is important to keep in mind that ChatGPT cannot listen to music, so, as far as I understand, its experience of musical concepts is based on textual descriptions. The music academic in me could go on a tangent about how, considering this, it isn’t surprising that rhythm eludes the program, as Euro-centric biases inherent to English-language music scholarship have deemed rhythm less important than other Parameters of Music, such as melody (remember how strong the ‘melody’ definition was?). Yet, I also wonder if the software’s non-corporeality plays a role here, or if this characteristic will ultimately act as a ceiling on its ability to understand rhythm, which is communicated not only aurally but also through embodiment. Rhythm is something we feel in addition to hearing it in the music we listen to, and I’m not sure how ChatGPT will ever be able to experience that.
As I said at the beginning of this section, I don’t ask my students to simply define The Parameters of Music; we also work on using this style of analysis to connect stylistically disparate examples. Next week, I will show you what happens when I ask ChatGPT to attempt this activity. Remember to subscribe to follow the series!