1. A Simulation Theory of Musical Expressivity, by Tom Cochrane.
What makes sad music sad? This question keeps coming up. A head-on approach can frustrate, so let us approach it from the side instead, asking how we come to hear an emotion in a piece of music in the first place. Maybe uncovering the nature of that process will be easier than brute-forcing our way to the nature of the thing the perception is a perception of.
Here is a good starting hypothesis: the method we (or our minds) use to hear emotions in music in some way hijacks, or piggybacks on, the method we use to judge what emotions other people, or we ourselves, are feeling.
Both of those methods, in standard cases, start with perception. You come to know Jones is sad by seeing his face and observing his behavior. And you come to know that you are sad by perceiving your own behavior (“from the inside,” as it were), and, more generally, by perceiving changes in your body. When it comes to others’ emotions, you by definition have only their outwardly observable behavior to go on, but when it comes to your own, your evidence is more expansive. You can know you are angry by sensing the tension in your muscles; you don’t have to feel the furrow in your brow.
Here is a picture of your mind. It contains an “emotion detection module” (EDM); perceptions of someone’s expression, posture, and behavior are fed into the module; then the module “outputs” an opinion about what (if anything) that person is feeling.
Also inside your mind is an “emotion self-detection module” (SDM). Perceptions of your own body are fed into it; then it outputs an opinion about what (if anything) you are feeling.1
How can the EDM be used to detect emotions in music? “Easy,” you might say; “just feed it perceptions of the music instead of perceptions of someone’s expression and behavior.”
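Since the argument leans on this input/output picture of the modules, a toy sketch may help fix ideas. The Python below is purely illustrative: every type, feature, and threshold in it (OutwardPercept, MusicPercept, the tempo cutoff, and so on) is invented here for the example, not drawn from Cochrane or from anything above. It renders the EDM and SDM as functions from percepts to opinions, and the “hijack” proposal as handing the EDM a music-derived percept dressed up as a behavioral one.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OutwardPercept:
    """What you can observe of someone else: posture, behavior, etc."""
    posture: str    # e.g. "slumped" or "upright"
    behavior: str   # e.g. "sighing" or "striding"

@dataclass
class BodilyPercept:
    """What you can sense of your own body 'from the inside.'"""
    muscle_tension: float  # 0.0 (relaxed) to 1.0 (tense)
    heart_rate: int        # beats per minute

def edm(percept: OutwardPercept) -> Optional[str]:
    """Emotion detection module: outward signs in, opinion out."""
    if percept.posture == "slumped" and percept.behavior == "sighing":
        return "sadness"
    return None  # no opinion: nothing in particular detected

def sdm(percept: BodilyPercept) -> Optional[str]:
    """Emotion self-detection module: bodily signs in, opinion out."""
    if percept.muscle_tension > 0.7 and percept.heart_rate > 100:
        return "anger"
    return None

@dataclass
class MusicPercept:
    """Invented musical features; stand-ins for whatever we actually hear."""
    tempo_bpm: int
    contour: str    # e.g. "drooping" or "rising"

def hear_emotion(music: MusicPercept) -> Optional[str]:
    """The naive 'hijack': dress the music up as behavior, reuse the EDM."""
    as_if_behavior = OutwardPercept(
        posture="slumped" if music.contour == "drooping" else "upright",
        behavior="sighing" if music.tempo_bpm < 70 else "striding",
    )
    return edm(as_if_behavior)

# On this toy mapping, a slow, drooping passage comes out "sad".
print(hear_emotion(MusicPercept(tempo_bpm=60, contour="drooping")))
```

Nothing in the sketch settles how the translation from musical features to as-if behavior could actually work; that is exactly where the hard questions begin.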