Autotuning and the ‘humachine’. Exploring new means of technological expressivity #arp2015


Anne Danielsen, University of Oslo, Norway

Abstract: During the last decade the digitally pitch-corrected voice has repeatedly been used to express human conditions of alienation, numbness, emotional distance, or flatness, in particular in hip-hop and related musical styles. In this paper, I will give an analysis of some recent examples of expressive use of autotuning and discuss the ways in which this technology—which in many ways seems inhuman and mechanistic—seems able to capture certain human states or conditions better than the unmediated voice, the most human of all instruments. Autotuning, then, has complemented the human repertoire with new sounds. In the second part of the paper, I will discuss to what extent this and related tools might be considered part of a new and radical stage in the interaction of human and machine in popular music history—a stage that might be characterized by a decisive undermining of the traditional separation between man and machine in music production.

Anne’s presentation opens with a history of Autotune, telling the now familiar story of Cher through T-Pain, Kanye West and Gaga, although the presentation does not mention its current ubiquity as a general (and more subtle) pop production tool. We first hear West’s ‘Heartless’ (2008):

The fast-attack robotic Autotune effect here is consonant with the album's lyrical themes (emotional distance, loneliness, etc.). Kanye's robo-character is described as 'hyper-embodied' (after Stan Hawkins).

The next example is Gaga's 'Starstruck', a song whose lyric refers in part to record production. Gaga's individual and artist personas are contrasted with each other and with the 'evil Gaga' character in 'Starstruck'. The song subverts the idea of the relationship between the life and the work of the artist, and Autotune contributes aesthetically to the listener's impression of contrivance.

Next we hear the a cappella/autotuned section from Bon Iver's 'Woods' with its (deliberately bad?) Autotune application – shown by diatonic wobble on the note sustain. To me this is the most interesting aspect of the presentation (although Anne does not have time to explore it in detail) because it is a software artefact: the wobbles are difficult for the performer to control meaningfully in terms of their rhythmic placement within each note's duration. The lyric content of 'Woods' is connected with Thoreau's 'Walden', aka 'Life in the Woods'.
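The mechanism behind that wobble can be sketched as naive pitch quantisation: when the sung pitch drifts around the boundary between two scale notes, a corrector with an instant retune speed snaps back and forth between them. The toy below is only an illustration of that principle, not Auto-Tune's actual (proprietary) algorithm; all values and names are mine, not the presenter's.

```python
import math

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of the target scale

def quantize(midi_pitch):
    """Snap a fractional MIDI pitch to the nearest C-major scale note."""
    octave = midi_pitch // 12
    candidates = [octave * 12 + pc for pc in C_MAJOR]
    candidates.append((octave + 1) * 12)  # include the C just above
    return min(candidates, key=lambda n: abs(n - midi_pitch))

# A sustained note whose pitch drifts around 64.5 -- the boundary
# between E (MIDI 64) and F (MIDI 65) -- with a little vibrato.
frames = [64.5 + 0.3 * math.sin(2 * math.pi * 5 * t / 100) for t in range(100)]
corrected = [quantize(p) for p in frames]

# With zero retune time, the corrected output flips between the two
# adjacent scale notes as the input crosses the boundary -- the
# audible "diatonic wobble".
print(sorted(set(corrected)))  # [64.0, 65.0]
```

In real pitch correctors a slower retune speed smooths this snapping into a glide; the hard, zero-latency setting is what produces both the robotic timbre and, on an unsteady sustain, the wobble the performer cannot precisely place in time.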

The presentation alludes to Heidegger, observing that Autotune technologically enhances humans while widening the palette of expression. Anne suggests that (extreme/fast attack) Autotune is likely to be remembered as the sound of the 2000s.
