Anatomy of the hit: Ariana Grande’s ‘7 rings’


As a music theory geek, I love to get inside songs and figure out why we like them. There’s something beautiful about the ability of a mainstream hit to bring people together. And when the songwriter and singer is as extraordinary a talent as Ariana Grande, we can be sure we’re putting the very finest pop product in our ears.

So let’s dive in, intro first, middle bit in the middle, and outro at the end, as has been the way since the dawn of time.

[Embedded TIDAL video]

Intro – 8 bars [0:00]

We hear a single reverb-drenched synth sound playing half notes, with occasional 8th-note passing notes, and no indication of what’s to come. That’s sparse, even for a trap-pop intro. At this point, we don’t even know whether we’re hearing 140 BPM (fast pop) or 70 BPM (slow ballad).

Optimal distinctiveness and the songwriting singer #arp

Davey discusses songwriter identity in the context of optimal distinctiveness theory, and uses this to frame some popular music within the known teen phenomenon of ‘I loved [that band] before they were famous’. He uses the famous example of iMacs that looked like furniture – the novel and the familiar are balanced to create consumer need.

Popular music is perceived to come out of ‘scenes’ – genres, fashions and subcultures – and necessarily has different audiences, who in turn require identity, categorisation and distinctiveness (Zuckerman 2014).

In Davey’s auto-ethnographic research, he has created four albums over eight years; two of these gained traction, and two faded away. He analyses each project according to its distinctiveness, genre, novelty, conformity and so on (via the ODT framework above).

We now hear ‘Memory is a Weapon’ (CousteauX, 2017), from Davey’s reboot of his turn-of-the-century band Cousteau. The journalistic feedback and reviews triangulate the product’s perceived distinctiveness. Assimilation (conformity to expectation) is contrasted with Differentiation (challenge to expectation) – for example, the torch-singer persona of Cousteau’s work becoming the rogue-ish character of the CousteauX reboot. This is in the lyric mode of address (first person, reflective, confessional). Most of the rest of the album is in the dramatic mode of address (quasi-second person – addressing the audience as if they were present, or speaking to somebody else while positioning the audience as witness).

The journalistic responses agreed with the intent, reliably highlighting words such as ‘dark’ and ‘brooding’.

Microrhythms and Microsounds in African-American Popular Music

I always love to hear Anne speak. Alas, I live-blogged her entire hour-long keynote today, complete with examples, and due to a horrible WordPress browser fail (including no success with autosave reversions) I lost all the text and examples!

So, to recreate it from memory: Anne discussed some of the musical characteristics of black popular musics, as articulated by Wilson (1983), and then used these to trace a 50-year timeline of rhythmic accuracy in African-American popular music, particularly focusing on the cusp of digital tools (from the early 1980s). Trends were traced from the quest for super-accurate grooves (e.g. Prince’s ‘Kiss’) through to the muddying/blurring of the beat (examples include Snoop Dogg, D’Angelo, Destiny’s Child and Tyler, The Creator).
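
To make ‘rhythmic accuracy’ concrete for non-specialists, here’s a tiny sketch of my own (this is not Anne’s methodology, and the onset times are made up): each performed onset is compared with the nearest point on a strict metronomic grid, and the signed deviation in milliseconds is the microtiming.

```python
# Illustrative sketch only (not Anne Danielsen's method): quantify how far each
# performed onset lands from a strict metronomic grid, in milliseconds.

def microtiming_deviations(onsets_sec, bpm, subdivision=4):
    """Return (grid_slot, deviation_ms) pairs for a list of onset times."""
    grid_step = 60.0 / bpm / subdivision               # grid spacing in seconds
    results = []
    for t in onsets_sec:
        nearest = round(t / grid_step)                  # closest grid slot
        deviation_ms = (t - nearest * grid_step) * 1000.0
        results.append((nearest, deviation_ms))
    return results

# Hypothetical hi-hat onsets at 100 BPM, played slightly 'behind the beat'.
onsets = [0.010, 0.158, 0.312, 0.463, 0.605, 0.762]
for slot, dev in microtiming_deviations(onsets, bpm=100):
    print(f"grid slot {slot:2d}: {dev:+6.1f} ms")
```

Positive values mean the player is behind the grid; a ‘super-accurate’ groove would show deviations close to zero, while the blurred-beat examples above would show larger, often systematic, offsets.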

For more on this fascinating field, take a look at Anne’s ResearchGate profile and RITMO/UiO profile, and her various books and publications on micro-rhythm in popular music.

Meaning in vocal timbre #arp

The (Dis) Embodied Voice: hearing meaning in vocal timbre

Simon Zagorski-Thomas (London College of Music, UWL)
Keywords: Vocal timbre, ecological perception, embodied cognition, sonic cartoons

[Image: Leonardo da Vinci – Virgin and Child with Ss Anne and John the Baptist]

ABSTRACT: It can be argued that, since the persona of the performer is widely perceived to be the locus of meaning in popular music – as opposed to the more indirect voice of the composer in the western art music tradition – the timbre of the voice and its control during performance should be the focal point of popular music analysis. This paper uses a framework combining the ecological approach to perception (Gibson, 1979; Clarke, 2005), embodied cognition (Lakoff and Johnson, 1999) and the neural theory of metaphor (Lakoff and Johnson, 2003; Feldman, 2008) to explore how the disembodied sound of the recorded voice in popular music is interpreted as a schematic representation of a human entity and action: a sonic cartoon (Zagorski-Thomas, 2014).

Service Models in Popular Music Production Education #arp #songwriting

Collective Creativity: A ‘Service’ Model of Contemporary Commercial Pop Music

  • Paul Thompson, Leeds Beckett University, UK
  • Phil Harding, Leeds Beckett University, UK

Keywords: Creativity, Pop Production, Songwriting

[Image: Thewantedifoundyou.jpg]

ABSTRACT: A commercial pop music production is rarely the result of a single individual, and pop music producers and songwriters are often part of a larger creative collective (Hennion, 1990) in creating a musical product. A team leader typically manages this group activity. That team leader requires an appropriate level of cultural, symbolic and economic capital (Bourdieu, 1984) so they can effectively evaluate the contributions of the rest of the team and guide the project towards commercial success (Thompson & Harding, 2017). This study explores the role of the team leader within the creative production workflow of pop songwriting and production since the 1990s and investigates the ways in which pop songwriting and production teams work within a creative system of pop-music making. Building upon previous studies in this area (Harding and Thompson, 2017), the ‘Service Model’ flow system is illustrated with distinct linear stages that include the processes of pop songwriting, pop vocal recording, post-vocal production and then mixing. However, within each of these production stages the ‘highly nonlinear dynamics’ (Capra and Luisi, 2014) of the creative system (Csikszentmihalyi, 1988, 1999) can be viewed in action as the team work together to make the pop record. Drawing upon a series of interviews and data gathered during a Practice Based Enquiry (PBE) conducted at Westerdals University in Oslo, this paper presents the pop music ‘Service Model’. Importantly, the model underlines the value of the collective (rather than individual) in the commercial pop songwriting and production process.

This is Phil and Paul’s third presentation about this project (related to Phil’s PhD) – and represents bringing the research up to date by talking about contemporary pop production. For background, you can read about last year’s paper and/or pick up Phil’s book PWL from the Factory Floor.

Teaching Song Production Analysis #apme2018

Misty Jones, Middle Tennessee State University

Practical Production Analysis: Helping Students Produce Competitive Songs

Misty opens by describing her particular students as ‘in the box’ producers – that is to say, they create the entire sound recording in a Digital Audio Workstation. The problem she’s trying to solve is this: the students’ recordings are just not ready yet [for the commercial marketplace]. So today, she will be sharing her approach to helping students to make their song recordings competitive, in the genre they want to produce in.

The approach starts with the assumption that students ‘have their chops down’ – that is, they can write melodies & lyrics, understand harmony, and can program beats. With this out of the way, the students are asked to work on these four areas:

  • Form/Arrangement
  • Instrumentation
  • Texture Variation
  • Production Techniques

Music Technology, Bandlab and Berklee #apme2018

On the bus to the university this morning I introduced myself to the person sat next to me, who turned out to be John Bigus from my own institution (Berklee’s a great community, but it’s a BIG community, so it’s possible to work there for a long time without knowing everyone’s name). John is responsible for the PULSE free resource, available at pulse.berklee.edu, which is part of Berklee’s initiative to work with K-12 school age music creators and teachers.

John has been working with BandLab, so there is an introduction from the company’s Lauren Henry Parsons, and our interviewer is BandLab’s Michael Filson. It’s a cloud-based, free, 12-track DAW app (mobile app or browser-based) with 3.5m users across the world. It’s sponsored by the musical instrument industry, which is why the end product is free for musicians; there’s also a walled-garden version for students and teachers. BandLab is also behind the relaunch of Cakewalk (formerly SONAR).

Association for Popular Music Education #apme #songwriting

[Image: conference room]

I’m in Nashville, at the #apme conference, hosted by Middle Tennessee State University. Popular Music Education is still a relatively young field, at least in terms of having its own conference (launched ~10 years ago) and journal (launched last year). More about APME at popularmusiceducation.org. Conference schedule here.

Coming from Berklee, perhaps I’ve become too comfortable with the idea that everyone talks about popular music pedagogy all the time. A lot of colleagues here are from institutions that have a long history of classical music education, but have only recently launched popular music programs. They are often seen as mavericks in their schools, and are viewed with some suspicion by more traditional teachers and departments. So there’s a palpable sense of community here, and even during the first morning of day one I’ve frequently overheard the phrase: “I’ve finally found my people!”

Eurovision 2018 – live music analysis blog #eurovision

Final results

[voting results entered at ~11.30pm GMT / 6.30pm ET on May 12th 2018. My predictions shown in brackets]

  1. Israel ‘TOY’ (2)
  2. Cyprus ‘Fuego’ (1)
  3. Austria ‘Nobody But You’ (x)
  4. Germany ‘You Let Me Walk Alone’ (4)
  5. Italy ‘Non Mi Avete Fatto Niente’ (x)

My predictions (actual placing shown in brackets):

  1. Cyprus ‘Fuego’ (2)
  2. Israel ‘TOY’ (1)
  3. Ireland ‘Together’ (16)
  4. Germany ‘You Let Me Walk Alone’ (4)
  5. France ‘Mercy’ (13)

So I missed Austria and Italy completely, but got the first two (albeit reversed) and predicted three of the top four.

Of the soft predictions:

  • The Danes’ ‘Higher Ground’ (my personal favourite) will do well, but won’t win.
    CORRECT. Denmark came 9th (of 26)
  • Finland’s ‘Monsters’ will be in the top half of the voting.
    WRONG. Finland were 25th out of 26!
  • The Netherlands’ competent and enjoyable US country-rock ‘Outlaw in ‘Em’ will get some votes but will be in the bottom half.
    CORRECT. 18th of 26.
  • The Estonian operatics won’t do well.
    WRONG. Estonia were 8th of 26.
  • The Hungarian metalheads will get crucified. Which will probably suit them just fine.
    CORRECT. Hungary were 21st out of 26.

Predictions

[Written at 10:19pm GMT (5:19pm ET) on May 12th 2018, before voting begins]. As always, I’ll leave the predictions here permanently, and post the real results when the voting is complete.

  1. Cyprus ‘Fuego’
  2. Israel ‘TOY’
  3. Ireland ‘Together’
  4. Germany ‘You Let Me Walk Alone’
  5. France ‘Mercy’

Soft predictions:

  • The Danes’ ‘Higher Ground’ (my personal favourite) will do well, but won’t win.
  • Finland’s ‘Monsters’ will be in the top half of the voting.
  • The Netherlands’ competent and enjoyable US country-rock ‘Outlaw in ‘Em’ will get some votes but will be in the bottom half.
  • The Estonian operatics won’t do well.
  • The Hungarian metalheads will get crucified. Which will probably suit them just fine.

Intro

Welcome to the 2018 Eurovision live musicology blog, now in its eighth year. This site has provided live music analysis of the ESC final every year since 2011, previously during the UK live broadcast. Since 2016 the text has been written from Boston, USA – five hours behind UK time and, this year, also five hours behind the Altice Arena in Lisbon, Portugal, where the live show takes place.

New Music Ecosystem 2018 conference – live blog

No, the other Washington.

Photo by Instagram user ilka_paola, captioned ‘Amazing Guitar Exhibit at the Museum of Pop Culture in Seattle!’ 🎸
MoPOP, Seattle. This exhibit is a very large guitar stand, which is arguably not stable enough for touring.

I’m in Seattle at the New Music Ecosystem conference, organised by the University of Washington Law School. It’s a gathering of music and law professionals, discussing the future of creators’ compensation, tech/music innovation, and copyright reform. [Grammar folks – I’ve now been in the USA for long enough, and had Oxford commas inserted into my copy so many times, that I have decided to give in and just use them from here on].

On The Fringe (accessible networked music solutions)

Dan Nichols, Northern Illinois University

ABSTRACT: Over the past several years, perhaps no single person has done more to foster collaborations with groups on the fringe of networking infrastructure than Dan Nichols of Northern Illinois University. In this session, Dan will explain how to reach partners with limited expertise and resources. Prominent software solutions like Artsmesh and JamKazam will be explored.

Thus far, the conference has focused on high-bandwidth institutions with fast Internet2 connections. Dan’s presentation covers bringing networked collaboration to the masses, particularly those who do not have access to these network/hardware/institutional resources.

Dan begins with a description of JamKazam, a free solution (that also offers a $299 hardware audio interface). After describing its features and virtues, he plays us a demo of multi-site bands jamming at SXSW 2014.

LOLA, Ultragrid and MVTP updates #networkperformingarts

LOLA, Ultragrid and MVTP updates

  • Claudio Allocchio (GARR)
  • Milos Liska (CESNET)
  • Sven Ubik (CESNET)

Project leads for LOLA, Ultragrid and CESNET’s 4K streaming appliance (MVTP) share recent developments for each platform. The presentation will include the recent release of LOLA v1.5.0, new features and a look to the future of LOLA v2.0 and multisite functionality. Milos Liska will lead a walkthrough of the new Ultragrid graphical user interface, which opens up the application to less experienced users. Sven Ubik will share the most recent work with the MVTP appliance with a live demonstration of the technology in action.

Claudio is one of the world leaders in this field, and one of the inventors/developers of LOLA (background presentation about LOLA). Today’s presentation updates the community on the technological advances in the new version, LOLA v1.5.0. [JB comment – the tech developments are remarkable, including fast JPEG compression and 120FPS video support – which I find astounding in an internet-based video service, let alone one where super-low latencies are crucial].
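
[JB comment – as a back-of-envelope illustration of why latency matters so much here, below is a rough budget sketch of my own. The component values and the ~25–30ms ‘comfortable ensemble’ threshold are illustrative assumptions, not LOLA measurements.]

```python
# Rough one-way latency budget for a networked music performance link.
# All component values are illustrative assumptions, not LOLA measurements.

SPEED_IN_FIBRE_KM_PER_MS = 200.0    # light travels ~200,000 km/s in optical fibre

def one_way_latency_ms(distance_km, audio_buffer_ms=3.0, codec_ms=1.0,
                       switching_ms=2.0, capture_playout_ms=3.0):
    propagation_ms = distance_km / SPEED_IN_FIBRE_KM_PER_MS
    return propagation_ms + audio_buffer_ms + codec_ms + switching_ms + capture_playout_ms

ENSEMBLE_THRESHOLD_MS = 28.0        # often-quoted comfort limit of roughly 25-30 ms

for route, km in [("same city", 50), ("Miami to Boston", 2000), ("Miami to Lisbon", 6600)]:
    latency = one_way_latency_ms(km)
    verdict = "workable for tight ensemble" if latency <= ENSEMBLE_THRESHOLD_MS else "challenging"
    print(f"{route}: ~{latency:.1f} ms one-way ({verdict})")
```

The point of the sketch is simply that the fixed costs (buffers, codec, capture and playout) eat roughly a third of the budget before the signal has travelled anywhere, which is why LOLA’s obsession with shaving milliseconds off compression and display matters so much.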

Live Network Music collaboration #npapw18

I’m in Miami at the annual conference of ‘Network Performing Arts Production Workshops’ (that’s the current title of the organisation; they have stated in the introductory remarks that they’re looking for a more pithy name than ‘NPAPW’!). Our host organisation is the New World Symphony.

#BerkleeXR

Dancer at #berkleexr
Kate Gow, a Boston Conservatory at Berklee Dance student, using a Noitom motion capture system.

Yesterday I attended the inaugural Berklee XR summit, held on Feb 7th 2018 at the Berklee Media Center, here at Massachusetts Ave in Boston, part of the Stan Getz Library. The all-day event was envisaged by our estimable Berklee colleague and XR Professor Lori Landay. XR is a catch-all term (Cross Reality, or Extended Reality) and it refers to “technology-mediated experiences that combine digital and biological realities”. Augmented Reality (AR) is adding virtual objects to the real world, typically through a smartphone camera (Pokémon Go being a well-known example); Virtual Reality (VR) is creating an entire immersive world for the user to experience, usually via first-person headset technologies. VR is particularly used in games such as Doom VFR but also in non-gaming contexts, including medicine, education and law (not to mention dance).

As promised to the delegates on the day, I’ve listed below some of the URLs we collected, from listening to the artists, colleagues, presentations and exhibitors. If you were there and you think I’ve missed any, please contact me here or @joebennettmusic and I’ll be glad to add to the list.

The Sound Dome at KMH #arp #arp2017

Bill Brunson, Royal College of Music, Stockholm

[JB note – I type this sitting under the Sound Dome in ‘Lilla Salen’, one of the Royal College’s lecture theatres. It appears to be an array of 13, 8 and 4 speakers arranged in concentric circles above a non-raked, 100-ish-seat auditorium in a large black-box space, with the option of additional floor-level speakers in a circle. We can also see a big stereo FOH ceiling PA and four subs in the corners of the space. I’m sure we’ll hear more – in both senses of the phrase – soon].

Bill begins with a description of his own background, and like many at ARP he traces his first inspiration back to the moment he first heard the Beatles. He tells stories about his acquisitive enthusiasm for audio gear, having recently bought three SSL desks after using one for a particular session in an opera house (he notes that, as a Texan, he believes everything should be big).

Slapback echo #arp #arp2017

Tor Halmrast: Sam Phillips: Slap Back Echo, Luckily in Mono

Abstract: “Slap back echo” was created by Sam Phillips for Elvis Presley’s early Memphis recordings. Using cepstrum and autocorrelation, we find that the tape delay used in Sun Studios was 134-137 ms, which is so long that the echo is perceived as a single, distinct echo in the time domain, and not the comb filter coloration of timbre in the frequency domain defined as Box-Klangfarbe. Such coloration would be perceived if a distinct, separate reflection gave a comb filter with a distance between the teeth (CBTB: Comb-Between-Teeth-Bandwidth) comparable to the critical bandwidth along the basilar membrane in the cochlea. When Elvis changed to RCA Victor’s studio in Nashville, “RCA was anxious to recreate the ‘slapback’ echo… To add them to Elvis’ vocals Chet [Atkins] and engineer Bob Farris created a pseudo ‘echo chamber’ by setting up a speaker at one end of a long hallway and a microphone at the other end and recording the echo live”. Analysis of these recordings shows that the echo is somewhat shorter (114 ms and 82 ms), and much more diffuse, so “slap echo” was not actually recreated. The main finding is that even though the delay time of the Sun Studio “slap tape echo” is long, the echo is still perceived as rather “close”, because the echo is in mono. Panned in stereo, the feeling of being inside a small room would disappear. In addition, we also analysed a shorter delay, as for a possible reflection from the floor of the studio back to the singer’s microphone. These results are more unclear, but we found that such a shorter delay would have given Box-Klangfarbe; but if this actually was a floor reflection, the measured deviation of the delay time must mean that the singer moved his head during the recordings (a highly reasonable assumption for Elvis!)

[JB note: Tor’s presentation was outstanding, but it was also extremely technical in terms of physics and data, so I’m not sure I fully did it justice with this live blog post. With this limitation in mind, I’ve posted several of his slides to help the more technical reader].
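
[JB note – for readers who want to experiment with the basic idea, here’s a minimal sketch of my own (not Tor’s analysis code) showing how an echo delay can be estimated from an autocorrelation peak. The ‘recording’ is synthetic – noise plus a single attenuated echo – rather than the Sun material.]

```python
# Minimal illustration (not Tor Halmrast's code): estimate a slapback delay time
# from the autocorrelation of a signal containing one attenuated echo.
import numpy as np

fs = 44100                            # sample rate (Hz)
true_delay_ms = 135.0                 # roughly the Sun Studio figure quoted above
delay_samples = int(fs * true_delay_ms / 1000)

rng = np.random.default_rng(0)
dry = rng.standard_normal(fs)                        # 1 second of noise as a stand-in 'vocal'
wet = dry.copy()
wet[delay_samples:] += 0.5 * dry[:-delay_samples]    # add a single echo at -6 dB

# Autocorrelation via FFT; the echo shows up as a strong peak away from lag 0.
spectrum = np.fft.rfft(wet, n=2 * len(wet))
acf = np.fft.irfft(np.abs(spectrum) ** 2)[:len(wet)]
search_from = int(fs * 0.02)                         # ignore lags shorter than 20 ms
peak_lag = search_from + int(np.argmax(acf[search_from:fs // 2]))
print(f"estimated delay: {1000 * peak_lag / fs:.1f} ms")   # expect ~135 ms
```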

Tor begins (after a disclaimer that he is not an Elvis fan) with some background about Sun Studios and their recording environment, and some technical analyses of slapback parameters – comb filtering, phase, delay and frequency. We hear the delay from ‘Heartbreak Hotel’, leading into a more detailed discussion of how a very short delay creates comb filtering. If you are about 1.7m from a wall, you get a time delay of roughly 10ms, and a Comb-Between-Teeth Bandwidth (CBTB) of 100Hz. Importantly, it is not possible to get rid of this effect with EQ. So if you put a source/mic this close to a wall you will hear this artefact.
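
To make that arithmetic easy to check (my sketch, using the standard speed of sound rather than Tor’s exact slide values): a reflection from a wall at distance d travels an extra 2d, so the delay is 2d/c, and the comb filter’s tooth spacing is the reciprocal of that delay.

```python
# Quick check of the wall-reflection arithmetic (illustrative, not from Tor's slides):
# a reflection off a surface at distance d adds a path of 2*d, giving a delay of
# 2*d / c and a comb filter with teeth spaced 1/delay apart.

SPEED_OF_SOUND = 343.0    # m/s at roughly 20 degrees C

def reflection_comb(distance_m):
    delay_s = 2.0 * distance_m / SPEED_OF_SOUND
    cbtb_hz = 1.0 / delay_s               # Comb-Between-Teeth Bandwidth
    return delay_s * 1000.0, cbtb_hz

for d in (1.715, 3.0, 23.0):              # the ~10 ms wall case, a farther wall, a slapback-length path
    delay_ms, cbtb = reflection_comb(d)
    print(f"d = {d:6.3f} m -> delay {delay_ms:6.1f} ms, tooth spacing {cbtb:6.1f} Hz")
```

At ~10ms the teeth are 100Hz apart – well within the range where we hear coloration rather than a discrete echo – whereas a 130-odd millisecond delay puts the teeth only a few hertz apart, and we hear a separate slap instead.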

Gated Reverb 80s to today #arp #arp2017

Alex Case: Oops, Do It Again – Gated Reverb From the 80s to Today

(UMass Lowell/recordingology.com/)

[Image: H910.jpg]

Abstract: Among the more absurd sonic concoctions to come out of the recording studio, gated reverb offers a unique aesthetic possible only through loudspeaker-mediated sound. Born in the 80s, it relied upon creative, even counterintuitive application of some of the newest signal processing technologies of the time. The genesis of gated reverb was part discovery, and part invention. Its further development was motivated by rebellion, and confusion. Peter Gabriel did it first, with “Intruder” (1980). Phil Collins made it famous, with “In the Air Tonight” (1980). But David Bowie likely inspired it all with tracks like “Sound and Vision” (1977). This paper tours the development of gated reverb, with audio illustrations showing when, how, and why. What began as a radical reshaping of timbre has evolved into a more subtle form. Gated reverb remains relevant in contemporary music production, not just for 80s pastiche, but as a tool for overcoming masking through the strategic leveraging of its unique psychoacoustic properties.

We begin with the world’s most famous example of gated reverb – the drum fill from ‘In The Air Tonight’ – and Alex comments… “Before texting, this is what caused cars to swerve”. We then look at a signal path diagram and see transient images describing the dynamic properties of a compressed and gated reverb.
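
To close, here’s a toy sketch of the signal path Alex is describing (my own illustration, not his audio examples): a drum-like burst is given a long reverb tail, and a gate then mutes the tail abruptly once its level falls below a threshold. In real 80s productions the gate was typically keyed from the dry drum rather than the reverb’s own envelope, but the abrupt cut-off is the same idea.

```python
# Toy gated-reverb illustration (not Alex Case's examples): reverberate a
# drum-like noise burst, then gate the tail so it cuts off abruptly.
import numpy as np

fs = 44100
t = np.arange(fs) / fs                                # one second of audio

rng = np.random.default_rng(1)
dry = rng.standard_normal(fs) * np.exp(-t / 0.03)     # crude 'snare': fast-decaying noise

ir_t = np.arange(fs // 2) / fs                        # crude 'room': decaying noise as an IR
ir = rng.standard_normal(fs // 2) * np.exp(-ir_t / 0.15)

# Reverberate via FFT convolution, keeping one second of output.
wet = np.fft.irfft(np.fft.rfft(dry, n=2 * fs) * np.fft.rfft(ir, n=2 * fs))[:fs]

# The gate: follow the envelope, then mute everything below the threshold.
threshold = 0.1 * np.max(np.abs(wet))
window = fs // 200                                    # ~5 ms envelope smoothing
envelope = np.convolve(np.abs(wet), np.ones(window) / window, mode="same")
gated = np.where(envelope > threshold, wet, 0.0)

open_ms = 1000 * np.count_nonzero(gated) / fs
print(f"reverb tail is audible for roughly {open_ms:.0f} ms before the gate closes")
```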