
How AI-generated music is changing the way hits are made

The Future of Music, episode 2

The idea that artificial intelligence can compose music is scary for a lot of people, including me. But music-making AI software has advanced so far in the past few years that it’s no longer a frightening novelty; it’s a viable tool that producers can and already do use in the creative process. This raises the question: could artificial intelligence one day replace musicians? For the second episode of The Future of Music, I went to LA to visit the offices of AI platform Amper Music and the home of Taryn Southern, a pop artist who is working with Amper and other AI platforms to co-produce her debut album I AM AI.

AI has been used as a tool to make music or to aid musicians for quite some time. In the ‘90s, David Bowie helped develop an app called the Verbasizer, which took literary source material and randomly reordered the words to create new combinations that could be used as lyrics. In 2016, researchers at Sony used software called Flow Machines to create a melody in the style of The Beatles. This material was then turned over to human composer Benoît Carré and developed into a fully produced pop song called “Daddy’s Car.” (Flow Machines was also used to help create an entire album’s worth of music under the name SKYGGE, which is Danish for “shadow.”) On a consumer level, the technology is already built into popular music-making programs like Logic, which is used by musicians around the world and can auto-populate unique drum patterns with the help of AI.
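The Verbasizer’s code was never released, but the cut-up idea it automated is simple to sketch. The snippet below is only an illustration of that general approach; the example sentence and the six-word line length are my own assumptions, not Bowie’s settings:

```python
# A toy sketch of the cut-up idea behind the Verbasizer (assumption: the
# original app's behavior beyond "randomly reorder source words" isn't public).
import random

source = "the future of music is a shadow driving through an electric city"
words = source.split()

random.shuffle(words)       # scramble the source material
line = " ".join(words[:6])  # pull out a short candidate lyric
print(line)
```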

AI is already integrated with consumer music-making programs like Logic

Now, there’s an entire industry built around AI services for creating music, including the aforementioned Flow Machines, IBM Watson Beat, Google Magenta’s NSynth Super, Jukedeck, Melodrive, Spotify’s Creator Technology Research Lab, and Amper Music.

Most of these systems work by using deep learning networks, a type of AI that’s reliant on analyzing large amounts of data. Basically, you feed the software tons of source material, from dance hits to disco classics, which it then analyzes to find patterns. It picks up on things like chords, tempo, length, and how notes relate to one another, learning from all the input so it can write its own melodies. There are differences between platforms: some deliver MIDI while others deliver audio. Some learn purely by examining data, while others rely on hard-coded rules based on musical theory to guide their output.
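None of these companies publish their models, so the following is only a toy illustration of the general recipe described above: feed in note sequences, let a network learn which note tends to follow which, then sample new material. The melody, model size, and training settings here are stand-in assumptions; the real platforms train far larger networks on far more data.

```python
# A minimal sketch of "learn patterns from notes, then write new ones."
# Everything here (the melody, the tiny LSTM, the settings) is illustrative.
import torch
import torch.nn as nn

# Toy training data: a melody as MIDI note numbers (a C major phrase).
melody = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72]
vocab = sorted(set(melody))
idx = {n: i for i, n in enumerate(vocab)}

x = torch.tensor([[idx[n] for n in melody[:-1]]])  # input notes
y = torch.tensor([idx[n] for n in melody[1:]])     # next-note targets

class NextNote(nn.Module):
    def __init__(self, vocab_size, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, seq):
        h, _ = self.rnn(self.embed(seq))
        return self.out(h)

model = NextNote(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

# Learn the note-to-note patterns in the training melody.
for _ in range(200):
    opt.zero_grad()
    logits = model(x).squeeze(0)
    loss = loss_fn(logits, y)
    loss.backward()
    opt.step()

# Generate a new melody by repeatedly predicting the next note.
seq = [idx[60]]
for _ in range(16):
    with torch.no_grad():
        logits = model(torch.tensor([seq]))[0, -1]
    probs = torch.softmax(logits, dim=0)
    seq.append(torch.multinomial(probs, 1).item())
print([vocab[i] for i in seq])  # generated MIDI note numbers
```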

However, they all have one thing in common: on a micro scale, the music is convincing, but the longer you listen, the less sense it makes. None of them are good enough to craft a Grammy Award-winning song on their own... yet.

Michael Hobe, co-founder of Amper Music.
Photo by Christian Mazza / The Verge

Of all the music-making AI platforms I’ve tried out, Amper is hands down the easiest to use. IBM and Google’s projects require some coding knowledge and unpacking of developer language on GitHub. They also give you MIDI output, not audio, so you need a bit more knowledge about music production to shape the output into an actual song.

Amper, on the other hand, has an interface that is ridiculously simple. All you have to do is go to the website and pick a genre of music and a mood. That’s it. You don’t have to know code or composition or even music theory in order to make a song with it. It builds tracks from prerecorded samples and spits out actual audio, not MIDI. From there, you can change the tempo or the key, mute individual instruments, or switch out entire instrument kits to shift the mood of the song it’s made. This audio can then be exported as a whole or as individual layers of instruments (known as “stems”). Stems can then be further manipulated in digital audio workstations (DAWs) like Ableton or Logic.
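What “further manipulated” looks like is up to the producer, but even outside a full DAW the stems are just audio files you can layer and rebalance. A minimal sketch, assuming three hypothetical stem files exported as WAVs and using the pydub library as one of many ways to combine them:

```python
# A minimal sketch: layer hypothetical exported stems and rebalance them.
# (The filenames are assumptions; real exports will be named differently.)
from pydub import AudioSegment

drums = AudioSegment.from_wav("drums_stem.wav")
bass = AudioSegment.from_wav("bass_stem.wav")
keys = AudioSegment.from_wav("keys_stem.wav")

# Overlay the stems on top of each other, pulling the keys down 3 dB.
rough_mix = drums.overlay(bass).overlay(keys - 3)
rough_mix.export("rough_mix.wav", format="wav")
```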

I had Amper generate the clip of music below while cruising around LA in the back seat of my friend’s car. Using my phone, I picked rock as the genre, and then, appropriately, “driving” as the mood. It spent about a minute churning away before delivering 30 seconds of audio. The result isn’t radio-ready, but it has chords, a little structure, and it sounds... pleasant. It could easily sit in the back of a YouTube video or an advertisement and no one would guess it was coded, not written.

As someone who makes music, the idea that code can do what I do is freaky. It’s unnerving to think that an algorithm can make a not-terrible song in minutes and that AI is getting in on creative turf we categorize as distinctly human. If AI is currently good enough to make jingly elevator music like the clip above, how long until it can create a number one hit? And if it gets to that point, what does it mean for human musicians?

Taryn Southern showing off IBM Watson Beat.
Photo by Christian Mazza / The Verge

These aren’t questions that Taryn Southern is concerned with. Southern is an online personality whom you might know from her YouTube channel or from her stint as a contestant on American Idol. These days, Southern is interested in emerging tech, which has led to her current project: recording a pop album. Those two things don’t sound like they could be related, but her album has a twist: instead of writing all the songs herself, Southern used artificial intelligence to help generate percussion, melodies, and chords. This makes it one of the first albums of its kind, a collaboration of sorts between AI and human.

Amper was the first AI platform Southern used when beginning her album, and now she also works with IBM Watson Beat and Google Magenta. She views AI as a powerful tool and partner, not a replacement for musicians.

“Using AI, I’m writing my lyrics and my vocal melodies to the actual music and using that as a source of inspiration,” Southern tells me. “I find that really fun, and because I’m able to iterate with the music and give it feedback and parameters and edit as many times as I need, it still feels like it’s mine in a sense.”

AI isn’t good enough to craft a hit radio song on its own... yet

To get an idea of how a human can work with AI, look at Southern’s 2017 single, “Break Free.” The SoundCloud audio below is an early export of material from Amper. Compare that to the YouTube video that has the final, released version of the song. Bits of the AI-composed original peek through here and there, but it’s more like seasoning, not the main dish. To transform it into a pop song, Southern made a lot of creative decisions, including switching instruments, changing the key, and, of course, writing and performing the vocals.

Southern originally turned to AI because even though she was a songwriter, she knew “very, very little about music theory.” It was a roadblock that frustrated her to no end. “I’d find a beautiful chord on the piano,” Southern says, “and I’d write an entire song around that, but then I couldn’t get to the next few chords because I just didn’t know how to play what I was hearing in my head.”

This feeling of empowerment is exactly what Amper Music is trying to deliver. “I don’t look at it like artificial intelligence,” Amper co-founder Michael Hobe says. “It’s more of intelligence augmentation. We can facilitate your creative process to cut a lot of the bullshit elements of it. For me, it’s allowing more people to be creative and then allowing the people who already have some of these creative aspects to really further themselves.”

When Hobe says “bullshit elements,” he’s talking about a guitarist not knowing how to orchestrate an instrument they’ve never worked with before, the time spent crafting the velocity of individual drum hits, or simply being faced with writer’s block. Amper isn’t meant to create the next AI superstar; it’s meant to enable musicians. Of course, using AI also has the added benefit of allowing Southern and others with no formal music background to participate in making music. It democratizes the creative playing field so anyone can play what they hear in their head, just like Southern.

It’s not about creating the next AI superstar; it’s about enabling musicians

I ask Southern what she would say to people who think using AI is cheating. “Great,” she says. “Yes, we are totally cheating. If music is concretely defined as this one process that everyone must adhere to in order to get to some sort of end goal, then, yes, I’m cheating. I am leading the way for all the cheaters.” She laughs, and then pointedly says, “The music creation process can’t be so narrowly defined.”

It’s something to think about. Every time a new technology is introduced that tectonically shifts the way we create music, there are naysayers. Auto-Tune, samples and loops, and digital audio workstations were all “disruptors” that we adapted to, and they are now commonplace tools and methods. AI will probably be next.

The technology’s impact on the music industry as a whole remains to be seen. Will it destroy jobs? How will it affect musical copyright? Will it ever be able to work without a human? But people like Hobe and Southern believe it will ultimately be a positive force. Sure, an algorithm making music sounds scary because it mirrors human capabilities that we already find mysterious, but it’s also a compelling tool that can enhance those same capabilities. AI as a collaborator increases access to music-making, streamlines workflows, and can provide the spark of inspiration needed to craft your next hit single.

“You’re collaborating and working with the AI to achieve your goal,” Hobe says. “It’s not that the AI is just doing its own little thing. It’s all about the process between it and you to achieve that final artistic vision.”