Musician Taryn Southern on composing her new album entirely with AI

How artificial intelligence simplifies music production for solo artists

Photo: Taryn Southern

If you heard Taryn Southern’s new single “Break Free” on the radio, you’d probably just keep driving or grocery shopping, or doing whatever you do in places that still have radios playing. The song is a big, moody ballad — the kind that might play during the climax of a Steven Spielberg movie. “Break Free” wasn’t composed by a John Williams copycat, but by artificial intelligence. The song is not a fluke or a novelty for Southern either; she’s using artificial intelligence platforms to create an entire album, called I AM AI. It’s the first LP to be entirely composed and produced with AI.

Southern used an open source AI platform called Amper Music to create the stems of “Break Free.” For each track, she plugs in genre, the instruments she wants to use, and beats per minute. In return, Amper churns out disjointed verses that can be rearranged into a song, and layered beneath Southern’s vocals. Southern told The Verge she’s toying with four other AI music platforms, but she’s not sure which of those will make the final album cut.

As an early YouTube adopter (her videos include popular covers, behind-the-scenes videos, and parody songs), Southern is used to figuring out ways to make new technology work for her. “[AI music] is such a new space that there aren’t a whole lot of players in it right now,” she says. “There are still some music platforms that haven’t yet released their software, so I’m excited to see what they put out and if I can integrate them at the last second, I would love to.”

Read our full discussion about AI, song structure, and making music without the traditional constraints of a producer, below.

The interview has been condensed and edited for clarity.

Why did you decide to make your album using AI?

I’ve always been into emerging tech. I was early on YouTube and I suppose it’s just part of my DNA to play with new tools. I was reading about artificial intelligence platforms for music, and I came across a story on this company Flow Machines, based out of France. At the time it wasn’t open source technology. But then around six or seven months ago, I read about Amper Music and a few other AI music companies. I just started playing around with their platforms because they were all open source, and when I realized that I could actually write a song I liked using artificial intelligence, I just thought, ‘This has to be the whole project.’ For me, it was a creative challenge.

Can you walk me through the process of creating a song using AI?

Well, it’s different for each song. Even before I was working with AI, I always started with the music and the chord structure. I never started with lyrics or melody. It’s similar with this process: I’m still building the foundation of a song, which I do in this case on the Amper platform. The way it works is that I give the platform certain inputs, like BPM, instrumentation that I like, genre, key, etc. The platform will spit a song out at me, and then I can iterate from there, making adjustments to the instruments and the key. I can even change the genre or emotional feel of the song, until I get something that I’m relatively happy with. Once I have that, I download all the stems of the instrumentation to build the actual song structure.

What Amper’s really good at is composing and producing instrumentation, but it doesn’t yet understand song structure. It might give me a verse or a chorus, and it’s up to me to stitch those pieces together so it sounds like something familiar you would hear on the radio. Once I’m happy with the song, then I write the vocal melody and lyrics.

How would you compare that process to traditional songwriting and production?

For songwriters who don’t play instruments or who have to work with a human collaborator, it can be quite liberating to do this, because you don’t need any knowledge of instrumentation to make a great song — you just need to have a good ear. You need to understand how song structures work, and be able to write melodies and lyrics, which I’m not saying I’m the best at. But it is very helpful for those kinds of artists.

For me, the process is actually really similar to how I would work with a human collaborator, because I don’t have much experience playing instruments. I play very basic piano. I always have to work with a music producer who can take my vision and make something out of it that I really like. Sometimes they do, and sometimes we struggle to find alignment. So with the AI it’s great, because I have this sort of vision for a song, and then I can play around with the platform until I’m happy and get something out of it that I like. Sometimes it pleasantly surprises me.

Do you think using Amper affected the way you approached the vocals?

One hundred percent. Many times I would go into the platform with an idea of what I wanted to create, and it would send me something totally different that inspired me in a new way. I ended up making this album with a lot of cinematic-type sounds, which I wasn’t initially anticipating: things you would hear in a film soundtrack, like really epic-sounding instruments. I really enjoyed playing around with those, and that wasn’t something I had considered. I think it has inevitably impacted the direction of the album.

Is there anything in particular that you look for when you’re searching for an AI program?

The first thing would just be that it’s open source, or that I have the ability to work with the engineers. One of the programs I’m using is not open source, but I’ve been working with the engineers who built it.

It’s such a new space that there aren’t a whole lot of players in it right now. I’m excited to experiment with anyone and everyone. There are still some music platforms that haven’t yet released their software, so I’m excited to see what they put out, and if I can integrate them at the last second I would love to. Like IBM Watson has not yet released their platform. I’m really curious about that.

Was there a particular turning point when you decided to make the entire album using AI, or did you go into it that way from the beginning?

Well, at first I thought I would make the album with each song being a different genre, because initially that seemed like an interesting way of showcasing what AI is able to do. But then I realized that while that might be cool, it might lead to some pretty heavy whiplash for listeners. The reality is I listen to Top 40 pop, that’s what I love, and so I guess I wanted to create my version of a standard pop album.

Did you have any trouble maintaining consistency between songs?

The fact that I used different platforms creates a new challenge, because they don’t all work the same way.

Do you see yourself using these programs in the long run or do you think you’ll want to go back to more traditional songwriting?

If you told me last year that I was going to make an album entirely with AI, I would’ve thought you were crazy. I definitely think this is the beginning of AI music. In the same way that samples really changed hip-hop, in terms of how people make music, I absolutely think AI will do the same. It’s still in its infancy, so for now it’s a little bit like taking a sledgehammer to something.