With only a couple of minutes and access to YouTube, you can now make SpongeBob SquarePants sing your favorite Taylor Swift song. Or make Taylor Swift sing your favorite Drake song. Or make Drake sing your song. AI covers and AI-generated originals are suddenly everywhere on the internet, to the point where even diehard fans don’t know what’s a real leak and what’s a generated one. It’s a complicated ethical and legal mess and makes being a music fan weirder than ever.
That’s only the very beginning of the ways AI is coming into the music-making process. There are tools like Suno and Soundful that can create a supposedly original song based only on a text prompt; platforms like Magenta and BeatBot that can generate beats and instruments in just a few seconds; and plug-ins like iZotope Neutron that can clean up, mix, and improve the quality of your track with almost no effort at all. Fast-forward a bit, and AI could be involved in every part of the music-making process.
Or... maybe not.
For the second episode in our Vergecast series about AI, we knew we wanted to explore how musicians are integrating AI into their process and where they might in the future. So we enlisted two experts: Charlie Harding, a songwriter and the co-host of the excellent Switched on Pop podcast, and Ian Kimmel, a producer and songwriter who has worked on songs with BTS, Mary J. Blige, Rick Ross, Juice WRLD, and many others. Kimmel also runs a business called Biscuit Head Collective that helps musicians turn their ideas and demos into top 40-ready tracks — which is exactly the sort of thing a lot of AI tools are starting to promise.
Harding and Kimmel set out to write a song using only AI tools and came on the show to tell us how they fared, what they came up with, and what it all means for the people who make and listen to music. This week on the show, the debut (and maybe swan song) of “I Don’t Belong (Intruding Skyscrapers).”