
A night at the AI jazz club


This is what happens when artificial intelligence is left to write the music


It’s a Wednesday night in north-east London and upstairs at the Vortex Jazz Club the machines are calling the shots. The human spectators are jiggling happily in their seats, and the musicians are undeniably flesh-and-blood, sweating and straining at their instruments. But the music itself is the product of electronic brains — trained to soak up the music of great artists and strain out new melodies.

This is "the first concert consisting almost entirely of music composed by artificial intelligence" says professor Geraint Wiggins of Queen Mary’s University at the beginning of the evening. In about a few minutes we’ll be listening to Medieval chants, Baroque chorales, and jazz and pop — all made by artificial intelligence with the help of computer scientists who programmed the evening’s "composers." As Wiggins reads out a list of the contributors there’s an excited buzz in the room. The atmosphere is a little like a school recital; no surprise given that most of the audience members are the computer scientists themselves, keen to see how their progeny performs — as well as assess the competition.

"the first concert consisting almost entirely of music composed by artificial intelligence"

To make that judgement easier, says Wiggins, the evening’s music will be in the style of familiar genres or composers. "If we just produce computer-computer music, who’s to say if it’s any good?" he asks as we settle in tight rows of French bistro chairs. "We’re trying to emulate styles through the ages so you can recognize them and tell us whether they’re any good."

Singers tackle an AI-written piece of choral music. (Image credit: James Vincent / The Verge)

The other, unmentioned, reason for this mimicry is that it best fits the current capabilities of AI. The bulk of the concert’s music is the product of deep learning — a type of artificial intelligence that tech companies like Google and Facebook have used to great effect for tasks like speech and facial recognition. Deep learning is fantastic at sifting out patterns from large libraries of data, and then either labeling that information (saying, for example, that’s a cat, this is a human, and so on) or creating completely new data that fits what it’s previously seen.

like a photocopier with an unruly imagination

In the case of tonight’s concert, researchers fed their deep learning systems sheet music from specific composers and time periods. The machines then analyzed recurrent patterns — a series of notes, harmonic sequences, and so on — and created something similar. You can think of this as a photocopier with an unruly imagination. You feed in your latest report from work and out the other end comes something that looks broadly the same, but is, for some reason, talking about a company that doesn’t exist and citing sales figures that don't add up.
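
To make that analogy slightly more concrete, here is a minimal sketch of the underlying idea. It is not the concert’s actual software, which used deep neural networks rather than simple counting, and the melodies and names below are invented for illustration: tally which notes tend to follow which in a handful of training fragments, then sample something statistically similar.

```python
import random
from collections import defaultdict

# Toy illustration only: the concert's systems used deep neural networks,
# not a counting table, but the underlying idea is the same -- learn which
# notes tend to follow which, then sample new material from those statistics.

# Hypothetical training data: a few fragments of melody as note names.
training_melodies = [
    ["C", "E", "G", "E", "C", "D", "E", "F", "G"],
    ["G", "F", "E", "D", "C", "E", "G", "C"],
    ["C", "D", "E", "C", "E", "G", "F", "E", "D", "C"],
]

# "Soak up" the music: record every note observed following each note.
transitions = defaultdict(list)
for melody in training_melodies:
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)

# "Strain out" a new melody: start somewhere and repeatedly pick a
# statistically plausible next note -- the photocopier with an unruly
# imagination.
def generate(start="C", length=12):
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:  # no observed continuation for this note
            break
        melody.append(random.choice(options))
    return melody

print(generate())  # e.g. ['C', 'D', 'E', 'G', 'C', ...]
```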

You can argue that this doesn’t really count as composition, and that the computers are just aping humans. But so what, said the scientists at the concert; can you really say that any artist is without precedent? Aren’t all creative acts simply the sum of their influences? Shut up and let my kid play.

At times during the night I could see their point. A jazz combo led by Mark d’Inverno — an accomplished pianist and professor of computer science at Goldsmiths, University of London — sounded just like the real thing. I wouldn’t go so far as to say that things got heated, but toes were tapping, drinks were knocked over (by me, anyway), and there was even a bit of light whooping from the assembled scientists.

The first AI-written song d’Inverno played had been distilled from the works of Miles Davis, using software developed by Sony’s Computer Science Laboratory in Paris. (The same software was responsible for the AI pop song "Daddy’s Car," also performed that evening.) The AI’s contribution was just a lead sheet — a single piece of paper with a melody line and accompanying chords — but in the hands of d’Inverno and his bandmates, it swung. They started off running through the jaunty main theme, before moving into a series of solos that d’Inverno later informed me were "all human" (meaning, all improvised).

The AI can't handle structure — making it perfect for jazz

D’Inverno’s successful performance is due in part to the fact that jazz is a genre that’s well-suited for an AI composer. Although deep learning systems are great at pattern recognition, they can be remarkably short-sighted, working only in relatively short sequences of notes. They’ll happily produce a couple of bars of melody, but can’t understand the overarching structure of a piece — the way a symphony might return to a central theme, for example, repeating the melody with added tweaks and flourishes so that it moves into new territory. When it comes to jazz, this is less of a problem. People don’t expect structure from jazz.

During the interval, I spoke to Stefan Lattner, a scientist at the Austrian Research Institute for Artificial Intelligence, as he stepped outside for a cigarette. He was bullish about the future of AI composers, and said they’d soon be generating background music for things like adverts. But he also bemoaned their current inability to grasp musical form. "The problem is that these models only learn the statistics of music," said Lattner. "We can give you probability of the next note being a C or an F or an E, but we don’t look much beyond that."

Lattner’s own contributions to the evening — a pair of melodies generated by a deep learning system trained on Mozart’s collected piano sonatas — demonstrated the challenge of structure perfectly. The first piece, named "Mozart Unchained" and performed by pianist Carlos Chacón, was barely recognizable as classical music. It was not discordant, but the melody was all over the place, like something written in the mid-20th century. It was angular and unexpected, taking sharp turns as if trying to shake off the listener’s patience and goodwill.

The second, named "Mozart Constrained," was much more coherent. You’d never mistake it for the genuine article, but there were passages (no longer than 20 seconds) that sounded familiar. Common musical patterns appeared throughout, like an Alberti bass figure in the left hand (a way of playing broken chords as four successive notes that go low-high-middle-high) or a type of melodic ornamentation known as a mordent. Memories of playing Mozart sonatas flooded back to me, and my fingers twitched in recognition — brought to life by the Frankenstein sonata assembled from various parts.

The difference between the two pieces, though, is not the computer’s intuition: it’s Lattner’s. The "Unchained" version was made by giving the deep learning system as little guidance as possible, while for the "Constrained" version Lattner added external constraints so that it could generate only the most statistically likely melodies. In some places, it felt to me like the system might even be lifting passages wholesale from Mozart, but Lattner couldn’t say for sure whether this was happening.
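
Lattner didn’t detail what those constraints actually were, so the following is only a sketch of the general trick: take the same kind of statistical model and let it choose only among its most probable continuations, rather than anything it has ever seen. The transition table and note names here are invented for illustration.

```python
import random
from collections import Counter

# Hypothetical note-transition statistics, as might be distilled from a
# corpus of sonatas (each list holds observed next notes, repeats included).
transitions = {
    "C": ["E", "E", "E", "G", "D", "B"],
    "D": ["C", "C", "E", "F#"],
    "E": ["D", "D", "F", "G", "C#"],
    "F": ["E", "E", "G", "A"],
    "G": ["C", "C", "E", "F", "Ab"],
}

def generate(start="C", length=12, top_k=None):
    """top_k=None samples freely ("unchained"); a small top_k keeps only
    the most statistically likely continuations ("constrained")."""
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:  # a note with no known continuation ends the melody
            break
        if top_k is not None:
            options = [note for note, _ in Counter(options).most_common(top_k)]
        melody.append(random.choice(options))
    return melody

print("unchained:  ", generate())          # free to take rare, angular turns
print("constrained:", generate(top_k=2))   # sticks to the most common moves
```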

Lattner’s pieces raised big questions about AI-generated music for me: when does human intervention outweigh a machine’s skill? In one way or another, computers are always going to rely on humans for their judgement on what makes good music, and what doesn’t; we have to program them, after all.

"A critical, creative accomplice."

When I spoke to d’Inverno after the concert, he suggested that this was the real future of music written by artificial intelligence: a constant collaboration. "It’s always the human’s job to interpret," he told me, leaning on his piano and riffling through the sheet music. "In some sense you could write four random chords down and I’d try and find a way of making music with that." He pointed out that although the music sounds like Miles Davis, it feels like a fake when he plays it. "Some of the phrases don’t quite follow on or they trip up your fingers," he said. This makes sense, because this isn’t music written by a human sitting at a keyboard; it’s the creation of a computer. Artificial intelligence can place notes on a stave, but it can’t yet imagine their performance. That’s up to humans.

When computer scientists talk about AI, they often speak enviously of our unconscious intelligence: the way we catch a ball without thinking, for example, or pick up a new video game in a matter of minutes. Being able to understand and think about music isn’t usually mentioned, but it’s arguably a more subtle and difficult test of cognition than either of those tasks. You might not think of yourself as particularly musical, but you’ll know when a song sounds off. And given a bit of practice, you’ll know how to play one, too.

Mark d'Inverno after the show. (Image credit: James Vincent / The Verge)

In d’Inverno’s vision of the future, AI doesn’t replace humans, but becomes a sort of musical sparring partner. "Even if you don’t think machines can be creative by themselves, they can potentially be creative friends," he says. "You can imagine a situation when you’re having a conversation with a machine offering prompts as a critical, creative accomplice."

It’s a reassuring notion, one that fits into a familiar spectrum of human-computer collaboration. This begins with the first computer-synthesized notes, and takes in mainstream musical aids like GarageBand. What’s not clear, though, is where that spectrum ends. Once an AI is able to offer up respectable musical criticism, isn’t it also making judgments independent of humans? If it’s a worthy friend and collaborator, will it one day be able to strike out on its own? What happens when some future musical AI goes solo?

Well, the band might break up, but the music will go on.