The examples the company has shared are music to my ears.
Google researchers have made an AI that can generate minutes-long musical pieces from text prompts, and can even transform a whistled or hummed melody into other instruments, similar to how systems like DALL-E generate images from written prompts (via TechCrunch). The model is called MusicLM, and while you can’t play around with it for yourself, the company has uploaded a bunch of samples that it produced using the model.
The examples are impressive. There are 30-second snippets of what sound like actual songs created from paragraph-long descriptions that prescribe a genre, vibe, and even specific instruments, as well as five-minute-long pieces generated from one or two words like “melodic techno.” Perhaps my favorite is a demo of “story mode,” where the model is basically given a script to morph between prompts. For example, this prompt:
electronic song played in a videogame (0:00-0:15)
meditation song played next to a river (0:15-0:30)
resulted in the audio you can listen to here.
It may not be for everyone, but I could totally see this being composed by a human (I also listened to it on loop dozens of times while writing this article). Also featured on the demo site are examples of what the model produces when asked to generate 10-second clips of instruments like the cello or maracas (the latter example is one where the system does a relatively poor job), eight-second clips of a certain genre, music that would fit a prison escape, and even what a beginner piano player would sound like versus an advanced one. It also includes interpretations of phrases like “futuristic club” and “accordion death metal.”
MusicLM can even simulate human vocals, and while it seems to get the tone and overall sound of voices right, there’s a quality to them that’s definitely off. The best way I can describe it is that they sound grainy or staticky. That quality isn’t as clear in the example above, but I think this one illustrates it pretty well.
That, by the way, is the result of asking it to make music that would play at a gym. You may also have noticed that the lyrics are nonsense, but in a way that you may not necessarily catch if you’re not paying attention — kind of like if you were listening to someone singing in Simlish or that one song that’s meant to sound like English but isn’t.
I won’t pretend to know how Google achieved these results, but it’s released a research paper explaining it in detail if you’re the type of person who would understand this figure: