The idea that artificial intelligence can compose music is scary for a lot of people, including me. But music-making AI software has advanced so far in the past few years that it’s no longer a frightening novelty; it’s a viable tool that producers can and already do use to help in the creative process. This raises the question: could artificial intelligence one day replace musicians? For the second episode of The Future of Music, I went to LA to visit the offices of AI platform Amper Music and the home of Taryn Southern, a pop artist who is working with Amper and other AI platforms to co-produce her debut album I AM AI.
Using AI as a tool to make music or aid musicians has been common practice for quite some time. In the ’90s, David Bowie helped develop an app called the Verbasizer, which took literary source material and randomly reordered the words to create new combinations that could be used as lyrics. In 2016, researchers at Sony used software called Flow Machines to create a melody in the style of The Beatles. This material was then turned over to human composer Benoît Carré and developed into a fully produced pop song called “Daddy’s Car.” (Flow Machines was also used to help create an entire album’s worth of music under the name SKYGGE, which is Danish for “shadow.”) On a consumer level, the technology is already integrated into popular music-making programs like Logic, software used by musicians around the world, which can auto-populate unique drum patterns with the help of AI.
Now, there’s an entire industry built around AI services for creating music, including the aforementioned Flow Machines, IBM Watson Beat, Google Magenta’s NSynth Super, Jukedeck, Melodrive, Spotify’s Creator Technology Research Lab, and Amper Music.
Most of these systems work by using deep learning networks, a type of AI that relies on analyzing large amounts of data. Basically, you feed the software tons of source material, from dance hits to disco classics, which it then analyzes to find patterns. It picks up on things like chords, tempo, length, and how notes relate to one another, learning from all the input so it can write its own melodies. There are differences between platforms: some deliver MIDI while others deliver audio. Some learn purely by examining data, while others rely on hard-coded rules based on music theory to guide their output.