With the help of MIDI, the 30-year-old digital music standard, computers are learning musical composition and creativity. Researchers have been experimenting with AI-generated music for years, and this article surveys what they have learned along the way.

In May, Google research scientist Douglas Eck left his Silicon Valley office to spend a few days at Moogfest, a gathering for music, art, and technology enthusiasts deep in North Carolina's Smoky Mountains. Eck told the festival's music-savvy attendees about his team's ideas for teaching computers to help musicians write music: generate harmonies, create transitions within a song, and elaborate on a recurring theme. Someday, a machine might even learn to write a song all on its own.

Eck hadn't come to the festival—which was inspired by the legendary creator of the Moog synthesizer and peopled with musicians and electronic music nerds—simply to introduce his team's challenging project. To "learn" how to create art and music, he and his colleagues need users to feed the machines tons of data, using MIDI, a format more often associated with dinky video game sounds than with complex machine learning.
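Why MIDI rather than raw audio? A MIDI file stores music as a compact sequence of symbolic events (which note starts, when, and how hard) rather than as a waveform, which makes it far easier for a learning system to digest. As a minimal illustration (not from the article, and not how Magenta itself is implemented), a short melody can be flattened into timed note-on/note-off events using MIDI's standard status bytes:

```python
# A melody in MIDI terms: symbolic events, not audio samples.
NOTE_ON = 0x90   # note-on status byte, channel 1
NOTE_OFF = 0x80  # note-off status byte, channel 1

# (MIDI note number, duration in ticks); 60 = middle C
melody = [(60, 480), (62, 480), (64, 480), (65, 480), (67, 960)]

def to_events(notes, velocity=64):
    """Flatten (pitch, duration) pairs into timed MIDI event tuples."""
    events, time = [], 0
    for pitch, dur in notes:
        events.append((time, NOTE_ON, pitch, velocity))   # key pressed
        time += dur
        events.append((time, NOTE_OFF, pitch, 0))         # key released
    return events

events = to_events(melody)
print(len(events))  # 10: one on + one off per note
print(events[0])    # (0, 144, 60, 64)
```

Sequences of discrete tokens like these are exactly the kind of data that sequence-learning models handle well, which is part of MIDI's appeal as training material.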

Researchers have been experimenting with AI-generated music for years. Scientists at Sony's Computer Science Laboratory in France recently released what some have called the first AI-generated pop songs, composed by their in-house AI algorithms (although the songs were arranged by a human musician, who also wrote the lyrics). Their AI platform, FlowMachines, has also used MIDI to compose jazz and classical scores. Eck's talk at Moogfest was a prelude to a Google research program called Magenta, which aims to write code that can learn how to generate art, starting with music...

... Read the full article by Tina Amirtha at FastCompany.com