Can We Teach Computers to Make Like Mendelssohn?

Not yet. A robot composer is good at emulating short snippets, but it still hasn't mastered melody

Duke researchers are teaching computers to compose new classical music in the style of Romantic-era composers like Chopin and Beethoven. Photo from Pixabay.com

DURHAM, N.C. -- If the field of machine learning had a holiday theme song, it might be this remix of the Christmas carol “Hark! The Herald Angels Sing” written by a computer.

Duke University researchers are teaching computers to write classical piano music in the mode of great composers like Mendelssohn and Beethoven. The resulting tunes are a pastiche of 19th-century style.

Duke graduate student Anna Yanchenko started working on the project with Duke statistics professor Sayan Mukherjee as part of her master’s degree in statistical science. A musician herself, Yanchenko wanted to see if she could use artificial intelligence to turn a computer into a composer. What they found doesn’t signal the takeover of the machines, but it helps researchers understand their creative potential.

The team wanted to know which machine learning methods come closest to producing classical piano music that could pass as human. To find out, they did an experiment.

They focused on a particular class of methods used to analyze how data changes over time, called state space models. Computer speech recognition uses the same sorts of models to translate what a person says to text on the screen.
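For readers curious about the mechanics, here is a minimal sketch of the generative idea behind one simple kind of discrete state space model, a hidden Markov model. All of the numbers are made up for illustration; the 14 models the researchers actually compared are described in their paper.

```python
import numpy as np

# A minimal sketch of a discrete state space model (a hidden Markov model).
# A hidden "state" evolves step by step, and each state emits an observable
# symbol -- here, a pitch. The parameters below are random, purely for
# illustration; they are not the researchers' models.

rng = np.random.default_rng(0)

n_states = 3     # hidden "musical contexts" (illustrative)
n_pitches = 12   # observable symbols, e.g., the 12 chromatic pitches

# Transition matrix: probability of moving between hidden states.
A = rng.dirichlet(np.ones(n_states), size=n_states)
# Emission matrix: probability of each pitch given the hidden state.
B = rng.dirichlet(np.ones(n_pitches), size=n_states)

def generate(length):
    """Sample a pitch sequence from the model's generative process."""
    state = rng.integers(n_states)
    pitches = []
    for _ in range(length):
        pitches.append(int(rng.choice(n_pitches, p=B[state])))
        state = int(rng.choice(n_states, p=A[state]))
    return pitches

print(generate(16))
```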

To train the models, the researchers chose 10 piano pieces from the Romantic era, the period of Western classical music that began in the early 1800s.

Each piece was converted to a string of numbers representing the pitch and length of each note. The researchers then fed the data set into the system, and had 14 models analyze the music and create new works in the style of the originals.
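The story doesn't spell out the exact file format, but such an encoding might look something like the following sketch, with each note written as a hypothetical (pitch, duration) pair:

```python
# A hypothetical encoding of a melody as (pitch, duration) pairs.
# Pitches use MIDI note numbers; durations are in quarter-note beats.
# The representation used in the paper may differ.

opening = [
    (67, 0.5), (67, 0.5), (67, 0.5), (63, 2.0),  # da-da-da-DUM (G-G-G-Eb)
    (65, 0.5), (65, 0.5), (65, 0.5), (62, 2.0),  # F-F-F-D
]

# Flatten into one numeric sequence a model can consume.
flat = [value for note in opening for value in note]
print(flat)
```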

Before training, the system knows nothing about Romantic music. It learns, on its own, from the data.

The computer sorts through each piece looking for characteristic patterns of notes, or motifs, then builds a new sequence of notes based on the patterns it finds.

A remix based on Beethoven's Fifth Symphony, for example, might still contain the famously foreboding da-da-da-DUM that marks the beginning of the original. But the notes might reappear at a different pitch, or tempo, or be rearranged in some other way.
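A toy stand-in for this pattern-learning step is a first-order Markov chain, which counts which note tends to follow which and then samples a new sequence from those counts. The researchers' state space models are considerably richer than this sketch, but the flavor is similar:

```python
from collections import defaultdict
import random

# A toy stand-in for the pattern-learning step: a first-order Markov
# chain over pitches. Not the authors' actual models.

def train(notes):
    """Count which note follows which in the training melody."""
    counts = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        counts[a].append(b)
    return counts

def compose(counts, start, length, seed=0):
    """Sample a new sequence by repeatedly picking a learned follower."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = counts.get(out[-1])
        if not followers:          # dead end: restart from the opening note
            followers = [start]
        out.append(random.choice(followers))
    return out

# Train on the opening motif of Beethoven's Fifth (MIDI pitches) and
# generate a "remix" that reshuffles the same patterns.
motif = [67, 67, 67, 63, 65, 65, 65, 62]
print(compose(train(motif), start=67, length=16))
```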

The researchers used their models to generate 140 new pieces, each 1,000 notes long.

To determine the winners, they scored the results in terms of harmony, melody and originality, meaning how much each new piece differed from the original that inspired it.
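The story doesn't detail the scoring formulas, but one simple, illustrative way to quantify originality is the fraction of short note patterns in a generated piece that never appear in the source. This is a stand-in metric, not necessarily the paper's:

```python
def ngrams(notes, n=4):
    """All length-n runs of notes, as a set of tuples."""
    return {tuple(notes[i:i + n]) for i in range(len(notes) - n + 1)}

def originality(original, generated, n=4):
    """Fraction of n-note patterns in the generated piece absent from
    the original: 0 means a pure copy, 1 means nothing shared.
    An illustrative stand-in, not the paper's actual metric."""
    gen = ngrams(generated, n)
    return len(gen - ngrams(original, n)) / len(gen)

print(originality([60, 62, 64, 65, 67, 65, 64, 62],
                  [60, 62, 64, 65, 64, 62, 60, 59]))  # prints 0.8
```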

The team also played the top-scoring pieces for 16 people, half of whom were musicians, to see what they thought.

The results are unlikely to replace the maestros soon. When asked how human-like the generated pieces were on a scale of 1 to 5, most listeners gave them a three.

The biggest difference is the lack of melody. “There’s really no long-term structure,” Yanchenko said. “That’s the main giveaway that this hasn’t been composed by a human.”

Instead, many of the pieces sounded more like they were composed by an amnesiac with a looping machine: “It seems to forget what it just did. It repeats itself a lot,” Yanchenko said.

“Jokingly we can say we can maybe generate Muzak for elevators,” Mukherjee said.

While many listeners couldn’t identify the original songs that inspired the computer-generated pieces, most named the piece based on Chopin’s Funeral March as their favorite.

The tune based on Beethoven’s “Ode to Joy” came in second, though the listeners said it sounded more like new-age piano than classical.

The one inspired by Mendelssohn’s “Hark! The Herald Angels Sing” was the least popular. “They thought it was a little herky-jerky,” Mukherjee said.

The notes in the generated pieces also didn’t blend as well as they did in the originals, though models trained on songs with simple structure, such as Pachelbel's Canon, produced more pleasing tunes.

Computer-generated music isn’t new. Researchers such as David Cope, former professor of music at the University of California, Santa Cruz, have tried to use computers to analyze and imitate musical styles since the early 1980s.

Ultimately, the Duke team is trying to figure out which methods work best for modeling particular types of music. Models developed based on patterns in Romantic-era music may not work as well for jazz, for example.

The researchers concede that their models were trained on a simplified representation of real music. For one, the training data assume that notes are always played one after the other, never simultaneously, as in a chord. But it’s a start.

“In the future we’d also like to look at orchestral pieces with multiple instruments,” said Yanchenko, currently on the staff at MIT Lincoln Laboratory. “But we’re not there yet.”

Yanchenko presented the results on Dec. 7 at the 12th Women in Machine Learning Workshop, held in conjunction with the 2017 Conference on Neural Information Processing Systems in Long Beach, California.

This research was supported by the National Science Foundation (NSF DMS 16-13261, NSF IIS 15-46331, NSF DMS 14-18261, NSF DMS-17-13012, NSF ABI 16-61386) and the National Institutes of Health (NIH R21 AG055777-01A).

CITATION: "Classical Music Composition Using State Space Models," Anna Yanchenko and Sayan Mukherjee. https://arxiv.org/abs/1708.03822.