I’ve seen really mind-blowing examples of the power of such architectures, from recreating images using particular art styles to automatically forming word representations that account for pretty high-level semantic relations.
Recurrent Neural Networks (most frequently in their LSTM form) are particularly interesting in that they learn patterns in sequences. A great article showing their tremendous power is Andrej Karpathy’s The Unreasonable Effectiveness of Recurrent Neural Networks. If you haven’t, you should go read it now.
After seeing the wonderful things they can do, I decided to try and use them to tackle a problem I’ve been interested in for years: programmatic music composition. This is something I’ve attempted in the past using Markov models, Genetic Algorithms, and other techniques with pretty disappointing results.
I was really surprised when a project I thought would take me a whole weekend actually took me ~5 hours of coding at a busy Starbucks. (!)
Let me show you a short 8-second sample first, and then tell you how I did it:
That’s one of the first pieces my RNN composed. Not very good, right? But bear in mind that it learned how to generate that just by looking at examples of existing music. I didn’t have to program (or know about) any rule about how to structure notes to form a melody. Let’s see how:
The first thing I did was look for a good representation for the music to be composed. I eventually found abc notation, which was particularly good for my purposes because it natively encodes chords and melodies, making it easier to procedurally create meaningful sounds.
Here’s a simple example I just wrote:
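A minimal tune in abc notation looks something like this (an illustrative snippet of the format, not from the Nottingham collection):

```
X:1
T:Hello abc
M:4/4
L:1/4
K:C
"C" C E G c | "G" B2 d2 | "C" c4 |]
```

The X:, T:, M:, L: and K: lines are headers (index, title, meter, default note length, and key); quoted strings like "C" are chord annotations, letters are notes (lowercase is an octave higher), and a trailing number multiplies the note’s duration.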
I won’t go over the details of the format, but if you know some musical notation it will look familiar. If you’re curious about how this sounds, you can listen to it on Ubuntu by saving the above snippet into a hello.abc file and running:
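On Ubuntu that means installing the abcmidi and timidity packages and converting the file to MIDI first (package names and flags are the ones I know from Ubuntu’s repos; they may differ on other systems):

```
sudo apt-get install abcmidi timidity
abc2midi hello.abc -o hello.mid
timidity hello.mid
```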
Given that abc is a text format, I decided to give Karpathy’s char-rnn a spin. I actually ended up using Sherjil Ozair’s TensorFlow version of char-rnn, because TensorFlow (Google’s new Machine Learning framework) is way easier to play with than Torch. You should know that hardcore Machine Learning researchers don’t often use TensorFlow, reportedly because it’s about 3x slower (Justin Johnson told me this; I haven’t benchmarked it myself, but most of the serious work I’ve seen does use either Caffe or Torch).
I then needed data to train the RNN on. After googling a bit, I came across the Nottingham Music Database, which has over 1000 folk tunes in abc notation. I downloaded this dataset, which is pretty small compared to the >1M data points typically used to train Neural Nets for real.
I concatenated all the abc files together and started training the network on an AWS g2.2xlarge instance:
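The preparation step is simple because char-rnn-tensorflow just wants one big input.txt inside a data directory. Here’s a sketch with stand-in files (the directory names are my own; train.py and its --data_dir flag come from Sherjil’s repo):

```shell
# Stand-ins for the downloaded Nottingham tunes; with the real dataset
# you'd skip the printf lines and cat the actual .abc files.
mkdir -p abc data/folk
printf 'X:1\nK:C\nCDEF GABc|\n' > abc/tune1.abc
printf 'X:2\nK:G\nGABc defg|\n' > abc/tune2.abc

# char-rnn-tensorflow reads a single file named input.txt.
cat abc/*.abc > data/folk/input.txt

# Training (run from the char-rnn-tensorflow checkout):
# python train.py --data_dir=data/folk
```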
Without much work I was training a RNN on a folk music dataset.
After only 500 batches of training, the network produced mostly noise, but you could already begin to make out traces of abc notation:
After 500 batches of training the RNN produced invalid abc notation.
Wait a couple more minutes, and with 1000 batches of training the outcome changes completely: while still strictly invalid abc format, this looks much more like the correct notation.
As you can see, the network even starts generating titles for its creations (found in the T: field of the abc header).
After 7200 batches, the network produces mostly fully correct abc notation pieces. Here’s an example:
A valid abc notation piece produced by the trained RNN after 7200 batches.
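For the curious, producing one of these pieces from a trained checkpoint is a single command with the same repo (sample.py and its --save_dir and -n flags are from Sherjil’s version; treat the exact invocation as approximate):

```
python sample.py --save_dir=save -n 2000 > piece.abc
```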
OK, so the RNN can learn to produce valid abc notation files, but that doesn’t say anything about their actual musical quality. I’ll let you judge by yourself (I think they’re quite shitty, but at least non-random; I don’t think I could have programmed an algorithm to generate better non-trivial music).
I’ll list 4 more non-hand-picked pieces I just generated from the fully-trained network, along with their abc version and music sheet. The names are hilarious.
A GanneG Maman [0:33] (music sheet)
Ad 197, via PR [1:01] (music sheet)
Thacrack, via EF [1:17] (music sheet)
Lea Oxlee [1:07] (music sheet) - my favorite!
Machine Learning is at a tipping point. The tools are getting better. Complexities are being abstracted. You don’t have to have a PhD to make apps that use state of the art network models. Very good frameworks and models are available for everyone to use. It’s a good time to hack something in Machine Learning. Even if you’re not an expert, you can achieve fun results.