Generating Pokemon-Inspired Music from Neural Networks

In terms of model architectures, we were able to learn about and implement bidirectional layers, and we got to see their value for learning from sequence data. (Spoiler alert: the bidirectional LSTM performs much better than the plain LSTM in terms of both in-sample prediction accuracy and output coherence.) We were also surprised by the power of multi-layer perceptrons: our MLP-based generator network, which takes continuous inputs, was able to compete with our RNN-based discriminator network trained on discrete data within a short period of time. This further demonstrates the extraordinary versatility of MLPs.

Finally, this project gave us the opportunity to experiment with reinforcement learning. To us, this method represents an excellent middle ground between supervised and unsupervised learning: unlike before, we could enlist the help of a generator AI to lead the model toward the "correct" result without the need for a target variable. We're excited to see the new advances that this type of machine learning will lead to.

Our code for this project can be found on GitHub!

References

- Sigurður Skúli's blog post
- McMahon's AI Jukebox
- Johnson's page on Music Composition with RNNs
- Paper on Music Generation with Deep Learning Networks
- Paper

For more reading on GANs, check out these links:

- A Brief Intro to GANs
- Short PyTorch Tutorial on GANs
- GAN Library
- GAN Zoo
- Hints for Training GANs
- More details
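As a footnote for readers curious what a bidirectional layer actually computes: here is a minimal NumPy sketch of the idea (purely illustrative, not our project code, and using a plain tanh RNN cell rather than an LSTM). The sequence is processed once forward and once backward, and the two sets of hidden states are concatenated, so every timestep sees both past and future context.

```python
import numpy as np

def simple_rnn(x, Wx, Wh, h0):
    """Run a plain tanh RNN over sequence x, returning all hidden states."""
    h = h0
    states = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ Wx + h @ Wh)
        states.append(h)
    return np.stack(states)

def bidirectional_rnn(x, Wx, Wh, h0):
    """Process the sequence in both directions and concatenate the states,
    so the representation at each timestep covers past and future context."""
    fwd = simple_rnn(x, Wx, Wh, h0)           # left-to-right pass
    bwd = simple_rnn(x[::-1], Wx, Wh, h0)[::-1]  # right-to-left pass, re-aligned
    return np.concatenate([fwd, bwd], axis=-1)

rng = np.random.default_rng(0)
T, d_in, d_h = 8, 4, 3  # sequence length, input dim, hidden dim (toy sizes)
x = rng.normal(size=(T, d_in))
Wx = rng.normal(size=(d_in, d_h))
Wh = rng.normal(size=(d_h, d_h))
out = bidirectional_rnn(x, Wx, Wh, np.zeros(d_h))
print(out.shape)  # (8, 6): the hidden size doubles
```

In a real model each direction would have its own weights (and LSTM gating), but the doubling of the output dimension and the two-pass structure are exactly what frameworks like Keras's `Bidirectional` wrapper provide.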
