Folk music modeling with LSTM for algorithmic composition research
This repository provides tools and models for generating folk music using Long Short-Term Memory (LSTM) recurrent neural networks. It targets musicians, researchers, and enthusiasts interested in AI-driven music composition and style modeling, offering a way to create novel folk tunes and explore the intersection of traditional music and artificial intelligence.
How It Works
The project utilizes LSTMs, a type of recurrent neural network well-suited for sequential data like music. It trains on large datasets of folk music transcriptions to learn stylistic patterns, melodies, and structures. The trained models can then generate new musical pieces in a similar style, offering a deep learning approach to algorithmic composition.
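The repository itself uses Theano and Lasagne, but the recurrence an LSTM applies at each timestep can be sketched in plain NumPy. Below is a minimal, illustrative single-cell forward pass with made-up sizes (8 tokens standing in for a tiny ABC-notation vocabulary, 16 hidden units); it is not the project's actual implementation.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step: gate activations from the input and previous hidden state."""
    z = W @ np.concatenate([x, h_prev]) + b
    n = len(h_prev)
    i = 1 / (1 + np.exp(-z[:n]))         # input gate
    f = 1 / (1 + np.exp(-z[n:2 * n]))    # forget gate
    o = 1 / (1 + np.exp(-z[2 * n:3 * n]))  # output gate
    g = np.tanh(z[3 * n:])               # candidate cell update
    c = f * c_prev + i * g               # new cell state
    h = o * np.tanh(c)                   # new hidden state
    return h, c

rng = np.random.default_rng(0)
vocab, hidden = 8, 16                    # hypothetical sizes for illustration
W = rng.normal(0, 0.1, (4 * hidden, vocab + hidden))
b = np.zeros(4 * hidden)
h, c = np.zeros(hidden), np.zeros(hidden)

# Feed a short one-hot token sequence through the cell, step by step
for token in [0, 3, 5, 3]:
    x = np.eye(vocab)[token]
    h, c = lstm_step(x, h, c, W, b)
```

At each step the hidden state `h` summarizes the sequence seen so far; in a full model it would be projected to a distribution over the next token.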
Quick Start & Requirements
The project uses conda for environment management; packages are installed via conda and pip. Key dependencies include mkl-service, Theano (master branch), and Lasagne (master branch).

Generate tunes with a pretrained model:
python sample_rnn.py --terminal metadata/folkrnn_v2.pkl

Train a model from scratch:
python train_rnn.py config5 data/data_v2
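Generation of this kind typically works by repeatedly sampling the next token from the network's output distribution, with a temperature parameter controlling randomness. A minimal sketch of temperature-controlled sampling (an illustrative assumption, not the repository's actual sampling code):

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    """Sample a token index from logits after temperature scaling.

    Low temperature concentrates mass on the most likely token;
    high temperature flattens the distribution toward uniform.
    """
    if rng is None:
        rng = np.random.default_rng()
    z = np.asarray(logits, dtype=float) / temperature
    p = np.exp(z - z.max())   # subtract max for numerical stability
    p /= p.sum()
    return rng.choice(len(p), p=p)

# Example: sample the next token from hypothetical model outputs
next_token = sample_token([2.0, 0.5, 0.1], temperature=0.8,
                          rng=np.random.default_rng(42))
```

In a full generation loop, the sampled token would be fed back into the network to produce the logits for the following step.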
Maintenance & Community
The project is associated with research from institutions including Queen Mary University of London. No active community channels such as Discord or Slack are mentioned, but the extensive list of publications and media coverage indicates significant prior research activity and engagement.
Licensing & Compatibility
The repository does not explicitly state a license. Given the nature of the dependencies (Theano, Lasagne) and the project's academic origins, users should verify licensing for commercial use or integration into closed-source projects.
Limitations & Caveats
The project relies on Python 2.7 and older deep learning libraries (Theano, Lasagne), which are largely deprecated and may present significant challenges for setup and compatibility with modern Python environments and hardware.