Music transcription modelling and composition using deep learning

Sturm, Bob L., Santos, João Felipe, Ben-Tal, Oded and Korshunova, Iryna (2016) Music transcription modelling and composition using deep learning. In: 1st Conference on Computer Simulation of Musical Creativity; 17 - 19 Jun 2016, Huddersfield, U.K. (Unpublished)

Full text not available from this archive.

Abstract

We apply deep learning methods, specifically long short-term memory (LSTM) networks, to music transcription modelling and composition. We build and train LSTM networks using approximately 23,000 music transcriptions expressed with a high-level vocabulary (ABC notation), and use them to generate new transcriptions. Our practical aim is to create music transcription models useful in particular contexts of music composition. We present results from three perspectives: 1) at the population level, comparing descriptive statistics of the set of training transcriptions and generated transcriptions; 2) at the individual level, examining how a generated transcription reflects the conventions of a music practice in the training transcriptions (Celtic folk); 3) at the application level, using the system for idea generation in music composition. We make our datasets, software and sound examples open and available: https://github.com/IraKorshunova/folk-rnn
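The approach described in the abstract, training an LSTM on ABC notation and sampling from it to generate new transcriptions, can be illustrated with a minimal sketch. The code below is an illustrative assumption, not the authors' implementation (which is available at the GitHub link above): it uses PyTorch, a character-level vocabulary, and a tiny stand-in corpus, and all names (AbcLstm, sample) and hyperparameters are hypothetical.

    # Minimal sketch: character-level LSTM over ABC notation, trained to
    # predict the next character, then sampled to generate a new transcription.
    import torch
    import torch.nn as nn

    # Stand-in for the ~23,000 ABC transcriptions used in the paper.
    corpus = "X:1\nT:Example Tune\nM:4/4\nK:Gmaj\n|:GABc dedB|cBAG FGAB|\n"
    vocab = sorted(set(corpus))
    stoi = {c: i for i, c in enumerate(vocab)}

    class AbcLstm(nn.Module):
        def __init__(self, vocab_size, hidden=256, layers=3):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden)
            self.lstm = nn.LSTM(hidden, hidden, layers, batch_first=True)
            self.head = nn.Linear(hidden, vocab_size)

        def forward(self, x, state=None):
            h, state = self.lstm(self.embed(x), state)
            return self.head(h), state

    model = AbcLstm(len(vocab))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # One toy training step: each character is the target for the one before it.
    ids = torch.tensor([stoi[c] for c in corpus]).unsqueeze(0)
    logits, _ = model(ids[:, :-1])
    loss = loss_fn(logits.reshape(-1, len(vocab)), ids[:, 1:].reshape(-1))
    loss.backward()
    opt.step()

    # Generation: feed sampled output back in, one character at a time.
    def sample(model, start="X:", length=200, temp=1.0):
        out, state = list(start), None
        x = torch.tensor([[stoi[c] for c in start]])
        with torch.no_grad():
            for _ in range(length):
                logits, state = model(x, state)
                probs = torch.softmax(logits[0, -1] / temp, dim=-1)
                nxt = torch.multinomial(probs, 1).item()
                out.append(vocab[nxt])
                x = torch.tensor([[nxt]])
        return "".join(out)

    print(sample(model))

The population-level evaluation described in the abstract would then amount to comparing descriptive statistics (e.g. distributions of keys, meters, and token frequencies) between the training corpus and a large set of such sampled transcriptions.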

Item Type: Conference or Workshop Item (Paper)
Event Title: 1st Conference on Computer Simulation of Musical Creativity
Uncontrolled Keywords: deep learning, recurrent neural network, music modelling, algorithmic composition
Research Area: Music
Faculty, School or Research Centre: Faculty of Arts and Social Sciences (until 2017) > School of Performance and Screen Studies
Depositing User: Oded Ben-Tal
Date Deposited: 19 Apr 2017 13:47
Last Modified: 24 Apr 2017 13:28
URI: http://eprints.kingston.ac.uk/id/eprint/35038
