
MLP study 1
Duration - 5:30
Instrumentation - four-channel fixed media

Sonification of a 25-parameter dropout layer in a multilayer perceptron architecture training on one epoch of the MNIST dataset. 

Neural networks are, at bottom, huge matrices of numbers. When a network is trained, these numbers are first randomized. Then sample vectors are multiplied through the network, the outputs are compared against their expected values, and the error is fed back through the network to adjust the numbers. Over thousands of iterations, the parameters of the network gradually converge to a (hopefully) good predictor. In this piece, each node in one of the layers of a multilayer perceptron is mapped onto different musical parameters during training on the archetypal "MNIST" dataset, a collection of handwriting samples of the Arabic numerals. Over the course of the piece, the network moves from a state of random chance to an almost perfect classifier of (a narrow subset of) human handwriting.
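For the technically curious, here is a minimal sketch of the training loop described above, in Python. It is illustrative only, not the code behind the piece: random vectors stand in for the MNIST images and labels, a 25-node hidden layer stands in for the sonified layer, and the final activation-to-pitch mapping is a hypothetical example of how a node might drive a musical parameter.

```python
# Minimal sketch (not the composer's actual code): a tiny multilayer
# perceptron trained with backpropagation, exposing a 25-node hidden
# layer each step so its values could be mapped to musical parameters.
import numpy as np

rng = np.random.default_rng(0)

# Shapes: 784 inputs (28x28 pixels), 25 hidden nodes, 10 digit classes.
# The weights start out randomized, as described above.
W1 = rng.standard_normal((784, 25)) * 0.01
W2 = rng.standard_normal((25, 10)) * 0.01

def forward(x):
    h = np.tanh(x @ W1)            # 25 hidden activations: the sonified layer
    logits = h @ W2
    e = np.exp(logits - logits.max())
    return h, e / e.sum()          # softmax class probabilities

lr = 0.1
for step in range(1000):
    x = rng.standard_normal(784)       # stand-in for one MNIST image
    y = np.eye(10)[rng.integers(10)]   # stand-in for its one-hot label

    h, p = forward(x)
    err = p - y                        # error fed back through the network
    dW2 = np.outer(h, err)
    dh = (W2 @ err) * (1 - h**2)       # backpropagate through the tanh
    dW1 = np.outer(x, dh)
    W2 -= lr * dW2                     # recompute the numbers
    W1 -= lr * dW1

    # Hypothetical mapping: scale each node's activation to a MIDI pitch
    # around middle C; any musical parameter could be driven this way.
    pitches = 60 + 24 * h
```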

Best experienced loud.

MLP study stereo mix