Making Generative Music in the Browser

My personal process

Alex Bainter, Mar 25

After making generative music systems on generative.fm for the better part of a year now, I've received numerous requests for an explanation of how I create the systems featured on the site.
Rather than replying to everyone individually, I thought it would be best to explain my process here for anyone interested.
Before I do that, you should know that I work full-time as a software developer.
To keep this post as brief as possible, I won’t be teaching you how to code.
There are other methods of building generative music systems which don’t require programming knowledge, but I don’t use them.
In addition, I’ve probably had more music education than most, though I know a lot more about software development than I do music theory.
Berklee offers a few great courses on Coursera which cover music theory and are free to audit (and the instructor is great).
To be fair to musicians who have to deal with unfamiliar programming terms, I also won’t be explaining any music-related concepts or terms unless it’s quick.
I will try to link to explanations where possible.
It might be worth noting that I’m not affiliated with any of the products, libraries, or resources I’ll be linking to.
I’m just trying to be helpful by sharing what I use.
Tools

Web Audio API

The Web Audio API is a relatively new browser API which is well supported.
It enables web developers to play, synthesize, control, process, and record audio in the browser.
As you can probably guess, this is the linchpin technology I use to create browser-based generative music systems.
Boris Smus wrote an excellent, short book on the subject titled Web Audio API: Advanced Sound for Games and Interactive Apps which I recommend.
Tone.js

Tone.js offers an abstraction layer on top of the Web Audio API which should be familiar to musicians and producers, with a vast array of synthesizers, effects, and filters, as well as related utility functions for things like converting scientifically notated pitches to their frequencies and back.
Additionally, it greatly simplifies access to an accurate timing system.
While this library is not strictly necessary for making generative music systems in the browser, I’ve never built one without it.
It’s very rare that I find myself interacting directly with the Web Audio API rather than using this fantastic library.
[Embedded example built with Tone.js and the Web Audio API]
Samples

It's certainly possible to synthesize sounds with Tone.js and the Web Audio API, but it's not something I've explored much (read: I suck at it).
Instead, I prefer to use recorded audio samples which I play and manipulate.
There are plenty of libraries full of free or cheap audio samples out there, but the most significant ones I’ve used at the time of writing are the Community Edition of Versilian Studios Chamber Orchestra 2, the Versilian Community Sample Library, and the Sonatina Symphonic Orchestra.
The generosity of the providers of these and other free libraries inspires me to release my work for free as well.
In addition to using sample libraries, sometimes I record my own audio samples for use on the site.
I record with a Rode NT1-A microphone or direct from my Line 6 POD HD500X into a Focusrite Scarlett 2i4.
This is all relatively cheap gear which I purchased used.
Occasionally when I record, I reconstruct my “recording booth” which I designed and made out of PVC pipe and movers’ blankets to dampen sound.
Though, I usually can’t be bothered.
Tonal

While not every piece requires the Tonal library, it's invaluable for the ones that do. The library contains all sorts of helpful functions which do things like returning all the notes or intervals in a given chord or scale, inverting chords, transposing notes and intervals up or down a given number of semitones, and so much more.
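As a rough illustration, here's a plain-JavaScript sketch of the kinds of utilities such a library provides. These are my own simplified versions, not the library's actual API; a real music-theory library handles flats, enharmonics, and named intervals as well.

```javascript
// The twelve pitch classes, indexed by semitone offset from C.
const PITCH_CLASSES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];

// Parse a scientifically notated pitch like "C#4" into a MIDI-style number.
const toNumber = pitch => {
  const [, pc, octave] = pitch.match(/^([A-G]#?)(-?\d+)$/);
  return PITCH_CLASSES.indexOf(pc) + 12 * (Number(octave) + 1);
};

// Convert a MIDI-style number back into scientific notation.
const toPitch = n => `${PITCH_CLASSES[n % 12]}${Math.floor(n / 12) - 1}`;

// Transpose a pitch up or down by a number of semitones.
const transpose = (pitch, semitones) => toPitch(toNumber(pitch) + semitones);

// Return the notes of a chord given its root and interval structure in semitones.
const chordNotes = (root, intervals) =>
  intervals.map(semitones => transpose(root, semitones));

console.log(transpose('C4', 7)); // 'G4'
console.log(chordNotes('C4', [0, 4, 7, 11])); // ['C4', 'E4', 'G4', 'B4']
```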
My Process

I can describe my process in three "phases," which sounds very official, but in reality the lines between them are blurry and I go back and forth.

Ideation

generative.fm pieces usually start one of two ways.
Either I have an idea for a piece which I then build and play with, or I discover a sound I like while playing with sample files.
In the former case, these often start on paper, either as diagrams or just text descriptions which I jot down when I think of them.
While I can’t imagine these are interesting to anyone, they might be the best way to demonstrate how I thought of some of the pieces.
Here are some samples from my notebook.

[Notebook page] I swear I didn't realize how much this looked like boobs until now.
This was one of the first diagrams I made.
The idea I was trying to capture was to begin with two instruments playing the same notes in mono.
Slowly, one instrument would be panned hard left and the other hard right, and this panning would oscillate back and forth.
The further away the instruments got, the more different the music they played would be, but as they got closer to the center panning again, their music would merge back into the same piece.
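The oscillating pan in this idea can be sketched as a slow sine wave driving each instrument's pan position. Everything here (the period, the mirrored mapping, the divergence measure) is my own assumption about how such a system might be parameterized, not the actual code behind "Lemniscate."

```javascript
// Pan position of the first instrument at time t (in seconds):
// -1 is hard left, 1 is hard right. The second instrument mirrors it.
const panA = (t, periodSeconds = 120) =>
  Math.sin((2 * Math.PI * t) / periodSeconds);
const panB = (t, periodSeconds = 120) => -panA(t, periodSeconds);

// How far apart the two parts are, from 0 (identical material, centered)
// to 1 (maximally different, panned hard apart). This value could drive
// how much the two instruments' music diverges.
const divergence = t => Math.abs(panA(t));

console.log(panA(0)); // 0, both instruments centered and in unison
console.log(divergence(30)); // ~1, fully panned apart at a quarter period
```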
This became "Lemniscate."

[Notebook diagrams] These diagrams became "Trees." I had this idea while out for a walk after noticing the way trees start as a single trunk and then split over and over as they go up.
[Notebook diagram] I do have a piece on the site named "Drones," but this isn't it.
I haven’t made this one yet.
Sometimes I just write down descriptions of an idea.
These are for "Eno Machine." As indicated, it's based on a technique Brian Eno used to create one of the tracks on Music for Airports.
You can read more about that here.
I did make this one but I wasn't happy with the results, so it's not on generative.fm.
This isn’t a piece so much as a technique for playing a group of notes.
I've used it in several pieces, but it's probably most noticeable in "Sevenths." I'll cover it a bit below.
For ideas like these, it’s really just a matter of building and executing it to see what it sounds like.
Then I move to the next "phase."

The other way a piece can begin is when I find a sample I like.
While some samples are strong enough to stand on their own in a piece, more often I arrive at a sound I like by manipulating samples.
I experiment with reversing, changing the pitch of, adding effects to, and arranging audio files in different ways until I find something I enjoy.
A very simple example of this is the piece “Impact,” which I made because I liked the sound of a piano key played in reverse, a technique used by tons and tons of bands (like Yes).
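A reversed sample is simple to sketch in plain JavaScript. In practice a library like Tone.js can reverse playback for you; the array version below just shows what's happening to the sample data, with a plain Float32Array standing in for one channel of a decoded audio buffer.

```javascript
// Reverse one channel of sample data, the effect behind the
// backwards piano in "Impact."
const reverseChannel = samples => Float32Array.from(samples).reverse();

const channel = Float32Array.of(0.5, 0.25, -0.75, 0);
// The samples come back in reverse order: 0, -0.75, 0.25, 0.5
console.log(reverseChannel(channel));
```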
I have a separate repository to store these experiments while they’re in progress.
The code is messy and there’s no documentation.
It’s meant to be a quick and easy place for me to try things.
I serve the project locally during development with webpack-dev-server, which means my code is re-fetched and re-executed every time I change it.
I know there are some live coding environments but I haven’t tried them — this setup works fine.
Most of my experiments fail to produce anything I like.
I’ve spent many hours trying to manipulate a sample into something interesting only to deem the direction a failure and abandon it.
I do this experimentation intentionally rather than waiting for inspiration to strike in the hope that I’ll hone my intuition and in the future I’ll be able to come up with something I like faster.
Synthesis

(As in synthesis of the system, not audio synthesis.) Once I find some kernel of an idea I like, it needs to be expanded into a generative system.
This means finding ways to add randomization such that the piece won’t repeat itself even though it plays forever.
This randomization can be applied to just about any aspect of the music.
For example, in places where a composer would normally have an instruction like "rest for four beats," I'll insert something like "rest for some amount of time between two and five seconds." In this example, as the piece executes it will make a new selection of time whenever that rest happens. Another example would be for me to program the piece to choose a note at random rather than me choosing a particular note to play.
The systems make random choices, but I’m in control of the possible outcomes of those choices.
I don’t program pieces to just choose any old note to play.
Instead, I usually constrain the choice within a specific selection of notes which I feel are appropriate.
I don't have any trick for how I choose these constraints; I just try things and adjust as I hear the results.
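These two kinds of randomization can be sketched in a few lines. The note pool here is my own example, not one from an actual piece:

```javascript
// A random value between min and max, e.g. a rest duration.
const randomBetween = (min, max) => min + Math.random() * (max - min);

// "Rest for some amount of time between two and five seconds."
const restSeconds = randomBetween(2, 5);

// Choose a note at random, but only from a constrained set of
// pitches that sound appropriate for the piece.
const NOTE_POOL = ['C4', 'E4', 'G4', 'B4', 'D5'];
const note = NOTE_POOL[Math.floor(Math.random() * NOTE_POOL.length)];
```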
I can also weight decisions using probabilities.
For example, if I had a system which generated a stream of notes, I could build the system such that 90% of the notes are played by a piano and 10% of the notes are played by a violin.
Or, I could program a system such that 25% of the time, the note being played is also played an octave higher or lower.
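A weighted random choice like these examples can be implemented by rolling a number against cumulative weights. This is a generic sketch of the technique, not code from any particular piece:

```javascript
// Pick one option from [option, weight] pairs, with probability
// proportional to each option's weight.
const weightedChoice = pairs => {
  const total = pairs.reduce((sum, [, weight]) => sum + weight, 0);
  let roll = Math.random() * total;
  for (const [option, weight] of pairs) {
    roll -= weight;
    if (roll <= 0) return option;
  }
  return pairs[pairs.length - 1][0]; // guard against floating-point drift
};

// 90% of notes go to the piano, 10% to the violin.
const instrument = weightedChoice([['piano', 0.9], ['violin', 0.1]]);

// 25% of the time, also play the note an octave higher or lower.
const doubleOctave = weightedChoice([[true, 0.25], [false, 0.75]]);
```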
Allow me to explain “Sevenths,” as it’s one of the easiest ones to understand.
You can find the source code for it here.
The core idea behind this piece is simply to play seventh chords over and over.
First, a pitch class is chosen randomly from A, A#, B, C, C#, D, D#, E, F, F#, G, G# (that’s all of the pitch classes in western music — in this case, I chose not to constrain the options).
Then, an octave is chosen randomly from 2, 3, 4, and 5, which are simply the ones I thought sounded the best.
Combining the pitch class and the octave gives the chord’s tonic, or root note.
Next, the type of seventh chord is randomly selected from major, minor, and dominant.
Finally, a chord inversion is randomly chosen.
Now a complete seventh chord has been selected.
Rather than play all the notes of a chord at the same time, I wanted the notes to happen randomly over a short period of time.
The system generates a random number between 0.25 and 5 seconds, which is the maximum window of time the chord's notes will be played within. I'll call that number X. Then, for each note in the chord being played, the system generates another random number between 0 and X seconds. This is how long from now the note will be played. So if X is 4.34 seconds, one note of the chord might be played in 0.22 seconds, another in 1.23 seconds, another in 2.24 seconds, and the last in 3.25 seconds.
Finally, the system generates a random number between 3 and 15 seconds.
This is the amount of time to wait before playing the next chord, and the whole process starts over.
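The selection process described above can be sketched as follows. The interval tables, inversion handling, and names here are my own simplification; the real implementation is in the source linked above and uses Tone.js with sampled instruments.

```javascript
// Every pitch class in Western music; the options are unconstrained.
const PITCH_CLASSES = ['A', 'A#', 'B', 'C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#'];
const OCTAVES = [2, 3, 4, 5];

// Seventh-chord interval structures in semitones above the tonic.
const SEVENTH_TYPES = {
  major: [0, 4, 7, 11],
  minor: [0, 3, 7, 10],
  dominant: [0, 4, 7, 10],
};

const pick = options => options[Math.floor(Math.random() * options.length)];
const randomBetween = (min, max) => min + Math.random() * (max - min);

const nextChord = () => {
  // Pitch class + octave gives the chord's tonic, e.g. "C4".
  const tonic = `${pick(PITCH_CLASSES)}${pick(OCTAVES)}`;
  const type = pick(Object.keys(SEVENTH_TYPES));

  // Invert the chord by moving the bottom note(s) up an octave.
  const inversion = pick([0, 1, 2, 3]);
  const intervals = SEVENTH_TYPES[type].map((semitones, i) =>
    i < inversion ? semitones + 12 : semitones
  );

  // Spread the notes randomly over a window of 0.25 to 5 seconds (X).
  const windowSeconds = randomBetween(0.25, 5);
  const notes = intervals.map(semitones => ({
    semitonesFromTonic: semitones,
    delaySeconds: randomBetween(0, windowSeconds),
  }));

  // Wait 3 to 15 seconds before the whole process starts over.
  return { tonic, type, notes, secondsUntilNext: randomBetween(3, 15) };
};
```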
Refinement

At this point, I have a system which creates generative music I can listen to.
I like to play the music while I work on something else, but I pay attention to the output so I can adjust the system.
If I hear something I like, I might make an adjustment so it happens more often.
If I hear something I dislike, I’ll try to minimize or remove it.
This process usually spans multiple days of listening for several hours a day.
I tweak things here and there and then restart the system until I get output I’m consistently satisfied with.
By the time a piece ends up on generative.fm, I'm almost sick of listening to it.
That’s really all there is to it right now.
I hope this is helpful for those who’ve been curious about my process.
I’ve found releasing new pieces on a weekly cadence is just fast enough to push me out of my comfort zone.
Eventually I’d like to spend more time on each system in order to create pieces which have significant changes and movements, but for now I’m focused on getting better at coming up with new systems.
I sincerely hope the work I’m producing right now will be proven as my worst by the work I haven’t created yet.
You can listen to my generative music systems on generative.fm. They mostly produce ambient, minimal music.
You can also follow me on Twitter where I tweet links to my Medium posts without the paywall.
Putting my articles behind the paywall makes them eligible for recommendation to new readers through Medium’s curation process, but I want them to be free for anyone who wants to read them.
I also tweet updates about generative.fm.