Consequent - Generative Audio with Gibber and Processing


Note that this is pretty taxing on the CPU. If you hear clipping, try fewer instruments.

I recently attended a hackday where creatives were challenged to create a project using data from John Brown's uncontext art project.

Many people created interesting things. Mike Creighton created Fragmented, a generative portraiture experiment that's really quite beautiful, while others made flocking simulations in Unity, fed the data into Mario Brothers, or crafted visuals with P5 or the Hype Framework. It was very inspiring to see so much diversity in both approaches and technologies.

For myself, I wanted to play with Charlie Roberts' Gibber, a library for generating audio in the browser with JavaScript. My goal was to create a listenable song that changes over time and is never the same twice. Gibber, p5, and uncontext make this possible.

To be clear, everything you hear is generated. There are no samples being triggered - this is JavaScript generating sounds and feeding them to your audio output.

Generative, but Structured

Many generative-audio projects seem to fall into one of two camps: one with wildly varying frequencies and tones, the other with a set of samples that all sound nice together and are randomly picked to play at various times. I believe both of these approaches are doable with p5 and Gibber, but I wanted to do something more structured that could perhaps be used as the background of a game.

I settled on an approach that pushes random melodies through random progressions while modulating the sounds. Because Gibber has a Theory component, I was able to program a small number of melodies that all 'play nicely together' and select one based on the uncontext data feed while simultaneously selecting a new progression. Meanwhile, the data feed modulates some of the characteristics of each audio signal - volume, pan, frequency, pulse width, and resonance. It remains interesting and surprising to me, even after listening to it as much as I have.
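The selection logic can be sketched in plain JavaScript. This is a simplified illustration, not the project's actual code: the melody and progression contents, the `pick` function, and the 0..1 scaling of the pan value are all assumptions standing in for Gibber's Theory component and the real uncontext feed.

```javascript
// A handful of hand-written melodies (as scale degrees) that share a key,
// plus a couple of chord progressions. All names here are illustrative.
const melodies = [
  [0, 2, 4, 7],   // an arpeggio-like line
  [7, 4, 2, 0],   // its mirror
  [0, 3, 5, 3],   // a gentler contour
];

const progressions = [
  ['i', 'iv', 'v', 'i'],
  ['i', 'vi', 'iv', 'v'],
];

// Each incoming data value deterministically picks a melody and a
// progression, and also modulates a continuous parameter (pan, 0..1).
function pick(dataValue) {
  const melody = melodies[Math.floor(dataValue) % melodies.length];
  const progression = progressions[Math.floor(dataValue) % progressions.length];
  const pan = (dataValue % 100) / 100; // scale the raw value into 0..1
  return { melody, progression, pan };
}

console.log(pick(7)); // 7 % 3 = 1 -> second melody; 7 % 2 = 1 -> second progression
```

Because the same data value always maps to the same choices, the song's shape is driven entirely by the feed rather than by local randomness.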

There were a couple of neat tricks I learnt. I used TweenLite to drop the frequency of an instrument over about five seconds, which results in a slow-drop sound. In fact, tweening audio parameters (even over 1/10th of a second) makes the result sound more polished and helps reduce annoying pops and clicks.
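The ramp idea can be shown without TweenLite at all. This is a minimal sketch of the interpolation it performs: instead of jumping straight to a new frequency (which produces a click), the value is eased toward the target over a window. The `makeRamp` helper and the specific frequencies are assumptions for illustration.

```javascript
// Returns a function that gives the interpolated frequency at a given
// elapsed time, linearly ramping from startFreq to endFreq over durationMs.
function makeRamp(startFreq, endFreq, durationMs) {
  return function freqAt(elapsedMs) {
    // Clamp progress to [0, 1] so the ramp holds its end value.
    const t = Math.min(Math.max(elapsedMs / durationMs, 0), 1);
    return startFreq + (endFreq - startFreq) * t;
  };
}

// Dropping from 440 Hz to 110 Hz over five seconds, as in the slow-drop effect:
const ramp = makeRamp(440, 110, 5000);
console.log(ramp(0));     // 440 at the start
console.log(ramp(2500));  // 275 halfway down
console.log(ramp(5000));  // 110 at the end
```

In the real project the tween's update callback would write this value to the instrument's frequency on every frame; the same short-ramp trick applies to volume and pan changes.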


There's also a visual component, created by drawing a series of lines based on each instrument's amplitude and frequency. It's an interesting visual because I have access to each track's frequencies - something most visualizers don't have - so I can create more granular representations of the audio.
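The per-track mapping might look something like the following. This is a hypothetical sketch, not the project's p5 drawing code: the `trackToLine` function, the 20 Hz-20 kHz log mapping, and the "length follows amplitude" choice are all assumptions about one reasonable way to do it.

```javascript
// Map one instrument's amplitude (0..1) and frequency (Hz) to a horizontal
// line: vertical position follows pitch (log scale, high notes near the top),
// line length follows loudness.
function trackToLine(amplitude, frequency, canvasWidth, canvasHeight) {
  const logMin = Math.log(20);     // assumed lower bound of the audible range
  const logMax = Math.log(20000);  // assumed upper bound
  const yNorm = (Math.log(frequency) - logMin) / (logMax - logMin);
  const y = Math.round((1 - yNorm) * canvasHeight);
  const length = Math.round(amplitude * canvasWidth);
  return { x1: 0, y1: y, x2: length, y2: y };
}

console.log(trackToLine(0.5, 440, 800, 600));
```

With per-track data like this, each instrument gets its own line rather than the single summed waveform most visualizers are limited to.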

Have you played with Gibber? I'd love to hear about it! Shoot me a line on Twitter @lucastswick. And if you haven't yet played with Gibber, what are you waiting for?

Finally, you can download the repository on GitHub.
