C10ud

Date: 2020-11-14 | projects | audio-reactive | p5js | art

Overview

C10ud is an audio visualization engine based on the Gray-Scott reaction-diffusion algorithm. It uses layers of metadata to feed and mutate the algorithm's calculations in sync with the target audio.

Context

C10ud is the latest in my series of audio-reactive projects. I wanted to explore more complex forms of visuals and took inspiration from Max Cooper's corpus of procedurally generated works.

In the past, I'd built many simple visuals where I had a lot of control, but at some point this felt too imperative. Because I was retaining direct control of my elements, things felt a bit clunky, and it was hard to scale to the larger scopes where I think some really wonderful effects emerge. The power of computation is that you can create rules and have the computer carry them out at scales a human could never dream of handling. That's something I wasn't taking advantage of before, and something I think I accomplished reasonably well in this project.

How it was built

Read: Building a Reaction-Diffusion audio visualizer with p5.js.

This visualization runs the Gray-Scott reaction-diffusion model at its core. In essence, I'm simulating the reaction and diffusion of two 'chemicals', A and B.
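
As a rough illustration (not the engine's actual code), a per-cell Gray-Scott update looks something like this, using a weighted 3x3 Laplacian and illustrative diffusion, feed, and kill rates:

```javascript
// Illustrative Gray-Scott parameters; the real engine varies these.
const dA = 1.0;     // diffusion rate of chemical A
const dB = 0.5;     // diffusion rate of chemical B
const feed = 0.055; // rate at which A is fed in
const kill = 0.062; // rate at which B is removed

// Weighted 3x3 Laplacian over one chemical ('a' or 'b').
function laplacian(grid, x, y, key) {
  return (
    grid[x][y][key] * -1 +
    (grid[x - 1][y][key] + grid[x + 1][y][key] +
     grid[x][y - 1][key] + grid[x][y + 1][key]) * 0.2 +
    (grid[x - 1][y - 1][key] + grid[x + 1][y - 1][key] +
     grid[x - 1][y + 1][key] + grid[x + 1][y + 1][key]) * 0.05
  );
}

// One simulation step. `grid` and `next` are pre-allocated 2D arrays
// of { a, b } cells; edges are skipped for simplicity.
function step(grid, next, w, h) {
  for (let x = 1; x < w - 1; x++) {
    for (let y = 1; y < h - 1; y++) {
      const { a, b } = grid[x][y];
      const reaction = a * b * b; // A is consumed where it meets B
      next[x][y].a = a + dA * laplacian(grid, x, y, 'a') - reaction + feed * (1 - a);
      next[x][y].b = b + dB * laplacian(grid, x, y, 'b') + reaction - (kill + feed) * b;
    }
  }
}
```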

To make this visually interesting, I've built an engine around the algorithm that allows me to assign a different color to each chemical. In the simple case where chemical A is white and chemical B is black, a given pixel's color is determined by the value of A - B at that pixel.
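
In code, that mapping can be as simple as the following hypothetical sketch, which uses p5's constrain() and lerpColor() to generalize the white/black case to arbitrary per-chemical colors:

```javascript
// Map a cell's chemical values to a color. colorA and colorB are the
// colors assigned to each chemical; in the white/black case this
// reduces to a plain grayscale value.
function cellColor(cell, colorA, colorB) {
  const t = constrain(cell.a - cell.b, 0, 1); // A - B, clamped to [0, 1]
  return lerpColor(colorB, colorA, t);        // t = 1 -> pure A, t = 0 -> pure B
}
```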

I didn't think that one-off colors for A and B would be interesting enough for a visualization, so I went a bit further and created a system of diffusing layers of metadata. These layers work independently of the core reaction-diffusion algorithm (though they often base decisions on the current state of the visualization) and feed into the core calculations, allowing the dynamic insertion of elements like colors, diffusion types, and chemicals.
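
The names below are hypothetical rather than the engine's real API, but the shape of a layer might look something like this: per-cell payload, independent evolution, and a hook to mutate the core parameters at each cell:

```javascript
// Hypothetical sketch of a metadata layer. Each layer carries a per-cell
// payload (a color, a diffusion rate, a seed instruction, etc.), evolves
// independently, and can modulate the core calculation's parameters.
class MetadataLayer {
  constructor(applyFn) {
    this.cells = new Map(); // "x,y" -> payload for that cell
    this.applyFn = applyFn; // how the payload mutates a cell's parameters
  }

  // Layer-specific evolution: diffuse, decay, or react to the current
  // state of the simulation (e.g. spread color where chemical B is dense).
  update(grid) {}

  // Called from the core loop: fold this layer's payload into the
  // parameters used for one cell's update.
  modulate(x, y, params) {
    const payload = this.cells.get(`${x},${y}`);
    return payload ? this.applyFn(params, payload) : params;
  }
}
```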

Together, this gives me a solid visualization base and a set of methods for mutating it. I then hooked in data from the audio to trigger these mutations, creating a complete audio-reactive engine.
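
As an example of what that hookup could look like with p5.sound's amplitude and FFT analysis (the thresholds and triggerMutation() below are made up for illustration):

```javascript
let amp, fft, sound;

function preload() {
  sound = loadSound('song.mp3'); // requires the p5.sound library
}

function setup() {
  createCanvas(400, 400);
  amp = new p5.Amplitude();
  fft = new p5.FFT();
  sound.play();
}

function draw() {
  const level = amp.getLevel();       // overall loudness, 0..1
  const bass = fft.getEnergy('bass'); // energy in the bass band, 0..255
  if (bass > 200) triggerMutation('insertChemicalB'); // hypothetical hook
  if (level > 0.6) triggerMutation('swapColors');     // hypothetical hook
}
```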

Technologies

The whole program is built within a p5js sketch and incorporates some utility libraries I've built for myself over the years. I'm using Parcel to package and serve all my resources, but that's inconsequential, as it could easily be swapped out for something else.

You may have noticed that there are a lot of pixel operations being done, which means this thing runs slowly. Anecdotally, visualizing a 355-second song took upwards of 500 hours to process - and that doesn't include the time spent coding.

Because of this vastly-slower-than-real-time processing speed, I had to create an adaptation layer to map frames to process-scoped seconds, ensuring the visualization lined up with the actual audio. This was a useful exercise in async processing, but I think better technology choices (i.e. not running in the browser) and coding techniques (i.e. not resorting to hundreds of pixel operations every second) would have made this project far more performant and sustainable.
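
The idea, in a hypothetical sketch: derive the song position from the frame count and a fixed output frame rate instead of the wall clock, then look up precomputed audio analysis at that position:

```javascript
const OUTPUT_FPS = 30; // frame rate of the final rendered video
let frame = 0;

// `analysis` is assumed to be precomputed audio data indexed by song second.
function renderFrame(analysis) {
  const songSeconds = frame / OUTPUT_FPS;          // process-scoped time
  const slice = analysis[Math.floor(songSeconds)]; // audio data for this moment
  applyAudioMutations(slice); // hypothetical: push mutations into the engine
  stepAndDrawSimulation();    // hypothetical: advance and render one frame
  saveCanvas(`frame-${String(frame).padStart(6, '0')}`, 'png');
  frame++;
}
```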

  • p5js
  • parcel

Where to find it

Moving Through Time - Griffin Hanekamp and Steve Is Space

Want more like this?

The best and easiest way to support my work is by subscribing for future updates and sharing with your network.