Updating my vanilla JS audio visualizer (moon-eye) to work more consistently across processor speeds, animate more smoothly in the DOM, and adapt to changing song / audio landscapes in real time

Date: 2019-11-16 | moon-eye | javascript | audio-visualizer |

This month I'm focusing on touching up / improving my current projects. As part of that effort, I've gone back and given moon-eye a facelift.

algorithm change

The major change in this release is a modification to how the audio processing engine works. Previously, I took a standard average of a Fourier transform of the input audio over the last n cycles to create a baseline of recent audio, checked whether the newest input was higher or lower than that baseline, and pumped the result into the visualizer to increase or decrease the pupil size.
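
Here's a minimal sketch of that flow, assuming a Web Audio AnalyserNode (all names and constants here are illustrative, not the actual moon-eye source):

```javascript
let pupilSize = 1;             // scale factor applied to the pupil element
const GROW_MULTIPLIER = 1.1;   // illustrative step sizes
const SHRINK_MULTIPLIER = 0.9;

const history = [];            // one energy reading per processing cycle

function processFrame(analyser) {
  // Fourier transform of the current input, courtesy of the Web Audio API.
  const bins = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteFrequencyData(bins);

  // Collapse the spectrum into a single energy value for this cycle.
  const energy = bins.reduce((sum, v) => sum + v, 0) / bins.length;

  // Baseline = average energy over the recent history.
  const baseline = history.length
    ? history.reduce((sum, v) => sum + v, 0) / history.length
    : energy;

  // Pump the comparison into the visualizer: grow on a hit, shrink on a miss.
  pupilSize *= energy > baseline ? GROW_MULTIPLIER : SHRINK_MULTIPLIER;

  history.push(energy);
}
```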

old alg

I was using a set count of averages (one average per cycle), which meant that different processor speeds would have different history lengths and thus vastly different experiences.
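
Continuing the sketch above, the history was capped by a fixed element count, something like:

```javascript
const MAX_HISTORY = 100; // fixed element cap (illustrative)

history.push(energy);
if (history.length > MAX_HISTORY) {
  // A fast processor burns through this cap in a fraction of the time a
  // slow one does, so "recent history" means different things per machine.
  history.shift();
}
```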

To offset the variability of history lengths, I bumped the pupil change multiplier so that when it hit it really hit and when it missed it really missed, keeping the visualizer energetic at all times. This was okay, but because of the large size jumps being pumped into the DOM, the pupil animation would constantly tear (the "teleporting" effect), which I don't think looked very good.

The alg's performance was also very dependent on the kind of song. Because we're working on averages, a very consistent song risks constantly flipping between hitting and missing the increase condition, producing an energetic visualizer even though, to human perception, nothing's really happening. Moreover, if we pick an arbitrary threshold to suppress this visual "noise" (which I did), some songs benefit from it while others get "swallowed" by it, as they're just naturally less variable in their audio components, leading to a dead visualizer.

Here's an example of the old visualizer:

new alg

To fix this, I first changed the history check from being based on element count to being based on time. This means faster processors will have a more granular history (as they'll usually go through more cycles in the same span of time), but slower processors will still be able to utilize moon-eye as it was intended. Of course, the history data structure is no longer bounded by a fixed element count, which introduces some risk of unbounded memory growth, but these values are so small that it usually won't matter.
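
As a sketch of the change (again illustrative, continuing the example above), each sample gets timestamped and pruned by age instead of by count:

```javascript
const HISTORY_WINDOW_MS = 2000; // illustrative window length

function pushSample(energy) {
  const now = performance.now();
  history.push({ time: now, energy });

  // Prune by age, not count: the baseline now covers the same wall-clock
  // window no matter how many cycles the processor squeezed into it.
  while (history.length && now - history[0].time > HISTORY_WINDOW_MS) {
    history.shift();
  }
}
```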

The second thing I did was modify the multipliers that control how much the pupil increases / decreases at any one time. This results in a smoother experience overall, though it has the side effect of a less energetic visualizer. I like the smoothness, so I think it's a fair tradeoff.
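
In the sketch's terms, this is just pulling the multipliers closer to 1 (values illustrative):

```javascript
const GROW_MULTIPLIER = 1.03;   // smaller steps per cycle: less tearing,
const SHRINK_MULTIPLIER = 0.97; // at the cost of some raw energy
```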

The last thing I did was add a mechanism for adaptive thresholding. Basically, I wanted to defend against the visualizer performing poorly on both high- and low-variability songs. To do this, I implemented a threshold ladder that the visualizer can move along every x seconds based on the hit rate over the current history window. This means it can adapt to both high- and low-variability conditions over time, so even for a mix with wide diversity in its songs / sounds, the visualizer can still adapt to perform reasonably well.
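
Here's a sketch of how such a ladder might work (names, values, and the interval are all illustrative, not moon-eye's actual parameters):

```javascript
const THRESHOLD_LADDER = [1, 2, 4, 8, 16]; // candidate thresholds, low to high
let rung = 2;                              // start mid-ladder

let hits = 0;
let frames = 0;

// Called once per processing cycle with the energy's distance above baseline.
function registerFrame(delta) {
  frames += 1;
  if (delta > THRESHOLD_LADDER[rung]) hits += 1;
}

// Every x seconds, climb or descend a rung based on the recent hit rate.
const ADAPT_INTERVAL_MS = 5000;
setInterval(() => {
  const hitRate = frames ? hits / frames : 0;

  if (hitRate > 0.6 && rung < THRESHOLD_LADDER.length - 1) {
    rung += 1; // hitting constantly: the song is uniformly loud, raise the bar
  } else if (hitRate < 0.2 && rung > 0) {
    rung -= 1; // going dead: the song is flat or quiet, lower the bar
  }

  hits = 0;
  frames = 0;
}, ADAPT_INTERVAL_MS);
```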

Here's the new visualizer at play:

give it a spin

moon-eye is live on my site (with a new landing page, too!), so head over and give it a spin.

Lmk if you have feedback / suggestions via my contact page.

Always building,

-HAMY.OUT

Want more like this?

The best / easiest way to support my work is by subscribing for future updates and sharing with your network.