
PLANETARY VISUALIZER

ROLE

Solo Indie Developer

DESCRIPTION

This is a planetary visualizer in which everything but the skybox is procedurally generated. It listens to your system audio, whether that is a Zoom meeting, a new banger on Spotify or your midnight YouTube mukbangs. It reacts to it all!

Different music styles cause different patterns and reactions. The entire audio spectrum is used to tweak various elements on the planet. 

YEAR

2022

GENRE

Visualizer

PLATFORMS

PC (Windows)

CONCEPT

I have done a lot with audio-reactive tools in the past, whether with an Arduino or in a game. At my university we were given the task of creating something procedural with the universe as the theme, and it also had to be interactive.

Since I had never done anything with procedural generation before, I really wanted to create something cool that reacts to music. I started out with Sebastian Lague's tutorial series on procedurally generating a planet and made it my own by expanding upon it: modifying the shader, the base logic and more. I also made use of a WASAPI plugin created by Bas van Seeters.

I have done a lot with audio in the past, but I had never really delved into what an audio spectrum actually is. So for this project I decided to learn more about it, which in a nutshell brings me to this page that explains it really well.

In order to create an accurate visualizer I have to map the audio spectrum into frequency bands, where each band is responsible for a specific part of the visualizer. In theory this should make the visualizer compatible with any kind of sound.

Lastly, I want the visualizer to be "smart", meaning it can adjust itself based on the audio that is currently playing. If this tool is used for shows or something similar, I don't want the DJ to keep adjusting values during the performance. The visualizer should adjust itself based on specific rule sets.

All this together became my main concept for this visualizer.

WORKFLOW

After planning what I wanted the visualizer to be, I started by following the tutorial. Halfway through I could already implement some parts that I knew I would need eventually, such as emission in the shaders.

The emission was something I struggled with for a really long time, because I wanted the islands to emit light while the oceans stay dark. This roughly translated to filtering away the blue colors. I also wanted control over the emission color and intensity, so I had to add a setting for that as well. I tried multiple methods to achieve the results I envisioned, for example generating a 2D texture from a color array using black-and-white contrast.

This, however, didn't have the effect I wanted. The Shader Graph screenshot shows the final version, in which I eventually got the proper results by filtering the blue contrast away. After filtering it out I added the colors, which I multiplied by the emission strength. I also noticed that I could flip all colors to negative by making the strength negative, a nice discovery that I made use of later on.
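The actual filtering lives in the Shader Graph (see the screenshot further down), but the idea roughly translates to the C# sketch below; the blue threshold and the helper name are made up for illustration, they are not taken from the project.

```csharp
using UnityEngine;

// Rough C# equivalent of the Shader Graph logic: suppress emission where the
// surface color is dominated by blue (the oceans), then tint and scale what is left.
// The threshold and the class/method names are illustrative only.
public static class OceanEmissionMask
{
    public static Color Evaluate(Color surface, Color emissionColor, float emissionStrength,
                                 float blueThreshold = 0.5f)
    {
        // Oceans are mostly blue, so treat "blue dominates red and green" as water.
        bool isWater = surface.b > blueThreshold && surface.b > surface.r && surface.b > surface.g;
        float mask = isWater ? 0f : 1f;

        // A negative strength flips the colors, the same trick mentioned above.
        return surface * emissionColor * emissionStrength * mask;
    }
}
```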

At this point I implemented the WASAPI driver plugin that I received from Bas. I decided to make a designer-friendly interface in the inspector for those who want to modify settings, so it can serve as a tool for tweaking the planet parameters and visual looks.
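As a rough idea of what such an inspector interface looks like, here is a minimal sketch; the field names and ranges are placeholders for illustration and not the project's actual settings.

```csharp
using UnityEngine;

// Illustrative example of a designer-friendly settings component.
// Field names and ranges are hypothetical, not the project's real parameters.
public class PlanetVisualizerSettings : MonoBehaviour
{
    [Header("Emission")]
    public Color emissionColor = Color.cyan;
    [Range(-5f, 5f)] public float emissionStrength = 1f;   // negative values invert the colors

    [Header("Audio reactivity")]
    [Range(0f, 2f)] public float bassSensitivity = 1f;
    [Range(0f, 2f)] public float midSensitivity = 1f;
    [Range(0f, 2f)] public float highSensitivity = 1f;

    [Header("Terrain")]
    [Range(0f, 1f)] public float oceanLevel = 0.5f;
}
```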

I have used the data from the WASAPI driver in the past, but back then I just mapped it to a single frequency band and used it to calculate the RMS (root mean square). This was a very simple formula. The downside was that it wasn't accurate, or rather, not accurate per type of audio: the bass drops, the highs and so on had no effect on the visualizer, only the overall rhythm did.
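For reference, the RMS of a block of samples is just the square root of the mean of the squared values; a minimal sketch (the class and method names are mine):

```csharp
using System;

public static class AudioMath
{
    // Root mean square of a block of audio samples: sqrt(mean(x^2)).
    // This produces a single loudness number, which is why it only tracks
    // the overall rhythm and not individual frequency ranges.
    public static float Rms(float[] samples)
    {
        if (samples == null || samples.Length == 0) return 0f;

        double sumOfSquares = 0.0;
        for (int i = 0; i < samples.Length; i++)
            sumOfSquares += samples[i] * samples[i];

        return (float)Math.Sqrt(sumOfSquares / samples.Length);
    }
}
```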

So for this project I wanted to properly map the audio spectrum to frequency bands. I split the spectrum into 7 values, each corresponding to its own frequency band (sub-bass, bass, midrange, etc.). These values are then used to drive various settings on the planet, which results in a dynamic, automated planet driven by the system audio.
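To make that concrete, here is a hedged sketch of how such band values could drive planet settings each frame; the band indices, shader property names and multipliers are placeholders, not the project's actual wiring.

```csharp
using UnityEngine;

// Hypothetical wiring of frequency-band values to planet parameters.
// "bands" holds the 7 band values (sub-bass .. brilliance); all property
// names and multipliers here are placeholders for illustration.
public class PlanetAudioDriver : MonoBehaviour
{
    public Material planetMaterial;
    [Range(0f, 10f)] public float emissionScale = 3f;

    public void ApplyBands(float[] bands)
    {
        if (bands == null || bands.Length < 7 || planetMaterial == null) return;

        float bass = bands[1];       // e.g. drives the emission strength
        float midrange = bands[3];   // e.g. drives noise strength or wave height

        planetMaterial.SetFloat("_EmissionStrength", bass * emissionScale);
        planetMaterial.SetFloat("_NoiseStrength", Mathf.Lerp(0.5f, 1.5f, midrange));
    }
}
```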

I wanted to automate the settings so that the visualizer uses the RMS to decide whether it has to raise or lower the sensitivity of certain settings in order to produce proper effects. This gets tested and fine-tuned with each update, as the last video shows.
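One way to picture that self-adjustment, assuming a simple rule set invented for this sketch (the target level, step size and clamp range are example values, not the project's):

```csharp
using UnityEngine;

// Illustrative auto-gain: nudge a sensitivity multiplier up when the overall
// RMS is quiet and down when it is loud, so the band values stay in a usable range.
public class AutoSensitivity
{
    public float Sensitivity { get; private set; } = 1f;

    const float TargetRms = 0.1f;   // example target loudness
    const float AdjustSpeed = 0.5f; // example adjustment rate per second

    public void Update(float currentRms, float deltaTime)
    {
        if (currentRms <= 0f) return;

        // Move the sensitivity toward whatever gain would hit the target level.
        float desired = TargetRms / currentRms;
        Sensitivity = Mathf.MoveTowards(Sensitivity, desired, AdjustSpeed * deltaTime);
        Sensitivity = Mathf.Clamp(Sensitivity, 0.1f, 10f);
    }
}
```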

INFO

shader.png: the Shader Graph
class diagram.png: UML class diagram
Video: putting the visualizer through a test run

AUDIO SPECTRUM

code.png

The audio spectrum gets mapped to a simple float array that acts as a 128-value frequency band.

This band is then split up into chunks of 8. Each chunk is used to calculate the value of the frequency band it belongs to, based on its frequency range (explained here). The final chunk is also used for the presence, so the brilliance consists of 2 chunks instead of 1 due to its bigger range.
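A minimal sketch of that chunking, using Unity's built-in AudioListener.GetSpectrumData as a stand-in for the WASAPI plugin; the chunk-to-band table is an example assignment, not the project's exact mapping.

```csharp
using UnityEngine;

// Illustrative version of the chunking described above: a 128-value spectrum
// is split into 16 chunks of 8, and each chunk is assigned to one of the 7
// bands (sub-bass, bass, low mids, mids, upper mids, presence, brilliance).
public class SpectrumBands : MonoBehaviour
{
    const int SpectrumSize = 128;
    const int ChunkSize = 8;

    // Which band each chunk of 8 belongs to (example assignment only).
    static readonly int[] chunkToBand =
    {
        0, 1, 1, 2, 2, 3, 3, 4,
        4, 5, 5, 6, 6, 6, 6, 6
    };

    readonly float[] spectrum = new float[SpectrumSize];
    public readonly float[] bands = new float[7];

    void Update()
    {
        AudioListener.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);

        var counts = new int[bands.Length];
        for (int b = 0; b < bands.Length; b++) bands[b] = 0f;

        // Sum each chunk into the band it belongs to.
        for (int chunk = 0; chunk < chunkToBand.Length; chunk++)
        {
            int band = chunkToBand[chunk];
            for (int i = 0; i < ChunkSize; i++)
                bands[band] += spectrum[chunk * ChunkSize + i];
            counts[band]++;
        }

        // Average each band by the number of samples it received.
        for (int b = 0; b < bands.Length; b++)
            if (counts[b] > 0) bands[b] /= counts[b] * ChunkSize;
    }
}
```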

Eventually I made it extremely accurate by only reacting to specific frequencies. This, however, made me realize that music itself is not that precise: each frequency has a range in which it is strongest around its center values. In the testing video above you can see the values in the array on the right react to the sound tests, with the center values peaking highest and the response scaling down toward zero on either side.


I wasn't satisfied with the initial finished product. I wanted the skybox settings in the post-processing to react to the music as well, so I made sure to apply the emission to the bloom settings too. I also applied some chromatic aberration that mainly reacts to the kicks, so that you can "see" the beats.
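Roughly, that post-processing hookup looks like the sketch below, assuming URP volume overrides for Bloom and Chromatic Aberration; the mapping from band values to intensities is illustrative, not the project's exact tuning.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Illustrative post-processing hookup: bloom follows the emission strength
// and chromatic aberration follows the kick/bass band. The scaling values
// are examples only.
public class PostFxAudioDriver : MonoBehaviour
{
    public Volume volume;

    Bloom bloom;
    ChromaticAberration chromatic;

    void Start()
    {
        volume.profile.TryGet(out bloom);
        volume.profile.TryGet(out chromatic);
    }

    public void Apply(float emissionStrength, float kickLevel)
    {
        if (bloom != null)
            bloom.intensity.value = Mathf.Lerp(1f, 10f, Mathf.Clamp01(emissionStrength));

        if (chromatic != null)
            chromatic.intensity.value = Mathf.Clamp01(kickLevel * 2f);  // "see" the beats
    }
}
```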

After the final tests I could see how the vocals, bass, kicks, etc. affected the planet and its surroundings. I tested this with as many and as diverse music genres as I could find, and it gave me the results I expected and was hoping for.

Down the road I only need to build a proper evaluator for the color. Right now it is basically the RMS of the bass, but I want it to be more aggressive and more dynamic, so maybe I will tie it to the vocals or something else. It all depends on the tests.

The whole project is open source and can be found on my GitHub, specifically here.
