Converting audio into procedurally generated visuals
Real-Time Audio-Visualizer
Born from a late‑night hobby project, this Unity‑powered audio visualizer has grown into a versatile live‑show tool, turning music into immersive, real‑time visuals. Used at multiple venues and a multi‑day festival, it adapts from small club rigs to large LED walls. Its modular tools transform 3D models and other assets into synchronized performances. I lead development and coordination, guiding the project as it evolves toward a stable, reliable release ready for wider use.

Procedural Generation
A collection of custom‑built procedural generation tools in Unity, from endless terrain with layered noise and dynamic cave systems to a fully adjustable city generator. Designed and coded from the ground up for performance, flexibility, and artist‑friendly control.

Meow Mayhem
A chaotic VR game in Unreal Engine 4 featuring unique movement mechanics paired with ragdoll physics for maximum fun. Led team management, created the enemy AI for all mice, and designed most of the VFX using Niagara.

The Dam
A co‑op puzzle game built in Unity for the Oyfo Museum, featuring unique character abilities and a custom dialogue system to teach players about ocean pollution. Developed a server‑authoritative netcode solution and managed seamless local multiplayer gameplay.

Technical Artist - Tools Engineer
Ezra Dusselaar
Engineer in the brain, artist in the heart — I love blending creativity and technical skill to build immersive experiences. My journey began with a passion for art, evolved through exploring fantasy game worlds, and deepened when I discovered my love for technical problem-solving.

Today, my focus is Unity development, C#, Shader Graph, tool programming, and procedural generation, with a strong foundation in Unreal Engine, HLSL, compute shaders, and C++. I’m especially passionate about pushing further into advanced VFX, shader programming (HLSL/OpenGL), procedural generation techniques, and deepening my C++ expertise.

I run Cuttlefish Studios, creating low-latency, real-time visuals for VJs and venues, and co-founded Noiseportal, hosting heavy-bass events in Enschede where my visuals come to life. Outside of work, I explore nature, befriend animals, create art, and enjoy joyrides on my motorcycle.

Features
The real-time audio visualizer project provides a suite of tools designed to help users create rich, audio-responsive visuals with ease and flexibility. At its core, the system captures audio with minimal latency and processes it into usable data, including kick detection, kick intensity, BPM, FFT frequency bands, and overall intensity. This data drives the visual logic and enables tightly synced feedback, making the tool suitable for both live performances and generative installations.

One of the most powerful features is the sequencer — a flexible system that lets users define a tree of visuals triggered by audio events (typically kicks). It supports a range of behaviors, from ordered or randomized playback to delays, full animation chains, and precise effect control. This empowers users to craft complex visual narratives while keeping real-time responsiveness intact.

The tool is not designed to replace VJs, but to augment their workflow and support smaller venues with limited resources. To that end, the system supports MIDI controller integration and outputs directly to Resolume, a popular VJ software, ensuring compatibility with existing setups. The application is designed to be plug-and-play: connect an audio input — USB, jack, or line-in — and it works immediately, making it easy to adopt without deep technical setup.

The project continues to evolve, with ongoing improvements in both the visual toolkit and the audio analysis pipeline, alongside plans to expand customization options and streamline live show integration even further.
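To make the analysis pipeline above more concrete, here is a minimal Python sketch of the two core ideas: splitting a short audio window into frequency-band energies, and flagging a kick when the low band spikes above its recent average. This is purely illustrative — the actual system runs in Unity/C# with a proper low-latency capture path and an FFT rather than this naive direct transform, and every name and threshold below is a hypothetical stand-in.

```python
import math

def band_energies(samples, sample_rate, bands):
    """Sum spectral magnitude per (low_hz, high_hz) band via a naive DFT.

    Illustrative only: a real-time system would use an FFT
    (O(n log n)) instead of this O(n^2) direct transform.
    """
    n = len(samples)
    energies = []
    for low, high in bands:
        # Map band edges in Hz to DFT bin indices.
        k_lo = max(1, int(low * n / sample_rate))
        k_hi = min(n // 2, int(high * n / sample_rate))
        total = 0.0
        for k in range(k_lo, k_hi + 1):
            re = sum(s * math.cos(-2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            im = sum(s * math.sin(-2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            total += math.hypot(re, im)
        energies.append(total / n)
    return energies

def detect_kick(low_band_energy, history, threshold=1.5):
    """Flag a kick when low-band energy exceeds `threshold` times its
    recent running average. `history` is a mutable list of past energies."""
    avg = sum(history) / len(history) if history else 0.0
    history.append(low_band_energy)
    if len(history) > 43:  # ~1 s of history at ~43 analysis frames/s
        history.pop(0)
    return avg > 0 and low_band_energy > threshold * avg
```

Feeding a pure 60 Hz tone through `band_energies` concentrates almost all energy in the sub-bass band, which is exactly the signal the kick detector thresholds against.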
Procedural | Audio | Management | Performance
Real-Time Audio-Visualizer
What began as a small hobby project quickly evolved into a multi-person initiative united by a shared vision. Initially, I started experimenting with audio processing in my free time — an area I hadn’t explored before. That curiosity led to the creation of a basic audio visualizer, which instantly hooked me. It combined many of my favorite things: complex algorithms, procedural systems, custom tool development, shaders, and real-time VFX.

The project soon became the core of my graduation work, during which I had the opportunity to perform at multiple venues and even a festival, using my system live. After graduating, I kept building on it, and others joined in to help shape the vision further.

Today, the project continues to grow. We're focusing on creating powerful, flexible tools that allow users to build and control real-time visuals from audio — pushing both technical boundaries and creative expression.

Procedural | Node System | Tooling | Scriptable Objects | Unity
Procedurally Generated City
Within Unity I developed a procedurally generated city system, where streets are generated first and buildings are then placed along them. The custom street generator ensures that roads snap together cleanly and never intersect incorrectly, and it can be easily customized by designers through adjustable parameters such as street length, width, and angles.

Buildings are randomly selected from a modular library of house parts, allowing designers and artists to easily add new building types. The system accounts for building sizes, street widths, and placement offsets, ensuring that houses never intersect with streets or each other.

To create more natural layouts, I added an optional algorithm that generates a peak in building height near the city center, mimicking real urban density. Both the height gradient and its spread can be adjusted, giving designers full creative control over the city’s look.
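The center-peaked height gradient can be sketched as a simple distance falloff. Here is a minimal Python illustration assuming a Gaussian falloff — the actual Unity tool exposes its own gradient and spread controls in the inspector, and the function and parameter names below are hypothetical.

```python
import math

def building_height(pos, center, base_height, peak_height, spread):
    """Blend a building's height toward `peak_height` near the city
    center, falling off with a Gaussian curve controlled by `spread`.

    pos, center: (x, z) coordinates on the city plane.
    """
    dist = math.hypot(pos[0] - center[0], pos[1] - center[1])
    # Gaussian falloff: 1.0 at the center, near zero beyond a few spreads.
    falloff = math.exp(-(dist * dist) / (2 * spread * spread))
    return base_height + (peak_height - base_height) * falloff
```

Because the falloff is smooth and monotonic in distance, buildings naturally get shorter the farther they sit from the center, and tuning `spread` alone widens or tightens the downtown "peak" without touching individual buildings.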
Procedural | Compute Shader | Noise | Marching Cubes | Unity
Endless Procedurally Generated Terrain
For this project, I developed an endless, procedurally generated terrain system in Unity using C#. My goal was to create a seamless, explorable world where new terrain is generated dynamically as the player approaches unexplored areas. Almost every component of the system was coded from the ground up, giving me full control over performance and flexibility.

The terrain is shaped using custom noise functions I implemented myself, including fractal noise and 3D Perlin noise. By layering these, I generated natural landscapes and underground cave systems — rarely opening to the surface but expanding into larger, more interconnected spaces at greater depths.

To overcome performance challenges, I integrated compute shaders for the heavy calculations. Offloading generation to the GPU drastically improved speed and allowed for smooth, stutter-free loading of new chunks as players explored.
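The octave-layering idea behind the terrain shaping can be sketched briefly. Below is a minimal Python illustration of fractal noise built from summed octaves of value noise — a stand-in for the custom Perlin/fractal noise in the real C# and compute-shader implementation; all function names and hash constants are hypothetical.

```python
import math

def value_noise(x, y, seed=0):
    """Deterministic pseudo-random value in [0, 1) per lattice point,
    bilinearly interpolated with smoothstep weights."""
    def lattice(ix, iy):
        # Simple integer hash; any well-mixing hash works here.
        h = (ix * 374761393 + iy * 668265263 + seed * 144664) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return (h ^ (h >> 16)) / 4294967296.0

    x0, y0 = math.floor(x), math.floor(y)
    tx, ty = x - x0, y - y0
    # Smoothstep weights avoid visible grid artifacts.
    sx, sy = tx * tx * (3 - 2 * tx), ty * ty * (3 - 2 * ty)
    top = lattice(x0, y0) * (1 - sx) + lattice(x0 + 1, y0) * sx
    bot = lattice(x0, y0 + 1) * (1 - sx) + lattice(x0 + 1, y0 + 1) * sx
    return top * (1 - sy) + bot * sy

def fractal_noise(x, y, octaves=4, lacunarity=2.0, gain=0.5):
    """Layer octaves: each adds finer detail at lower amplitude."""
    total, amplitude, frequency, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amplitude * value_noise(x * frequency, y * frequency)
        norm += amplitude
        amplitude *= gain
        frequency *= lacunarity
    return total / norm  # normalized back to [0, 1)
```

The same pattern extends to 3D for cave carving (sample a density field and keep voxels above a threshold), which is where a marching-cubes pass and GPU compute shaders take over in the real system.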


VR | VFX | Team Management | UE4
Meow Mayhem
Meow Mayhem is a VR game developed in Unreal Engine 4, created to be fun and accessible for all ages — the only requirements set by the client. The game features unique movement mechanics combined with ragdoll physics, resulting in a chaotic and entertaining experience for both players and spectators.

I took on the role of project lead, managing the team and ensuring development stayed on track, with the game completed within a five-month timeline. In addition to management, I contributed directly to several core systems: I developed the enemy AI, handling the movement and behavior of all the mice, and I created the majority of the visual effects, using the opportunity to dive into Unreal’s Niagara VFX system and expand my skills in real-time particle design.
Local Server | Puzzle | Dialogue | Unity
The Dam
The Dam is a local co-op game developed in just eight weeks for the Oyfo Museum. Although it was originally intended as an on-site experience, the project was completed during the COVID-19 pandemic and ultimately wasn’t installed physically.

The game was designed to encourage collaborative play, with each character having unique abilities that players must combine to solve environmental puzzles and repair a damaged dam. To communicate the theme of ocean pollution and its effects, I developed an extensive dialogue system featuring emotes and voice lines, allowing the narrative to be both educational and engaging.

Another engineer and I also built the server architecture from the ground up, using netcode to create a server-authoritative model, ensuring consistent and synchronized gameplay between players.


Unity | Shader Graph | HLSL | VFX Graph | UE4 | Niagara
Shaders / VFX
Over the years, I’ve created a wide range of shaders and visual effects, both as essential parts of game projects and simply out of curiosity and creative enjoyment. These effects were built using Unity’s VFX Graph, Shader Graph, and custom HLSL, as well as Niagara in Unreal Engine. Whether it was a stylized fire for a prototype, reactive surface shaders, or immersive environment effects, each piece was an opportunity to explore new techniques and push visual quality. These experiments not only enhanced the projects they were part of but also helped me deepen my understanding of real-time rendering, GPU workflows, and visual design across both Unity and Unreal.