Problem
AI video generation is a black box. Creators describe intent in words, receive unpredictable output, and judge results with their eyes alone — no physical intuition, no spatial feedback, no measurable control.
Solution
A physics engine that reads AI video as material data. Brightness sculpts 3D geometry in real time. Motion energy maps to surface frequency. Claude analyzes the form and suggests the next prompt. Fal.ai regenerates without leaving the interface. The loop closes between human intent, physical simulation, and generative output.
Role: Sole Designer & Engineer
Type: Solo project
Stack: Three.js · GLSL · Claude API · Fal.ai · React · Vercel
Skills: Real-time GPU pipeline · Shader authoring · AI system integration · Creative tooling design · UI/UX
Concept
AI video generation operates entirely in language
— you describe, receive, judge, repeat. The feedback loop has no physical layer.
Resonance changes that.
Every frame of an AI-generated video becomes sculptable material in real time. Brightness sculpts geometry. Motion energy drives surface frequency. When something is wrong, you feel it in the form before you can say it in words.
Demo
The Five Parameters
INFLUENCE
— The ratio of AI-driven to procedural physics. At 0, pure GLSL simulation. At 1, the video completely sculpts the mesh. The space between is where the most interesting forms emerge.
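The blend can be sketched in plain JavaScript as a linear interpolation, the equivalent of GLSL `mix()`. Function and argument names here are hypothetical; the production version runs in the vertex stage.

```javascript
// proceduralH: displacement from the GLSL noise simulation
// videoH: displacement derived from the video frame's brightness
// influence: 0 = pure simulation, 1 = the video fully sculpts the mesh
function blendDisplacement(proceduralH, videoH, influence) {
  // Equivalent of GLSL mix(proceduralH, videoH, influence)
  return proceduralH * (1 - influence) + videoH * influence;
}

blendDisplacement(0.2, 0.8, 0.0); // → 0.2 (pure simulation)
blendDisplacement(0.2, 0.8, 1.0); // → 0.8 (pure video)
blendDisplacement(0.2, 0.8, 0.5); // → 0.5 (the in-between space)
```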
AMPLITUDE
— The magnitude of displacement. Maps directly to Fal.ai's guidance scale, creating a bidirectional link between physical intensity and generation parameters.
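A minimal sketch of the bidirectional link, assuming guidance scale lives in roughly [1, 20]; the range and function names are assumptions, not the project's documented values.

```javascript
// Assumed guidance-scale range for the Fal.ai model
const GUIDANCE_MIN = 1;
const GUIDANCE_MAX = 20;

// amplitude is the UI slider value in [0, 1]
function amplitudeToGuidance(amplitude) {
  return GUIDANCE_MIN + amplitude * (GUIDANCE_MAX - GUIDANCE_MIN);
}

// ...and back, so a change on the generation side moves the slider too
function guidanceToAmplitude(guidance) {
  return (guidance - GUIDANCE_MIN) / (GUIDANCE_MAX - GUIDANCE_MIN);
}
```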
THRESHOLD
— Minimum luminance required to trigger displacement. The photographer's luminosity mask, translated into physics. Only the brightest regions of the AI output form structure.
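The mask reduces to a luminance computation and a cutoff. A sketch using Rec. 709 luma weights (a reasonable assumption for video; the actual GLSL weights may differ):

```javascript
// Rec. 709 luma for an [r, g, b] pixel with channels in [0, 1]
function luminance([r, g, b]) {
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Only pixels brighter than the threshold contribute displacement;
// everything below the cutoff stays flat.
function maskedDisplacement(pixel, threshold) {
  const l = luminance(pixel);
  return l >= threshold ? l : 0;
}
```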
MEMORY
— Temporal inertia via pingpong framebuffer. The mesh remembers previous frames. At high values, the material moves like viscous fluid — slow to respond, slow to forget.
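The behaviour is an exponential moving average. In the real pipeline it runs as a pingpong pair of `WebGLRenderTarget`s, with the shader reading last frame's height field from one target and writing the blend into the other; here it is a single scalar for clarity.

```javascript
// memory = 0: the mesh snaps to the new frame instantly
// memory → 1: slow to respond, slow to forget — viscous fluid
function remember(previous, current, memory) {
  return previous * memory + current * (1 - memory);
}

let height = 0;
for (let frame = 0; frame < 3; frame++) {
  height = remember(height, 1.0, 0.9); // the video suddenly turns bright
}
// after 3 frames, height ≈ 0.271 — the mesh still mostly remembers darkness
```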
RESONANCE
— Frequency of response to motion energy, computed via a 4×4 DataTexture spatial frequency map. Low values produce large geological movements. High values produce fine surface trembling.
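The lookup can be sketched as a nearest-texel fetch from a 4×4 grid, standing in for the `DataTexture` sampled in the shader. The values below are illustrative, not the project's actual frequency map.

```javascript
const SIZE = 4;
// 4×4 spatial frequency map, row-major, as the shader would sample it
const frequencyMap = new Float32Array([
  0.5, 0.8, 1.2,  2.0,
  0.8, 1.5, 2.5,  4.0,
  1.2, 2.5, 5.0,  8.0,
  2.0, 4.0, 8.0, 12.0,
]);

// Nearest-texel fetch for UV coordinates in [0, 1]
function sampleFrequency(u, v) {
  const x = Math.min(SIZE - 1, Math.floor(u * SIZE));
  const y = Math.min(SIZE - 1, Math.floor(v * SIZE));
  return frequencyMap[y * SIZE + x];
}

// Motion energy oscillates the surface at the sampled frequency:
// low values give large geological movement, high ones fine trembling.
function surfaceOffset(u, v, motionEnergy, time) {
  return motionEnergy * Math.sin(time * sampleFrequency(u, v));
}
```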
Technical Architecture
VideoTexture → GPU real-time pipeline (60 fps)
Ridged Multifractal Noise + luminance blend (GLSL)
Pingpong WebGLRenderTarget for MEMORY parameter
4×4 DataTexture frequency map for RESONANCE parameter
Latent space mapping:
INFLUENCE → denoising strength
AMPLITUDE → guidance scale
THRESHOLD → luminosity mask
Generation Health metrics: CONSISTENCY / DRIFT / STABILITY
Session recorder: full parameter timeline, exportable JSON
Claude API: real-time form analysis → prompt suggestion
Fal.ai: in-app generation with canvas frame as image reference
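The latent space mapping above can be sketched as a single translation step from physics parameters to a generation request. The payload field names are assumptions; the actual Fal.ai endpoint schema may differ, and the direct INFLUENCE → strength mapping is assumed rather than documented.

```javascript
// Translate the current physics state into generation parameters.
// influence, amplitude, threshold are the UI values in [0, 1].
function toGenerationParams({ influence, amplitude, threshold }, prompt) {
  return {
    prompt,
    // INFLUENCE → denoising strength (direct mapping assumed)
    denoising_strength: influence,
    // AMPLITUDE → guidance scale (range [1, 20] assumed)
    guidance_scale: 1 + amplitude * 19,
    // THRESHOLD → luminosity-mask cutoff, passed through unchanged
    luminance_threshold: threshold,
  };
}
```

The same object would travel alongside the canvas frame used as the image reference, keeping the physical state and the generation request in lockstep.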
Future
Resonance is an early prototype of a larger idea: that as AI systems become capable of generating entire worlds, the interfaces for working with those worlds need to evolve beyond text prompts and visual judgment.
The next phase connects directly to Runway's General World Model research
— using physics simulation as a spatial consistency validator for generated scenes. If a world model generates a scene where the light source is inconsistent, the physics readout makes that visible as a form anomaly before the creator can consciously identify it.
The longer vision:
a generation feedback layer that speaks the language of materials, not words. For cinematographers, textile designers, architects, and anyone whose creative intuition lives in the physical world.