this.which(youKnow)
The video process uses feedback and texture displacement. GLSL-coded Truchet tiles, 2D/3D fractals, FBM, and polynomial/tetration chaos are used as "control" matrices that displace the image a little bit (or a lot) on each render. The audio is made by sampling the video with jit.peek~ (after downsampling and converting to HSL) at a few stationary spots and then accumulating those values in gen~ to synthesize the sound.
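For anyone curious what the displacement step looks like in shader terms, here's a rough GLSL sketch of the idea (not my actual shaders, and the uniform names u_image, u_feedback, u_time, u_amount are just for illustration): an FBM field stands in for one of the "control" matrices, and the feedback comes from re-sampling the previous frame at a displaced coordinate.

```glsl
// Minimal sketch: FBM-driven displacement of a feedback texture.
// Uniform names are hypothetical, not from the actual patch.

uniform sampler2D u_image;     // current source frame
uniform sampler2D u_feedback;  // previously rendered frame
uniform float u_time;          // animates the noise field
uniform float u_amount;        // displacement strength ("a little bit or a lot")

varying vec2 texcoord;

// cheap hash for value noise
float hash(vec2 p) {
    return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
}

// smooth 2D value noise
float noise(vec2 p) {
    vec2 i = floor(p);
    vec2 f = fract(p);
    vec2 u = f * f * (3.0 - 2.0 * f);
    return mix(mix(hash(i),                   hash(i + vec2(1.0, 0.0)), u.x),
               mix(hash(i + vec2(0.0, 1.0)),  hash(i + vec2(1.0, 1.0)), u.x),
               u.y);
}

// fractional Brownian motion: a few octaves of value noise
float fbm(vec2 p) {
    float v = 0.0;
    float a = 0.5;
    for (int i = 0; i < 5; i++) {
        v += a * noise(p);
        p *= 2.0;
        a *= 0.5;
    }
    return v;
}

void main() {
    // control field: FBM sampled at two offsets gives an x/y displacement vector
    vec2 disp = vec2(fbm(texcoord * 4.0 + u_time * 0.05),
                     fbm(texcoord * 4.0 - u_time * 0.05)) - 0.5;

    // displace the previous frame, then blend in the new image -> feedback loop
    vec4 prev = texture2D(u_feedback, texcoord + disp * u_amount);
    vec4 curr = texture2D(u_image, texcoord);
    gl_FragColor = mix(prev, curr, 0.05);
}
```

Swapping the fbm() call for a Truchet-tile, fractal, or chaos-map function gives the other control matrices; in the real patch that output is what jit.peek~ ends up sampling for the audio side.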
There are a few more steps to each of those processes, but that gives the gist. Everything was made in Max.