this.which(youKnow)

The video process uses feedback and texture displacement. GLSL-coded Truchet tiles, 2D/3D fractals, FBM, and polynomial/tetration chaos serve as "control" matrices that displace the image a little bit (or a lot) on each render. The audio is made by sampling the video with jit.peek~ (after downsampling and converting to HSL) at a few stationary spots and then accumulating those values in gen~ to synthesize audio.
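For a sense of what one of those "control" displacement passes can look like, here is a minimal GLSL sketch of FBM-driven feedback displacement. It is not the patch described above: the uniforms (tex0, feedback, amount, time) and every constant are hypothetical, and a small value-noise FBM stands in for the Truchet/fractal control matrices.

// Minimal sketch, not the actual patch: FBM as a displacement "control" field.
#version 330 core

uniform sampler2D tex0;      // static source image (hypothetical name)
uniform sampler2D feedback;  // previous rendered frame (hypothetical name)
uniform float amount;        // displacement strength, e.g. 0.002
uniform float time;

in vec2 uv;
out vec4 fragColor;

// Standard 2D hash -> value noise -> FBM construction
float hash(vec2 p) {
    return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
}

float noise(vec2 p) {
    vec2 i = floor(p), f = fract(p);
    vec2 u = f * f * (3.0 - 2.0 * f);          // smoothstep fade
    return mix(mix(hash(i),              hash(i + vec2(1, 0)), u.x),
               mix(hash(i + vec2(0, 1)), hash(i + vec2(1, 1)), u.x), u.y);
}

float fbm(vec2 p) {
    float v = 0.0, a = 0.5;
    for (int i = 0; i < 5; i++) {              // 5 octaves
        v += a * noise(p);
        p *= 2.0;                              // lacunarity 2
        a *= 0.5;                              // gain 0.5
    }
    return v;
}

void main() {
    // Two decorrelated FBM reads give an x/y displacement offset.
    vec2 d = vec2(fbm(uv * 4.0 + time * 0.05),
                  fbm(uv * 4.0 - time * 0.05)) - 0.5;
    // Displace the previous frame a little each render, then blend a bit
    // of the source back in so the image never fully dissolves.
    vec3 prev = texture(feedback, uv + d * amount).rgb;
    vec3 src  = texture(tex0, uv).rgb;
    fragColor = vec4(mix(prev, src, 0.03), 1.0);
}

The feedback is the important part: the displaced read comes from the previous frame rather than the source, so tiny per-render offsets compound into larger distortions.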

There are a few more steps to each of those processes, but that gives the gist. Everything was made in Max.
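On the audio side, here is a minimal gen~ codebox sketch of the "sample, then accumulate" idea, again hypothetical rather than the actual patch: assume in1 carries the lightness (0..1) of one stationary pixel, delivered from the downsampled HSL matrix by jit.peek~, and accumulate it as a wrapping phase so that brightness maps to pitch.

// gen~ codebox sketch (hypothetical mapping, not the actual patch):
// in1 = lightness (0..1) of one stationary pixel, read via jit.peek~.
History phase(0);

inc = in1 * 440. / samplerate;   // brighter pixel -> faster accumulation
p = wrap(phase + inc, 0., 1.);   // accumulated value becomes a phase
phase = p;                       // store for the next sample
out1 = cycle(p, index="phase");  // sine lookup at the accumulated phase

Even this toy version shows why accumulation matters: an audio-rate accumulator turns a slowly changing control value into a continuously evolving oscillator rather than a stepped signal.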

Dario's icon

awesome!

Jazer Giles's icon

Thank you Dario!

Glennzone's icon

Really compelling stuff.

Can these sorts of "effects" be applied to live video, processed and displayed live, or are they more of the rendered variety, or a mix, and this.which(ones)? ;-)

Jazer Giles's icon

Thanks Glenn! The source images are just static pics, but most of the resulting effects run at a high enough frame rate to be considered live video (even on my old 2011 MBP!). It really depends on how much downsampling happens before the image goes into the effects algorithm. Applying this sort of stuff to a frequently refreshing source, such as a movie, gets trickier, since these effects rely heavily on feedback!

Year: 2018

Location: Northampton, MA

Author