Video through Audio Effects.

Mark King:

Hi All,

Can anyone tell me if it's possible to process video through audio effects in Jitter, please? I think I need to convert the 4 planes of a matrix to audio signals, process them, and then convert them back to a video matrix?

I'm new to Jitter, so sorry for not having a lot of details. I'm trying to do something similar to this "Databending video in Audacity" clip.

So if anyone has any ideas or has seen a similar post, etc., please let me know. Thanks very much.

Andro:

I'd start simple.
Get your movie and run it through a jit.gl.slab. Pick an effect, e.g. td.repos. Send a getparam message to jit.gl.slab.
Once you've found a parameter you'd like to tweak, do the following.
Get your basic audio signal and plug it into snapshot~.
This converts an audio signal to a float.
Use scale to map the numbers into the range you'd like.
Plug that float into a param message to set the jit.gl.slab parameter.
This will modulate the video depending on the audio signal.
You could split the audio into 3 bands and use 3 snapshot~ objects to modulate 3 parameters.
You can also just create basic sine, sawtooth, or triangle audio signals to modulate the video via snapshot~.
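
If it helps, the scale step is just a linear mapping. A minimal Python sketch (the -1 to 1 input range is what a full-scale signal gives snapshot~; the 0 to 2 output range is just a made-up parameter range):

```python
# Minimal sketch of the snapshot~ -> scale -> param chain.
# Assumes the sample arrives in -1..1 and the (hypothetical)
# shader parameter wants 0..2.

def scale(x, in_lo, in_hi, out_lo, out_hi):
    """Linear mapping, like Max's [scale] object."""
    return (x - in_lo) / (in_hi - in_lo) * (out_hi - out_lo) + out_lo

sample = -0.25                        # one float grabbed by snapshot~
param_val = scale(sample, -1.0, 1.0, 0.0, 2.0)
print(param_val)                      # 0.75 -> send as "param <name> 0.75"
```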

Mark King:

Thanks very much for the reply Andro. I'm playing around with jit.gl.slab now and really liking the results. I'd still like to experiment with processing video as audio, though. I have an interest in analog video, and it's something I've experimented with in that domain, so I would be interested to see this in Jitter.

Would it be possible to use jit.poke~ and jit.peek~ to convert from a video signal to audio and back?
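
For what it's worth, jit.peek~ reads cells out of a named matrix at signal rate using index signals, and jit.poke~ writes a signal back in the same way. A rough Python analogy of the read side (the one-plane 320x240 matrix is hypothetical):

```python
import numpy as np

# Rough analogy for jit.peek~: sweep a one-plane matrix with a ramp
# index (like phasor~ driving the x/y inputs) to read cells as samples.
frame = np.random.randint(0, 256, size=(240, 320), dtype=np.uint8)

h, w = frame.shape
idx = np.arange(h * w)                 # the ramp sweeping every cell
x, y = idx % w, idx // w               # column/row index "signals"
samples = frame[y, x] / 255.0 * 2 - 1  # char 0..255 -> signal -1..1
# jit.poke~ is the reverse: write samples into cells at those indices.
```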

Here's some more background on why I'm interested in all this.

Here's an analog video synth using a DOD guitar pedal as an effect send:
https://www.youtube.com/watch?v=BZleqM8geKI

Here's a video I made by processing RGB video through an audio mixing desk:
https://www.youtube.com/watch?v=mdeS_V7j0_g

Mark King:

Thanks Raja. I'm going to try this and will let you guys know how I get on. Another stupid question: what are the blue cables I see when looking at the 'gl' examples? The references say there are only three types of patch cords?

Mark King:

Ah cool, that makes sense. Thanks again.

Mark King:

Hi, I still haven't figured this one out. I have managed to replicate something similar to what happens if you run analog video through an audio mixer and recapture the output of the audio mixer as video. I've included this one below. It could actually be useful as a way of creating random visual noise.

So what's happening is the video is being converted to audio by jit.release~, then turned back into video by jit.catch~. I've noticed, though, that the values jit.catch~ outputs are very different from the usual 0-255 pixel values I've seen in all the matrix examples. I don't know if this is because there's some sync problem or because of how the values are converted.

Anyway, I'm now trying to output the values of a black and white video in separate planes, convert those to numbers, convert the numbers to an MSP signal, process that, and do the reverse. The problem is how to put the processed values back into a new video matrix in the correct order.
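
(On the value ranges: jit.catch~ outputs float32 matrices holding the raw signal values, so you see floats around -1 to 1 rather than 0-255 chars.) As for the ordering, one way to think about it is a flatten/process/reshape round trip; here's a Python sketch with hypothetical sizes, where tanh just stands in for whatever MSP effect you'd use:

```python
import numpy as np

# Round trip: grayscale matrix -> signal -> effect -> matrix.
# Row-major flattening fixes the pixel order, so reshaping with the
# same dimensions restores it. Sizes and the "effect" are hypothetical.
h, w = 240, 320
frame = np.random.randint(0, 256, size=(h, w), dtype=np.uint8)

sig = frame.astype(np.float32).ravel() / 255.0 * 2 - 1  # 0..255 -> -1..1
processed = np.tanh(3 * sig)                            # stand-in effect

out = ((processed + 1) / 2 * 255).clip(0, 255).astype(np.uint8)
new_frame = out.reshape(h, w)          # same order it was read out in
```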

Max Patch
Copy patch and select New From Clipboard in Max.

I could really use some help here.

Roman Thilenius:

this is on my to-do list as well, but i have not yet started.

the first thing you have to do is make decisions.

because there is no straightforward way to do it, you have to create your own system.

splitting the video information into color channels and/or planes is a good idea. but otoh, you might not need this for a delay effect. maybe you will have to make several different encodings, depending on the purpose...

video is matrices of planes containing 8- or 16-bit data, and the jitter objects talk to each other at a loose rate of some 25 or 50 times per second.

audio is 32- or 64-bit floating point, and objects communicate in a strict pulse of one vector at a time.

the audio vector size as well as the framerate will be different in every runtime and project. the framerate can even be dynamic and change drastically. when "encoding" you have to take all these things into account.

at one point you will have to use something like jit.buffer as raja says, but first you have to decide how you want to build your system.

in an optimal situation the audio will be updated every frame, and you have one audio signal per pixel per channel. but you won't have enough computing power for that before 2045, so you have to make compromises.
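
to put numbers on that compromise, a quick python sketch (frame size, framerate and sample rate are just example values):

```python
# data-rate mismatch between video and audio (example figures only)
width, height, planes = 320, 240, 1    # a modest one-plane matrix
fps = 30                               # video frames per second
sr = 44100                             # audio sample rate

pixels_per_second = width * height * planes * fps   # 2,304,000 cells/s
samples_per_frame = sr / fps                        # 1,470 samples/frame

# one audio channel carries ~1,470 of the 76,800 pixels in each frame,
# so you either shrink the matrix, slow the scan, or use many channels.
print(pixels_per_second / sr)                       # ~52x too much data
```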

the main question when "encoding" to audio is: what do i want the signal's range to represent?

-110

Mark King:
Max Patch
Copy patch and select New From Clipboard in Max.

Thanks guys, I'm still playing around with this. Here's my latest dead end anyway; in this one I converted the video to a list and then to audio. I don't really know why I did it, but here it is.

Roman Thilenius:

when converting audio to video i would start using 1 plane per audio channel. the rgb thing has no analogy in the audio world. different channels could be different colors, and the audio gain is the luminance. the audio input may not exceed 0 db/A.

or you could do 2 planes (1*2000 pixels), where the 2000 vertical pixels represent the normalized abs() of the audio. aka power history, or aka a meter. peakamp/rms/rampsmooth/slide...
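
a rough python sketch of that meter-history idea (sizes and the per-frame peak are just examples):

```python
import numpy as np

# power-history "meter": each frame's peak amplitude becomes one pixel
# of luminance in a 2000-pixel strip. sizes are example values.
sr, fps = 44100, 25
samples_per_frame = sr // fps                  # 1764 samples per frame

audio = np.random.uniform(-1, 1, sr * 2)       # two seconds of test signal
history = np.zeros(2000, dtype=np.uint8)       # a 1x2000 one-plane matrix

for i in range(min(len(audio) // samples_per_frame, 2000)):
    chunk = audio[i * samples_per_frame:(i + 1) * samples_per_frame]
    history[i] = int(np.abs(chunk).max() * 255)  # peakamp-style -> luminance
```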

since the conversion of audio to video involves a form of downsampling, you'd have to filter or interpolate the audio first... but if you do, you will only process subbass audio in the end... maybe a frequency shift of about minus 10 octaves should be performed first.
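
for the downsampling step, the usual recipe is lowpass first, then keep every nth sample. a bare-bones python sketch (the moving average is a crude stand-in for a proper anti-alias filter; N is an example value):

```python
import numpy as np

# naive decimation: lowpass, then keep every Nth sample.
N = 32
audio = np.random.uniform(-1, 1, 44100).astype(np.float32)

kernel = np.ones(N, dtype=np.float32) / N
smoothed = np.convolve(audio, kernel, mode="same")   # crude lowpass
downsampled = smoothed[::N]                          # 44100 -> ~1378 values/s
```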

the conversion from video to audio seems to be a bit more complicated. eventually i would try to work inside downsampled poly~s.

-110