I'm making a Video to Audio project, and I'm using jit.peek~.
As it says in the help file, jit.peek~ reads matrix data as an audio signal. But how?
The third argument sets the plane, so it reads only one of the three RGB planes at a time.
But how does this process actually work?
After some audio tests, here's what I think is happening:
jit.peek~ measures the amount of color in one plane and translates that into the amount of harmonics. The more color a certain plane has, the more harmonics you can hear.
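For reference, here's a minimal sketch of what I imagine "reading one plane of a matrix as a signal" could mean, written in plain Python rather than a Max patch. The coordinate lists, the 0-255 char scaling, and the `peek` function are all my assumptions, not actual jit.peek~ internals:

```python
import numpy as np

# Hypothetical stand-in for one plane of a char matrix (values 0-255),
# e.g. just the R plane of a small 4x4 RGB frame.
plane = np.arange(16, dtype=np.uint8).reshape(4, 4)

def peek(plane, xs, ys):
    """For each sample-rate (x, y) coordinate pair, read the cell value
    and scale 0-255 char data into a 0.0-1.0 signal range.
    (The scaling is an assumption about how char data maps to a signal.)"""
    return [plane[y, x] / 255.0 for x, y in zip(xs, ys)]

# Scan the top row left to right: each pixel value becomes one audio sample.
xs = [0, 1, 2, 3]
ys = [0, 0, 0, 0]
samples = peek(plane, xs, ys)
print(samples)
```

In this mental model, the pixel values themselves would become the waveform as the coordinates sweep across the plane, rather than the color amount controlling harmonics directly. Is that closer to what jit.peek~ does?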
Am I correct? If not, please explain why.
Thank you all, and happy patching!