Extracting color information from defined zones (Jitter/OpenGL?)
Hi everyone, I'm new to the Jitter/OpenGL world, so I hope someone can help me.
I would like to define some rectangular "visible" zones in a large canvas (screen) and be able to extract color information (RGB) from these zones dynamically, as a video running on the same canvas passes over the defined locations.
I have actually implemented a simple patch that does this using some Jitter objects like "matrix" and "op +" (for zone definition) and "submatrix" (for color extraction), and it works relatively well. But this approach has some limits, mainly with the video resolution: because I'm working at the level of single-pixel definition and extraction, I also have to keep the size of the "main" canvas very low in order to see/extract the correct data.
So, my question is: how can I define the zones (plane shapes?) and then dynamically extract color information from them in the OpenGL world? Textures, something else...?
Obviously, at the end I need to come back to the CPU and work with matrices, because I will use the pixel information (maybe interpolated?).
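To make the idea concrete, here is a rough sketch in plain JavaScript (not a Max patch; the function and field names are made up for illustration) of the kind of per-zone averaging I mean: given a frame as a flat 3-plane array, average the RGB values inside one rectangular zone, roughly what my "submatrix" extraction does.

```javascript
// Sketch only: average the RGB values inside one rectangular zone
// of a frame stored as a flat row-major array [r,g,b, r,g,b, ...].
// "zone" is { x, y, w, h } in pixels. Names are hypothetical.
function zoneMeanRGB(frame, width, zone) {
  let r = 0, g = 0, b = 0;
  const count = zone.w * zone.h;
  for (let y = zone.y; y < zone.y + zone.h; y++) {
    for (let x = zone.x; x < zone.x + zone.w; x++) {
      const i = (y * width + x) * 3; // 3 planes per pixel
      r += frame[i];
      g += frame[i + 1];
      b += frame[i + 2];
    }
  }
  return [r / count, g / count, b / count];
}
```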
Thank you in advance for the help!
check jit.3m
Hello andro, thanks for the reply, but I probably haven't explained the situation well.
What I'd like to do is define some rectangular, editable regions (horizontal or vertical) on a larger screen on which a video is playing (in the background?).
The only elements I want to extract are the RGB values of each previously defined region, individually.
So, as the running video passes over the regions, the color values of those areas will change accordingly; at that point they will be the colors of the video itself.
I hope this description is clearer than the previous one.
And thanks again for the help!!
so yeah, check jit.3m.
Those 4 areas can be made with jit.gl.videoplane; you can use the OpenGL version of scissors to cut the main image into multiple textures.
Check phiol's awesome patch:
https://cycling74.com/forums/sharing-jit-gl-scissor-gl-equivalent-of-jit-scissor/
This also uses jit.gl.multiple so the multiple plane work is done if you just define how many areas you need.
Because it's all textures, your bottleneck will be going from the GPU into jit.3m (not a problem with small matrices).
If it's all new to you, then I'd skip jit.gl.multiple: split your texture up into 4 pieces (each corner) and send each one into a jit.gl.videoplane AND to jit.3m.
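If it helps, splitting into the four corner pieces is just this kind of arithmetic (a plain JavaScript sketch, names made up; in the patch you'd do the equivalent with the scissor/texture objects):

```javascript
// Sketch only: compute the four corner rectangles that split a
// W x H image into quadrants (top-left, top-right, bottom-left,
// bottom-right). Handles odd dimensions by giving the remainder
// to the right/bottom pieces.
function cornerRects(w, h) {
  const hw = Math.floor(w / 2);
  const hh = Math.floor(h / 2);
  return [
    { x: 0,  y: 0,  w: hw,     h: hh     }, // top-left
    { x: hw, y: 0,  w: w - hw, h: hh     }, // top-right
    { x: 0,  y: hh, w: hw,     h: h - hh }, // bottom-left
    { x: hw, y: hh, w: w - hw, h: h - hh }, // bottom-right
  ];
}
```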
videoplane allows you to layer images and textures with others.
This awesome example from Max shows almost everything you can do with jit.3m.
https://cycling74.com/forums/newbie-needs-to-extract-colour-information-from-video/
Still, it's not clear to me what you're trying to achieve. Maybe make a sketch on paper and upload it, or show us what you've got for now.
Ok, yes, almost all of this stuff is new to me. I began using Jitter a few days ago, so I will try to figure out exactly how the examples you've posted work.
In the meantime, as you suggested, I've put below a little sketch of the main idea, plus a simplified patch that I've already made with Jitter.
Basically, the patch should be used to control addressable LED strips, so what I need at the end of the process are the RGB values of the previously defined strips (zones_definition in the patch), where every pixel of the matrix represents a real LED of the strips.
The idea is to control many strips over Art-Net, so I have also written some custom JavaScript code (not present in the posted patch) to manage the concatenation of the LED/pixel addresses (strip0, strip1, etc. in the patch) following Art-Net's rules (universes etc.).
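Just to illustrate the Art-Net side: the address math is roughly this (a simplified sketch, not my actual code, assuming 3 DMX channels per RGB LED and the common convention of 170 LEDs per universe, since an Art-Net universe carries 512 DMX channels and 170 * 3 = 510):

```javascript
// Sketch only: map a global LED index (across concatenated strips)
// to an Art-Net universe and 1-based DMX start channel.
// Assumes 3 channels (RGB) per LED, 170 LEDs per universe.
function ledToArtnet(ledIndex) {
  const LEDS_PER_UNIVERSE = 170; // floor(512 / 3)
  const universe = Math.floor(ledIndex / LEDS_PER_UNIVERSE);
  const channel = (ledIndex % LEDS_PER_UNIVERSE) * 3 + 1; // DMX channels start at 1
  return { universe, channel };
}
```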
As I said in the first post, the problem with the current approach is related to the size of the "canvas". Defining the "extraction zones" this way, I need to keep the resolution very low in order to have both: (1) the visibility of the defined zones and (2) the relation between defined pixels and individual LEDs.
Probably my real question is whether there is an efficient method to define the regions in an OpenGL context and then extract the RGB information and put it in some matrices (maybe using interpolated values?) for subsequent channel aggregation.
I hope everything is clearer now and, as always, thanks for the help!
Sorry, but I realized that the presence of an abstraction in the previous patch could compromise its correct functioning.
So here is a modified version that should be correct.
if you're asking how to generate frames using GL, and then read those frames back to a jit.matrix to extract the color information, then there are two solutions: jit.gl.asyncread or jit.world / jit.gl.node @capture 1
a third possible solution is to set a named jit.matrix as your rendering destination.
if you're not interested in using GL (as the patch you posted seems to indicate), then i'm not sure what you're asking. to extract the color information from a jit.matrix use either the getcell message, jit.iter, or jit.spill. you can up or downsample the matrix prior to extraction by sending the input matrix to another jit.matrix object with the desired dimensions specified as arguments (which disables adapt mode). you enable or disable interpolation when scaling by toggling @interp on the destination matrix.
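for illustration, the effect of downsampling a strip before extraction is roughly this (a plain javascript sketch, names made up; it uses a simple box average per destination cell as a rough stand-in for what a smaller jit.matrix with @interp enabled gives you):

```javascript
// Sketch only: reduce a strip of [r,g,b] pixels to destLen cells
// (one per LED) by averaging each source segment into one output
// cell. A crude analogue of resampling into a smaller matrix.
function downsampleStrip(pixels, destLen) {
  const out = [];
  const ratio = pixels.length / destLen;
  for (let i = 0; i < destLen; i++) {
    const start = Math.floor(i * ratio);
    const end = Math.max(start + 1, Math.floor((i + 1) * ratio));
    let r = 0, g = 0, b = 0;
    for (let j = start; j < end; j++) {
      r += pixels[j][0];
      g += pixels[j][1];
      b += pixels[j][2];
    }
    const n = end - start;
    out.push([r / n, g / n, b / n]);
  }
  return out;
}
```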
Hi Rob,
I am in the first scenario, generating frames from GL. The jit.gl.asyncread works fine.
But I am wondering: how can I extract just one videoplane, not the whole render window?
I've attached my patcher; as you can see, I am playing with the positions of the videoplane and blending with a mesh.
I thought it would be as easy as outputting the videoplane's matrix to jit.3m...
Thanks in advance !