cv.jit and "triggering areas"

Aug 27, 2008 at 2:50am

Hello everybody, and first of all, thanks for this precious forum.
I'm a French art student (and almost new to Jitter).
I've already trained myself through several small patches and tutorials, but for an upcoming project I need to use computer vision tools.
The project is a video capture of a crowd moving through a corridor (filmed from above); video would be projected on the floor according to the visitors' movements.
I don't really want anything complicated, but I'd like to create several virtual areas within the captured image. Each time someone enters an area, a video is triggered at that very spot.
I downloaded the cv.jit library, but unfortunately I don't really know how to create "triggering zones" (I don't know what to call them precisely). I searched the forum for information but didn't find answers (maybe I should have?). I don't know if my explanation is clear enough, but thanks to anybody who can advise me.
Have a nice day.

#39449
Aug 27, 2008 at 9:00pm

Not sure if this is what you want, but it looks at the amount of difference between frames and gives you a bang if a threshold is exceeded. There are three "trigger zones": the top-left corner, the top-right corner, and the area between the two.
With some playing around it might do what you want it to.

Hope this helps.

– Pasted Max Patch, click to expand. –
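The logic of that patch, sketched outside Max in Python/NumPy: difference two frames, sum the change inside each rectangular zone, and "bang" when the sum exceeds a threshold. The zone rectangles and threshold value here are illustrative, not taken from the patch.

```python
import numpy as np

# Three rectangular trigger zones, name: (y0, y1, x0, x1), for a 320x240 frame.
# These coordinates are invented for the example.
ZONES = {
    "top_left":  (0, 120, 0, 106),
    "top_mid":   (0, 120, 106, 213),
    "top_right": (0, 120, 213, 320),
}

def triggered_zones(prev, curr, threshold=500_000):
    """Return the names of zones whose summed frame difference exceeds threshold."""
    diff = np.abs(curr.astype(np.int32) - prev.astype(np.int32))
    fired = []
    for name, (y0, y1, x0, x1) in ZONES.items():
        if diff[y0:y1, x0:x1].sum() > threshold:
            fired.append(name)  # in Max, this would be the bang out of that zone
    return fired
```

In the patch, the per-zone cropping is what jit.scissors (or srcdim attributes) does, and the sum-plus-comparison is the jit.op / threshold section.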
#138935
Aug 28, 2008 at 4:40am

Thank you very much mib, that was exactly what I wanted to be able to do...
but now I realize that it's really complicated for me to understand everything.

So each time I want to create "areas" in a matrix, do I have to use the scissors-and-glue technique?

(I'm very happy about your help; I will study the patch seriously.)

But because I'm curious, I wondered what the solution would be for creating non-rectangular triggering areas.
Does Jitter offer a way to draw such shapes?

Thanks again

#138936
Aug 28, 2008 at 4:52am

I'm sure it's possible, but I'm no Jitter expert. Maybe someone else has a better solution...
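One common approach to non-rectangular zones, offered here as an editor's sketch rather than something from this thread's patches: draw each zone as a binary mask matrix of any shape (in Jitter, for instance, with jit.lcd or a stencil image), multiply the motion image by the mask with jit.op @op *, and sum the result. In NumPy terms, with an example circular zone:

```python
import numpy as np

def circular_mask(h, w, cy, cx, r):
    """Binary mask: 1 inside a circle of radius r centred at (cy, cx).
    Any drawable shape works the same way; the circle is just an example."""
    yy, xx = np.ogrid[:h, :w]
    return ((yy - cy) ** 2 + (xx - cx) ** 2 <= r * r).astype(np.uint8)

def zone_activity(motion, mask):
    """Count of motion pixels falling inside the masked zone
    (the multiply stands in for jit.op @op *)."""
    return int((motion * mask).sum())
```

One mask matrix per zone gives you as many arbitrarily shaped trigger areas as you like, without any scissors-and-glue cropping.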

#138937
Aug 28, 2008 at 2:20pm

This type of thing is very doable with the cv.jit library. Here’s something to get you started:

Zachary

max v2;
#N vpatcher 419 51 1080 829;
#P origin 0 -49;
#P window setfont "Sans Serif" 14.;
#P window linecount 3;
#P comment 216 663 359 196622 Now from here you can apply whatever logic suits you to make the blobs trigger things based on position and/or size (it’s all Max from here);
#P window linecount 2;
#P comment 220 34 400 196622 IF the camera is above crowd , looking down at the tops of their heads , this may be a useful method to explore;
#P window setfont "Sans Serif" 9.;
#P window linecount 1;
#P comment 118 385 339 196617 < -- this changes the minimum size requirement to be recognized as a "blob";
#P comment 148 307 253 196617 < -- framesubbing (you could also use cv.jit.framesub);
#P newex 55 303 89 196617 jit.op @op absdiff;
#P newex 55 270 27 196617 t l l;
#N vpatcher 776 266 1193 541;
#P outlet 32 232 15 0;
#P inlet 51 63 15 0;
#P window setfont "Sans Serif" 9.;
#P window linecount 1;
#P newex 51 88 27 196617 t l l;
#P newex 32 203 29 196617 gate;
#P newex 68 147 27 196617 > 0.;
#P comment 164 119 160 196617 < -- in Max5 , we can use [zl sum];
#P window linecount 0;
#P newex 68 118 94 196617 expr $f1+$f2+$f3;
#P window linecount 2;
#P comment 95 62 181 196617 we don’t want this to be included in our analysis , so we must filter it out:;
#P window linecount 0;
#P comment 95 27 295 196617 when no blobs are detected , cv.jit.blobs.centroids will output a 1-cell matrix with all 3 planes set to 0. (i.e. cell 0 val 0. 0. 0.);
#P connect 4 0 5 0;
#P connect 5 0 8 0;
#P connect 7 0 6 0;
#P connect 6 0 5 1;
#P connect 6 1 2 0;
#P connect 2 0 4 0;
#P pop;
#P newobj 216 542 67 196617 p filterZeros;
#B color 12;
#P comment 304 638 229 196617 < -- prints blob number followed by x y and area;
#P comment 313 534 314 196617 left outlet passes label number of blob (counting from 0);
#P newex 216 566 48 196617 t b l;
#P newex 216 613 48 196617 zl join;
#P newex 216 590 27 196617 i;
#P comment 313 517 314 196617 middle outlet passes 3 element list for each detected blob (x y area);
#P window linecount 2;
#P comment 198 334 184 196617 < -- create binary image (all values above value pass 1 , the others pass 0);
#P window linecount 1;
#P comment 204 139 141 196617 < -- double-click to see inside;
#N vpatcher 10 59 456 325;
#P outlet 139 226 15 0;
#P window setfont "Sans Serif" 9.;
#P window linecount 1;
#P message 254 142 69 196617 getwhitelevel;
#P flonum 179 123 35 9 0. 1. 3 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P message 179 142 70 196617 whitelevel $1;
#P message 98 142 65 196617 getblacklevel;
#P flonum 23 123 35 9 0. 1. 3 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P message 23 142 68 196617 blacklevel $1;
#P message 235 176 44 196617 defaults;
#P message 249 97 65 196617 getsharpness;
#P flonum 179 78 35 9 0. 1. 3 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P message 179 97 68 196617 sharpness $1;
#P message 99 97 68 196617 getsaturation;
#P flonum 24 78 35 9 0. 1. 3 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P message 24 97 69 196617 saturation $1;
#P message 243 53 60 196617 getcontrast;
#P flonum 180 34 35 9 0. 1. 3 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P message 180 53 61 196617 contrast $1;
#P message 99 53 69 196617 getbrightness;
#P flonum 24 34 35 9 0. 1. 3 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P message 24 53 70 196617 brightness $1;
#P message 356 53 35 196617 gethue;
#P flonum 314 34 35 9 0. 1. 3 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P message 314 53 38 196617 hue $1;
#P window linecount 2;
#P comment 281 175 100 196617 reset image controls to default values.;
#P connect 18 0 17 0;
#P connect 5 0 4 0;
#P connect 11 0 10 0;
#P connect 16 0 23 0;
#P connect 22 0 23 0;
#P lcolor 2;
#P connect 20 0 23 0;
#P lcolor 2;
#P connect 19 0 23 0;
#P lcolor 2;
#P connect 17 0 23 0;
#P lcolor 2;
#P connect 3 0 23 0;
#P lcolor 2;
#P connect 1 0 23 0;
#P lcolor 2;
#P connect 15 0 23 0;
#P lcolor 2;
#P connect 9 0 23 0;
#P lcolor 2;
#P connect 7 0 23 0;
#P lcolor 2;
#P connect 13 0 23 0;
#P lcolor 2;
#P connect 12 0 23 0;
#P lcolor 2;
#P connect 6 0 23 0;
#P lcolor 2;
#P connect 4 0 23 0;
#P lcolor 2;
#P connect 10 0 23 0;
#P lcolor 2;
#P connect 14 0 13 0;
#P connect 21 0 20 0;
#P connect 8 0 7 0;
#P connect 2 0 1 0;
#P pop;
#P newobj 120 137 82 196617 p image_control;
#B color 12;
#P newex 216 637 84 196617 print blobCoords;
#P newex 216 496 63 196617 jit.iter;
#P window linecount 2;
#P comment 172 245 199 196617 < -- create left-to-right mirror image (can also be done using srcdim attributes);
#P window linecount 1;
#P newex 244 104 278 196617 jit.window grid @size 400 300 @pos 10 50 @depthbuffer 1;
#P toggle 33 24 15 0;
#P newex 33 69 55 196617 t b b erase;
#P newex 33 47 46 196617 qmetro 2;
#P newex 33 104 207 196617 jit.gl.render grid @erase_color 0. 0. 0. 1.;
#P window linecount 2;
#P newex 55 521 106 196617 jit.gl.videoplane grid @transform_reset 2;
#P number 79 384 35 9 0 255 3 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P window linecount 1;
#P message 79 403 67 196617 threshold $1;
#P newex 55 244 115 196617 jit.dimmap @invert 1 0;
#P window linecount 2;
#P newex 55 428 112 196617 cv.jit.label @charmode 1 @threshold 200;
#P user jit.pwindow 433 149 162 122 0 1 0 0 1 0;
#P number 161 332 35 9 0 255 3 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P window linecount 1;
#P newex 55 331 103 196617 jit.op @op > @val 20;
#P newex 55 496 132 196617 cv.jit.blobs.centroids.draw;
#P newex 55 218 66 196617 jit.rgb2luma;
#P comment 123 219 123 196617 < -- convert to greyscale;
#P newex 55 468 106 196617 cv.jit.blobs.centroids;
#P message 162 159 46 196617 settings;
#P message 127 159 33 196617 close;
#P message 94 159 30 196617 open;
#B color 13;
#P newex 55 187 146 196617 jit.qt.grab 320 240 @unique 1;
#P hidden newex 89 47 93 196617 bgcolor 90 90 90;
#P comment 282 497 187 196617 < -- here is the data that you work with;
#P connect 21 0 19 0;
#P connect 19 0 20 0;
#P connect 20 2 18 0;
#P connect 20 0 18 0;
#P connect 20 1 2 0;
#P fasten 26 0 2 0 125 183 60 183;
#P fasten 3 0 2 0 99 180 60 180;
#P fasten 4 0 2 0 132 180 60 180;
#P fasten 5 0 2 0 167 180 60 180;
#P connect 2 0 8 0;
#P connect 8 0 14 0;
#P connect 14 0 36 0;
#P connect 36 1 37 0;
#P connect 37 0 10 0;
#P connect 10 0 13 0;
#P connect 15 0 13 0;
#P connect 13 0 6 0;
#P connect 6 0 9 0;
#P connect 9 0 17 0;
#P connect 16 0 15 0;
#P fasten 36 0 37 1 60 295 139 295;
#P connect 11 0 10 1;
#P fasten 10 0 9 1 60 357 182 357;
#P fasten 6 0 24 0 60 490 221 490;
#P connect 24 0 35 0;
#P connect 35 0 32 0;
#P connect 32 0 30 0;
#P connect 30 0 31 0;
#P connect 31 0 25 0;
#P connect 24 1 30 1;
#P connect 32 1 31 1;
#P fasten 2 0 12 0 60 211 412 211 412 141 439 141;
#P pop;
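The chain of objects in Zachary's patch (jit.rgb2luma, absdiff against the previous frame, jit.op @op > to binarize, cv.jit.label, cv.jit.blobs.centroids) can be sketched in Python/NumPy as below. The flood-fill labeling is a naive stand-in for cv.jit.label, not its actual algorithm, and the size filter mirrors the patch's minimum-blob-size threshold.

```python
from collections import deque
import numpy as np

def binarize(prev, curr, thresh=20):
    """Frame-difference two greyscale frames, keep pixels that changed
    (rgb2luma -> absdiff -> jit.op @op > in the patch)."""
    return (np.abs(curr.astype(int) - prev.astype(int)) > thresh).astype(np.uint8)

def blob_centroids(binary, min_size=5):
    """Label 4-connected blobs and return (x, y, area) per blob, which is
    roughly what cv.jit.blobs.centroids outputs."""
    h, w = binary.shape
    seen = np.zeros_like(binary, dtype=bool)
    blobs = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                q, cells = deque([(sy, sx)]), []
                seen[sy, sx] = True
                while q:  # flood fill one connected component
                    y, x = q.popleft()
                    cells.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(cells) >= min_size:  # drop specks, like the patch's size threshold
                    ys, xs = zip(*cells)
                    blobs.append((sum(xs) / len(xs), sum(ys) / len(ys), len(cells)))
    return blobs
```

From the (x, y, area) list onward it is, as the patch's comment says, "all Max from here": any zone logic can be applied to the coordinates.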

#138938
Aug 28, 2008 at 3:25pm

And here’s an abstraction that breaks the coordinates into grids of variable sizes. You can hook this up to the output of that previous patch, or modify it to suit your purposes. Hope it helps.

best,
Zachary

#138939
Aug 28, 2008 at 10:07pm

Oops. There was a mistake in that last attachment. I wish I could edit the post. Anyway, here it is again, corrected.

Zachary

#138940
Aug 30, 2008 at 4:11am

Thank you again for your nice help.
I will have some spare time to study this seriously over the weekend.

If I want to project video on the floor (at the very place of my video recognition), should I use an IR projector and an IR camera?
Could anyone advise me on a reliable system?

#138941
Aug 30, 2008 at 9:07am

Zach and all, first off..THANK YOU SO MUCH!

I have been plowing away with Jitter for the last month for a project, and that patch and abstraction are light years better than my tracking system.

I have one question, that should be easy enough.

One fundamental thing I don't understand is how to group the x,y information and the blob label as one item... I've tried packing and such and can't get there.

I now have both your z.grid abstraction and overhead tracking text patch.

What I'm trying to do is this: after adjusting the camera image and using the cv objects, I will break the overhead camera feed into 4 grids running horizontally (now by using your wonderful z.grid abstraction and not my convoluted mess!).

When a user walks into one of the grids, a bang is fired. Simple enough. But then I don't want another bang to fire until either that user has left and returned, or another user enters while he/she is still there. Ideally, multiple users could have triggered the bang once and still be in the same grid. Essentially, only a NEW blob in each grid can trigger the bangs.

How could this be done? It seems easy enough, but I have been pulling my hair out over it! I can only seem to get either a stream of x,y values or a blob number, but no way to link them up and process them as a combined identity.

THANK YOU THANK YOU THANK YOU! says the red-eyed artist slaving at 5am.

Ben
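Assuming blob IDs that are stable between frames (with cv.jit you would typically run the raw centroids through cv.jit.blobs.sort first, since label numbers get reassigned every frame), the "bang only for a NEW blob per grid" behaviour Ben describes reduces to a per-grid set difference. The 4-column, 320-pixel-wide layout below is taken from his description; the class and names are an editor's sketch, not code from the thread.

```python
def grid_of(x, n_cols=4, width=320):
    """Which of the n_cols vertical strips does x fall in?"""
    return min(int(x / (width / n_cols)), n_cols - 1)

class NewBlobDetector:
    """Fires a 'bang' for a grid only when a blob ID appears there that
    was not present in that grid on the previous frame."""

    def __init__(self, n_cols=4):
        self.n_cols = n_cols
        self.prev = {g: set() for g in range(n_cols)}

    def update(self, blobs):
        """blobs: iterable of (blob_id, x, y). Returns the grids to bang."""
        curr = {g: set() for g in range(self.n_cols)}
        for blob_id, x, y in blobs:
            curr[grid_of(x, self.n_cols)].add(blob_id)
        # set difference: any ID in the grid now that was not there last frame
        bangs = [g for g in curr if curr[g] - self.prev[g]]
        self.prev = curr
        return bangs
```

A blob that stays in its grid never re-triggers; a second blob entering an already occupied grid does, which matches the behaviour asked for.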

#138942
Sep 4, 2008 at 4:10am

I have a simple example patch showing how to do VNS-style area movement detection using a histogram. It only uses two cv.jit abstractions for convenience, but they're very easily replaceable. As far as I know, that's the easiest way to do this kind of thing.

There are a lot of comments in the patch explaining how it works.

Jean-Marc

– Pasted Max Patch, click to expand. –
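The histogram technique can be sketched in NumPy as follows: paint each pixel of a "zone map" matrix with its zone number, keep only the zone numbers of pixels where motion was detected, and histogram them, giving one count of active pixels per zone (the counting step is what a histogram object computes in Jitter). The zone map in the usage test below is an invented two-strip example; any zone numbering of pixels works.

```python
import numpy as np

def zone_counts(motion, zone_map, n_zones):
    """motion: 0/1 matrix of detected movement; zone_map: same shape,
    holding a zone number per pixel. Returns active-pixel count per zone."""
    active = zone_map[motion.astype(bool)]  # zone numbers of the moving pixels
    return np.bincount(active.ravel(), minlength=n_zones)
```

The appeal of this approach is that one matrix operation handles any number of zones of any shape, with no per-zone patching.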
#138943
Sep 6, 2008 at 1:19am

Great example patch Jean-Marc!

Here are two other simple examples – the first using an x/y labeling scheme, the second using a single-number labeling scheme like in Jean-Marc’s patch.

Zachary

1st example:

– Pasted Max Patch, click to expand. –

2nd example:

– Pasted Max Patch, click to expand. –
#138944
Sep 8, 2008 at 3:20pm

thanks for those great patches.
In Jean-Marc's example, I wondered how to get single values out of the jit.cellblock.
Which outlet should I use to split the list and obtain as many separate values as there are zones?
I also wondered how to trigger my videos with a simple bang, without going back to the beginning of the tape if there is still movement in the zone. I don't know if that's very clear, sorry.
Thank you for your advice.

#138945
Sep 9, 2008 at 3:12am

Quote: marc wrote on Tue, 09 September 2008 00:20
—————————————————-
> thanks for those great patches.
> in Jean Marc’s example, I wondered how to get single values from the jit.cellblock.
—————————————————-

This is basic Jitter. You can use jit.spill to dump the contents into a list, from which you can access individual elements using zl nth. You can also use jit.iter to output each value one by one, or jit.matrix and the "getcell" message. All of this works on the actual output of the jit.matrix object, not on the cellblock.

See the slightly modified example below:

– Pasted Max Patch, click to expand. –
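In Python terms, the options above amount to treating the zone matrix as a plain list of per-zone values and routing each element to its own destination. This `split_zones` helper and its handler scheme are hypothetical, purely to illustrate the routing:

```python
def split_zones(counts, handlers):
    """Route each zone's value to its own handler: the rough equivalent of
    jit.spill into per-element [zl nth] lookups, or jit.iter feeding a route
    object, one destination per zone."""
    for zone, value in enumerate(counts):
        handlers[zone](value)
```

For example, with one video player per zone, each handler could decide whether its zone's count warrants a trigger.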
#138946
May 27, 2009 at 6:39pm

This is just awesome; thank you guys for sharing these examples. They are a great learning tool.
I hope to be able to share some work with you soon... cheers!
Omer

#138947
May 31, 2009 at 8:51pm

Thanks for the excellent examples. Much appreciated.

Greg

http://www.qfwfqduo.com

#138948
Feb 12, 2010 at 3:13am

Sorry, I'm very new to Max/MSP, and I was wondering how you convert those gibberish-looking max5_patcher text blocks you have all been posting into something that works in Max.

Thanks.

#138949
Feb 12, 2010 at 10:48am

You copy the text and then choose "New from Clipboard" in the Max 5 File menu.

Ad.

#138950
Feb 15, 2010 at 7:54am

On the note of triggering, I was hoping someone could help me trigger/track a certain color in a live video (jit.qt.grab).

Ad: what do you do after you copy the text into Max 5? How do you make sense of it? It's still gibberish code!

#138951
Feb 15, 2010 at 8:25am

Hey, never mind about copying the text; apparently I should copy "begin_max_patcher" and "end_max_patcher" too! Oops.

#138952
