cv.jit.blobs.recon: extract/subtract people from blob detection

flim's icon

hello!

in an ongoing project of mine I've "paved" a white wall with black squares (movable via magnets). when moving these squares around, sounds arise (with the help of [cv.jit.blobs.bounds]). of course my sounds should only go along with the squares being shoved around - and NOT be triggered by the mere movement of the people entering the room :)

this is hard to realize just by using [cv.jit.blobs.bounds] - any idea how to extract/subtract people from "blob detection"?

I've tried working this out with [cv.jit.blobs.recon] - and am far from achieving satisfying/stable results..

grateful for any further hints..!

all the best

-jonas

zerox_'s icon

if you have stable light conditions
you could maybe use just one background picture for the background subtraction
and then limit the blobs recognized by cv.jit.blobs.bounds to a certain size and proportion
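The size-and-proportion filter zerox_ suggests could look roughly like this outside Max - a Python sketch, assuming you already have blob bounding boxes as (x, y, width, height) tuples (e.g. read out of the [cv.jit.blobs.bounds] matrix); all threshold values here are made-up examples to tune for your setup:

```python
# Keep only blobs whose area and aspect ratio match the magnet squares,
# so tall/large person-shaped blobs and tiny noise blobs are dropped.

def filter_blobs(blobs, min_area=100, max_area=900, max_aspect=1.5):
    """blobs: list of (x, y, w, h) bounding boxes."""
    kept = []
    for (x, y, w, h) in blobs:
        area = w * h
        aspect = max(w, h) / max(min(w, h), 1)  # >= 1.0, avoids div by zero
        if min_area <= area <= max_area and aspect <= max_aspect:
            kept.append((x, y, w, h))
    return kept

blobs = [
    (10, 10, 20, 20),   # square-ish, right size -> a magnet square
    (50, 5, 30, 120),   # tall and big -> probably a person
    (100, 100, 3, 3),   # tiny -> noise
]
squares = filter_blobs(blobs)  # only the first blob survives
```

In the patch, the equivalent would be iterating the bounds matrix and gating each blob on width*height and width/height before it reaches the sound-triggering logic.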

flim's icon

hey xerox! thanks for the quick reply

my light conditions will indeed be stable.. but I didn't quite understand what you mean by "using only one background picture"..

cheers..!

-jonas

zerox_'s icon

i mean the background picture for the difference picture
(you take a snapshot of the environment without objects and then subtract current frames from it, so you get a picture containing only the objects which weren't there when you took the background picture)
if the light is stable you don't have to update your background picture
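The difference-picture idea zerox_ describes, sketched as plain pixel math in Python (in a Jitter patch this is essentially an absolute-difference op plus a threshold on each frame against the stored snapshot; the 8-bit grayscale values below are hypothetical):

```python
# One static background frame; each live frame is compared against it.
# Pixels that differ from the background by more than `thresh` become
# foreground (1), everything else background (0).

def foreground_mask(background, frame, thresh=30):
    """background, frame: 2-D lists of 8-bit grayscale values."""
    return [
        [1 if abs(f - b) > thresh else 0 for f, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]

background = [
    [200, 200, 200],
    [200, 200, 200],
]
frame = [
    [200, 60, 200],   # one dark "square" placed on the white wall
    [200, 62, 200],
]
mask = foreground_mask(background, frame)  # middle column is foreground
```

Since the background snapshot is fixed, only things that weren't in the room at snapshot time show up in the mask - which is why this only works reliably under stable lighting.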

flim's icon

thanks, zerox! searching the forum for "background" + "subtraction" gave me quite a constructive insight..

now I just have to find out how to predefine/limit the blob sizes to be detected (the answer should lie in modifying cv.jit.blobs.bounds.draw)..?

cheers!

-j

flim's icon

so I've tried to modify cv.jit.blobs.bounds.draw (turning jean-marc's abstraction into a subpatcher)

unfortunately I wasn't able to make cv.jit.blobs.bounds limit its recognized blobs to a certain size and proportion. I mainly focused on fooling around with subpatcher prepare_for_lcd's [jit.iter] output

I'd say I'm back to: HELP.

cheers..

-jonas

zerox_'s icon

ok. here is a quick test
the single blobs are not sorted
the first id is always at the upper left corner
it would be nicer to sort them via cv.jit.blobs.sort and grab the single blobs with jit.findbounds
i would prefer to do this dynamically in java
maybe somebody could answer this question ;-) https://cycling74.com/forums/problems-with-non-matrix-output-values-of-jitter-objects-in-mxj

[mod - removed uncompressed patch]

flim's icon

hej xerox!

I've tried copying 'n pasting the code a couple of times, without success.. something might be missing at its tail..? (I wouldn't know, since I'm used to the 'copy compressed' format)

cheers in advance..! :)

-jonas

zerox_'s icon
Max Patch
Copy patch and select New From Clipboard in Max.

sorry. i didn't use "copy compressed"

flim's icon

:)

thanks, xerox! 2 in 1, nice.. I'm at work right now, will have a closer look at your patch as soon as I get home. and post a version that suits me best (if any modification necessary at all..)

can't help you with the java part though. (: although I've found this neat patch for blob sorting issues recently..

cheers!

Max Patch
Copy patch and select New From Clipboard in Max.

flim's icon

hey xerox!

sorry for the late reply.. I've been quite busy this week (haven't found time to deal with my request until now)..

I'm curious: what's the [unpack] [pack] bit in your "p drawbounds" for? (+4 and +14)

cheers! :)

-jonas

zerox_'s icon

it sets the position of the drawn "write $1" id number

orrinward's icon

If you are trying to isolate each individual blob recognised, I have just achieved something like that successfully.

I've been using cv.jit.blobs.centroids to find each blob in the space, and then several instances of cv.jit.label to separate the blobs into their own matrices, making it easy for me to then extract x/y locations from each blob and use them as controllers.

I'm relatively new to Max and this is the way I have found to be very successful in turning objects interpreted by video into trackable controllers for sound.
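For anyone curious what the labeling step conceptually does: cv.jit.label assigns each connected region of foreground pixels its own integer label, so the blobs can be split into separate matrices. A minimal illustrative sketch of connected-component labeling in Python (4-connected flood fill over a binary mask; this is just the idea, not how the external is implemented):

```python
# Assign a distinct integer label to each 4-connected region of 1s.

def label_components(mask):
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and labels[r][c] == 0:
                next_label += 1
                stack = [(r, c)]          # flood-fill from this seed pixel
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and mask[y][x] == 1 and labels[y][x] == 0:
                        labels[y][x] = next_label
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return labels, next_label

mask = [
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 1],
]
labels, count = label_components(mask)   # two separate regions
```

Once every pixel carries its region's label, extracting one blob into its own matrix is just masking on that label value - which is what makes the per-blob x/y extraction orrinward describes straightforward.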

flim's icon

hej orrinward!

nice! :) if I get you right, my approach comes from the opposite direction (bg subtraction).. with the help of cv.jit.blobs.bounds I'll extract every blob that suits my size/proportions. that way I will have quite an amount of blobs detected, but only the important ones will be given "sound admission" - no people interfering.. but therefore my light has to be absolutely stable

I'm quite new to jitter - wouldn't call this modified, just a "tidy" version of xerox's patch:

Max Patch
Copy patch and select New From Clipboard in Max.

now I'll have to make jitter not loosen its grip on my detected shapes, so one can play around with real objects without destroying one's audio engineering by one's mere presence :)

could you maybe post an example patch? cv.jit.blobs.centroids is way appealing for being the easiest way of converting object movement into anything you want (although I'm having the most fun with audio engineering at the moment)..

cheers! & all the best

-jonas

orrinward's icon
Max Patch
Copy patch and select New From Clipboard in Max.

Yes of course I can. I've been using background subtraction as well, but for that I was using the patches found here - https://cycling74.com/tutorials/making-connections-camera-data/

Excuse the junk in there, it's a rough patch. The small top jit.pwindow shows all recognised centroids. The large left pwindow shows background subtraction (tweaked with the gate and background coefficient), and in the bottom right you see 4 pwindows showing individual labels.

I've also shown an example of converting the matrix data to control a pictslider. This is what I have used in my Sound Social installation, http://www.orrinward.co.uk/soundsocial

This patch is the starting point that has allowed me to quite accurately use people on surveillance footage to control the location of sounds within a networked sound environment. It sounds like your setup is built to have clear contrasting image differences so this could work even better with your setup. It works successfully at tracking individual objects on a low-res video feed with a lot of flicker, so yours should be fine.

My patch was made for PC and I just noticed yours is for Mac, so be sure to change any jit.dx.grab to jit.qt.grab

flim's icon

thanks for sharing, orrinward! and nice installation :)

my internet died on me, THEREFORE I'm at a friend's, hehe (and wouldn't have expected to be as lucky as to be given new input/inspiration on my project during my short visit..)

I just now realize how alarmingly uncomfortable I feel without THE WEB. ;)

I'll have a closer look at all of this.. although I haven't made much patching myself in this particular case, I'm happy :) progress (even initial observation) in max is so much fun..

and thanks again for further explanation, xerox!

see you soon

-j

flim's icon

hey orrin!

I did some combining :)

but it gives me the feeling of inefficiency (cv.jit.blobs.bounds seems very CPU-intensive). and I'm having trouble labeling my blobs in a proper way (my blob labels should stick to their objects after initial detection, no matter where they move to). the swapping has to be dealt with..

see you soon!

-jonas

Max Patch
Copy patch and select New From Clipboard in Max.

flim's icon

for anyone who's interested :) so far so good. my blob labels still tend to swap horizontally though. meaning: blobs that have passed the detection requirements (size/proportion) change labels after "overtaking" one another when being moved up & down.. I'll search the forum, this problem should've been dealt with already
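The usual fix for label swapping is to match each new frame's centroids to the previous frame's centroids by distance, instead of trusting the scan-order IDs that labeling produces (this frame-to-frame matching is the kind of thing cv.jit.blobs.sort is for). A hedged Python sketch of greedy nearest-neighbor ID matching, with made-up coordinates:

```python
# Re-assign stable IDs by matching each new centroid to the closest
# previous one, so labels survive blobs passing each other.

def match_ids(prev, new):
    """prev: {id: (x, y)} centroids from the last frame.
    new: list of (x, y) centroids from the current frame.
    Returns {id: (x, y)} with IDs carried over where possible."""
    assigned = {}
    free = list(prev.items())           # previous blobs not yet matched
    for cx, cy in new:
        if free:
            # pick the previous blob closest to this centroid
            i = min(range(len(free)),
                    key=lambda k: (free[k][1][0] - cx) ** 2
                                + (free[k][1][1] - cy) ** 2)
            blob_id, _ = free.pop(i)
        else:
            blob_id = max(assigned, default=-1) + 1  # brand-new blob
        assigned[blob_id] = (cx, cy)
    return assigned

# Two blobs swap horizontal order between frames:
prev = {0: (10, 50), 1: (90, 50)}
new_centroids = [(85, 50), (15, 50)]   # scan order is now reversed
tracked = match_ids(prev, new_centroids)
```

Because matching goes by proximity rather than left-to-right scan order, blob 0 keeps ID 0 even after the two blobs have swapped positions. Greedy matching can still fail when blobs overlap or move faster than their spacing per frame, but for slowly shoved magnet squares it should hold.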

most annoying though: can it be that cv.jit.blobs.bounds eats a lot of CPU? the camera feed doesn't suffer at all - but further patching is quite nerve-wracking (placing patch cords becomes quite a challenge due to the extreme latency)..

any help appreciated..!

all the best! -j

Max Patch
Copy patch and select New From Clipboard in Max.