My last post got no replies, and I think that's probably because I was asking a bit too much!
So I'm going to try to keep this one simpler.
Can someone tell me: after you've used jit.scissors to split a live camera feed into, say, @columns 4 @rows 1, how do you start individually processing these different matrices so that each one can trigger a sound?
For example, if I wanted the left column to play one sound when it detects a certain amount of movement, and the right column to play a different sound when it does, how would I go about doing this? Also, is there something easier for the camera to pick up than movement, such as colour or brightness?
If it's of any help, I already have the cv.jit package and I'm currently trying to work some of its objects out.
"From here, you might want to know if this motion is happening in a particular region of the scene. To do this we can do a very simple masking operation. We simply supply a matrix with white pixels to designate a region of interest and multiply the output of our frame-differencing patch by this mask."
-> Have a look at the camera data tutorial on the c74 webpage.
-> Once you understand the masking operation in the motion-intersection subpatch, you should easily find the solution.
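To make the masking idea concrete outside of Max, here is a minimal NumPy sketch of the same operation: frame-difference two grayscale frames, multiply by a region-of-interest mask (white pixels = 1), and measure the remaining motion. The function name `motion_in_region` and the toy frame sizes are illustrative assumptions, not part of the tutorial patch:

```python
import numpy as np

def motion_in_region(prev_frame, curr_frame, mask):
    """Frame-difference two grayscale frames, keep only the masked
    region, and return the mean motion value inside it.
    (Illustrative helper, not a cv.jit object.)"""
    # Absolute per-pixel difference between consecutive frames
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    # White (1) pixels in the mask designate the region of interest;
    # multiplying zeroes out motion everywhere else
    masked = diff * mask
    region_pixels = mask.sum()
    return masked.sum() / region_pixels if region_pixels else 0.0

# Toy example: a 4x8 "frame", with movement only in the left half
prev = np.zeros((4, 8), dtype=np.uint8)
curr = prev.copy()
curr[:, :4] = 200                      # change appears in the left columns

left_mask = np.zeros((4, 8), dtype=np.uint8)
left_mask[:, :4] = 1                   # region of interest: left half
right_mask = 1 - left_mask             # region of interest: right half

print(motion_in_region(prev, curr, left_mask))   # high -> trigger sound A
print(motion_in_region(prev, curr, right_mask))  # ~0 -> no trigger
```

In the Max patch this corresponds to multiplying the frame-differencing output by a mask matrix (e.g. with jit.op @op *) and then reading a summary value per region to decide whether to fire a sound; the comparison against a trigger threshold would work the same way for each column's mask.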