I am creating a multitouch interface to control auditory responses. I am familiar with the matrix, and understand setting the parameters for the ARGB planes, but how can I set parameters for IR? Would I only use the Alpha plane?
Ideally my camera will have a bandpass filter to only accept the IR and reduce noise.
Also, how do I make the tracker recognize different objects (touches)?
The cv.jit objects are the place to start. You won't be using all 4 planes of data; the tracking objects require only one plane (via jit.rgb2luma). So I don't think you'll need any kind of filtering on the IR camera before getting it into Max: you can do all the thresholding etc. once the image is in (jit.op @op <, jit.brcosa, and so on).
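To make the thresholding step concrete, here is a minimal sketch in plain Python of what that jit.op comparison does to a single-plane luminance frame: every pixel above a cutoff becomes white, everything else black. The frame data and cutoff value are made up for illustration; in the patch this all happens inside the Jitter matrix.

```python
def threshold(frame, cutoff):
    """Return a binary frame: 255 where the pixel exceeds cutoff, else 0."""
    return [[255 if px > cutoff else 0 for px in row] for row in frame]

# A tiny mock 3x4 "IR camera" frame: one bright touch spot in a dark field.
frame = [
    [10,  12,  11,  9],
    [11, 200, 210, 10],
    [ 9, 198, 205, 12],
]

binary = threshold(frame, 128)
# Only the four bright "touch" pixels survive as 255.
```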
cv.jit.blobs.centroids will report a large number of simultaneous points, but keeping them sorted in a consistent order can be difficult. If you just need coordinates and you don't care what "order" they're in, you're set to go.
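One common approach to the ordering problem (not from this thread, just a standard technique) is greedy nearest-neighbour matching: pair each centroid from the previous frame with the closest centroid in the current frame, so a touch keeps its slot as long as it moves only a little between frames. A hedged sketch, with made-up coordinates:

```python
import math

def match_points(prev, curr):
    """Greedily pair each previous-frame centroid with its nearest
    current-frame centroid, so touch IDs stay stable across frames.
    prev and curr are lists of (x, y) tuples; unmatched new touches
    are appended at the end."""
    remaining = list(curr)
    ordered = []
    for p in prev:
        if not remaining:
            break
        nearest = min(remaining, key=lambda c: math.dist(p, c))
        remaining.remove(nearest)
        ordered.append(nearest)
    return ordered + remaining

prev_frame = [(10, 10), (100, 50)]
curr_frame = [(101, 52), (12, 9)]   # same touches, reported in a new order
stable = match_points(prev_frame, curr_frame)
```

Greedy matching breaks down when touches cross or appear/disappear in the same frame, so it is a starting point rather than a full solution.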
Again, if anyone has some more wisdom on the sorting issue, I’ve put that out there numerous times on the forums and have yet to find a workable solution…
Awesome, thanks! I downloaded the externals and more or less have the thing up and running, only it appears to be functioning in a manner opposite to what I would like.
I am hoping the camera will pick up no light unless some of the IR is frustrated out of the membrane, scattering it towards the camera, and I would like it to localize this light. As it stands, when no light is present the whole camera image goes white (and reads as one giant blob), and if I shine a light (an LED bike light, to test) at the camera, the whole screen goes dark.
Do I need to invert the colors, or do you think I've got it set up wrong?
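If the image really is reversed (bright field, dark touches) rather than mis-wired, an invert is just 255 minus each pixel of the 8-bit frame; if I remember the Jitter idiom correctly, jit.op @op !- @val 255 does this in the patch. A tiny sketch of the operation itself, with made-up pixel values:

```python
def invert(frame):
    """Flip an 8-bit grayscale frame: dark becomes bright and vice versa."""
    return [[255 - px for px in row] for row in frame]

frame = [[0, 255], [40, 200]]
inverted = invert(frame)
```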