I am accustomed to thinking about projected matrices as having x,y coordinates relative to the jit.window. Objects tracked by the camera also have x,y coordinates, so it is easy to detect whether the tracked object is in the space of the projected matrix.
I am having trouble thinking about how to apply this to rotated matrices, videoplanes, slabs, or gridshapes. I guess there is probably an expression I could write that takes rotation into consideration.
I’m sure I will figure this out soon, but I’m hoping to find a simple, efficient solution; examples or suggestions would be great.
perhaps what you are looking for are the "screentoworld" and "worldtoscreen" messages to jit.gl.render.
opengl positions objects relative to the origin (0,0,0). this includes the camera attribute of jit.gl.render. so moving the camera doesn’t affect an object’s position in world space, but does affect its position on the screen.
searching for those two terms should provide some hints.
Now I understand that my question was very unclear. I am working on a multitouch table. The camera tracks blobs of light reflected by fingers on the table. I want to know when a person is touching a projected videoplane. When the videoplane is aligned with the grid of the matrix, I can find out if the x,y coordinates of the finger are between the x,y coordinates of the corners of the videoplane. When the videoplane is rotated, I am not sure what to do. There will be multiple videoplanes, all will be rotated at different angles.
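One common way to handle this, sketched below in Python purely as an illustration of the math (the function name and parameters are my own, not anything from Jitter): rather than trying to describe the rotated videoplane's edges directly, rotate the finger's coordinates *backwards* by the plane's rotation angle, around the plane's center. In that local frame the videoplane is axis-aligned again, so the same between-the-corners check you already use still works. The same expression could be written in an `expr` object or a `js` script inside Max.

```python
import math

def point_in_rotated_rect(px, py, cx, cy, w, h, angle_deg):
    """Return True if point (px, py) lies inside a w-by-h rectangle
    centred at (cx, cy) and rotated angle_deg counter-clockwise.

    Strategy: translate the point so the rectangle's centre is the
    origin, apply the inverse rotation, then do a plain axis-aligned
    bounds check against the half-width and half-height."""
    a = math.radians(-angle_deg)               # inverse of the plane's rotation
    dx, dy = px - cx, py - cy                  # point relative to rect centre
    lx = dx * math.cos(a) - dy * math.sin(a)   # point in the rect's local frame
    ly = dx * math.sin(a) + dy * math.cos(a)
    return abs(lx) <= w / 2 and abs(ly) <= h / 2
```

With multiple videoplanes at different angles, you would run each finger blob through this test once per plane, using that plane's own center, size, and rotation. Just make sure the finger coordinates and the plane parameters are in the same coordinate space first (this is where the worldtoscreen/screentoworld messages mentioned above can help).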