I want to grab a small rectangular sample of the computer desktop (with jit.desktop); then, if that rectangle/window moves somewhere else on the screen, I'd like Max to search for it within the full desktop image and report its x/y coordinates.
I could probably get this to work by grabbing candidate rectangles from the large image over and over, pixel by pixel, and comparing each one to the search image until it's found. But is this logic already available in a simple object or approach? Thanks!
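For what it's worth, the brute-force idea described above can be sketched in a few lines of plain Python (this is just an illustration of the logic, not Jitter code — in a real patch you'd be working with jit.matrix data, and `find_patch` is a hypothetical name):

```python
def find_patch(frame, patch):
    """Slide patch over every position in frame; return the (x, y) of the
    top-left corner of the first exact match, or None if not found.
    Images are plain 2D lists of pixel values here."""
    fh, fw = len(frame), len(frame[0])
    ph, pw = len(patch), len(patch[0])
    for y in range(fh - ph + 1):        # every candidate row
        for x in range(fw - pw + 1):    # every candidate column
            if all(frame[y + j][x + i] == patch[j][i]
                   for j in range(ph) for i in range(pw)):
                return (x, y)
    return None
```

Even this toy version shows why the approach is slow: for a W×H frame and w×h patch it does on the order of W·H·w·h comparisons per search, which is why the replies below suggest narrowing the search first.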
No simple way. There are much more sophisticated approaches that would be used as the first step of Augmented Reality marker tracking (looking for squares in a frame).
There are two ways you can do this: by comparing a bitmap image, or algorithmically.
If you are comparing a bitmap image, you need to store the image ahead of time; you can use cv.jit.learn (or undergrad, I forget what it’s called) for simple pattern detection. With a brute-force approach, you’d compare a region of the frame to the pre-stored image and then increment the position if the image isn’t found there. This would be brutally inefficient and time-consuming.
If you are up for the challenge, you might try using cv.jit.lines to narrow your search. If you had a rough idea of how big the square you are looking for is, you could use the boundaries of the lines as a starting point for testing for the image you’re tracking.
The way markerless tracking in AR works is (roughly) this: points of interest are turned into a bunch of vectors for both the live image and the trained marker; the search of the live image is narrowed by looking for patterns only where they fall within a familiar geometry (square-ish); then the models are compared.
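To make the "then the models are compared" step concrete, here is a toy Python illustration (my own simplification, not real AR code): each point of interest is reduced to a descriptor vector, and a candidate region is accepted when its descriptor is close enough to the trained marker's. Closeness here is cosine similarity; real trackers use far more robust descriptors plus geometric consistency checks.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two descriptor vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def matches(marker_desc, candidate_desc, threshold=0.95):
    """Accept the candidate region when its descriptor is close to the marker's.
    The threshold value is an arbitrary choice for this sketch."""
    return cosine_similarity(marker_desc, candidate_desc) >= threshold
```

The point is that once images are reduced to vectors, "is this the thing I trained on?" becomes a cheap numeric comparison instead of a pixel-by-pixel scan.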