Forums > Jitter

training cv.jit.learn merely by altering screenshots?

July 4, 2010 | 5:04 pm

hi!

for those who have worked with cv.jit.learn: I find it quite cumbersome to train cv.jit.learn with live camera footage, so I'm going to try training my object with an automated process that "edits" single images (an initial screenshot taken from my cam feed).

by doing this I hope to avoid having to move my recognition shapes around, closer and further away, all manually in front of the camera.

in my case I’m talking about a square on a wall which can be spun around. could this turn out to be more reliable than live training in front of the camera?

I’ll do some tests next week. in case you have also trained cv.jit.learn with stills (for live camera assignments), please let me know how it worked out!

cheers!

-jonas


July 5, 2010 | 1:29 pm

It could be much less reliable.

If you move an image around digitally, it keeps essentially the same shape, which means cv.jit.moments will always output the same values: the moments and invariants don't change with position. In real life, though, because of lens distortion and changes in lighting (which you should nevertheless minimize), the shape is going to change slightly depending on where your object is. That's why it helps a lot to show cv.jit.learn patterns taken at different positions.

Now, you could fake lens distortion, but at that point it's probably more trouble than it's worth. Your best bet is to add a little salt and pepper noise to roughen the shape's edges. You might also try slight changes in width-to-height ratio at various rotations, as well as slight slanting. Those are techniques sometimes used in image recognition to train algorithms.
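[Not a Max patch, but for anyone wanting to script this outside Jitter: the perturbations described above — salt and pepper noise, rotation, aspect-ratio changes, slanting — can be sketched in Python/NumPy. All function names here are my own illustration, not part of cv.jit:]

```python
import numpy as np

rng = np.random.default_rng(0)

def salt_and_pepper(img, amount=0.05):
    """Flip a random fraction of pixels in a binary image to roughen edges."""
    noisy = img.copy()
    mask = rng.random(img.shape) < amount
    noisy[mask] = 1.0 - noisy[mask]
    return noisy

def affine_sample(img, matrix):
    """Nearest-neighbour resampling of img through a 2x2 inverse affine map,
    applied about the image centre."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    coords = np.stack([ys - cy, xs - cx])          # output-pixel coordinates
    src = np.tensordot(matrix, coords, axes=1)     # map back to source coords
    sy = np.round(src[0] + cy).astype(int)
    sx = np.round(src[1] + cx).astype(int)
    valid = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
    out = np.zeros_like(img)
    out[valid] = img[sy[valid], sx[valid]]
    return out

def rotate(img, degrees):
    """Rotate the shape about the image centre."""
    t = np.deg2rad(degrees)
    m = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return affine_sample(img, m)

def stretch(img, y_scale=1.0, x_scale=1.0):
    """Change the width-to-height ratio slightly."""
    m = np.array([[1.0 / y_scale, 0.0],
                  [0.0, 1.0 / x_scale]])
    return affine_sample(img, m)

def slant(img, shear=0.1):
    """Apply a slight shear (slant) to the shape."""
    m = np.array([[1.0, 0.0],
                  [shear, 1.0]])
    return affine_sample(img, m)

# generate perturbed variants of a synthetic white square, as training stand-ins
square = np.zeros((64, 64))
square[20:44, 20:44] = 1.0
variants = [
    salt_and_pepper(
        slant(
            stretch(rotate(square, rng.uniform(0.0, 360.0)),
                    y_scale=rng.uniform(0.9, 1.1),
                    x_scale=rng.uniform(0.9, 1.1)),
            shear=rng.uniform(-0.1, 0.1)),
        amount=0.02)
    for _ in range(10)
]
```

[Each variant could then be written out and fed to cv.jit.learn in place of live footage. As Jean-Marc notes, the perturbation ranges should model the variation you actually expect at deployment.]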

The training doesn’t have to be live, but if you’re going to do some simulations, you have to be really careful that they accurately model the sort of variation that you would expect during deployment.

In any form of machine learning, training your model is always the most difficult and critical part. That's why it's very hard for anyone but Google to pull off something like Google Goggles. They have the mother of all datasets to train their recognition algorithms.

Jean-Marc


July 5, 2010 | 1:57 pm

thanks, jean-marc!

since there's no pause/continue function, does this mean I have to load the file of my 1st pose as soon as the 2nd one's ready, save once that's finished, and keep proceeding like that with all my poses so everything ends up in one file? in other words, changing pose should only happen after saving (and before loading) my .mxb?

I just want to be really sure this is the old school way of doing it, since I've got quite a few poses :)

all the best

-jonas


May 4, 2011 | 12:09 pm

please could someone answer questions like these? :’(

