I'm building a patch that turns motion into sound using the cv.jit.faces object.
The patch itself works just fine – no issues there. The problem comes when I try to bundle the patch into an application. I looked around and downloaded the cg.framework, added that to the ‘support’ folder, and included the jit.openexr object in the ‘support’ folder of the bundle as well. I also included the requisite objects from the cv.jit library (including the haar…xml file).
Unfortunately, when I open the application, the face tracking does not work. I’m all out of ideas.
Any helpful suggestions? I ought to be submitting this on Wednesday as part of a working draft at University and if anybody has any ideas, it would be appreciated. The patch works fine, just not as a bundle!
Yes! We are having the exact same problem.
Can anyone help?
In case it’s of any help…
I encountered this problem while building a standalone with cv.jit.faces in it (OS X 10.6.8, Max 5.1.9).
The tracking won’t work…
After some trial and error, I can get the object working in the standalone by manually sending the read command and then manually selecting the file haarFaceCascade2.xml from the file dialog. A read message including the path and filename doesn’t work… so I can’t get Max to start the object up by itself.
This is a problem because I need it in a standalone application for an artwork in two weeks and it has to run automatically straight from a reboot.
Are there any workarounds known?
I emailed with Jean-Marc and he is aware of the problem. There must be something wrong with Max’s search paths; he has no clue why the full path to the xml doesn’t work in a standalone. Anyway, he gave me the helpful hint to use the Max Runtime instead, which solved my problem.
If anyone can shed some light on this search-path issue, suggestions are welcome.
I found a way to make it work on Mac: if you don’t include the path when building the application, but instead copy the haarCascade xml manually into the package, under Contents/Resources/, it works for me. Cheers!
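That manual copy can be scripted so it’s repeatable for each build. Here’s a minimal sketch, assuming hypothetical paths for the app bundle and the cascade file (both are placeholders of mine, not from the post – adjust to your own build):

```python
import shutil
from pathlib import Path

def install_cascade(app_bundle, cascade_file):
    """Copy a Haar cascade xml into a standalone's Contents/Resources/
    folder -- the same place the manual workaround above puts it.
    Both arguments are placeholder paths, not from the original post."""
    dest = Path(app_bundle) / "Contents" / "Resources"
    dest.mkdir(parents=True, exist_ok=True)  # Resources/ may not exist yet
    copied = shutil.copy(str(cascade_file), str(dest / Path(cascade_file).name))
    return Path(copied)

# Example with placeholder paths:
# install_cascade("/Applications/FaceTracker.app", "haarFaceCascade2.xml")
```

Running it once after each build saves re-doing the Finder dig into the package contents.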
…or send "read" and choose the xml file :)
@DuncanTaylor I understand this is kind of an old thread for you and you may no longer have the source content readily available, but I am very interested in your project.
I am trying to design something that needs a set of x/y coordinates based on the position of the face recognized within the webcam’s video matrix. It sounds as if you might have had to create similar calculations to convert the face recognition to sound.
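Concretely, my understanding is that cv.jit.faces reports each detected face as a bounding box (left, top, right, bottom, in pixels), so the x/y position would just be the normalized centre of that box. Something like this sketch of the arithmetic – the pitch mapping at the end is only a placeholder of mine, not anything from your patch:

```python
def face_center(box, width, height):
    """Convert a face bounding box (left, top, right, bottom, in pixels)
    into normalized x/y coordinates in the range 0..1, given the
    dimensions of the video matrix."""
    left, top, right, bottom = box
    cx = (left + right) / 2.0 / width
    cy = (top + bottom) / 2.0 / height
    return cx, cy

def center_to_pitch(cx, lo=200.0, hi=800.0):
    """Placeholder mapping: normalized x position to a frequency in Hz,
    on a simple linear scale between lo and hi."""
    return lo + cx * (hi - lo)
```

In Max itself the same division and scaling could be done with expr or scale objects, but the idea is the same: box centre divided by matrix size, then mapped to a sound parameter.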
If this notion is accurate, please point me in whatever direction you see fit to pursue this result. If you could share some of the patch that worked well for you (anything that could help me), please do; if that’s unreasonable given your line of work, I understand.
Thanks so much for your help,