FWIW, the documentation is a bit unclear on this. The reference says:
"Kinematic flag (default = 1). Use this mode to disable dynamics on the body, enable collisions, and animate using position and quat attributes."
For starters, the default on kinematic appears to be 0, not 1. Also, it's a bit confusing:
Does kinematic 1 disable dynamics _and_ enable collisions? You have to be able to do both independently, no? From what I see, @kinematic 1 disables dynamics, but collisions are reported whether or not the flag is set. Is that right?
Maybe "still _report_ collisions with other bodies"? (sorry, just being pedantic ;-) )
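To illustrate the distinction being made here, this is a minimal plain-Python sketch (not the actual Max/jit.phys API) of how physics engines typically treat a kinematic flag: the dynamics step skips kinematic bodies entirely, but collision *reporting* tests every body regardless of the flag. The `Body` class, AABB overlap test, and names are all illustrative assumptions, not jit.phys internals.

```python
# Conceptual sketch, NOT jit.phys code: kinematic bodies skip dynamics
# but still participate in collision reporting.

class Body:
    def __init__(self, name, pos, half_size, kinematic=0):  # default 0, as noted above
        self.name = name
        self.pos = list(pos)        # 2D position [x, y]
        self.vel = [0.0, 0.0]
        self.half = half_size       # half-extent of a square collision box
        self.kinematic = kinematic

def step(bodies, gravity=-9.8, dt=1/60):
    """One dynamics step: kinematic bodies are left alone."""
    for b in bodies:
        if b.kinematic:
            continue                # dynamics disabled: no gravity, no integration
        b.vel[1] += gravity * dt
        b.pos[0] += b.vel[0] * dt
        b.pos[1] += b.vel[1] * dt

def collisions(bodies):
    """Report overlapping pairs -- kinematic or not, every body is tested."""
    hits = []
    for i, a in enumerate(bodies):
        for b in bodies[i + 1:]:
            if all(abs(a.pos[k] - b.pos[k]) < a.half + b.half for k in range(2)):
                hits.append((a.name, b.name))
    return hits

zone = Body("zone", (0.0, 0.0), 0.5, kinematic=1)   # static trigger zone
ball = Body("ball", (0.0, 0.05), 0.5)               # dynamic body overlapping it
step([zone, ball])
print(collisions([zone, ball]))     # the pair is reported even though zone is kinematic
```

Under this reading, "enable collisions" in the docs would more accurately be "still report collisions," matching what you observed: the flag changes whether the body *moves*, not whether it is *tested*.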
While I'm on this topic: I happen to have my nose in this because I'm having my first go at jit.phys, & trying to use it for collision detection, which is proving a bit of a challenge. Using matmat's physics 2d detector as a starting point (https://cycling74.com/forums/cameratophysic2d/), I was trying to set up a bunch of arbitrary spots to use for collision detection with video input. Since I was looking for zones, rather than interacting with objects, I set the objects the video image would collide with to @kinematic 1. What I found is that I can only get a collision report when I move a jit.phys.body object into the video image using a 'position' message. If I move the video into the object, I don't get a report (the video image is, of course, also a jit.phys.body object). Am I missing something incredibly obvious here?
My goal is to use the collision reporting ability of the jit.phys.* world to detect if the video image of a user is interacting with various arbitrary (and sometimes moving) zones. So all I'm looking for is collision reports, not any visual activity. If you check out this video, you can see what is happening: https://vimeo.com/80914492.
If I move the video image (which is, as in your example, a [jit.phys.body]) toward another [jit.phys.body], no collision registers. However, if I do the opposite, and move the body toward the video image, a collision is registered, which seems a bit odd.
I tried playing with the depth settings as you suggested, but to little avail. If I set all the bodies to precisely the same depth (6.0), I have the problem above with some objects and not others. Also, they will sometimes respond after being moved. If I move the depth to 5.5, every object registers a collision, even if there is no image present. If I move it above 6.5, nothing works, and the video image ceases to show any structure when you turn on [jit.gl.physdraw]. Do you have an idea of how depth interacts with collisions when @remove_plane is set to 3?
yeah i can reproduce some strange things here.
i would try it without the remove_plane attribute, and see if that improves things.
since you don't have any dynamic objects, the remove_plane is unnecessary.
if it's still not working reliably, there might be a better way to do this by substituting the dynamicmesh for a phys.multiple using simple spheres.
if i remember correctly, mathieu came up with a pretty ingenious solution for this, turning a matrix into a phys.multiple grid.
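For what the matrix-to-grid mapping might look like conceptually, here is a hedged plain-Python sketch (not an actual Max patch, and not mathieu's solution, which I haven't seen): each matrix cell above a threshold becomes one sphere position in a centered world-space grid, the kind of position list you might then feed to something like [jit.phys.multiple]. The function name, threshold, and cell-size parameter are all illustrative assumptions.

```python
# Hypothetical sketch: map a 2D matrix (e.g. a thresholded video mask)
# onto a grid of sphere positions for collision testing. Cells below
# the threshold simply produce no sphere.

def matrix_to_sphere_grid(matrix, cell_size=0.1, threshold=0.5):
    rows = len(matrix)
    cols = len(matrix[0])
    positions = []
    for j, row in enumerate(matrix):
        for i, v in enumerate(row):
            if v >= threshold:
                # center the grid on the origin; +y is up, so row 0 is the top
                x = (i - (cols - 1) / 2) * cell_size
                y = ((rows - 1) / 2 - j) * cell_size
                positions.append((x, y, 0.0))
    return positions

# a tiny 3x4 mask standing in for a downsampled camera frame
mask = [
    [0, 0, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(matrix_to_sphere_grid(mask))
```

The appeal of this approach is that simple spheres are cheap collision shapes, so a coarse grid (say 16x12 cells rather than per-pixel) keeps the body count manageable while still tracking the silhouette.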
The deeper I go with it, the weirder it gets, I can tell you. I get no results at all with remove_plane set to 0. I've been through a billion combinations -- I'm surprised the patch I sent is behaving predictably at all. I've had an object respond to video if I move it (at all) first, if I bump it into the video first, if I send position data from [pak position 0. 0. 0.] instead of dumping it from a coll -- but none of it consistently. They respond more reliably if the objects are _not_ in a poly~ -- I mean, it's just crazy. If it would help, I'll send you the whole megillah off-list -- let me know.
I'll look into using phys.multiple & send a progress report.
Well, after a few hours of headbanging, I have failed to figure out how to get an image matrix into a phys.multiple grid. My head hurts. My sour grapes approach is to think it wouldn't be very efficient....
Thanks very much! It works somewhat better, but it's still kind of unreliable for my needs, unfortunately. Plus, it generates a huge barrage of collision data, most of which has to be thrown away, and it's too complex to parse for my purposes. Very educational, however -- thank you very much. There are possible uses for future projects, but for this one, I'm going with a less fancy approach, I think...