networking multiple kinects
Hi,
This is not regarding a specific max/msp/jitter problem. I am wondering about the feasibility of a project involving a number of kinects (about 6) that are used in conjunction to detect the number and positions of multiple visitors in a fairly large room (abt 23' by 26') and use the output to control sound and video projections. I have been searching for similar projects on the internet but haven't had any luck. Have people developed a way for data from multiple kinects to be synchronized in such a way that they act as a single surveillance system?
Also, what is the max number of kinects that can be connected to a single computer?
Hope you can provide some relevant examples, if there are any.
thanks in advance.
I think 6 is the max; give each one an open $1 message, where $1 is the device number.
If you are networking, consider sending the result of the analyzed matrices (often a much smaller matrix) rather than trying to send full-sized matrices over the network.
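To see why this matters, compare one full depth frame with an analyzed result. A back-of-the-envelope sketch (the figures assume the Kinect's 640x480 float32 depth map and six tracked visitors; those numbers are assumptions, not from the thread):

```python
# Bytes per frame for a raw 640x480 float32 depth matrix
FRAME_BYTES = 640 * 480 * 4          # 1,228,800 bytes, ~30x per second

# Bytes per frame if you only send visitor positions instead:
# six visitors, (x, y, z) as float32 each
VISITORS = 6
POSITION_BYTES = VISITORS * 3 * 4    # 72 bytes

print(FRAME_BYTES // POSITION_BYTES)  # raw frame is ~17000x larger
```

Sending centroids, bounding boxes, or joint coordinates over OSC keeps you comfortably within what a LAN (and Max's scheduler) can handle.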
jit.spill and the cv.jit objects will be your friends. Probably also mxj net.maxhole.
Good luck, sounds like fun!
Here's something I made a while ago that you might find useful for sending Kinect data over a network: patches to convert between the float32 depth matrix from the Kinect and a 2-plane char matrix, with the data split into an MSB and LSB.
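The patches themselves aren't shown here, but the conversion idea is easy to sketch. A hypothetical Python version, assuming depth is quantized to millimetres in a 16-bit range (the scale factor is an assumption, not taken from the patches):

```python
# Sketch: pack a float32 depth value (metres) into two 8-bit planes
# (MSB/LSB) so it fits a 2-plane char matrix, and unpack it again.

def depth_to_msb_lsb(depth_m, scale=1000.0):
    """Quantize depth in metres to a 16-bit millimetre value,
    then split it into most- and least-significant bytes."""
    d = max(0, min(65535, int(round(depth_m * scale))))
    return d >> 8, d & 0xFF  # (MSB plane, LSB plane)

def msb_lsb_to_depth(msb, lsb, scale=1000.0):
    """Recombine the two char planes into depth in metres."""
    return ((msb << 8) | lsb) / scale

msb, lsb = depth_to_msb_lsb(1.234)   # 1234 mm -> (4, 210)
print(msb_lsb_to_depth(msb, lsb))    # 1.234
```

Splitting into two char planes like this lets the depth travel through char-based transports (e.g. jit.net.send with char matrices, or image codecs) without losing the 16-bit precision.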
Hello all
Does anybody have confirmation of how many Kinect cameras can be connected to Max/MSP?
I'm looking for this same information on OSX.
Also, what is the maximum distance from the Kinect to the subject?
thanks
I have 2 Kinects generating data for a single 3D scene. It could easily be scaled up to more. From this thread: https://cycling74.com/forums/kinect-perspective-correction
> I've got 2 computers each running a Processing sketch which uses the SimpleOpenNI library for Kinect skeleton analysis. They both send their joint coordinates over OSC to Max running on one of the computers.
I draw the joints into a 3D GL context so I can manually rotate and translate the 2 skeletons until they overlap pretty well (compensating for the 120° horizontal angle, vertical angle, etc.). I do this visually; I'm sure there are more scientific methods, but it works for me for now. (Any tips on this?)
Then I have some logic to merge the 2 skeletons into 1 resulting skeleton. I iterate through all of the joints. If I have a valid reading (according to the confidence parameter) in both skeletons, I take the average coordinate. If one is valid and one is invalid, I take only the valid one. If neither is valid, I don't update the joint.
Lastly I apply smoothing to the resulting skeleton.
(Btw, I hope that at some point the OpenNI/NITE libraries will have this built in, as well as a fix for the 2-cams-on-1-computer skeleton tracking issue. It doesn't work as it should for now.) <
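The per-joint merging rule described in that quote can be sketched in a few lines. A hypothetical Python version (joint names, the data layout, and the confidence threshold are all assumptions for illustration):

```python
# Sketch of the merge logic: average when both readings are confident,
# keep the single confident one otherwise, and leave the joint alone
# (so the last good value persists) when neither is trustworthy.

CONF_THRESHOLD = 0.5  # assumed confidence cutoff

def merge_skeletons(skel_a, skel_b, merged):
    """skel_a/skel_b map joint name -> (x, y, z, confidence).
    merged maps joint name -> (x, y, z) and is updated in place."""
    for joint in set(skel_a) | set(skel_b):
        ra = skel_a.get(joint)
        rb = skel_b.get(joint)
        ok_a = ra is not None and ra[3] >= CONF_THRESHOLD
        ok_b = rb is not None and rb[3] >= CONF_THRESHOLD
        if ok_a and ok_b:
            merged[joint] = tuple((p + q) / 2 for p, q in zip(ra[:3], rb[:3]))
        elif ok_a:
            merged[joint] = ra[:3]
        elif ok_b:
            merged[joint] = rb[:3]
        # neither valid: keep the previous value of merged[joint]

merged = {"head": (0.0, 0.0, 0.0)}
a = {"head": (1.0, 1.0, 1.0, 0.9), "l_hand": (2.0, 2.0, 2.0, 0.1)}
b = {"head": (3.0, 3.0, 3.0, 0.9), "l_hand": (4.0, 4.0, 4.0, 0.9)}
merge_skeletons(a, b, merged)
print(merged["head"])    # averaged: (2.0, 2.0, 2.0)
print(merged["l_hand"])  # only the confident reading: (4.0, 4.0, 4.0)
```

The final smoothing step mentioned above would then run on the values in `merged`, e.g. a simple one-pole lowpass per coordinate.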
About multiple Kinects, if you only use the depth map (no skeleton tracking) you can have multiple on one machine. Don't know if there's an upper limit but each has to be connected to its own USB bus (multiple USB ports can be on the same USB bus, mind you). I got an extra PCIe USB card with 4 independent busses on it for this.
But if you need skeleton tracking, it doesn't work. Either there's a bug or it's simply not supported by the NITE libraries. Skeletons come out very jittery and unstable, as if the data from the different Kinects gets mixed up. So in that case you need one computer per Kinect.
Lastly, it appears 2 Kinects in one space don't interfere much with each other. Not sure if this will still be true with 6. That makes for a lot of structured IR light projected in the room.
Maximum distance for skeleton tracking is 4.5-5m (a hardcoded library limitation). Depth doesn't have that limitation, but noise increases significantly with distance; it depends on your accuracy needs.
Thanks a lot DTR for your detailed experience.
Are you on Windows?
I also posted on another thread, looking for anyone who has had success running multiple Kinects (and multiple instances of OpenNI/NITE) on a single machine. I need to do a project with two Kinects in two different locations simultaneously, feeding a single machine that will use the two inputs to control two different portions of a single animation.
Re: dtr, did you have enough difference in the two Processing sketches, and their resulting OSC outputs, to separate the data when you tried with a single computer? I'm going to use Synapse instead, but not sure if I should try running two instances of Synapse, or rebuild the main to handle multiple Kinect IDs within a single application... or maybe to use Synapse to build a Max/jitter standalone or import the code into a Gen object (which I haven't played with yet). Any suggestions welcome!
>Re: dtr, did you have enough difference in the two Processing sketches, and their resulting OSC outputs, to separate the data when you tried with a single computer?
Sorry I don't understand what you mean by that.
@marie-helene: no, I'm on Mac.