I recently saw a video where the user moves his arms and a projection on the wall displays wings attached to his arms. It was very cool.
They used openFrameworks for Kinect tracking, Maya for modeling, and Unity 3D for animating it in real time.
Anyway, I like the idea of matching 3D animations to user interaction using the Kinect. I'm wondering if this is possible using the new features of Max 6, such as the anim objects, etc.? Like trees growing from people's hands, etc.
I was thinking of learning Unity 3D, but I'm already familiar with Max/MSP, so I wanted to ask you guys what you think.
this is possible with max 6 and jit.gl.model.
check out my reply in this thread:
the main idea is to expose the model’s node hierarchy using the "copynodestoclipboard" message.
you can then control the model using whatever input you desire (mouse, Kinect, brainwaves, etc.).
you can also use the jit.anim.node objects to get the absolute world position of parts of the model (e.g., the hand) and bind other objects to that position.
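For anyone curious what "world position from a node hierarchy" means conceptually: each node in the model stores a transform relative to its parent, and the absolute position comes from accumulating those transforms up to the root. Here's a minimal plain-Python sketch of that idea (translation-only for brevity, and not a Max patch; the class and joint names are made up for illustration):

```python
class Node:
    """A node in a transform hierarchy, holding a position relative to its parent."""
    def __init__(self, name, local_pos, parent=None):
        self.name = name
        self.local_pos = local_pos  # (x, y, z) offset relative to the parent node
        self.parent = parent

    def world_pos(self):
        # Walk up to the root, accumulating each ancestor's local offset.
        x, y, z = self.local_pos
        node = self.parent
        while node is not None:
            x += node.local_pos[0]
            y += node.local_pos[1]
            z += node.local_pos[2]
            node = node.parent
        return (x, y, z)

# A toy skeleton: torso -> shoulder -> hand
torso = Node("torso", (0.0, 1.0, 0.0))
shoulder = Node("shoulder", (0.25, 0.5, 0.0), parent=torso)
hand = Node("hand", (0.5, -0.25, 0.0), parent=shoulder)

print(hand.world_pos())  # prints (0.75, 1.25, 0.0)
```

Binding another object (e.g., a growing tree) to the hand then just means setting that object's position to the hand's world position every frame; in Max the jit.anim.node objects do this bookkeeping (including rotation and scale) for you.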
check out that patch, as well as the examples in Max6/examples/jitter-examples/render/model/
let me know if you have further questions.
Perfect, Robert, thanks for the reply!!
© Copyright Cycling '74