Laban movement analysis externals, anyone?!?
I'm about to start working on a larger project using accelerometers (four of them: one for each limb of the dancer in the case of a solo, or one for each dancer in the case of a quartet; the accelerometers have 3 degrees of freedom). I am now looking for ways to implement real-time Laban analysis, but I don't have anywhere near the programming or mathematics chops to whip up something like this on my own. So my question to the community is whether an external has already been made by some cleverer personality. Even at a cost I'd still be interested, in fact...
Sounds like a fabulous project! I suspect that would be way beyond the capabilities of a simple external. It would have to perform temporal pattern recognition, probably using pre-trained neural network structures. It would be a major software research project. You'd probably need a fully functioning piece of purpose-built software written in a low-level language to achieve real-time Laban-style analysis. Although who knows... somebody feel free to prove me wrong. Maybe you could find some Laban analysis software and talk to the developers to see if they can adapt it to accept the raw (or pre-processed) data input from Max (i.e. via UDP or some such communication), but whichever way you go I suspect it's a long-term goal...
Thanks, Floating Point! Yes, I suspected there would be much more to it than a simple external... But one can always hope :-) These guys seem to be on to something: https://www.academia.edu/2815841/FLOW_EXPRESSING_MOVEMENT_QUALITY
Nice idea, Kflak! - I guess you dropped some hints about this before. I think Isadora has something along these lines built in - if not you should certainly talk to Mark Coniglio about it. It's something he's been thinking about for ages - I remember literally 20 years ago they had something running using some crazy home-made sort-of motion capture suit. I could also probably put you in touch with Thecla Schiphorst - I've only met her a couple of times, but I've got friends who know her well.
Hi Jo! Nice to meet you in here :-) I would be very interested in hooking up with the guys you mention.
Hi,
I am currently researching a similar field: I want to predict the intention of a movement based on some sort of analysis. How far did you get?
Best, Jonas
Hi Jonas,
After a bit of consideration I let the Laban part of the project slip. It was simply too advanced for what I was trying to achieve. It is still on the back burner, though, and I would very much like to pick it up when my programming skills catch up with the task. In the meantime, I suppose the links I posted above are still valid, and if you are up for a challenge, they might just give you what you need to get going :-) Out of curiosity, what is your project about?
There are a lot of simple things you could do to get good information about the movement intention. The primary one is to extract the delta value of the sensors (the absolute value of the current sample minus the previous one): a high value gives you a high probability that something big is happening, a low value the opposite. Averaging over several data points will then give you more of a time perspective. I found the slide object incredibly useful for this kind of stuff.
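In textual form the idea is something like this (an untested Python sketch; the class name, default slide factor, and scaling are just illustrative):

```python
class DeltaSmoother:
    """Per-sample movement energy, smoothed like Max's [slide] object."""

    def __init__(self, slide=10.0):
        self.prev = 0.0      # previous raw sample
        self.energy = 0.0    # smoothed absolute delta
        self.slide = slide   # higher = smoother, slower response

    def step(self, sample):
        delta = abs(sample - self.prev)   # abs(current - previous)
        self.prev = sample
        # Same recipe as [slide]: y[n] = y[n-1] + (x[n] - y[n-1]) / slide
        self.energy += (delta - self.energy) / self.slide
        return self.energy
```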
Another way of getting specific gesture data is to implement the IRCAM MuBu/gesture follower suite of objects. I haven't been digging into this yet, but it would give you a starting point for statistical analysis of movements. Machine learning, that kind of stuff...
Thanks for the fast answer...
I have been looking at MuBu, and it's great! Probably I will use parts of it anyway. The thing with the current gesture-recognition material I've found is that it needs a, let's call it, absolute gesture. What I am interested in is an estimation of movement qualities. Is a movement rough, or is it controlled? Is it gentle or hard?
My plan is to create a light-ray-based interactive installation where each ray is driven by an agent that reacts to the quality of the movement, with an animal-like behaviour.
I've just found two interesting resources:
http://moco.ircam.fr/
http://movingstories.ca/movingstories/
Great stuff. I would definitely want to participate in the IRCAM workshop if they do it again this year... In theory, it shouldn't be so difficult to pull out the data that you need. I suppose you could do a lot of it with some basic statistical tools, like average values, the amount of deviation from those values, etc. I got a very reliable indicator of a kind of stabbing quality by sending a list through a vexpr object [vexpr abs($f1-$f2)], where $f1 is the current list and $f2 is the previous one, and then setting a threshold after that, something meaningful depending on how you scale it. The higher the threshold, the bigger the movement energy needed to trigger it. I used an approach like this as a trigger for various processes in my project, and it worked beautifully. Combining many different triggers/thresholds from these data already gives you heaps of information to inform whatever process you want to work on, without ever having to resort to actual statistical analysis. I did see someone release a machine-learning library for Max recently here on the forum. For more advanced analysis that might be a way to go as well...
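For what it's worth, the vexpr trick translates to something like this in Python (untested sketch; the threshold value is made up and depends entirely on your scaling):

```python
def stab_trigger(current, previous, threshold=0.3):
    """Python equivalent of [vexpr abs($f1-$f2)] plus a threshold.

    `current` and `previous` are lists of sensor values (e.g. x y z);
    returns True when any element jumps more than `threshold` in one frame.
    """
    deltas = [abs(c - p) for c, p in zip(current, previous)]
    return any(d > threshold for d in deltas)
```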
Nice post! Since the sensors already measure acceleration, the delta reflects the change in acceleration. So what you suggest is to set some thresholds to detect different movement intensities.
MuBu is mainly machine learning; alternatively there is the ml library from the Art Fab of the College of Fine Arts (https://github.com/cmuartfab/ml-lib). They have some more algorithms implemented. MuBu, however, has a built-in way to save and prepare features.
Acceleration is nice for detecting the force and speed of a movement. Currently I am trying to estimate how direct a movement is. My current approach: detect the direction of the movement and estimate how much the current direction differs from the previous one. But my results are not satisfying yet. I need to try some fine-tuning and probably some filtering of the raw data. Some interesting work regarding movement data filtering I just found here: https://www.uio.no/english/research/groups/fourms/projects/sma/subprojects/mocapfilters/index.html
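In rough Python, my approach looks something like this (an untested sketch, just to make it concrete):

```python
import math

def direction(prev, curr):
    """Unit direction vector from position `prev` to position `curr`."""
    d = [c - p for p, c in zip(prev, curr)]
    norm = math.sqrt(sum(x * x for x in d))
    if norm < 1e-9:
        return None  # not moving, direction undefined
    return [x / norm for x in d]

def directness(dir_prev, dir_curr):
    """Cosine of the angle between successive directions:
    close to 1.0 = direct movement, low or negative = indirect."""
    if dir_prev is None or dir_curr is None:
        return 0.0
    return sum(a * b for a, b in zip(dir_prev, dir_curr))
```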
What kind of sensing are you using? I use accelerometers, which can give pretty clear ideas of directionality... Using the change object can be a great way to detect changes of direction: there is a feature there that will only output values if the direction changes from positive to negative, or vice versa. I haven't really tried this approach, but it should yield some usable results.
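In Python terms, that feature of the change object would be roughly this (untested sketch):

```python
def direction_changes(samples):
    """Yield the indices where a 1-D signal reverses direction,
    i.e. where the sign of the per-frame delta flips."""
    prev_sign = 0
    for i in range(1, len(samples)):
        delta = samples[i] - samples[i - 1]
        sign = (delta > 0) - (delta < 0)
        if sign != 0 and prev_sign != 0 and sign != prev_sign:
            yield i
        if sign != 0:
            prev_sign = sign

# e.g. list(direction_changes([0, 1, 2, 1, 0, 1])) -> [3, 5]
```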
Come to think of it, this would be a very good job for an array. Say you store the last 8 samples from the motion sensor in an array; then you should be able to figure out whether it is moving in one direction by comparing the delta values between the different points in time. In very rough pseudocode it would look something like this:
array = [sample1, sample2, ..., sample8]
deltaArray = [sample2-sample1, sample3-sample2, ..., sample8-sample7]
Sum up all the values of deltaArray and divide by its length (7 deltas for 8 samples) to get the average delta over the window. Then compare each delta to that average to get its deviation. If all the deltas are more or less the same, you have constant movement in one direction, i.e. direct use of space in Laban terminology. If they fluctuate wildly, you have indirect use of space. For constant movement, the size of the delta values tells you how big the movement is; if it is small, it is a slow movement. A runnable sketch of this follows below.
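Here is that idea as an untested Python sketch (the thresholds are made up; scale them to your sensor range):

```python
from collections import deque

WINDOW = 8
samples = deque(maxlen=WINDOW)

def analyse(window):
    """Mean delta and spread of deltas over the last WINDOW samples (one axis)."""
    w = list(window)
    deltas = [b - a for a, b in zip(w, w[1:])]   # 7 deltas for 8 samples
    mean_delta = sum(deltas) / len(deltas)
    # Average absolute deviation of each delta from the mean:
    spread = sum(abs(d - mean_delta) for d in deltas) / len(deltas)
    return mean_delta, spread

def on_new_sample(value):
    """Feed every incoming sensor value through this."""
    samples.append(value)
    if len(samples) < WINDOW:
        return None
    mean_delta, spread = analyse(samples)
    direct = spread < 0.1            # steady deltas -> direct use of space
    big = abs(mean_delta) > 0.5      # large deltas -> big/fast movement
    return direct, big
```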
In this way you should pretty much be able to get direct/indirect movement as well as movement energy using only standard Max objects. I haven't tried this approach myself, I am basically just thinking out loud here, and I am sure there are computationally much more elegant ways of doing this... Let me know if you do something along these lines, I would be very happy to hear more about it!!!
And thanks for making me think more about this, I am getting excited about trying this out soon...
I had some trials with a Leap Motion, but I need a bigger space, so I am going to try with a Kinect.
Do you just have an accelerometer, or is it a complete IMU? With an IMU, you can estimate something like relative position. (http://www.x-io.co.uk/gait-tracking-with-x-imu/)
With just an accelerometer, you always have gravity in the measurement. You could calculate the magnitude; at rest the magnitude is 1 g. With an IMU you can subtract the gravity and get the linear acceleration, and this vector indicates the direction of movement. For trials, most smartphones have a built-in IMU, and for both iOS and Android you can find apps that send the raw data over OSC.
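If you only have a raw accelerometer, a crude substitute (not as good as real IMU sensor fusion) is to low-pass the signal to estimate gravity and subtract it. An untested sketch, with a made-up filter coefficient:

```python
import math

ALPHA = 0.98                # low-pass coefficient, tune to your sample rate
gravity = [0.0, 0.0, 0.0]   # running estimate of the gravity vector

def linear_acceleration(accel):
    """accel: raw (x, y, z) reading in g. Returns (vector, magnitude)
    of the acceleration with the gravity estimate removed."""
    global gravity
    gravity = [ALPHA * g + (1 - ALPHA) * a for g, a in zip(gravity, accel)]
    linear = [a - g for a, g in zip(accel, gravity)]
    return linear, math.sqrt(sum(x * x for x in linear))
```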
I'll keep you updated how my project evolves.
I have been using MiniBees with great results: https://www.sensestage.eu
With these I get a very clear reading of where things are in relation to the ground, for example the height and rotation of a hand when using wristbands. I am not sure if they are IMUs in the sense you mean, but I do get absolute readings relative to the ground, though not relative to the space. I have even been able to write a posture-recognition algorithm that works in any orientation (which a Kinect obviously can't do, since there is the problem with sight-lines etc...). For absolute readings with a lot of precision you could probably combine the two.
It seems to be just a single accelerometer, but probably you could easily expand it with an IMU. Also take a look at https://github.com/YCAMInterlab/RAMDanceToolkit/wiki/Overview; it is coded in C++, but probably interesting for you if you work with a kind of skeleton tracking.
Best
Jonas
Cool! Thanks.
I've been playing around a bit here with a Max patch that gives a sense of direction in relation to the previous 7 samples. I haven't field-tested it yet, but the numbers work... One version is based on the average, the other on the median. I have a feeling the average should work better, though the median might work well if there is a lot of noise...
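In textual form the two variants boil down to something like this (untested Python sketch; names and window size mirror what I described above):

```python
from collections import deque
from statistics import mean, median

HISTORY = 7
history = deque(maxlen=HISTORY)

def direction_sense(sample):
    """Compare a new sample against the mean and the median of the
    previous 7 samples. Positive = moving above the recent trend,
    negative = below. The median version should resist single noisy
    spikes better than the mean."""
    if len(history) < HISTORY:
        history.append(sample)
        return None
    vs_mean = sample - mean(history)
    vs_median = sample - median(history)
    history.append(sample)
    return vs_mean, vs_median
```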
Oh damn, looking at the RAMDance now. Incredible stuff... Seems like I have to learn C++... Was kind of hoping to avoid that :-)