Sep 25 2011 | 6:31 pm

Hello, first post.

I'm working on a multi-sensor setup (3-axis gyro and 3-axis accelerometer) that I plan on using in a multimedia performance project (www.takahashisshellfishconcern.com).

I'm using an Arduino-compatible board with built-in sensors, sending the data wirelessly over XBee modules, and getting it into Max with the [serial] object. The sensors will be mounted on my wrists to gather angle/orientation/motion data while I paint. (See the videos on the webpage linked above.)

I have the data coming in fine (though not calibrated at the moment).

Now I've been doing lots of reading on how to do something meaningful with the numbers. The primary goal is to determine the absolute angles of the sensors and map them to processing in Max. From what I've read, this requires some kind of software filtering; I've run into several examples/types, including Kalman filtering, complementary filtering, and MARG filtering.
I know that to determine all three absolute angles I would need to incorporate a magnetometer, which is something I'm considering, but given the example below, I think I can compensate for the drift/error accumulation.

This looks ideal:
http://www.youtube.com/watch?v=Egl75nv9E7s

And here is the C code posted on the project page:
http://code.google.com/p/imumargalgorithm30042010sohm/downloads/list

There's also a (lengthy) paper covering the math involved.

I must admit most of the math is waaaay over my head, particularly in the paper.

So basically I have some C code that does exactly what I need (converting 6DOF sensor data into three absolute angles, with drift only around the yaw axis).
I tried building the math in Max, but I'm at a loss on how to do most of it, given the order of operations combined with Max's right-to-left execution paradigm.

Is it possible to do this kind of calculation in straight Max, or do I need to build an external to do the math?

If it can't happen in Max, is there a more Max-native/object-oriented solution for determining absolute angles from a sensor array?

A much simpler question here: what is the best way to calibrate the sensors? Leave the board sitting flat, take a [mean] of the readings, then offset subsequent values by that amount? Would I lose range in one direction by doing it that way? (For example, my gyro data seems to hover around 0.2 and goes from 0. to 1.; if I center it to 0.5, will it only go from 0.3 to 1. after calibration?)

Here is the main body of the C code:

// Globals declared elsewhere in the project: the quaternion estimate
// q0, q1, q2, q3 (initialised to 1, 0, 0, 0), the integral error terms
// exInt, eyInt, ezInt, and the tuning constants Kp, Ki and halfT
// (half the sample period).
void IMUupdate(float gx, float gy, float gz, float ax, float ay, float az) {
    float norm;
    float vx, vy, vz;
    float ex, ey, ez;

    // normalise the accelerometer measurements
    norm = sqrt(ax*ax + ay*ay + az*az);
    ax = ax / norm;
    ay = ay / norm;
    az = az / norm;

    // estimated direction of gravity
    vx = 2*(q1*q3 - q0*q2);
    vy = 2*(q0*q1 + q2*q3);
    vz = q0*q0 - q1*q1 - q2*q2 + q3*q3;

    // error is sum of cross product between reference direction
    // of field and direction measured by sensor
    ex = (ay*vz - az*vy);
    ey = (az*vx - ax*vz);
    ez = (ax*vy - ay*vx);

    // integral error scaled by integral gain
    exInt = exInt + ex*Ki;
    eyInt = eyInt + ey*Ki;
    ezInt = ezInt + ez*Ki;

    // adjusted gyroscope measurements
    gx = gx + Kp*ex + exInt;
    gy = gy + Kp*ey + eyInt;
    gz = gz + Kp*ez + ezInt;

    // integrate quaternion rate and normalise
    q0 = q0 + (-q1*gx - q2*gy - q3*gz)*halfT;
    q1 = q1 + (q0*gx + q2*gz - q3*gy)*halfT;
    q2 = q2 + (q0*gy - q1*gz + q3*gx)*halfT;
    q3 = q3 + (q0*gz + q1*gy - q2*gx)*halfT;

    // normalise quaternion
    norm = sqrt(q0*q0 + q1*q1 + q2*q2 + q3*q3);
    q0 = q0 / norm;
    q1 = q1 / norm;
    q2 = q2 / norm;
    q3 = q3 / norm;
}

Here is my Max code (which doesn't work due to a stack overflow):