How to mix Eul & Quater (maths and motion sensors)
So pardon the poor 'oil & water' pseudo-pun in the title, but I recently got a fancy pants motion sensor (NGIMU) and am struggling with how to best deal with the on-board sensor fusion stuff.
Basically it's a 9DOF IMU that has on-board sensor fusion algorithm stuff so you can get calibrated and drift-free orientation sensing. Pretty slick.
So it can output a ton of different things, but what I was focussing on was the Euler output (roll, pitch, yaw) as I can conceptually make sense of that in terms of what the movement means, but also, more importantly, how to map that onto parameters. As in, roll = speed, pitch = frequency, yaw = amplitude, etc... (to use a basic example).
This works fine BUT if your pitch is pointing straight up (which is pretty common in a sensor/IMU context) the Euler output suffers from gimbal lock and the other axes go crazy.
So that leads me to Quaternions (which it can also spit out). Now my understanding of math is not great, but I gather that this is a more ideal representation because it does not suffer from the same 'gimbal lock' problem. Now the problem is, I don't really understand quaternions. I mean this in an absolute sense, but more practically, in how to make use of them musically/mapping-ly.
So my roundabout question is, when dealing with quaternions, how do you then correlate that to something euler-esque? Or something that makes musical/conceptual sense (moving this way causes this corresponding sonic phenomenon, etc...).
On a smaller Max-specific note, if I use something like [jit.quat2euler] on the quaternion output from the sensor, will that too suffer from gimbal lock? As in, is gimbal lock a phenomenon inherent to an euler representation of orientation, or is it a byproduct of outputting euler coordinates from an algorithm?
A Monday evening bump before letting this fade into forum limbo...
I don't know much about this, but probably your best bet is to try doing a conversion from quaternion to euler representation and then remap to audio parameters. Why don't you just try it to see what happens?
I've given that a test again and I'm not really understanding what's going on here, or why.
If I take the [jit.quat2euler] output, I get the euler version.... sort of. Two of the axes are swapped around (roll becomes yaw and vice versa) and more importantly, it still seems to suffer from gimbal lock.
Now I accidentally discovered something weird (when connecting cables around the patch) in that if I take the first three values of the quaternion, those seem to somewhat represent the euler coordinates in a usable way (but again, out of order, with roll = yaw).
Having a look at the wiki for quaternions doesn't shed much light (at my level of maths), so I have no idea why this would be the case.
It's definitely not perfect as it doesn't seem to wrap around perfectly (like if I do a 360° spin on the yaw axis, the values are now a bit weird for that axis, etc...)
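One thing worth noting on the wrap-around: an atan2-based yaw only ever reports values in the -180 to 180 range, so after a full turn the raw number jumps instead of continuing past 360. If a continuous value is wanted, a small unwrapping step fixes it (a JavaScript sketch; the function name is mine):

```javascript
// Unwrap an angle stream so it runs continuously past +/-180 degrees
// instead of jumping. Keeps a running offset based on the previous sample.
function makeUnwrapper() {
  let prev = null;  // last raw angle seen
  let offset = 0;   // accumulated full turns, in degrees
  return function unwrap(raw) {
    if (prev !== null) {
      const diff = raw - prev;
      if (diff > 180) offset -= 360;       // jumped from ~+180 to ~-180
      else if (diff < -180) offset += 360; // jumped from ~-180 to ~+180
    }
    prev = raw;
    return raw + offset;
  };
}
```

Feeding it 170 then -170 yields 170 then 190, i.e. the motion reads as continuing through 180 rather than snapping back.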
On euclideanspace.com there is a page on converting quaternions to Euler angles; the first two paragraphs on that page explain (basically a set of formulas) how to do the conversion and avoid gimbal lock -- it looks like you could implement that in Max with a few expression boxes. Sorry I can't provide the exact URL -- I can't copy-paste into this forum for some reason. It also seems to explain why your roll-yaw axes are reversed (NASA conventions versus VRML conventions).
Oh man, that's so useful!
I actually had to google my way to the conversion page as the navigation of that page is pretty funky.
The NASA vs VRML thing is a godsend as I was thinking I was going crazy there! I'll look at the page(s) further, but it's unclear to me if that only refers to the Euler representation or if there is also a "NASA quaternion" vs "VRML quaternion" thing. As in, jit.quat2euler spits out the 'wrong' representation, so I'm wondering if there's a funny math thing going on in the quaternions I'm getting out (the 'spinning object' UI from the jit.quat2euler helpfile doesn't look right coming out of the sensor).
I had done some manual quat to euler conversion a couple years ago in Max when trying to turn the output from a Myo sensor into usable euler angles too.
I used this formula from the OpenFrameworks library:
// Quaternion in (x, y, z, w) order.
// Calculate Euler angles (roll, pitch, and yaw) from the unit quaternion.
float roll  = atan2(2.0f * (quat.w() * quat.x() + quat.y() * quat.z()),
                    1.0f - 2.0f * (quat.x() * quat.x() + quat.y() * quat.y()));
float pitch = asin(2.0f * (quat.w() * quat.y() - quat.z() * quat.x()));
float yaw   = atan2(2.0f * (quat.w() * quat.z() + quat.x() * quat.y()),
                    1.0f - 2.0f * (quat.y() * quat.y() + quat.z() * quat.z()));
And I turned it into this:
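(For reference, here's that same formula written out as plain JavaScript - my own transcription, not the patch itself, assuming an xyzw-ordered unit quaternion and radians out:)

```javascript
// Euler angles (roll, pitch, yaw) from a unit quaternion in xyzw order,
// transcribed from the OpenFrameworks-style formula above. Angles in radians.
function quatToEuler(x, y, z, w) {
  const roll  = Math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y));
  const pitch = Math.asin(2 * (w * y - z * x));
  const yaw   = Math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z));
  return [roll, pitch, yaw];
}
```

Sanity check: the identity quaternion (0, 0, 0, 1) gives all zeros, and a 90-degree turn about z shows up purely in the yaw term.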
The version from that webpage (also quoted below) seems to account for gimbal lock ("singularities"), so I'll try to implement that in Max, though I'm still unsure about the order of quaternions coming out of the sensor. (this is the relevant code from the sensor)
For future forum searching the link in question is:
http://www.euclideanspace.com/maths/geometry/rotations/conversions/quaternionToEuler/
Here are the equations they mention:
heading  = atan2(2*qy*qw - 2*qx*qz, 1 - 2*qy^2 - 2*qz^2)
attitude = asin(2*qx*qy + 2*qz*qw)
bank     = atan2(2*qx*qw - 2*qy*qz, 1 - 2*qx^2 - 2*qz^2)

except when qx*qy + qz*qw = 0.5 (north pole), which gives:
heading = 2 * atan2(qx, qw)
bank = 0

and when qx*qy + qz*qw = -0.5 (south pole), which gives:
heading = -2 * atan2(qx, qw)
bank = 0
And the java representation:
/** assumes q1 is a normalised quaternion */
public void set(Quat4d q1) {
    double test = q1.x*q1.y + q1.z*q1.w;
    if (test > 0.499) { // singularity at north pole
        heading = 2 * Math.atan2(q1.x, q1.w);
        attitude = Math.PI/2;
        bank = 0;
        return;
    }
    if (test < -0.499) { // singularity at south pole
        heading = -2 * Math.atan2(q1.x, q1.w);
        attitude = -Math.PI/2;
        bank = 0;
        return;
    }
    double sqx = q1.x*q1.x;
    double sqy = q1.y*q1.y;
    double sqz = q1.z*q1.z;
    heading = Math.atan2(2*q1.y*q1.w - 2*q1.x*q1.z, 1 - 2*sqy - 2*sqz);
    attitude = Math.asin(2*test);
    bank = Math.atan2(2*q1.x*q1.w - 2*q1.y*q1.z, 1 - 2*sqx - 2*sqz);
}
(rather than edit the lengthy post above, here is the relevant info from poking through the NGIMU code)
So there is a different convention for the quaternion representation too.
The OpenFrameworks quaternion2euler code I have above presumes (x,y,z,w) whereas the NGIMU is calculating off (w,x,y,z).
AND
They represent the Euler output in "Aerospace sequence" (ZYX) instead of (XYZ).
From the code:
/**
 * @brief Quaternion. This library uses the convention of placing the 'w'
 * element as the first element. Other implementations may place the 'w'
 * element as the last element.
 */

/**
 * @brief Euler angles union. The Euler angles are in the Aerospace sequence,
 * also known as the ZYX sequence.
 */
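In other words, the sensor's (w, x, y, z) list has to be rotated into (x, y, z, w) before anything downstream will read it correctly. Outside of Max that reorder is a one-liner (JavaScript sketch, function name mine):

```javascript
// Rotate a w-first quaternion list (w, x, y, z) into the w-last
// (x, y, z, w) order that jit.quat2euler-style code expects.
// Equivalent to [zl rot -1] in Max.
function wxyzToXyzw(q) {
  const [w, x, y, z] = q;
  return [x, y, z, w];
}
```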
It looks like the most important revelation in your research is the test variable and its branchings in the Java code you posted -- if you implement it, you could play around with the values it is tested against (-/+0.499) to see if they prevent the gimbal lock in the conversion from the sensor's quat output to Euler in the Max patch... good luck!
fyi this video just came out explaining some geometric aspects of quaternions (not directly applicable to your practical problem, but perhaps helps to give some intuitive understanding of quaternion-space)
What you could look at is the xyzw values of the quaternion - they should all be from -1 to 1, and as you rotate around the 3 axes these 4 values will change. As I understand it, the quaternion is kinda similar to the axis-angle representation - you have 3 'vector' components and 1 'scalar' component. Whenever you convert to Euler angles you will always face the issues with singularities, so what I have learned is to put off the conversion until the last possible moment. The question is, what range of movement are you hoping to use? If it is less than -90/90 then you can use Euler angles no issue. The other thing you can do in Max is to take the orientation quaternion from your device and set a reference point - as that is giving you the orientation in relation to the Earth reference frame.
In my patch I have built in a switch mechanism that allows you to take an orientation quat - unpack it, invert the w component (i.e. multiply by -1), then repack it and multiply that with your current quaternion, and now you have a new reference frame.
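For anyone wanting the maths behind that trick: negating the w component of a unit quaternion gives minus its conjugate, and since q and -q encode the same rotation, that is effectively the inverse rotation; multiplying it onto incoming samples re-zeros the reference frame. A sketch in JavaScript (xyzw order, Hamilton product with the inverse on the left; the names and which side to multiply on are my assumptions, the side depends on your frame convention):

```javascript
// Hamilton product of two quaternions in xyzw order.
function qmul(a, b) {
  const [ax, ay, az, aw] = a, [bx, by, bz, bw] = b;
  return [
    aw * bx + ax * bw + ay * bz - az * by,
    aw * by - ax * bz + ay * bw + az * bx,
    aw * bz + ax * by - ay * bx + az * bw,
    aw * bw - ax * bx - ay * by - az * bz,
  ];
}

// "Zero out" the current pose: store a reference quaternion with w negated
// (for a unit quaternion this represents the inverse rotation) and
// multiply it onto every incoming sample.
function makeReframe(ref) {
  const inv = [ref[0], ref[1], ref[2], -ref[3]];
  return (q) => qmul(inv, q);
}
```

Feeding the reference pose back through makeReframe(ref) should land on the identity rotation (up to the q/-q sign ambiguity).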
Interesting.
I'm definitely hoping for more than -90/90 as it will be attached to an arm that may move around freely, and it's quite easy for an arm to end up pointing directly up (exactly where the singularity kicks in).
I've not looked at this in a bit, but I had pushed it a bit further than my last post, making a patch that demonstrates the problem a bit more clearly, along with a recorded version of the direct sensor output:
Would be curious to see your patch, with the inversion, to see if that would help here.
(there's a video referenced in the patch, but it's not integral to the problem, it just demonstrates how the quaternion representation in jitter doesn't correspond with the physical movements, regardless of how I rotate the quaternion)
At first glance I can see the [zl rot] object is not flipping your quat values properly. When I put my own quats through, the values are not being put into the correct order.
I would suggest running it into an [unpack 0. 0. 0. 0.] and then repacking it x, y, z, w.
At least then your quats are good.
I have had the same issue with rotating the 3d objects in jit.
I wonder if it's a right-hand vs left-hand rule calculation. I will try reversing my matrix and see how things look then.
Yeah, I don't know about the jit object. I'm not sure what the colours of the axes on the jit object correlate to (i.e. red = x axis, blue = z, green = y); that would help work it out, as maybe then we need to rotate the object itself before we start.
I have tried as many permutations as I can, but for the life of me can't get the jit object to rotate like the youtube videos show. My orthogonal matrix is correct: when my sensor axes are lined up with north and gravity I have 1,0,0 / 0,1,0 / 0,0,1 and quat 0 0 0 1 xyzw (using the jit.quat ordering), but the object just rotates the opposite angles to anything I do, or some other whatever rotation.
Oh gosh, I have made it work. Presumably it is something to do with the orientation of the axes in the quat and what the jit object takes.
I unpacked the jit.quat, then using [expr $f1*-1] inverted the x and y components. Now, as if by magic, my 3d cube copies exactly what my sensor device is doing. Crazy!
Can someone help me understand this? Presumably it is left vs right hand representation? Does that mean I can fix this by inverting the 3x3 orthogonal matrix? Mine is right hand, I believe: z up, y forward and x right.
hey, haven't been following everything in this interesting thread, but your unpack/pack action is a little off. The way you do it you are losing sync of the data (watch out for order of execution); better to use something like [zl rot -1] for that.
EDIT: or [zl rot 3] like in rodrigo's patch.
if this doesn't give you the correct ordering, i might be missing something here...
Ugh, just hit "reply" and lost a whole post....lame.
So that's interesting about inverting individual components. I don't think I tried that, though I did try every possible ordering/permutation of the quaternions and wasn't able to get anything going. I don't have the sensor with me at the moment so I can't test that out in detail, but I will give it a spin.
It seemed that no matter what I did one of the axes was always wrong (e.g. roll = yaw), where it would kind of move right, but as if you were just leaning over.
The [zl rot 3] should be the same as your [unpack] -> [pack] example, and as @Volker mentions you'll have data sync issues with that since the [pack] will fire when you reach the 2nd (from the left) of your [unpack] object. Probably doesn't have a profound impact since the data is coming so quickly, but may mess up some of the more complicated maths in other parts of the patch. (just edited my pasted code above so the colors match when reordering the quaternion)
The reordering (and possibly lack of inversion?) also keeps some of the singularity compensation code from working, which is frustrating as I have bits of code to test, but can't seem to format the data correctly for it.
My knowledge is really simple so I just break things down to see how they work, so I wasn't aware of the order-of-execution issue coming out of the pack/unpack. I'm looking at how to invert various components of the jit.quat output. Presumably to invert the whole quat we attach a "trigger getinverse list" object and then the inverse quat should come out the dump outlet. And if we want to invert some components but not all, is that where the vexpr object would work? If we have a list of 4 numbers (the quat), can we multiply 4 other numbers against it (1 1 1 -1, or whatever ordering we want)? Is that how vexpr works?
I really think it must be to do with left hand and right hand. The other thing is that the xyz axes on the sensor may not correlate how you think. I check xyz by getting the raw accelerometer, magnetometer and gyro data; then you can look at what happens to these values as you move the device around. Figure out where the "nose" is and whether this correlates to y or z. My sensor does x to the right, y forwards and z up (left hand I think). I have seen descriptions where the y and z axes are swapped. I'm trying to use two devices, one on the hand and one on the forearm, and by comparing the rotation between the two establish movement within the hand/wrist as a way of monitoring surgical movements.
Sorry I haven't been able to reply earlier - for some reason if I try to post from my phone it only stores 1 word in the post.
So it seems: send the quat into a [t b l], then the bang to a message [1 1 1 -1], then into [vexpr $f1*$i2]; it will output the quat with the w value inverted.
So we can do the same but move the -1 around, and invert the other axes whilst still keeping the 4 values in sync? I hope.
It's not so much a delay, but an order of operations thing.
You can still do it by unpacking/repacking, but I would use unjoin/join so you can specify a @triggers inlet:
Very handy if you want to have control of when things happen. So with the @triggers attribute you can make a combination of pack/pak that's agnostic as to the type of data flowing through it.
You can also do the vexpr solution you mentioned, multiplying it by "1 1 1 -1", but then you still have to reorder it after (or before).
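Outside of Max, that reorder-and-flip step is tiny; sketching it in JavaScript just to make the idea concrete (the function name and argument conventions are mine):

```javascript
// Combine the two steps being done separately in the patch:
// reorder a quaternion's components AND flip chosen signs in one pass.
// order: which input index feeds each output slot; signs: +1 or -1 per slot.
function remapQuat(q, order, signs) {
  return order.map((idx, k) => signs[k] * q[idx]);
}

// e.g. wxyz -> xyzw with w negated:
// remapQuat([w, x, y, z], [1, 2, 3, 0], [1, 1, 1, -1])
```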
As far as I understand the quaternions, although their xyz components resemble the Euler representation (as in pitch/roll/yaw), mathematically there's something quite different going on.
I can also just request the euler output from my sensor, but then I'm back to square one with singularities.
I've also been struggling with the conversion from quaternions to Euler angles for a while - I'm building a tiny and cheap sensor (MPU-6050 + ESP8266-01) that directly spits out quaternions over wifi, using J. Rowberg's great I2Cdevlib for onboard data fusion. That library also offers conversion to Euler angles, but the results are not really convincing, suffering from gimbal lock and other weird behaviour... So I compiled various other solutions I found and compare them in this patch:
Obviously the [jit.quat2euler] object (second duck from left) converts the quats (far left) nearly perfectly. I could just use that object, but I would love to do the conversion directly on the ESP chip for reasons of efficiency and to not be dependent on exclusively using Max with that sensor. So I'd be super interested to know what algorithm was used in the jit.quat2euler object!! Or maybe someone has ideas on how to optimize one of the other algorithms...
Man, that's super handy as a comparison!
(the fact that they are little freaking ducks certainly helps...)
I'll have to test this out with the quaternion outputs from my sensor to see if any of them work out well as I wasn't able to get good results out of [jit.quat2euler] in my case.
Out of curiosity, are the quaternions coming out of your sensor in line with the scaling and order that [jit.quat2euler] is expecting? As in, does the duck's motion line up with your sensor's motion?
Yep, the quaternions are absolutely aligned with the sensor - I'm having my own little ducky puppetry! I only need to do some manual calibration, especially for yaw, since it's only a 6-DOF sensor that calculates orientation from the accelerometer.
But I rotate the quaternion order on the ESP chip - as in [zl rot -1] - to match the jitter quat order.
We use this one for quat2euler:

// assumes normalized quaternion
double test = q->x*q->y + q->z*q->w;
if (test > 0.499) { // singularity at north pole
    xyz[0] = 0;
    xyz[1] = 2 * jit_math_atan2(q->x, q->w);
    xyz[2] = JIT_MATH_F64_HALF_PI;
}
else if (test < -0.499) { // singularity at south pole
    xyz[0] = 0;
    xyz[1] = -2 * jit_math_atan2(q->x, q->w);
    xyz[2] = -JIT_MATH_F64_HALF_PI;
}
else {
    double sqx = q->x*q->x;
    double sqy = q->y*q->y;
    double sqz = q->z*q->z;
    xyz[0] = jit_math_atan2(2*q->x*q->w - 2*q->y*q->z, 1 - 2*sqx - 2*sqz);
    xyz[1] = jit_math_atan2(2*q->y*q->w - 2*q->x*q->z, 1 - 2*sqy - 2*sqz);
    xyz[2] = jit_math_asin(2*test);
}
That's really good to know, particularly since I was trying to just implement that in vanilla Max. And also good to see that it corrects for singularities as well.
Hopefully seeing the order of arguments here will help me figure out what's going on with my sensor. I don't have it with me at the moment, so when I have it next I'll do some further testing and report back if I get to the bottom of it.
Wow!! Thanks for sharing that code, Rob! That will be really useful. I will implement it tonight on my sensor..
I just realized that I already had your algorithm in my comparison patch: it's identical to the euclideanspace subpatch (same source as the link you provided). I repatched it slightly differently now, just to be sure. But the patched algorithm still gives me different results compared to jit.quat2euler. Weird.
Then I implemented your code as JavaScript: that is 100% equal to jit.quat2euler...
is there a logical explanation for this? Where is my mistake??
/******** j-quat2euler.js
quaternion to euler conversion aka jit.quat2euler *********/

var xyz = [0, 0, 0]; // output array

function list()
{
    var a = arrayfromargs(arguments); // input array, xyzw order
    var x = a[0];
    var y = a[1];
    var z = a[2];
    var w = a[3];
    var test = x*y + z*w;
    var deg = 180/Math.PI; // radians -> degrees
    if (test > 0.499) { // singularity at north pole
        xyz[0] = 0;
        xyz[1] = 2*Math.atan2(x,w)*deg;
        xyz[2] = 90;
    }
    else if (test < -0.499) { // singularity at south pole
        xyz[0] = 0;
        xyz[1] = -2*Math.atan2(x,w)*deg;
        xyz[2] = -90;
    }
    else {
        var sqx = x*x;
        var sqy = y*y;
        var sqz = z*z;
        xyz[0] = Math.atan2(2*x*w - 2*y*z, 1 - 2*sqx - 2*sqz)*deg;
        xyz[1] = Math.atan2(2*y*w - 2*x*z, 1 - 2*sqy - 2*sqz)*deg;
        xyz[2] = Math.asin(2*test)*deg;
    }
    outlet(0, xyz); // output for every input, including the singular cases
}
Hmm, not sure why that wouldn't be working, though I'm not super hot on js maths stuff.
I did manage to finally get some testing in with the sensor with a lot of the suggestions made here. Sadly I wasn't able to get something usable out of it still.
It was great knowing that the native [jit.quat2euler] uses that euclideanspace code, so it saves me from having to build it in Max (though I am curious how you get on with your js version @cobtob).
Regardless of how I fed the quaternions into [jit.quat2euler], the euler output wasn't correct. I tried xyzw, wxyz, and every other combination you could think of. I also tried @ian's suggestion(s) of inverting the quaternion data and still nothing. It actually seems like no matter what I did, I would get some singularities in the output data (which kind of makes sense in that the code for avoiding it checks for specific criteria which weren't being met).
I also came to the realization that it's possible that my visualizer is going to be wrong (as in, not oriented correctly towards the physical representation). I'm ok with that, and am sort of abandoning getting a visualizer working for now, but want to focus on getting usable euler output.
On a more positive note, I did manage to test the NGIMU sensor with the native Windows GUI software, which has fancy graphs, lots of configuration options, etc... So that was useful to make sure it was all working ok. I also updated to the most recent firmware as well, for good measure.
I'm going to email x-io to see if it's possible for them to implement the same euclideanspace code that's implemented in Max, to save me (and presumably others) the headache of having to deal with this, and to mention a bug or two I think I found while testing.
Hopefully that goes somewhere.
Sorry to hear you're still having problems with your sensor. I've also had a rather hard time with my issue. Although I triple checked, I still didn't find the problem with my patch (or Cycling's math library :)) - but for now I don't really care. When I saw Rob's code (in my javascript) create exactly the same result as the jit.quat2euler object, I was confident enough to give the sensor-embedded version another try: but the first night of trying gave me the same weird results I had seen before, with strange behaviour at singularities and such... Then I rechecked my sequence of quaternions - knowing that the quaternions arriving in jitter are correct: xyzw. As I mentioned before, I had rotated the order on the chip into jitter format. Now - reconstructing from the jitter order - I realized that the I2Cdev library I use to read the sensor obviously has a bug! The quat names seem to be mapped to the wrong data, so all chip-embedded calculations based on the wrong quat sequence produced strange results. No surprise.
So I changed this in my arduino sketch.
float w = q.x; // q.x -> this is the data pulled from the lib
float y = q.y;
float x = q.z;
float z = q.w;
Now everything works perfectly: euler angles sent from the sensor produce exactly the same results as quaternions in jitter, no more gimbal lock or odd singularities, yey!
Rodrigo, your sensor should be able to spit out the same quality of data, I'm quite sure... Maybe you should try the weird quat order WYXZ I ended up with. There is a chance the x-io guys are using the same library.. :) The behaviour you describe would definitely result from a wrong quat sequence. I'd be surprised if x-io sensors just couldn't put out correct quaternions - that's why people buy these sensors.. good luck
Yeah, the sensor should be able to spit out good data, and in testing it with their Windows only GUI, it's pretty flawless (they have a graph/rotation thing). It also does all sorts of fancy onboard sensor fusion stuff, so the data it is spitting out is calibrated/compensated...seemingly just not the euler representation.
This is the quat->euler code from the NGIMU:
/**
 * @brief Converts a quaternion to Euler angles in degrees.
 * @param quaternion Quaternion to be converted.
 * @return Euler angles in degrees.
 */
static inline __attribute__((always_inline)) FusionEulerAngles FusionQuaternionToEulerAngles(const FusionQuaternion quaternion) {
#define Q quaternion.element // define shorthand label for more readable code
    const float qwSquaredMinusHalf = Q.w * Q.w - 0.5f; // calculate common terms to avoid repeated operations
    FusionEulerAngles eulerAngles;
    eulerAngles.angle.roll = FusionRadiansToDegrees(atan2f(Q.y * Q.z - Q.w * Q.x, qwSquaredMinusHalf + Q.z * Q.z));
    eulerAngles.angle.pitch = FusionRadiansToDegrees(-1.0f * asinf(2.0f * (Q.x * Q.z + Q.w * Q.y)));
    eulerAngles.angle.yaw = FusionRadiansToDegrees(atan2f(Q.x * Q.y - Q.w * Q.z, qwSquaredMinusHalf + Q.x * Q.x));
    return eulerAngles;
#undef Q // undefine shorthand label
}
I've emailed them to see if they can just change it to the same code that jitter is using.
I would alter the code directly, but it's part of a whole package/library thing, and I wouldn't know how to build a firmware .hex from it.
I'll do some more testing to see if I can get quaternions->euler working in Max though. I'm pretty sure I tried every possible permutation of WXYZ, but I could be more methodical about it to see if there's something I missed.
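If it helps anyone doing the same methodical sweep, the whole search space is small enough to brute-force in code rather than by repatching. A JavaScript sketch (all names are mine; the idea is to record a sensor quaternion in a pose whose correct xyzw value you know, then score every component ordering and sign flip against it):

```javascript
// Enumerate every component ordering and sign flip of a 4-element
// quaternion and report which candidate lands closest to a known target.
// In practice you'd score several recorded (sensor, expected) pairs.
function permutations(arr) {
  if (arr.length <= 1) return [arr];
  return arr.flatMap((v, i) =>
    permutations(arr.slice(0, i).concat(arr.slice(i + 1))).map((p) => [v, ...p])
  );
}

function candidates(q) {
  const out = [];
  for (const order of permutations([0, 1, 2, 3])) {
    for (let signs = 0; signs < 16; signs++) {
      const cand = order.map((idx, k) => ((signs >> k) & 1 ? -1 : 1) * q[idx]);
      out.push({ order, signs, quat: cand });
    }
  }
  return out; // 24 orderings x 16 sign patterns = 384 candidates
}

// Distance that treats q and -q as the same rotation.
function quatDist(a, b) {
  const dot = a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
  return 1 - Math.abs(dot);
}

function bestMapping(sensorQ, expectedQ) {
  let best = null;
  for (const c of candidates(sensorQ)) {
    const d = quatDist(c.quat, expectedQ);
    if (!best || d < best.dist) best = { ...c, dist: d };
  }
  return best;
}
```

For example, if the sensor is secretly sending wxyz while you expect xyzw, bestMapping finds the order [1, 2, 3, 0] (i.e. rotate the list left by one) with no sign flips.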