How do _you use accelerometer data?

Jan 6, 2007 at 12:29pm

hi all,

Thinking aloud ….

I’m wondering if any one who is already using accelerometers in their
max patches would be willing to share what kinds of ways they are
using the data, and how they go about making it useful.

I’ve started thinking about this more as I’ve just bought a Wii
remote. Although you can get linear position data using IR (I’ve only
tried it with candles so far!), the easiest stuff to get from the
remote is acceleration data and switches.

One of the main ways that I use controller data is to control the
position of the grain playback head in a buffer, and the pitch/speed
of playback. Although I’m thinking of other interesting things that
can be done with acceleration, I still want to be able to control
buffer position, and I think that this is the biggest challenge in
using accelerometers – somehow converting acceleration, which always
tends to rapidly return to zero, into some kind of linear/positional
data. (I’m thinking handheld here – obviously if you have the sensor
mounted on something which rotates (like a bike wheel) or
continuously moves in some other fashion this is not such an issue,
but what I want is to use handheld sensors).

My suspicion is that accelerometer output is by definition unusable
for something like linearly moving a playback position – but I’d love
to be proved wrong! I suppose moving the sensor in a circle, faster
or slower, would be one way to do this – you’d need strong arm
muscles though. Hm – thinking about this – the acceleration in any
one axis would still be constantly changing, so maybe that isn’t the
solution.

So far 2 ways of interpreting the data have occurred to me:

1. Linear(ish) – apply the accel data directly to a parameter;
alternatively scale, reverse, or map it (eg with [table]). I think
I’m calling this the “weeow” option, as that’s the main result of
using the data like this – you get a fairly rapid change as you move,
and then there’s always a return to a zero point as the movement
stops. Offsetting, scaling and mapping just changes the shape of the
weeow (weeeoooeeoooeeow? oweewweeewoeeweweo? )
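As a rough sketch of this first option (in Python, since Max patches don't paste well as text), the offset/scale/reverse step is just a linear remap with clipping, roughly what chaining [clip] into [scale] does. All the ranges and the function name here are made up for illustration:

```python
def map_accel(a, in_lo=-1.0, in_hi=1.0, out_lo=0.0, out_hi=127.0, reverse=False):
    """Linearly rescale one accelerometer reading into a parameter range,
    clipping to the input range first (roughly [clip] -> [scale] in Max)."""
    a = max(in_lo, min(in_hi, a))          # clip to the expected input range
    t = (a - in_lo) / (in_hi - in_lo)      # normalize to 0..1
    if reverse:
        t = 1.0 - t                        # the "reverse" mapping option
    return out_lo + t * (out_hi - out_lo)  # rescale to the output range
```

A [table] lookup would replace the final line with an arbitrary curve, which is what reshapes the "weeow".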

One interesting visual analogue of this is that effect where you
create a kind of virtual screen for a projected image by rapidly
waving a thin wand up and down in the air, so that the image is
projected onto the moving wand. (Tiring.)

2. Thresholds – more rapid movements fire off different events (delay
times, effects settings, configurations etc etc). Though as you have
to pass through lower thresholds to get to the higher ones (your arm
doesn’t instantly jump from one speed to another – at least, mine
doesn’t!), there’d need to be some way of detecting the peak point of
a movement unless you actually want all of the settings from zero to
the current peak of a movement.
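One way to get only the peak, sketched in Python rather than as a patch: keep the running maximum of the magnitude, and fire it once the signal has clearly turned around. The noise floor and falloff fraction are made-up tuning values, not from any Max object:

```python
class PeakDetector:
    """Fire once per gesture, at its peak, instead of at every threshold
    crossed on the way up. Reports the peak once the signal has fallen
    back below a fraction of it (a hypothetical scheme)."""
    def __init__(self, noise_floor=0.1, falloff=0.5):
        self.noise_floor = noise_floor  # ignore peaks below this
        self.falloff = falloff          # "gesture over" fraction of the peak
        self.peak = 0.0
    def sample(self, magnitude):
        """Feed one |acceleration| reading; returns the peak value when a
        gesture has just ended, else None."""
        if magnitude > self.peak:
            self.peak = magnitude       # still rising: track the maximum
        elif self.peak > self.noise_floor and magnitude < self.peak * self.falloff:
            fired, self.peak = self.peak, 0.0
            return fired                # turned around: fire once
        return None
```

Only the highest threshold reached would then trigger, rather than every one passed through on the way up.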

One way I’d thought of using this is to direct the threshold triggers
to a randomised playback position value so that bigger, faster
movements cause more randomisation of playback. Doing something like
this with 2dwave~ and munger~ might also produce something interesting.

Anyway, that’s as far as I’ve gotten with this. What thoughts/
suggestions do others have about using this kind of sensor?

thanks

David

#29520
Jan 6, 2007 at 2:12pm

About a year ago, I had to make a decision as to what motion-tracking system I wanted to invest in. I was very interested in inertia sensors, but lacked the funds to buy any, so I got into video-tracking instead, which has led to some interesting work. Since a friend of mine bought a Wii remote for his family for Christmas, I went and dug up the articles I had collected about measuring inertia and how to use the measurements. I haven’t tried it yet, but here are a few notes.

First off, here is a great article:
http://www.cs.unc.edu/~tracker/media/pdf/cga02_welch_tracking.pdf
which talks about motion tracking in general, and is so well written that it is simply enjoyable to read.

Secondly, inertia can be measured to update the actual position in space of the object. The basic precept is that any 3D object has 6 variables defining its position in space. These are the x, y and z coordinates and the rotation of the object itself in any of those three directions (pitch, yaw and roll). Added to this is acceleration in any of these directions. In reality, it is the acceleration which we are interested in, which draws curves of position in space and time and can serve to convert the motion of the object being measured into data which is relevant for your purpose.
Here, we would actually want to measure acceleration; the inertia is probably unimportant. The equation a = dv/dt = d²x/dt² can be used to extract position and rotation information from the acceleration information.
(In your case, it isn’t even terribly important to really measure your exact position; you only have to measure a loosely relative position after each period of inactivity. After all, you are only going to control some sounds, not navigate a spaceship, although it follows the same rules.)
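Numerically, that equation means integrating twice: once to get velocity, once more to get position. A minimal Python sketch (names and the fixed timestep are assumptions) also shows why this drifts – any constant sensor bias accumulates quadratically, which is why the periodic recalibration mentioned below matters:

```python
def integrate_position(accels, dt):
    """Naive double integration of a = dv/dt = d^2x/dt^2:
    acceleration samples (m/s^2) at interval dt -> velocity -> position.
    Any bias in the samples accumulates quadratically in position."""
    v = x = 0.0
    positions = []
    for a in accels:
        v += a * dt        # first integration: velocity
        x += v * dt        # second integration: position
        positions.append(x)
    return positions
```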

In the case of music, it is simply a matter of mapping available data to desired data. A short list of possible musical data would be along the lines of:
pitch, volume, length and timbre
Of course, the use of Max/MSP opens up a never-ending library of sound parameters which can be controlled by incoming data. Basically, any knob or slider or button you use in your patch can be controlled externally, so it would be impossible to list everything. Still, it can be helpful to categorize the possibilities, which leads to four broad categories:
Triggering events
Continuous Control of aspects of an event
Triggering sequences of events
Continuous Control of aspects of a sequence

These four categories are basically what are reflected in the structure of the MIDI protocol.

Although the list is endless, an example of what one could do with measurements of acceleration is to trigger a note whenever acceleration in the x axis is detected, set the pitch of the note with position in the y axis, VARY the pitch with variations in the position in the y axis, control the volume with variations in the z axis and, just to be tricky, use the rotation in the x axis to modulate some aspect of the timbre.
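That example mapping could be sketched as a single function (Python, since this is conceptual rather than a patch; every range, name and scaling here is invented purely for illustration):

```python
def map_gesture(ax, y_pos, dz, x_rot, threshold=0.2):
    """One possible realization of the mapping described above:
    x acceleration past a threshold triggers a note, y position sets
    pitch, change in z sets volume, x rotation modulates timbre.
    All parameter names and ranges are hypothetical."""
    event = {}
    if abs(ax) > threshold:
        event["trigger"] = True              # note-on when x accel is detected
    event["pitch"] = 60 + int(y_pos * 24)    # MIDI-ish note from y position
    event["volume"] = min(1.0, abs(dz))      # 0..1 amplitude from z variation
    event["timbre"] = (x_rot % 360) / 360.0  # normalized modulation depth
    return event
```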

Using video-based motion tracking, I do the same, and have a matrix set up to easily choose what motions to map to what aspect of the program. I hope that when I find the time to experiment with accelerometers, I will find latency and precision superior to that which I can attain with video.

Hope this helps.

#92458
Jan 6, 2007 at 2:59pm

David Stevens skrev:
> hi all,
>
> Thinking aloud ….
>
> I’m wondering if any one who is already using accelerometers in their
> max patches would be willing to share what kinds of ways they are
> using the data, and how they go about making it useful.
Well, I would reckon that if you include TIME as a factor in these
calculations you could get pretty good approximations of position data –
for instance, if you move your hand slowly to the left for 3 seconds you
would get a somewhat steady acceleration for those three seconds, right?
Like your example with the bicycle wheel? So if you poll this data at a
fast rate you could use that to accumulate, giving you a rising
“absolute position” value. Like, accel-value * polls (this would be
time)=absolute position? And you can of course poll all the axes of the
wiimote, giving you some pretty good data, I should think?
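In Python-ish pseudocode, that accel-value * poll-interval accumulation might look like this rough sketch (class and variable names are mine, not from any library):

```python
class Accumulator:
    """Accumulate polled acceleration over time, as suggested above:
    each poll adds accel * poll_interval per axis, giving a rising
    "absolute position"-style value. A rough approximation only --
    summing acceleration once really yields something velocity-like,
    not a true position."""
    def __init__(self, poll_interval=0.01, axes=3):
        self.poll_interval = poll_interval  # seconds between polls
        self.values = [0.0] * axes
    def poll(self, accels):
        """Feed one reading per axis; returns the running totals."""
        for i, a in enumerate(accels):
            self.values[i] += a * self.poll_interval
        return list(self.values)
```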

Andreas.

#92459
Jan 6, 2007 at 3:11pm

patch!

:)

#92460
Jan 6, 2007 at 3:41pm

Something like that is what I was trying out yesterday – the bit that
was missing from my scratch patch was accumulate, so thanks for that.
I’m starting to form a clearer image of how to get the data I want!

On 6 Jan 2007, at 14:59, Andreas Wetterberg wrote:

> Well, I would reckon that if you include TIME as a factor in these
> calculations you could get pretty good approximations of position
> data if you poll this data at a fast rate you could use
> that to accumulate, giving you a rising “absolute position” value.

#92461
Jan 6, 2007 at 3:57pm

Hey Dayton,

thanks for an interesting response – lots of food for thought there.

I’ve been working with various sensor set ups for a while, and I
think that video is probably the simplest (physically) to set up and
the most reliable (no failing sensor connections or fragile wires
everywhere) – though of course there are issues relating to the
environment whatever system you use. (It’s probably the cheapest
remote sensing system as well).

Some of the attractive things about the Wii remote are that there are
no wires, no concern about light levels, no fiddly sensors to attach
to the body, and it’s very portable. I’m not sure how useful the IR
bit will be for me though, as one of the places I play most has very
unusual light quality, and I’m not sure it would be practical to
have the IR transmitters 5 metres away.

David

On 6 Jan 2007, at 14:12, Dayton wrote:

>
> About a year ago, I had to make a decision as to what motion-
> tracking system I wanted to invest in. I was very interseted in
> inertia sensors, but lacked the funds to buy any, so I got into
> video-tracking instead, which has led to some interesting work.
> Since a friend of mine bought a Wii remote for his family for
> Christmas, I went and dug up the articles I had collected about
> measuring inertia and how to use the measurements. I haven’t tried
> it yet, but here are a few notes.
>

#92462
Jan 6, 2007 at 7:55pm

Quote: david stevens wrote on Sat, 06 January 2007 08:57
> Some of the attractive things about the Wii remote are that there are
> no wires, no concern about light levels, no fiddly sensors to attach
> to the body, and it’s very portable.

Light is definitely the most touchy aspect in video-based tracking. Since I work with dancers and most of our performances are conceived for theaters, we have the opportunity to specify minutely how we want the light to be, but it requires a lot of work. Shadows are a big problem.

I just did some research into the Wii remote, and I am VERY pleased that I did. I had no idea how cheap it was. It is based on the ADXL330 chip, which has relatively high accuracy for the phenomenally low price. I don’t know how fast the poll-rate is for the a/d chip, but it must at least be higher than the frame rate of the games it is used with. At a very minimum I would guess that this could be 40ms, although any worthwhile chip could give you 20ms or less. (This translates into latency, although in a somewhat unpredictable manner. Still; loads better than video.)

It is still not clear to me what you get as an output to the computer, but it seems likely that you not only get the x, y and z axes (by the way, the x and y axes are measured with 3x more resolution than the z axis), but that the remote also computes the pitch, yaw and roll before sending the information. If this is true, then it is very useful.
The most unclear part of the functionality is the z axis. It seems that the use of the “sensor bar”, which is merely a defined array of IR LEDs, is important for two variables: the calibration of the sensor-chip’s coordinate system with the real-world coordinate system by defining a fixed point as “front”, as well as actually measuring the distance between the player and the sensor bar. This makes sense in a gaming application, but if the z-axis acceleration cannot be measured independently, this would be unfortunate.

In any case, a periodic recalibration of the x, y and z translations (zeroing the positions) would allow the performer to move about naturally and still have full control over the sound (and/or video) based on his or her personal coordinate system, while the rotations, being in relation to gravitational pull, are automatically recalibrated by our inner ear’s ability to balance ourselves. The absolute position can be handled in two complementary ways: the performer can stand at a marked position that he KNOWS and trigger the appropriate recalibration there, while the cumulative position information is saved so the two can be compared.
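One simple form of that periodic re-zeroing, sketched in Python (the thresholds and names are assumptions, not from any real system): when the sensor has been nearly still for long enough, declare "here" to be the origin again.

```python
class DriftReset:
    """Zero the accumulated position whenever the sensor has been
    (nearly) still for a given number of consecutive samples -- one
    simple form of periodic recalibration."""
    def __init__(self, still_threshold=0.05, still_samples=50):
        self.still_threshold = still_threshold  # |accel| counted as "still"
        self.still_samples = still_samples      # how long "still" must last
        self.quiet = 0
        self.position = 0.0
    def update(self, accel_magnitude, delta_position):
        """Feed one |accel| reading and the position change since last call."""
        self.position += delta_position
        if accel_magnitude < self.still_threshold:
            self.quiet += 1
            if self.quiet >= self.still_samples:
                self.position = 0.0    # re-zero: treat "here" as the origin
        else:
            self.quiet = 0             # movement resumed; restart the count
        return self.position
```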

I must buy one of these things; I’m burning to try it out. A year ago, I outlined my ideas to my father, who was an aerospace engineer for McDonnell Douglas from the 60′s until a couple of years ago, and he explained to me the basic precepts of tracking used in aerospace applications, both near the earth and outside of the earth’s gravitation. The calibration is an important aspect in this type of work, and he described it as “a series of approximations”.

Controlling music with such information is pretty straightforward, but leaves a lot of questions to be answered personally. What really seems interesting is the applications to the use of OpenGL. Where’s my wallet…

#92463
Jan 6, 2007 at 11:50pm

#92464
Jan 10, 2007 at 11:11pm

#92465
Jan 11, 2007 at 9:18am

#92466
Jan 11, 2007 at 11:28am

Alright. I let it get the better of me and went out and bought one of these things.
First off; I am on a PC (laptop, XP, P4 3Ghz) so I am not using the aka.object. Instead, I am using GlovePIE with BlueSoleil and communicating with Max through MIDIYoke.
This may be of interest to Mac-users because it sheds light on the uses from another angle; Carl Kenner, the programmer, did some wonderful work for us (thank you!)

Here are some of my own observations after the first few hours and a fair amount of research:

The Wiimote is very sensitive, displaying the sort of problems which commonly arise in working with hyper-instruments: they can seem nearly as complicated and sensitive as conventional instruments, so that a true mastery of the device would entail training similar to that necessary for mastery of a violin or other instrument. The main difference is that a programmer can simplify things, emulating a more perfect control of the device.

The most effective measurements available are (using OpenGL terminology) rotations about the z-axis and x-axis. Acceleration in any direction can be measured, but using the typical formulae to compute position from that data proves extremely inaccurate due to latency-jitter and drift.

Measuring rotation in the y-axis is not possible without the sensor bar, which would make the Wiimote only useful in a fixed coordinate system with a line-of-sight connection to the bar, and (except when using self-constructed LED arrays) only within 1 to 5 meters of the bar.

From the examples provided with GlovePIE, it seems that the most effective preparation of the data is to use it as impulses in desired directions, and the amount of impulse (degree of rotation or amount of acceleration) can map to the speed of movement in that direction. Using [slide] and [accum] would be a good way to accomplish this. Zero-position recalibration in periods of non-activity might be important as well.
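A rough Python sketch of that [slide] → [accum] chain (the slide factor and names are assumptions; [slide]'s real behavior also differs for rising vs. falling input, which I've ignored here):

```python
def slide(prev, new, slide_factor):
    """One-pole smoothing in the spirit of Max's [slide]:
    the output moves toward the input by 1/slide_factor per sample."""
    return prev + (new - prev) / slide_factor

def impulse_to_motion(samples, slide_factor=4.0):
    """Smooth raw impulses, then accumulate them so that the amount of
    impulse sets the SPEED of movement along a parameter axis
    (a sketch of the [slide] -> [accum] idea above)."""
    smoothed = 0.0
    position = 0.0
    trajectory = []
    for s in samples:
        smoothed = slide(smoothed, s, slide_factor)
        position += smoothed          # accumulate: impulse size = speed
        trajectory.append(position)
    return trajectory
```

A zero-position reset during inactivity would slot in between the smoothing and the accumulation.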

For those who might need it, here is the information which GlovePIE provides about movement:

Possible Wiimote measurements without the sensor-bar using GlovePIE.
(A left-handed Direct-3D system.)

The rotations are distinctly different from those in OpenGL. In OpenGL, an axis works like a roasting-spit, so that rotation about the x-axis means that the top of the object tilts forward and back (like a man bowing). I don’t know why it appears differently in GlovePIE. Here is a bit of the information provided with GlovePIE, annotated where necessary:

• Pitch (rotation in the z-axis, corresponding to rotation in the x-axis in OpenGL): -90 (pointing at floor), 0 (parallel to floor), +90 (pointing at ceiling)
• Roll (rotation in the x-axis, corresponding to rotation in the z-axis in OpenGL): -90 (top is pointing left), 0 (top is pointing up), +90 (top is pointing right)
• SmoothPitch (less accurate but smoother)
• SmoothRoll
• RawForceX (obsolete)
• RawForceY (obsolete)
• RawForceZ (obsolete)
• Gx (1Gx is acceleration in the x-axis equal to gravity)
• Gy
• Gz
• RawAccX (Measured in meters per second per second)
• RawAccY
• RawAccZ
• RelAccX (The same, but with the force of gravity filtered out. Not as accurate.)
• RelAccY
• RelAccZ
• RotMat (3×3 Direct3D style rotation matrix)
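For anyone wanting the Pitch/Roll-style values from just the raw Gx/Gy/Gz components, the standard accelerometer tilt formulas do it with atan2. A Python sketch; since axis conventions differ between GlovePIE, Direct3D and OpenGL, the axis assignment and signs here are just one plausible choice, not GlovePIE's actual one:

```python
import math

def tilt_degrees(gx, gy, gz):
    """Derive pitch and roll (in degrees) from gravity components like
    the Gx, Gy, Gz above, via the standard accelerometer tilt formulas.
    Only valid while the remote is (near-)stationary, so that the
    measured acceleration is dominated by gravity."""
    pitch = math.degrees(math.atan2(gy, math.sqrt(gx * gx + gz * gz)))
    roll = math.degrees(math.atan2(gx, gz))
    return pitch, roll
```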

The sensor bar is just a bunch of Infra Red lights which are always on. You can make your own fake sensor bar with candles, Christmas tree lights, or Infra-Red remote controls with a button held down. Or you can order a wireless sensor bar off the internet, or you can build your own.

You can read the position of the infra-red dots that the Wiimote can see with:
wiimote.dot1x, wiimote.dot1y
through
wiimote.dot4x, wiimote.dot4y

You can tell whether an infra-red dot can be seen with Wiimote.dot1vis to Wiimote.dot4vis

You can tell the size of a dot (between 0 and 15) with Wiimote.dot1size to Wiimote.dot4size

Have fun; I am.

#92467
