accuracy problem?

Apr 2, 2009 at 6:08am


Hello there, I hope someone here can give me a hint about the scheduler accuracy in Max!
I am working on a project where I need to get precise coordinate data from a list. The idea is to represent the displacement of a GL shape in space, following the path made by the sound. We are planning to use the Wave Field Synthesis system in Leiden, Netherlands, which has 192 speakers placed inside a room. Because we want particularly good accuracy between image and sound, we ran some tests to be sure the data will have as little delay as possible.
The first problem I had was terribly bad interpolation between displayed values when I reduced the metro interval to 1. I ran a test with the CPU clock and discovered something: if I don't enable Overdrive when my metro has a 1 millisecond interval, my bang intervals go up to 2.25 ms, with a mean error between 0.69 and 0.75 ms (sometimes even 0.8), which seems very high and not accurate at all! I tried the patch on a friend's computer and on one of my teachers' computers, and their mean absolute error was not over 0.4 ms. One important detail: both were running Max/MSP 5.0.5.
I am running Max/MSP 5.0.6 on a 2.53 GHz MacBook Pro with 4 GB RAM and two NVIDIA video cards (9400M + 9600M GT).
I attach the test patch here.
Any hint how I can improve my scheduler accuracy?
Thank you very much for your help!

Emmanuel

– Pasted Max Patch –
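For anyone without Max handy, here is a rough Python analog of what the test patch measures (not the patch itself; only the 1 ms interval and the error-in-milliseconds idea are carried over):

```python
# Rough analog of the metro/cpuclock test: how far does a nominal
# 1 ms timer drift from its ideal schedule?
import time

INTERVAL = 0.001   # 1 ms, like the metro in the patch
TRIALS = 1000

errors = []
previous = time.perf_counter()
for _ in range(TRIALS):
    time.sleep(INTERVAL)          # the OS scheduler decides when we wake up
    now = time.perf_counter()
    measured = now - previous     # actual interval between "bangs"
    errors.append(abs(measured - INTERVAL) * 1000.0)  # error in ms
    previous = now

print(f"mean absolute error: {sum(errors) / len(errors):.3f} ms")
print(f"worst interval error: {max(errors):.3f} ms")
```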
Apr 3, 2009 at 12:53am
efe wrote on Thu, 02 April 2009 01:08
[...] if I don't enable Overdrive when my metro has a 1 millisecond interval, my bang intervals go up to 2.25 ms, with a mean error between 0.69 and 0.75 ms (sometimes even 0.8), which seems very high and not accurate at all! [...] Any hint how I can improve my scheduler accuracy?

It’s true, scheduler-based timing is NOT designed to be totally accurate. For most processes this is not a problem, even with video, as we don’t really notice small timing errors. For sound, however, we definitely do, especially if there are two repetitive sounds that gradually drift out of sync.

Overdrive runs Max's event scheduler at high priority, so some other things (mainly GUI redrawing and window handling) become more sluggish. Try moving or resizing windows without Overdrive, and you'll get delays/glitches in a metro or qmetro's timing, but the UI updates smoothly; with it on, the moving/resizing is glitchier, but a signal-based tempo should stay solid. Using a signal-based timing control (see the phasor~ help file for one example) with Overdrive on should keep things in sync just fine. Since there are always tradeoffs when doing more than a CPU can handle in perfect-seeming sync, Max offers Overdrive as an option to keep things timed right, at the slight expense of other processes.
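A quick back-of-envelope illustration of why (plain Python arithmetic, not Max; the 44.1 kHz rate is an assumption): a signal-based clock quantizes events to the sample grid, so the worst-case error is one sample period, far below the event scheduler's wake-up jitter.

```python
# Why signal-based timing stays solid: events land on the sample grid,
# so the worst-case timing error is one sample period, not the event
# scheduler's wake-up jitter. Assumes a 44.1 kHz sampling rate.
SR = 44100.0
TEMPO_MS = 1.0                                     # desired interval, like a 1 ms metro

samples_per_event = round(SR * TEMPO_MS / 1000.0)  # 44 samples
actual_ms = samples_per_event * 1000.0 / SR

print(f"requested interval:     {TEMPO_MS} ms")
print(f"signal-based interval:  {actual_ms:.4f} ms (exact, every time)")
print(f"max quantization error: {1000.0 / SR:.4f} ms (one sample)")
```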

Interesting patch by the way! It really illuminates what’s happening scheduler-wise.

Also I’m jealous of any room with 192 speakers! Sounds like a perfect environment for some Maxing. How many sound channels is that? 192?? Or do some share the same channel? Either way, that’s a soundscape I’d love to hear.

For cool room-testing possibilities you might look into multislider-plus-pattr, where each slider sends a volume level of a sound (even just a plain cycle~) to its channel; then you can interpolate between stored settings and hear the washes of tones flow through the room. Though the GL displacement idea is also very cool. If you get some good documentation of the room and the sound, please post a link!
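A tiny sketch of that idea outside Max (plain Python; the 192-channel count comes from the room, the presets are invented): store two gain presets and morph between them, the way pattr preset interpolation morphs slider values.

```python
# Sketch of the multislider-plus-pattr idea: store two gain presets for
# N speaker channels and interpolate between them. Channel count is the
# room's; the preset contents are made up for illustration.
N_CHANNELS = 192

preset_a = [1.0 if i < N_CHANNELS // 2 else 0.0 for i in range(N_CHANNELS)]
preset_b = [0.0 if i < N_CHANNELS // 2 else 1.0 for i in range(N_CHANNELS)]

def interpolate(a, b, t):
    """Linear morph between two presets; t=0 gives a, t=1 gives b."""
    return [(1.0 - t) * x + t * y for x, y in zip(a, b)]

# Sweep t from 0 to 1: the "wash" of tones moves from one half of the
# room to the other as the per-channel gains crossfade.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    gains = interpolate(preset_a, preset_b, t)
    print(f"t={t:.2f}  ch0={gains[0]:.2f}  ch191={gains[-1]:.2f}")
```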

:)

Apr 3, 2009 at 5:57am

Hello Christopher and thank you for your fast answer.
I came to the project through an invitation from the composer, a fellow student at the Institute for Sonology here in the Netherlands. He's more in charge of the sound composition.
The project so far has been really interesting, as I've realized how small my idea of 'space' really is. Usually when we work with Jitter we face concepts such as coordinates, angles, etc., but this project is more about 'distances' and 'displacements', based on the concept of Wave Field Synthesis:

http://en.wikipedia.org/wiki/Wave_field_synthesis
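A back-of-envelope sketch of the core idea (plain Python; the straight 1 m-spaced speaker line and the virtual source position are invented, not the Leiden layout): each speaker plays the source signal delayed and attenuated according to its distance from a virtual source.

```python
# Toy WFS driving function: per-speaker delay and gain derived from the
# distance between each speaker and a virtual source. Geometry here is
# a made-up 8-speaker line, purely for illustration.
import math

SPEED_OF_SOUND = 343.0                          # m/s, room-temperature air
speakers = [(x * 1.0, 0.0) for x in range(8)]   # 1 m-spaced line of speakers
virtual_source = (3.5, -2.0)                    # virtual position behind the array

for i, (sx, sy) in enumerate(speakers):
    d = math.hypot(sx - virtual_source[0], sy - virtual_source[1])
    delay_ms = d / SPEED_OF_SOUND * 1000.0
    gain = 1.0 / max(d, 0.1)                    # simple 1/r attenuation, clamped
    print(f"speaker {i}: delay {delay_ms:6.2f} ms, gain {gain:.3f}")
```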

People here at the Institute are doing interesting research on the subject:

http://www.koncon.nl/public_site/220/Sononieuw/UK/frameset-uk.html

http://www.gameoflife.nl/

The WFS system that we have here was designed at Delft University and is kept in Leiden at the Scheltema Complex:

http://www.scheltemacomplex.nl/

As far as I know there are WFS systems in Berlin (really cool, actually), Canada, and a couple more places. Each institution has its own specifications for the system and interface.

The interface for controlling the system here was written by Wouter Snoei using SuperCollider:

http://www.woutersnoei.nl/

If you are inside SuperCollider you can download some of the classes and install them. Some of them are really nice!

The system has a server where you upload the sound files along with an XML score that provides diverse data: position, angle, amplitude, etc. The XML file is generated using a GUI written in SuperCollider. So far some people here have been able to use a live input using Pd (<-- the evil twin! <-- joke).
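For illustration only, here's how such a score could be generated programmatically. I'm not reproducing the actual Leiden schema, so the element and attribute names below are hypothetical; only the kinds of data (time, position, angle, amplitude) come from the real system.

```python
# Hypothetical XML score generator: the score/event/position/angle/
# amplitude names are invented stand-ins, not the real WFS schema.
import xml.etree.ElementTree as ET

score = ET.Element("score")
for t, (x, y, angle, amp) in enumerate([(0.0, 1.0, 0.0, 0.8),
                                        (2.5, -1.0, 90.0, 0.5)]):
    ev = ET.SubElement(score, "event", time=str(float(t)))
    ET.SubElement(ev, "position", x=str(x), y=str(y))
    ET.SubElement(ev, "angle").text = str(angle)
    ET.SubElement(ev, "amplitude").text = str(amp)

print(ET.tostring(score, encoding="unicode"))
```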

I don't know where you live, but if you happen to be in the Netherlands, next Tuesday there's a concert with the WFS starting at 19:30. Among other pieces, some good Maxers will present their music (my mentor and one of the super gurus here, jvkr, is included in the program).

hope all this nice .data helps!

Emmanuel

Apr 4, 2009 at 9:28pm
efe wrote on Fri, 03 April 2009 00:57

The system has a server where you upload the sound files along with an XML score that provides diverse data: position, angle, amplitude, etc. The XML file is generated using a GUI written in SuperCollider. So far some people here have been able to use a live input using Pd (<-- the evil twin! <-- joke).

Wow, that all looks really cool. Wish I could be there to see it, ah well. Sounds like some really interesting stuff happening.

I read about the XML elements you're using, which again made me think of using pattr. Hopefully you're implementing it in your design if you're using a Max patch; it's really powerful. Also fun is generating control data using things like video input, signal processing, or jit.bfg (very cool object). Maybe these will give you some more ideas. You can also store control data as matrices in a jit.matrixset, though pattr doesn't directly support matrices. There are workarounds for this, though, like mimicking the interpolation between settings using jit.xfade or jit.op.
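A minimal sketch of that workaround (NumPy arrays standing in for jit.matrix data; the shapes are invented): blend two stored matrices with a weight, the way jit.xfade's crossfade value does.

```python
# Interpolating between two stored control matrices, jit.xfade style:
# t=0 gives matrix A, t=1 gives matrix B, anything between is a blend.
import numpy as np

matrix_a = np.zeros((4, 4))       # stored setting A (invented shape)
matrix_b = np.ones((4, 4))        # stored setting B

def xfade(a, b, t):
    """Weighted blend, like jit.xfade's xfade value."""
    return (1.0 - t) * a + t * b

print(xfade(matrix_a, matrix_b, 0.25))   # a quarter of the way toward B
```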

Looking forward to hearing how it all goes, best of luck!

Apr 10, 2009 at 1:05am

Also, I assume you've looked at jit.catch~, jit.release~, jit.peek~, jit.poke~, and jit.buffer~ for applying lists to audio parameters.
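A sketch of the underlying idea (NumPy interpolation standing in for the buffer lookup; the list values are invented): a short list of coordinates resampled to audio rate becomes a smooth control signal, like reading a small buffer with a slow phasor~ index.

```python
# A short coordinate list turned into an audio-rate control signal via
# linear interpolation, analogous to buffer-lookup-style control.
import numpy as np

SR = 44100
coords = [0.0, 0.3, 1.0, 0.4, -0.2]   # a short list of positions (invented)
duration = 0.5                        # spread it over half a second

# Resample the list to audio rate with linear interpolation,
# like reading a short buffer with a slow ramp index.
list_times = np.linspace(0.0, duration, num=len(coords))
audio_times = np.arange(int(SR * duration)) / SR
control_signal = np.interp(audio_times, list_times, coords)

print(control_signal[:5], "...", control_signal[-5:])
```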

Great project!

B

