defer a method to change buffer~ content ???

Aug 1, 2007 at 10:52pm

Hi guys,

In one of my externals I have a method to normalize the content of a
buffer~ object. The normalizing routine is executed in response to a
typed message (‘normalize’). The algorithm finds the greatest absolute
sample value in the buffer~ and then multiplies every sample by its
reciprocal. Pretty straightforward. However, what’s not clear to me is
whether I need to defer or deferlow the part of the algorithm that
modifies the buffer~ content.

In code I have something like:

void myobj_normalize(t_myobj *x)
{
    t_buffer *b = x->buf;

    if (b && b->b_valid) {
        float *table = b->b_samples;
        long chan = x->chan;
        long frames = b->b_frames;
        long nc = b->b_nchans;
        int i;
        float maxval = 0.;

        // I believe I don't need to worry about deferring
        // when reading from a buffer~
        for (i = 0; i < frames; ++i) {
            float val = table[i * nc + chan];
            float absval = fabsf(val);
            if (absval > maxval) {
                maxval = absval;
            }
        }

        if (maxval > 0.) {
            // now I need to write to the buffer,
            // so thread priorities become an issue

            // defer to low priority ???
            x->scale = 1. / maxval;
            defer_low(x, (method)myobj_doscale, 0L, 0, 0L);
        }
    }
}
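For what it’s worth, the deferred myobj_doscale body would presumably just apply x->scale to each sample of the chosen channel. Here is that core logic in plain C, with the Max SDK types stripped out so it stands alone (the function names and the interleaved table layout mirror the post; this is a sketch, not the actual external):

```c
#include <math.h>

/* Sketch of what the deferred myobj_doscale body would do: apply the
 * precomputed reciprocal-of-peak scale factor to one channel of an
 * interleaved sample table. "table", "frames", "nchans", and "chan"
 * mirror the fields read in the post. */
static void doscale(float *table, long frames, long nchans, long chan,
                    float scale)
{
    long i;
    for (i = 0; i < frames; ++i)
        table[i * nchans + chan] *= scale;
}

/* The peak scan from the message handler, for completeness. */
static float peak(const float *table, long frames, long nchans, long chan)
{
    float maxval = 0.f;
    long i;
    for (i = 0; i < frames; ++i) {
        float absval = fabsf(table[i * nchans + chan]);
        if (absval > maxval)
            maxval = absval;
    }
    return maxval;
}
```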

Do I need to defer (or deferlow) the myobj_doscale method as above, or
not?

Thank you.

- Luigi

————————————————————
THIS E-MAIL MESSAGE IS FOR THE SOLE USE OF THE INTENDED RECIPIENT AND MAY CONTAIN CONFIDENTIAL AND/OR PRIVILEGED INFORMATION. ANY UNAUTHORIZED REVIEW, USE, DISCLOSURE OR DISTRIBUTION IS PROHIBITED. IF YOU ARE NOT THE INTENDED RECIPIENT, CONTACT THE SENDER BY E-MAIL AT SUPERBIGIO@YAHOO.COM AND DESTROY ALL COPIES OF THE ORIGINAL MESSAGE. WITHOUT PREJUDICE UCC1-207.
————————————————————


#33121
Aug 2, 2007 at 9:31pm

Luigi,

I don’t think you need to deferlow in this context. But it would probably be wise to wrap your main loop inside a critical region. Your code shouldn’t break if interrupted by another thread, but if the audio processing thread interrupts your loop, it’s likely to click.
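Max’s critical regions are entered with the SDK calls critical_enter(0) and critical_exit(0), which only exist inside Max; as a standalone sketch of the same guard pattern, here is the loop wrapped in a pthread mutex standing in for the critical region (all names below are illustrative, not Max API):

```c
#include <math.h>
#include <pthread.h>

/* Portable sketch of the critical-region idea: guard the read and write
 * loops so another thread cannot interleave with them. In a real
 * external the Max SDK calls critical_enter(0)/critical_exit(0) would
 * replace the pthread mutex, which is only a stand-in here so the
 * sketch compiles outside Max. */
static pthread_mutex_t g_buffer_lock = PTHREAD_MUTEX_INITIALIZER;

static void normalize_guarded(float *table, long frames, long nchans,
                              long chan)
{
    long i;
    float maxval = 0.f;

    pthread_mutex_lock(&g_buffer_lock);   /* critical_enter(0); */
    for (i = 0; i < frames; ++i) {
        float absval = fabsf(table[i * nchans + chan]);
        if (absval > maxval)
            maxval = absval;
    }
    if (maxval > 0.f) {
        float scale = 1.f / maxval;
        for (i = 0; i < frames; ++i)
            table[i * nchans + chan] *= scale;
    }
    pthread_mutex_unlock(&g_buffer_lock); /* critical_exit(0); */
}
```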

I haven’t done as much thread-sensitive programming as other kinds, so there may be some important point I’m overlooking. But pending a more authoritative response…

hope this helps,
Peter

#109803
Aug 2, 2007 at 9:58pm

Hi Luigi,

I have a nearly identical normalization function in my buffet~ object. I don’t use defer_low or any other protection. If you try to normalize a buffer while it’s in use, you would be almost guaranteed to get a discontinuity no matter how you protected your normalization function. Instead, I send a bang out from buffet~ whenever an operation has been completed. That way I can wait to play a buffer until I know that it has been completely normalized. If you find another solution, please let us know.
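Eric’s notify-when-done pattern can be sketched in plain C with a completion callback standing in for outlet_bang() (in a real external the callback would be an outlet_bang on an outlet created in the object’s new routine; the names below are made up for the sketch):

```c
#include <math.h>
#include <stddef.h>

/* Sketch of the completion-notification pattern: run the whole
 * operation, then fire a callback that stands in for outlet_bang().
 * The function pointer just makes the sketch self-contained. */
typedef void (*done_fn)(void *context);

static void normalize_then_notify(float *table, long frames, long nchans,
                                  long chan, done_fn done, void *context)
{
    long i;
    float maxval = 0.f;

    for (i = 0; i < frames; ++i) {
        float absval = fabsf(table[i * nchans + chan]);
        if (absval > maxval)
            maxval = absval;
    }
    if (maxval > 0.f) {
        float scale = 1.f / maxval;
        for (i = 0; i < frames; ++i)
            table[i * nchans + chan] *= scale;
    }
    if (done)
        done(context);  /* outlet_bang() would go here */
}

/* Minimal callback for demonstration: sets an int flag. */
static void set_flag(void *context) { *(int *)context = 1; }
```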

Cheers,

Eric

#109804
Aug 2, 2007 at 11:47pm

Peter and Eric,

thanks for your replies. You guys rock.

The reason I was in doubt about using a defer or deferlow call when
changing the content of a buffer~ is not to avoid clicks. Of course,
if I change the content while the buffer~ is being played, I will most
likely get clicks, but that (at least for the moment) is not my
concern. On top of that, the buffer~ object has a lightweight locking
mechanism provided by the b_valid and b_inuse flags, which is enough
to handle threading issues between buffer~ operations and the audio
processing thread.
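The save/set/restore idiom for the b_inuse flag, as seen in the old MSP SDK examples, might look roughly like this; the t_buffer struct is stubbed down to the fields the post uses so the sketch compiles outside Max, and whether this guard alone is sufficient is exactly the open question in this thread:

```c
/* Minimal stand-in for the fields of the old t_buffer struct that the
 * post reads (stubbed so this sketch compiles without the Max SDK). */
typedef struct {
    long   b_valid;
    long   b_inuse;
    long   b_frames;
    long   b_nchans;
    float *b_samples;
} t_buffer_stub;

/* Sketch of the classic save/set/restore guard around a buffer~ write:
 * mark the buffer in use while touching samples, then restore the
 * previous flag. Treat the exact idiom as an assumption to verify
 * against the SDK's own buffer~ examples. */
static void scale_channel_guarded(t_buffer_stub *b, long chan, float scale)
{
    long saveinuse, i;

    if (!b || !b->b_valid)
        return;

    saveinuse = b->b_inuse;
    b->b_inuse = 1;              /* tell other threads we are writing */
    for (i = 0; i < b->b_frames; ++i)
        b->b_samples[i * b->b_nchans + chan] *= scale;
    b->b_inuse = saveinuse;      /* restore the previous state */
}
```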

The dilemma I have is that, especially when buffers are long, altering
their content by iterating sample by sample might take a considerable
amount of time. If the ‘normalize’ message is sent at high priority,
it might be disruptive to the timing of the Max/MSP environment.

That’s why I thought it would be a good idea to always defer or
deferlow.

This is the thought process I went through.
I am far from being sure it is the right one.

- Luigi



#109805
Aug 3, 2007 at 11:02am

>
> The dilemma I have is that, especially when buffers are long,
> altering their content by iterating sample by sample might take a
> considerable amount of time. If the ‘normalize’ message is sent at
> high priority, it might be disruptive to the timing of the Max/MSP
> environment.
>
> That’s why I thought it would be a good idea to always defer or
> deferlow.
>

That’s a good point, and I do in fact use defer_low for the faster-than-realtime processing in bashfest~. But there is also a Pd version of bashfest~ which cannot use defer_low, and it performs comparably to the Max/MSP version.

Just for fun, I loaded a 21-minute stereo file into buffet~ and then normalized it with a sine wave running in the background. The normalization was almost instantaneous, with no disruption of the audio.

But you are right: this operation does noticeably affect Max-level timing, as tested with a metro object. Then again, there are so many ways to disrupt Max-level timing (I got a similar disruption by switching from Max to the Finder and browsing files) that Max simply cannot be relied on for steady beats at the control level, as I’ve written about elsewhere ad nauseam.

Since it’s easy enough to try, why don’t you write your normalization function both ways and let us know whether defer_low protects Max timing better than going without. (I would guess that it does, but you never know until you try.) The other issue is how much slower your normalization routine runs when deferred. I would generally prefer faster DSP performance to tight Max response, since I never use Max timing when accuracy is critical.

Best,

Eric

#109806
