Signal and I/O vector sizes

Gary Lee Nelson's icon

I am finding that I need a better understanding of this. I looked through
documentation but can't seem to find an explanation. Can someone point the
way?

Cheers
Gary Lee Nelson
Oberlin College
www.timara.oberlin.edu/GaryLeeNelson

AlexHarker's icon

What's the question exactly - what are they?

Not sure what you already understand, so apologies if this doesn't advance things at all....

(As I understand it)

So signals get processed in blocks (so many samples at a time) rather than one at a time. That's a vector in MSP terms.

I/O vs is how many samples are passed to/from the audio driver at a time. Therefore it affects latency between input - computer - output. It also affects your ability to take sudden processor hits. If the I/O vs is large enough then it doesn't matter if in the middle of an I/O vector you make a short but large demand on the processor (like doing an FFT), because the samples are still ready by the time the driver needs them, although within that vector some samples may be processed slower than 'real-time'.
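(To put a rough number on the latency side - this is just my own back-of-the-envelope arithmetic, not anything from the docs: one buffer's worth of delay is simply its length divided by the sample rate.)

```python
def io_latency_ms(io_vector_size, sample_rate=44100.0):
    """Latency contributed by one I/O buffer, in milliseconds."""
    return 1000.0 * io_vector_size / sample_rate

# At 44.1 kHz a 64-sample buffer adds about 1.45 ms,
# a 1024-sample buffer about 23.2 ms.
```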

Signal vs is how big the chunks are that MSP routines are called on (doing the work in Max). Smaller means more overhead but (esp. if you're running the scheduler in audio interrupt) tighter timing between the Max and MSP worlds. Keeping them bigger means less overhead incurred from calling the initialisation code of each DSP routine, so you get more out of your CPU (how much depends on what objects you are using).
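(The block-processing idea itself looks something like this - a toy sketch in Python, not MSP's actual code; the point is that the setup cost runs once per vector while the loop body runs once per sample:)

```python
def perform(block, state):
    """One perform call: setup runs once per vector (the overhead),
    the loop body runs once per sample (the real work)."""
    gain = state["gain"]                  # once-per-vector setup
    return [s * gain for s in block]      # per-sample processing

def run_dsp_chain(signal, vector_size):
    """Feed the signal through perform() one vector at a time."""
    state = {"gain": 0.5}
    out = []
    for i in range(0, len(signal), vector_size):
        out.extend(perform(signal[i:i + vector_size], state))
    return out
```

Halving vector_size doubles the number of perform() calls, so the once-per-vector part of the cost doubles while the per-sample part stays the same.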

I can't find any docs about it either. If you could explain what you want to know a bit more specifically maybe I could have another shot and put together a better explanation.....

Regards,

Alex

Gary Lee Nelson's icon

This relates to an earlier thread, Re: forwarding signals. I am using
SoundFlower as an array of audio busses within MSP, and Stefan Tiedje
commented that I would incur a delay of the signal and/or I/O vectors.
I verified that this is true. I was able to modify my patches so that
everything I am synthesizing, playing or mixing passes through the same
vectors. Now everything is in precise sync. My question now is: how small
is small for a vector? If I use vectors of 64 or 128, what is the cost over
larger vectors ... 256, 512, 1024?

On 10/13/07 5:08 PM, "Alex Harker" wrote:

>
> What's the question exactly - what are they?
>
> Not sure what you already understand, so apologies if this doesn't advance
> things at all....
>
> (As I understand it)
>
> So signals get processed in blocks (so many samples at a time) rather than one
> at a time. That's a vector in MSP terms.
>
> I/O vs is how many samples are passed to/from the audio driver at a time.
> Therefore it affects latency between input - computer - output. It also
> affects your ability to take sudden processor hits. If the I/O vs is large
> enough then it doesn't matter if in the middle of an I/O vector you make a
> short but large demand on the processor (like doing an FFT), because the
> samples are still ready by the time the driver needs them, although within
> that vector some samples may be processed slower than 'real-time'.
>
> Signal vs is how big the chunks are that MSP routines are called on (doing
> the work in Max). Smaller means more overhead but (esp. if you're running the
> scheduler in audio interrupt) tighter timing between the Max and MSP worlds.
> Keeping them bigger means less overhead incurred from calling the
> initialisation code of each DSP routine, so you get more out of your CPU (how
> much depends on what objects you are using).
>
> I can't find any docs about it either. If you could explain what you want to
> know a bit more specifically maybe I could have another shot and put together
> a better explanation.....
>
> Regards,
>
> Alex

Cheers
Gary Lee Nelson
Oberlin College
www.timara.oberlin.edu/GaryLeeNelson

Roman Thilenius's icon

Quote: Gary Lee Nelson wrote on Sat, 13 October 2007 16:05
----------------------------------------------------
> This relates to an earlier thread Re: forwarding signals. I am using
> SoundFlower as an array of audio busses within MSP and Stefan Tiedje
> commented that I would incur a delay in the signal and/or I/O vectors.
> I verified that this is true. I was able to modify my patches so that
> everything I am synthesizing, playing or mixing passes through the same
> vectors. Now everything is in precise sync. My question now is...how small
> is small for a vector. If I use vectors of 64 or 128 what is the cost over
> larger vectors ... 256, 512, 1024?

As I don't have your patch, I can't try it out.

But 32 would be normal, and the CPU difference between 32 and 1024 in real-life situations is maybe around 5%.
You will find that different I/O devices allow different settings; somewhere between 2 and 16, drivers usually refuse to process.

Shorter than 32 is not necessary for tight timing, and bigger vector sizes don't make much of a difference.
I only switch from 32 to 1024 when I have performance or stability problems during programming.
32 is what Pro Tools or Cubase use between channels and for plug-ins, if that gives you an idea.

AlexHarker's icon

Quote: Gary Lee Nelson wrote on Sat, 13 October 2007 16:05
----------------------------------------------------
> This relates to an earlier thread Re: forwarding signals. I am using
> SoundFlower as an array of audio busses within MSP and Stefan Tiedje
> commented that I would incur a delay in the signal and/or I/O vectors.

OK - I've read the thread, so I'm clear about what is going on. You also need to worry about the buffer size setting in Soundflowerbed if overall system latency is a concern.

For me the delay you're suffering here is not really worth the clean patching, and I'd go with Stefan's suggestion of send~s and receive~s inside abstractions. That gives you exactly the same-looking patch as you have now, but with adc~ and dac~ replaced by abstractions. It could noticeably take your overall latency down...

Anyway, each to his/her own, so assuming you still want to go this route, my thoughts are:

The exact overheads are unknown - it really is easiest to try it and see, because some objects have quite a lot of overhead in their DSP code and some hardly any - and you also get the overhead of each call in the DSP chain. The 5% that Roman suggests sounds plausible for an 'average' case, if such a thing exists.

You incur exactly twice the I/O vector size of delay in Max/MSP - once to get into Soundflower (output) and once out.
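(To put rough numbers on that - my own arithmetic, and the Soundflowerbed buffer value below is just a placeholder you'd substitute your actual setting into:)

```python
def soundflower_roundtrip_ms(io_vector_size, soundflowerbed_buffer,
                             sample_rate=44100.0):
    """Added delay: one I/O buffer into Soundflower, one back out,
    plus whatever Soundflowerbed itself buffers on top."""
    samples = 2 * io_vector_size + soundflowerbed_buffer
    return 1000.0 * samples / sample_rate

# e.g. 2 * 256 + 512 = 1024 samples, about 23 ms at 44.1 kHz
```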

I'd thought 64 was the normal vs inside DAWs, but that's just a memory of having read it somewhere, so Roman is probably right - somewhere in that region shouldn't tax the processor too much. It's only when you get down to 16 and below that I'd really expect to see a big difference. For I/O I'd aim as small as possible without losing samples or pushing the CPU too far.

I generally put my I/O buffer up high for FFT stuff, and other than that keep it as low as I can with a 64-sample vector size. In your case I'd want to run the I/O size as near to the vs as possible, because of the further Soundflower buffer you're going to incur at the end of it all.

But then I think you'd be better off making your own routing patch in Max that works like SoundFlower, using either send~ and receive~ (which will give you CPU and memory overheads), or, if you aren't opposed to the slightly dirty send and receive approach, doing it that way and conserving some power. You only need to program the abstractions once, you get exactly the same look and functionality, and you won't have to worry about extra latency at all....

Regards,

Alex

Stefan Tiedje's icon

Gary Lee Nelson schrieb:
> My question now is...how small is small for a vector. If I use
> vectors of 64 or 128 what is the cost over larger vectors ... 256,
> 512, 1024?

You could set up a test with poly~: create a poly~ with some significant
DSP demand, then just try different signal vector sizes and watch the
CPU meter. In general you could say that some part of the calculation
happens only once per signal vector, so that part doubles if you cut
the size in half. I don't know much about the ratio between the part
which is processed for each sample and the part which is processed once
per signal vector. Probably comparing a sigv of 1 with a sigv of 1024
would give an idea...
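As a toy cost model (the per-sample and per-vector numbers below are made up purely for illustration, not measured):

```python
def cost_per_second(vs, per_sample=1.0, per_vector=20.0, sr=44100):
    """Total work per second: a per-sample term that never changes,
    plus a per-vector term that grows as vectors get smaller."""
    return per_sample * sr + per_vector * (sr / vs)

# The ratio between sigv 1 and sigv 1024 shows how much of the
# total is call overhead under these made-up numbers.
ratio = cost_per_second(1) / cost_per_second(1024)
```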

I am usually happy with a sigv of 64...
And if my patch starts stuttering, I increase it...

Stefan

--
Stefan Tiedje------------x-------
--_____-----------|--------------
--(_|_ ----|-----|-----()-------
-- _|_)----|-----()--------------
----------()--------www.ccmix.com

Lambros Pigounis's icon

Hello

I am writing in this thread (even though it is so old) because of a Soundflower and buffer problem I get in Max.

I send audio from Ableton Live to Max via Soundflower.
In Max I run an Ambisonics spatialisation patch.
Then the final signal comes out of a MOTU 828.
The MOTU clock is set to internal.

I am experiencing some audio crackles, caused (I assume) by the combination of buffer sizes between the Live and Max buffer settings. I guess the Soundflowerbed buffer setting comes into it too.
Usually the Ableton Live buffer size is 128. In Max I have tried different combinations of buffers, but I still get the crackling sound.

I've been reading again and again about the logic of setting buffers, but in this case I am getting quite discouraged after so many attempts to combine a good chain of buffers between the two apps.
I'm sorry I can't send a recording of the kind of "crackling sound" I hear, but at this stage it is impossible. I hope the description helps...

Recently I started using JackPilot instead, and so far everything sounds fine. I guess I need to let the patch run a bit longer in order to confirm this.

I would really appreciate any insight and help!
Thanks in advance