Higher Sampling Rates vs. Lower Vector Sizes
I have been using Jitter for a while, but I am totally new to MSP.
I have a question concerning audio-rate triggering.
For a video installation I need a very accurate metro. I am not playing audio.
I want to build a system that is as stable as possible, because it has to run 10 hours a day.
I am using the metro~ object, which is basically a phasor~ / edge~ construction.
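For readers unfamiliar with that construction, here is a rough Python sketch (not Max code, just an illustration of the principle): a ramp that wraps at the metro frequency, with the wrap detected per sample, so ticks land on sample boundaries rather than on the scheduler's millisecond grid.

```python
# Simulates the phasor~/edge~ idea behind metro~: a ramp from 0 to 1 at the
# metro frequency; the sample where the ramp wraps marks a tick, so triggers
# are quantized to single samples rather than to the message scheduler.
def tick_samples(freq_hz, sample_rate, n_samples):
    ticks = []
    phase = 0.0
    for n in range(n_samples):
        phase += freq_hz / sample_rate
        if phase >= 1.0:          # wrap: this is what edge~ would detect
            phase -= 1.0
            ticks.append(n)
    return ticks

# A 10 Hz "metro" at 44100 Hz ticks roughly every 4410 samples:
print(tick_samples(10, 44100, 44100))
```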
I get good results with:
Sampling Rate 96000
I/O Vector Size 256
Signal Vector Size 32
Sampling Rate 44100
I/O Vector Size 64
Signal Vector Size 16
So my question is:
What is the most stable variant?
Higher sampling rates or lower vector sizes?
My machine is a Mac Pro running an OpenGL patch, so the CPUs have nothing to do.
Do you have "in audio interrupt" on? In general, it is better to have "Scheduler in Overdrive" off when working with Jitter (turning "Scheduler in Overdrive" off automatically turns off "in audio interrupt" as well).
If you turn it off, the "Signal Vector Size" shouldn’t have any effect on triggering. When "in audio interrupt" is on, Max and MSP are synced on a time grid (measured in samples) defined by the "Signal Vector Size". And "Signal Vector Size 32" at "Sampling Rate 96000" lasts almost the same amount of time (in milliseconds) as "Signal Vector Size 16" at "Sampling Rate 44100".
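The arithmetic behind that "almost the same" claim is just vector size divided by sample rate; a quick Python check:

```python
# Duration of one signal vector in milliseconds: vector_size / sample_rate * 1000.
def vector_duration_ms(vector_size, sample_rate):
    return vector_size / sample_rate * 1000.0

print(vector_duration_ms(32, 96000))   # ~0.333 ms
print(vector_duration_ms(16, 44100))   # ~0.363 ms
```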
When using [metro~], the vector size does not matter; it only affects message timing.
But I cannot imagine a situation where you would need MSP signal accuracy for
a Jitter video installation anyway. Which GL objects are you using that
take audio signals? :)
First, I want to thank you for your fast answers.
I love this forum!
I did my tests with
"Scheduler in Overdrive" : off
"in audio interrupt" : off
You are right, the "Signal Vector Size" does not have much effect on the timing.
Only the "I/O Vector Size" and the "Sampling Rate" do.
My question is:
Does changing the "I/O Vector Size" or the "Sampling Rate" affect the stability of my standalone?
To describe in more detail what I am trying to achieve:
I have built a server/client video player system.
One computer is the server, which generates the video frames and sends them with [udpsend] to the client computers.
The server is triggered by professional media server hardware and has to stay in sync with it over minutes.
The sync does not have to be frame-accurate, but it should not drift noticeably over a period of 5 minutes.
So I am trying to generate a master clock (timecode) with the most accurate timing possible in Max.
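One way to keep such a clock drift-free over minutes (a sketch of the general approach, not the actual patch) is to derive the timecode from a running sample count rather than from accumulated tick intervals, so per-trigger jitter never adds up:

```python
# Sketch of a sample-counting master clock: the timecode is computed from the
# total number of elapsed samples, so any per-block quantization of individual
# triggers never accumulates into long-term drift. The 25 fps frame rate here
# is an assumed example value.
def timecode(elapsed_samples, sample_rate, fps=25):
    seconds = elapsed_samples / sample_rate
    frames = int(seconds * fps)
    h, rem = divmod(frames, fps * 3600)
    m, rem = divmod(rem, fps * 60)
    s, f = divmod(rem, fps)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

# After exactly 5 minutes of audio at 44100 Hz:
print(timecode(5 * 60 * 44100, 44100))  # → 00:05:00:00
```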
My test patch is the following: metroTest.zip