Managing generated audio buffers
I feel like there is an obvious (read: standard) way of doing the following, but it continues to elude me.
Suppose I have a Jitter external. This external generates samples that are stuffed into a buffer (array). The number of samples placed in the buffer is not constant, so if a qmetro is triggering the external, a sequence of 10 bangs may generate a nonuniform sequence of sample counts:
[ 0 0 10 5 200 4 1 1 0 3]
The numbers above represent the number of samples placed within a static buffer upon the external receiving a bang.
Now, I would like the contents of this buffer to be used by MSP and dumped out as an output signal. So I have a buffer writer (the Jitter thread) and a buffer reader (the DSP thread). How can I get these to play well with one another?
I'm guessing the DSP thread has higher priority.
What is the right way to get the generated buffer into the DSP stream? It seems like there are a few problems here, one being that the DSP thread is going to lock the Jitter thread out of the buffer it is trying to fill. The DSP thread will simply pull however many samples it needs, while the Jitter thread waits (and possibly falls behind).
This type of buffer management must be handled by things like DirectSound. Does anyone know how?
-k
Hi Kyle,
What you are describing is a producer/consumer problem; you can find plenty about it by doing a web search for those terms. The typical solution is to use a mutex to grant exclusive access to the shared memory to either the producer thread or the consumer thread. However, in this case, because the audio thread has a higher priority and cannot wait for a lower-priority thread to finish, you have to be very careful not to pause the audio thread.
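To sketch what "very careful" means in practice: the usual pattern is for the perform routine to try the lock rather than wait on it, and to output silence (or reuse stale data) whenever the producer happens to hold it. A minimal sketch in C, using pthreads and made-up names (in a real external the Max SDK's critical-region calls would be the more idiomatic choice):

    #include <pthread.h>
    #include <string.h>

    typedef struct {
        pthread_mutex_t lock;
        float           buf[4096];  /* shared sample buffer; assumes n <= 4096 */
        int             count;      /* valid samples currently in buf */
    } shared_buf;

    /* Jitter (producer) thread: low priority, allowed to block on the lock. */
    void producer_write(shared_buf *s, const float *in, int n)
    {
        pthread_mutex_lock(&s->lock);
        memcpy(s->buf, in, n * sizeof(float));
        s->count = n;
        pthread_mutex_unlock(&s->lock);
    }

    /* MSP perform routine (consumer): must never block. */
    void consumer_perform(shared_buf *s, float *out, int n)
    {
        if (pthread_mutex_trylock(&s->lock) == 0) {
            int take = s->count < n ? s->count : n;
            memcpy(out, s->buf, take * sizeof(float));
            memset(out + take, 0, (n - take) * sizeof(float));
            /* shift leftovers down; a ring buffer would avoid this copy */
            memmove(s->buf, s->buf + take, (s->count - take) * sizeof(float));
            s->count -= take;
            pthread_mutex_unlock(&s->lock);
        } else {
            /* producer holds the lock: output silence rather than wait */
            memset(out, 0, n * sizeof(float));
        }
    }

The key property is that pthread_mutex_trylock returns immediately instead of blocking, so the worst case for the audio thread is one vector of silence, never a stall.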
It's certainly possible to do what you want to do, but would you consider outputting matrices and using jit.buffer~ to transfer the data into the MSP world? This might make your life a lot easier.
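If you do end up sharing a buffer directly, the standard lock-free alternative to a mutex here is a single-producer/single-consumer ring buffer, in which each thread only ever advances its own index. A rough sketch with made-up names, assuming exactly one writer (the Jitter thread), one reader (the MSP perform routine), and atomic int loads/stores on the target CPU (modern code would use explicit atomics or memory barriers):

    #define RING_SIZE 8192  /* must be a power of two */

    typedef struct {
        float         data[RING_SIZE];
        volatile int  write_pos;  /* advanced only by the producer */
        volatile int  read_pos;   /* advanced only by the consumer */
    } ring;

    /* Jitter thread: returns samples actually written (drops on overflow). */
    int ring_write(ring *r, const float *in, int n)
    {
        int i;
        int free_space = (r->read_pos - r->write_pos - 1 + RING_SIZE) & (RING_SIZE - 1);
        if (n > free_space) n = free_space;
        for (i = 0; i < n; i++)
            r->data[(r->write_pos + i) & (RING_SIZE - 1)] = in[i];
        r->write_pos = (r->write_pos + n) & (RING_SIZE - 1);
        return n;
    }

    /* MSP perform routine: pads with silence on underrun, never blocks. */
    void ring_read(ring *r, float *out, int n)
    {
        int i;
        int avail = (r->write_pos - r->read_pos + RING_SIZE) & (RING_SIZE - 1);
        int take = n < avail ? n : avail;
        for (i = 0; i < take; i++)
            out[i] = r->data[(r->read_pos + i) & (RING_SIZE - 1)];
        for (; i < n; i++)
            out[i] = 0.0f;  /* underrun: producer hasn't caught up yet */
        r->read_pos = (r->read_pos + take) & (RING_SIZE - 1);
    }

Because the producer only writes write_pos and the consumer only writes read_pos, neither thread ever waits on the other; overflow drops samples and underrun pads with silence, which is the same trade-off you'd have to make anyway when the two rates don't match.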
Ben