Are peek() and poke() somewhat processing intensive?
I recently congratulated myself heartily for coming up with the idea of storing variable values in the samples of a buffer with poke and then recalling them when needed with peek. This allowed me to clean up my code immensely, getting rid of many [history]s (which complicate things with their one-sample delay) as well as [latch]s (which can't "store" 0 easily, etc.). But around this time I began to get dropouts in my audio and had to raise my signal vector size from 32 to 64 to alleviate it, and I feel like things have maybe slowed down a bit in general. Is it generally inadvisable to use this method? It's so much simpler as a way to actually get a variable to 1) update immediately, 2) stick around sample after sample without having to be set again, and 3) be accessible from outside gen~. There are some places in my code where I have to do something like
Buffer stats("stats");
var1 = peek(stats, x);
every sample, though I try to avoid it. Is there some better way?
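For the record, the full pattern looks something like this in codebox (a minimal sketch; the buffer~ name "stats", the slot index, and the write condition are just my placeholders):

Buffer stats("stats"); // refers to a [buffer~ stats] in the parent patcher

// write: store the current input in sample slot 0 whenever in2 is nonzero
if (in2 != 0) {
    poke(stats, in1, 0);
}

// read: the stored value persists sample after sample, with no
// one-sample delay, and the parent patcher can read the same buffer~
out1 = peek(stats, 0);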
It doesn't seem too bad when I use it. It's adding another audio object to your patch, but I'd always guessed (perhaps incorrectly?) that if a 'real' audio playback object can pump out 44.1kHz audio without hassle, then using peek/poke at a fraction of that read/write speed must be proportionally far less demanding.
Yeah, but there are probably two or three places where I'm peek()ing every sample, so it's not at a fraction of that rate, it's at that rate. I'm just wondering if there's inherently something more taxing about peek() and poke() than, say, history. Maybe because it has to talk to a buffer external to gen~?
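For contrast, this is roughly the [history] version of the same idea in codebox, and where the one-sample delay I mentioned comes from:

History stored(0); // per-sample feedback memory internal to gen~

out1 = stored;     // reading gives the value written on the *previous* sample
if (in2 != 0) {
    stored = in1;  // a write only becomes readable on the next sample
}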
You can use [data] to keep it all in gen~, unless you need the buffer to be external for some reason.
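Something like this keeps the storage entirely inside gen~ (a sketch; the size and names are arbitrary):

Data scratch(4); // 4 samples of internal memory, invisible to the outside patcher

if (in2 != 0) {
    poke(scratch, in1, 0); // same poke/peek semantics as with Buffer
}
out1 = peek(scratch, 0);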
As for the performance issues, I have no advice; I'm just as curious as you are.
Yes, buffer is essential to me over data, as I have not found a quicker and more concise way to communicate with the outside patcher.
It's not as bad as you may think. gen~ reads every sample, but being an audio module itself, it does so in multi-sample blocks. So if you order the code right in codebox, the data should still be cached thanks to the CPU's data-cache-line prefetch. As for the amount cached, it varies a lot by processor depending on data cache and translation-lookaside-buffer size, so it's difficult to provide more precise insight. But it does make sense to put a few arithmetic operations between each fetch to allow the cache to fill.
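In codebox terms, that ordering suggestion would look something like this (purely illustrative; whether it actually helps depends on the processor and cache):

Buffer stats("stats");

a = peek(stats, 0);   // fetch...
y = a * 0.5 + in1;    // ...then do some arithmetic while nearby samples
                      // are (hopefully) being prefetched into cache
b = peek(stats, 1);   // this fetch is likely to hit the same cache line
out1 = y + b * 0.25;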