copying audio when out of resources
not for the first time, i recently ran into a situation where i could not record all the audio i need.
i am processing a bunch of audio signals, with dozens of source files to be played and dozens of files to be written at the same time.
what you can easily do when you hit the limit of your hardware with one method is to switch to the other one, i.e.:
- writing channels into buffers (when disk recording starts to drop out at n channels)
- writing directly to disk (when there is not enough RAM to use buffers)
but what do you do when neither works?
the only things i can currently do are to
- mix these two techniques
- use stereo or quad files instead of mono files
- write 24-bit files instead of 32-bit (see the rough throughput sketch after this list)
- or, when possible, do the job in multiple sessions one after another. (the app already has an NRT mode, so this is doable as long as channels do not need to be processed together for some reason.)
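to put rough numbers on the bit-depth option from the list above, here is a quick back-of-the-envelope sketch (the channel count and sample rate are just assumptions for illustration, not my actual project settings):

```python
# rough sustained disk write rate for uncompressed multichannel recording
# channel count and sample rate are assumed values, purely for illustration

def disk_rate_mb_per_s(channels, sample_rate, bytes_per_sample):
    """Bytes written per second across all files, in MB/s."""
    return channels * sample_rate * bytes_per_sample / 1e6

channels = 48      # "dozens of files to be written at the same time"
sr = 44100         # assumed sample rate

print(disk_rate_mb_per_s(channels, sr, 4))  # 32-bit float: ~8.5 MB/s
print(disk_rate_mb_per_s(channels, sr, 3))  # 24-bit:       ~6.4 MB/s
```

so the 24-bit option only shaves off about a quarter of the write load - it helps, but it does not change the order of magnitude.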
but none of these are very nice to do.
i also don't want to use another computer (i.e. a render server, like i do with video and databases), because the material is too big to constantly transfer over the LAN.
Hi Roman
i've been living with that for a long time... just throwing out first ideas: separate HDs/SSDs for reading and writing? RAID? or sharing the load, i.e. alternating between buffers while recording in real time, then saving to disk to free a buffer for the next recording?
In my mind as a user, a computer likes to organise its own rhythm. Audio is too slow and boring for it. How can we let it run some jobs fast and asynchronously? poly~ multithreading, polybuffer~?
-- thinking on the fly
friendly
michel
in fact using the SSDs will of course speed things up compared to my boot drive (which is, for a certain reason, still an HD - and i also had reasons to write the files there).
but besides making my own life easier i am also interested in a general solution - and i don't think that i will hit my HD's limit so soon... writing and reading the same number of files with Nuendo works fine... i guess it also has to do with Max running at 80% CPU (or 90% in NRT).
one of the "key problems" is that sfrecord~ simply does not care when a chunk cannot be written successfully - and it does not notify you either, it just produces crackling output files.
for the rest, yeah, you name the right things, but i don't see how they would make a difference.
for non-realtime i have yet another idea:
you could process stuff in NRT for 1 minute (or wherever your RAM limit for n channels of effect Y gets you), record to buffers, then stop the audio in order to be able to export the buffer content, then run the next minute.
of course this 1. will only work for output and 2. causes tons of weird automation needs regarding filenames and such. and the cuts won't be sample-exact (as in predictable or easy to calculate), since dac~ on/off operates on whole signal vectors.
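just to make that bookkeeping concrete, here is a minimal sketch of the chunk/filename logic, outside of Max (chunk length, vector size, total length and the name pattern are all assumptions for illustration; in practice the exact cut points also depend on when the on/off message lands within a vector):

```python
# sketch of the "render one minute at a time" bookkeeping
# all numbers and the filename pattern are assumptions for illustration

sr = 44100            # assumed sample rate
vector_size = 64      # assumed signal vector size; audio can only stop on a vector boundary
chunk_seconds = 60    # the "1 minute" chunks described above

chunk_samples = chunk_seconds * sr
chunk_samples -= chunk_samples % vector_size   # round cut points down to a vector boundary

total_samples = 600 * sr                       # assumed total render length (10 minutes)

start, index = 0, 0
while start < total_samples:
    end = min(start + chunk_samples, total_samples)
    filename = f"render_{index:03d}_{start}.wav"   # hypothetical name pattern per chunk
    print(filename, "covers samples", start, "to", end)
    start, index = end, index + 1
```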
same issue with downsampling: it will be almost impossible to put such an app (which also needs to run in realtime, and btw also involves vst~) into a poly~.
i have been experimenting a bit with turning the audio on and off in a 2 seconds on / 2 seconds off pattern while under NRT.
reading and writing chunks to disk now has a bit more air to breathe, at the cost of a longer overall rendering time.
so far it seems that this does not cause any file i/o errors from the sfplay~ and sfrecord~ objects.
but possibly stability in some other areas is at risk with that? (FFT? poly~? picky device drivers?)
a more in-depth test is yet to be done.
any hint on how to programmatically find out whether an audio file was correctly written or not would be appreciated.
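for what it is worth, here is a minimal sketch of the kind of check i mean, done outside of Max in Python once sfrecord~ has closed the file (the file name, expected length and click threshold are assumptions; it only checks that the file has the expected number of frames and has no huge sample-to-sample jumps of the kind that crackles tend to produce):

```python
# minimal sketch: verify a rendered audio file after it has been closed
# file path, expected duration and click threshold are assumptions for illustration
import numpy as np
import soundfile as sf   # pysoundfile, reads wav/aiff including 24-bit and float

def check_render(path, expected_seconds, tol_seconds=0.1, click_threshold=0.5):
    info = sf.info(path)
    # 1. length check: a file that was cut short (or padded) is suspicious
    actual_seconds = info.frames / info.samplerate
    if abs(actual_seconds - expected_seconds) > tol_seconds:
        return False, f"length {actual_seconds:.3f}s, expected {expected_seconds:.3f}s"

    # 2. crude crackle scan: look for very large sample-to-sample jumps
    data, _ = sf.read(path, always_2d=True)
    max_jump = np.abs(np.diff(data, axis=0)).max()
    if max_jump > click_threshold:
        return False, f"suspicious discontinuity (max sample-to-sample jump {max_jump:.2f})"

    return True, "looks ok"

ok, msg = check_render("render_000_0.wav", expected_seconds=60.0)  # hypothetical file
print(ok, msg)
```

this is of course only a heuristic - a dropped chunk could also show up as a silent gap rather than a click, so the jump scan is no guarantee.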