May 5, 2006 at 2:03pm
It’s been a few years since I even looked at sfplay~ for any “heavy lifting” as far as sampler instruments go… However, since most commercial samplers today are disk-streaming based, I wonder if anyone has any reports on sfplay~ performance with current hardware (you know, S-ATA, 8 MB cache, and so on…)? With the addition of playback speed, sfplay~ seems a pretty decent candidate for certain sampler functions…
Also, looking at the documentation, it looks as though either the “open” or “preload” message will cause sfplay~ to load the sample “head” into RAM (disk buffer?), based on the disk buffer size. Given a large disk buffer size, is the playback of this first portion handled in essentially the same way (by MaxMSP, not me) as playing from the buffer~ object? I guess what I’m asking is whether, given a hefty disk buffer size, sfplay~ can compete with play~/buffer~ in any way, shape, or form.
Sorry for what sounds like a “newbie” question. I assure you, it’s not. What I’m after is the “inside dirt” on sfplay~… I’m really drawn to the simplicity of file-handling in sfplay~/sflist~, and of course the potential for minimizing RAM usage, but I don’t want to get into a huge programming job just to find it’s not up to the task when it comes to playback.
May 7, 2006 at 12:22pm
Okay, I’ll try re-phrasing this…
Given a short sample: samp_144k.wav (a sample 144k in size)
Will play~/groove~ plus a buffer~ sized to the sample (144k -> ms @ sr) play back samp_144k.wav in essentially the same way as an sfplay~ with a disk buffer of 144k? That is, are play~/groove~ + buffer~ any more efficient internally to MSP than sfplay~ + disk buffer?
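As an aside, the "144k -> ms @ sr" conversion mentioned above can be sketched as a quick calculation (a hypothetical helper; the 16-bit mono, 44.1 kHz, and 144 KiB = 147,456-byte assumptions are mine, not from the thread):

```python
def bytes_to_ms(num_bytes, sr=44100, bit_depth=16, channels=1):
    """Convert a raw PCM byte count to a buffer~ length in milliseconds."""
    bytes_per_sample = (bit_depth // 8) * channels
    num_samples = num_bytes / bytes_per_sample
    return 1000.0 * num_samples / sr

# A 144 KiB (147,456-byte) 16-bit mono file at 44.1 kHz:
print(round(bytes_to_ms(147456), 1))  # → 1671.8
```

So a buffer~ for this file would need to be sized to roughly 1672 ms; a stereo file of the same byte count would of course come out at half that duration.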
My dilemma is that either I’ll be doing a lot of reading to buffer~ objects and playing from groove~ during playback, or I’ll be preloading a bunch of samples (with a large disk buffer setting) to sflist~ and playing them back from sfplay~ during playback. Both will be disk intensive. It seems to me that sflist~ will be easier on the system for preloading, on average, because it’s only ever loading the disk buffer, whereas buffer~ must load all of the sample data to be played. Because the disk buffer is always the same size, the workload should remain roughly consistent when using sfplay~, whereas with buffer~, the disk activity will spike each time a new sample needs to be buffered. Is this clear thinking? So, given that the disk activity will be high either way, which is really the better option?
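The workload argument above can be made concrete with a toy model (all sizes are hypothetical): sfplay~’s per-cue preload is capped at the disk-buffer size, while buffer~ must read the whole file, so its disk activity grows with file size:

```python
# Hypothetical sizes: sfplay~ preloads at most one disk buffer per cue,
# while buffer~ reads the entire file into RAM.
DISK_BUFFER = 144 * 1024                        # assumed preload size in bytes
file_sizes = [144_000, 2_000_000, 30_000_000]   # three hypothetical samples

sfplay_reads = [min(size, DISK_BUFFER) for size in file_sizes]  # head only
buffer_reads = list(file_sizes)                                  # whole file

print(sfplay_reads)   # → [144000, 147456, 147456]  (roughly constant)
print(buffer_reads)   # → [144000, 2000000, 30000000]  (spikes with file size)
```

This is only a model of bytes read at preload time; it says nothing about the ongoing streaming reads sfplay~ performs during playback.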
May 8, 2006 at 6:55pm
This is a very relevant question, but I cannot supply the answer. During some preparations for a class I was giving, I made a 12-voice sampler using sfplay~ instead of my normal buffer~-based techniques. I could not hear any extra latency nor observe any disk-use problems, but I didn’t spend much time searching for them, either.
This sort of sampler has some obvious advantages over a buffer~ scheme, particularly when it comes to larger installations or a great amount of sound material. One problem which I can imagine is interruptions in smooth disk use when the OS needs to do something else.
So: who has the real information?
May 8, 2006 at 7:11pm
Woohoo!!! A reply!!!
Obviously, I’m with you on the importance of this question. I’m embarking on a massive re-write of a patch I made, and I’m seriously considering the sfplay~ route. I’m glad to hear you had a decent experience with it. Personally, I can’t see any great reasons why it would be a problem, other than what you mention about HD activity level — though you add an important point in mentioning system-related/background HD access.
However, in my situation I’m just looking for the “lesser of two evils”, since with constant reloading/resizing of buffer~s I’m still at the mercy of the HD. (And besides, there is no great way around system priorities, short of renicing the app (or Max), which actually appears to do very little, in my experience.) So, on a semi-educated-guess level, I don’t see any major problems with going for sfplay~… but it would be nice to get the “official” position, in case there’s some evil lurking behind sfplay~’s innocent exterior.
Anyway, thanks for the support. Hopefully we’ll see a little more action on this question now!
May 10, 2006 at 5:27am
It depends how often you play the same material. If you play everything
When the Giga Sampler arrived at the market there was a memory problem.
As a sampler usually wants to play a sample more than once, if you load it
I can easily fit hours’ worth of samples into the memory. If you still
If you have a lot of sample cues which you want to load from disk, you
These are logical assumptions… To prove them, you should test it. You
Share the results…
May 10, 2006 at 6:50am
Thanks for the input. My system will change the samples in memory fairly regularly, but only to a certain point; during a certain stage of work. After that, the samples are likely to stay the same, so it’s hard to say where my model sits in the whole Giga scheme. But I have considered the possibility of using _both_ sfplay~ and buffer~/groove~. If I could devise an abstraction/poly~ which took the same arguments for either of the two options, then I could create a system which dynamically “decided” which to use, based on context/usage (and “re-usage”). But I’ll take your advice and build a patcher to test them… If I can manage to build both versions using the same arguments, it should make testing very simple.
Also, in case you’re wondering why all this is an issue: yes, I am dealing with _far_ too much sample material to fit into memory. Ouch!
Jul 11, 2013 at 9:51am
This thread was written quite a while ago, but I am still encountering similar problems.
Jul 11, 2013 at 10:47am
As long as we are not talking about several gigabytes of audio, I would like to make the following statement: sampling should normally always happen from RAM.
Zero latency from disk is impossible in a real-time situation; it can only work in an application where the note events to be played are known in advance, so that the audio files can be loaded _before_ they are needed.
You might think it is a bit of overkill to load 10,000 files into 10,000 buffers, but this seems to be your only option when you want to have 10,000 samples ready to be played live from a physical controller.
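For a rough sense of whether 10,000 buffers actually fit in RAM, here is a back-of-envelope estimate (the one-second average sample length, 16-bit depth, and mono format are hypothetical assumptions for illustration):

```python
def total_ram_mb(num_files, avg_seconds, sr=44100, bit_depth=16, channels=1):
    """Estimate the RAM needed to hold num_files PCM samples, in MiB."""
    bytes_per_second = sr * (bit_depth // 8) * channels
    return num_files * avg_seconds * bytes_per_second / (1024 ** 2)

# 10,000 one-second 16-bit mono files at 44.1 kHz:
print(round(total_ram_mb(10000, 1.0)))  # → 841
```

Under those assumptions it is well under a gigabyte; longer, stereo, or higher-resolution files scale the figure up proportionally.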
Jul 11, 2013 at 11:11am
I wouldn’t consider streaming samples on the fly :/ I’ve been trying similar things and facing similar problems. The solution I found was based on, let’s say, “dynamic allocation” and “garbage collection”.
0. An empty buffer (in my case polybuffer~)
As far as I’ve been measuring, hitting the list to retrieve the path, filling the buffer, and starting your playback object for the first time will introduce a really small latency that is acceptable (I can’t hear it).
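The “dynamic allocation / garbage collection” idea described above can be modeled outside Max as a least-recently-used cache (a Python sketch; `SampleCache`, `load_fn`, and `max_slots` are hypothetical names I’m using for illustration, not Max objects):

```python
from collections import OrderedDict

class SampleCache:
    """LRU cache modeling the "dynamic allocation / garbage collection"
    scheme: keep at most max_slots samples loaded, and evict the least
    recently played one when a new sample must be loaded."""

    def __init__(self, max_slots, load_fn):
        self.max_slots = max_slots
        self.load_fn = load_fn      # e.g. reads a file into a buffer
        self.slots = OrderedDict()  # path -> loaded data, oldest first

    def play(self, path):
        if path in self.slots:
            self.slots.move_to_end(path)        # cache hit: no disk read
        else:
            if len(self.slots) >= self.max_slots:
                self.slots.popitem(last=False)  # evict oldest ("GC")
            # first hit: this is where the small one-time load latency occurs
            self.slots[path] = self.load_fn(path)
        return self.slots[path]

# Toy usage with two slots and a dummy loader:
cache = SampleCache(max_slots=2, load_fn=lambda path: f"<{path} loaded>")
cache.play("kick.wav")
cache.play("snare.wav")
cache.play("kick.wav")   # hit: kick.wav stays loaded, no disk access
cache.play("hat.wav")    # cache full: snare.wav is evicted
```

The design choice here mirrors the forum post: a fixed pool of buffers bounds RAM use, while repeated triggers of the same file avoid any disk access after the first load.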
Jul 13, 2013 at 8:08am
Thanks for your replies!