Building a sampler using every chromatic pitch?

Ben Vining:

Hello all,

I'm working on a sampler in Max, and timbre and sound quality are the most important considerations for me. So instead of using just one or two samples and shifting them to every pitch, I want an individual sample for every chromatic note from C1 to C8. My plan is to create a buffer~ and groove~ object for each sample, load all of the samples into their buffers~ on startup, then play/stop the samples with groove~ in response to MIDI keyboard input.
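To give an idea of what I mean, here's a rough sketch of how the per-note pairs could even be created by script from a [js] object instead of patched by hand (the file naming and the C1 = MIDI 24 range are just my assumptions):

```
// sketch for a [js] object: builds one buffer~ + groove~ pair per note.
// Assumes stereo sample files named "piano_24.aif" ... "piano_108.aif"
// (MIDI 24 = C1, 108 = C8) sitting somewhere in Max's search path.

autowatch = 1;

var LOW = 24;   // C1
var HIGH = 108; // C8

function build() {
    var p = this.patcher;
    for (var n = LOW; n <= HIGH; n++) {
        var name = "smp" + n;             // buffer~ name for this note
        var file = "piano_" + n + ".aif"; // hypothetical file naming
        var col = (n - LOW) % 12;
        var row = Math.floor((n - LOW) / 12);
        var x = 20 + col * 160;
        var y = 20 + row * 90;
        // buffer~ <name> <file> loads the sample as soon as it's created
        p.newdefault(x, y, "buffer~", name, file);
        // groove~ <buffername> 2 -> a stereo player reading that buffer
        p.newdefault(x, y + 40, "groove~", name, 2);
    }
}
```

Send it a build message once and the whole bank appears; the groove~ objects would of course still need to be connected to the MIDI routing and the outputs.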

Is there some blatantly obvious reason not to do this? Sure, the patching is repetitive and time-consuming, but will there be any performance drawbacks from having so many buffers, assuming I don't care that it takes longer to program? If my math is right, it actually shouldn't eat up too big a chunk of my computer's RAM --
assuming each sample is about 3 seconds long (and looped with groove~), then you'd have:
44100 (sample rate) * 3 sec * 2 channels * 8 (bytes per sample) = 2,116,800 bytes, or about 2.12 MB per sample.

If I'm storing a sample for every chromatic note from C1 to C8, that's 85 different samples, so the entire thing should take about 180 MB of RAM. And my MacBook Pro has 8 GB of RAM, so this should be no problem, right?
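Here's the same back-of-the-envelope math as a couple of lines you could drop into a [js] and tweak, using the same figures as above:

```
// back-of-the-envelope RAM estimate, same figures as above
var sr = 44100;          // sample rate
var seconds = 3;         // length of each sample
var channels = 2;
var bytesPerSample = 8;  // the 8-byte figure used above
var notes = 85;          // C1..C8 chromatically

var perSample = sr * seconds * channels * bytesPerSample; // 2,116,800 bytes
var total = perSample * notes;                            // ~180 MB
post("per sample:", perSample / 1e6, "MB; total:", total / 1e6, "MB\n");
```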

If anyone wants to see the patcher as I have it right now, I can post her here. She's already turning into a bit of a beast, as you might imagine.

VincentC:

Hi, just a simple question, but why not put the samples into one big file and play with the offsets and loop regions in your groove~? I've seen this kind of multi-sample file in Ableton packs, so it makes sense to me. Also, from a programming perspective, you could make one single buffer~ and build the players that reference it inside a poly~ for easy polyphony.
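For example (just a sketch, and the dict name and layout are something I'm inventing here), each poly~ voice could hold a [js] that takes the incoming note, looks its region up in a dict, and sends out the numbers you would route to groove~'s position and loop inlets:

```
// sketch for a [js] inside a poly~ voice, single-big-buffer approach.
// Assumes a dict named "regions" whose keys are MIDI note numbers and
// whose values are [startMs, loopStartMs, loopEndMs] inside the big buffer.

inlets = 1;
outlets = 3; // 0: playback position (ms), 1: loop start (ms), 2: loop end (ms)

var regions = new Dict("regions");

function msg_int(note) {
    var r = regions.get(String(note)); // e.g. [12000., 12500., 14200.]
    if (r == null) {
        post("no region stored for note", note, "\n");
        return;
    }
    // right-to-left: set the loop window first, then jump the play position
    outlet(2, r[2]);
    outlet(1, r[1]);
    outlet(0, r[0]);
}
```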

Ben Vining:

If each sample is only about 3 seconds long, it may have to be looped to sustain longer notes. And each sample will have different sustain loop start and end points, which I would have to fetch and pass to groove~ for each different sample -- all while trying to get notes to fire with no latency in real time. So it actually seems more straightforward, and potentially more reliable, to create an individual buffer~ and groove~ for each individual sample. That way, when the samples are loaded on startup, I can fetch the sustain loop data for each sample and tell every single groove~ what sustain loop points to use, and then I'm set: I don't have to do any calculations or attribute-changing to fire any note instantly, it just takes a "startloop" message. Right? Or am I missing something obvious?
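Roughly what I have in mind for that startup step, sketched as a [js] (the dict name and layout are placeholders for however I end up storing the loop metadata):

```
// sketch for a [js]: at patch load, read each note's sustain loop points
// from a dict named "looppoints" (keys = MIDI note numbers, values =
// [loopStartMs, loopEndMs]) and send them out as "<note> <start> <end>",
// to be routed on to that note's groove~ loop inlets.

outlets = 1;
var LOW = 24, HIGH = 108; // C1..C8

function loadbang() {
    var loops = new Dict("looppoints"); // hypothetical dict name/layout
    for (var n = LOW; n <= HIGH; n++) {
        var pts = loops.get(String(n));
        if (pts == null) continue;
        outlet(0, n, pts[0], pts[1]);
    }
}
```

After that, firing a note really is just a "startloop" (and a "stop") to the right groove~.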

VincentC:

Even assuming there is no significant difference between using one large buffer~ and 85 small ones (I can't say for sure, I've never checked), I would still only need 10 instances of a player (or maybe 20 for forearm-cluster playing), while you would need 85, one for each of your notes. Regarding fetching, I did check that (with cpuclock): getting an array from a specific key in a dict costs 0.02 to 0.06 ms of CPU time, which effectively rounds to zero latency.

Source Audio:

I don't want to make suggestions, just to report that I have built a few sample players for customers in the past using sfplay~ instead of buffer~. Preloading a bunch of samples of any length, even very long ones, takes very little time, and if one organises the polyphony, it can mean only as many players as needed voices.
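Just to illustrate (the file names and the note range are invented), the preloading and the note-to-cue mapping can be as small as this in a [js]:

```
// sketch for a [js] feeding sfplay~.
// At load time it sends "preload <cue> <file>" messages for every note;
// the cue number is simply the MIDI note, so a note-on can be passed
// straight through as the cue to fire on whichever player/voice is free.

outlets = 1;
var LOW = 24, HIGH = 108; // C1..C8

function loadbang() {
    for (var n = LOW; n <= HIGH; n++) {
        // cues 2 and up are preload cues in sfplay~; notes start at 24,
        // so the note number itself is a valid cue number
        outlet(0, "preload", n, "piano_" + n + ".aif");
    }
}

function msg_int(note) {
    outlet(0, note); // play that note's cue
}
```

You would still need a little voice-allocation logic in front of the players, but the sample management side stays tiny.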