using waveform display to edit soundbites/samples
Hello.
I'm working on a patch right now that is meant to trigger fast percussive samples via a MIDI drum pad. For my own uses, I could simply take the sounds I want into a DAW, edit them, and bounce them before loading them into Max, but I'm looking for an easy, intuitive way to do this in Max itself, especially since I may end up doing some of this sample editing on the fly.
My goal is to use the waveform~ display to set the start and end points of a sample within a buffer~ object. I've seen how this can be done in the waveform~ help file with the groove~ object, but I don't want to loop the samples.
I've been using the play~ object in my patch up to this point, but it doesn't seem to have these start and end point capabilities. Is there some way to set the start point of what's stored in the buffer~ via the waveform~ display, NOT have it looped, and still be able to trigger it in rapid succession?
On the same topic, is there a way to get these samples to finish playing without the next trigger cutting the first one off? For example, if I have a 1-second sample in a buffer~ and am playing faster than once per second, I want each successive sample to play for its entire 1-second duration (essentially on top of each other) regardless of how fast I'm playing.
Hope my questions aren't too confusing.
Thanks!
> On the same topic, is there a way to get these samples to finish playing
> without the next trigger cutting the first one off?
If you want to hear two sounds at the same time, you need two playback units,
i.e. two [groove~] objects or the like.
> For example, if I have a
> 1-second sample in a buffer~ and am playing faster than once per second, I want
> each successive sample to play for its entire 1-second duration (essentially
> on top of each other) regardless of how fast I'm playing.
If the number of overlapping sounds is not predictable, you are better off putting the playback system
(groove~, for example) inside a [poly~]. You will then be able to use several
voices, up to a maximum that you decide.
[poly~] is difficult to learn at the beginning, but it is worth it.
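For example, a voice subpatch might look something like this if you go the play~/line~ route (just a rough sketch, one of several ways to do it; "drums" stands in for whatever your buffer~ is called):

[in 1]                          <- the (start, 0., end, duration) list from the parent patch
 |
[t l b]                         <- the bang fires first, then the list passes on
 |        \
 |        [1( --- [thispoly~]   <- mark this voice busy so new "note" messages skip it
[line~]
 |      \
 |       (right outlet bangs when the last ramp ends) --- [0( --- [thispoly~]   <- free the voice again
[play~ drums]
 |
[out~ 1]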
I would second the sentiments on poly~. It will handle multiple instances of your sample patch. If you set it up correctly, you could use multiple waveform~ objects to read different parts of your buffer, as well.
Here is an example of using waveform~ to output the necessary information to trigger a sample with play~, using line~. If you go through the MSP tutorials about poly~, you should be able to pack this into a patch that allows you to trigger multiple samples at once from one buffer (including simultaneous triggers of the same part of the sample).
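In case the patch doesn't come through, the rough shape of it is this (the buffer~ name "drums" is just an example, and the list-building stage can be done however you like):

[waveform~]                      <- displaying buffer~ drums; click-drag to make a selection
 |       |
(start) (end)                    <- the first two outlets: selection start and end, in ms
  \      /
 build the list (start, 0., end, end minus start)   <- e.g. with [pack], [expr $f2 - $f1] and a [$1 0. $2 $3( message box
      |
 store the result (a message box's right inlet works) and bang it out on every MIDI hit
      |
   [line~]                       <- jumps to the start instantly, then ramps to the end over the selection length
      |
   [play~ drums]                 <- reads the buffer~ at whatever position (in ms) line~ feeds it
      |
   [dac~]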
If you get stuck, just come back for more help!
Wow, guys. Thanks for the amazing advice. I should be able to get this working exactly how I want based on this info, but I'm curious, Nick, if you could explain how line~ is getting play~ to work this way. zl sends line~ 4 floats, the first of which is clearly the start point, the third the end point, and the fourth the time in milliseconds, but the second number is always 0. Is that just a placeholder of sorts?
Thanks again!
Nope! Every pair of floats that line~ receives tells line~ to go to a specific value over a certain number of milliseconds. So the first two numbers are the starting point of the sample and a 0, which tells line~ to jump to the starting point in 0 milliseconds. The following two numbers tell line~ to ramp from the start point to the end point over the given time.
You can get the same effect by sending line~ a message in the form [(start point), (end point) (time)], but the comma splits the instructions into two separate events, which is less efficient and could lead to worse timing, etc.
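Concretely, if your selection runs from 500 ms to 1500 ms, the list sent to line~ would be

500. 0. 1500. 1000.

i.e. jump to 500 instantly, then ramp to 1500 over 1000 ms. Since play~ treats that signal as a position in the buffer~, you hear that one-second slice once, at normal speed.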
Gotcha.
OK, here's something I'm wrestling with now.
I'll probably end up with 8 samples that need to be triggered separately, though the fastest they'll be played is 16th notes at around 120 bpm, so 8 times a second. Assuming no sample exceeds 2 seconds in length, I would need 16 copies of the subpatch loaded via a poly~ to ensure the samples can generally overlap without interrupting each other, right?
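What I have in mind, roughly, is [poly~ samplevoice 16] ("samplevoice" being whatever I end up naming the player subpatch), and then sending it messages like

note 500. 0. 1500. 1000.

since, if I've read the tutorials right, anything prefixed with "note" goes to the first voice that hasn't flagged itself busy via [thispoly~], so overlapping hits each get their own player.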
Now the dilemma is the fact that some samples will need to go through certain subpatched effects at specific moments while others will need to go through a different effect (or none at all), and these effects may need to change dynamically, i.e. the effect a sound goes through may change in real time. To me, this sounds like it's going to be a lot of routing, a lot of subpatching, and many instances of poly~. Is there a simpler way to do this that comes to mind?
I'm hitting the drawing board again!
Just to show that I'm thinking-
My idea is to have a subpatch with the aforementioned samples loaded into buffer~ objects, routed via gate~ objects to the effects, which will each reside in their own poly~ patch.
So, something like this:
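In text form, roughly (the effect names are just placeholders, and only one of the eight chains is drawn):

(player output for one pad)
   |                             <- into gate~'s right (signal) inlet; an int in the left inlet picks the destination, 0 = closed
[gate~ 3]
 |               |               |
[poly~ fx_delay]  [poly~ fx_dist]  (dry)   <- each effect in its own patch, switchable on the fly
     \           |          /
        summed with [+~] objects
                |
             [dac~]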
The difference being that the finished version will have all 8 buffer~ objects set up.
Sorry about the mess.