Poly~ groove~ headache


    Jan 04 2007 | 8:51 pm
    Hi
    I am trying to create a simple poly~ patch that reads from a long (10 min) audio file and randomly plays short (5 sec) sections at 1 sec intervals, thus creating a multi-layered polyphonic sound.
    I am finding it very frustrating to get anything working properly. I am using poly~ but am having problems when sending random "setloop" points to groove~ in each voice.
    I have included the patch and subpatcher below. Just copy and select "new from clipboard"
    I realise this is very simple, but it is driving me mad!
    Any help (entirely rewritten patch;-) would be much appreciated.
    I hope this is clear.
    Thanks
    David
    Parent patcher
    max v2;
    Subpatch name it "time2~"
    max v2;

    • Jan 04 2007 | 9:19 pm
    • Jan 04 2007 | 9:38 pm
      On 04 Jan 2007, at 21:51, David Atkinson wrote:
      >
      > Hi
      hi
      there are a few problems with your patch... ;)
      you should have another look at how to use groove~
      also check out what prepend set does to a message box
      and polyphony management is taken care of for you by the poly~ object.
      no need for setting the target manually in this case.
      see if the attached files help you.
      volker.
      /* save as time22 */
      /* main patch */
      
    • Jan 04 2007 | 11:13 pm
      Hi Volker
      Thanks so much for the patch. It is doing exactly what I intended. I apologise for my original patch being in such a bad state. I find using the poly~ object with sample players very frustrating.
      A few questions if you can spare the time.
      Am I right in assuming in your patch, poly~ is looking after voice muting?
      What is "prepend note" communicating with?
      Also, would it be best to use ADSR~ to set different envelopes for each voice from the main patch?
      Kind regards
      A very grateful
      David
    • Jan 04 2007 | 11:47 pm
      On 05 Jan 2007, at 00:13, David Atkinson wrote:
      > A few questions if you can spare the time.
      'tis near bedtime over here zzzz
      > Am I right in assuming in your patch, poly~ is looking after voice
      > muting?
      nope, you have to manage that yourself. poly~.help will help you to
      find out how to do it.
      > What is "prepend note" communicating with?
      well, with poly~ ;) again poly~.help has the answers.
      with "note" or "midinote" messages poly~ will automatically manage
      polyphony.
      while you can append more or less anything to a "note" message,
      "midinote" assumes you want to send midi note data, i.e. pitch and
      velocity pairs.
      > Also, would it be best to use ADSR~ to set different envelopes for
      > each voice from the main patch?
      you can use adsr~, but in your case i would rather stick with line~.
      i find that adsr~ is a little cpu hungry.
      and since you are not dealing with 'conventional' noteon/off data, i
      doubt that it will be much easier to use in your patch.
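The line~ recommendation above is easy to reason about outside Max: line~ just produces piecewise-linear ramps from a list of target/time pairs. A minimal Python sketch (assuming NumPy; `line_env` is a made-up helper name, not a Max or library API) of the kind of envelope a target/time list sent to line~ yields:

```python
import numpy as np

def line_env(segments, sr=44100, start=0.0):
    """Piecewise-linear envelope in the spirit of MSP's line~:
    `segments` is a list of (target, ramp_time_ms) pairs, e.g.
    [(1.0, 10), (0.0, 4990)] for a 10 ms attack and a long release
    over a 5 s grain."""
    out = []
    current = start
    for target, ms in segments:
        n = int(sr * ms / 1000.0)              # ramp length in samples
        out.append(np.linspace(current, target, n, endpoint=False))
        current = target
    return np.concatenate(out)

# 10 ms fade-in, 4990 ms fade-out: one envelope per 5-second grain
env = line_env([(1.0, 10), (0.0, 4990)])
```

Multiplying each voice's signal by such an envelope is all adsr~ would be doing here, minus the note-on/off state machine, which is why line~ is the lighter choice.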
      bonne nuit.
      volker.
    • Jan 08 2007 | 1:24 am
      Hi, volker,
      I found your patches very interesting but cannot understand the relationship among trapezoid~, phasor~, and sah~. Could you explain a little bit about what you did in these patches, especially the first one?
      Thank you very much.
      PS: I love the result of the last one, too. Very cool. I hope I can learn about how you did it.
      Best,
      Chien-Wen
    • Jan 08 2007 | 3:23 pm
      hi chien-wen,
      > I found your patches very interesting but cannot understand the
      > relationship among trapezoid~, phasor~, sah~. Could you explain a
      > little bit about what you did in these patches, especially the
      > first one ?
      i guess you are referring here to the patches i sent in the
      crossfading loop thread?
      http://www.cycling74.com/forums/index.php?t=msg&th=23747&start=0&rid=0&S=46b0543e4c642afe4aefa7ee39531db8
      the question was how to loop a certain amount of a soundfile without
      having clicks at the loop points.
      if you select an arbitrary part of a soundfile for playback/loop you
      will most likely get a discontinuity in the sample flow, because the
      beginning of the selection doesn't fit perfectly to the end
      (literally, you make a jump backwards in time). this will be audible
      as a click, which in some cases is not desirable.
      to avoid this, one can make the ends meet by a fade in at the
      beginning and a fade out at the end of the loop.
      so, no more clicks, but you have changed the actual sample values of
      the loop (by forcing an envelope, a window) - you can call this an
      amplitude modulation. this is what the first patch does.
      the phasor~ multiplied by the loop length is the read-pointer, that
      reads data with the play~ object out of buffer~.
      with trapezoid~ you can take care of the fading at the loop points
      (you can observe this yourself by hooking up a scope-object to
      phasor~ and trapezoid~ to look at the signals). the trapezoid signal
      is multiplied by the actual sample values from play~.
      (see attachment below).
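The phasor~/trapezoid~/play~ combination described above can be sketched offline in a few lines of Python (assuming NumPy; `trapezoid` and `windowed_loop` are made-up helper names, not the Max objects themselves):

```python
import numpy as np

def trapezoid(n, ramp_frac=0.1):
    """Trapezoid window over one phasor~ cycle: ramp up, flat 1.0, ramp
    down -- the amplitude envelope that forces the loop points to zero."""
    phase = np.arange(n) / n                             # 0..1 ramp, like phasor~
    up = np.clip(phase / ramp_frac, 0.0, 1.0)            # fade in at loop start
    down = np.clip((1.0 - phase) / ramp_frac, 0.0, 1.0)  # fade out at loop end
    return np.minimum(up, down)

def windowed_loop(buf, start, length, cycles=3):
    """Read `cycles` repetitions of buf[start:start+length] (the play~
    read-pointer role), each multiplied by the trapezoid window so the
    loop points sit at zero amplitude and produce no clicks."""
    grain = buf[start:start + length] * trapezoid(length)
    return np.tile(grain, cycles)
```

The `phase` array plays the role of the phasor~ ramp; multiplying it by the loop length and indexing the buffer is exactly the read-pointer idea in the patch.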
      the sah~ looks complicated but serves a simple task:
      (the idea stems from nobuyasu sakonda's famous granular patch
      http://www.bekkoame.ne.jp/~nsakonda/maxpatch.html)
      if you'd like to move the playback position while the loop is playing
      (or change the loop size) you shouldn't do this while the playback
      pointer is "inside" the loop, but rather when it's at the end or the
      beginning (which of course is the same) - otherwise you end up having
      clicks again.
      an easy way to examine the loop start/end is to watch the ramp of the
      phasor~. this is done by the comparison [<~ 0.5]. the actual value of
      0.5 is arbitrary (could be anything in the range 0 < x < 1). the
      comparison will change its output state when the phasor-ramp jumps
      back from 1 to 0, and this change of state is used to trigger the
      sah~ object and 'tells' it to sample a new value from its input.
      with this setup we can make sure that changes of playback position
      or loop length only happen at the turning points of the loop, where
      amplitude values are zero, to avoid generating clicks.
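The sample-and-hold trick just described can be imitated in plain Python (`sah_loop_position` is a hypothetical helper name; it processes per-sample values the way [<~ 0.5] driving sah~ would):

```python
def sah_loop_position(phase, requested_pos, threshold=0.5):
    """Latch a new loop position only when the phasor ramp wraps around,
    i.e. when (phase < threshold) goes from false (0) to true (1) --
    the comparison-plus-sah~ scheme from the patch."""
    held = requested_pos[0]
    prev_below = phase[0] < threshold
    out = []
    for ph, req in zip(phase, requested_pos):
        below = ph < threshold
        if below and not prev_below:   # comparison output rose 0 -> 1: wrap
            held = req                 # sah~ samples its input here
        prev_below = below
        out.append(held)
    return out
```

A value requested mid-ramp is simply ignored until the next wrap, which is exactly why position changes never land "inside" the loop.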
      but! of course there is a gap in the audio flow at the loop points
      (from the fade in/out).
      to get rid of that, instead of simple fade in/out we could make a
      crossfade between start and end of the loop, actually overlapping the
      two parts. this is a little tricky and we certainly need two
      'players' to achieve it.
      so the idea of the second patch i posted is to have two loops which
      are actually twice as long as in the first example. this is done to
      synchronize the loops - we don't hear the second part of the loop,
      because we use a 'custom made' envelope instead of trapezoid~, where
      we can determine that the fade out is starting in the middle of the
      loop...
      the two players/loops are 180 degrees out of phase, i.e. synchronized
      so that the first loop fades out when the second one fades in. that's it.
      i think i remember having seen something similar a few years ago in
      the lloopp-collection by klaus filip.
      phew! it takes ages to explain something in words, which is easily
      programmed in 10 minutes...
      well, hope it helps,
      volker.
    • Jan 09 2007 | 10:26 am
      Thank you very much for the detailed explanation. I have tried to understand the similar things about cross-fade or windowing function (e.g in harmonizer), but still have not figured it out. But I understand it a little bit better now after reading your explanation.
      I still have two questions about the 1st patch:
      1. At the moment when the phasor signal ramps down or up to change the result of the [<~ 0.5] comparison, the output of trapezoid~ is not really 0. Why does sah~ sample its input at this time?
      2. What does the number 1000 represent in the "!/1000" above phasor~ object ? Why 1000 ?
      Thank you very much for the help.
      Best
      Chien-Wen
    • Jan 09 2007 | 11:55 am
      On 09 Jan 2007, at 11:26, Cheng Chien-Wen wrote:
      >
      > I still have two questions about the 1st patch:
      >
      > 1. At the moment when the phasor signal ramps down or up to change
      > the result of the [<~ 0.5] comparison, the output of trapezoid~ is
      > not really 0. Why does sah~ sample its input at this time?
      the result of the comparison is always either true (1) or false (0)
      to trigger the sah~ we need to go from 0 to above 0 (e.g. to 1). this
      change is going to happen if the ramp from phasor~ jumps back from 1
      to 0. (it must be a ramp up, so in this example you have to provide
      positive frequencies)
      the change in the other direction of the comparison, from 1 to 0, is
      happening when the ramp is passing the 0.5 mark.
      but this is not doing anything to sah~.
      so yes, the change is happening when the phasor~ is 0.
      again, scope~ is your friend for examining this stuff.
      >
      > 2. What does the number 1000 represent in the "!/1000" above
      > phasor~ object ? Why 1000 ?
      well, there are 1000 millisec. in 1 sec.
      this is to express time in frequency.
      T = 1 / f
      f = 1 / T
      where T is time in seconds and f is frequency in Hz.
      this is fundamental to almost everything you do with sound.
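The [!/ 1000.] object in the patch is just this formula in Max form: reverse division, so a loop length in milliseconds becomes a phasor~ frequency in Hz. A one-function Python check (`ms_to_hz` is a hypothetical helper name):

```python
def ms_to_hz(period_ms):
    """The [!/ 1000.] -> [phasor~] chain: a period T in milliseconds
    becomes a frequency f = 1 / T, with T expressed in seconds."""
    return 1000.0 / period_ms

print(ms_to_hz(5000))   # a 5000 ms loop -> the phasor~ ramp repeats at 0.2 Hz
```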
    • Jan 09 2007 | 11:58 am
      Ok, I think I have figured out the first patch.
      Please let me know if I misunderstood it.
      I think the sawtooth wave generated by phasor~ actually "ramps down" very fast, so as long as sah~ holds the sample during the "ramp down" of the sawtooth wave, it is safe.
      And 1000 refers to 1 sec for frequency calculation.
      Does phasor~ actually "abruptly drop" from 1 to 0, or does it gradually "ramp down"?
      If it abruptly drops down in no time, then it makes sense to me that sah~ samples the input when the comparison result transitions from 0 to 1 and makes no click.
      Do I think correctly about the 1st patch ?
      Thank you very much.
    • Jan 25 2007 | 9:54 am
      On 22 Jan 2007, at 12:05, David Atkinson wrote:
      >
      > I have attempted to adapt your patch using poly~ and a multislider
      > to change the different sample positions of each voice.
      > Unfortunately I have run into some problems. When I change a value
      > on a multislider, other voices seem to stop playing. I'm sure this
      > is something to do with voice management in the subpatch, but I
      > just cannot work out what's wrong.
      >
      hallo david,
      you have to decide which parameters you want to change globally,
      i.e. for all poly~ instances at the same time, and which params you'd
      like to change individually for each instance.
      in your patch looplen, amp and samplelen are 'global', and you can
      use simple send/receive pairs to provide all instances with updates.
      you can also use the 'target 0' message to address all instances -
      but the target message is communicating with the poly~ object,
      therefore you must always send it to the first inlet, followed by the
      actual value you'd like to change, which is sent to the corresponding
      inlet of the poly~ (see example below).
      this is basically the same for sending values to individual instances
      - target message always to the leftmost inlet of poly~.
      i guess the ring modulation just before the dac~ was just an oversight?
      volker.
      /* save as sliderSub */
      /* main patch */
    • Jan 26 2007 | 11:46 am
    • Jan 26 2007 | 5:14 pm
      As is always the case with Max, there are multiple
      ways to approach things. Assuming that you've spent
      the necessary time with the tutorial on poly~ to
      know enough to address a specific voice, I find that
      the best general hygiene involves sending any
      information I need to a given instantiation [voice]
      in a poly~ as a single list and then doing the
      whole prepend thing, unpacking inside the poly,
      and routing the necessary messages from there.
      Anything you need to do globally [calculations on
      buffer length using sfinfo~, for example] you
      can just use send and receive for, taking advantage
      of Max's global data space. Stick in a mute message
      to turn your voice on and off and you're good to
      go.
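The "single packed list in, unpack inside the voice" pattern is not Max-specific; a rough Python analogy (all names here -- `Voice`, `Poly` -- are made up for illustration, and this only models poly~'s round-robin 'note' allocation, not its audio side):

```python
class Voice:
    """Stands in for one poly~ subpatch instance: receives one packed
    message and unpacks it itself (like [unpack] inside the subpatch)."""
    def __init__(self):
        self.start = self.length = self.amp = 0.0

    def note(self, msg):
        # unpack the whole parameter list in one place
        self.start, self.length, self.amp = msg

class Poly:
    """Round-robin voice allocation, loosely like poly~'s 'note' message:
    each incoming list goes to the next free voice."""
    def __init__(self, n_voices):
        self.voices = [Voice() for _ in range(n_voices)]
        self.next = 0

    def note(self, msg):
        v = self.voices[self.next]
        v.note(msg)
        self.next = (self.next + 1) % len(self.voices)
        return v

p = Poly(4)
v = p.note([1000.0, 5000.0, 0.8])   # start ms, length ms, amplitude
```

Keeping all per-voice parameters in one list, as above, is the "ganging" being recommended: one message per voice instead of a separate inlet per parameter.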
      There certainly are people who opt for a squillion
      in and out objects inside their poly~, but not
      ganging stuff is just asking for trouble.
      Your mileage may vary.