Same pitchshift algorithm sounds different in GEN and MSP !?!

    Jan 13 2014 | 11:40 pm
    In this little attached patch I have the exact same pitchshift algorithm in gen~ and in MSP... but it sounds different! (Better, more fluid in gen~)
    Can you tell me why, or point out my mistake? Thanks!
    (I'm benchmarking different monophonic pitchshift algorithms in this topic: )
    ...And I would love to add some kind of formant control to this gen~ algorithm... if, by chance, anyone has some directions for doing that... :-)

    • Jan 14 2014 | 11:38 am
      Apparently it's a behaviour of tapout~: it cannot delay by less than the vector size... man... It would be nice if there were a warning in the Max window when one tries to do so...
      So when I replace tapin~/tapout~ with delay~, the sound is the same! (apparently)
      Also, I then don't understand why people use tapin~/tapout~ instead of delay~ for overlap-add pitch shift algorithms, which make the delay go down to 0 all the time and thus sound bad...
    • Jan 14 2014 | 11:40 am
      (Here was a topic about this strange tapout~ thing: )
    • Jan 14 2014 | 1:08 pm
      tapin~/tapout~ introduce one vector size of delay to permit feedback: you can loop tapout~ back into tapin~, assuming you put in some gain control so as not to destroy your ears ;-) So the minimum delay you can have is one vector.
      delay~, on the other hand, lets you have any delay you want, by keeping the feedback within the object (so you can't add other effects into the feedback loop, just gain control).
      Gen works at the sample level and lets you have a 1-sample delay, which explains the sound difference you may hear with small delays.
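      The write-a-whole-vector-then-read scheme described above can be sketched in a few lines of Python. This is a toy model, not Cycling '74's actual code: it only shows why a block-based tap can never return samples newer than one vector, which is also what makes the tapin~→tapout~ feedback loop possible.

```python
VECTOR_SIZE = 64  # assumed signal vector size

class BlockDelay:
    """Toy model of a tapin~/tapout~-style block delay: the writer
    commits a whole signal vector to the buffer before the reader
    runs, so the minimum achievable delay is one vector."""

    def __init__(self, max_delay_samples):
        self.buf = [0.0] * (max_delay_samples + VECTOR_SIZE)
        self.write_pos = 0

    def process(self, block, delay_samples):
        # delays below one vector are silently clamped, as observed
        delay = max(delay_samples, VECTOR_SIZE)
        n = len(self.buf)
        # write the whole incoming vector first...
        for i, x in enumerate(block):
            self.buf[(self.write_pos + i) % n] = x
        # ...then read the delayed vector
        out = [self.buf[(self.write_pos + i - delay) % n]
               for i in range(VECTOR_SIZE)]
        self.write_pos = (self.write_pos + VECTOR_SIZE) % n
        return out
```

      Feeding an impulse through this model with a requested delay of 1 sample shows the impulse coming out a full vector (64 samples) later, matching the clamping behaviour described in the thread.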
    • Jun 27 2016 | 2:19 pm
      I have a similar type of question. I was trying to re-implement in gen~ a feedback-rich patch I have from back in the day, but I couldn't get it to behave similarly. I replaced filters with gen~ filters and boiled it down to tapin~/tapout~ behaviour. I am aware of the signal vector size issue and all that, so I decided to compare the delay lines. It turns out that, as shown in the attached patch, tapin~/tapout~ behaves differently if the delay time is controlled with a signal rather than with an int/float (!). So apparently, some internal algorithm (interpolation?) turns on when a signal is connected. Does this make sense? Am I missing something? I can't find it in the docs, either.
      Much appreciated!
    • Jun 27 2016 | 3:41 pm
      (Unless I have missed that tapout~ really has that interpolation you are talking about built in for some weird reason...) There is indeed a difference between sending signals and messages to signal objects, which can result in a different sound: sending a number to tapout~ will cause an internal change at the beginning of the next vector; sending a signal will... well, do the same, but is not bound to the scheduler, so under some circumstances it can end up changing things one vector later. (I once noticed this behavior with some other signal object, don't remember now which one it was.)
    • Jun 27 2016 | 3:51 pm
      from the max 7 reference:
      signal If a signal is connected to an inlet of tapout~, the signal coming out of the outlet below it will use a continuous delay algorithm. Incoming signal values represent the delay time in milliseconds. If the signal increases slowly enough, the pitch of the output will decrease, while, if the signal decreases slowly, the pitch of the output will increase. The continuous delay algorithm is more computationally expensive than the fixed delay algorithm that is used when a signal is not connected to a tapout~ inlet.
      So it is intended. In other words: for glide-free jumps in the position of tapout~, use messages exclusively.
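      The difference between the two algorithms the reference describes can be sketched as two read functions on a circular buffer. This is my assumption of how they might work (the reference only says "continuous delay algorithm", not which interpolator it uses); linear interpolation is used here for illustration.

```python
def fixed_read(buf, write_pos, delay):
    """Fixed algorithm: integer delay, buffer sample copied as-is."""
    return buf[(write_pos - delay) % len(buf)]

def continuous_read(buf, write_pos, delay):
    """Continuous algorithm (sketch): fractional delay via linear
    interpolation between the two neighbouring buffer samples, so
    the delay time can glide smoothly and produce the pitch shifts
    the reference describes."""
    n = len(buf)
    pos = (write_pos - delay) % n
    i = int(pos)
    f = pos - i
    return (1.0 - f) * buf[i] + f * buf[(i + 1) % n]
```

      Sweeping `delay` smoothly through `continuous_read` gives the Doppler-style pitch glide from the reference; `fixed_read` instead jumps between integer positions.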
    • Jun 27 2016 | 4:03 pm
      That's clear. However, in the given patch, I don't change the values! Plus, I don't necessarily want glide-free transitions; I want to understand what happens. The patch I posted shows that the difference between two identically delayed signals, one tapin~/tapout~ controlled with a message and the other controlled with a signal, is not zero: there is some remaining signal, i.e. the signal is somehow changed by tapin~/tapout~ when controlled by a signal! I don't see how this relates to the quote from the reference (I found that one before), since the delay times are not changed in this example, so whatever kind of interpolation takes place should not change the signal in this case, I would say. But clearly I am missing something here. In any case, I would like to rebuild the tapin~/tapout~ behaviour in gen~, for which I need to understand how tapin~/tapout~ functions internally. It's beyond the signal vector size issue, as far as I can see. I hope someone from Cycling '74 can enlighten us on this one.
    • Jun 29 2016 | 5:03 pm
      OK, figured it out after some browsing through the forums. Not that I found the exact answer, but a comment about subsample precision led me towards it. See attached patch! It turns out that if you want a delay time with tapin~/tapout~ that's exactly the same as with delay~, you need to add half a sample of delay. Funny and weird, as long as I can't look inside the thing.
    • Jun 29 2016 | 5:25 pm
      That's to say: when the delay time is controlled with a signal of course!
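      The half-sample finding is consistent with an interpolated tap whose read position sits half a sample away from the integer grid: linearly interpolating exactly midway between two neighbouring samples both shifts the signal by half a sample and averages it (a slight low-pass). A minimal sketch, assuming linear interpolation (the actual internals of tapout~ are not documented):

```python
def half_sample_read(buf, pos):
    """Read buf at integer position pos plus half a sample, via
    linear interpolation: the average of the two neighbours."""
    return 0.5 * buf[pos] + 0.5 * buf[pos + 1]
```

      On a ramp signal this returns exactly the value half a sample along, which is why matching a signal-controlled tapin~/tapout~ with delay~ would require adding half a sample of delay, as found above.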