Phase Distortion (for audio input)
I'm trying to understand how phase distortion for audio inputs might be achievable.
I understand the concept of synthesising sounds via phase distortion, as it's used in the Casio CZ series.
Would the same concept work for phase distortion of an incoming audio signal?
If yes, how is it possible to modulate the phase of an audio signal in gen~?
As far as I understand from talking to other gen~ users, phase modulation could be achieved by using delays.
I'm not really sure how to implement phase modulation (distortion) though. I guess it's just a matter of modulating the delay value somehow?
Here is an example patch with some comments to illustrate my confusion about this.
Maybe someone could chime in to clarify :)
Thanks
ya, exactly: you just modulate the delay - using a delay is basically the same as using a buffer to capture the phase in, and then scrub through it differently than a straightforward playback for the modulation.
unfortunately, you can't get mathematically specific enough to duplicate your cycle~ results in scope~ the way you'd like (you'd have to capture at exact zero-crossings, and match the length of the capture to the frequency so that you could read it back in a predictable way like that... too much work, and it wouldn't be worth it considering you'd then have a system which wouldn't be flexible with constantly changing signals/frequencies).
this is also not the answer you're looking for, but just making sure you know: if you did want 'cycle' within 'gen~' to be controllable by phase, you can just use the 'cos' op instead of 'cycle' (its input is fed with a phase measured in radians, so you'd multiply an input signal ranging between 0 and 1 by 'twopi' to get radians into it)
...in case you were actually looking for a phase-controllable cycle within 'gen~':
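as a codebox sketch, that could look something like this (untested, just to show the shape of it - check the op names against the gen~ reference):

```
// gen~ codebox: a phase-controllable "cycle" via the cos op
// in1: a phase signal in the 0..1 range (e.g. from phasor, or live audio)
out1 = cos(in1 * twopi);  // cos expects radians, so scale 0..1 up by twopi
```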

☝️also, if you use live audio input to drive the phase input of a sine/cosine wave, you have one basic kind of phase distortion, too: sine-based waveshaping.

speaking of which, waveshaping is a kind of phase distortion as well, so if you'd like phase distortion, you can also use any waveshaper algorithm (try typing 'cheby' in an empty patcher box within the outer 'Max' patcher world to get to the 'cheby' MSP example by Luke Dubois). you can use things like this, involving 'lookup~' (there's a 'lookup' op in 'gen~', too) - 'lookup' reads from the center of the wavetable as the starting point, so in a palindrome-looper sort of way it can avoid creating discontinuities by reading the edges of a static wavetable differently, looping them back into continuity.

that might be easier than using a delay... either way, if you modulate a delayed live audio input, you might have to be careful about clicks where the read/write positions in a delay/buffer collide.
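a rough codebox sketch of the wavetable-waveshaper idea (untested - the buffer name "shape" is made up here, and you'd need a buffer~ or data of that name holding your transfer-function curve):

```
// gen~ codebox: waveshaping an input via a stored transfer function
Buffer shape("shape");              // assumed buffer holding the curve

// map the -1..1 audio input across the table (input 0 = table centre,
// which is also how the lookup op reads it)
pos  = clamp(in1 * 0.5 + 0.5, 0, 1);
out1 = sample(shape, pos);          // interpolated read at normalized pos
```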
just some ideas, hope it helps 🍻
wow - thanks for your thorough explanation Raja. Much appreciated!
Since the gen~ [cycle] in the example patch was just to stand in for a "live audio" input, I'm not too worried about the result being identical to phase-modulation synthesis.
I have a couple of questions and will try to keep it short:
1. What's the proper way to scrub through the delay in gen~?
I can imagine a sine or phasor would make sense, but I haven't found a way to modulate the value of the [delay] operator with a cycle, for example, or even with the output of the kink-processed phasor.
2. If I got you right, I could use live audio input to drive the phase modulation of a cycle as you described. This is called sine-based waveshaping, right?
I would have thought that it's the other way around, i.e. sine-based waveshaping would be modulating the phase of an input signal with a sine.
3. I just learned about lookup tables while researching phase distortion. So it would be more effective to use lookup tables to phase-distort live audio input?
Discontinuity while modulating phase was something I thought could be an issue, I will try to take a look at the lookup tables for waveshaping.
4. So basically you're saying the difference in phase distortion and waveshaping isn't too big, so I really could achieve better results using waveshaping lookup tables instead of going the route of modulating the phase of the input signal?
Thanks again, this is very helpful. Always happy to learn something new from you :)
1. What's the proper way to scrub through the delay in gen~?
(i tend not to use delay in gen~ - i've had too much fun writing my own using the sample and poke ops - so i can't claim to be the expert on that particular op, but...) i'd say the easiest/most straightforward way would be to set up a maximum delay time as the first argument, also choose "@interp cubic" (any type of interpolation helps smooth the changes), and then modulate the delay time. (this doesn't really 'scrub' in a completely controllable way, but by changing the length of the delay in real time you end up playing back a modulated form of the phase being delayed.)
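a minimal codebox sketch of that modulated-delay idea (untested - the centre delay, depth, and LFO rate are all arbitrary numbers, just there to show the structure):

```
// gen~ codebox: delay line with a modulated read position
Delay d(samplerate);             // assumed: max delay of 1 second

base  = mstosamps(20);           // ~20 ms centre delay (arbitrary)
depth = mstosamps(5);            // modulation depth (arbitrary)
mod   = cycle(0.5);              // slow LFO scrubbing the read point

dtime = base + mod * depth;      // modulated delay time in samples
out1  = d.read(dtime, interp="cubic");  // cubic interp smooths changes
d.write(in1);                    // write the live input into the line
```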
2. If I got you right, I could use live audio input to drive the phase modulation of a cycle as you described. This is called sine-based waveshaping, right? I would have thought that it's the other way around
good point (i'm not always the best at semantics). basically, if you just consider that 'waveshaping' is simply shaping an input signal with some kind of transfer function, then in this case the sine/cosine wave is used as the transfer function (so the waveshaping, being based on the sinusoidal curvature, could be called a sine-based waveshaper... though it's not the most exciting waveshaper; the chebyshev polynomials seem to give more distortion options).

i think, technically, for this particular context i described where you drive the phase of cycle~ with audio input, the term is just "phase modulation" (modulating the phase of the cosine/sine is more rooted in controlling the synthesis than in live-input distortion)... but it gets confusing, because with the cycle~ object in max/msp you can replace the internal cosine with any waveform you want, and this is how you can get a nice waveshaper, too (wave~ is similar, and there's also a 'wave' op in gen~)... so you can see instantly how 'phase modulation' and 'phase distortion' can be pretty interchangeable.
in any case, lookup~ is best for waveshaper usage (i only mention driving the phase of cycle~ like that as a way of bridging between seeing it as an oscillator and seeing it as a transfer function/wavetable).
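the 'sine curve as transfer function' idea can be sketched in a codebox like this (untested, and the 'drive' amount is a made-up parameter, just for illustration):

```
// gen~ codebox: the audio input itself drives a sinusoid,
// i.e. the sine curve acts as the transfer function
drive = 2;                          // arbitrary drive amount
out1  = sin(in1 * drive * halfpi);  // more drive = more wavefolding
```

with drive at 1 this is a gentle soft clip; pushing it higher folds the waveform back on itself.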
3. I just learned about lookup tables while researching phase distortion. So it would be more effective to use lookup tables to phase distort live audio input?
ya exactly: lookup tables are the most efficient form of phase distortion. the term 'phase distortion' is also loose: technically a 'phaser' or 'flanger', by shifting the phase of a delayed signal, is also distorting phase (the advice you were given to use a delay will probably be most readily suited to something like this... you'd have to modulate the delay time much more drastically to get 'distortion' in the guitar-amp sense). so from what i've seen so far, waveshaping, being a subset of the many forms of 'phase distortion', seems the most effective and efficient for the more powerful forms of distortion.
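as one example of the chebyshev flavour of waveshaping mentioned above, the polynomials can also just be written out directly in a codebox (untested sketch - the harmonic mix amounts are arbitrary):

```
// gen~ codebox: Chebyshev-style waveshaping written out directly
// T2(x) = 2x^2 - 1 and T3(x) = 4x^3 - 3x add harmonics 2 and 3
x    = in1;
t2   = 2*x*x - 1;
t3   = 4*x*x*x - 3*x;
out1 = 0.5*t2 + 0.5*t3;   // arbitrary mix of the two harmonics
```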
4. So basically you're saying the difference in phase distortion and waveshaping isn't too big, so I really could achieve better results using waveshaping lookup tables instead of going the route of modulating the phase of the input signal?
it's more like 'waveshaping' is a type of 'phase distortion' (you are reshaping the phase of the input signal via the transfer function of the waveshaper - people just don't usually refer to waveshaping as phase distortion, because it can have many different uses, distortion being only one of them).

this is also difficult to write about in technical terms because of the way 'phase' relates to the amplitude/time domain (particularly for msp objects and other coded audio functions, since they must remap to the -1/1 range). normally, phase is more like the angle of movement around a circle, but when translating that kind of sinusoidal movement into a function which outputs into the amplitude/time domain, it simply reads as shifting amplitude values between -1 and 1 (as opposed to 0º to 360º, or from -π to π (often the same as 0 to 2π)... which is how phase is normally referred to).

...so in short, any way you can find of reordering the samples, or shifting the shape they describe between input signal and output, will be a kind of phase distortion applied to that input signal here in the amplitude/time domain (you can also work in nice detail with phase in the frequency/time domain using FFT, but it's not as efficient).
oh wow, i've been ultra-wordy here, hope that doesn't confuse. if it does, i basically just recommend taking a closer look at 'lookup' and waveshaping, haha... when transforming live-audio(as opposed to synthesis), this will be more powerful for distortion needs 🍻
hey, thanks again! you basically clarified everything I wasn't sure about.
So no, ultra-wordy is very welcome and not confusing at all.
It was a great read!
Thanks :)
i am trying a different approach.
the confusion already lies in your question.
phase distortion à la CZ1 cannot be applied to just any audio input, because it is not applied to "audio input" at all - it is applied exclusively to the accumulated phase of an oscillator model.
so there must be an oscillator with a "phase" in that sense, and its frequency must be known and/or you must have access to it.
to put it simply: phase distortion happens either
1.) when you apply any math function to scale the output of {phasor} (but it may not change direction, and of course must remain between 0. and 1.)
or
2.) - when you have access to the velocity of the phase accumulator - by modulating this accumulator velocity (but always according to the phase accumulator's main speed, i.e. the frequency must remain as it was before)
simple example: take the output of {phasor} in gen~ and multiply it with itself: that's PD.
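in gen~ codebox terms, that example (plus a classic CZ-style "kink") might be sketched like this - untested, the kink point is an arbitrary constant, but note that both shaped phases stay within 0..1 and never reverse direction, which is exactly rule 1.) above:

```
// gen~ codebox: phase distortion on an oscillator's own phase
p    = phasor(in1);          // in1: frequency in Hz

pd1  = p * p;                // phasor multiplied by itself: PD
out1 = cos(pd1 * twopi);     // read the bent phase through a cosine

k    = 0.25;                 // kink point (arbitrary), 0 < k < 1
pd2  = p < k ? p * 0.5 / k : 0.5 + (p - k) * 0.5 / (1 - k);
out2 = cos(pd2 * twopi);     // CZ-style bent-phase sine
```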