# Any advanced approaches to a more "analog" style distortion?

Caligula Cuddles

Aug 11 2019 | 9:13 pm

I've been poking around the stkr.waveshaping tools and finding a lot of useful content in them (although the GenExpr in the codeboxes still eludes my attempts to properly understand it).
Unfortunately, it seems like most flavors of distortion are still limited to the same tricks: DC offset, variations of soft/hard clipping, possibly wave folding, and the like. Inevitably, the results never impress me much, and I find it particularly disappointing that, apart from folding the waveform (which is always a bit too aggressive), the only other alternative seems to be an ultimately flat, horizontal clipping.

A while back, I remember testing an amp with an oscilloscope at a friend's place, and just kind of watching the sine wave distort. It seemed that, not only was the clipping not flatly horizontal, but the sine wave itself seemed to skew ever-so-slightly like a saw wave. I've cooked up a makeshift image to sort of illustrate my point and posted it at the end of this post (sorry it's kind of blurry; I had to resize it so I could upload it).

In short, my question is: what other approaches are there to achieving a more natural-sounding, analog-like distortion, without simply resorting to LP/shelving filters to tame the distortion's harmonics?
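For readers following along, the "same tricks" listed above can all be written as memoryless transfer functions. A minimal sketch (Python purely for illustration; the function names and thresholds are mine, not from any particular patch):

```python
import math

def hard_clip(x, t=0.5):
    # Flat-topped clipping: everything beyond +/-t is pinned to +/-t.
    return max(-t, min(t, x))

def soft_clip(x):
    # tanh saturation: a rounded knee instead of a hard corner.
    return math.tanh(x)

def wavefold(x, t=0.5):
    # Single wavefold: reflect the overshoot back under the threshold.
    if x > t:
        return 2 * t - x
    if x < -t:
        return -2 * t - x
    return x
```

All three act sample-by-sample on the instantaneous value, which is exactly why they all share the same "flat or folded" character the question complains about.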

- 👽R∆J∆ The Resident ∆lien👽 (Aug 11 2019 | 10:14 pm)

  > most flavors of distortion are still limited to the same tricks

  True, but that's probably because this is the math available within a digital space: all signal processing that reshapes a signal depends on slopes and curves, which simply describe the signal's inherent rate of change. Working the math to shape that rate of change/slope/curvature is the study of derivatives in calculus; you could study more of that to create your own new forms of waveshaping/distortion/etc. Another idea: a person could train a neural network to learn the difference between digital and analog distortion, and then use that trained network to distort an input signal accordingly. I love that .gif, by the way! 🙌 Could stare at it for hours 🤩

- ben sonic (Aug 11 2019 | 10:49 pm)
  Quite an interesting video about distortion: Ivan Cohen, "Fifty Shades of Distortion" (ADC'17).
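One concrete example of the "working the math" idea above: Chebyshev polynomials are waveshaping curves with the property that driving one with a full-scale cosine produces exactly one harmonic, so blending them lets you design a harmonic recipe directly. A minimal sketch (Python for illustration; the blend weights are arbitrary examples):

```python
import math

def cheby3(x):
    # Chebyshev polynomial T3(x) = 4x^3 - 3x.
    # Identity: T3(cos(t)) == cos(3t), i.e. a full-scale cosine
    # comes out as its exact 3rd harmonic.
    return 4 * x**3 - 3 * x

def cheby_mix(x, a1=1.0, a2=0.0, a3=0.3):
    # Blend of T1, T2, T3: fundamental plus chosen amounts of
    # 2nd and 3rd harmonic (weights a1..a3 are arbitrary here).
    t2 = 2 * x * x - 1          # T2(x)
    return a1 * x + a2 * t2 + a3 * cheby3(x)
```

The catch, as discussed further down the thread, is that higher polynomial orders alias harder, so this still needs oversampling in practice.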
- Caligula Cuddles (Aug 12 2019 | 3:24 pm)
  R∆J∆: You're right; I should probably invest a bit more time in the specific calculations, but it seems the nuance comes from imperfections that aren't necessarily calculable, especially now that I'm further removed from my mathematics lessons than ever before. I'll have to create a lesson plan to figure out what kind of lesson plan to craft for myself, to, you know... learn the appropriate lessons for this.
  Ben: Great link. I'll check it out sometime this week when I should be productive at work. Fascinating stuff.
- Ernest (Aug 19 2019 | 11:51 am)
  Here's a way to emulate Hendrix, Zappa et al.'s long sustains into self-oscillation at another pitch. It's new as far as I know. I stumbled on it when boosting an 18 dB/oct self-resonating LPF with its cutoff set way below the input's dominant Fc. What happens, say with a parabolic wave input, is that the output resonates at the filter cutoff immediately, but as you're overdriving the filter, the original frequency components of the source signal are still in there, and while the filter is in heavy self-oscillation, the dominant source Fc slowly increases in level and takes over from the resonant frequency.

  For a slow fade-in, you need a really loud resonance, like 40~60 dB, or the original Fc isn't strong enough to take over... and ideally, a compressor on the output.

  How does it work? What I can think so far is that the original Fc component's amplitude must be >1 inside the filter, even though it's way down the filter's frequency-response curve, which is why you need to boost it so much. That component then reinforces itself above the filter's other output components, and its level slowly increases until it actually overtakes the resonant peak, then continues to reinforce itself, so the peak at the filter's own Fc gradually fades away.

  It took me by surprise when I stumbled on it, and I'm still thinking about why it works, but I'm pretty certain I'm right. Here I illustrate why I'm thinking this way. Below you can see a square-wave input (red) and a standard 12 dB/octave filter output (blue), with the filter Fc set two octaves lower. The filter is resonating at its own Fc, but further up the frequency spectrum there's a TINY tiny peak at the square wave's main Fc. It's a very shallow peak that's normally inaudible, so most don't realize it's there at all, but you can see it here in the blue. If you get it just right, it slowly swells in.
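A loose sketch of the setup Ernest describes: a resonant lowpass with a saturator inside the loop, so that at extreme resonance it self-oscillates near its cutoff yet stays bounded, leaving room for a strong input component to fight the resonance. This is NOT Ernest's actual patch; the tanh-bounded Chamberlin SVF structure and all parameter values are my assumptions (Python for illustration):

```python
import math

def saturating_svf(sig, f_hz, q, sr=48000.0):
    # Chamberlin state-variable lowpass with tanh on each integrator.
    # With tiny damping (huge q) it rings near f_hz on its own; the
    # tanh keeps that ringing bounded inside (-1, 1).
    f = 2.0 * math.sin(math.pi * f_hz / sr)  # frequency coefficient
    damp = 1.0 / q                            # damping ~ 1/resonance
    low = band = 0.0
    out = []
    for s in sig:
        high = s - low - damp * band
        band = math.tanh(band + f * high)     # saturated bandpass state
        low = math.tanh(low + f * band)       # saturated lowpass state
        out.append(low)
    return out
```

Driving it with a square two octaves above the cutoff (as in Ernest's figure) gives a bounded output that contains both the filter's own ringing and the source component.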
  Now I'm working on a 12 dB/octave version, and I'm still fiddling with the coefficients to get a controllable swell. It's fun but ear-grinding. Hendrix and Zappa can't warn you, as they were already dead by my age, so please heed my warning: do NOT play around with this kind of thing without at least DOUBLE protecting your ears somehow before the overdrive pummels your eardrums, or you will twist some knob wrong by mistake and hate yourself 20 years later when you're half deaf like me!

  If I'm right about why it works and how Hendrix did it, then he could have just played a note a little louder when he wanted to trigger the effect, and listeners couldn't know they were hearing the filter, not the guitar. Of course he could have done it a different way; given that he always had a half dozen pedals in series and parallel, it's only guessing how his black magic worked.

  Note: for my discovery to work, it needs something other than a sine at the input, because if it's a pure sine and your filter is way below the sine's Fc, there aren't any harmonics to kick off the filter's own self-oscillation. However, if there are too many harmonics, the original Fc may not be dominant enough to take over by itself, and you just get noise, which may also be entertaining sometimes, but isn't nearly as difficult to create. Square waves work well if you skew the harmonics right, but pulses and saws not so much.

  * * *

  On more traditional methods, I do have a good contribution on valve (tube amp) emulation to share too. Like you, I found waveshaping disappointing originally. But then I learned that each order of the waveshaping polynomial doubles the aliasing, and the aliasing is very prominent when you're distorting the input. It's not well known; apparently the expert in the video doesn't know it either, because he says to filter high and low on the result, which doesn't actually work, because there's also major aliasing in the mid band.
  If you upsample and downsample properly, you don't need the filters at all. Most people think the polynomial is to blame for making it sound trilly, but it's actually the aliasing! So that explains why most implementations sound unsatisfying. Even with a first-order polynomial, like in most designs, you still have to upsample 2x, or it sounds really awful. Linear interpolation is sufficient for the 2x upsampling; a higher-order polynomial gives better results but needs more oversampling with better interpolation.

  I like this one best, a modified quartic approximation of tanh with 4x upsampling; it's got that kind of fruity warmth of an old tube:

  ```
  qtanh(in) {
      x  = in * 0.25;
      a  = abs(x);
      x2 = x * x;
      y  = 1 - 1 / (1 + a + x2 + 0.66422417311781 * x2 * a
                    + 0.36483285408241 * x2 * x2);
      return (x >= 0) ? y : -y;
  }
  ```

  You can try simpler equations, but personally I don't find the results satisfying for lower-order polynomials, because their curves have shallower knees. If you want a shallower curve for valve emulation, and to keep the original content's sound instead of getting total overdrive, I feel it's better to mix the tanh with the input and keep the knee.

  If you want to optimize CPU, you'll probably want to cache the value range in a data buffer for lookup. That's because downsampling adds about the same amount of distortion as table lookup does, so you get better performance at the same quality. There's debate about what size table is really needed; 512 points is typical. If you do make a lookup table, you may be tempted to use a real tanh, or a higher-order approximation, because there'd be no difference in CPU. But then you have a bigger aliasing problem. So it's a temptation to avoid.

  Sorry for the long detail, but as I'm correcting an expert I do want to be thorough here. I hope to do some experimentation with frequency multiplication and combine it with the filter-overdrive idea above for a future design.
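As a sanity check of the oversampling recipe above, here is a Python port of the qtanh curve wrapped in a naive 2x oversampler: linear-interpolation upsampling, shape at the doubled rate, then a crude adjacent-sample average as the decimator. The structure and names are mine, and a real design would use proper halfband filters for both stages; this only illustrates the shape-at-a-higher-rate idea:

```python
def qtanh(x):
    # Port of the quartic tanh approximation posted above.
    x *= 0.25
    a = abs(x)
    x2 = x * x
    y = 1 - 1 / (1 + a + x2 + 0.66422417311781 * x2 * a
                 + 0.36483285408241 * x2 * x2)
    return y if x >= 0 else -y

def shape_2x(sig):
    # Upsample 2x by linear interpolation, waveshape each sample
    # at the doubled rate, then average pairs back down.
    up = []
    prev = 0.0
    for s in sig:
        up.append(qtanh(0.5 * (prev + s)))  # interpolated midpoint
        up.append(qtanh(s))                 # original sample
        prev = s
    return [0.5 * (up[2 * i] + up[2 * i + 1]) for i in range(len(sig))]
```

The midpoint samples are what let the shaper's harmonics land below the (doubled) Nyquist before decimation, which is the whole point of the exercise.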
  When it's done I'll post it on https://www.yofiel.com as part of SynthCore 3. It will also have codebox-optimized 4x upsamplers (which in v2 were wrongly called 3x upsamplers; we can all be stupid sometimes), more resamplers, equalizers, limiters, compressors, and AGC. Meanwhile, I do encourage you to try upsampling your waveshaping saturator if you haven't before. I guarantee you'll be amazed at the difference.
- .quasar (Aug 23 2019 | 8:16 am)
  Pretty interesting subject! There has been quite a lot of research at Aalto University on virtual analog distortion (especially by Fabian Esqueda). Here, for example, is a recreation of the Tube Screamer distortion in GenExpr, using Esqueda's Matlab code. Somewhere in his papers and on his GitHub page you can also find Matlab and C++ code for antialiased diode clippers and hard clippers.
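The antialiased clippers mentioned here are built on antiderivative antialiasing (ADAA): instead of point-sampling the clipper, output the exact average of the curve over each inter-sample segment, computed from its antiderivative. A minimal first-order sketch of that idea (Python for illustration; this is the general technique, not Esqueda's actual Matlab/C++ code):

```python
def clip(x):
    # Plain hard clipper at +/-1.
    return max(-1.0, min(1.0, x))

def clip_AD1(x):
    # First antiderivative of the hard clipper (continuous at +/-1).
    if x > 1.0:
        return x - 0.5
    if x < -1.0:
        return -x - 0.5
    return 0.5 * x * x

def adaa_clip(sig):
    # First-order ADAA: y[n] = (F(x[n]) - F(x[n-1])) / (x[n] - x[n-1]),
    # the mean of clip() over the segment between consecutive samples,
    # which smooths the corner's aliasing.
    out, x1 = [], 0.0
    for x in sig:
        if abs(x - x1) < 1e-6:
            out.append(clip(0.5 * (x + x1)))  # avoid 0/0 on flat input
        else:
            out.append((clip_AD1(x) - clip_AD1(x1)) / (x - x1))
        x1 = x
    return out
```

The averaging is equivalent to a half-sample lowpass applied before resampling, which is why it attenuates aliasing at the cost of a small amount of extra smoothing.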
- Roman Thilenius (Aug 23 2019 | 11:32 am)
  his original idea is correct. in analog circuits it can happen that the shaping for "up" is different from "down" (fig. 1), so splitting the signal into up and down halves and then using, e.g., two differently harsh settings for a tanh or whatever can reproduce that. but one thing is clear... the overtones created by such processing will, for most input material, not sound different than just using two different curves in parallel. i also doubt that group delay has an audible effect here (fig. 3). what is more interesting is to use slew limiting to distinguish between low and high frequencies, and moments where low and high frequencies are both present...
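The split-path idea above can be written as a single asymmetric transfer curve, with a harder knee on the way up than down. A minimal sketch (Python for illustration; the drive values are arbitrary examples, and each half is normalized so that full scale still maps to full scale):

```python
import math

def asym_shape(x, drive_up=8.0, drive_down=2.0):
    # Different saturation hardness for positive vs. negative halves.
    # Normalizing by tanh(drive) keeps +/-1 mapping to +/-1.
    if x >= 0.0:
        return math.tanh(drive_up * x) / math.tanh(drive_up)
    return math.tanh(drive_down * x) / math.tanh(drive_down)
```

Because the curve is no longer odd-symmetric, a sine through it comes out skewed and gains even harmonics, which is part of the "not flatly horizontal" character the original post observed on the oscilloscope.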
- Ernest (Aug 24 2019 | 12:11 pm)
  That's a nice patch, thanks Quasar; here is one in return. It has an envelope follower for scaling the input to the right range, then it scales the output back to the original level. The polynomial is from musicdsp.org, but you can put anything you want in there if you just want the envelope follower.

  I trigger 'lclk' from a shared train() function in main. Its period should be a little longer than the period of the lowest-frequency wave you want to capture. If you have symmetric wave input, you can get it down to about 25 ms quite safely. It applies the gain scaling on zero crossings, which is usually fine if you don't have sudden volume changes, but if you do, it will click, and you'll need to smooth lvl4 and lvl5 with slide() or something similar. Today I'm working on a version that removes any DC offset without needing HP filtering, as most of the values needed for it are already in the envelope follower.

  What I found is, if you have a lot of these, the divide for normalization can cause quite a CPU peak, especially if oversampled, so I stagger the triggers from train(). It should be oversampled 2x for this polynomial.

  ```
  tubeSoft(in1, warmth, lclk) {
      // Tube emulation, adjustable warmth 0~1.
      // in1    = signal to warm, may be any amplitude above 0.
      // warmth = amount of warmth, 0 to 1.
      // lclk   = transition to 1 causes resampling of levels.
      // Return:  warmed signal in range -1 ~ 1.
      History tsat, lvl1(1), lvl2(1), lvl3(1), lvl4(1), lvl5(1), d2, y;
      if (lclk > 0) {
          if (change(lvl1) > 0) {
              lvl2 = 1 / lvl1;            // normalization gain
              lvl3 = lvl1;                // restore gain
          } else {
              lvl2 = 1;
              lvl3 = 1;
          }
          lvl1 = 0;
      } else if (abs(in1) > lvl1) {
          lvl1 = abs(in1);                // track peak level
      }
      if (change(in1 >= 0, init = -9999999) != 0) {
          lvl4 = lvl2;                    // apply gains on zero crossings
          lvl5 = lvl3;
      }
      w = abs(in1) * lvl4;
      d = 1 - warmth * .99;
      if (w > d) {
          d2 = w - d;
          y  = d2 / (1 - d);
          return lvl5 * (sign(in1) * (d + d2 / (1 + y * y)));
      } else {
          return in1;
      }
  }
  ```
- .quasar (Aug 25 2019 | 6:34 am)
  Thanks for this, Ernest! Really looking forward to testing your code. BTW, I'm already a big fan of your saturating SVF ;)
- Ernest (Aug 27 2019 | 9:32 pm)
  Oh, thanks too, Quasar. I've made a new one that doesn't need table-based gain compensation. It doesn't have saturation internally, so it doesn't need resampling; I'm adding an upsampled saturator after it, and it's lighter on CPU. It should be on Yofiel in October :)