Simple, FM-ready sine-wave oscillator with sync.

Boris Uytterhaegen

Aug 06 2024 | 9:21 PM

Hi!

Just diving into gen~ and having a lot of fun figuring stuff out.

I'm trying to write a smooth-sounding, CPU-efficient sinusoidal oscillator in codebox, which I intend to use in a bigger feedback-oriented FM/PM system. I have implemented the following features:

  • Pitch input

  • Linear FM input

  • Exponential FM input

  • Sync input

  • Linear feedback FM

  • Exponential feedback FM

Everything seems to be working at this point, but since this is a first attempt I would like to know if and how I can improve on the design.
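For context, the core of the design is roughly this (a simplified sketch, not the exact code in the attached .gendsp; sync and the feedback paths are left out, and the names are illustrative):

    // stripped-down FM sine core (illustrative sketch)
    History phase(0);

    pitch = in1;    // base pitch, in Hz
    lfm   = in2;    // linear FM input, in Hz
    efm   = in3;    // exponential FM input, in octaves

    // exponential FM scales the base pitch; linear FM adds Hz directly
    freq = pitch * pow(2, efm) + lfm;
    incr = freq / samplerate;

    phase = wrap(phase + incr, 0, 1);
    out1 = sin(phase * twopi);

The feedback variants feed the previous output sample, scaled by an index, back into freq (or into the phase) in the same way.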

Two main issues/questions:

  • I notice that the FM, especially when the inputs are combined (EFM, LFM, FB, ...), could sound a bit smoother or juicier at times, but instead starts sounding noisy (aliased?).
    I've tried interpolating the phase increment (commented code) but without much luck.

  • The sync input adds harmonics that are too harsh for the result to sound natural. I think I might need some smoothing, but I'm not sure how to go about it.

Any input would be very much appreciated!
Many thanks,

Boris

HLG-FmOsc.gendsp
gendsp
HLG-FmOsc.maxpat
Max Patch

Wetterberg

Aug 07 2024 | 1:08 AM

You'll need to include the gen~ file, too.

Boris Uytterhaegen

Aug 07 2024 | 7:30 AM

Thanks for the reply.

Here are all the files. I will edit my original post and include them there too.

HLG-FmOsc.gendsp
gendsp
HLG-FmOsc.maxpat
Max Patch

Boris Uytterhaegen

Aug 09 2024 | 12:19 PM

Hey!

Thanks :-) and thanks for the link, valuable info there, especially for the next step in this project, when I start x-modulating two or more of these!

In the meantime I've come up with a "solution" for the harsh-sounding sync.

Every time a sync signal is received, the output is attenuated to zero over the next 4 samples, then brought back to unity gain over the following 4 samples (see the sketch after this list). This works, but:

  • Limits the pitch of the modulator

  • There is sometimes an audible click when switching sync on.
    I have not yet been able to figure out the cause, since the attenuation should only begin after a sync signal is received.

  • Even if the sync signal runs at the same frequency as the output, this "solution" has an effect and is audible. I guess that is the downside? Or are there better ways altogether to go about this?
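Here is roughly what the fade logic looks like (a simplified sketch of just the gain envelope; the 4-sample times are from the description above, and the oscillator's phase reset on sync is omitted):

    // ramp the output gain to 0 over 4 samples after a sync event,
    // then back to unity over the next 4 (sketch only)
    History fadepos(8);           // 8 = idle / fade finished
    sync = in2;                   // sync trigger input

    if (sync > 0 && fadepos >= 8) {
        fadepos = 0;              // a sync event starts a new fade
    }

    gain = 1;
    if (fadepos < 4) {
        gain = 1 - fadepos / 4;   // fade out
    } else if (fadepos < 8) {
        gain = (fadepos - 4) / 4; // fade back in
    }
    fadepos = min(fadepos + 1, 8);

    out1 = in1 * gain;            // in1: the raw oscillator signal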

I hope someone with deeper DSP knowledge will have something smart to say about this :-)

Updated files attached.

Cheers!

HLG-FmOsc.gendsp
gendsp
HLG-FmOsc.maxpat
Max Patch

Graham Wakefield

Aug 11 2024 | 12:41 AM

Hard sync of one oscillator to another has a few things that can be quite fiddly to get right for really nice DSP.

I find the most important thing to get right is the sub-sample location of the hard sync event. The true sync event should happen at some subsample moment, between one sample and the next. That means that on the first sample after a sync event, the follower oscillator should already have been reset to zero and have played a little part (1 − subsample) of its waveform. If we don't take care of that, we end up forcing the sync events to align to sample boundaries, which adds another kind of aliasing distortion and makes the hard sync sound messier than it should. Fixing this means you need to know the subsample location where the sync really happens -- that's possible to do very accurately if the sync source is a phasor ramp, but involves a lot of guesswork if it is just a pulse.
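For the phasor-ramp case, the bookkeeping in codebox is roughly this (a sketch of the subsample logic only, not a finished antialiased oscillator; the jump at the reset still needs the treatment described below):

    // subsample-accurate hard sync, assuming the sync source is a
    // phasor ramp driven in the same codebox
    History mphase(0);        // master (sync source) phase
    History fphase(0);        // follower phase

    minc = in1 / samplerate;  // master frequency in Hz -> phase increment
    finc = in2 / samplerate;  // follower frequency in Hz -> phase increment

    mphase += minc;
    fphase += finc;
    if (mphase >= 1) {
        mphase -= 1;
        // mphase is now the overshoot past the wrap; dividing by the
        // increment gives the fraction of a sample since the true event
        frac = mphase / minc;
        // the follower restarts at that moment, so by the time this
        // sample is output it has already advanced frac of a sample
        fphase = frac * finc;
    }
    out1 = sin(wrap(fphase, 0, 1) * twopi);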

The other thing is that wherever the waveform currently is, the hard sync event usually means setting the value to zero, or whatever the waveform's start value is, and that means a hard jump in the waveform. A hard jump makes a click, and when syncing to another oscillator this means repeated jumps. Repeated jumps are exactly what a square or saw waveform is made of, and that's part of the buzzy sonority that hard sync adds. As you probably know, a naive square or saw wave in the digital signal processing world is going to cause aliasing, and the same thing happens with hard sync. So you may want a method to suppress the aliasing caused by the jumps, which usually means some kind of smoothing to spread the jump out in time. The ideal form of this looks like an integrated sinc impulse, but that spreads the energy over a lot of time, so often we prefer a much quicker smoothing.
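A common cheap form of that quicker smoothing is a polyBLEP residual, a two-sample polynomial stand-in for the integrated sinc (shown here as a general technique, one of several possibilities):

    // polyBLEP residual: t is a 0..1 phase, dt its per-sample increment.
    // Add (jumpsize * polyblep(t, dt)) to the output around a
    // discontinuity to suppress the aliasing the jump causes.
    polyblep(t, dt) {
        y = 0;
        if (t < dt) {               // the sample just after the jump
            x = t / dt;
            y = x + x - x * x - 1;
        } else if (t > 1 - dt) {    // the sample just before the jump
            x = (t - 1) / dt;
            y = x * x + x + x + 1;
        }
        return y;
    }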

We look at these issues in the last chapter of the GO book, though in that case we are just working with ramp oscillators. The antialiasing scheme there is a method that is simple, cheap, and *very brief* -- not perfect, but I think not bad at all for what it is, and the brevity of it makes it handle pretty extreme modulation well. It is possible to adapt this patch to synced sines by replacing the ramp-to-curve shapers inside it with ramp-to-sine shapers.

The hard sync still sounds kind of buzzy, but that's true to the nature of it -- it sounds buzzy on analog oscillators too.

Boris Uytterhaegen

Aug 11 2024 | 11:56 AM

Hi Graham,

Thanks for the reply! I've been thinking about getting the book for a while and will do so now for sure!
I've come across the sub-sample approach here and there (Husserl tutorials) but my brain does not compute it yet. I was of the impression that a single sample was the tiniest particle of sound available to work with in gen~. Are we talking oversampling? That's also a concept I still need to dig into.

I'm not bothered by the buzzy sound of sync; it's usually what we're after when syncing oscillators. But I'm looking for the cleanest possible implementation of it for this project.

If you could point me to some sub-sample theory or practical tutorials while I wait for the GO book to arrive, that would be great.

Best,

B

Graham Wakefield

Aug 12 2024 | 1:57 PM

A single sample is the finest output resolution of a digital audio process, but it doesn't have to be a limiting factor on the processes we want to model. Often we have an ideal waveform that we are trying to reproduce, and this ideal waveform may have discontinuities like steps, kinks, sync events and so on, whose ideal timing is somewhere between two samples. Most of the time it's not necessary to model this timing -- e.g. for rhythms, being sample-accurate is more than enough -- but when you are doing modulations at audible frequencies it can matter quite a bit because of aliasing. Hard sync is one of those cases.

It may help to consider the benefit of using interpolation when reading from a buffer. Without interpolation, everything gets snapped to the nearest sample, and this can sound aliasy. With interpolation, we try to approximate what the ideal waveform is between two samples, and usually this greatly reduces aliasing (and probably filters a bit too). Subsample-accurate timing requires a similar approach.
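As a concrete example, here is a linear-interpolating buffer read spelled out in codebox (gen~ has operators that do this for you; the buffer name "wave" is just for illustration):

    // approximate the ideal waveform between two stored samples
    Buffer wave("wave");        // assumes a buffer~ named "wave" exists

    idx = in1;                  // fractional read position, in samples
    i0  = floor(idx);
    f   = idx - i0;             // subsample fraction, 0..1
    a   = peek(wave, i0, 0);
    b   = peek(wave, i0 + 1, 0);
    out1 = a + f * (b - a);     // crossfade the two neighbours by f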

Or, another way to think of it: we want to apply a custom oversampling just for the moment of the hard sync event, aligned to exactly the moment the event ideally happens, so that the event lands on the oversampled sample grid. Some antialiasing methods behave more like this.

These are just different ways of thinking about it that hopefully might clarify more than it confuses :-)

Roman Thilenius

Aug 12 2024 | 3:15 PM

"I was of the impression that a single sample was the tiniest particle of sound available to work with in gen~. Are we talking oversampling?"

to think of a sample as the smallest part is not right and not wrong, but the truth about it lies in the good old sampling theorem: the digital signal as a whole (i.e. many samples in a row) represents far more than only the sample values. it also represents the complete course between the sample values - simply because an analog signal can be (more or less) reproduced exactly as it was when converted to digital and back, with the only limitation being the spectrum, which is cut at nyquist.

once you've internalized this idea, it can be a revelation about why and how to use oversampling within a DSP process.


the simplest test drive for your practice would probably be to write your gen~ thing as it is and put it in a [poly~ foo up 4] with the default built-in downsampling filter turned on.
within the upsampled process one might in some cases need to multiply some of the frequency values, but otherwise it is the same process, just with a higher resolution over time, allowing you to work with higher frequencies and/or more exact points on the timeline.
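for example (assuming frequencies inside the gen~ patch are specified in Hz):

    // inside [poly~ foo up 4], the samplerate constant reports the
    // upsampled rate, so an increment derived from a frequency in Hz
    // stays correct automatically:
    freq = in1;                // frequency in Hz
    incr = freq / samplerate;  // adapts to the oversampled context
    out1 = incr;
    // a hard-coded per-sample duration (e.g. "fade over 4 samples")
    // would now cover 1/4 of the original time and need scaling by 4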

Boris Uytterhaegen

Aug 13 2024 | 9:49 AM

Thanks Graham and Roman for your replies!

At the moment this is all still a bit confusing. I will read up on it and try to understand these concepts (oversampling, interpolation,...) on a deeper level before continuing the patch.

The theory is one thing; a couple of clear example patches demonstrating these concepts would really help. I am sure they are out there, it's just a matter of knowing what to look for, and I think I've got everything I need for now from your replies.

All the best!

B