Do we need to systematically filter I/O frequencies >= sr/2?
Reviewing the basic MSP documentation, I was surprised to read this:
« Any frequencies in the sound that exceed half the sampling rate must be filtered out before the sampling process takes place. This is accomplished by sending the electrical signal through a low-pass filter which removes any frequencies above a certain threshold. [..] we need to send the output signal through a low-pass filter, as well. »
=> Does it mean that we need to have a low-pass filter set to sample rate/2 after [adc~] and/or before [dac~]? Or is that done "automatically"? How? Through the audio driver? If not, which object should be chosen?
no, you don't need to do this (they refer to the 'electrical signal', meaning the actual analog one coming in, which is still outside the purview of the digital arena of MSP before this hardware filtering). it is done automatically by the ADC/DAC hardware (either in your computer's soundcard or your audio I/O). that passage is from the general introductory documentation on "How Digital Audio Works":
https://docs.cycling74.com/max8/tutorials/02_mspdigitalaudio
most of that page is just to introduce you to digital audio in general, so that when you're using max/msp you'll understand the limitations and strengths of digital audio. for example, when you run into the sound of 'aliasing', your mind will be prepared for why it occurs: sometimes, when synthesizing and modulating, you can create frequencies/sidebands/harmonics that go above sample rate/2. in that case you could use low-pass filters (there are many options; a simple object would be 'onepole~'), or you could upsample using poly~. none of this is something you need to know right away; you'll generally run into the options available as you're learning the techniques that require them.
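to make the folding concrete, here is a tiny numpy sketch (python rather than a max patch, purely as an illustration): once sampled at 44.1 kHz, a 30 kHz tone produces exactly the same sample values as a 14.1 kHz tone, i.e. sr minus the original frequency.

    import numpy as np

    sr = 44100.0                        # sample rate; nyquist is 22050 Hz
    n = np.arange(16)                   # a few sample indices
    f_high = 30000.0                    # above sr/2, so it cannot be represented
    f_alias = sr - f_high               # 14100 Hz, where the tone folds down to

    x_high = np.cos(2 * np.pi * f_high * n / sr)
    x_alias = np.cos(2 * np.pi * f_alias * n / sr)

    print(np.allclose(x_high, x_alias))  # True: the sampled data are identical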
if you're going through the MSP tutorials, you can skim 'Topics' quickly in order to get to the rest of it:
https://docs.cycling74.com/max8/tutorials/00_mspindex
Thank you, R∆J∆, for having fully answered my question.
Indeed, I need to study all this in more detail, having faced some audio issues in my current projects.
not only when coming from analog signals, but also whenever you convert a digital signal to a lower sampling rate than before.
Thanks Roman. As I happened to have some free time, I started to play a bit, trying to better understand what all this means. I tried with a basic [cycle~] object at sr and inside a [poly~] at sr/4 (see below):
1. Filtering ([lores~]) doesn't seem to do much?
2. Anyway, as R∆J∆ implied, it's true that at this point in my learning curve I don't have a single clue what my experiment could be useful for, apart from producing some very unpleasant sounds (and nicely dancing peaks)! ;-) Do you guys have a relevant use case to explain?


poly~ has a filter built in, but the way you placed your own is correct.
it is just that the slope of lores~ (12 dB/oct, it is a two-pole filter) is not steep enough.
also, forget that 5500 Hz experiment. it will only work for testing the following scenario: the poly~ runs at x8 and now you want to go back to the mother patch, which runs at 44. this is where you have to filter at 22 before the sampling rate is reduced.
the aim of the low-pass filter is to get rid of everything above 22 kHz (while leaving everything below 22 kHz as untouched as possible).
a signal at 88 kHz might contain frequencies between 22 and 44. at 44 it cannot. at 44, a sinewave of 33 will be folded down to 11 (strictly to -11, i.e. phase-inverted). that's what we do not want.
some clever combination of 6+ butterworths and 2+ chebyshev filters, some of them above the new sampling rate, can get you quite close, but it will never be ideal.
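to put numbers on the 88-to-44 folding scenario, here is a small python/scipy sketch (a single 8th-order butterworth standing in for the butterworth/chebyshev combination described above, so it only approximates the ideal):

    import numpy as np
    from scipy import signal

    sr_high = 88200                      # rate inside the oversampled patch
    t = np.arange(sr_high) / sr_high     # one second of signal
    x = np.sin(2 * np.pi * 33000 * t)    # a 33 kHz tone: legal at 88.2 kHz

    # naive decimation by 2: the tone folds down to 44100 - 33000 = 11100 Hz
    naive = x[::2]

    # band-limit below the new nyquist first, then decimate
    sos = signal.butter(8, 20000, btype='low', fs=sr_high, output='sos')
    clean = signal.sosfilt(sos, x)[::2]

    # the unfiltered version keeps the folded tone at nearly full level;
    # the filtered one attenuates it by roughly 35 dB (more poles, more attenuation)
    print(np.sqrt(np.mean(naive ** 2)), np.sqrt(np.mean(clean ** 2)))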
Thank you again, Roman. And, yes, I misread you: this makes sense the other way round...
I don't want to be annoying: as I said, this is probably a little too far from my current tinkering needs, and I really need to improve my signal-processing background before wasting experts' time asking them to explain concepts that, in the end, I may only partially understand. But could you give a "real" situation where you would need a poly~ at x8 inside a main patch?
1. any DAC needs to oversample in order to produce the analog voltage at its output.
2. it can also be part of an algorithm for changing sampling rates. if you have to convert 50 kHz to 40 kHz, the math will be simpler and the results more precise if you first go to 200 kHz, because then you only multiply and divide by integers (see the sketch at the end of this post).
3. certain DSP processes can benefit drastically from upsampling; most filters do for sure. processes with tapped buffers, pitch shifters or soundfile players also benefit from it. you also find it in a lot of generators (synthesizer oscillators).
so you weigh the advantage of using a higher rate against the disadvantages of the required low-pass and the doubled CPU usage, and then decide whether to do it or not. :)
4. last but not least... you might want to produce, i.e. mix and master, audio at a higher sampling rate anyway, and then later you want to create a 44 kHz master, too. if you do that, you need to carry out such a process.
far-fetched example: you play a virtual synthesizer in a 96 kHz environment. it contains frequencies around 25 kHz. later you want to apply pitch shifting to that material and make it 10 octaves lower.
if you had worked at 44 kHz from the beginning, that frequency would not even have been generated.
if you first sample it down to 44, it will sound terrible.
if you first low-pass filter it and then sample it down to 44, it will be gone.
so what you do is apply the pitch shifting while still at 96, and then sample it down.
the fact that this downsampling requires our magic 22 kHz (or rather 19?) band-limiting filter is quite irrelevant by now, because 25 kHz shifted down 10 octaves is only about 24 Hz.
the comb~ object uses upsampling internally afaik.
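point 2 above is exactly what a polyphase resampler does. as a sketch (scipy's resample_poly is my choice of tool here, not something mentioned above): converting 50 kHz to 40 kHz means going up by the integer factor 4 (through 200 kHz), band-limiting once, then going down by the integer factor 5.

    import numpy as np
    from scipy import signal

    sr_in, sr_out = 50000, 40000
    t = np.arange(sr_in) / sr_in
    x = np.sin(2 * np.pi * 1000 * t)   # a 1 kHz test tone, one second at 50 kHz

    # internally: insert zeros to reach 200 kHz, low-pass filter once,
    # then keep every 5th sample -- only integer multiplies and divides
    y = signal.resample_poly(x, up=4, down=5)

    print(len(x), len(y))              # 50000 -> 40000 samples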
As always, I really appreciate your detailed explanations, Roman.
Am I right in saying that these situations occur when you are working in complex environments, like when you have to deal with "professional" audio equipment, sophisticated external instruments, etc., and that when you use basic (and consistently configured) audio file recorders generating .wav, standard sound cards, commodity speakers, and keep everything inside Max or Live, the need for down/up-sampling is close to exceptional?
It's important because, since I have tons of things to learn and limited time to devote to them, I need to prioritize a bit!
yes, and we should add that people probably do it far less in max/msp than in commercial projects - or in some FPGA hardware device, where performance is no longer an issue.
with good speakers, and in a context where you actually have to render high-quality audio, it can make sense in max too, but not for daily work.
and poly~ has some learning curve and is altogether not very elegant, so in an egocentric one-time project it might be easier to simply run your whole system at 192 kHz anyway.