real-time instrument processor query

mixwhit's icon

I'm relatively new to Max, trying to figure out if something I want to do is feasible. So I'm hoping for guidance on that question, as well as any suggestions regarding how I might go about it.

I want to develop a system that listens to instrumental performers, continuously samples that input, processes the input, and then outputs the sound back in real-time (I can tolerate a slight delay).

The processing will involve extracting one period's worth of sound at some interval, redrawing that period based on an algorithm, then outputting the new period (repeated for a time).

Sound feasible? If so, any pointers? I figure I'll need an adc~ feeding into a buffer~, but I'm not sure how to extract a period's worth of sound, nor what to use to implement an algorithm that examines it and then redraws it.

thanks

vichug's icon

What do you mean by "one period's worth of sound at some interval"? The way I understand it, it's like one melodic moment that someone plays; if so, then what you want to do is very complicated, yet it already exists: it's the kind of thing Ircam's Antescofo does. It's still an ongoing matter of research, though. Analysing a piece of sound, recognising it as a melody, then scrambling its parts in order to make another coherent piece of sound is not an easy task!
Is that what you meant though?...

mixwhit's icon

For example, say a saxophonist plays a major scale in whole notes. For each pitch they play, most of the waveform is fairly regular (periodic). In a loop, I want to grab a bit of the sound and extract one period's worth. So if they're playing A440, the period length is sample_rate/440 (say 48000/440 ≈ 109 samples), so I need a 109-sample chunk from the sound I grabbed. Where I pick it likely matters (maybe look for a zero crossing?), but it might not. Then I'll use those samples to calculate a new 109-sample period and use it to output a sound at the same pitch (since the period length is the same) but slightly different.
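In Max this would happen in a buffer~, but the idea above can be sketched outside Max too. Here's a rough Python/NumPy illustration (an assumption on my part, not a Max patch): given the sample rate and a pitch you already know, compute the period length and grab one period starting at an upward zero crossing.

```python
import numpy as np

def extract_period(signal, sample_rate, f0):
    # length of one period at pitch f0, e.g. 48000/440 -> 109 samples
    period_len = int(round(sample_rate / f0))
    # start at the first upward zero crossing so consecutive grabs line up
    ups = np.flatnonzero((signal[:-1] < 0) & (signal[1:] >= 0))
    start = ups[0] if ups.size else 0
    return signal[start:start + period_len]

# toy input: one second of a pure A440 at 48 kHz (a real sax won't be this clean)
sr, f0 = 48000, 440.0
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * f0 * t)
period = extract_period(sig, sr, f0)  # 109 samples
```

Note this assumes the pitch (f0) has already been detected by something else; that detection step is the hard part in practice.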

make sense? if not, i'll get more explicit.

thx!

vichug's icon

OK, it makes sense, and it seems possible and not as hard as what I described, but "seems" only. You can do it that way, but you will be sorely disappointed by the resulting sound. One instrument is not the same wave repeated at different pitches; there is a noisy component in the sound, and that part evolves with time, very quickly. What makes the sound of an instrument is a large number of parameters, some of which are still not well understood today. In any case, the shape of an instrument's waveform is in constant evolution, so you would need to continuously (1) know the instrument's pitch, and (2) sample one wave of the instrument at the precise time it plays that pitch, which is already at the limit of the possible. You then have to ignore the noisy part of the sound; you must also have a very clean recording of the instrument so that the sound can be isolated easily and other sounds don't interfere; and you must make sure you have an exact wavelength, for which overlapping windows may be indispensable. Basically, depending on what you want to do, in most cases trying to isolate one wavelength of an instrument's sound is probably the most complicated and least effective way to modify it. What kind of sound modification are you aiming for? There is probably a better way to do it somewhere out there.

do.while's icon

Hi. Vichug is very right.
Capturing such short periods of sound and freezing them may give you very annoying audio results. Among other things, I would go for longer "chunks" that can represent more of the source's nature, and loop them: crossfade the loop regions to stay smooth, window them, etc. I would definitely go for that in your case. It's a bit of work, but you'll be able to rely on that part with more confidence, and the randomly grabbed sound won't shock you too much.
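The crossfaded-loop idea can be sketched like this (again a Python/NumPy illustration of the concept, not a Max patch, and the fade shape is my own assumption): overlap the tail of a grabbed chunk onto its head so the chunk repeats without a click.

```python
import numpy as np

def crossfade_loop(chunk, fade_len):
    # Overlap the chunk's tail onto its head with a constant-gain
    # raised-cosine crossfade, so the result loops without a click.
    fade_out = np.cos(np.linspace(0.0, np.pi / 2.0, fade_len)) ** 2
    fade_in = 1.0 - fade_out
    looped = chunk[:-fade_len].copy()
    looped[:fade_len] = chunk[:fade_len] * fade_in + chunk[-fade_len:] * fade_out
    return looped

rng = np.random.default_rng(0)
chunk = rng.standard_normal(4800)   # 100 ms of noise at 48 kHz
loop = crossfade_loop(chunk, 480)   # 10 ms crossfade region
```

At the loop point the last sample of `loop` is followed seamlessly by its first sample, which carries full weight from the chunk's tail, so the wrap-around is smooth. In Max itself the analogous tools would be groove~ with loop points, or windowed playback via play~ and a phasor~-driven envelope.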

spectro's icon

One way you can get close to what (I think) you want, though not strictly using a single period of the input pitch, is to use an external like fiddle~ or analyser~ in conjunction with oscbank~ (via the former's sinusoidal-components output) to recreate a sustained version of the analysed fragment by means of simplified additive resynthesis. With manual capture (automatic will be a bit trickier) this can work pretty well with steady-state sources like sustained winds. Attached is a modification I made a while ago of an old fiddle~ help file to try this method. Note that you will need the fiddle~ external from here: http://crca.ucsd.edu/~tapel/software.html
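For intuition about what that fiddle~-to-oscbank~ chain is doing, here is a minimal Python/NumPy sketch of the resynthesis half: sum a bank of sinusoids from a snapshot of partials. The partial list below is hypothetical, invented for illustration; in the real patch those values would come from fiddle~'s analysis.

```python
import numpy as np

def additive_resynth(partials, seconds, sr=48000):
    # Sum a bank of sinusoids, a simplified stand-in for what oscbank~
    # does with the analysed sinusoidal components.
    t = np.arange(int(seconds * sr)) / sr
    out = np.zeros_like(t)
    for freq, amp in partials:
        out += amp * np.sin(2.0 * np.pi * freq * t)
    return out

# hypothetical partials for a sustained A440 wind tone: (freq Hz, amplitude)
partials = [(440.0, 0.8), (880.0, 0.4), (1320.0, 0.25), (1760.0, 0.1)]
tone = additive_resynth(partials, 1.0)  # one second of sustained resynthesis
```

Because the partial amplitudes are frozen, the result sustains indefinitely at the analysed timbre, which is exactly why this approach suits steady-state sources better than attacks or noisy material.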