I’m fairly experienced with Max/MSP, but I’ve only just started creating externals. Despite having some previous experience with C, it has completely got my head in a twist.

I’ve been building a phase vocoder in Max, which is fine. However, I’ve gone on to try to add a spectral delay, which requires building an external. Basically I need to know the best way to go about manipulating my FFT data in the external. Do I need to run the FFT data into a buffer inside Max and then access the buffer separately from the external? Or is it best to run the FFT data into the external and then buffer it there for the delays?

If accessing the buffer is the better option, how do I go about doing that? Having looked at the SDK help, I haven’t seen a way of taking in that data.

Thanks for the help.

SR

In any case, it is certainly worth learning how to write externals to solve problems that are difficult to solve in Max/MSP itself. (I’m writing a book on this subject right now.)

Good luck,

Eric

It’s as easy as that; no need for a buffer.

I was just thinking it would be easier to handle in a buffer, since then each element would represent one frame of the FFT, meaning a delay time from a separate buffer could be attached to each element.
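A minimal sketch of that indexing idea, assuming frames are stored contiguously in a ring buffer with a separate table of per-bin delay counts (the function name and parameters are illustrative, not from any existing external):

```c
/* Compute where to read a delayed FFT bin from a ring buffer in which
   frames are stored one after another. write_pos is where the bin was
   just written; delay_frames comes from a separate per-bin table. */
static long delayed_index(long write_pos, long frame_size,
                          long delay_frames, long buffer_size)
{
    long idx = write_pos - frame_size * delay_frames;
    if (idx < 0)            /* wrap around the ring */
        idx += buffer_size;
    return idx;
}
```

With one such lookup per bin, each element of the delay-time table controls exactly one frame-sized step back into the FFT history.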

Great thanks for the help guys.

SR

Have you checked out John Gibson’s spectral delay externals?

(go to ‘software’). They’re really impressive.

brad

I think I’m going to go with creating the buffer in the external, and then create another alongside it for delay times and count round them together.

SR

@shatter_resistant: best of luck, i’ll test anything you do (!). i love john gibson’s too, but i’m upset there are no panning controls inside the externals. will yours pan?

+1

This is kind of true. There are index~-based solutions that are conceptually quite straightforward, but a word of warning about tapin~/tapout~ or delay~-based solutions (I haven’t tried the latter, but I assume it should work): these are a bit more complex, due to an unexpected (for me, at least) extra sample of delay when setting the delay times using signals, which you would need to do in this case. This came up when I was checking someone else’s spectral delay patch about a year ago, and it makes these things a bit nasty. So the first thing is, I’d use count~ and index~ or something like that if you go for a Max solution.

There are other reasons for hard-coding, including the inefficiency and lower accuracy of the current pfft~ FFT routines compared to Apple’s vDSP ones (if you are on a Mac, of course), and possibly that more complex algorithms become simpler to deal with.

I seem to remember that John Gibson’s spectral delay (which I thought excellent in terms of code clarity and thoroughness) does not phase accumulate at all, meaning that when the delay times are changing the results can be choppy. To get a smoother sound for changing delay times you need to phase accumulate (standard phase vocoder fare…), although this leads to the potentially more problematic situation of phase smearing, drift, etc. As a compromise, when I wrote my spectral delay external (probably about 4 years ago now) I used a hybrid technique, in which I slowly reset the accumulation to zero over time: the delay-time transitions are smooth, but eventually you return to the correct input phases, at which point you stop phase accumulating so as to conserve CPU. You can set the rate of this correction to choose whatever compromise between choppy transitions and rate of return to phase coherence you desire.
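A minimal sketch of that hybrid per-bin update (illustrative names, not the actual external’s code): the phase-accumulated value is crossfaded toward the raw delayed value by a ramp that creeps from 0 to 1 after each delay change; once the ramp reaches 1, the output equals the raw bin and accumulation can be skipped.

```c
/* Hybrid update for one bin: 'acc' is the phase-accumulated (re, im)
   value carried between frames, 'raw' is the bin read straight from
   the delay buffer, and 'ramp' grows from 0 to 1 at a user-set rate.
   At ramp == 1 the output is exactly the raw bin (correct phases). */
static void hybrid_bin(float acc_re, float acc_im,
                       float raw_re, float raw_im, float ramp,
                       float *out_re, float *out_im)
{
    *out_re = acc_re + ramp * (raw_re - acc_re);
    *out_im = acc_im + ramp * (raw_im - acc_im);
}
```

The rate at which `ramp` increases is what sets the compromise between choppy transitions and speed of return to phase coherence.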

This topic has come up here before as well, so there should be some old posts around about it. If the code is of interest to anyone I can post it, although it’s not as pretty as John Gibson’s, and it lacks proper denormal handling at present. In essence, it’s a ring buffer with a cartesian phase vocoder.

As regards MSP versus internal buffers, I’d say only use an MSP buffer for the actual delay if you can see some value in allowing users to interact with the buffer in their own way. In this case you probably wouldn’t want that, so just allocate yourself some memory and store your data internally. However, you might have a different idea.

Regards,

Alex

You make a good point about a Max solution; it just seems too unstable to work really effectively. So I’m going to stick with my external.

@pid I hadn’t thought about panning yet. I’m on a time limit, so it might happen eventually, but I wouldn’t be hopeful about the near future.

For Intel / PC you need a decent denormal-solving strategy for the feedback part, which this code doesn’t have (it was written under PPC; I’ve run it under Intel, but not in a CPU-critical situation, and it clearly does spike on tails due to feedback). These days on the Mac platform I’ve taken to simply turning denormals off: floating-point math is done on the vector unit (unless you explicitly tell the compiler otherwise, I think), and the vector unit has flags to disable denormals (flush to zero, FTZ, and denormals are zero, DAZ, respectively). This is faster and easier than other methods.
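On x86 those flags live in the SSE control/status register and can be set per thread; a minimal sketch (the function name is illustrative, and this is x86-only):

```c
#include <xmmintrin.h> /* SSE control/status register intrinsics */

/* Set flush-to-zero (FTZ, bit 15) and denormals-are-zero (DAZ, bit 6)
   in the MXCSR register, so denormal results and inputs are treated
   as zero. With both on, decaying feedback tails never hit the
   denormal slow path. */
static void disable_denormals(void)
{
    _mm_setcsr(_mm_getcsr() | 0x8040); /* 0x8000 = FTZ, 0x0040 = DAZ */
}
```

MXCSR is per-thread state, so this would need to run on the audio thread, not just in `main`.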

Also, the interpolation between the accumulated coordinates and the straight-out-of-the-buffer cartesian coordinates is done linearly here, which is not the best; there may be a better way of doing it. The other problem for spectral delay that I’ve never solved adequately is fractional frame delay (for different delays across the spectrum). I think some form of cubic interpolation on the cartesian coordinates might work well, but the problem is a bit complex for me. However, the limitation of only exact frames of delay may or may not be a problem depending on your purposes.
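For reference, the kind of 4-point cubic that might be tried is sketched below (a Catmull-Rom form applied independently to the real and imaginary parts; whether this actually behaves well on cartesian spectral data is exactly the unsolved question above):

```c
/* Catmull-Rom interpolation between frames y1 and y2, with y0 and y3
   the neighbouring frames and 0 <= t < 1 the fractional frame delay.
   Reproduces y1 at t = 0 and y2 at t = 1, and is exact for values
   that vary linearly from frame to frame. */
static float cubic_interp(float y0, float y1, float y2, float y3, float t)
{
    float a = 0.5f * (3.0f * (y1 - y2) + y3 - y0);
    float b = y0 + 2.0f * y2 - (5.0f * y1 + y3) * 0.5f;
    float c = 0.5f * (y2 - y0);
    return ((a * t + b) * t + c) * t + y1;
}
```

Applied per bin, per coordinate, this would give in-between frames at fractional delay positions, at the cost of three extra frame reads per bin.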

Do ask if anything is not clear….

Alex.

#include <ext.h> // Max SDK header (assumed; the angle-bracketed includes were stripped by the forum)

#include <z_dsp.h> // MSP signal-object support (assumed, as above)

#include <math.h> // for sqrt() and isinf()

#include "r_pfft.h"

#define TWOPI 6.28318530717958647692

void *this_class;

typedef struct _SpectralDelay

{

t_pxobject x_obj;

t_pfftpub *x_pfft;

float *FFTbuffer_real, *FFTbuffer_imag, *FFT_Buffer;

long pfftvectsize;

char MemAlloc;

long currentpos, maxpos, maxframes;

int PhaseMode;

float Slide;

float PerFrame;

float Ramp;

long *delaytimes;

// float *delaytimesfract;

float *feedbacklevels;

void *f_proxy;

void *f_proxy2;

long f_inletNumber;

long f_inletNumber2;

} t_SpectralDelay;

void *SpectralDelay_new(long maxframes);

void SpectralDelay_free(t_SpectralDelay *x);

void SpectralDelay_list (t_SpectralDelay *x, t_symbol *msg, short argc, t_atom *argv);

void SpectralDelay_slide(t_SpectralDelay *x, double Slide);

void SpectralDelay_rampsmooth(t_SpectralDelay *x, double PerFrame);

void SpectralDelay_dsp(t_SpectralDelay *x, t_signal **sp, short *count);

t_int *SpectralDelay_perform(t_int *w);

void SpectralDelay_assist(t_SpectralDelay *x, void *b, long m, long a, char *s);

t_symbol *ps_spfft;

void main(void)

{

setup((t_messlist **)&this_class, (method) SpectralDelay_new, (method)SpectralDelay_free, (short)sizeof(t_SpectralDelay), 0L, A_DEFLONG, 0);

addmess((method)SpectralDelay_dsp, "dsp", A_CANT, 0);

addmess ((method)SpectralDelay_list, "list", A_GIMME, 0);

addmess ((method)SpectralDelay_slide, "slide", A_DEFFLOAT, 0);

addmess ((method)SpectralDelay_rampsmooth, "rampsmooth", A_DEFFLOAT, 0);

addmess ((method)SpectralDelay_assist, "assist", A_CANT, 0);

ps_spfft = gensym("__pfft~__");

dsp_initclass();

}

void SpectralDelay_free(t_SpectralDelay *x)

{

dsp_free(&x->x_obj);

if (x->FFT_Buffer) free (x->FFT_Buffer);

if (x->delaytimes) free (x->delaytimes);

free (x->feedbacklevels);

if (x->FFTbuffer_real) free (x->FFTbuffer_real);

if (x->FFTbuffer_imag) free (x->FFTbuffer_imag);

freeobject((t_object *) x->f_proxy2);

freeobject((t_object *) x->f_proxy);

}

void SpectralDelay_slide(t_SpectralDelay *x, double Slide)

{

x->PhaseMode = 1;

if (Slide >= 0 && Slide <= 1)

x->Slide = (float) Slide;

}

void SpectralDelay_rampsmooth(t_SpectralDelay *x, double PerFrame)

{

x->PhaseMode = 0;

if (PerFrame >= 0 && PerFrame <= 1)

x->PerFrame = (float) PerFrame;

}

void *SpectralDelay_new(long maxframes)

{

t_SpectralDelay *x = (t_SpectralDelay *)newobject(this_class);

x->f_proxy2 = proxy_new(x, 4 ,&x->f_inletNumber2);

x->f_proxy = proxy_new(x, 3 ,&x->f_inletNumber);

x->x_pfft = (t_pfftpub *)ps_spfft->s_thing; // Find pfft vectsize

int pfftvectsize = 0;

if (x->x_pfft)

pfftvectsize = x->x_pfft->x_fftsize / 2;

if (!pfftvectsize)

pfftvectsize = 4096; // Default to 4096

x->pfftvectsize = pfftvectsize;

dsp_setup((t_pxobject *)x, 2);

outlet_new((t_object *)x, "signal");

outlet_new((t_object *)x, "signal");

long *delaytimes = x->delaytimes = (long *) malloc (pfftvectsize * sizeof(long));

float *feedbacklevels = x->feedbacklevels = (float *) malloc (pfftvectsize * sizeof(float));

if (maxframes > 0 && maxframes <= 10000)

x->maxframes = maxframes;

else

x->maxframes = 10;

x->Slide = 0;

x->Ramp = 1;

x->PerFrame = 0;

x->PhaseMode = 0;

x->FFT_Buffer = (float *) malloc(4 * pfftvectsize * sizeof (float));

float *FFT_Buffer = x->FFT_Buffer;

long memoryitemsize = pfftvectsize * maxframes;

x->maxpos = memoryitemsize;

float *FFTbuffer_real = x->FFTbuffer_real = (float *) malloc (memoryitemsize * sizeof(float));

float *FFTbuffer_imag = x->FFTbuffer_imag = (float *) malloc (memoryitemsize * sizeof(float));

if (x->FFT_Buffer && x->delaytimes && x->feedbacklevels && x->FFTbuffer_real && x->FFTbuffer_imag) x->MemAlloc = 1;

else

{

x->MemAlloc = 0;

goto out;

}

int iter;

for (iter = 0; iter < memoryitemsize; iter++)

FFTbuffer_real[iter] = 0;

for (iter = 0; iter < memoryitemsize; iter++)

FFTbuffer_imag[iter] = 0;

for (iter = 0; iter < 2 * pfftvectsize; iter++)

FFT_Buffer[iter] = 0;

for (iter = 0; iter < pfftvectsize; iter++)

{

delaytimes[iter] = 0;

feedbacklevels[iter] = 0;

}

out:

x->currentpos = 0;

return (x);

}

void SpectralDelay_list (t_SpectralDelay *x, t_symbol *msg, short argc, t_atom *argv)

{

long *delaytimes = x->delaytimes;

float *feedbacklevels = x->feedbacklevels;

long maxframes = x->maxframes;

long i;

if (proxy_getinlet((t_object *)x) == 4)

{

for (i = 0; (i < argc) && (i < x->pfftvectsize); i++)

{

switch (argv[i].a_type)

{

case A_LONG:

feedbacklevels[i] = 0;

break;

case A_SYM:

feedbacklevels[i] = 0;

break;

case A_FLOAT:

if (argv[i].a_w.w_float >= -1 && argv[i].a_w.w_float <= 1)

feedbacklevels[i] = argv[i].a_w.w_float;

break;

}

}

}

else

{

x->Ramp = 0;

for (i = 0; (i < argc) && (i < x->pfftvectsize); i++)

{

switch (argv[i].a_type)

{

case A_LONG:

if (argv[i].a_w.w_long >= 0 && argv[i].a_w.w_long <= maxframes)

delaytimes[i] = argv[i].a_w.w_long;

else

delaytimes[i] = 0;

break;

case A_SYM:

delaytimes[i] = 0;

break;

case A_FLOAT:

if (argv[i].a_w.w_float >= 0 && argv[i].a_w.w_float <= maxframes)

delaytimes[i] = (long) argv[i].a_w.w_float;

else

delaytimes[i] = 0;

break;

}

}

}

}

void SpectralDelay_dsp(t_SpectralDelay *x, t_signal **sp, short *count)

{

dsp_add(SpectralDelay_perform, 6, sp[0]->s_vec, sp[1]->s_vec, sp[2]->s_vec, sp[3]->s_vec, sp[0]->s_n, x);

}

t_int *SpectralDelay_perform(t_int *w)

{

int vectsize;

t_SpectralDelay *x;

float *in1 = (float *)(w[1]);

float *in2 = (float *)(w[2]);

float *out1 = (float *)(w[3]);

float *out2 = (float *)(w[4]);

vectsize = w[5];

x = (t_SpectralDelay *)(w[6]);

if (x->x_obj.z_disabled || !x->MemAlloc || vectsize > x->pfftvectsize)

goto out;

long *delaytimes = x->delaytimes;

float *feedbacklevels = x->feedbacklevels;

float *therealbuffer = x->FFTbuffer_real;

float *theimagbuffer = x->FFTbuffer_imag;

float outframereal, outframeimag, prevframereal, prevframeimag;

long doublevectsize = vectsize << 1;

long maxpos = x->maxpos;

long nextpos = x->currentpos;

long iter, outframe1, outframe2;

float a, b, c, d, e, f, g, h, i;

float Ramp = x->Ramp;

float Slide = x->Slide;

float PerFrame = x->PerFrame;

int PhaseMode = x->PhaseMode;

float *FFT_Buffer = x->FFT_Buffer;

if (PhaseMode)

Ramp = Slide;

else

{

if (Ramp < 1)

{

Ramp = Ramp + PerFrame;

if (Ramp > 1) Ramp = 1;

x->Ramp = Ramp;

}

}

if (Ramp == 1)

{

for (iter = 0; iter < vectsize; iter++)

{

therealbuffer[nextpos] = *in1++;

theimagbuffer[nextpos] = *in2++;

outframe1 = nextpos - (vectsize * delaytimes[iter]);

if (outframe1 < 0)

outframe1 += maxpos;

outframereal = therealbuffer[outframe1];

outframeimag = theimagbuffer[outframe1];

FFT_Buffer[iter] = outframereal;

FFT_Buffer[iter + doublevectsize] = outframeimag;

if (delaytimes[iter] && iter) // Don’t feedback if delaytime < one frame or in the dc bin

{

therealbuffer[nextpos] += (feedbacklevels[iter] * outframereal);

theimagbuffer[nextpos] += (feedbacklevels[iter] * outframeimag);

}

*out1++ = outframereal;

*out2++ = outframeimag;

if (++nextpos >= maxpos)

nextpos = 0;

}

}

else

{

for (iter = 0; iter < vectsize; iter++)

{

therealbuffer[nextpos] = *in1++;

theimagbuffer[nextpos] = *in2++;

outframe1 = nextpos - (vectsize * delaytimes[iter]);

if (outframe1 < 0)

outframe1 += maxpos;

outframe2 = outframe1 - vectsize;

if (outframe2 < 0)

outframe2 += maxpos;

outframereal = therealbuffer[outframe1];

outframeimag = theimagbuffer[outframe1];

prevframereal = therealbuffer[outframe2];

prevframeimag = theimagbuffer[outframe2];

// Phase Accumulate !!

a = outframereal;

b = outframeimag;

c = prevframereal;

d = prevframeimag;

e = FFT_Buffer[iter];

f = FFT_Buffer[iter + vectsize];

if (c == 0 && d == 0) c = (float) 1;

if (e == 0 && f == 0) e = (float) 1;

g = ((e * c) + (f * d));

h = ((f * c) - (e * d));

i = (float) 1 / sqrt((g * g) + (h * h));

if (!isinf(i))

{

g = g * i;

h = h * i;

}

else

{

g = (float) 1;

h = 0;

}

c = ((a * g) - (b * h));

d = ((a * h) + (b * g));

c = c + (Ramp * (outframereal - c));

d = d + (Ramp * (outframeimag - d));

FFT_Buffer[iter] = outframereal = c;

FFT_Buffer[iter + vectsize] = outframeimag = d;

if (delaytimes[iter] && iter) // Don’t feedback if delaytime < one frame or in the dc bin (nasty!)

{

therealbuffer[nextpos] += (feedbacklevels[iter] * outframereal);

theimagbuffer[nextpos] += (feedbacklevels[iter] * outframeimag);

}

*out1++ = outframereal;

*out2++ = outframeimag;

if (++nextpos >= maxpos)

nextpos = 0;

}

}

x->currentpos = nextpos;

out:

return w + 7;

}

void SpectralDelay_assist(t_SpectralDelay *x, void *b, long m, long a, char *s)

{

if (m == ASSIST_OUTLET) {

switch (a) {

case 0:

sprintf(s, "(signal) FFT Real Out");

break;

case 1:

sprintf(s, "(signal) FFT Imag Out");

break;

}

}

else {

switch (a) {

case 0:

sprintf(s, "(signal) FFT Real In");

break;

case 1:

sprintf(s, "(signal) FFT Imag In");

break;

case 2:

sprintf(s, "(list) Delay Times");

break;

case 3:

sprintf(s, "(list) Feedback Levels");

break;

}

}

}

Thanks for the interest! My contract specifies that the manuscript be delivered to the publisher by August 30th, 2010, so hopefully it will be out sometime in 2011, if not before.

@Raja…… erm….. that’s ok????

// Phase Accumulate !!

a = outframereal;

b = outframeimag;

c = prevframereal;

d = prevframeimag;

e = FFT_Buffer[iter];

f = FFT_Buffer[iter + vectsize];

if (c == 0 && d == 0) c = (float) 1;

if (e == 0 && f == 0) e = (float) 1;

g = ((e * c) + (f * d));

h = ((f * c) - (e * d));

i = (float) 1 / sqrt((g * g) + (h * h));

if (!isinf(i))

{

g = g * i;

h = h * i;

}

else

{

g = (float) 1;

h = 0;

}

c = ((a * g) - (b * h));

d = ((a * h) + (b * g));

c = c + (Ramp * (outframereal - c));

d = d + (Ramp * (outframeimag - d));

Thanks. Otherwise it’s great; it’s just unclear from the way you’ve coded it what is going on.

SR

The basic idea is a cartesian phase vocoder with a linear interpolator at the end…

You should be able to see roughly (in order):

Check for divide by zero and avoid

Complex Division

Check for infinite denominator and set to unit vector if so…

Complex Multiply

Linear Interpolation with non phase accumulated values…

However, I’m a bit confused as to the inputs to each of these calculations… Either I’ve done something quite clever (to optimise it, and now I don’t remember what it is) or I’ve done something stupid and it doesn’t work properly.

Let’s assume the former (optimistically). What I should be doing is:

taking the phase difference between the delayed frame and the one just before it (cartesian form)

set the magnitude of that to the magnitude of the delayed frame (the magnitude we want)

divide by the magnitude of the previously output frame

Now we have a vector which represents the phase and amp difference with the previously output frame

Complex multiply this vector with the previously output frame

[linearly interpolate]

I think I may have combined some of those steps to reduce the number of operations. To check this I’d have to do a little bit of algebra that I don’t have time for right now… If the above makes sense to you and you can figure out for yourself great – if not let me know – I’ll try and find a moment later…

Alex
