Spectral stretching and more nifty FFT-stuff

Tarik's icon

So here I am, inside my phase-vocoding pfft~-subpatch. It filters, it gates, but.. it doesn't transpose yet!

And I can't just gizmo~ my way out of this, because I want to be able to use a buffer~ (in conjunction with an index~ object) to tell my subpatch how much each bin should be transposed. And gizmo~ and fbinshift~ don't take signals in their right inlets.

what to do...?

Jean-Francois Charles's icon
Tarik's icon

Thanks,

Yes, I think you understand my question very well. I just thought there was a way of changing the frequencies of all the bins by doing something with the phase or something like that. Of course, changing the bins is also a nice trick but I don't understand what to do after my index~ object has changed the index-number. How can I tell an fftout~ to interpret bin 6 as being bin 4 for instance?

The patch I'm working on is very similar to the one used in the phase vocoder tutorial (https://cycling74.com/download/articles/tutorials/pvoc_patches.zip)
so any solution that would work on that patch would also work on mine.

- Tb

Tarik's icon

Ah... wait, let me guess... I should use two FFT-sized buffers (real and imaginary) into which I poke~ all the bins...?

It sounds like there should be an easier way, and I'd be happy to hear of one, but this should work too, I think. I'm gonna try!

- Tb

Eric Lyon's icon

I am also interested in this question. From my brief examination, it appears that you cannot directly do pitch scaling (i.e. multiply the frequency of each partial by a constant) within pfft~ as the only available resynthesis method is IFFT overlap/add, not oscillator bank. Though I have been titillated by the oscbank~ object (and in fact use it with one of my new LyonPotpourri externals for pitch scaling) and wonder if oscbank~ could be incorporated into a pfft~ subpatch for direct frequency manipulations.

Of course an alternative is to do time-scaling within pfft~ and then sample-rate convert outside of the pfft~ patch to get the desired pitch change. But you're out of luck if you want to apply different multipliers to different partials. (See my magfreq_analysis~ in LyonPotpourri or pvwarp~ in FFTease for custom externals that do this.)

I would be delighted to be shown that pfft~ is in fact able to do direct frequency manipulations.

Eric

lawrence casserley's icon

I am also interested in this area. I am working on something where I
want to separate bands of frequencies and shift each differently, the
kind of thing that was done in some of Trevor Wishart's programs for
the Composers Desktop Project back in the 80s, but do it live. Plus
probably some other things of the kind....

Any ideas much appreciated

L

On 12 Dec 2006, at 22:56, Eric Lyon wrote:

>
> I am also interested in this question. From my brief examination,
> it appears that you cannot directly do pitch scaling (i.e. multiply
> the frequency of each partial by a constant) within pfft~ as the
> only available resynthesis method is IFFT overlap/add, not
> oscillator bank. Though I have been titillated by the oscbank~
> object (and in fact use it with one of my new LyonPotpourri
> externals for pitch scaling) and wonder if oscbank~ could be
> incorporated into a pfft~ subpatch for direct frequency manipulations.
>
> Of course an alternative is to do time-scaling within pfft~ and
> then sample-rate convert outside of the pfft~ patch to get the
> desired pitch change. But you're out of luck if you want to apply
> different multipliers to different partials. (See my
> magfreq_analysis~ in LyonPotpourri or pvwarp~ in FFTease for custom
> externals that do this.)
>
> I would be delighted to be shown that pfft~ is in fact able to do
> direct frequency manipulations.
>
> Eric
>

Lawrence Casserley - lawrence@lcasserley.co.uk
Lawrence Electronic Operations - www.lcasserley.co.uk
Colourscape Music Festivals - www.colourscape.org.uk

Eric Lyon's icon

> I am also interested in this area. I am working on something where I
> want to separate bands of frequencies and shift each differently, the
> kind of thing that was done in some of Trevor Wishart's programs for
> the Composers Desktop Project back in the 80s, but do it live. Plus
> probably some other things of the kind....
>
> Any ideas much appreciated
>

Hi Lawrence,

Of course you could do precisely that with my magfreq_analysis~ hooked up to an oscbank~, but it may not be possible to do this exclusively using MaxMSP core objects. Someone please prove me wrong!

Eric

Jean-Francois Charles's icon

> I am also interested in this question. From my brief examination, it appears
> that you cannot directly do pitch scaling (i.e. multiply the frequency of each
> partial by a constant) within pfft~

Isn't that what [gizmo~] does?

Eric Lyon's icon

Quote: jeanfrancois.charles wrote on Tue, 12 December 2006 23:31
----------------------------------------------------
> > I am also interested in this question. From my brief examination, it appears
> > that you cannot directly do pitch scaling (i.e. multiply the frequency of each
> > partial by a constant) within pfft~
>
> Isn't that what [gizmo~] does?
>
>

Yes, but gizmo~ cannot handle the more general case of independent bin frequency adjustments.

Eric

Stefan Tiedje's icon
Tyler Nitsch's icon

If I'm not mistaken, the change in phase of each bin in a phase vocoder is equal to the instantaneous frequency of that bin. I think the frame-delta thingy in the pfft~ patch is what makes for a consistent unwrapping of the phase, and hence each bin being within the specific frequency resolution around (fs/(size of fft))*index. If one were to create a structure which increased or decreased the rate of change of the phase of each bin individually, one could probably independently pitch shift whatever partial (indexed component of an FFT) by whatever they wanted.

p-s's icon

Just in the last few days I have been working on a patch
that
1. transforms phases into instantaneous frequencies
(I found a useful formula in the book DAFX, ed. by U. Zolzer,
article by Arfib)
2. resynthesizes with ioscbank~

So, to do that, I have to feed the third output of an FFT object
(index count) into the fourth input of ioscbank~ (index), right?

BUT

it seems that ioscbank~ can't keep up with the speed of the index
count, and in fact it generates something like zipper noise...

Paolo

lawrence casserley's icon

Thanks Eric

I will have a look at your objects.

I had thought of sending the fft to two places, where one has all the
bins zeroed below my split frequency, and the other those above, then
use two gizmo~s. A bit cumbersome, though......

Best

L

On 12 Dec 2006, at 23:30, Eric Lyon wrote:

> Hi Lawrence,
>
> Of course you could do precisely that with my magfreq_analysis~
> hooked up to an oscbank~, but it may not be possible to do this
> exclusively using MaxMSP core objects. Someone please prove me wrong!
>
> Eric


Tarik's icon

Thanks a LOT for all your suggestions, I'm really learning about a lot of stuff very fast thanks to the help I get on this forum.

But the more you know, the more you know what you don't know:

For instance, I can't find any information on the order in which signals are sent. How can I make sure that I read a signal out of a buffer AFTER a new value has been written to this buffer?

So I have an fftin~ object sending out a signal which specifies the FFT-Bin index number. Suppose at a certain point this number is 10. Then I want to first write to a buffer at index number 10, and then read from it. Not the other way around.

Is there any way to control this? I can't just use delay, because I want the result to be sent to the fftout~ immediately; delaying the data would alter the sound in ways that I didn't intend.

- Tb

Stefan Tiedje's icon

Tarik wrote:
> Is there any way to control this? I can't just use delay because I
> want the result to be sent to the fftout~ immediately. Not sending
> the data immediately this would alter the sound in ways that I didn't
> intend to.

As far as I know, the same right-to-left rule that applies to scheduler
events applies to audio-rate events: it comes out first on the right
side. Unfortunately there is no such thing as an audio-rate trigger, but
you can encapsulate a simple input-to-output patch as a trigger, which
should also work with audio patch cords. I use that to get a trigger that
does not change the type you send in...

Save it as St.pat and call it with an argument to get a specified number
of outlets. If you call up the patch from disk to edit it, you need to
hold cmd-shift to prevent it from deleting too much...

Max Patch
Copy patch and select New From Clipboard in Max.

--
Stefan Tiedje------------x-------
--_____-----------|--------------
--(_|_ ----|-----|-----()-------
-- _|_)----|-----()--------------
----------()--------www.ccmix.com

nathan wolek's icon

On Dec 14, 2006, at 10:01 AM, Stefan Tiedje wrote:
> as far as I know, the same right to left rule that applies to
> scheduler events, applies to audio rate events: its coming out
> first on the right side. Unfortunately there is no such thing as
> audiorate trigger,

Stefan:

My experience tells me this is a myth, although I would love to hear
someone from cycling weigh in on this. You're not the only one I
have heard perpetuating it. In programming externals, the signal
vectors for all inputs are available each time you hit the perform
method. It is all passed at the same time. Any right to left
preference would have to be built into the C-code of an individual
object.

[pause]
OK, just read some more of the thread. Looks like you are talking
about in a patch and *not* inlets/outlets as I was. Here too, I
think the myth does not hold. The order is determined by the signal
flow from start to finish in an MSP chain. So the surefire method
for forcing object A to process after object B is to connect outlet
on A to inlet on B.

If they are not connected, figure out a way to do so! This is the
solution given the MSP paradigm. If you are writing into a buffer
*and* then immediately using that value, why not write into the
buffer *and* send the value on to the point in your patch where it is
needed? In other words, don't rely on reading from the buffer in
this case.

--Nathan

-----
Nathan Wolek
nw@nathanwolek.com
http://www.nathanwolek.com

volker böhm's icon

On 14 Dec 2006, at 16:57, Nathan Wolek wrote:

> On Dec 14, 2006, at 10:01 AM, Stefan Tiedje wrote:
>> as far as I know, the same right to left rule that applies to
>> scheduler events, applies to audio rate events: its coming out
>> first on the right side. Unfortunately there is no such thing as
>> audiorate trigger,
>
> Stefan:
>
> My experience tells me this is a myth, ...

since the original question was about signal order in pffts, this
thread might be useful
https://cycling74.com/forums/index.php?t=msg&goto=50842&rid=0&S=7f6fc62f27f045f33b120d749a0f53b0#msg_50842

the test patch i posted in the above thread shows that
there is an issue with signal order - at least in pffts - and that it
seems to be the other way round, i.e. left before right.

and what about the following patch? position the poke~ object to the
left of index~ and compare to the original version.
although i'm not completely sure what's going on behind the scenes,
it shows that in certain situations you do have to be careful about
the relative positions of your signal objects.
cheers,
volker.

Max Patch
Copy patch and select New From Clipboard in Max.


Jean-Francois Charles's icon
volker böhm's icon

>> and what about the following patch? position the poke~ object to the
>> left of index~ and compare to the original version.
> I don't see any difference here. Is it normal?

do you mean no difference compared to the pfft example, or no difference
whether the position of poke~ is to the left or the right of index~?

Tarik's icon
Roman Thilenius's icon

> My experience tells me this is a myth, although I would love to hear
> someone from cycling weigh in on this. You're not the only one I
> have heard perpetuating it. In programming externals, the signal
> vectors for all inputs are available each time you hit the perform
> method. It is all passed at the same time.

exactly. you must think of "happens at the same time"
when you have 2 signals in a DSP.

so if you want to write one copy into a buffer and read
from the buffer, you need to implement at least one
sample delay. (not a vector - just one sample)
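Roman's one-sample delay amounts to reading the stored value before overwriting it; a minimal C sketch of the idea (illustrative only, not an actual MSP perform routine):

```c
/* When one signal writes into shared storage and another reads it back
   within the same vector, the reader must consume the value stored one
   sample earlier - i.e. read before you write. */
void one_sample_delay(const float *in, float *out, int n, float *state)
{
    float prev = *state;    /* value written one sample ago */
    int i;
    for (i = 0; i < n; i++) {
        out[i] = prev;      /* read the old value first... */
        prev = in[i];       /* ...then remember the new one */
    }
    *state = prev;          /* carry over to the next vector */
}
```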

AlexHarker's icon

Quote: Tyler Nitsch wrote on Tue, 12 December 2006 21:23
----------------------------------------------------
> If I'm not mistaken, the change in phase of each bin in a phase vocoder is equal to the instantaneous frequency of that bin. I think the frame delta thingy in the pfft patch is what makes for a consistent unwrapping of the phase and hence each bin being within the specific frequency resolution around (fs/(size of fft))*index. If one were to create a structure which increased or decreased the rate of change of the phase of each bin individually one could probably independantly pitch shift whatever partial(indexed component of an fft) by whatever they wanted.
----------------------------------------------------

This is basically correct. The way to frequency-shift FFT data without changing resynthesis methods is to phase-correct by an amount appropriate to the shift - and the phase corrections have to be accumulated as well if you are operating directly on the cartesian values.

Frequency shifting or warping within a pfft~ is not something I've done without writing an external to do it, because it gets nasty in MSP code. It also depends what you want to do. The method used by gizmo~ (and that I have used for frequency warping) acts on peaks (sinusoidal components) of an input, rather than individual bins (which represent parts of sinusoidal components). For this method MSP alone is not an option - you have to do some form of external programming, or at least have a couple of extra objects, and writing efficient code would be very hard.

To simply shift individual bins by an integer number of bins is a different issue, especially because the phase correction becomes trivial. It depends on overlap, but I think in all the cases you might see in Max/MSP you only have to either leave the phase alone, or reverse the polarity of the complex data every other frame.

This is all covered in more detail (and properly) in this paper: www.ee.columbia.edu/~dpwe/papers/LaroD99-pvoc.pdf

So to answer Eric's question it is possible (sort-of) depending on exactly what you want to do. Partial bin shifts are also possible but more complex than integer ones. Per-peak processing is currently not possible in pure Max/MSP code, but a simple peak finder object would make it possible, although inefficient. I hope this kind of thing will become easier in the future in pfft~, but we need more specialised objects for frame-based operations.

Regards,

Alex

Stefan Tiedje's icon

Nathan Wolek wrote:
> My experience tells me this is a myth, although I would love to hear
> someone from cycling weigh in on this. You're not the only one I have
> heard perpetuating it. In programming externals, the signal vectors
> for all inputs are available each time you hit the perform method. It
> is all passed at the same time. Any right to left preference would
> have to be built into the C-code of an individual object.

As I recall, I did pose the same question some time ago, and I got a
response from David or another c74 guru, maybe Jeremy, which claimed as
much. But this was related to the patch cords. I guess that object
outlets could also follow that (common) rule, but I would never rely on
it (though it should be easy to test).

As long as you do not create feedback, the signal vector chunks would
not create any problems with this; it's the question whether to use the
value from the previous or the current vector, and that would be
determined by the order...

Stefan


AlexHarker's icon

> exactly. you must think of "happens at the same time"
> when you have 2 signals in a DSP.

This is incorrect. The signal vector of an MSP object has to be calculated at some point in real time, so there is an order in which the objects in a DSP chain calculate their signal vectors - it is impossible for this to be done simultaneously.

Although for many purposes you can imagine that the signal cables in MSP are like analog audio cables and everything is simultaneous, this is in fact not the case, and in some specific instances it becomes an issue. The index~ / poke~ frame delay case is one of these.

The compilation of the DSP chain is done with Max messages, and therefore has some relationship to the Max right-to-left, bottom-up system, although I seem to remember that tests in the past have produced unexpected (reversed) results, so I won't try to guess which side (right or left) is first. This is complicated by the fact that objects can request to be processed first in the DSP chain, so unlike with Max objects, the order of execution cannot be expected to follow a strict rule in all cases.

Anyway, an official line on the index~ / poke~ thing would be useful, although I seem to remember this being more-or-less sorted out in the thread Volker linked to. However, I only get one message using the link - I seem to remember more discussion in this or another thread - maybe someone knows where it is...

Alex

nathan wolek's icon

On Dec 16, 2006, at 7:11 AM, Stefan Tiedje wrote:
> As long you do not create feedback, the signal vector chunks would
> not create any problems with this, its the question weather to use
> the value from the previous or the current vector and that would be
> determined by the order...

Yes, getting inside the current vector means getting your hands dirty
and coding externals in C. This is what happened to me. Seems to
have motivated several others on this list as well.


Peter Castine's icon

On 16-Dec-2006, at 13:14, Alex Harker wrote:

>> exactly. you must think of "happens at the same time"
>> when you have 2 signals in a DSP.
>
> This is incorrect. The signal vector of an MSP object has to be
> calcuated in some point in real-time, so there is an order for a
> DSP chain in which objects calculate their signal vectors - it is
> impossible for this to be done simultaneously.

Actually, it's _not_ incorrect.

Take an object like cartopol~ (or lp.c2p~). When the time to process
DSP comes around, both input vectors are full of data. It's not like
cartopol~ gets a vector full of y-data, then waits a bit, gets a
vector full of x-data, pumps out a vector full of phase data,
triggering objects down the line of the phase outlet, and then
finally calculates a vector full of amplitude data and sends that out
the left outelt. (This is starting to sound like Jack Nicholson's
'dirty joke' from Chinatown.)

What really happens is that when cartopol~ (or lp.c2p~) comes to
process DSP, it gets two vectors full of data. It fills up two output
vectors of data. Then MSP calls the next item in the DSP chain. All
output data is there. Nobody has any idea whether phase or amp was
filled first, or whether the calculations were interwoven, or
anything else. And it doesn't matter, because all objects down-chain
get all the data when their turn to process DSP comes up.

Obviously the processing is serialized inside each object, but the
processing is also completely encapsulated inside the object. So
other objects respond in a 'input happens at the same time' manner,
not in a right-to-left manner. This is significantly different from
the processing with plain-vanilla Max objects.

I can illustrate the above with working C code if you want.

-------------- http://www.bek.no/~pcastine/Litter/ -------------
Peter Castine +--> Litter Power & Litter Bundle for Jitter
Universal Binaries on the way
iCE: Sequencing, Recording &
Interface Building for |home | chez nous|
Max/MSP Extremely cool |bei uns | i nostri|
http://www.dspaudio.com/ http://www.castine.de

AlexHarker's icon

Quote: Peter Castine wrote on Sat, 16 December 2006 05:54

> Actually, it's _not_ incorrect.

There is a confusion here between those talking about DSP order of execution within an object (given a number of signal inputs/outputs to the object) and those talking about order of execution between DSP objects. Roman's comments seemed unclear as to which situation he was referring to. For others reading the thread, let's make the distinction clear.

Neither of us appears to be wrong. Peter, your comments relate to order of execution within an object (which is up to the programmer in question) - my comments relate to order of execution between objects (as in the poke~ / index~ cases cited).

Order of execution within an object shouldn't be an issue for end users. Order of execution between objects is an issue for max programmers in some cases.

No need for the examples - thanks

Regards

Alex

Luigi Castelli's icon

Hi there,

I find this thread very interesting...

At this point I have two questions:

1 - Talking about order of execution within an MSP
object, and given an MSP object with multiple outlets,
how is it up to the programmer to decide the output
order? What I mean is that whether or not your perform
method entirely processes one vector before another
doesn't change the actual final output order,
which - in my understanding - is something hard-wired
inside MSP that 3rd-party programmers have no access
to.

Code examples:

n = blksize;
while (n--) {
    *out0++ = 0.f;
    *out1++ = 1.f;
    *out2++ = 2.f;
    *out3++ = 3.f;
}

or

n = blksize;
while (n--) {
    *out3++ = 3.f;
    *out2++ = 2.f;
    *out1++ = 1.f;
    *out0++ = 0.f;
}

or even

n = blksize;
while (n--) {
    *out0++ = 0.f;
}
n = blksize;
while (n--) {
    *out1++ = 1.f;
}
n = blksize;
while (n--) {
    *out2++ = 2.f;
}
n = blksize;
while (n--) {
    *out3++ = 3.f;
}

In my understanding the above loops have no bearing on
the output order of the object. Can someone confirm
this or not?

2 - Talking about order of execution between objects
leads to the same result. Is there anything that a
programmer could do or not do to make this behaviour
more consistent? As far as I know it is still
something internal to the inner-workings of MSP,
however maybe I am overlooking something...

BTW, all of the above does NOT apply to plain Max
objects, where the programmer actually is 100%
responsible for output order.

Please someone comment on this.

Thank you.

- Luigi

--- Alex Harker wrote:

>
> Quote: Peter Castine wrote on Sat, 16 December 2006
> 05:54
>
> > Actually, it's _not_ incorrect.
>
> There is a confusion here between those talking
> about about DSP order of execution within an object
> (given a number of signal inputs/outputs to the
> object) and those talking about order of execution
> between DSP objects. Roman's comments seemed unclear
> as to which situation he was refering to. For others
> reading the thread let's make the distinction clear.
>
> Neither of us appear to be wrong. Peter, your
> comments relate to order of execution within an
> object (which is up to the programmer in question) -
> my comments relate to order of execution between
> objects(as in the poke~ index~ cases cited).
>
> Order of execution within an object shouldn't be an
> issue for end users. Order of execution between
> objects is an issue for max programmers in some
> cases.
>
> No need for the examples - thanks
>
> Regards
>
> Alex
>


Peter Castine's icon

On 16-Dec-2006, at 17:41, Luigi Castelli wrote:

> In my understanding the above loops have no bearing to
> the output order of the object. Can someone confirm
> this or not?

This is exactly the point I was trying to make.


Tarik's icon

It's very nice to see this whole discussion going on, 'cause I was feeling like an idiot for not knowing how signal order in MSP actually works. Unfortunately I still can't say anything sensible about this subject.

But I'd like to jump in again and post a little example of an extremely simple binshifting patch (yeah, way back in this thread we also discussed binshifting), which shows some strange behaviour.

When I shift the fft-signal one bin up (by simply using delay), the sound gets distorted. If I shift it up two bins, the sound is transposed nicely. If I shift it up three bins, it gets distorted again, and with four it sounds perfect again... and so on.

My question is: why??

To test it, save the following as "binshift.pat" (this is the pfft~ subpatch)

-----------------------------------------------------

Max Patch
Copy patch and select New From Clipboard in Max.

-----------------------------------------------------

And save the following as... well, as whatever you like:

-----------------------------------------------------

Max Patch
Copy patch and select New From Clipboard in Max.

---------------------------------------------------------

- Tb

Eric Lyon's icon

> So to answer Eric's question it is possible (sort-of) depending on exactly what you want to do.
> Partial bin shifts are also possible but more complex than integer ones.
> Per-peak processing is currently not possible in pure Max/MSP code,
> but a simple peak finder object would make it possible, although inefficient.
> I hope this kind of thing will become easier in the future in pfft~,
> but we need more specialised objects for frame-based operations.
>

This sounds great in theory, but I've never actually seen it done without some C intervention a la gizmo~. Could you show us a pfft~ patch that pitch-scales its input (i.e. multiplies each frequency in the spectrum by a user-manipulable constant) using only MaxMSP core objects?

Eric

Eric Lyon's icon

Sorry about the double post above. And of course not using gizmo~ in the patch - that would be cheating!

nathan wolek's icon

On Dec 16, 2006, at 1:21 PM, Peter Castine wrote:
> This is exactly the point I was trying to make.

And you are both right, MSP delivers them at the same time. But the
effect of Luigi's last example could be the *appearance* of out0
being ignorant of out1, out2, & out3, and so on.

It would only be an issue in feedback situations, which is a lot of
situations in computer music. But I don't need to tell either of you
that. Just recording it here for posterity. :)


nathan wolek's icon

On Dec 16, 2006, at 11:41 AM, Luigi Castelli wrote:
> Talking about order of execution between objects
> leads to the same result. Is there anything that a
> programmer could do or not do to make this behaviour
> more consistent?

Luigi;
The only thing I know is to force the user to connect outlet to inlet
a la [tapin~] & [tapout~]. Of course they use a non-MSP connection,
a technique that I believe is undocumented. What info are they
passing? Don't recall this in the SDK. Anyone else know how this is
done or where to find the info?

This thread is becoming ever more "dev".
--Nathan


AlexHarker's icon

Quote: Luigi Castelli wrote on Sat, 16 December 2006 09:41

> 1 - Talking about order of execution within an MSP
> object, and given an MSP object with multiple outlets,
> how is it up to the programmer to decide the output
> order?

It's not. I didn't mean to imply this. Output order and order of execution are not the same thing. Inside an object, the order in which code is executed (for instance, the order in which the outputs are calculated) is up to the programmer of the object, as appropriate to the situation. This has effectively no relevance to the end user (the Max/MSP programmer) as long as it performs the DSP correctly. In fact, this is exactly what your code examples demonstrate. Sorry for any confusion.

>What I mean is that whether in your perform
> method you entirely process a vector before the other,
> that doesn't change the actual final output order
> In my understanding the above loops have no bearing to
> the output order of the object. Can someone confirm
> this or not?

Yes - it doesn't matter; one can essentially assume that they are simultaneous. But it may matter to the programmer in cases where outputs/inputs have dependencies on one another.

> 2 - Talking about order of execution between objects
> leads to the same result. Is there anything that a
> programmer could do or not do to make this behaviour
> more consistent? As far as I know it is still
> something internal to the inner-workings of MSP,
> however maybe I am overlooking something...

I'm not sure whether one can control the consistency of this. My point is merely that it may matter in a feedback-related situation (as others have mentioned). In pfft~ this problem occurs if you try to use buffer~ objects to store data that relates to a frame - like phase-accumulating in cartesian geometry, using a buffer~ to store values from one frame to the next. In this case you would want to read out of the buffer before writing, in order to get the values from the previous frame. If you write first, then you'll be reading values that you've just written in, and it won't work (although, as the recent c74 tutorial on phase vocoders shows, a [send~] and [receive~] combination is a more reliable method in this instance, because it doesn't depend on object positioning on the screen).

With FFT operations, the main limitation of MSP is the difficulty of operating across a frame, or between frames (the paradigm of treating every sample the same, on which most MSP objects work, is not so appropriate in this context, because the position of a sample within the vector has meaning). Using buffer~s in this way may be useful, and hence it is important to understand whether or not you will incur the delay if you use this method.

As I reflect on this, though, I am not sure there is any need for it, given that send~/receive~ is more reliable, and that for non-delayed cases anything that could require a buffer~ (e.g. re-ordering within a frame) can be done with vectral~. I'd be happy to be corrected on any of this though.
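The read-before-write pattern for per-frame state might be sketched in C like this (illustrative names only, not Max objects - here a running per-bin phase accumulator of the kind mentioned above):

```c
/* Per-bin state held between FFT frames. The line order mirrors the
   patching issue: fetch last frame's value BEFORE storing this frame's. */
void accumulate_phase(const float *delta, /* per-bin phase increment, this frame */
                      float *accum,       /* persistent per-bin state (the "buffer~") */
                      float *out,         /* accumulated phase for resynthesis */
                      int nbins)
{
    int b;
    for (b = 0; b < nbins; b++) {
        float prev = accum[b];      /* read the previous frame's value first */
        out[b] = prev + delta[b];   /* use it... */
        accum[b] = out[b];          /* ...then write back for the next frame */
    }
}
```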

Regards

Alex

AlexHarker's icon

Quote: Eric Lyon wrote on Sat, 16 December 2006 12:10

> This sounds great in theory, but I've never actually seen it done without some C intervention a la gizmo~. Could you show us a pfft~ patch that pitch-scales its input (i.e. multiplies each frequency in the spectrum by a user manipulable constant) using only MaxMSP core objects?
>
> Eric

Ummmm. No - sorry. Some people here are talking about by-bin operations (which CAN be done), some about operations on sinusoidal peaks (which excite several bins that need to be treated as a whole and kept in correct phase alignment for "correct" resynthesis). The latter CAN'T (as I understand it) be done.

What you're asking for can't be done with vanilla MSP (afaik) because to transpose properly, you need to be able to detect sinusoidal peaks and then shift them, rather than operate on each bin separately. This isn't possible in MSP code. The other problem is that for fractional transpositions between 0 and 1 (or -1 and 1 if you allow negative frequencies), or warping where the peaks may overlap at the output, you need to be able to do a += operation on the output buffer - which I don't know how to do in MSP code (I don't think it can be done), as peaks will be pasted on top of one another in these cases.

What CAN currently be done in MSP code is transforms which take no account of sinusoidal peaks and instead operate on individual bins. I haven't done any work with this method, because you are messing up your data quite a lot by doing it, and you can't do a traditional, accurate transposition - although it might sound great as a bizarre frequency-warping effect. I could code something up, but it might take a while. However, I'm pretty confident that this is technically possible, even if hard to achieve.
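The by-bin approach described here can be sketched in a few lines of plain Java (names mine; magnitudes only, phases ignored, which is part of why the result is inexact):

```java
// Naive by-bin frequency warp: each bin's magnitude is moved to
// bin round(k * ratio). Peak structure and phase are ignored, which is
// exactly why this "messes up" the data rather than giving a clean
// transposition - but it may still be a usable warping effect.
public class BinWarp {
    public static double[] warp(double[] mags, double ratio) {
        double[] out = new double[mags.length];
        for (int k = 0; k < mags.length; k++) {
            int target = (int) Math.round(k * ratio);
            if (target >= 0 && target < out.length) {
                out[target] += mags[k]; // warped bins can collide, so accumulate
            }
        }
        return out;
    }
}
```

Note the += where warped bins collide: the very accumulation operation that is trivial in text code but hard to express with core MSP objects.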

My approach used to be to do everything for this kind of thing in C (I have an external for frequency warping in stereo, for instance, that works much like gizmo~), but I have come to realise more and more that MSP is not far off being able to do these kinds of things. We just need tools to do things like:

-- find peaks and give a reasonably accurate amplitude/frequency estimate (ideally I'd like to be able to do some calculations only once per peak for efficiency reasons, but this would require a variable-vector-size patcher~/poly~ style object, which is currently not possible, or an object taking expr-type arguments - which would actually be feasible)

-- += ops on a buffer (zeroing the buffer at the start of each frame)

-- find max/min vals in a frame

-- sum across a frame

etc.

These kinds of objects would make more fft-based operations available to the object-programming Max user. Maybe at some point I'll make some of them. Hopefully, though, we'll eventually get objects for this stuff in the main MSP distro.
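As a point of reference, the first item on that wish list (peak finding with a reasonably accurate estimate) is short in text code. A plain-Java sketch (all names mine) that locates local maxima in a magnitude spectrum and refines each with parabolic interpolation:

```java
// Find local maxima in a magnitude spectrum and refine each peak's
// position with parabolic interpolation over the three samples around
// the maximum. Returns fractional bin indices - the "reasonably
// accurate freq estimate" from the wish list above.
import java.util.ArrayList;
import java.util.List;

public class PeakFinder {
    public static List<Double> findPeaks(double[] mags, double threshold) {
        List<Double> peaks = new ArrayList<>();
        for (int k = 1; k < mags.length - 1; k++) {
            if (mags[k] > threshold && mags[k] > mags[k - 1] && mags[k] >= mags[k + 1]) {
                // Vertex of the parabola through (k-1,a), (k,b), (k+1,c)
                double a = mags[k - 1], b = mags[k], c = mags[k + 1];
                double denom = a - 2 * b + c;
                double offset = (denom == 0) ? 0 : 0.5 * (a - c) / denom;
                peaks.add(k + offset);
            }
        }
        return peaks;
    }
}
```

The fractional index times (sample rate / FFT size) gives the frequency estimate in Hz.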

I might code up the peak finder with expr-based per-peak calculation - that would get us pretty close to the frequency-based stuff - and maybe do a frame-sized buffer with a += input (I don't have time to get to know how to use buffer~ from an external right now). Then I could make a patch that does what you want. It would of course be using C (and hence cheating), but the point is that the task would be broken up into stages that could individually be modified or recombined within Max to change the process or design a new one, whereas something like gizmo~ does the whole thing inside an external without allowing the user to intervene.

Maybe not the answer you were looking for, but I'd be interested to hear your thoughts....

Alex

Eric Lyon's icon

> Maybe not the answer you were looking for, but I'd be interested
> to hear your thoughts....

Not the answer I was hoping for, but pretty much the answer I was expecting. Things could be much simplified if pfft~ were generalized to provide magnitude/frequency representation (see my LyonPotpourri external magfreq_analysis~) and oscillator bank resynthesis. F. R. Moore in Elements of Computer Music discusses this dual resynthesis option as a convenient way to effect arbitrary frequency transformations. The only drawback is that under many conditions, oscillator resynthesis is considerably more CPU-expensive than IFFT.
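For readers unfamiliar with the oscillator-bank option mentioned here: resynthesis reduces to accumulating one phase per partial and summing sines, which is why arbitrary per-partial frequency multipliers are trivial with this method. A toy plain-Java sketch (class name, signature and the single-static-frame simplification are all mine):

```java
// Toy oscillator-bank resynthesis: given per-partial amplitudes and
// frequencies (Hz), synthesize n samples by accumulating one phase per
// oscillator. Per-partial frequency manipulation is trivial - just
// scale freqs[k] before calling - which is the appeal over IFFT
// overlap-add; the cost is the inner loop over every partial per sample.
public class OscBank {
    public static double[] synth(double[] amps, double[] freqs,
                                 double sampleRate, int n) {
        double[] out = new double[n];
        double[] phase = new double[amps.length]; // running phase per partial
        for (int i = 0; i < n; i++) {
            for (int k = 0; k < amps.length; k++) {
                out[i] += amps[k] * Math.sin(phase[k]);
                phase[k] += 2 * Math.PI * freqs[k] / sampleRate;
            }
        }
        return out;
    }
}
```

The nested loop is exactly the CPU expense mentioned above: cost grows with partial count times sample count, whereas an IFFT is a fixed cost per frame.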

In any case, the consensus here seems to be that while pfft~ has covered some distance, it has not quite crossed the finish line.

Eric

Luigi Castelli's icon

Ok, I am clear now. Thanks Alex.

Intuitively I would agree that the send~/receive~ combination should take care of any ordering problem, since it automatically introduces a one-vector delay. However I haven't actually tried it, so I don't really know...

The other solution that Nathan suggested appeals to me greatly as well. Using the same technique as tapin~/tapout~ might open new possibilities, not only because it solves the FFT buffer reading/writing problem, but also because it allows writing externals that can use the memory buffer that tapin~ provides. As a matter of fact, a while ago I started writing a granular object called graintap~, with the intent of doing exactly this: reading from tapin~ and granulating the tapin~ buffer in real time. The only problem is that I later realized that the technique to access tapin~ is not documented, and an attempt to ask the C74 folks to share some tapin~/tapout~ code failed miserably. I would greatly appreciate it if anybody who knew how to access the tapin~ memory buffer could share the technique... or - even better - if the tapout~ code were included in the next SDK release...

Ciao.

- Luigi

--- Alex Harker wrote:

>
> Quote: Luigi Castelli wrote on Sat, 16 December 2006
> 09:41
>
> > 1 - Talking about order of execution within an MSP
> > object, and given an MSP object with multiple
> outlets,
> > how is it up to the programmer to decide the
> output
> > order?
>
> It's not. I didn't mean to imply this. Output order
> and order of execution are not the same thing.
> Inside an object the order that code is executed
> (for instance what order the outputs are calulated
> in) is up to the programmer of the object, as
> appropriate to the situation. This has effectively
> no relevance to the end-user (max/msp programmer) as
> long as it performs the dsp correctly. In fact this
> is exactly what your code examples demonstrate.
> Sorry for any confusion.
>
> >What I mean is that whether in your perform
> > method you entirely process a vector before the
> other,
> > that doesn't change the actual final output order
> > In my understanding the above loops have no
> bearing to
> > the output order of the object. Can someone
> confirm
> > this or not?
>
> Yes, it doesn't matter, one can essentially assume
> that they are simultaneous, but it may matter to the
> programmer in cases where outputs/inputs have
> dependencies on one another.
>
> > 2 - Talking about order of execution between
> objects
> > leads to the same result. Is there anything that a
> > programmer could do or not do to make this
> behaviour
> > more consistent? As far as I know it is still
> > something internal to the inner-workings of MSP,
> > however maybe I am overlooking something...
>
> I'm not sure about whether one can control the
> consistency of this. My point is merely that it may
> matter in a feedback-related situation (as others
> have mentioned). In pfft~ this problem occurs if you
> try to use buffer~ objects for storing data that
> relates to a frame - like phase-accumulating in
> cartesian geometry using a buffer~ to store vals
> from one frame to the next. In this case you would
> want to read out of the buffer before writing in
> order to get the values from the previous frame. If
> you write first then you'll be reading values that
> you've just written in, and it won't work (although
> as the recent c74 tutorial on phase vocoders shows a
> [send~] and [receive~] combination is a more
> reliable method in this instance, because it doesn't
> depend on object positioning on the screen). As with
> fft operations the main limitation of msp is the
> difficulty of operating across a frame, or between
> frames (the paradigm of treating each sample as the
> same on which most msp objects
> work is not so appropriate in this context because
> the position of the sample within the vector has
> meaning) using buffer~s in this way may be useful,
> and hence it is important to understand whether you
> will incur the delay or not if you use this method.
>
> As I reflect on this though I am not sure that there
> is any need for this given that send~ and receive~
> is more reliable, and for non-delayed examples
> anything that could require a buffer~ (eg.
> re-ordering within a frame) can be done with
> vectral~. I'd be happy to be corrected on any of
> this though.
>
> Regards
>
> Alex
>


Eric Lyon's icon

> The other solution that Nathan suggested appeals to me
> greatly as well. Using the same technique as
> tapin~/tapout~ might open new possibilities not only
> because it solves the FFT buffer reading/writing
> problem, but also because allows to write externals
> that are able to use the memory buffer that tapin~
> provides. As as matter of fact a while ago I started
> writing a granular object called graintap~, with the
> intent of doing exactly this: reading from tapin~ and
> granulating the tapin~ buffer in real time.
> The only problem is that I later realized that the
> technique to access tapin~ is not documented and an
> attempt to ask the C74 folks to share some
> tapin~/tapout~ code miserably failed.
> I would greatly appreciate if anybody who knew how to
> access the tapin~ memory buffer could share the
> technique.... or - even better - if the tapout~ code
> was included in the next SDK release....
>

Hi Luigi,

I'm curious as to what benefit you might get by accessing a tapin~ buffer, rather than just accessing a normal MaxMSP buffer allocated with buffer~. I agree that it would be interesting to see the code to tapin~ and tapout~. Perhaps they bear some vestigial resemblance to Pd's tabwrite~/tabread~ (or not)?

Eric

PhiDjee's icon

Hi,

This is one of the most interesting threads I've read in several years.
Thanks for sharing such erudition!

And while reading, a question came to me about a new plug I'm writing:
Inside a [pfft~], is there a significant difference between [gizmo~]
and [fbinshift~] for pitch-shifting in the frequency domain?
A qualitative difference in the resultant transposed signal?
A difference in speed and stability?

All insights welcome!

(I'm running the current 4.6.2 on a quad G5)

Kind regards,
Philippe Gruchet

Roman Thilenius's icon

> Hi Luigi,
>
> I'm curious as to what benefit you might get by accessing a tapin~ buffer, rather than just accessing a normal MaxMSP buffer allocated with buffer~. I agree that it would be interesting to see the code to tapin~ and tapout~. Perhaps they bear some vestigial resemblance to Pd's tabwrite~/tabread~ (or not)?
>
> Eric

yeah, the idea to bypass the runtime signal environment
is tempting, but what would it help? (in the given case)

in the case of tapin~/tapout~ both of these objects have
at least on one side a connection to the "same sample at
the same time" world.

nathan wolek's icon

On Dec 16, 2006, at 8:15 PM, Luigi Castelli wrote:
> I would greatly appreciate if anybody who knew how to
> access the tapin~ memory buffer could share the
> technique.... or - even better - if the tapout~ code
> was included in the next SDK release....

I'll second that request. The reason I made the suggestion in the
first place was that I have been considering similar functionality
for a future external project. Example code would be very helpful.

-------------------
Nathan Wolek, PhD --- nwolek@stetson.edu
Assistant Professor of Music Technology
Stetson University - DeLand, FL
http://www.nathanwolek.com

Jean-Francois Charles's icon
Eric Lyon's icon

>
>
> yeah, the idea to bypass the runtime signal environment
> is tempting, but what would it help? (in the given case)
>

I'm not sure I understand this. How does tapin~/tapout~ bypass the runtime environment in a way that buffer~/groove~ does not?

> in the case of tapin~/tapout~ both of these objects have
> at least on one side a connection to the "same sample at
> the same time" world.
>
>

Actually I think this is not the case, since there is an enforced feedback delay, the size of a signal vector. Or are you saying something else here?

Eric
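The enforced one-vector feedback delay can be made concrete in plain Java (a toy block-processing delay line, names mine): because the writer commits a whole signal vector at a time, the earliest a feedback reader can see that data is on the next vector.

```java
// Why block-based feedback (tapin~/tapout~ style) has a minimum delay
// of one signal vector: memory is written a whole vector at a time, so
// a reader can only see the block committed on the PREVIOUS call.
public class VectorDelay {
    private final float[] mem;
    private int writePos = 0;

    public VectorDelay(int size) { mem = new float[size]; }

    // Process one signal vector: read the block written one call ago,
    // then write the current input block.
    public float[] processVector(float[] in) {
        float[] out = new float[in.length];
        int readPos = Math.floorMod(writePos - in.length, mem.length);
        for (int i = 0; i < in.length; i++) {
            out[i] = mem[(readPos + i) % mem.length];
            mem[(writePos + i) % mem.length] = in[i];
        }
        writePos = (writePos + in.length) % mem.length;
        return out;
    }
}
```

The first call returns silence; every subsequent call returns the input from exactly one call (one vector) earlier, no matter how short a delay is requested.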

Eric Lyon's icon

> I'll second that request. The reason I made the suggestion in the
> first place was due to consideration I have given to a similar-styled
> functionality for a future external project. Example code would be
> very helpful.
>

Hi Nathan,

If you don't mind my asking, what would you hope to gain from access to tapin~ memory that you could not get from accessing buffer~ memory?

Eric

nathan wolek's icon

On Dec 17, 2006, at 3:18 PM, Eric Lyon wrote:
> If you don't mind my asking, what would you hope to gain from
> access to tapin~ memory that you could not get from accessing
> buffer~ memory?

Eric:
Well, I have gotten a little frustrated with all the connections that
are necessary for multiple voices in my granular toolkit. You
currently have to connect the signals for parameters to every
[grain.*~] object. So I want to address this in a 2.0 upgrade that I
am currently in the *early* stages of thinking about.

One thing that I have considered would involve a central [grainhub~]
object to which you would connect your algorithms for durations,
sampling increment, etc. Then like the tap twins, you would connect
a single Max cord to any voices that should use its incoming data.

[buffer~] would not quite work for the way I envision it, although
the named-variable passing may be an improvement over hard-wired
patching.

The other alternative is to make the [grain.*~] objects polyphonic.
But then you lose the ability to route different grains to different
places.

Anyway, I would love to hear your (or anybody's) thoughts on this
scheme. I hope I have explained it well enough.

--Nathan

-----
Nathan Wolek
nw@nathanwolek.com
http://www.nathanwolek.com

Luigi Castelli's icon

Well, first of all I think that - as a Max/MSP user - anything you can do with tapin~/tapout~ can also be done with buffer~ and some combination of objects specific to your case, such as record~, wave~, groove~, etc... and even send~ and receive~ if you need feedback.

However I often look for the easiest, quickest and most efficient way to perform a task. When I say easier I also mean that it allows me to think the least.

- When I set up a buffer~ object, I have to think about the buffer~ name, the size of the buffer, the number of channels, then I have to instantiate a record~ object, think about whether I want to loop or not, etc... With tapin~ I tell it the size of the memory buffer and connect the output of my sound source to its inlet. Done. Feedback is a few steps away too, with no need for send~ or receive~ objects. So it's definitely easier and probably more efficient too.

- Also there would be a graphical element that could
potentially make a patcher easier to read. When you
see many objects connected to a tapin~ you can easily
guess that they work together. With buffer~ you have
no patchcord connection between objects sharing the
same buffer~, only a name reference.

- At a code level, I am not crazy about the
interleaved nature of the buffer~ interface, which, by
the way, doesn't really allow for any optimization to
be performed by the reading object, especially if you
want to use cubic interpolation.

- Finally I think it's always good to have more than one way of doing something. So expanding my programming options by learning more about the inner workings of my all-time favorite program is always fun ;-)
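The point above about interleaved access is concrete enough to sketch: a 4-point cubic read from an interleaved buffer touches memory with a stride of nChans, as in this plain-Java illustration (names and the particular cubic formula are my choice; buffer~'s actual internals may differ):

```java
// Cubic (4-point) interpolated read from an interleaved multichannel
// buffer. The stride of nChans between successive samples of one
// channel is what makes per-channel optimization awkward compared
// with flat mono storage.
public class InterleavedRead {
    public static double cubicRead(float[] buf, int nChans, int chan,
                                   double index) {
        int nFrames = buf.length / nChans;
        int i1 = (int) index;
        double frac = index - i1;
        // Clamp the four neighbouring frames at the buffer edges
        int i0 = Math.max(i1 - 1, 0);
        int i2 = Math.min(i1 + 1, nFrames - 1);
        int i3 = Math.min(i1 + 2, nFrames - 1);
        // Strided reads: frame index * nChans + channel offset
        double y0 = buf[i0 * nChans + chan];
        double y1 = buf[i1 * nChans + chan];
        double y2 = buf[i2 * nChans + chan];
        double y3 = buf[i3 * nChans + chan];
        // Standard 4-point, 3rd-order (Catmull-Rom style) interpolation
        double c0 = y1;
        double c1 = 0.5 * (y2 - y0);
        double c2 = y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3;
        double c3 = 0.5 * (y3 - y0) + 1.5 * (y1 - y2);
        return ((c3 * frac + c2) * frac + c1) * frac + c0;
    }
}
```

With a mono buffer the stride collapses to 1 and the four reads become contiguous, which is the optimization opportunity the interleaved layout forecloses.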

Having said all of this, I don't mean in any way that buffer~ is not useful or is an object that should be avoided. It's a great external without which much of the MSP functionality would not be possible. I am just underlining some of the potential reasons why someone like me would want to know more about the tapin~/tapout~ scheme.

My 2 cents.

Ciao.

- Luigi

--- Eric Lyon wrote:

>
> > I'll second that request. The reason I made the
> suggestion in the
> > first place was due to consideration I have given
> to a similar-styled
> > functionality for a future external project.
> Example code would be
> > very helpful.
> >
>
> Hi Nathan,
>
> If you don't mind my asking, what would you hope to
> gain from access to tapin~ memory that you could not
> get from accessing buffer~ memory?
>
> Eric
>
>


nathan wolek's icon

On Dec 17, 2006, at 3:13 PM, Luigi Castelli wrote:
> - Also there would be a graphical element that could
> potentially make a patcher easier to read.

This would be a big benefit in my hypothetical use of the tap twins
model.

-----
Nathan Wolek
nw@nathanwolek.com
http://www.nathanwolek.com

volker böhm's icon

On 17 Dec 2006, at 03:45, Philippe GRUCHET wrote:

> Inside a [pfft~], is there a significant difference between
> [gizmo~] versus [fbinshift~] for pitch-shifting a frequency domain?
> Qualitative difference in the resultant transposed signal?

hi philippe
not sure i'm following you here, but pitch-shifting (gizmo~) is not
the same as frequency-shifting (fbinshift~)
so you can't really compare the two quality-wise etc.
volker

Stefan Tiedje's icon

Peter Castine wrote:
> On 16-Dec-2006, at 17:41, Luigi Castelli wrote:
>
>> In my understanding the above loops have no bearing to
>> the output order of the object. Can someone confirm
>> this or not?
>
>
> This is exactly the point I was trying to make.

But the original question was about the patching side. And the
index~/poke~ example shows that the order does matter and has to
matter, and it's obviously (luckily) a question of position at the time
the DSP chain is constructed (audio switched on), and not, for example,
order of creation. But it seems reversed relative to the Max order.
Could someone point Joshua or David to the thread please? Those who
know the innards of MSP could shed some light on it; the rest is
doomed to speculation...

Stefan

--
Stefan Tiedje------------x-------
--_____-----------|--------------
--(_|_ ----|-----|-----()-------
-- _|_)----|-----()--------------
----------()--------www.ccmix.com

Eric Lyon's icon

>
> [buffer~] would not quite work for the way I envision it. Although
> the named variable passing may be an improvement over hard wired
> patching.
>
> The other alternative is to make the [grain.*~] objects polyphonic.
> But then you lose the ability to route different grains to different
> places.
>

Couldn't you have user-defined multiple outlets on a polyphonic grain generator and then route to as many different places as desired? Alternatively one could use multiple copies of the grain generator, driven by similar or identical data sets.

>
> Anyway, I would love to hear yours (or anybody's) thoughts on this
> scheme. I hope I have explained it well enough.
>

Have you tried doing granular synthesis using poly~ driven by JavaScript? It seems like it should be possible, but then again, I don't know what the overhead would be and if poly~ would be happy with 1000s of active voices.

Eric

Eric Lyon's icon

>
> However I often look at what is the easiest, quickest
> and most efficient way to do perform some task. When I
> say easier I also mean that allows me to think the
> least.
>

I totally agree with that design philosophy.

> - When I set up a buffer object, I have to think about
> the buffer~ name, the size of the buffer, the number
> of channels, then I have to instantiate a record~
> object, think if I want to loop or not, etc... With
> tapin~ I tell it the size of the memory buffer and
> connect the output of my sound source to its inlet.
> Done.

While I see your point, I think you overstate the attention overhead of using buffer~. Yes, you can specify all the above options, but if you just want a monophonic buffer of duration 1 second, you just say "buffer~ mybufname 1000" and you're done. Note also that, assuming you're going to write your own external to interact with buffer~, you don't have to worry about the intricacies of record~/groove~ either, just the idiosyncrasies of your own object :-) And you still have record~ and groove~ if you decide you want to use them after all.

> Feedback is a few steps away too, no need of
> send~ or receive~ objects. So it's definetely easier
> and probably more efficient too.
>

Now that's an interesting question - is tapin~/tapout~ more efficient than buffer~/whatever~? That might require a response from a c74 agent for a definitive answer.

> - Also there would be a graphical element that could
> potentially make a patcher easier to read. When you
> see many objects connected to a tapin~ you can easily
> guess that they work together. With buffer~ you have
> no patchcord connection between objects sharing the
> same buffer~, only a name reference.
>

By the same token, things could get confusing if you wanted to connect your tapin~ to other objects in subpatches. I find the name reference about equally helpful to the tapin~/tapout~ cable, visually, but that might be an artifact of writing too much C code.

> - At a code level, I am not crazy about the
> interleaved nature of the buffer~ interface, which, by
> the way, doesn't really allow for any optimization to
> be performed by the reading object, especially if you
> want to use cubic interpolation.
>

You could enforce mono buffers on your users. Again we'd have to hear from c74 if there is an efficiency difference between using a mono buffer and internal tapin~ memory.

> - Finally I think it's always good to have more than
> one way of doing something. So expanding your
> programming options by learning some more
> inner-workings of my all-time favorite program is
> always fun ;-)
>

No argument against that!

Eric

PhiDjee's icon

Hi Volker,

>> is there a significant difference between [gizmo~] versus
>> [fbinshift~] for pitch-shifting a frequency domain?
>> Qualitative difference in the resultant transposed signal?
> not sure I'm following you here, but pitch-shifting (gizmo~) is not
> the same as frequency-shifting (fbinshift~)

About processing an FFT, you're right, of course ;-)
Well, my approach here is more musical than technical.

> so you can't really compare the two quality-wise etc.

My question is about what you think of these two objects for
a real-time (no audible latency) 'transposition' purpose
(in frequency or pitch, or both).
Comparing some subjective results ("the two quality-wise") and
technical ones (speed and stability) from several users could be useful.

In my case, the best pitch-shifter (a plug-in) I was able to test
didn't process an incoming polyphonic & homophonic audio stream in
real time. (I started with Apple's internal AUPitch in several
AU hosts.)
On the other hand, the fastest plug-in I tried was almost usable
in real time but *very* noisy. No future for that one!
So I decided to work with gizmo~ and fbinshift~.

I'm still listening for your... any point of view ;-)

Cheers,
Philippe

volker böhm's icon
Stefan Tiedje's icon

Eric Lyon wrote:
> Could you show us a pfft~ patch that pitch-scales its input (i.e.
> multiplies each frequency in the spectrum by a user manipulable
> constant) using only MaxMSP core objects?

Do you count mxj~ as MaxMSP core?

Stefan

--
Stefan Tiedje------------x-------
--_____-----------|--------------
--(_|_ ----|-----|-----()-------
-- _|_)----|-----()--------------
----------()--------www.ccmix.com

Stefan Tiedje's icon

Nathan Wolek wrote:
> The other alternative is to make the [grain.*~] objects polyphonic.
> But then you lose the ability to route different grains to different
> places.

Why? I do that all the time, the multichannel distribution is just part
of the voice...

Stefan

--
Stefan Tiedje------------x-------
--_____-----------|--------------
--(_|_ ----|-----|-----()-------
-- _|_)----|-----()--------------
----------()--------www.ccmix.com

Eric Lyon's icon

> > Could you show us a pfft~ patch that pitch-scales its input (i.e.
> > multiplies each frequency in the spectrum by a user manipulable
> > constant) using only MaxMSP core objects?
>
> Do you count mxj~ as MaxMSP core?
>

Definitely.

Eric

PhiDjee's icon

Hi,

One of us slightly changed the name of this topic inside the same thread, instead of creating a new topic.
I'm the guilty party, thinking that my question had its place here.

I apologize for the disturbance to the forum readers.
Philippe

PhiDjee's icon

>> MSP46ReferenceManual, page 169, "freqshift In left inlet".
>
> hm, am still using 4.5.6 - so no freqshifting with gizmo~ for me,
> i guess...

I just switched to 4.6.2 last week and I still have 4.5.7 installed.
You're right, the 'freqshift' message to the left inlet of gizmo~ doesn't exist in 4.5.7.
I can tell you that the two versions of Max work well on the same machine, alternately if needed, or even together!

Bye,
Philippe

AlexHarker's icon

Quote: Eric Lyon wrote on Mon, 18 December 2006 04:44
----------------------------------------------------
> > > Could you show us a pfft~ patch that pitch-scales its input (i.e.
> > > multiplies each frequency in the spectrum by a user manipulable
> > > constant) using only MaxMSP core objects?
> >
> > Do you count mxj~ as MaxMSP core?
> >
>
> Definitely.
>
> Eric
>
----------------------------------------------------

Oh if only this email had come earlier everything might have all gone very differently........

Anyway, after reconsidering the problem, I became more and more convinced that there must be a way to implement a gizmo~ equivalent (at least the basic transposition bit) in core MSP objects. I didn't consider mxj~ to be an option, simply because I already knew it was possible using mxj~ - you can do all the loop and maths stuff in Java if you can do it in C.

At this point my question to Eric is: what's your interest in an MSP-only implementation - is it just portability, or is it accessibility to more users? Obviously mxj~ provides the first of these, although the second is more problematic: even though the API is simpler, DSP code in text is still going to be too much for many users.

Back to the problem. To cut a long story short, I then spent a long train journey, which would otherwise have been quite dull (or at least dull in a different way), coding an MSP-only solution with no mxj~. It's presented below (sorry - no comments or explanation included). However, it's pretty hopelessly inefficient (it takes 20% CPU on my MBP and only runs with an i/o vector size of > 512, compared to gizmo~'s 3-4%), so it's of almost no practical value - except to demonstrate that it can be done, and for the use of a few interesting MSP techniques to 'cheat' things that are not easy to do in MSP code.

Of most relevance to this discussion is a trick using poly~ to force the desired order of execution within a DSP chain. By encapsulating various poke~ and index~ objects within individual poly~s, I figured out that it would be possible to chain them together with dummy in~s and out~s in the desired order; because each one has its own internal dsp chain, this forces the order to be that in which they are connected (as with normal individual MSP objects). So this is a viable solution for anyone who wishes to force a certain order of execution to make a patch work. After doing this, the position of the poly~ objects on the screen becomes irrelevant to the correct functioning of the patch (I wasn't actually able to get correct results using positioning alone).

So, the patch is below. I'll make a second post with a revised version using an mxj~ class to do the least efficient bit of this version, including the source for the Java class.

Sorry about the many separate patches - it was the only way.....

I fear I have sunk to new levels of geekdom.....

>>>>> Save as whatever you like and load as a pfft~ with any fft size you like (2048 or 4096 recommended) and with an overlap of 4 - audio goes in the left inlet, the transposition multiple in the right <<<<<<<<

Max Patch
Copy patch and select New From Clipboard in Max.

>>>>>>>> Save as AccumBuf.poly in the same folder <<<<<<<<<<
////////////////////////////////////////////////////////////

Max Patch
Copy patch and select New From Clipboard in Max.

>>>>>>>> Save as FindFirstPeak in the same folder <<<<<<<<<<
////////////////////////////////////////////////////////////

Max Patch
Copy patch and select New From Clipboard in Max.

>>>>>>>> Save as FindPeakStarts in the same folder <<<<<<<<<<
////////////////////////////////////////////////////////////

Max Patch
Copy patch and select New From Clipboard in Max.

>>>>>>>> Save as ReadAmps in the same folder <<<<<<<<<<
////////////////////////////////////////////////////////////

Max Patch
Copy patch and select New From Clipboard in Max.

>>>>>>>> Save as ReadPeaks in the same folder <<<<<<<<<<
////////////////////////////////////////////////////////////

Max Patch
Copy patch and select New From Clipboard in Max.

>>>>>>>> Save as StoreAmps in the same folder <<<<<<<<<<
////////////////////////////////////////////////////////////

Max Patch
Copy patch and select New From Clipboard in Max.

>>>>>>>> Save as StorePeakCors in the same folder <<<<<<<<<<
////////////////////////////////////////////////////////////

Max Patch
Copy patch and select New From Clipboard in Max.

>>>>>>>> Save as ZeroPeakBuf in the same folder <<<<<<<<<<
////////////////////////////////////////////////////////////

Max Patch
Copy patch and select New From Clipboard in Max.

/////////////////////////////////////////////////////////////

Alright - that's the lot.

AlexHarker's icon

As promised, here is the revised pfft~ patch along with the Java code for the mxj~ class (called SumOut), as mentioned in the previous post.

You still need all the other files in the same directory; you just use this patch as the pfft~ patch (same deal - any fft size, though 2048/4096 are probably best, and an overlap of 4, or more, but you'll be lucky to get that running and may need to adjust i/o vector sizes to avoid CPU spiking). You'll also need to compile the mxj~ class on your machine.

I think the most efficient way of doing this with MSP objects only might be to do it all in mxj~, although this is just a hunch, and I'm not going to try it. There are definitely parts of the MSP code that could be done more efficiently in Java, because redundant calculations could be avoided.

Alex

>>>>>>>>>>>>>>> The revised pfft patch <<<<<<<<<<<<<<<<<<<<

Max Patch
Copy patch and select New From Clipboard in Max.

>>>>>>>>>>>>>>>>> The Java class code. You'll need to compile it as SumOut (no tilde btw) <<<<<<<<<<<<<<<<<

import com.cycling74.max.*;
import com.cycling74.msp.*;

public class SumOut extends MSPPerformer
{
    public SumOut()
    {
        declareInlets(new int[]{SIGNAL, SIGNAL});
        declareOutlets(new int[]{SIGNAL});

        setInletAssist(new String[]{
            "Value (sig)", "Index (sig)"});
        setOutletAssist(new String[]{
            "Output (sig)"});
    }

    public void dspsetup(MSPSignal[] ins, MSPSignal[] outs)
    {
        // Nothing to initialise per DSP chain.
    }

    public void perform(MSPSignal[] ins, MSPSignal[] outs)
    {
        float[] in = ins[0].vec;    // bin values
        float[] in2 = ins[1].vec;   // fractional destination indices
        float[] out = outs[0].vec;
        int lower;
        float bit;
        float interp;

        // Clear the output vector before accumulating.
        for (int i = 0; i < out.length; i++)
            out[i] = 0;

        // Scatter each input value to its (fractional) destination index,
        // splitting it linearly between the two neighbouring bins.
        for (int i = 0; i < in.length; i++)
        {
            lower = (int) in2[i];       // lower bin
            bit = in2[i] - lower;       // fractional part
            interp = in[i] * bit;       // share for the upper bin
            if (lower >= 0 && lower < out.length)
                out[lower] += in[i] - interp;
            lower++;                    // upper bin
            if (lower >= 0 && lower < out.length)
                out[lower] += interp;
        }
    }
}
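For anyone who wants to sanity-check the interpolation logic outside Max, here is a minimal standalone sketch of the linear scatter-add that the perform routine is meant to do. The class and method names are mine for illustration only, not part of any mxj API:

```java
import java.util.Arrays;

public class SumOutSketch {
    // Scatter each value in[i] into the output at the (possibly fractional)
    // index idx[i], split linearly between the two neighbouring integer bins.
    static float[] scatterAdd(float[] in, float[] idx) {
        float[] out = new float[in.length];
        for (int i = 0; i < in.length; i++) {
            int lower = (int) idx[i];      // lower bin
            float frac = idx[i] - lower;   // fractional part
            if (lower >= 0 && lower < out.length)
                out[lower] += in[i] * (1 - frac);
            if (lower + 1 >= 0 && lower + 1 < out.length)
                out[lower + 1] += in[i] * frac;
        }
        return out;
    }

    public static void main(String[] args) {
        // A bin value of 1.0 aimed at index 2.25 should land
        // 0.75 in bin 2 and 0.25 in bin 3.
        float[] out = scatterAdd(new float[]{1f, 0f, 0f, 0f},
                                 new float[]{2.25f, 0f, 0f, 0f});
        System.out.println(Arrays.toString(out)); // prints [0.0, 0.0, 0.75, 0.25]
    }
}
```

This is the same trick gizmo~-style bin remapping relies on: energy from a source bin is shared between the two destination bins nearest its scaled position, so that non-integer transposition factors don't simply drop energy.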

Eric Lyon's icon

> At this point my question to Eric is: what's your interest in a
> MSP-only implementation - is it just portability, or is it
> accessiblity to more users? Obviously mxj~ provides the first
> of these, although the second is more problematic, as although
> the API is simpler DSP code in text is still going to be too
> much for many users

Two parts to this. Even though I develop and share my own externals, I admit that third-party externals make me nervous. For example, I've noticed some third-party externals distributed with expiration dates, and with others you never know how soon they'll be recompiled when MaxMSP moves to a new format. I think I've mitigated that in my own externals by open-sourcing them, but I still think it's better when the functionality is inside MaxMSP itself. For one thing, c74 generally provides a level of documentation for their objects that you'd have to pay me to provide for mine :-)

Second part is that if there is a commonly accepted MaxMSP way to solve a problem, it's more likely to propagate than special solutions via third-party externals. The important exception is when a third-party external solves a problem that c74 has left open, such that it becomes the de facto community solution.

>
> Back to the problem. To cut a long story short, I then spent a
> long train journey which would have otherwise been quite dull
> (or at least dull in a different way) coding an MSP only
> solution with no mxj~.

This looks very cool - a virtuoso effort. Unfortunately, I could not quite get sound out of it. It appears there are a few missing connections - see, for example, "FindPeakStarts" as it regenerates from the code you posted.

In any case, the complexity of your (truly impressive) MaxMSP-style solution convinces me that, at least so far, the best solution is still a custom external like gizmo~ or my pvoc~. Sort of like how the American colonists, when they gave up on trying to figure out the British tax code, decided it was simpler to just overthrow the government :)

Cheers,
Eric

nathan wolek's icon

On Dec 18, 2006, at 12:16 AM, Eric Lyon wrote:
> Have you tried doing granular synthesis using poly~ driven by
> JavaScript?

But then I couldn't get sample accurate triggering. And you know all
about that right? ;)

-----
Nathan Wolek
nw@nathanwolek.com
http://www.nathanwolek.com

Jean-Francois Charles's icon
AlexHarker's icon

Quote: Eric Lyon wrote on Mon, 18 December 2006 17:17

> Two parts to this. Even though I develop and share my externals, I admit that third party externals make me nervous.

> Second part is that if there is a commonly accepted MaxMSP way to solve a problem, it's more likely to propagate than special solutions via third party externals.

These are fair points...

> Unfortunately I could not quite get sound out of it. It appears that there are a few missing connections, for example see "FindPeakStarts" as it generates from the code you posted.

This code all regenerates fine for me. (The in~ / out~ objects with no connections are there for the order-of-execution hack - they do actually do something, but don't need to be connected.)

The only problem I had was that the main pfft patch misconnected a couple of outlets from a poly~ when it wasn't present. So it may be worth making the directory containing the downloaded poly~ current and pasting the pfft patch again from text into an empty window. Save it as Transpose.pfft and use the wrapper patcher below, which should default to passing audio through unchanged; change the number box to change the transposition.

Alex

Max Patch
Copy patch and select New From Clipboard in Max.

> In any case the complexity of your (truly impressive) MaxMSP style solution convinces me that at least so far, the best solution is still a custom external like gizmo~ or my pvoc~. Sort of like how the American colonialists when they gave up on trying to figure out the British tax code decided it was simpler to just overthrow the government :)
>
> Cheers,
> Eric
>
----------------------------------------------------

Eric Lyon's icon

> > Have you tried doing granular synthesis using poly~ driven by
> > JavaScript?
>
> But then I couldn't get sample accurate triggering. And you know all
> about that right? ;)
>
>

Owww! Hoist by my own petard. But perhaps one day JavaScript will implement signal inlets and outlets. Never hurts to start planning ahead :=)

Eric

Eric Lyon's icon

> The only problem I had was that the main pfft patch
> misconnected a couple of outlets from a poly when it wasn't
> present, so it may be worth a quick try of making the directory
> with the downloaded poly in current and pasting the pfft patch
> again from text into an empty window. Save it as Transpose.pfft
> and use the wrapper patcher below, which should default to
> passing audio through unchanged - change the numberbox to
> change transposition.
>
>

Thanks for the follow-up. What you say makes sense, but even with the patch now looking good, still no joy. Maybe my ancient TiBook is cursed. It seems every time I pick it up another piece falls off. Anyone else have better luck?

Eric

Jean-Francois Charles's icon
gusanomaxlist's icon

all (zip and unzipped) versions work fine on winXP ...

_y.

Eric Lyon's icon

> I include three pfft patches with a testing wrapper patch
> (Transpose TEST) which will need to be altered to load each one
> (defaults to Transpose2.pfft):
>
> Transpose2.pfft - is the original MSP only version
> Transpose3.pfft - the revised version sent to the list
> Transpose4.pfft - IS A NEW PRACTICAL VERSION
>

3 & 4 work fine. 2 did not work for me. Very impressive! BTW the unconnected gizmo~ in the parent patch appears to be superfluous. Is it there for a reason?

Thanks,
Eric

AlexHarker's icon

Quote: Eric Lyon wrote on Wed, 20 December 2006 03:04

> 3 & 4 work fine. 2 did not work for me. Very impressive!

Cheers. My question to you G4 PowerBook guys is: are you using 4.5.x? My suspicion is that, if so, the order of execution in the index~/poke~ situation changed between 4.5 and 4.6, and that the one ordering I did not force with poly~ (zeroing the output buffer BEFORE summing the output bins) is executing in the opposite order on 4.5, hence the silent output. This is just a hunch; as previously mentioned, the patch worked fine for a friend on a G4 PowerBook with Max 4.6. If this is the case, it essentially (for me) points to the fact that DSP order of execution must be considered unpredictable unless a clear order dependency is present (the output of one object connected to the input of another).
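The failure mode being described can be modelled outside Max. Here is a toy Java sketch (names are mine, purely illustrative) of two independent stages sharing one output buffer: a "zero" stage and an "accumulate" stage. If the scheduler happens to run the zeroing stage after the summing stage, the buffer is wiped every vector and the output is silence:

```java
import java.util.Arrays;

public class OrderDemo {
    static float[] buf = new float[4];

    static void zeroStage() { Arrays.fill(buf, 0f); }                     // clear buffer

    static void sumStage()  { for (int i = 0; i < buf.length; i++) buf[i] += 1f; } // add bins

    // Simulate one signal vector with the two stages in a given order.
    static float[] run(boolean zeroFirst) {
        Arrays.fill(buf, 0.5f);                  // stale contents from the previous vector
        if (zeroFirst) { zeroStage(); sumStage(); }
        else           { sumStage(); zeroStage(); }
        return buf.clone();
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(run(true)));  // prints [1.0, 1.0, 1.0, 1.0]
        System.out.println(Arrays.toString(run(false))); // prints [0.0, 0.0, 0.0, 0.0] - "silence"
    }
}
```

In the patch, the only defence against this is forcing the dependency explicitly, e.g. routing a signal from the zeroing subpatch into the summing one so the DSP chain has no choice about the order.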

If not, then sorry - no clue... It's not like it's useful as a patch anyway, though - more a technical exercise.

>BTW the unconnected gizmo~ in the parent patch appears to be superfluous. Is it there for a reason?

Quick access to the help patch, so I could do an auditory comparison while debugging the code. I also wanted to compare artefacts: many c74 objects use double rather than float values internally, so I thought gizmo~ might do a slightly better job sound-wise (gizmo~ also has some other cool features that I didn't implement). However, I can't say I could hear much difference in the end. I left it in so others could judge for themselves.
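The float-versus-double point is easy to demonstrate in isolation. Here is a small standalone sketch (illustrative only; nothing Max-specific) that accumulates the same increment in single and double precision, the way a phase accumulator inside a spectral object might, and compares the drift from the exact result:

```java
public class PrecisionDemo {
    // Sum n copies of 0.01 in single and double precision; returns
    // {floatError, doubleError} measured against the exact result n * 0.01.
    static double[] drift(int n) {
        float sumF = 0f;
        double sumD = 0.0;
        for (int i = 0; i < n; i++) {
            sumF += 0.01f;  // single-precision accumulator
            sumD += 0.01;   // double-precision accumulator
        }
        double exact = n * 0.01;
        return new double[]{Math.abs(sumF - exact), Math.abs(sumD - exact)};
    }

    public static void main(String[] args) {
        double[] err = drift(100000);
        System.out.println("float error:  " + err[0]);  // noticeable drift
        System.out.println("double error: " + err[1]);  // negligible drift
    }
}
```

After 100,000 additions the float accumulator is off by a clearly audible-scale amount relative to the double one, which is one plausible reason a double-internal object like gizmo~ could sound marginally cleaner than a float-based reimplementation.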

Alex

Eric Lyon's icon

> My question to you G4 powerbook guys is - are you using
> 4.5.x ?

Nope, latest and greatest 4.6.x.

> If this is the case it essentially (for me) points to the fact
> that dsp order of execution must be considered unpredictable,
> unless a clear order dependency is present (output of one
> object to the input of another).
>

Yes, that's how I understand the situation. Again, *very* nice work.

Eric