## Jump after first value when using Scale object exponentially?

Nov 15, 2012 at 6:52pm


In a live performance patch of mine I use a lot of scale objects to scale data coming from MIDI controllers, and I often want to do that exponentially. However, scale behaves strangely there: there's a (relatively) big jump from the first value to the second, after which it progresses normally, as you'd expect. For example, if I have an object like this: "scale 0 127 0. 80. 1.04", the output for 0 is 0, for 1 it's 0.571336, for 2: 0.594189, for 3: 0.617956, etc. I.e., it jumps ~0.57 going from 0 to 1, after which it proceeds in smaller (at first approx. 0.02) increments. (A simple example patch is at the end of the post.)

Is this a bug or what? What would you suggest as a (simple/elegant) workaround to get a smoothly scaled exponential output (with various steepnesses)? I've been using sel to single out the first value and then subtracting the offset from the others to get a smooth response, but this is obviously a pretty ridiculous thing to keep on doing. I suppose I could do pretty much the same with expr, but I'm not very fluent with it – what kind of expression would it have to be?

– Pasted Max Patch –
Nov 15, 2012 at 6:59pm

To scale as you describe, try [expr pow($i1/127)*80., 1.04].

Brendan

Nov 16, 2012 at 10:51am

Thanks Brendan! That one doesn't work as written, though: it gives me the error "illegal comma" and the object stays red. I researched around a bit, and I think what you mean is [expr pow(($i1/127.),1.04)*80.]? However, with that the response is almost linear, and in order to get something close to an exponent of 1.04 with the "classic" scale behaviour you'd need an exponent of ~3.5.

This expression gives exactly the same response as using scale with the "Classic exponential compatibility mode" turned OFF. Neither of them has the problem of the jump in the beginning, but they actually give quite a different kind of response from the "classic" scale one. Unfortunately I prefer the classic scale response for my purposes.

Does anyone have suggestions for the simplest/most CPU-light way (I'll have a lot of these in my patch) of getting a response that's similar to the classic scale behaviour, just without the jump in the beginning? A different expr expression?

I made a small patch that demonstrates the different responses of the classic scale behaviour and the new one. The first preset also shows the jump in the classic scale behaviour very clearly when using an exponent of 1.02: the output jumps from 0 to 6.6 when the input goes from 0 to 1 (when the maximum is 80 and the next value is 6.73)! This seems really strange, a bug I'd say, so I wonder why it still hasn't been fixed, considering that it is, after all, the default state (and a lot of people don't even know about the classic attribute)…
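For reference, the corrected expression from the post above can be sketched outside Max (here in Python, as a hypothetical stand-in for the expr object) to see both properties at once: the first step from 0 to 1 is tiny rather than a jump, but with an exponent this close to 1 the curve is nearly a straight line.

```python
def scale_pow(x, exponent=1.04, in_max=127.0, out_max=80.0):
    """Sketch of [expr pow(($i1/127.),1.04)*80.]:
    map x in [0, in_max] to [0, out_max] via a plain power curve."""
    return (x / in_max) ** exponent * out_max

# First step is small (~0.52, vs. the ~0.57 jump reported for classic scale),
# and at mid-range the curve sits only slightly below the linear value 40.
first_step = scale_pow(1)
mid = scale_pow(63.5)
```

This matches the observation in the thread: jump-free, but much shallower than the classic response for the same exponent.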

– Pasted Max Patch –
Nov 16, 2012 at 1:44pm

Here are some thoughts.

– Pasted Max Patch –
Nov 20, 2012 at 3:30pm

Thanks for that jvkr, that’s very interesting and useful! (And sorry for the late reply!)

I don't fully understand what's going on in the [expr (pow($f2,($f1/127))-1)/($f2-1)*80] object, although maybe I don't need to fully understand it in order to use it! :) Using dbtoa with zmap and scale is a good idea (as are ftom and mtof)!

With the way you've used [cpuclock], the higher the value, the more CPU the process uses, right? Meaning that looking up the transfer curve from a buffer is, in fact, by far the most CPU-efficient method? I need to study the abstraction example a bit more, but all in all, very fascinating stuff!

Nov 20, 2012 at 5:12pm

The expression is a variation on the function y = pow(n, x) (n to the power of x). Regardless of the value of n, this function always passes through the points (0;1) and (1;n). After subtracting 1 and dividing by (n-1), these points become (0;0) and (1;1). From there it is easy to scale from any input range to any output range.
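That derivation can be written out as a small function, here sketched in Python (assuming the 0–127 input and 0–80 output ranges used earlier in the thread; the parameter names are mine):

```python
def exp_scale(x, base, in_max=127.0, out_max=80.0):
    """Normalized exponential map, as in
    [expr (pow($f2,($f1/127))-1)/($f2-1)*80]:
    base**t passes through (0;1) and (1;base); subtracting 1 and
    dividing by (base-1) normalizes it to (0;0) and (1;1), so the
    output hits 0 and out_max exactly, with no jump at the first step."""
    t = x / in_max
    return (base ** t - 1.0) / (base - 1.0) * out_max

# A larger base gives a steeper curve; the endpoints stay pinned:
lo = exp_scale(0, 2.0)      # exactly 0
hi = exp_scale(127, 2.0)    # exactly 80
mid = exp_scale(63.5, 4.0)  # sqrt(4) = 2, so (2-1)/3 * 80
```

The endpoints being pinned by construction is exactly why this version has no jump: the curve itself passes through zero instead of being offset from it.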

Higher values do indeed mean more CPU cycles. Storing stuff in buffers is something I started doing with the earliest versions of SuperCollider, when I realized that (at the time) a rather large amount of CPU was wasted on calculating the same values over and over again.
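The table-lookup idea can be sketched like this (Python standing in for a [buffer~] filled once at load time; 128 entries and the curve parameters are assumptions matching the MIDI use case in this thread):

```python
TABLE_SIZE = 128  # one entry per possible MIDI value
BASE = 4.0        # curve steepness, chosen arbitrarily for the sketch
OUT_MAX = 80.0

# Precompute the transfer curve once (analogous to writing a buffer~
# when the patch loads); afterwards each incoming MIDI value costs
# only an array index, not a pow() call per event.
table = [
    (BASE ** (i / (TABLE_SIZE - 1)) - 1.0) / (BASE - 1.0) * OUT_MAX
    for i in range(TABLE_SIZE)
]

def lookup(midi_value):
    """Per-event cost: one bounds-free list index (like peek~/index~)."""
    return table[midi_value]
```

The shape of the curve no longer affects per-event cost, which is the point made later in the thread: however expensive the function used to fill the table, the lookup stays cheap.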

Nov 21, 2012 at 1:14pm

Thanks for the info jvkr. Would you normally write the buffers as sound files and read these when the patch is opened, or always generate the buffers anew when the patch is loaded (with an abstraction and patcherargs as in the example you provided)?

One good thing about using a buffer for this is that it doesn't matter (CPU-wise) what kind of method is used to write the buffer. For example, since curves were introduced to the function object in Max 6.07, the function object is now very convenient for creating transfer curves for mapping – especially for S-curves and other shapes that get slightly more complex with expr and the like. Reading values from the function object seems to take about as much CPU as the multislider object, but if the values are then stored in a buffer, it doesn't matter. For example, the inverted S-shape in the example below is something I end up using frequently.

– Pasted Max Patch –
Nov 21, 2012 at 1:56pm

In my case I would generate the data upon load. Some time is of course lost doing this, but it is the simplest approach. Another advantage of using buffers for storing curves is that they can then also be used in the signal domain.

– Pasted Max Patch –

I hadn't even noticed the curved functions, thanks for pointing that out.

