
Scaling minimum, maximum, AND mean?

August 19, 2012 | 2:49 am

Is there a way to scale min/mean/max using the scale object’s exponential scaling? As in, the min/max working like they normally do, but the mean being scaled as well (so it would be an exponential/logarithmic mapping).

Or perhaps doing it in straight expr?

The intended purpose is to normalize two databases (where I have min/max/mean for each database) to give a better mapping for concatenative synthesis.


August 20, 2012 | 4:15 am

OK, I figured out how to do this using linear scaling, but it’s a bit clunky. It would be ideal if it went smoothly between the three values.

Check it:

– Pasted Max Patch, click to expand. –


August 20, 2012 | 4:20 pm

That’s a super handy thread (I have a saved version of that file that I always reference when scaling stuff) but I want to stay external/framework free, at least for number crunching stuff.

From the looks of it your objects let you specify log/exp/pow stuff, but I’d still need to crunch the math as to what kind of curve would fit the min/mean/max.


August 21, 2012 | 4:28 am

this uses quadratic remapping (could be simplified into a more compact patch with a few expr boxes, as the polynomial and matrix coeffs are mostly known, but I’ll let you do that)

– Pasted Max Patch, click to expand. –

August 21, 2012 | 4:59 am

That’s amazing!

The jitter stuff is way over my head (but good to look at/learn with). It does seem to act funny in extreme mean settings (i.e. near the min or max). It very easily goes outside the bounds and backwards (as opposed to "from min, through mean, to max").


August 21, 2012 | 5:10 am

Rodrigo, I just realised I made a mistake; I’ll fix it and repost in a couple of minutes.


August 21, 2012 | 5:43 am

OK this works–
It’s scaling input (‘map from’) linearly from 0-1, then using the mean (which will be somewhere between 0 and 1) as a quadratic weight so the mean is always mapped to 0.5, then this is quadratically remapped to the ‘map to’ values so its mean is mapped from 0.5. Uses matrix ops to solve the quadratic mapping equations.

– Pasted Max Patch, click to expand. –
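For reference outside Max, here is a minimal numpy sketch of the two-stage method described above. The function names and the example values are illustrative, not taken from the patch; np.polyfit stands in for the patch’s matrix ops.

    import numpy as np

    def quad_through(p0, p1, p2):
        # coefficients of the parabola through three (x, y) points,
        # solved via the 3x3 Vandermonde system (what the matrix ops do)
        xs, ys = zip(p0, p1, p2)
        return np.polyfit(xs, ys, 2)

    def two_stage_remap(x, in_min, in_mean, in_max, out_min, out_mean, out_max):
        t = (x - in_min) / (in_max - in_min)        # 1) linear scale to 0..1
        m = (in_mean - in_min) / (in_max - in_min)  # normalised input mean
        # 2) quadratic weight so the mean lands on 0.5
        w = np.polyval(quad_through((0.0, 0.0), (m, 0.5), (1.0, 1.0)), t)
        # 3) quadratic map so 0.5 lands on the output mean
        return np.polyval(quad_through((0.0, out_min), (0.5, out_mean), (1.0, out_max)), w)

    print(two_stage_remap(570.0, 0.0, 570.0, 12000.0, 0.0, 0.5, 1.0))  # -> 0.5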

August 21, 2012 | 2:04 pm

Hmm.

My math isn’t great so it’s quite possible that I’m not understanding something correctly, but this still seems to freak out with extreme mean values. I’m doing audio analysis stuff and quite often the mean might be considerably closer to one extreme than the other (i.e. for frequency I might have 0. 2500. 18000.). With this kind of mapping it starts going backwards right away, then doubles back on itself (kind of breaking that range) and/or shoots way past the max.

Here’s a screencap of what I mean, as well as a crude drawing of what my understanding of the expected curve would be.

http://rodrigoconstanzo.com/sensors/mapping.jpg

So it would always stay within the constraints, and always increase (or decrease) in direction.
Though as I mentioned, I’m quite likely not understanding what that means in terms of math/curves.


August 21, 2012 | 2:28 pm

It’s the limitation of the approach– if you want the mapping to be constrained to the upper and lower bounds while accommodating ‘extreme’ or close to the edge mean values, you’d need to use higher order polynomials, like cubic or higher instead of what I’ve shown, which is quadratic. Basically you’d need to constrain the curves so their gradients are zero at the edge conditions (max and min).

If I get a chance I’ll try and figure it out. In the meantime you could try this as a workaround: first remap the frequencies logarithmically (i.e. freq to MIDI), then put it through my method and remap the result back to linear again. That approach may well be more meaningful/appropriate anyway.
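For anyone prototyping this outside Max, a quick sketch of that workaround, assuming the standard ftom/mtof formulas (A440 = MIDI note 69):

    import math

    def ftom(f):
        # frequency to MIDI; note ftom(0) is -infinity
        # (see the 0 Hz discussion later in the thread)
        return 69.0 + 12.0 * math.log2(f / 440.0)

    def mtof(m):
        return 440.0 * 2.0 ** ((m - 69.0) / 12.0)

    # remap in pitch space instead of frequency space, where scale() is
    # whatever min/mean/max mapping you end up using:
    # y = mtof(scale(ftom(f)))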


August 22, 2012 | 2:42 am

I tried higher order polynomials and they’re actually worse– a good illustration of my maths limitations, as I couldn’t figure out the correct constraints. I suspect it’s not possible, or if it is it would be impractical/inappropriate.

But I did try the freq to pitch conversion and that actually gives you a lot more flexibility, hopefully a practical amount (I suspect so).

– Pasted Max Patch, click to expand. –

August 22, 2012 | 7:25 am

That (with the ftom) does seem like it would work in most cases; it’s just a matter of considering whether the general improvement is worth the possible esoteric failure.

I could also try to figure out at what ratio it breaks and if the ratio is met it sends the input down the linear/split scaling path instead. A bit clunky but it would cover most applications and still not break in extremes.

I’ve also not used jitter in a scaling capacity before so I don’t know if there will be a CPU hit doing this every 20ms or so.

Woah, ftom definitely doesn’t like low numbers! (0, 50, 100 looks crazy!)


August 22, 2012 | 7:30 am

Looks like with a linear setup a ratio of 1:4 seems to be the breaking point (0, 25, 100, or 0, 75, 100). Should be easy enough to test for that and map accordingly. I’ll try making an abstraction with this later that ‘black boxes’ everything and see how that goes.
(In my patch I send the min/mean/max as a one time list and go from there).


August 22, 2012 | 7:43 am

Hi,

higher order polynomials will always tend to move ‘less smoothly’ (of course, not in the mathematical sense), especially between two subsequent control points. What you have to understand here is that since you want to create a mapping with 3 fixed points, you won’t have a linear solution anyway. If you want something more-or-less ‘smooth’, you could either implement a piecewise linear mapping (one linear mapping between min-avg and another for the avg-max portion), in which case your mapping won’t be differentiable at the ‘avg’ point (hence, it will feel a bit weird), or you should use a piecewise smooth interpolation, like splines or Bézier curves.

HTH,
Ádám
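A minimal sketch of the piecewise linear option in Python (assuming sane inputs with min < mean < max; the names are illustrative):

    def piecewise_linear(x, in_min, in_mean, in_max, out_min, out_mean, out_max):
        # one linear segment from min to mean, another from mean to max;
        # continuous at the mean, but the slope jumps there (not differentiable)
        if x <= in_mean:
            t = (x - in_min) / (in_mean - in_min)
            return out_min + t * (out_mean - out_min)
        t = (x - in_mean) / (in_max - in_mean)
        return out_mean + t * (out_max - out_mean)

This is the same computation as the [if] (or [split]) plus two [scale] objects discussed below.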


August 22, 2012 | 8:54 am

How do you do that (with scaling)?


August 22, 2012 | 9:51 am

Piecewise linear scaling is pretty straightforward:

– Pasted Max Patch, click to expand. –

(The actual functionality consists of an [if] and two [scale] objects; the rest is just some self-promotion to show how you could compute the min, max and avg values easily with some objects that I developed, which can be accessed at http://www.sadam.hu/software).

With spline interpolation, the thing is more tricky. You’ll have to implement a spline formula for that: http://en.wikipedia.org/wiki/Spline_curve

HTH,
Ádám


August 22, 2012 | 9:58 am

Ah right, yeah that’s what I’m doing in my second post but using split instead of if. It was spline I was curious about. In my head (which isn’t filled with math) I pictured the min/mean/max being a matter of calculating the exponential curve that would plot the mean between the min and max.


August 22, 2012 | 11:01 am

OK, here’s a solution for quadratic interpolation using Lagrange polynomials. If you want to keep the avg(old) -> avg(new) mapping, this is the smoothest thing you could have. If you’d lose the avg(old) -> avg(new) constraint, you could implement a quadratic Bézier curve, which would look quite smooth, but it wouldn’t exactly match the avg(old) = avg(new) criterion.

– Pasted Max Patch, click to expand. –

HTH,
Ádám
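For reference, quadratic Lagrange interpolation through the three value pairs can be written directly. A sketch; variable names are illustrative:

    def lagrange3(x, xs, ys):
        # the unique parabola through (x1,y1), (x2,y2), (x3,y3)
        (x1, x2, x3), (y1, y2, y3) = xs, ys
        l1 = (x - x2) * (x - x3) / ((x1 - x2) * (x1 - x3))
        l2 = (x - x1) * (x - x3) / ((x2 - x1) * (x2 - x3))
        l3 = (x - x1) * (x - x2) / ((x3 - x1) * (x3 - x2))
        return y1 * l1 + y2 * l2 + y3 * l3

    print(lagrange3(570.0, (0.0, 570.0, 12000.0), (0.0, 0.5, 1.0)))  # -> 0.5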


August 22, 2012 | 11:03 am

OK, I was wrong with the last patch. I’ll keep thinking on it a bit and post back.


August 22, 2012 | 1:45 pm

That’s very promising and sounds ideal!


August 26, 2012 | 3:08 pm

I’ve thought about maybe trying to do a 3 point interpolation kind of thing using the mean as the middle, then the standard deviation as the point on each side between the min and max, so it would be:

min, (mean – standard deviation), mean, (mean + standard deviation), max

And then using the [split] method to scale each one of those subsections, but I’m running into a weird problem, though I’m guessing it’s in relation to ‘inf’ values when used in calculations.

I’m using Alex Harker’s descriptors object to calculate my min/mean/max, and it does standard deviation too, which is awesome. So on a given audio file if I analyze pitch for min/mean/max, I get something like:
0. 570. 12000.

If I then analyze for standard deviation, for some files I get a value like 832., basically a value bigger than the difference between the min and the mean. I’m guessing this has to do with the fact that the object calculates 0. and ‘inf’ a lot, so the minimum isn’t really 0, or something like that.
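One way to guard against this (an assumption on my part, not something from the thread) is simply to clamp the two inner breakpoints before building the [split] chain:

    def breakpoints(lo, mean, hi, std):
        # clamp (mean - std) into [lo, mean] and (mean + std) into [mean, hi];
        # a zero-width segment (e.g. lo == mean - std after clamping) just never fires
        p1 = max(lo, mean - std)
        p2 = min(hi, mean + std)
        return [lo, p1, mean, p2, hi]

    print(breakpoints(0.0, 570.0, 12000.0, 832.0))  # -> [0.0, 0.0, 570.0, 1402.0, 12000.0]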


August 26, 2012 | 7:03 pm

Hi,

sorry for the late answer, I forgot to get back to this post. I made a new patch, this time with cubic spline interpolation. It’s a bit messy and not very optimised, but I made it in a hurry.

– Pasted Max Patch, click to expand. –

HTH,
Ádám
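For prototyping the same kind of curve outside Max, SciPy’s cubic spline gives a compact stand-in (a sketch, not the patch itself; the example values are the ‘extreme mean’ case discussed just below):

    import numpy as np
    from scipy.interpolate import CubicSpline

    x = [0.0, 80.0, 100.0]   # old min / mean / max
    y = [0.0, 10.0, 100.0]   # new min / mean / max
    cs = CubicSpline(x, y, bc_type='natural')
    # bc_type='clamped' would force zero first derivatives at both endpoints,
    # the variant tried later in the thread; either can still overshoot the bounds.
    print(cs(np.linspace(0.0, 100.0, 11)))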


August 26, 2012 | 7:29 pm

Hmm, unless there’s something I’m not getting, it appears to function exactly the same way as using regular split with linear interpolation.

– Pasted Max Patch, click to expand. –

August 26, 2012 | 10:44 pm

Obviously, there was a mistake. Here’s the fixed version:

– Pasted Max Patch, click to expand. –

HTH,
Ádám


August 26, 2012 | 10:51 pm

That’s nice/smooth!

It does seem to break when ‘extreme’ means are calculated (min 0, mean 80, max 100 to min 0, mean 10, max 100), like in Terry’s version, but I guess that is part of the math of this. It can’t be smooth while always staying within the constraints.

I can just try to figure out the ratio at which the mean breaks, and when that’s the case use the linear interpolation, and when it’s not, use the spline. I think it would be fine most of the time; ‘not broken but sometimes not smooth’ is better than ‘sometimes broken but smooth’.


August 27, 2012 | 8:02 am

It’s smooth with the values you submitted, see this patch, where the multislider range is modified to [-100, 100]:

– Pasted Max Patch, click to expand. –

The key is that with those values AND wanting to keep it smooth, you must go to the negative range for some values.

Actually, there’s one way to keep it in the positive range, since one could constrain the spline to have pre-given derivatives at the endpoints (in this case, the obvious choice would be 0 at both endpoints). However, this would make the inner curve much less nice for most cases.

Best,
Ádám


August 27, 2012 | 8:38 am

OK, I was wrong. I tested that even by prescribing the vanishing first derivatives at the endpoints, you’d still get negative values. I assume there’s not much more to do here, then…

– Pasted Max Patch, click to expand. –

Best,
Ádám


August 27, 2012 | 11:17 am

It’s not the negative numbers that bother me, as this would be used to scale dBs (-180 to 0) as well as frequency (20 to 20000); it’s more that it can leave the constraints of the min/max. I’m using it to normalize two databases, so if it leaves those constraints it’s asking for things that don’t exist, as the min/max are absolute.


August 28, 2012 | 4:48 am

Have revisited this and come up with what you want: two piecewise parabola segments, with a common gradient at the means:

– Pasted Max Patch, click to expand. –

It won’t go past the output bounds and is smooth. Hope it’s what you want.
T


August 28, 2012 | 10:47 am

Man that’s awesome. The plotting really helps visualize it too.

And as you said, it’s exactly what I’m looking for.

On first loading of numbers it spams this:

jit.la.inverse: error type unspecified
jit.la.inverse: error type unspecified
jit.la.inverse: error type unspecified
jit.la.inverse: error type unspecified
jit.la.inverse: error type unspecified
jit.la.inverse: error type unspecified

Though it seems to work fine. I tried adding float32 to each inverse, but then it complains about dim types (I think that’s what it said).


August 28, 2012 | 11:30 am

With the initial values of zero the matrix operators won’t work; try loadbanging some realistic values like 0, 5, 10 into the min, mean, and max boxes before trying to input your data.

Also, you could probably optimize it a lot; the patch as it stands does a lot of redundant calculations. I’ll leave that up to you as it depends on your own application.

BTW you won’t need to mess with the jitter objects’ arguments; that’s not the problem with those error messages.

And another thing: I’d still recommend you convert from freq to MIDI before mapping and then back again afterwards. Even though 0 Hz cannot be mapped properly (zero Hz = negative infinity as a MIDI note), you could get around that by adding 20 Hz to the data and then subtracting it again at the end (I think). Besides, you can’t hear zero Hz anyway…


August 28, 2012 | 12:03 pm

That’s exactly what I did (unpack a list of values into the minmeanmax area).

Looking through it there’s a ton of [t b f]ing of the initial value, so it calculates immediately. I’ve disconnected all [b]‘s as I don’t need to change the min/mean/max dynamically but am still getting one error.

– Pasted Max Patch, click to expand. –

I’ve done ftom for my pitch stuff (not initially, but I changed over recently). I just threw a [clip 0. 140.] after the ftom, otherwise it spams the impossibly low MIDI note as you mentioned.


August 28, 2012 | 12:43 pm

That single error occurs in the graphing part of the patch, so you might have to remove a trigger object to fix it (it occurs in the ‘segmentA’ subpatch inside the graphGen subpatch). I think those errors are benign anyway.


January 9, 2013 | 5:14 pm

This thread seems to solve a question I was seeking the answer to but a lot of it is over my head maths-wise.

I am wanting to scale values in the range 0. to 1. so that input 0.5 results in the mean value of a minmaxmean set being output, with other inputs scaled smoothly on either side towards the min and max of the set. This seems to do that (and more besides), correct? Given that my left hand min mean max set is always going to be 0., 0.5 and 1. I guess I should simplify the patch.

Anyway thanks everyone on this thread and thanks MAX forum for being awesome!


January 9, 2013 | 5:34 pm

This does do that, but in extreme/unusual cases it does jump outside of 0. and 1., depending on the settings.

For my usage that complicated things, as I may have 0.1 0.2 0.8 as my minmeanmax.


January 10, 2013 | 4:52 pm

Hey, thanks for the heads up. I’ll mess around with this tomorrow and see if I can get a handle on the situations where it screws up.
Did you build a work around or exception for those situations? How did you cope with them?


January 10, 2013 | 7:04 pm

I’ve just gone back to using the split scaling method unfortunately. It’s "safest" under all uses, even though it’s certainly not the smoothest.


January 10, 2013 | 11:08 pm

I have to say Rodrigo, I don’t know what your concerns are– if you put meaningful values into that patch I made, you *will* get a smooth scaling and interpolation between values that stay within the constraints, without discontinuities. If you put a mean value in that is above the max or below the min, then it breaks, but that’s because it is meaningless to have a mean outside of the min and max.


January 11, 2013 | 6:16 am

Right. I think I saved the wrong one, as I went to implement this recently (I was in the middle of building up the rest of the patch when this thread was originally made) and it broke when I tried funny values. Loading up the last one you posted, it works gangbusters!

My bad. (and thanks again!)


January 11, 2013 | 9:53 am

OK excellent Rodrigo, glad it’s of use–

@Liam "Given that my left hand min mean max set is always going to be 0., 0.5 and 1. I guess I should simplify the patch"
–I’m sure you could simplify it a great deal if 3 of the 8 variables are fixed.


January 11, 2013 | 5:48 pm

Yeah it is tidied up, color coded, labelled, and ready for abstraction.

I still can’t seem to get rid of the:
jit.la.inverse: error type unspecified

error.

– Pasted Max Patch, click to expand. –

January 11, 2013 | 7:32 pm

It also just occurred to me, does this kind of jitter stuff gobble up CPU? When I’m using this section of the patch, there’s numbers flying through this around every 20ms or so (and there’s a ton of these in my patch…). Not new minmeanmax values but the ‘value to be scaled’ values.


January 12, 2013 | 12:50 am

Wow, very tidy! Nice colours! It looks like you could get rid of a few trigger objects and put in some swaps here and there, otherwise it looks about as optimised as it can get. Maybe use a CPU timer to see how much time it takes up. If you really wanted to optimise, I suppose you could modify the patch to generate a lookup table…


January 12, 2013 | 1:33 am

Hehe. I had some time to kill earlier and went to town!
What do you mean by ‘swaps’?

That could work for a lookup table as I never change the minmeanmax on the fly. It’s once per instance basically.

So for the lookup table version would I just Uzi in values every 0.01 and then interpolate on output? I’ve never made/used a table for something as mathy as this.
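The lookup-table idea in sketch form: sample the (expensive) mapping once per min/mean/max set, then answer queries by linear interpolation. Here `mapping` stands in for whichever curve you settle on; names and the table size are illustrative:

    import numpy as np

    def make_table(mapping, in_min, in_max, size=1024):
        xs = np.linspace(in_min, in_max, size)
        return xs, np.array([mapping(x) for x in xs])

    def lookup(x, xs, ys):
        return np.interp(x, xs, ys)  # linear interpolation between samples

    # xs, ys = make_table(my_curve, 0.0, 12000.0)
    # y = lookup(570.0, xs, ys)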


January 12, 2013 | 2:25 am

OK, I finally got sick of this thread (in the positive way), so I implemented a solution from scratch, with no jitter objects, only using [expr]-s to compute the coefficients, so there should be no CPU trouble (it requires a single expr for evaluation and 7 nested expr-s to compute the coefficients, and the latter are only needed when the constraints change).

The idea was taken from Terry’s patch: my solution also gives two parabola segments, where the two segments join each other at the (old mean; new mean) point in a way that their first derivatives would match at that point.

However, the mathematical problem is that we need 6 constraints to solve this problem (3 for each parabolic segment), but Terry’s solution gives us only 5: two points for each parabola plus the condition on the first derivatives at the junction.

In my solution, the 6th constraint is that the mapping can’t decrease (so that if x2 > x1, then the mapped y2 is expected to be bigger than, or actually not smaller than, y1). Based on this, my patch computes the allowed minimum and maximum values for the ‘a’ parameter of the first parabola segment (the equation is y = a*x^2 + b*x + c), and after computing the allowed range for ‘a’, it chooses an actual value inside that range. However, if you don’t like the proposed value, the patch will let you change it! So you can experiment with altering the inclination of the interpolation.

Enjoy!
Ádám

– Pasted Max Patch, click to expand. –

P.S. In my patch, x1, x2 and x3 mean the old min, avg and max values (respectively), while y1, y2 and y3 are the new min, avg and max values (respectively).

EDIT: I found a silly bug which I corrected.


January 12, 2013 | 3:18 am

Here’s another version of the previous patch. The only difference is that I’ve changed the 6th constraint so that now, instead of setting the ‘a’ parameter of the first parabolic section (which is somewhat abstract), you can set the inclination/gradient at the junction of the two segments. This is somewhat more user-friendly, I guess. The rest is just the same.

Hope that helps,
Ádám

– Pasted Max Patch, click to expand. –

January 12, 2013 | 11:28 am

Well Adam’s method takes about 0.6 microseconds on my computer to calculate a new result and my patch takes about 0.9, both a lot less than 20 milliseconds.

@Rodrigo, I took out some redundant trigger objects which seem to stop the error messages (and probably shaved off a few tenths of microsecs). I wouldn’t worry about creating a lookup table.

@Adam, calculating the correct gradient at the common point was the challenge as I remember it. The way I did it was as a function of the horizontal distance of the common point from the line y=x, arranged so that if the distance is zero the gradient is 1, and if the distance is +/-1 the gradient is zero. The simplest curve to connect those 3 points is an inverted parabola, and it seems to produce a gradient so that neither resulting segment goes out of its bounds, given any input value. So yes it is arbitrary, but seems to work ok.
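As I read that description (so treat this as an assumption, not the patch itself), with both axes normalised to 0..1 the rule would look something like:

    def grad_at_mean(in_min, in_mean, in_max, out_min, out_mean, out_max):
        u = (in_mean - in_min) / (in_max - in_min)      # normalised old mean
        v = (out_mean - out_min) / (out_max - out_min)  # normalised new mean
        d = u - v                                       # horizontal distance from y = x
        return 1.0 - d * d  # inverted parabola: gradient 1 at d = 0, 0 at d = +/-1

    print(grad_at_mean(0.0, 50.0, 100.0, 0.0, 50.0, 100.0))  # -> 1.0 (identity case)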

here’s a (slightly modified) version (some trigger objects taken out)

T

– Pasted Max Patch, click to expand. –

January 12, 2013 | 12:48 pm

Hi Terry,

By only requiring that the gradient of the two segments be the same at the junction, you get just five constraints (the unknowns are a1, b1, c1, a2, b2, c2; the x and y values are user-given):

(1) a1*x1^2 + b1*x1 + c1 = y1 (constraint for the first data point of the first segment)
(2) a1*x2^2 + b1*x2 + c1 = y2 (constraint for the second data point of the first segment)
(3) a2*x2^2 + b2*x2 + c2 = y2 (constraint for the first data point of the second segment)
(4) a2*x3^2 + b2*x3 + c2 = y3 (constraint for the second data point of the second segment)
(5) 2*a1*x2 + b1 = 2*a2*x2 + b2 (constraint for the first derivatives at the junction)

What you did in your case (if I understand it well) was to prescribe the actual value of the first derivative (that is, to set the value in constraint (5) to some explicit value, let’s call it d, computed by your patch). Note that this system of equations can be solved for any arbitrary value of d; however, for some values of d the solution won’t be monotonically increasing (meaning the interpolated values could decrease while you are actually increasing the input values). Of course, there are situations where there’s no way to make the interpolation monotonically increasing (for example, if y3 is smaller than y2), but here I was supposing that Rodrigo is using realistic values, so that the scaled max value (y3 in my notation) won’t be smaller than the scaled mean value (y2).

To guarantee the ‘monotonic increasingness’ of the result, we have to make sure that the derivatives at our control points are non-negative:

(6) 2*a1*x1 + b1 >= 0
(7) 2*a1*x2 + b1 >= 0 (this is equivalent to d >= 0)
(8) 2*a2*x3 + b2 >= 0

Note that, because of (5), we could have written (7) in the form 2*a2*x2 + b2 >= 0 as well.

After doing the math, we can rewrite (6)-(8) in the following form:

(9) d >= 0
(10) d <= 2 * (y3-y2)/(x3-x2)
(11) d <= 2 * (y2-y1)/(x2-x1)

To obtain (9)-(11), I actually had to assume x1 <= x2 <= x3 and y1 <= y2 <= y3 as well, which means that the user input is ‘realistic’ (so that the mean is not smaller than the minimum etc). Otherwise, one would need to take some absolute values here and there.

To summarize, if you pick any d that is non-negative but not bigger than either of the right-hand sides of (10) and (11), you’ll be fine. So, if your patch is choosing a gradient within these limits, then your solution is fine.

In my case, my patch first computes the constraints (10) and (11) and proposes a value for the user (actually, my proposed value is min(10,11)/2, where 10 and 11 mean the right-hand sides of the respective formulae). However, if the user doesn’t like the shape of the interpolation, then she/he can freely modify d within its allowed range. Then I compute the values a1, b1, c1, a2, b2 and c2 based on equations (1)-(4) and the following versions of the original equation (5):

(5a) 2*a1*x2 + b1 = d
(5b) 2*a2*x2 + b2 = d

This can be solved, since there are 6 variables and 6 linear equations for them; the solution is straightforward from this point:

a1 = ( d - (y2-y1)/(x2-x1) ) / ( x2-x1 )
a2 = ( d - (y2-y3)/(x2-x3) ) / ( x2-x3 )
b1 = (y2-y1)/(x2-x1) - a1*(x2+x1)
b2 = (y2-y3)/(x2-x3) - a2*(x2+x3)
c1 = y2 - (a1*x2^2+b1*x2)
c2 = y2 - (a2*x2^2+b2*x2)
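Transcribed into Python, a direct sketch of the formulas above, using the post’s notation:

    def solve_segments(x1, x2, x3, y1, y2, y3, d=None):
        # allowed range for d from (9)-(11)
        d_max = min(2 * (y3 - y2) / (x3 - x2),   # right-hand side of (10)
                    2 * (y2 - y1) / (x2 - x1))   # right-hand side of (11)
        if d is None:
            d = d_max / 2.0                      # the patch's proposed min(10,11)/2
        a1 = (d - (y2 - y1) / (x2 - x1)) / (x2 - x1)
        a2 = (d - (y2 - y3) / (x2 - x3)) / (x2 - x3)
        b1 = (y2 - y1) / (x2 - x1) - a1 * (x2 + x1)
        b2 = (y2 - y3) / (x2 - x3) - a2 * (x2 + x3)
        c1 = y2 - (a1 * x2 ** 2 + b1 * x2)
        c2 = y2 - (a2 * x2 ** 2 + b2 * x2)
        return (a1, b1, c1), (a2, b2, c2)

    def remap(x, x1, x2, x3, y1, y2, y3, d=None):
        (a1, b1, c1), (a2, b2, c2) = solve_segments(x1, x2, x3, y1, y2, y3, d)
        a, b, c = (a1, b1, c1) if x <= x2 else (a2, b2, c2)
        return a * x ** 2 + b * x + c

    # identity input: the allowed range for d is [0, 2]; d = 1 reproduces the
    # straight line, as noted further down the thread
    print(remap(0.25, 0.0, 0.5, 1.0, 0.0, 0.5, 1.0, d=1.0))  # -> 0.25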

Cheers,
Ádám

P.S. I cleaned up the scaling section a bit. This patch is identical to the last one I posted, but the math formulae are more readable:

– Pasted Max Patch, click to expand. –


January 12, 2013 | 1:51 pm

Awesome patch, and great explanation!

All the values fed to this would be valid/real as they are coming from a preanalyzed database which generates these min/mean/max values automatically. The input values would also always stay inside that, but I like that the edge-flattening is baked in anyway.

One thing I don’t understand about this patch is why the inclination defaults to half of the maximum. In most (all) of the cases I tested, the maximum inclination ‘looks’ the best. As in, it appears to be the smoothest remapping available.

The fact that this only calculates the input values when given is great too.


January 12, 2013 | 2:21 pm

Here it is slightly reformatted (to take the initial list in "order") and trimmed of ‘fat’.

– Pasted Max Patch, click to expand. –

January 12, 2013 | 2:33 pm

"One thing I don’t understand about this patch is why the inclination defaults to half of the maximum. In most (all) of the cases I tested, the maximum inclination ‘looks’ the best. As in, it appears to be the smoothest remapping available."

There’s no explanation for that; as I said, between the allowed minimum and maximum any inclination value will be fine. If you like the maximum more, go for it. ;-)

I chose the half-maximum because, although in most cases the choice with the maximum looks the best, there are some scenarios when, at least for my taste, this was not the case. Try the ‘identity-scaling’ as an example (here x1=y1=0, x2=y2=0.5 and x3=y3=1). If you compute the allowed range for d, you will get [0,2]. However, to get linear scaling (which one would expect for such an input), you need to set d to 1. This led me to the half-maximum choice, but again, it’s just a question of your taste/expectations. If you like it more with the max, then use it. But remember that in that case you won’t get linear behaviour, even if your three value-pairs fit on a single line.

HTH,
Ádám


January 12, 2013 | 2:52 pm

Hi,

I modified the patch that pre-computes the inclination. Now d is set to half of the mean value of the two constraints. When the three control points fit on the same line, this will set d to 1; however, for most cases it gives an estimation that is much closer to the allowed maximum than my old patch, thus creating a much smoother interpolation.

Cheers,
Ádám

– Pasted Max Patch, click to expand. –
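The revised default in sketch form; the clamp back into the monotonic range is my assumption (the post only describes the half-of-the-mean rule):

    def default_d(x1, x2, x3, y1, y2, y3):
        b10 = 2 * (y3 - y2) / (x3 - x2)  # right-hand side of (10)
        b11 = 2 * (y2 - y1) / (x2 - x1)  # right-hand side of (11)
        d = 0.5 * (b10 + b11) / 2.0      # half of the mean of the two bounds
        return min(d, b10, b11)          # clamp (my assumption) to keep monotonicity

    print(default_d(0.0, 0.5, 1.0, 0.0, 0.5, 1.0))  # -> 1.0 (collinear case)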

January 12, 2013 | 3:26 pm

That looks buttery smooth.


January 16, 2013 | 4:27 pm

jeez guys! I have never felt quite like such a mathematical philistine :P I have included Terry’s scaling system in my patch for the minute as I didn’t realise this thread had continued being so excellent.

In very simple words of one syllable or less ;) can you explain to me the main differences between the two solutions? I would appreciate it muchly. Also, is there a performance trade-off between the two?

thanks
Cormac


January 16, 2013 | 4:50 pm

The main difference is computational. The last version doesn’t use jitter at all, and only calculates the minmeanmax when they are changed (rather than all the time). That D value thing is different too, though I don’t understand the maths enough to explain that bit. I do like it and have it cranked all the way.


January 17, 2013 | 1:40 am

Adam’s version is an ‘optimized’ version of mine, using a slightly different approach. It’s about 30% faster than mine. The "D value thing" is calculated automatically in mine to give the smoothest ‘elbow’ (the common gradient at the scaled mean-remapping point) without making the overall curve go out of bounds (always monotonic). If you wanted to get the best of both worlds you could replace Adam’s [compute_inclination] subpatch with my [gradCalc] subpatch.

(now it’s been prototyped it’s probably time to make a C external out of it… ;-)


January 17, 2013 | 8:49 am

More like time to make it into a native object!


January 17, 2013 | 11:37 am

Well, I’m actually thinking of adding the method to my [sadam.interpol] object. I discovered a couple of days ago that the piecewise linear interpolation of that object had a bug, so I’d need to make a new release fixing that bug anyway. I’d probably have time for this in Feb.

Cheers,
Ádám


September 1, 2013 | 5:39 am

Another gem of a thread… this deserves a bump. I learned quite a bit from these examples.

