# Bug report for the [scale] object

Hi list,

there’s a bug that causes the [scale] object to misbehave when scaling wide intervals nonlinearly (in the example below I try to scale the interval [4800, 7200] to [50, 6500]). When I start increasing the floating-point ‘Exponential base’ parameter, I first get some huge output values (see the values printed to the Max window), and then just ‘inf’.

So some internal variable clearly blows up there.
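The real [scale] internals aren’t public, but a plausible reconstruction of the failure mode (a sketch, assuming the base gets raised to the raw input offset rather than to a normalized 0–1 position) would look like this:

```python
import math

def blows_up(x, in_lo=4800.0, base=1.5):
    # Hypothetical: if the base is raised to the raw offset, the top of
    # a 2400-wide input range asks for base ** 2400, which overflows a
    # 64-bit double (C would return inf; Python raises instead).
    try:
        return base ** (x - in_lo)
    except OverflowError:
        return math.inf

def stays_sane(x, in_lo=4800.0, in_hi=7200.0, base=1.5):
    # Normalizing first keeps the exponent in [0, 1], so the result
    # never exceeds the base itself, whatever the input range.
    t = (x - in_lo) / (in_hi - in_lo)
    return base ** t

print(blows_up(7200.0))    # inf: 1.5 ** 2400 exceeds the largest double
print(stays_sane(7200.0))  # 1.5
```

This would also explain why the problem only shows up with big ranges: for a narrow input interval, base ** (x - in_lo) stays representable much longer.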

To reproduce, copy **all** of the following text. Then, in Max, select *New From Clipboard*.

```
----------begin_max5_patcher----------
682.3ocwVErbaBCE7L9qPCmc8HAFan25g9UzIiGAnXqTPhADItNS926CI.ab
wI.lw8BxZEHs69VI42WXYGJOxJrQeG8Kjk06KrrzPU.V08srSoGiRnE5WyNR
llxDJ6klwTriJM9OOlIEv.bZBJjVvZdgmkBkflxzuzOxggaFIiphNvE62kyh
TFJDfWgWhHd5mX2pFG.B8zESVA+jdxHNvvFXdrd1kgu7MhSyzKJS4hDlRSax
YPYopAEWA9whEUOVNP8KXuAqy+H+rbtPgdklTB14TjNAaTcftYqyTzN9+i1U
zvD1UZ2rRp+jwLxylWkYP1gTwd6NZZ3NTGCxaBFTPu9iSu9SCZwA4arXtRlq
csZTCRmxmI5ZZbunokeEzWYw6.d.ezNpRkyCKUlMeVsFNPCoh03I0KGrdogr
3NH8xK3qabyNkC30aMF+FnbnTbMVAeundgzH5.xbsEgF8aDFgWcyLx8EL1na
7BlPvveDACxDbDnPqjhap6taJ5Qgq0Racal+JAdgR1Nhi.lhRdNQBSxMUBLL
cT6yARxSgIrhNsUla5.lJ85MSnD6MBiwY9B8mRoYPnm3rEs1GigC1w3uv8l1
lfZCp9VD2ob2oauVj2rkctgEUdhWcDDB+46OVdts5tj6wk1N8SJbFwIEtymG
UDQgqXaSPHOLZiG9SNJcFhRtll0SwlH8ZSa9xnjdIrS3hq+SoZZVg206Jjk4
QMZtt9fbZIZLqPwETEGN7876.4bzYwbfGGyDWd4dJONSBYrZJDrdk2RTPP0S
u1eidp2p6P441Nbned5LNddmLZHNm+zcNm4x47GfyU8OgGCQMji3qYJgbt2C
fqAOzpr6PbuGJiHCfQU9HY3LxrKk3oKmaN24NYp2.8NuwxTWStyusyCfnaui
hLz4iE+EIC8r8B
-----------end_max5_patcher-----------
```

Cheers,

Adam

I would steer clear of [scale]’s exponential function; it is not to be trusted! Instead, you can create your own using [expr], or use Zachary Seldess’ [z.scale] or [z.zmap], which are both excellent; you can find them here.

lh

Hm… the problem is that I’m actually using software built with Max 5 whose code I can’t modify directly. It contains some [scale] objects, and I can only set their parameters (I can’t replace the objects themselves).

Fortunately I personally know the person who developed the application I’m using, so I can ask him to redesign it around some alternative to the [scale] object. Still, it would be nice if the native Max 5 object worked as well, especially since it shouldn’t be a big deal to fix (it’s not that hard to implement a scaling algorithm that won’t blow up so easily).
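For what it’s worth, here is a minimal sketch of one well-behaved exponential scaler (my own formulation, not [scale]’s actual internals): normalize the input to [0, 1], apply the curve there, then rescale.

```python
def exp_scale(x, in_lo, in_hi, out_lo, out_hi, base):
    """Map x from [in_lo, in_hi] to [out_lo, out_hi] along an
    exponential curve with the given base (base > 0, base != 1).

    Because the exponent is the normalized position t in [0, 1],
    base ** t never exceeds base itself, whatever the input range.
    """
    t = (x - in_lo) / (in_hi - in_lo)
    curved = (base ** t - 1.0) / (base - 1.0)  # 0 at t = 0, 1 at t = 1
    return out_lo + curved * (out_hi - out_lo)

# The interval from the bug report, with a large base:
print(exp_scale(4800, 4800, 7200, 50, 6500, base=100.0))  # 50.0
print(exp_scale(7200, 4800, 7200, 50, 6500, base=100.0))  # 6500.0
```

The endpoints land exactly on the output bounds, and no intermediate value can overflow for any sane base.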

Cheers,

Adam

I consider it bad design to use the scale factor at all.

I never ever needed it; instead I use mtof, ftom, atodb, and dbtoa for most cases, and expr for the custom wishes. That way I know what’s going on…

Even the makers of scale couldn’t deliver a convincing explanation of how it works; there is a very small region and a few fixed values that seem OK, but that’s far from usable.

I am sure it is not possible to entirely avoid blowing up, since exponential growth quickly produces values beyond the range of representable floating-point numbers…

Stefan

the limits of [zmap] and the bugs of [scale] were what once made me start using [expr] for all those scaling and mapping tasks.

-110

You may not want to know this, but I replaced the scale object in the original patch with an [lp.scampf map 4800 7200 50 6500]. Setting "exponential base" to 100 resulted in the output below.

Is that what you wanted?

> the limits of [zmap] and the bugs of [scale] were what once made me start using [expr] for all those scaling and mapping tasks.

I was going to say that those were the reasons I built scampf and scampi, except zmap didn’t exist at the time. expr is flexible, but you have to think about what you’re doing. I spent a couple of hours thinking when building scampf so I could think about other things in the following eight years ;-)

```
values: 0 50.
values: 1 50.412933
values: 2 50.825867
values: 3 51.2388
values: 4 51.651733
values: 5 52.064667
values: 6 52.477596
values: 7 52.89053
values: 8 53.303463
values: 9 53.716396
values: 10 54.12933
…
values: 117 98.313164
values: 118 98.726097
values: 119 99.13903
values: 120 99.551964
values: 121 99.96489
values: 122 100.377823
values: 123 100.790756
values: 124 101.20369
values: 125 101.616623
values: 126 253.562546
values: 127 6500.
```

Hi Peter,

thanks for the reply. Unfortunately I had to move on to other topics lately, which is why it took me so long to get back to this.

It seems that [lp.scampf] is a good choice, although I had to specify the mapping type (the default is linear mapping), so I actually replaced the original scale with [lp.scampf map 4800 7200 50 6500 exp]. For positive bases this works great, but for sufficiently negative bases (-4, for example) I got some odd curves where values inside the input interval map beyond the specified output bounds. With pow mapping, however, the problem goes away.

Thanks again,

Adam

I should perhaps have mentioned that lp.scampf’s ‘pow’ mode is the equivalent of scale’s "exponential" curves.

Lp.scampf’s ‘exp’ mode is a true exponential curve, but the math is such that negative base values do not result in a simple inversion of the curve generated by positive base values. And, for base values < -1, the curve goes outside the nominal output range before coming to the maximum value. That's simply the way the math works.

Someday I may build a compromise exponential mapping with more "nicely" behaved curves for negative base values. In the meantime, I guess ‘pow’ will be your friend.

The fact that linear mappings (with "kinks") are the default is a legacy issue, but it seems a reasonable design decision: usually the simplest behavior is the default.

Glad you’re finding scampf useful.

Best,

P.

Peter Castine wrote on Fri, 21 August 2009 19:29:

> Lp.scampf’s ‘exp’ mode is a true exponential curve, but the math is such that negative base values do not result in a simple inversion of the curve generated by positive base values. And, for base values < -1, the curve goes outside the nominal output range before coming to the maximum value. That's simply the way the math works.
>
> Someday I may build a compromise exponential mapping with more "nicely" behaved curves for negative base values. In the meantime, I guess ‘pow’ will be your friend.

in my expr stuff i don’t see this issue because i always map to 0. 1. before i apply distortion.

does that mean 110.makeexp is cooler than lp.scampf? that can’t be true, no?

or is that simply the difference between engineering and math?

-110


Roman Thilenius wrote on Sat, 22 August 2009 15:57:

> in my expr stuff i don’t see this issue because i always map to 0. 1. before i apply distortion.

does that mean 110.makeexp is cooler than lp.scampf?


Not at all. At least not in my case, since I can’t map the data to [0, 1] before doing anything with it. I can apply only a single mapping to my incoming data, and it has to happen in one step (and no, at least currently there’s no way to extend the software I use with my own ‘mapping patch’ – and I guess this will remain the case, since the mapping of sensor data to internal values has to stay really lightweight).

Also, I’ve had quite bad experiences with the performance of [expr]. As far as I can tell it is much more resource-hungry than scale-like objects when used for a simple scaling…

why exactly can’t you map 4800 7200 to 0 1 (or 1 2), then do log/exp/pow and then map to 50 6500?

it surely needs more CPU, and it is even a bit more inaccurate, but at least it doesn’t have the [scale] bug or the [zmap] limitation which does not allow you to do [zmap 4800 7200 6500 50], for example.

-110
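Roman’s three-step recipe above can be sketched as follows (a hypothetical helper, not an existing Max external). Note that the final rescale step handles an inverted output range like [zmap 4800 7200 6500 50] for free:

```python
def map_via_unit(x, in_lo, in_hi, out_lo, out_hi, curve=lambda t: t):
    # Step 1: map [in_lo, in_hi] to [0, 1]
    t = (x - in_lo) / (in_hi - in_lo)
    # Step 2: apply the distortion (log/exp/pow/...) on the unit interval,
    # where exponentials cannot blow up regardless of the input range
    t = curve(t)
    # Step 3: map [0, 1] to [out_lo, out_hi]; works even if out_lo > out_hi
    return out_lo + t * (out_hi - out_lo)

# Inverted output range, the case [zmap] refuses:
print(map_via_unit(4800, 4800, 7200, 6500, 50))  # 6500.0
print(map_via_unit(7200, 4800, 7200, 6500, 50))  # 50.0

# With an exponential-style distortion on the unit interval:
print(map_via_unit(6000, 4800, 7200, 50, 6500,
                   curve=lambda t: 2.0 ** t - 1.0))
```

The extra multiply and divide are the "more CPU" Roman mentions; in exchange, every intermediate value stays small and representable.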