multiplication vs. division
i just asked this question to Cycling '74 support, but was told to post it on the forum,
so here i am, doing as i was told (first post).
In a few documents about Max i read that multiplication is faster than division,
so it would be better to do * 0.5 instead of / 2.
1. Could you explain why?
2. Is there a way to measure the time an object takes for an operation?
I tried it with the timer object and with cpuclock, but they seem to output
a different value each time i bang.
any response very welcome
clemens
It's hard to time a single iteration of anything; 10,000 iterations or more become measurable.
Comparing [/ 2.] vs [* .5]
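For anyone who wants to reproduce this outside Max, here is a rough C analogue of that test (a sketch, on the assumption that a plain loop stands in for whatever drives the bangs in the patch; absolute timings will vary by compiler and machine):

/* Time 10,000,000 divisions against 10,000,000 multiplications.
   A single operation is far below timer resolution; a big loop
   makes the cost visible. */
#include <stdio.h>
#include <time.h>

#define N 10000000

int main(void) {
    volatile double x = 1234.5;  /* volatile stops the compiler from
                                    folding the loop away */
    double acc = 0.0;
    clock_t t0, t1;

    t0 = clock();
    for (int i = 0; i < N; i++) acc += x / 2.0;
    t1 = clock();
    printf("/ 2.0 : %f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    for (int i = 0; i < N; i++) acc += x * 0.5;
    t1 = clock();
    printf("* 0.5 : %f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    return acc > 0.0 ? 0 : 1;   /* use acc so the loops aren't dead code */
}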
plenty of info out there about multiplication vs. division, but the short version: hardware can reduce multiplication to shifts and adds that run largely in parallel, while division has to produce the quotient digit by digit, each step depending on the remainder from the last, so it can't be parallelized the same way. In this example it's negligible, but in other cases it's definitely a concern. Any time you can multiply instead (which is most cases), it's better... one case where you can't is when you don't know the divisor beforehand. (Even then you could use a lookup table, but there are limits to how many entries you want to include :)
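That lookup-table idea might look like this in C (a hypothetical sketch: it assumes the divisors come from a small known range, so each reciprocal is computed once and reused):

/* Hypothetical reciprocal table: pay for one division per divisor
   up front, then every later "division" is a table read plus a
   multiplication. */
#include <stdio.h>

#define MAX_DIVISOR 256

static double recip[MAX_DIVISOR + 1];

static void init_recip(void) {
    for (int d = 1; d <= MAX_DIVISOR; d++)
        recip[d] = 1.0 / d;     /* one division per divisor, done once */
}

int main(void) {
    init_recip();
    double x = 440.0;
    printf("%f\n", x * recip[2]);   /* same as x / 2: prints 220.000000 */
    return 0;
}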
I suspect the divide object is implemented internally as a multiplication anyway, or the compiler optimizes it as such. There are other Max-object optimizations that will show more definite improvements:
For multiplying or dividing by a power of 2, use a bit shift instead: e.g., instead of [* 64], use [<< 6].
For modulo, if the divisor is a power of two, use a bitwise AND with the divisor minus 1: e.g., instead of [% 128], use [& 127].
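Both identities, stated in C for reference, with a caveat worth knowing: they only match the arithmetic operators for non-negative integers and power-of-two constants.

/* x * 2^k == x << k, and x mod 2^k == x & (2^k - 1),
   for unsigned (or non-negative) x. */
#include <assert.h>

int main(void) {
    unsigned int x = 1000;
    assert((x * 64)  == (x << 6));   /* multiply by 2^6 == shift left by 6 */
    assert((x % 128) == (x & 127));  /* mod 2^7 == keep the low 7 bits */
    return 0;
}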
Any time you can multiply instead (which is most cases), it's better
I would recommend the opposite. As you can see in the patch Chris posted, the difference is really not significant either way, so the thing you should prioritize is the readability of the patch. In the old days, when computers were slow, this wasn't true, but nowadays changing a division into a multiplication is probably not going to make any difference, and you increase your chances of getting weird behavior due to floating-point precision issues.
Readability is great… then, if you have performance issues, optimize!
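A quick C check of that precision point (assuming IEEE-754 doubles): 0.1 has no exact binary representation, so multiplying by 0.1 and dividing by 10 can disagree in the last bit.

/* 0.1 cannot be represented exactly in binary, so x * 0.1 and
   x / 10.0 round differently. */
#include <stdio.h>

int main(void) {
    double a = 3.0 / 10.0;
    double b = 3.0 * 0.1;
    printf("3 / 10.0 = %.17g\n", a);   /* 0.29999999999999999 */
    printf("3 * 0.1  = %.17g\n", b);   /* 0.30000000000000004 */
    printf("equal? %s\n", a == b ? "yes" : "no");   /* no */
    return 0;
}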
In a few documents about Max i read that multiplication is faster than division,
so it would be better to do * 0.5 instead of / 2.
1. Could you explain why?
Take a pencil and a piece of paper. Divide 39,203 by 197.
On the same piece of paper, multiply 196 by 198.
Which is faster? Why? The multiplication grinds through the digits mechanically; the long division makes you estimate each quotient digit, multiply, compare, and back up when the guess was wrong.
The only difference between you (equipped with pencil and paper) and the computer is that the computer is faster than you. But the difference between multiplication and division is essentially the same.
The computer is faster, so with the computer we're talking about nanoseconds, whereas you (+paper+pencil) need several billion times as long. Furthermore, the actual mathematical operation takes practically no time at all compared to the time Max/MSP spends in message passing. That's why I second EJ's advice to prefer the solution that is more readable--in the time we've spent discussing your question, your computer could have performed quadrillions of division operations. (And there are people who think the national debt is a big number--they don't know what really big numbers are.)
John's idea about the division object internally replacing, say, [/ 2] with multiplication by 0.5 is not likely to be used in the plain-vanilla division object [/]. It's not worth the effort, and there can be problems with precision. The MSP [/~] object does do this (or at least did, the last time it was documented), because the processing hit can be significant when performing the operation 44,100 times per second (or whatever your sampling rate is). Actually, on current processors the performance hit is pretty small, and this practice seems a quaint reminder of the days when we measured processor speed in MHz.
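That [/~] trick presumably amounts to something like this (a sketch, not Cycling '74's actual source; the names are made up): compute the reciprocal once when the divisor changes, so the per-sample loop only multiplies.

/* Divide-by-constant at signal rate: one real division per
   parameter change, one multiplication per sample. */
#include <stdio.h>

typedef struct {
    double divisor;
    double recip;    /* cached 1.0 / divisor */
} t_div;

static void div_set(t_div *x, double d) {
    x->divisor = d;
    x->recip   = (d != 0.0) ? 1.0 / d : 0.0;
}

static void div_perform(const t_div *x, const double *in, double *out, long n) {
    for (long i = 0; i < n; i++)
        out[i] = in[i] * x->recip;
}

int main(void) {
    t_div d;
    double in[4] = { 1.0, 2.0, 3.0, 4.0 }, out[4];
    div_set(&d, 2.0);
    div_perform(&d, in, out, 4);
    for (int i = 0; i < 4; i++) printf("%g ", out[i]);   /* 0.5 1 1.5 2 */
    printf("\n");
    return 0;
}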
i think the argument that computers are fast these days misses the point; what matters is the
relative cost of the two methods.
is it 1%? or is it 10%? a difference of 10% could be worth the effort, at least for signals, or maybe even
at metro rates on my old G4 computers.
and how about √? is the engineer's old rule of avoiding it where possible still valid?
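For reference, the trick in question: √ is monotonic, so when you only need to compare magnitudes you can compare the squared values and skip the root entirely. A generic C sketch (nothing Max-specific; the function name is made up):

/* Avoiding sqrt: a < b exactly when sqrt(a) < sqrt(b) for
   non-negative a, b, so compare squared distances instead. */
#include <stdio.h>
#include <math.h>

static double dist2(double dx, double dy) {
    return dx * dx + dy * dy;   /* squared distance, no sqrt */
}

int main(void) {
    double a = dist2(3.0, 4.0); /* 25 -> true distance 5     */
    double b = dist2(6.0, 1.0); /* 37 -> true distance ~6.08 */
    printf("a closer? %s\n", a < b ? "yes" : "no");   /* yes */
    printf("check: %g vs %g\n", sqrt(a), sqrt(b));    /* 5 vs 6.08276 */
    return 0;
}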
and what about the idea of doing bitwise operations when you need to do 128/16? 10% faster? or 50%?
btw. i would question whether 3/2 is really more readable than 3*0.5; it depends on the context.
i often actually think in 0.5.
the most important idea is, in my opinion, to avoid calculating in realtime things which are not needed
in realtime (see the sketch after this list).
if someone does dspstate - sig~ - /~ 2. just to get nyquist at startup, he needs to rethink that.
plus, modulation signals can in many cases run downsampled just fine.
plus, turn off what you do not need.
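That startup example, restated in C (a generic sketch, not Max SDK code; the names are made up): the Nyquist frequency only changes when the sampling rate does, so compute it once when DSP (re)starts instead of dividing at signal rate.

/* Cache values that change rarely; keep the per-sample path free
   of work that doesn't belong there. */
#include <stdio.h>

static double g_nyquist;                  /* updated only on DSP start */

static void on_dsp_start(double samplerate) {
    g_nyquist = samplerate * 0.5;         /* once, not 44,100x a second */
}

int main(void) {
    on_dsp_start(44100.0);
    printf("nyquist = %g\n", g_nyquist);  /* 22050 */
    return 0;
}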
maybe i need to rethink my extensive use of expr for numbers. °•°
but i prefer to talk myself into the idea that i can save CPU by sending fewer numbers
between max objects that way.
what was the topic here? i've lost it, i think.
-110
In this particular case, the difference is less than 1%. In any real-world patch, many more CPU cycles are likely consigned to gratuitous feedback than to a difference at this level.