The "save CPU" aspect is still valid to some extent because the format [* ] vs. [* 1.] determines which type of operation (integer vs. float) will be used. So it's not necessary to derive the operation type at runtime from the input types which may save some CPU cycles. But from the users view, I find it rather inconvenient.
When Max was young, there was a huge performance difference between integer math and floating point math. As processors have gotten better, this difference has at least been reduced, and perhaps has largely gone away. The trouble is that at this point, it couldn't really be changed without potentially breaking thousands of patches.
I for one think it's best to have to be explicit. Yes, I've been stung by this too (or stung myself, more accurately), and spent considerable time trying to figure out a problem, only to discover a truncation somewhere. But I wouldn't change it for anything, because all you need to do is be explicit everywhere it matters. It also forces you to be really sure about what you're doing mathematically and why.
@jamesson: There should be no objects that "arbitrarily" change values from floats to ints or vice versa. They do what their arguments or their incoming values tell them to do, and if they don't, it's a bug. Is that what you found?