I would like to clear up something that I've never managed to understand.
Especially now that I feel very comfortable with the SDK, this little thing
is bugging me.
If you define a method with A_FLOAT arguments, the data is passed in as a double.
If you define a method as A_GIMME, the data is passed in as Atoms.
An Atom is 32-bit, and holds a float amongst other types, but if you call
getfloat() it returns a double.
If I remember correctly, on PowerPC you can get away with using float
instead of double as the argument to your methods, but on Intel processors you
can't.
I don't have a background in computer science so I might be missing some
fundamental knowledge here, but this is all a bit confusing to me. I've
searched the archives, and I couldn't really find a good answer as to why
this works the way it does. Most posts I found are about people getting
confused because they used float arguments in their A_FLOAT-declared
methods.
Are the A_FLOAT arguments passed in as doubles really 64-bit, or just 32-bit
floats cast to double? If they are just floats, why are they passed this
way? If not, what's the use of the extra precision? Objects will truncate
it to 32 bits if either of their i/o methods uses Atoms, no?
As for why the Atom's getfloat() returns a double, I'm completely in the dark.