
Use of double with Atom and A_FLOAT

June 11, 2008 | 8:07 pm

I would like to clear up something that I've never managed to understand.
Especially now that I feel very comfortable with the SDK, this little thing
is bugging me.

If you define a method with A_FLOAT arguments, the data passed in is a
double.
If you define a method as A_GIMME, the data is passed in as Atoms.
An Atom stores its data in a 32-bit word, which can hold a float amongst
other types, but if you call getfloat() it returns a double.

If I remember correctly, on PowerPC, you can get away with using float
instead of double as the argument to your methods, on Intel processors you
can’t.

I don’t have a background in computer science so I might be missing some
fundamental knowledge here, but this is all a bit confusing to me. I’ve
searched the archives, and I couldn’t really find a good answer as to why
this works the way it does. Most posts I found are about people getting
confused because they used float arguments in their A_FLOAT declared
methods.

Are the A_FLOAT arguments passed in as doubles really 64-bit, or just 32-bit
floats cast to double? If they are just floats, why are they passed this
way? If not, what's the use of this extra precision? Objects will truncate
it to 32-bit if either one of their i/o methods uses Atoms, no?

About the Atom getfloat() returning a double I’m completely in the dark.

Thijs


June 12, 2008 | 10:32 am

The single vs. double precision thing is a result of how different machine architectures use hardware registers for passing parameters. If you don't have much of a CS background I can explain this until I'm blue in the face and it still may not make much sense to you. So unless you want a CS 101 lecture, my advice is to just get used to the way it works. Atoms use single-precision floats; parameters passed in registers are passed with double precision.

————-
>If I remember correctly, on PowerPC, you can get away with using float
>instead of double as the argument to your methods
————-

With *modern* PPC compilers you can get away with declaring your function parameters as float and the compiler will automatically upgrade them to double. Relying on this is, however, a Bad Idea(tm) because your code becomes unportable.

As an historical footnote, the first PPC compilers didn’t upgrade float to double, either. I recently found some list correspondence from 2001 confirming this. Lord knows where I put it.

My advice is alwaysalwaysalways use double for parameter lists.
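
To make this concrete, a minimal sketch (Max 5-style class_addmethod() shown; the old addfloat()/addmess() calls behave the same way as far as the double is concerned, and t_myobj, m_value and the handler names are just placeholders, not anything from the actual SDK headers):

    /* in main()/ext_main(): */
    class_addmethod(c, (method)myobj_float, "float", A_FLOAT, 0);
    class_addmethod(c, (method)myobj_list,  "list",  A_GIMME, 0);

    /* A_FLOAT: the argument arrives as a 64-bit double on every platform */
    void myobj_float(t_myobj *x, double f)
    {
        x->m_value = f;
    }

    /* A_GIMME: the data arrives as Atoms, and the float inside an Atom
       is only single precision */
    void myobj_list(t_myobj *x, t_symbol *s, long argc, t_atom *argv)
    {
        if (argc && argv->a_type == A_FLOAT)
            x->m_value = atom_getfloat(argv);  /* returns double, but the Atom
                                                  only ever held 32 bits */
    }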

—————
>Are the A_FLOAT arguments passed in as doubles really 64bit, or just 32bit
>floats casted to double?
—————

A typecast-to-double *is* 64 bit.

Typecasting to double is basically telling the compiler "take the value, whatever format it is (byte, short, long, float, whatever), and turn it into a full 64-bit double, _with_any_and_all_bit_pattern_conversion_necessary_."
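
A trivial (made-up) example:

    float  f = 0.1f;       /* 0.1 rounded once, to single precision           */
    double d = (double)f;  /* a genuine 64-bit value holding that same rounded
                              number; the cast widens the bit pattern, it does
                              not bring back the precision lost when 0.1 was
                              first squeezed into a float                     */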

—————
>About the Atom getfloat() returning a double I’m completely in the dark.
—————

Do you mean atom_getfloat() from the pattr-SDK? Since this isn’t the Java list, I assume you don’t mean the Java Atom.getFloat() method.

I will admit that I was a little taken aback, but consider that in C, function return values are typically passed back in a register. Recall what I said in the first paragraph: values passed in float registers are double precision. QED.

However, if you assign atom_getfloat()'s return value to a single-precision variable, the C compiler will take care of converting the value from 64-bit to 32-bit representation.

atom_getfloat() is about extracting a floating-point value rather than an int or a symbol; it's not about the single/double precision thing.
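
E.g., assuming av is a t_atom* you already have (say, from an A_GIMME argument list):

    double d = atom_getfloat(av);  /* the return value travels as a double    */
    float  f = atom_getfloat(av);  /* same call; the compiler narrows the
                                      double back to 32 bits on assignment    */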

In brief: floating point is *always* double precision in the Max API nowadays with the sole exceptions of Atoms and MSP signal vectors.


June 13, 2008 | 3:15 pm

On Thu, Jun 12, 2008 at 11:32 AM, Peter Castine wrote:

>
> The single vs. double precision thing is a result of how different machine
> architectures use hardware registers for passing parameters. If you don’t
> have much of a CS background I can explain this until I’m blue in the face
> and it still may not make much sense to you. So unless you want a CS 101
> lecture, my advice is to just get used to the way it works. Atoms use single
> precision floats, passing parameters in registers is done with double
> precision.
>

Thanks a lot Peter, I didn't realize it had that much to do with processor
architecture. I've read up on CPU design and now I understand a bit more
about the fundamentals and the use of registers.

> My advice is alwaysalwaysalways use double for parameter lists.
>

I used floats once because I started my first externals on PPC and didn't
know at the time that the arguments were passed as doubles. I found that
out when attempting to port the code to Windows.

>
> —————
> >Are the A_FLOAT arguments passed in as doubles really 64bit, or just 32bit
> >floats casted to double?
> —————
>
> A typecast-to-double *is* 64 bit.
>

Yes, of course. I formulated my question a bit too vaguely. Never mind.

—————
> >About the Atom getfloat() returning a double I’m completely in the dark.
> —————
>
> Do you mean atom_getfloat() from the pattr-SDK? Since this isn’t the Java
> list, I assume you don’t mean the Java Atom.getFloat() method.

Yes I meant atom_getfloat().

> I will admit that I was a little taken aback, but consider that in C
> function return values are typically passed in a register. Recall what I
> said in the first paragraph: values passed in float registers are double
> precision. QED.

While researching this I encountered several C optimization sites which
claim it's best to use int (instead of short and char) and double (instead
of float) for method arguments and return types. This is the same issue,
right? Because the int and double types correspond to the processor's
word length, it involves less work when converting to/from the registers.

Using doubles will take up a bit more memory, but can it actually be more
(or equally) efficient to use double instead of float for internal
calculations in my externals?

And I might as well change attributes to use float64 instead of float32 no?

As you can see I'm confused as to why I would use anything with fewer bits
than int and double if the CPU doesn't benefit from it. Memory size is not
an issue in most of my externals. I know these optimizations might seem very
trivial, and I should probably focus on my algorithms instead, but I'd like
to optimize my code for speed. That includes making conscious decisions about
data types.

> However, if you assign the atom_getfloat()’s return value to a
> single-precision variable, the C compiler will take care of converting the
> float from 64-bit to 32-bit representation.
>
> atom_getfloat() is more about extracting a floating point value rather than
> an int or a symbol. It’s not about the single/double precision thing.
>
> In brief: floating point is *always* double precision in the Max API
> nowadays with the sole exceptions of Atoms and MSP signal vectors.
> –
>

So just to get this straight… If two or more objects in a Max chain use
A_FLOAT for both their in and out methods, and the internal calculations use
double precision, then the output at the end of the chain is true double
precision. If, however, one of the objects uses Atoms in one of the methods
for passing the data, the double-precision calculations are truncated to
single precision. You still have a 64-bit output, but the signal path is no
longer truly 64-bit. Am I correct?

Cheers,
Thijs


June 14, 2008 | 10:46 am

Quote: thijs.koerselman wrote on Fri, 13 June 2008 17:15
—————————————————-
> And I might as well change attributes to use float64 instead of float32 no?

One point about float64/32: if you're using the default setter/getter functions, you'd better match the actual data type inside your structure. More than that I don't know.
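
Something along these lines, if my memory of the Max 5 attribute macros serves (t_myobj and m_gain are placeholder names):

    typedef struct _myobj {
        t_object ob;
        double   m_gain;   /* the struct member is a double...               */
    } t_myobj;

    /* ...so in main()/ext_main() declare the attribute as float64
       (CLASS_ATTR_DOUBLE), not float32, or the default accessors will
       read and write the wrong width at that offset */
    CLASS_ATTR_DOUBLE(c, "gain", 0, t_myobj, m_gain);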

Jeremy or Joshua might have more to say about this.

> As you can see I’m confused as to why I would use anything with less bits
> then int and double if the cpu doesn’t benefit from it.

There was a time when optimizing for size was pretty important, so people used shorts and bytes and even packed bits when they could.

There is a speed/size trade off in most design decisions.

BTW: this is somewhat anal-retentive on my part, but I never use ‘int’ as a synonym for ‘long’. Nowadays I think every C compiler you can find implements int as 32-bit, but K&R clearly state that you cannot make any such assumption; all they guarantee is that sizeof(long) >= sizeof(int) >= sizeof(short). So I use long/short when I want to specify a particular bit-width, and int only when I don’t care. But this is, to some extent, a matter of taste.
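
A two-minute sanity check you can compile anywhere, if you're curious what your particular compiler actually gives you:

    #include <stdio.h>

    int main(void)
    {
        /* the only ordering K&R promise:
           sizeof(long) >= sizeof(int) >= sizeof(short) */
        printf("short: %lu  int: %lu  long: %lu (bytes)\n",
               (unsigned long)sizeof(short),
               (unsigned long)sizeof(int),
               (unsigned long)sizeof(long));
        return 0;
    }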

> So just to get this straight… If 2 or more objects in Max chain, use
> A_FLOAT for both their in and out methods, and the internal calculations use
> double precision, the output at the end of the chain is true double
> precision.

Actually, I’m not sure about that. I suspect that things like outlet_float() are just convenience wrappers around outlet_anything() and copy data into Atoms behind the scenes.
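
Purely hypothetical sketch of what I mean, i.e. NOT the actual Max kernel code, just the shape I suspect it has (in real code you would of course keep calling outlet_float() itself):

    /* hypothetical -- not real Max internals */
    void suspected_outlet_float(void *out, double f)
    {
        t_atom a;
        atom_setfloat(&a, f);  /* the double gets squeezed into the Atom's
                                  32-bit float slot right here              */
        outlet_anything(out, gensym("float"), 1, &a);
    }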

>If, however, one of the objects use Atoms in one of the methods
> for passing the data, the double precision calculations are truncated to
> single precision.

That’s right.

From other discussions, I’m pretty sure that if you’ve got a patch cord involved, you’re limited to a 32-bit interface between the objects involved.

Hope this helps,
P


June 14, 2008 | 12:59 pm

On Sat, Jun 14, 2008 at 11:47 AM, Peter Castine wrote:

>
> —————————————————-
> > And I might as well change attributes to use float64 instead of float32
> no?
>
> One point about float64/32 is if you’re using default setter/getter
> functions, you better match the actual data type inside your structure. More
> than that I don’t know.
>

Yes, provided that I match them to double member variables, of course.

>
> > As you can see I’m confused as to why I would use anything with less bits
> > then int and double if the cpu doesn’t benefit from it.
>
> There was a time when optimizing for size was pretty important, so people
> used shorts and bytes and even packed bits when they could.
>
> There is a speed/size trade off in most design decisions.

I did a bit more research on the float vs double performance, and I found
the following quotes:

"On x86 the FPU treats floats & doubles the same internally, so there is no
speed difference (aside from the extra memory bandwidth for double
loads/stores and larger cache footprint)."

"Assuming you are working with a recent crop of the x86 processors, anything
from Pentium II onwards. It really makes little difference. floats an
integers are both treated similarly. If you really are shooting for
performance and do not care about precision, then you can tell the
coprocessor to work in low-precision mode and use floats instead of doubles.
A tiny chunk of assembler (that I don’t seem to find right now ;) will do
that for you."

I don't think I really need the extra precision most of the time, so I guess
I'll stick with using floats. Also, OpenGL, which I use regularly, uses
GLfloat for most of its method arguments, which is 32-bit in almost every
implementation afaik.

Thanks Peter, you've been very helpful. I think it's time for me to stop
worrying about this and focus on my work. I'm glad I now understand the
reason behind the use of doubles in some API methods. If I really care about
optimization that much, I'd better spend some time learning assembly :)

Cheers,
Thijs


September 17, 2009 | 9:16 am

Just a remark on the subject:
I'm currently working on an MSP object that does a lot of number crunching (all basic stuff, actually, sums and multiplications, but an awful lot of them).
First I declared all the floating-point variables as double, as I usually do (in the object structure, in the perform method, … everywhere).
Looking for optimization, I replaced all the doubles with floats, and the CPU load increased by almost 30%…
andrea
(MacBook Pro Core 2 Duo @ 2.4GHz)



September 17, 2009 | 10:18 am

Two points

1) Converting float-to-double is surprisingly expensive on all the architectures I’ve ever worked with. Not quite sure why, but it is.

2) Many modern processors handle operations on double faster than float.

I think that chip designers are taking the approach that doubles are used more (or are "more important") and are focusing their optimization effort there. But I'm not an expert in this area, so don't quote me on the last sentence.

2a) The exception to observation (2) is that vector processing seems to be pretty zippy with 32-bit values, including floats.

In short, your observation doesn’t surprise me a lot!-/
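
For anyone who wants to poke at this on their own machine, a crude timing loop like the one below (nothing Max-specific, and obviously not representative of a real perform routine) is enough to compare the two. Results vary a lot with compiler flags and CPU, so treat the numbers as rough indications only.

    #include <stdio.h>
    #include <time.h>

    #define N 100000000L

    int main(void)
    {
        volatile float  fs = 0.f;   /* volatile so the loops aren't optimized away */
        volatile double ds = 0.0;
        long i;
        clock_t t0;

        t0 = clock();
        for (i = 0; i < N; i++) fs = fs * 0.999999f + 0.5f;
        printf("float:  %.2f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

        t0 = clock();
        for (i = 0; i < N; i++) ds = ds * 0.999999 + 0.5;
        printf("double: %.2f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

        return 0;
    }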

