characteristics of floats and pitch


    Apr 22 2006 | 6:52 am
    Here's a very basic question - I'm almost embarrassed to ask, but I do need to get this right. I'm considering a change to how I store frequency data. Right now I'm storing the number directly as a float, so 440.5 Hz is 440.5 in storage.
    This is for pitch, so it's more important for me to favor precision in the lower range. For a variety of reasons, it would also be more convenient and interesting to use the range from 0. to 1.
    Question One: Does changing the range change the *relative* precision? In other words, am I better off with the [0. ... 22500.] range or the [0. ... 1.] range if I want a minimum of audible floating point inaccuracy? Is this even an issue, perceptually? Should I use [-1. ... 1.] with the assumption that MSP's floats are all signed anyway?
    Question Two: Is there an optimum transformation / scaling to apply to the frequency data in order to favor audible pitch correctness (ie greater precision as pitch decreases)? Does that even make sense?
    Thanks for any feedback on this!
    -j

    • Apr 22 2006 | 8:19 am
      Question One:
      Yes, it does, and you'd be better off with 0-22050 than 0-1. You'll have the same number of values available after the decimal in both cases, but the bit before will give you the extra precision. (All depends on how precise you need)
      Question Two:
      Unlike the rest of the audio world, in Max you can have floating point midi, which is extremely useful (Use ftom 0.) More accuracy where you need it (the low end), less where you don't. Also, note that ftom isn't limited to 0-127. Check the ftom help file for one slight caveat on rounding... The other great advantage of using midi scale for all things frequency is that (like the decibel scale for amp) it responds to controls in the way you expect it to.
      Peter McCulloch
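      [Editor's note: the conversion [ftom] and [mtof] perform is the standard 12-tone equal temperament mapping around A440. A minimal Python sketch of that formula (the function names simply mirror the Max objects; this is not Max code):]

```python
import math

def ftom(freq_hz):
    """Frequency in Hz -> floating-point MIDI note number (A440 = 69)."""
    return 69.0 + 12.0 * math.log2(freq_hz / 440.0)

def mtof(midi_note):
    """Floating-point MIDI note number -> frequency in Hz."""
    return 440.0 * 2.0 ** ((midi_note - 69.0) / 12.0)

print(ftom(440.0))   # 69.0
print(mtof(60.0))    # ~261.63 Hz, middle C
```

      [Because the scale is logarithmic, equal steps in MIDI space are equal musical intervals, which is why precision "follows" pitch downward, as Peter describes.]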
    • Apr 22 2006 | 11:05 am
      On 22-Apr-2006, at 8:52, dlurk wrote:
      > Question One: Does changing the range change the *relative* precision?
      > In other words, am I better off with the [0. ... 22500.] range or
      > the [0. ... 1.] range if I want a minimum of audible floating point
      > inaccuracy? Is this even an issue, perceptually? Should I use
      > [-1. ... 1.] with the assumption that MSP's floats are all signed
      > anyway?
      Actually, it doesn't. Floating point means *floating* point: the binary "decimal" point is always moved so that you have an effective 24-bit mantissa. It doesn't matter if you have 0.0000000000000123456 or 123.456 or 123456000000000000. You always have the equivalent of approximately 6 digit precision.
      This is, btw, far beyond the limits of pitch perception.
      The seventh digit is a crap shoot though. There are tons of messages in the archive complaining about inaccuracies way down at the 7th, 8th, 9th significant digit. These are really cosmetic though. And, as said, far beyond the limits of pitch perception.
      Google on IEEE-754 for full details.
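      [Editor's note: that scaling the range doesn't change *relative* precision can be checked directly. A quick Python sketch, using struct to measure the spacing between adjacent 32-bit floats:]

```python
import struct

def float32_ulp(x):
    """Distance from x to the next representable 32-bit float above it."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    nxt = struct.unpack('<f', struct.pack('<I', bits + 1))[0]
    return nxt - x

# Relative spacing is roughly 1e-7 (about 7 significant digits)
# at every scale, whether the working range is 0-1 or 0-22050:
for x in (0.01, 1.0, 440.5, 22050.0):
    print(x, float32_ulp(x) / x)
```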
      > Question Two: Is there an optimum transformation / scaling to apply
      > to the frequency data in order to favor audible pitch correctness
      > (ie greater precision as pitch decreases)? Does that even make sense?
      Pitch perception is essentially logarithmic, with some "stretching" outside the midrange. Peter McCulloch's pointer to the mtof object is useful. But for full details, any of the standard texts on acoustics will be helpful: Terhardt (Akustische Kommunikation), Wood (Physics of Music), Pierce or even Helmholtz (just to name a few).
      Hope this helps, Peter
      -------------- http://www.bek.no/~pcastine/Litter/ -------------
      Peter Castine --> Litter Power & Litter Bundle for Jitter
      iCE: Sequencing, Recording & Interface Building for Max/MSP
      Extremely cool:
      http://www.castine.de
      http://www.dspaudio.com/
    • Apr 22 2006 | 12:42 pm
      > Actually, it doesn't. Floating point means *floating* point: the
      > binary "decimal" point is always moved so that you have an effective
      > 24-bit mantissa. It doesn't matter if you have 0.0000000000000123456
      > or 123.456 or 123456000000000000. You always have the equivalent of
      > approximately 6 digit precision.
      Ok... that fits and expands my understanding. Wasn't sure if there was anything funky about Max floats. So this, in effect, compensates to some crude degree for my second concern - as the frequencies drop past the decimal powers, additional precision is gained on the other side of the point. (Not sure if I expressed that correctly, but I think I get the concept.)
      So taking that into account, Peter McCulloch's answer ("it does") makes sense in the context of my question. I think there will be a slight accuracy advantage to using the actual frequencies rather than a scaled version. I suppose it's also going to save a significant amount of processor effort.
      > Google on IEEE-754 for full details.
      Thanks, I always forget to check the standards.
      > Pitch perception is essentially logarithmic, with some "stretching"
      > outside the midrange. Peter McCulloch's pointer to the mtof object
      > is useful. But for full details, any of the standard texts on
      > acoustics will be helpful: Terhardt (Akustische Kommunikation),
      > Wood (Physics of Music), Pierce or even Helmholtz (just to name a few).
      Great! A few names are always useful. And now that I have a bookshelf... ;) ...I suppose I should make some use of it.
      Incidentally, the combination of the help files for mtof, ftom, and expr got me started along this path - this path that excludes using mtof and ftom. Ah, the joy of arbitrary and occasionally indeterminate tunings.
      Many thanks on this very late night,
      -j
    • Apr 22 2006 | 1:43 pm
      three cheers for newbies.
      this is a new fact for me. is this the case for jitter numbers as well? is the mantissa 32-bit? and 64-bit for float64?
    • Apr 22 2006 | 4:52 pm
      On 22-Apr-2006, at 15:43, joshua goldberg wrote:
      > this is a new fact for me. is this the case for jitter numbers as
      > well? is the mantissa 32-bit? and 64-bit for float64?
      There is no difference between "Jitter 32-bit" floats and "Max 32-bit" floats and "MSP 32-bit" floats. They are all IEEE-754 floats. That is what the hardware supports.
      32-bit floats have sign bit, 8-bit exponent, 24-bit mantissa.
      64-bit floats have sign bit, 11-bit exponent, 53-bit mantissa.
      It is correct that the sum of exponent length, mantissa length, plus one (sign bit) is one bit greater than the total number of bits available. Really it is: the leading 1 of a normalized mantissa is always 1, so it is implied rather than stored, and you get one bit of precision for free.
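      [Editor's note: the layout can be inspected directly. A Python sketch that pulls the three fields out of a 32-bit float; note that only 23 mantissa bits are actually stored:]

```python
import struct

def float32_fields(x):
    """Split a 32-bit IEEE-754 float into (sign, biased exponent, stored mantissa)."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF   # 8 bits, biased by 127
    mantissa = bits & 0x7FFFFF       # 23 stored bits; the leading 1 is implicit
    return sign, exponent, mantissa

# 440.5 Hz = 1.720703125 * 2^8, so the biased exponent is 8 + 127 = 135
print(float32_fields(440.5))   # (0, 135, 6045696)
```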
      There are bunches of posts in the archives about this. Plenty of tidbits in there for the curious. If I had a euro for every message I've written about floating point numbers...
    • Apr 22 2006 | 5:17 pm
      On Apr 22, 2006, at 12:52 PM, Peter Castine wrote:
      > If I had a euro for every message I've written about floating point
      > numbers...
      ...you'd be 20 % richer than if you had asked for US dollars.
      ----- Nathan Wolek nw@nathanwolek.com http://www.nathanwolek.com
    • Apr 22 2006 | 5:29 pm
      On 22 avr. 06, at 19:17, Nathan Wolek wrote:
      > On Apr 22, 2006, at 12:52 PM, Peter Castine wrote:
      >> If I had a euro for every message I've written about floating
      >> point numbers...
      >
      > ...you'd be 20 % richer than if you had asked for US dollars.
      Not exactly, because Peter started explaining IEEE and co a long time ago. The euro was not even born :-) And when the euro arrived, it was cheaper than the dollar... Is there an object in the Litter package to calculate that? :-)
      Cheers, ej
    • Apr 22 2006 | 7:23 pm
      >> this is a new fact for me. is this the case for jitter numbers as
      >> well? is the mantissa 32-bit? and 64-bit for float64?
      >
      > There is no difference between "Jitter 32-bit" floats and "Max
      > 32-bit" floats and "MSP 32-bit" floats. They are all IEEE-754
      > floats. That is what the hardware supports.
      true, although the float number box unfortunately doesn't show all digits of the fraction. so, for example, what looks like zero doesn't necessarily have to be zero. this trapped me once... here is a little example.
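      [Editor's note: the trap isn't unique to the number box; any fixed-width decimal display does the same thing. A Python illustration of a nonzero value printing as zero:]

```python
x = 1.0e-9           # tiny, but definitely not zero
print(f"{x:.6f}")    # prints 0.000000, which looks exactly like zero
print(x == 0.0)      # False: comparisons see the real value
print(repr(x))       # 1e-09, the full story
```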
    • Apr 22 2006 | 7:38 pm
      > 32-bit floats have sign bit, 8-bit exponent, 24-bit mantissa
      > 64-bit floats have sign bit, 11-bit exponent, 53-bit mantissa
      exactly, which is why some people claim 64-bit audio is the superior format for recording and would beat "double precision" (dual 32-bit float = 48 bits of precision) when it comes to sound quality.
      we should wait until CoreAudio supports Sony PlayStation GPUs so that we can finally use 128-bit audio files (and tuning!) - everything less precise is unusable for making music anyway.
      and during your next live show your granny sits in the first row of chairs shouting:
      "3107.975757744002903894002834757! that ought to be 3107.975757744002903894002834757! but you played a 3107.975757744002903894002834758!!"
      -110.000000000000000000000000000000000000000000000000001 Hz
    • Apr 22 2006 | 7:42 pm
      > true, although the float number box unfortunately doesn't show all > digits of the fraction.
      normally it should, as soon as its range does not exceed the range you want to represent in it/send into it.
    • Apr 22 2006 | 7:51 pm
      I suppose that means that once that comes out, I'll have to upgrade from these 2-bit speakers.
      > we should wait until CoreAudio supports Sony PlayStation GPUs so
      > that we can finally use 128-bit audio files (and tuning!) -
      > everything less precise is unusable for making music anyway.
    • Apr 23 2006 | 2:00 pm