Writing buffer~ to file: small differences in sample values?


    Feb 15 2015 | 9:47 pm
    Hi
    I have come across some strange behaviour when writing a buffer~ to a file. When I load an audio file into a buffer~ object and then write the buffer~ content to a new file, the two files have small differences in individual sample values – not all samples, but many of them. For example, I might read a value of 0.005188 on a given sample in the original file (using a peek~ object) and a value of 0.005157 on the same sample in the newly written file. This difference would suggest some sort of difference in bit depth (?). I know that you can set the bit depth on the buffer~ with the 'format' message, but this doesn't really seem to be the solution: if, for example, I set the buffer~ to the format 'int16' and read a file with that format, the result is the same when I write the buffer~ to a file – small differences in the sample values.
    The reason for this question is that I am working on a Max application for automating splitting of audio files, and I would like to be able to save the extracted files without any degradation of quality.
    I have attached a patch for testing the issue.
    I hope that someone can enlighten me on this topic. Maybe I'm doing something wrong, or there is an aspect that I have missed.
    Best regards Jakob

    • Feb 16 2015 | 8:47 am
      buffer~s in maxmsp are always float32. the 'format' message is for setting the output sample format of the file that will be written to disk with 'write'. that means, as long as your starting or destination soundfile format is not float32, there will be some conversions going on - and most soundfiles are still stored in integer format. i wouldn't say that max is the perfect tool for this task. hth, vb
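      vb's point can be sketched in Java. This is only a guess at the kind of asymmetric conversion that could produce the exact values from the original post – the scale factors (1/32768 on read, 32767 on write) and the truncating cast are assumptions for illustration, not what Max actually does:

      ```java
      // Hypothetical int16 <-> float32 round trip with mismatched scale
      // factors and a truncating cast; reproduces 0.005188 -> 0.005157.
      public class RoundTrip {
          public static void main(String[] args) {
              short original = 170;                          // an int16 sample
              float decoded = original / 32768.0f;           // ~0.005188 (what peek~ might show)
              short reencoded = (short) (decoded * 32767f);  // 169.99... truncates to 169
              float decodedAgain = reencoded / 32768.0f;     // ~0.005157
              System.out.printf("%.6f -> %.6f%n", decoded, decodedAgain);
          }
      }
      ```

      One least-significant-bit of drift per sample is exactly the size of the differences reported in the original post.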
    • Feb 16 2015 | 11:19 am
      Thank you for your reply, Volker.
      I understand the problem :-) Though I am still a bit puzzled that an int16 file cannot be converted to float32 and back without losing precision... Maybe some interpolation goes on in Max (?) I think that I will do the actual splitting in a Java external instead and use Max mostly as a user interface.
      Best regards Jakob
    • Feb 16 2015 | 5:14 pm
      There's always going to be a loss of precision when casting from floating-point numbers to fixed-precision (integer) numbers whenever the float value falls between two integer codes. It's inherent to the different ways the two formats are represented, and not specific to Max.
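      A minimal Java sketch of that loss (the 32767 scale factor is just an illustrative choice): a float value that falls between two integer codes has to land on one of them.

      ```java
      // A float sample quantized to int16 and back does not return exactly.
      public class CastLoss {
          public static void main(String[] args) {
              float x = 0.3333f;
              int code = (int) (x * 32767f);  // 10921 -- the fractional part is discarded
              float back = code / 32767f;     // slightly less than x
              System.out.println(x + " -> " + code + " -> " + back);
          }
      }
      ```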
      If this is a problem, you might want to look into doing this via Sox via shell scripts or some other utility. FWIW, sox is great for these sort of tasks and has quite a few options. I had a bunch of split stereo files that I was able to quickly convert into interleaved files.
    • Feb 16 2015 | 9:30 pm
      At a guess – I think, but I'm not sure – the reason for the value perturbation is that MSP stores 16-bit fixed values between -1 and 1. I say this because if I try storing a value greater than 1 in fixed16, I have observed with peek~ that the value always appears to be stored as -1.
      That means that in 16-bit fixed, one bit indicates sign and the other 15 bits give 32768 values between 0 and abs(1); that is, the mantissa is 15 bits long, give or take a tiny smidgen for the endpoint values. The IEEE float32 mantissa is 24 bits long (23 stored plus an implicit leading bit), with the same smidgen.
      But often when converting between them, the conversion might not pad the missing bits with 0. For example, if the mantissa is .3333 (decimal), it may pad the extra precision with 3s (decimal), because it guesses the value is meant to be 1/3. There's been some debate over the years about the best conversion method; I lost track of the debate some time ago and don't know if there is a predictable convention across all software.
    • Feb 16 2015 | 9:37 pm
      The reason I can answer this question is that I had a debate at IEEE over the floating-point format when I was working at Intel. In my wild youth, I strongly objected to the fact that the IEEE format includes two versions of zero: plus zero and minus zero. I also objected to Intel's handling of denormals in microcode, but by then I had created too much enmity in the earlier debate about plus and minus zeroes, and I sadly got overruled.
      In my latter years I have thought about things like that quite a bit and figured out a better way I should have behaved, which, had I done so many years ago, would have resulted in a much better Pentium I microprocessor. I just mention that because some people have started picking fights with me :)
    • Feb 17 2015 | 10:20 am
      Thank you, Peter and Ernest, for your replies :-)
      I understand that converting from float to int can result in a loss of precision, but if the 'original' was an int in the first place, then I would think it was possible without losing precision. Like this, for example, in Java:
      int originalInt = 598;
      float floatVersion = (float) originalInt; //== 598.0
      int backToInt = (int) floatVersion; // == 598 (same as original)
      But it seems that it is a bit more complicated than that, as Ernest also points out, even though, I must admit, I don't completely understand the details :-) But I accept that the conversion has its reasons.
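      For what it's worth, Jakob's intuition does hold when the scaling is symmetric: every int16 value is exactly representable in float32, so with one consistent scale factor and rounding the round trip is lossless. A sketch in Java (the 1/32768 factor is an assumption for illustration, not necessarily what any particular converter uses):

      ```java
      // With a single scale factor and rounding, int16 -> float32 -> int16
      // returns every one of the 65536 possible sample values unchanged.
      public class LosslessRoundTrip {
          public static void main(String[] args) {
              for (int s = Short.MIN_VALUE; s <= Short.MAX_VALUE; s++) {
                  float f = s / 32768.0f;                        // exact: |s| < 2^24
                  short back = (short) Math.round(f * 32768.0f); // recovers s
                  if (back != s) {
                      System.out.println("mismatch at " + s);
                      return;
                  }
              }
              System.out.println("all 65536 values round-trip exactly");
          }
      }
      ```

      The asymmetry only appears when the read and write paths use different scale factors, or when the re-encoding truncates instead of rounding.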
      Best regards Jakob