Attempt at normalizing each period of audio in Gen. Need to reset sah object?

    May 23 2020 | 8:00 pm
    EDIT: JF posted a solution at the link below. Very nice!
    I am attempting to delay a signal by the length of its period and then normalize that signal using the largest value found during that same period. This way, each discrete period of audio is multiplied up to full scale.
    My WIP patch is attached with comments.
    I will run down the basic idea I have here:
    1. Using contents of gen~.zerox to obtain zero crossing and period length information.
    2. Using sah, >p, and history to obtain the largest value (I think the issue is here).
    3. Dividing 1 by the largest value in the period to obtain my multiplier.
    4. Delaying the signal by the period length and normalizing it by the multiplier.
    What I think I need to do is reset the sah object so a new largest value can be found every period, but I can't figure out how to do this. I attempted to bring this set of objects into codebox and reset it with an if statement, but had no success. I think the answer to this is simple, but it's eluding me.
    edit: I will also need a way to save the previous period's largest value as a variable in codebox and output it, so it can normalize the delayed, previous period while sah is reset and busy obtaining the current period's largest value.
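    The steps above could be sketched in a gen~ codebox roughly as follows. This is only a hedged sketch, not the poster's patch: the variable names (current_max, held_max, prev_sign) are my own, the zero-crossing test is simplified to rising crossings, and the actual delay of the input by the period length (as in the original patch) is omitted.

    ```
    // Sketch: per-period peak tracking in gen~ codebox (GenExpr).
    History current_max(0);   // running peak of the period in progress
    History held_max(1);      // peak of the last completed period
    History prev_sign(0);

    sign = in1 >= 0;
    crossing = (sign != prev_sign) && sign;  // rising zero crossing = new period
    prev_sign = sign;

    if (crossing) {
        held_max = current_max;   // publish the finished period's peak
        current_max = abs(in1);   // start measuring the new period
    } else {
        current_max = max(current_max, abs(in1));
    }

    // multiplier that would bring the delayed period to full scale
    out1 = 1 / max(held_max, 1e-9);
    ```

    The `max(held_max, 1e-9)` guard is just there to avoid dividing by zero during silence; the multiplier in out1 would then be applied to the period-delayed copy of the signal, as described in step 4.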

    • May 24 2020 | 9:20 pm
      Starting over and trying to make something in codebox from scratch, since I can't make any headway with this patch.
      Here's some really simple code in my codebox:
      largest = 0;
      if (in1 > largest) {
          largest = in1;
      }
      out1 = largest;
      I'm not sure why this isn't giving me the largest value. Shouldn't largest remain the same value if the statement is false?
    • May 25 2020 | 12:22 am
      Did some research.
      I also realized that peakamp~ wouldn't work, as it is not signal rate, and I need to be able to update the maximum quickly and accurately. This user didn't seem to have much luck either.
      This user apparently made a custom peakamp~ object, but it appears to have been lost, as that post is from 2007. Not sure if it would help me, but it would have been useful to look at.
      This is an interesting idea, but I have no knowledge of mxj, and I'm not sure whether updating the length of the buffer in realtime would be an issue, or whether it would be as easy as just using delay in my attempt.
      I don't really have anything left to try based on my knowledge of gen and Max, so I don't know where to go from here.
    • May 25 2020 | 3:48 am
      I'm not familiar with gen~, but this kind of error can cause problems in Max patchers. Try this code:
      largest = 0.0;
      if (in1 > largest) {
          largest = in1;
      }
      out1 = largest;
      The .0, at least at patcher level, forces the system to use doubles (floating-point arithmetic). A 0 without .0 is treated as an integer, and if the internal cast is done naively, it will simply strip all the decimals before comparing against an integer 0.
      Since audio levels usually run from -1 to +1, the largest var would then stay 0 forever. The comparison might effectively be translated to if( ((int)0.9999) > 0 ), and a standard int cast just removes the decimals. Only in high-level languages are "Math.Round"-style routines called for type casts.
      A second thing: you should track a bipolar max value, or take abs(in1) before comparing. But that is just to be clean and really get the max amplitude. First I would concentrate on why you don't get a max value at all, and I think the float issue may be the reason.
    • May 25 2020 | 3:53 am
      Erm... haha, and another thing: if you code it like that, you set the "largest" var back to zero before every comparison, so of course only the very last sample will be recognized.
      But this is just what I see from the code, without knowing the exact gen~ spec.
      I think there should be some global init part and a function that is called for every sample, or something like that, so you can init the var once with 0.0 and then not repeat that initialization on the succeeding comparison calls.
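      In gen~ terms, the "global init part" described above maps onto the History operator: a History variable is initialized once and then persists its value from sample to sample, whereas a plain codebox variable is re-created on every sample tick. A minimal sketch along those lines (my own naming, combining the abs() suggestion from the previous reply):

      ```
      History largest(0);      // initialized once, persists across samples

      v = abs(in1);            // bipolar-safe, per the suggestion above
      if (v > largest) {
          largest = v;
      }
      out1 = largest;
      ```

      With History, out1 ratchets upward as larger absolute values arrive, instead of collapsing back to the current sample each tick.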
    • May 25 2020 | 4:35 am
      I used an abs object in my original patch at the top; for my test case I'm just feeding a sig~ into the input to see if I can get it to hold the largest value.
      Changing the typecast using 0.0 as you demonstrated didn't fix the issue.
      I can see how it could be setting largest to 0 on every iteration, so I removed that line like so:
      if (in1 > largest) {
          largest = in1;
      }
      out1 = largest;
      The code behaves just as before with that line removed. Maybe I do need to initialize it as a float like you say, but it makes no difference whether the sig~ input is a value between 0 and 1 or a value greater than 1, so I'm not sure that's the actual problem.
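      For context on why removing the line changed nothing: in codebox, a variable assigned without a History declaration is still local to each sample tick, so it starts from 0 every sample either way. Since the thread's eventual fix lives at the link in the first post and isn't shown here, one hedged guess at a resettable running max, using History plus a reset trigger on a second inlet (in2 and the trigger convention are my assumptions):

      ```
      // Sketch: running max that can be reset each period.
      History largest(0);

      if (in2 > 0) {           // reset trigger, e.g. fired at each zero crossing
          largest = 0;
      }
      v = abs(in1);
      if (v > largest) {
          largest = v;
      }
      out1 = largest;
      ```

      Driving in2 from the zero-crossing detector would restart the search each period, which is the sah reset behavior the first post was after.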