MIDI range sonogram


    Jul 18 2020 | 7:26 pm
    Hello! I'd like to integrate this sort of sonogram: https://www.youtube.com/watch?v=H-Ci_YwfH04 into my visual score automata, replacing the current, rather boring header: https://www.twitch.tv/playmodes
    Most of the examples I've found work with a frequency range instead of pitch, but I would need it to be adjusted to a specific MIDI note range, as that is how my visual score sonification works (lower pixels being lower pitches, higher pixels higher pitches).
    Can anyone point out some tips or examples I could start from? Thanks for your help; I'm a noob with Jitter and wouldn't know how to approach this!

    • Jul 18 2020 | 8:21 pm
      Hi. The YouTube reference you gave seems similar in approach to the one on Twitch. The actual parameter represented on the Y axis could be frequency or MIDI pitch, and the range could also be adjusted (the one on YouTube is only mapping the range from 30 Hz to 1000 Hz). How are you doing things right now, and can you expand on what specific strategies you would like to implement?
    • Jul 18 2020 | 8:54 pm
      Yeah, the sonification strategy is the same, I believe. I am permanently generating graphics in Processing, storing the PNGs on the hard disk, and letting Max/Jitter stitch the PNGs together and create the scrolling image. For sonification, I am using the classic combination of jit.peek~ and ioscbank~. Here is the sonification snippet:
      Basically, the lower row of pixels is assigned to MIDI note 36, and the upper row of pixels to MIDI note 120. Because the image is 1080 pixels high, there are 1080 simultaneous oscillators mapped microtonally within this range. What I want to achieve is exactly the spectrogram line effect in the NASA reference: replacing my static vertical white line header with a spectrogram (sonogram? spectrograph?) showing the amplitude variations aligned to each pixel, so when a horizontal line in the middle of the image is intercepted by the vertical header, a visual peak is generated at that exact Y coordinate, just as in the NASA video. What puzzles me the most is how to draw a pitch-domain spectrogram instead of a frequency-domain one. Also, the overall FFT and Jitter trickery feels pretty esoteric for my level of knowledge...
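      Outside the patch, the pixel-row-to-pitch mapping described above can be sketched in a few lines of Python (assumption: rows are spread linearly in MIDI space, which matches the "mapped microtonally" description; `row_to_midi` and `midi_to_hz` are illustrative names, not objects from the patch):

      ```python
      # Sketch of the pixel-row -> pitch mapping described above (assumption:
      # rows are spread linearly in MIDI space from note 36 at the bottom
      # to note 120 at the top of the 1080-pixel image).

      HEIGHT = 1080       # image height in pixels
      LOW_NOTE = 36.0     # MIDI note assigned to the bottom row
      HIGH_NOTE = 120.0   # MIDI note assigned to the top row

      def row_to_midi(row: int) -> float:
          """Map a pixel row (0 = bottom) to a (fractional) MIDI note."""
          return LOW_NOTE + (HIGH_NOTE - LOW_NOTE) * row / (HEIGHT - 1)

      def midi_to_hz(note: float) -> float:
          """Standard MIDI-to-frequency conversion (A4 = 440 Hz = note 69),
          the same formula Max's mtof object uses."""
          return 440.0 * 2.0 ** ((note - 69.0) / 12.0)

      # The bottom row sounds MIDI 36 (~65.41 Hz), the top row MIDI 120 (~8372 Hz).
      print(midi_to_hz(row_to_midi(0)))      # ~65.41
      print(midi_to_hz(row_to_midi(1079)))   # ~8372.02
      ```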
    • Jul 19 2020 | 7:39 pm
      Hi again!
      120 − 36 = 84 semitones (MIDI notes). 1080 pixels / 84 notes ≈ 12.857 subdivisions/note. I would tend to implement an equal division of the semitone. For instance, with 90 notes: 1080 / 90 = 12 subdivisions/note.
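      The arithmetic above, written out as a quick sanity check (the 90-note span is just one possible choice, e.g. MIDI 30 to 120):

      ```python
      # Subdivision arithmetic: with the original 84-semitone range the 1080
      # rows don't divide evenly per note, but a 90-semitone range gives
      # exactly 12 rows per note, i.e. an equal division of the semitone.

      HEIGHT = 1080

      span_84 = 120 - 36            # original range: 84 semitones
      print(HEIGHT / span_84)       # 12.857... rows/note (uneven, detuned lines)

      span_90 = 90                  # widened range, e.g. MIDI 30 to 120
      print(HEIGHT / span_90)       # 12.0 rows/note (each row = 1/12 semitone)
      ```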
      Regarding the spectrogram, you could slice the column at the playback head position out of the matrix/texture and use a jit.gl.graph to draw the data.
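      In plain Python the idea looks something like this (in the patch it would be a matrix-slicing plus jit.gl.graph job; here a nested list stands in for the matrix, and `playhead_x` is a hypothetical playback-head column):

      ```python
      # Sketch of the "slice the playhead column" idea: the luminance of
      # each row at the playhead column becomes the amplitude curve to draw.

      def column_amplitudes(image, playhead_x):
          """Return the luminance of each row at column playhead_x,
          bottom row first, as the curve for the graph."""
          # image[row][col] is a luminance value in 0..1, top row first
          return [row[playhead_x] for row in reversed(image)]

      # Tiny 4x3 example: one bright pixel in column 1 produces a single
      # peak in the resulting amplitude curve, as in the NASA video.
      image = [
          [0.0, 0.0, 0.0],
          [0.0, 0.9, 0.0],
          [0.0, 0.0, 0.0],
          [0.0, 0.0, 0.0],
      ]
      print(column_amplitudes(image, 1))  # [0.0, 0.0, 0.9, 0.0]
      ```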
      Does this help?
    • Jul 20 2020 | 2:12 am
      Hi Pedro! I didn't think about the equal subdivisions... you're totally right; with the range I have now I would rarely get a perfectly tuned "line". Thanks for pointing this out!
      And regarding the jit.gl.graph example you provided... that's neat and works exactly the way I was looking for! Thank you very much! I'm going to integrate this method into my patch, and I'll let you know when I have the scores running on Twitch with this update.
      This is an amazing community to learn and share!
    • Jul 20 2020 | 11:11 am
      Already implemented and running: https://www.twitch.tv/playmodes
      Thanks again Pedro!!!
    • Jul 20 2020 | 12:51 pm
      Congratulations, I really like the end result. In particular, the diversity of the source material potentially leads to a longer experience than most manifestations like this, which tend to have a more repetitive nature... Just curious, in your mapping, does color have any impact on sound?
    • Jul 20 2020 | 1:22 pm
      Thanks Pedro! Yeah, because the graphics are generated "on the fly", driven by randomness and probability, there are always different results.
      Color by itself has no relevance in the mapping (although it will...), but because luminance = sinusoidal oscillator amplitude, brighter colors (green, yellow, white...) have a bigger volume than darker colors (blue, red...).
    • Jul 20 2020 | 2:39 pm
      Yes, I was referring to the chrominance part. In my experience, although the "classical" audio-visual association (at least throughout history) is to relate color to pitch (a musical octave), I don't find that association very "fortunate". If I have to relate color to pitch, I prefer to map the whole frequency spectrum, i.e., lower frequencies with warm colors (red, orange...) and higher frequencies with cold ones (blue, violet...). Other experiments would be to relate color to the timbre of the sound, or even to the stereo field (from left-red to right-blue)... Good work, once again!
    • Jul 20 2020 | 2:57 pm
      Yeah, I usually find color mapping kind of "esoteric". I would agree with the color-timbre approach though, which is where I am heading...
      :-)
    • Jul 21 2020 | 7:53 am
      One color per octave, lightness per key, base note variable.
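      One possible reading of that suggestion, sketched in Python (everything here is an assumption: the hue cycle length, the lightness range, and the `note_to_rgb` helper are all illustrative choices, not a prescribed mapping):

      ```python
      # Sketch of "one color per octave, lightness per key": pick a hue from
      # the octave number and vary lightness within the octave by pitch
      # class, with an adjustable base note.
      import colorsys

      def note_to_rgb(midi_note: int, base_note: int = 36):
          """Map a MIDI note to an RGB tuple: hue by octave, lightness by key."""
          octave, key = divmod(midi_note - base_note, 12)
          hue = (octave % 7) / 7.0            # cycle through 7 hues, one per octave
          lightness = 0.25 + 0.5 * key / 11   # darker low keys, lighter high keys
          return colorsys.hls_to_rgb(hue, lightness, 1.0)
      ```

      With this choice, notes an octave apart share a hue but differ in nothing else only after 7 octaves, when the hue cycle wraps around.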