Timbre Analysis in Max?

    Mar 18 2011 | 3:24 pm
    Hi All, I'm building an advanced visualiser and I'm at a loss finding objects/abstractions that can analyse the timbre of an incoming sound effectively. I'm stuck using Max 5 on Windows 7, which is compounding the problem.
    I've tried fiddle~, bonk~ and analyzer~, which work fine for basic spectral data & pitch.
    I've looked at CataRT, which does not work on Windows 7.
    I've also looked at timbreID, which looks perfect for my needs but is only available in Pd (and the pd~ object for Max is only available on Mac).
    What I'm really after are parameters like (off the top of my head) brassiness, hollowness, resonance or sinusoidal complexity. Does anyone have any idea how I might analyse things like this in real time in Max? I'm aware this is a very complex area, but it does not need to be hugely accurate - I just need to get as much data as possible out of the incoming signal.
    Thanks a lot!
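    Descriptors of this kind often reduce to simple statistics on a single FFT frame - for example the spectral centroid (correlates with perceived "brightness") and the spectral flatness (correlates with "noisiness"). A minimal NumPy sketch of the underlying maths only (the function name and frame handling are my own illustration, not a Max external):

```python
import numpy as np

def spectral_descriptors(frame, sr):
    """Two basic timbre descriptors from one audio frame.

    Returns the spectral centroid in Hz (higher = 'brighter' sound)
    and the spectral flatness in 0..1 (tonal sounds near 0,
    noise-like sounds well above 0).
    """
    window = np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(frame * window))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)

    total = spectrum.sum()
    centroid = (freqs * spectrum).sum() / total if total > 0 else 0.0

    # Flatness: geometric mean / arithmetic mean of the power spectrum.
    power = spectrum ** 2 + 1e-12  # small floor to avoid log(0)
    flatness = float(np.exp(np.mean(np.log(power))) / np.mean(power))
    return float(centroid), flatness
```

    A 440 Hz sine gives a centroid near 440 Hz and a flatness close to 0; white noise pushes the flatness much higher.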

    • Mar 18 2011 | 11:00 pm
      if I remember correctly, the Gabor/FTM lib has some analysis tools
    • Mar 19 2011 | 4:03 am
      you might try zsa.descriptors from ej:
      I haven't yet, but it looks like they do some of what you're looking for.
      (if I'm way off, my apologies :)
    • Mar 19 2011 | 2:02 pm
      Tristan Jehan's externals give a little bit more - noisiness, loudness, brightness, Bark scale, beat detection. There is a handy one called analyzer~ that'll output all of the above except beat.
      You can also integrate the Echo Nest API into Max4Live, which would give you a lot of analysis parameters. I have not done this, nor have I any idea how easy or difficult it would be. The Echo Nest is also Tristan Jehan's work, so I think it is pretty powerful, but it might be overkill depending on your purposes.
    • Mar 20 2011 | 12:20 am
      Hey all thanks for the tips on this. Since last posting, I've had a look at:
      zsa.descriptors looks handy but doesn't seem to do anything major that Tristan Jehan's analyzer~ can't do with a bit of creative thinking (granted, everything's very easy to implement in zsa.descriptors).
      Echo Nest API looks like it does song recognition rather than instrument recognition. Am I wrong?
      Gabor looks interesting but is a bit buggy on my machine. I can't get the analysis example patch to load, but I will keep playing with it. The spectral tracker looks good - it has >100-band Bark/harmonics analysis, which is really cool.
      It's a pity there's nothing that combines a few of these techniques into one tool. I'm very new to all this, but I've read that the best way to analyse timbre is to compare spectral data over time, so an object that could buffer some data before outputting would likely be the most accurate.
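      That buffering idea can be sketched very simply - keep the last N magnitude spectra and expose their running mean. A hypothetical Python sketch (the class name, frame size and frame count are my own illustrative choices, not an existing Max object):

```python
import numpy as np
from collections import deque

class SpectralAverager:
    """Buffer the last `n_frames` magnitude spectra and expose their mean.

    Averaging over a few seconds of frames gives a stable 'average shape'
    of the spectrum, which is more robust for timbre comparison than any
    single frame.
    """
    def __init__(self, n_frames=100):
        self.frames = deque(maxlen=n_frames)  # old frames drop off automatically

    def push(self, frame):
        window = np.hanning(len(frame))
        self.frames.append(np.abs(np.fft.rfft(frame * window)))

    def average(self):
        if not self.frames:
            return None
        return np.mean(np.array(self.frames), axis=0)
```

      With one frame every 30 ms, the default of 100 frames covers roughly three seconds of input.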
    • Mar 22 2011 | 3:53 pm
      Hey everyone,
      I just released en_analyzer~, an external that provides access to The Echo Nest's audio analysis API from within Max. The object, as well as an example patch and source code, is available here:
      This should be significantly easier to set up than the Max4Live route (which requires compiling an mxj object). en_analyzer~ uploads the audio data in a buffer~ to The Echo Nest for analysis, then retrieves the analysis data, which includes timbre descriptors. As grizzle pointed out, this analysis is the work of Tristan Jehan and shares some of the underlying principles of his earlier Max objects. You can read more about The Echo Nest's audio analysis here:
      cronoklee, unfortunately this object only exists on Mac at the moment. I'm working on the Windows version, but since I'm not an experienced Windows developer it's a slow process.
    • Mar 23 2011 | 9:38 am
      Hello Ben,
      this is very interesting - bravo for the Max integration. I would have two suggestions:
      - it would be nice to explain the role of the API key in the help file and make the path to registering one easier; less web-savvy users (like me) may abandon the process if they don't fully understand how to do that "quickly".
      - when an analysis is done, you may wonder: "where is the analysis data, and how do I store it?". Perhaps a "dump" message that outputs the entire data set as text through a dedicated outlet could facilitate storing the data on the user's hard disk - so one can easily keep a local directory synced with local soundfiles. Of course, I have no idea how much work this may represent, so that's just my 0.2 €.
      Bravo again.
    • Mar 17 2012 | 2:27 pm
      Just wondering how you got on with this, Cronoklee? I'm just researching the topic for a project of my own and found this thread.
    • Jul 01 2015 | 2:15 pm
      I am also wondering if anyone has had any updates on this since 2011. Like @Mark Durham, I am looking for something similar for a project and found this thread.
    • Jul 01 2015 | 8:48 pm
      check descriptor objects in AHarker externals
    • Jul 02 2015 | 10:51 am
      Hi Federico, I actually ended up building an external for this, which worked quite well. It took a snapshot of all the partials in the incoming sound every 30 ms and added it to the historical snapshots. Over a few seconds, you can build up a kind of average shape of the harmonic spectrum of the incoming sound. I used it reasonably successfully to distinguish instruments from each other.
      I don't have the code to hand but I'll try to dig it out later and post it here.
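      In the meantime, here is a hypothetical Python sketch of the matching step described above: averaged spectra stored per instrument, with an incoming averaged spectrum matched against them. All the names are my own, and cosine similarity is an assumption for the comparison - not necessarily what the original external used:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two averaged magnitude spectra
    (stays in 0..1 here, since magnitudes are non-negative)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0

def classify(avg_spectrum, templates):
    """Return the name of the stored instrument template whose averaged
    spectrum is closest to the incoming one.

    `templates` maps instrument name -> averaged spectrum, built the
    same way (snapshots accumulated over a few seconds)."""
    return max(templates,
               key=lambda name: cosine_similarity(avg_spectrum, templates[name]))
```

      Using the overall spectral shape rather than absolute levels means the match is insensitive to loudness, which is one reason the averaging approach works reasonably well for telling instruments apart.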
    • Feb 19 2017 | 11:57 am
      Hi Cronoklee, I know it's been a long time. I'm looking to do something similar. Do you have a video of how the visualiser looked?