So I'm building the brain/analyzer section of my main performance patch and the idea is to use Alex Harker's descriptors~ object to dynamically create/alter 'presets' for each of the modules in my patch based on what I record into a buffer.
A rough description of the parent patch:
A sampler/looper (poke/groove) running into a buffer slicer/shuffler, a granulizer, a pattern recorder, and a few dsp effects (lofi, dirt, filter, etc...)
I'm primarily an instrumentalist, so I'm trying to make the patch as intelligent/autonomous as possible; my main controls for each module are minimal (generally an on/off and one parameter), although while coding I did expose several more parameters for each module to be controlled automatically/dynamically.
Now I'm using the non-realtime version of descriptors~, since the functionality I'm going for is to record into a buffer; once that's done, descriptors~ would be called on to set all of the exposed parameters, and they would sit like that until a new recording is made (or overdubbed on).
The first snag I'm running into is that, unless I'm missing it, the help file doesn't specify the range/unit that each descriptor outputs. It specifies what units each descriptor's parameters are in, but not what comes out. I've set up a clunky 'test' section using a bunch of peak objects to give me a ballpark of what's happening. Not ideal, as the numbers vary a bit from sample to sample. Does anyone know where I can find this info?
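In the meantime, one workaround for the missing range documentation is to run the analysis over a pile of representative recordings and track a running min/max per descriptor, rather than eyeballing peak objects. Here's a sketch of that idea in Python (not Max; the frame data below is hypothetical stand-in values, not real descriptors~ output, which you'd dump to a coll or text file and parse):

```python
# Sketch: empirically estimate each descriptor's output range by tracking
# running min/max across many analysis frames.
def estimate_ranges(frames):
    """frames: list of dicts mapping descriptor name -> value.
    Returns dict mapping descriptor name -> (min, max) seen."""
    ranges = {}
    for frame in frames:
        for name, value in frame.items():
            lo, hi = ranges.get(name, (value, value))
            ranges[name] = (min(lo, value), max(hi, value))
    return ranges

# Hypothetical frames from three different recordings:
frames = [
    {"loudness": -23.0, "centroid": 1200.0},
    {"loudness": -6.5,  "centroid": 4800.0},
    {"loudness": -14.2, "centroid": 2500.0},
]
print(estimate_ranges(frames))
# {'loudness': (-23.0, -6.5), 'centroid': (1200.0, 4800.0)}
```

The more varied material you feed it, the closer the observed min/max gets to the practical range, which is usually good enough for scaling purposes even without official documentation.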
Next up is how to handle the data itself. This is a big one.
Once I've figured out the functional range for each descriptor, I was going to scale everything to 0.–1. (as that's what all my parameters are set to). After that I was thinking of directly 1-to-1 mapping things to parameters that seem relevant, generically (like energy/loudness being tied to distortion amount, etc...). This becomes a bit trickier when dealing with more subtle descriptors/parameters. For those I was planning on crunching some of the numbers together (using some basic math). Not ideal, but it would be somewhat predictable in its outcome.
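The scale-then-map step could look something like this (Python standing in for a chain of scale/clip objects in Max; the ranges and descriptor-to-parameter pairings here are made-up examples for illustration, not descriptors~'s actual output ranges):

```python
def normalize(value, lo, hi):
    """Clamp value into [lo, hi] and rescale to 0..1
    (roughly what clip + scale do in Max)."""
    if hi == lo:
        return 0.0
    v = max(lo, min(hi, value))
    return (v - lo) / (hi - lo)

# Hypothetical functional ranges, e.g. measured with the peak-object test section.
RANGES = {"loudness": (-40.0, 0.0), "centroid": (200.0, 8000.0)}

# Generic 1-to-1 mappings: descriptor -> exposed module parameter.
MAPPING = {"loudness": "distortion_amount", "centroid": "filter_cutoff"}

def make_preset(descriptor_values):
    """Turn one set of descriptor readings into a 0..1 preset dict."""
    preset = {}
    for name, value in descriptor_values.items():
        lo, hi = RANGES[name]
        preset[MAPPING[name]] = normalize(value, lo, hi)
    return preset

print(make_preset({"loudness": -10.0, "centroid": 4100.0}))
# {'distortion_amount': 0.75, 'filter_cutoff': 0.5}
```

For the "crunch some numbers together" cases, the same structure works with a small function per parameter (e.g. a weighted average of two normalized descriptors) instead of a single dictionary lookup.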
The goal for this would be to have it generate a preset that is musically interesting and fitting to the content in the buffer. I can imagine that this desire alone could be quite a lengthy discussion... For the purposes of my patch, since other things are at play, and since this is fundamentally being used as an 'addition' to an acoustic instrument, good enough would be great for me!
Lastly (again, I'm likely missing something, as the objects seem quite comprehensive): is there a way to pull 'attack' data from the realtime object, as in sending a 'bang' when an attack is detected? (Similar to the 'attack' output of Tristan's analyzer~ object.)
Here is my testing/working patch showing the descriptors I'm using (all at default values)