Using descriptors~/descriptorsrt~ to generate dynamic 'presets'?
So I’m building the brain/analyzer section of my main performance patch and the idea is to use Alex Harker’s descriptors~ object to dynamically create/alter ‘presets’ for each of the modules in my patch based on what I record into a buffer.
A rough description of the parent patch:
A sampler/looper (poke/groove) running into a buffer slicer/shuffler, a granulizer, a pattern recorder, and a few dsp effects (lofi, dirt, filter, etc…)
I’m primarily an instrumentalist, and I’m trying to make the patch as intelligent/autonomous as possible, so my main controls for each module are minimal (generally an on/off and one parameter), although while coding I did expose several more parameters per module to be controlled automatically/dynamically.
Now I’m using the non-realtime version of descriptors~, since the functionality I’m going for is to record into a buffer; once that’s done, descriptors~ would be called on to set all of the exposed parameters, and they would sit like that until a new recording is made (or overdubbed onto).
The first snag I’m running into is that, unless I’m missing it, the help file doesn’t specify the range/unit that each descriptor is outputting. It specifies what unit each descriptor’s parameters are in, but not what comes out. I’ve set up a clunky ‘test’ section using a bunch of peak objects to give me a ballpark of what’s happening. Not ideal, as the numbers vary a bit from sample to sample. Does anyone know where I can find this info?
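For what it’s worth, the ‘clunky test section’ with peak objects amounts to tracking a running min/max per descriptor, so repeated test recordings converge on a usable range. Here’s a Python sketch of that idea (not Max code; the descriptor name and values are just invented examples):

```python
# Track the running min/max of a descriptor's output so its empirical
# range can be discovered from test recordings, then used to normalize.
class RangeTracker:
    def __init__(self):
        self.lo = float("inf")
        self.hi = float("-inf")

    def update(self, value):
        # Widen the observed range as new analysis frames arrive.
        self.lo = min(self.lo, value)
        self.hi = max(self.hi, value)

    def normalize(self, value):
        # Map an incoming value into 0..1 based on what we've seen so far.
        if self.hi == self.lo:
            return 0.0
        return (value - self.lo) / (self.hi - self.lo)

# Hypothetical per-frame loudness values (dB) from a few test recordings:
loudness = RangeTracker()
for frame in [-42.0, -18.5, -30.2, -6.1]:
    loudness.update(frame)

print(loudness.lo, loudness.hi)   # -42.0 -6.1
print(loudness.normalize(-18.5))  # ~0.65
```

In Max terms this is basically a pair of peak/trough objects feeding a scale object; the sketch just makes the logic explicit.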
Next up is how to handle the data itself. This is a big one.
Once I’ve figured out the functional range for each descriptor, I was going to scale everything to 0. 1. (as that’s the range all my parameters use). After that I was thinking of directly mapping things one-to-one to parameters that seem relevant, generically (like energy/loudness being tied to distortion amount, etc.). This becomes a bit trickier when dealing with more subtle descriptors/parameters. For those I was planning on crunching some of the numbers together (using some basic math). Not ideal, but it would be somewhat predictable in its outcome.
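To make the mapping idea concrete, here’s a Python sketch of what I mean by one-to-one mappings plus ‘crunched’ blends for subtler parameters. All descriptor names, ranges, and weights here are invented placeholders, not descriptors~ output:

```python
def scale(x, lo, hi):
    # Clamp, then rescale a raw descriptor value into 0..1
    # (equivalent to a clip + scale pair in Max).
    x = max(lo, min(hi, x))
    return (x - lo) / (hi - lo)

def make_preset(d):
    # d: dict of raw descriptor values from a finished recording.
    # The ranges below are assumptions, standing in for whatever the
    # empirically measured descriptor ranges turn out to be.
    loudness  = scale(d["loudness"], -60.0, 0.0)   # dB
    centroid  = scale(d["centroid"], 0.0, 8000.0)  # Hz
    noisiness = scale(d["noisiness"], 0.0, 1.0)

    return {
        # straight one-to-one mappings
        "distortion_amount": loudness,
        "filter_cutoff": centroid,
        # a subtler parameter crunched from two descriptors
        "grain_scatter": 0.7 * noisiness + 0.3 * centroid,
    }

preset = make_preset({"loudness": -12.0, "centroid": 2000.0, "noisiness": 0.4})
print(preset["distortion_amount"])  # 0.8
```

Since every weighted blend of 0..1 values (with weights summing to 1) stays in 0..1, the crunched parameters remain predictable, which is what I’m after.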
The goal for this would be to have it generate a preset that would be musically interesting and fitting to the content in the buffer. I can imagine that this desire alone could be a quite lengthy discussion… For the purposes of my patch, since other things are at play, and since this is fundamentally being used as an ‘addition’ to an acoustic instrument, good enough would be great for me!
Lastly (again, I’m likely missing something, as the objects seem quite comprehensive): is there a way to pull ‘attack’ data from the realtime object, as in having it send a ‘bang’ when an attack is detected? (Similar to the ‘attack’ output of Tristan Jehan’s analyzer~ object.)
Here is my testing/working patch showing the descriptors I’m using (all at default values)
----------begin_max5_patcher---------- 1703.3oc2c1sahqCDG+Z3oHhqaq7214b2443npp.Xn4zPBJInscWs8Y+jXGZ oaYS7RggIGoU.4Cn+mewdr83wd+wzIylW7rsZVzeE8OQSl7ioSl3NU6Ilzc7 jYaRddQVRk61lka+Vw7+c1M9KUaet1c546VsxV9ZTsspNRPHj82w1j5EOllu 9gR6hZ+enXo5NxMQBEs8MJw8FibGI59tuzph75pzuaaueJq4x9SmuaSZdls1 oD56mrXW89yx5Na5RmrZj5s7YG7qlmrw8qN6uKSSx1eE+OP8KasdENaUVQR8 raZrqj70yhtu8194zosubyWCTIKW7ZOrQKM2IafBMtEJb8kEMF8Ivlpz04sW 9sOcR3ogvMxqGRnDsfPF6.gjDJH1lT1XF01xGr4IyybW9KBI00o.zfDhxUXA QRTUGqZQRlMhbWzsllWn207w.vnT3cGwOo5bpixJ5QXk3DX0UDObiqzESCCc 3fQmM1ppj01OgmdoQLg4bLKbEV3LmyZyeHMXASC10trxVaxS8W9PabEPDD4e lWn.HB+yDQG+U7zjl+1a.CItBPHYHiRHozZHKIIGmPhIgDRpQIjjZNjPRONg DiBIjLiRHIjwPBI53DRDPcbyFkPhKA0wMebBIBnNtEiRHwDj1gc.EjFmcljZ L.BI03bXIcg9.JHcU6BvfgSqKzqByUObZRNRi3nlnvBhDHEQJo.KHRhUDQXX AQZjhHojhEDYPJhDwwXAQwXEQBMRPjhfTDwikXAQTrhHAGKHhgTDwhwh6ZEV 6539Ash.Dg0tNteHqH.QRbmxCX.QXMqPLXY7GJMpRJjgiFRG4fIXHRLFE6j4 UAEeVt1Om+7yX5ocjzePd5cIBTpPEFOUnfPEx3fJ9pS.AEQLrPYQwlM11Jge J4gHDZew9f1k3P976L9LRExQnReCdGPqVqcCDsypExKrUqgxpY8YyL4A1L+R ayJnrYSuYvCGRaVBiMyX8lpIDHMYALlLOd3DGAJSlCiIatqOaVnTP9XlATI6 9WnFBp.RilhBilqXPZzDXL5aU8YxT8AUnYW3Nkvigpop95RB6COlu31rACcI gQYP9bFntgcKqupyT+r5CkMCT2vtk1+BrCxGym4dgY+9xjdWhbFoeon3W7E9 ASz95alW.Aj4X1Ae1YbQjrzVsnLcacQYUjM2Vt9knrhcKyat8nUY6dNZySYQ sgjJYy1nrz7GVz7XtrHco6fpsk1jtO9j8atuU6AOsqrtnJ0ev7xz0OV6tV0p MQ4EoU1GJSpSKhRyeLobSQd5hz5WhJK1s9w16auJa911EE6xqODPGqidTWG8 TZ+J.py4wkZwtvjW6E6xAOzJqec3kyAU53iQbYisA6TBUXVZ0EKQoz9XiFyt 5y7NVCBulZPBhnBrxHkRgEFg1YMsKbgXfQFrlBGREGMLBqki5ZIGCLJFqLpa o9fBFg0TTnak9fBFg0bTnag9fBFg0196lGYTvHrlCvLICMLBqIALiPPCivZV .2kcBHfQLBV6CYrBMHhhprlZW91jEs4MUf+aXF620uTwm0DBQbLR9kxvJ3+v YcSrIIOI6kJqa+XavHE62.oztOe41LaXjq+d6y+KBKquhD0m7PW9vxRDHJtr uNnCFpwMDEs5BGTVh7rDUVm5bO5+ksgQmA1d9OxqphckK1+yseO7K5cirgU0 o4sE2xO3lZCg7A2zioKWZyOr0pkoUsMf4rOxQexErdDAnm1MLNnzCOD8nfSO rPzSSGkhn3QOsa6VnROFjoGMxziBY5QhL8HPld3HSOLjoGJxzCx7OqPl+YEx 7OKAr7LID8H.q+FtltGTOb3ziID8DCmdBo+yRCb5QEhdzvoGYH5At9yqCY7E J.KOGx3KTvM9KcP9movoGZH5g.mdBw+rBN+ypP7Oqfq9kJD+yJ3FurTE53SA 
ROAU+BN+yRZni2AH8PBs+gvnGQbn8ODF8vj3JdcrPJO6lLEnDDQhNEIBRQR. UD+rnnMoK2VjlW2E6YJwsfun98X7X+++77gC7K7eJws1VcG8tFNiV2GGt1uI FoB.wcPAskCrfnzPB6FnJhOjiVExdn4lvcXYzfJh.rhXCnHWts.ph3CpHCvJ RLnhz.qH4fJB55ZpAUDz9izCpHnaDwLnhftUj3gTD7sqQFTRDr0TKUANkFzs s.RG.gLLTAjsiDTb3gqbzu1HZOSjCTBxD5LcAkfBJT7T.EjJz4tEJAICcxag RPhPm8VnDDOzzGAHAET9Go0.JHRnIPBPBRnBMinNEAEc+zeN8+r0Gs+D -----------end_max5_patcher-----------
Heh, I knew I got carried away writing a long OP.
1. How can I find out the range/unit of descriptors~’s output? (I’m not seeing it in the help.)
2. Has anyone made an audio-analysis-to-preset module (non-realtime)? How did you determine what to map to what (other than straight one-to-one stuff)?