Today, we released Benjamin D. Smith’s exciting new Machine Learning package for Max, ml.star. Machine Learning is a hot topic these days, but it can be tough to figure out where to get started if you aren’t a researcher in the field. For the beginners among us, there are some good example patchers to be found in the package along with lots of documentation. I asked Ben to describe the package in his own words and let all of us know why we should be using it.
Here’s what Ben had to say:
For me, machine learning is all about exploration and discovering new possibilities in my music and art. Machine learning encourages flexible, non-linear approaches that can be surprising and go to unexpected, but connected, places. Solving musical problems with fixed, logical patches doesn't always allow us to go beyond, to find new things, new sounds, new ways of looking at our work. Above all, machine learning is great at enabling emergent behaviors and connections, and at finding spaces we can't anticipate when we first start in on a new patch or work.
This package came about through my personal obsession with human and computer co-performance. The notion that both are agents working together to enact greater musical collaborations is endlessly fascinating to me. But I came up against challenge after challenge: my patches could only do as much as I intentionally coded in; they couldn't grow; they couldn't handle truly unexpected input (or they crashed); and they couldn't learn how I played and find new things to create. I quickly realized that the next step in this direction had to be smarter algorithms, i.e. machine learning models.
If you’ve never used machine learning, or worked with a neural net of your own, why would you want to?
I think the true power of ML for Max users is twofold:
1. It can tell us useful things about data streams, such as MIDI data from a keyboard or controller, or audio features from an acoustic musician.
2. It can create natural but highly complex mappings, allowing us to easily control large patches, synths, and generators without requiring extreme expertise or cognitive load.
Machine learning models are primarily designed to classify things. In images, they can identify whether an animal is visible and what kind of animal it is. In music, we can identify chords, or keys, or harmonic movement, or melodic or timbral gestures! And when a model "hears" something it doesn't recognize, it can tell us how related it is to what it already knows (if I play a cluster on my keyboard, it will tell me how close it is to all the chords I've already told it about). This information can then be plugged in to drive all your favorite processes and patches! In this simple example we could map the "C-major-ness" of the cluster to the tint of a video, and the "Eb-minor-ness" to a posterization effect. Now, playing clusters or improvising on the keyboard manipulates the imagery in a repeatable but complex fashion. Of course, chords could be replaced with physical gestures from a handheld device or audio input from a microphone; we only have to patch a different data source into the ML objects.
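To make that mapping concrete, here is a minimal sketch in Python rather than Max. It is not ml.star's actual algorithm; it just illustrates the idea with an assumed nearest-centroid similarity measure and made-up 12-bin pitch-class profiles, where the per-chord similarities could drive a tint and a posterization parameter.

```python
# A rough sketch of the "how close is this to each known chord" idea.
# NOT the ml.star implementation; the profiles and the distance-based
# similarity are illustrative assumptions only.
import numpy as np

# Hypothetical training data: pitch-class profiles for two chords.
chords = {
    "C_major":  np.array([1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0], dtype=float),
    "Eb_minor": np.array([0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0], dtype=float),
}

def similarities(cluster):
    """Map an incoming pitch-class vector to a 0..1 similarity per chord."""
    out = {}
    for name, profile in chords.items():
        dist = np.linalg.norm(cluster - profile)   # Euclidean distance
        out[name] = 1.0 / (1.0 + dist)             # closer -> nearer 1.0
    return out

# A keyboard cluster: C, D, E, G.
cluster = np.array([1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0], dtype=float)
sims = similarities(cluster)

tint_amount      = sims["C_major"]    # could drive the video tint
posterize_amount = sims["Eb_minor"]   # could drive the posterization
print(tint_amount, posterize_amount)
```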
And what if we don't know the chords or gestures that will be used in a performance? No problem: the ML objects can identify them on the fly (hence, "unsupervised") and create mappings as they go. This is really exciting for human-computer music making, where the machine can find gestures (i.e. patterns in our data) and point them out to us. We don't have to decide what gestures we're going to use and code them in manually; we just play. The dream of having a patch that listens to everything you play, learns all of your music, and grows with you is within reach!
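For intuition, here is a toy sketch of what on-the-fly, unsupervised gesture discovery can look like, assuming a naive online clustering scheme. The ml.star objects use their own algorithms, and the threshold and learning-rate values here are arbitrary placeholders.

```python
# A minimal sketch of unsupervised, on-the-fly gesture discovery:
# each incoming feature vector either joins the nearest known cluster
# or founds a new one if nothing is close enough. This is an assumed
# toy scheme, not what ml.star does internally.
import numpy as np

THRESHOLD = 0.5        # assumed distance cutoff for "this is new"
LEARNING_RATE = 0.1    # how fast a prototype tracks new input
centres = []           # discovered gesture prototypes

def observe(vec):
    """Return the index of the gesture this input belongs to."""
    vec = np.asarray(vec, dtype=float)
    if centres:
        dists = [np.linalg.norm(vec - c) for c in centres]
        i = int(np.argmin(dists))
        if dists[i] < THRESHOLD:
            # Nudge the matched prototype toward the new example.
            centres[i] += LEARNING_RATE * (vec - centres[i])
            return i
    centres.append(vec.copy())   # unfamiliar input: a new gesture
    return len(centres) - 1

# Feed it a stream of (fake) controller data; labels emerge as we play.
for frame in np.random.rand(20, 3):
    print(observe(frame))
```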
Find ml.star in the Max Package Manager (File > Show Package Manager) and start exploring.
The ml.spatial help file crash appears to be a font issue. If someone who had the crash could take this file and test it (confirm a crash or not), I'd greatly appreciate it!
It seems everyone has the 'ML Fun' patches working, so at least the library is good to go! This appears to be both a file- and system-specific issue, as the ml.spatial object works fine in the other patches. I'll try tearing the help file apart and rebuilding it from scratch.
@Yaniki: thanks for sticking with this; here's a completely rebuilt patch. It really seems to be some kind of font/text glitch, so hopefully this fresh build will work for you. Let me know!
Thanks for your positive attitude and fast reaction.
Unfortunately this new patch is still crashing (report attached). I have no time now to dig into the problem more intensively, but I'll try to do some tests on different computers during the upcoming week.
Hi Ben, no crash here. I have seen that in the "one hot" tab, the attrui is not updated at load (it shows 0.1) to match the 2nd decay argument in the box (1.). Hope this helps.
I know it's a strange idea, but can you post a screenshot of the problematic patch? On my computer, when I create an instance of [ml.spatial] nothing happens (I mean: no crash), so this is probably something related to the patch itself. Maybe it is possible to investigate the problem by adding elements one after another, treating the screenshot as a "score" ;-)
Hi. Thanks for your effort on the release of this package. I started doing some experiments with "ml.mlp" and would like to save input data for later use. But when I click save I get the following error: "error: ml.mlp: messages with the selector 'save' are not supported". Does this mean that I cannot save data? (By the way, the load message reports the same error.)
@Luis: have you tried the ml.mlp help patch, and the save functionality in there? I believe it uses "read" and "write" messages, but at any rate please try it in the help patch and if it fails there I'd be happy to diagnose further.
@yaniki: great idea! Here are the shots (there are 3 tabs). Odd that this one breaks your Max, as it's one of the simpler ones. Good luck!
Thank you ;-) I did some short tests based on the images you posted. The object itself (ml.spatial) definitely works fine, and I can't force my patches to crash. I'm a bit busy with current projects, but I want to track down what's going on with the help patch and if I discover something I'll let you know.
Once again thanks for your fantastic, inspiring work.
First, thanks a lot for this package. I'm experimenting with ml.som.grain and I'm wondering what exactly the "learning" and "plasticity" parameters do. I can see that the result changes when I change the values, but I would like to understand a bit more about what is happening. Thanks, Yoann
@Ben: I tried the ml.mlp help patch, and the "save" and "load" functionality seems not to work. I also tried the "read" and "write" commands with no success. Any hint?
To Ben and/or everyone using ml.star: I can't figure out which model has been implemented in ml.spatial. There is no wiki entry in the ml.spatial help, and I didn't find anything useful on Google (just spatial encoding in magnetic resonance). The decay parameter made me think of a sort of recurrent neural network, but I'm looking for a more precise "wiki"-style description.
Thanks anyway for the joy you gave through the realization of this package.
Thanks very much for these tools, and particularly the new help files! I know this is kind of trivial, but the way you have organized the package puts all of your examples in the 'extras' menu, which in my case at least is already overcrowded. I think most of those example patches should be in a 'patchers' folder inside your package, and the 'extras' folder should just contain the launcher...
Great work on this package! Thanks for all the hard work that must have gone into it. Still wrapping my head around some of it, but many interesting possibilities! One or two little points, however...
1. Opening the ml.mlp help file, the object [ml.mlp 2 3 8 2] is reported as bogus, with the Max window saying "error: ml.mlp: Too many creation arguments", although four seems to be the correct number of arguments.
2. The way you've structured your package, with all your example patchers in the 'extras' folder, means that I have 11 ml.* related items in my already crowded Extras menu. I think best practice is to have those in a 'patchers' folder, and only have the launcher inside the 'extras' folder in your package.
Sorry if this was already asked before; I'm a newbie in Max. How am I able to use this with video? I would like to train the machine to recognize numbers that I draw or display on my phone screen, and to be able to store them or perform basic math operations. Is this possible?
David, this is an interesting idea, and one that is a hot topic in computer vision these days! To use these objects with video you would want to downsample individual frames (probably in black and white) and use jit.spill to convert them to lists (I'm a little rusty; I think that's the right object). Then ml.mlp or one of the other categorizing objects could be trained to discriminate between different images. It's not an easy problem to solve, but it could be fun to tackle. Once you have the jit.matrix data converted to lists, you can start using the ml.* objects to process them.
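For readers more comfortable with code than patches, here is a rough sketch of the equivalent pipeline in Python. The frame size, the random placeholder "digit" images, and the scikit-learn MLPClassifier are all stand-ins for the jit.matrix → jit.spill → ml.mlp flow described above, not part of the package itself.

```python
# Sketch of the pipeline: downsample frames, flatten them to lists,
# and train an MLP classifier. The data here is random placeholder
# noise standing in for real drawn-digit frames.
import numpy as np
from sklearn.neural_network import MLPClassifier

SIZE = 8  # assumed downsampled frame size (8x8 grayscale)

def frame_to_list(frame):
    """Flatten a 2D grayscale frame into a 1D feature vector."""
    return np.asarray(frame, dtype=float).reshape(-1)

# Placeholder training set: 100 fake "images" with fake digit labels.
rng = np.random.default_rng(0)
X = np.stack([frame_to_list(rng.random((SIZE, SIZE))) for _ in range(100)])
y = rng.integers(0, 10, size=100)  # labels 0..9

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
clf.fit(X, y)

# Classify a new incoming frame.
new_frame = rng.random((SIZE, SIZE))
print(clf.predict([frame_to_list(new_frame)]))
```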