mlp in ml.star
Can anyone explain how the mlp object works? If I want to make one audio buffer learn from another audio buffer in real time, what should I put in the right inlet and left inlet? I also don't really get what the "tlu" and "layer" functions in the coll object do. Thanks!
Are you familiar with how a multilayer perceptron works? That's what ml.mlp implements. It's not a machine learning algorithm that can be trained in "real time."
Not really. My understanding is that the mlp object takes learning values in the left inlet and can be played from the right inlet, but I don't understand how the "layer" function takes effect in this process, or what's going on in the coll object. I have read the wiki page a bit, but I have little understanding of why the layer function helps for recognition purposes.
The documentation of ml.mlp is a bit thin. An mlp is essentially a function approximator. It builds a "model" of a function that produces a specific output for a specific input.
The right inlet:
A multilayer perceptron needs to "train" before it gives the results you want. It trains on a set of data where inputs are associated with desired outputs. So, if the object gets values "x" and "y", you might want the output to be "a" and "b". That's what goes into the right inlet. The sub-patch is pretty confusing: it is sending in five-value lists (two input values and three output values). It keeps doing that (training) until the error drops below a threshold (0.01).
The left inlet:
This is where you can interact with a trained model, that is, after ml.mlp has gone through the training above. This inlet causes the object to predict (generate) output values based on the input values that you send it. It's not very useful until the training is done.
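To make that train-then-predict flow concrete, here is a minimal pure-Python sketch of a one-hidden-layer perceptron. This is not ml.mlp's actual code: the sigmoid activation, the learning rate, and every name here are my own assumptions. The idea is the same, though: keep presenting (input, target) pairs (the "right inlet") until the mean squared error drops below a threshold, then feed it new inputs and read off predictions (the "left inlet").

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyMLP:
    """Illustrative 1-hidden-layer perceptron (not ml.mlp's implementation)."""

    def __init__(self, n_in, n_hidden, n_out):
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
        self.b2 = [0.0] * n_out

    def forward(self, x):
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(self.w1, self.b1)]
        y = [sigmoid(sum(w * hi for w, hi in zip(row, h)) + b)
             for row, b in zip(self.w2, self.b2)]
        return h, y

    def train(self, data, lr=0.5, threshold=0.01, max_epochs=20000):
        # Keep presenting (input, target) pairs until mean squared error < threshold,
        # like the help patch does with its 0.01 cutoff.
        for epoch in range(max_epochs):
            err = 0.0
            for x, t in data:
                h, y = self.forward(x)
                err += sum((ti - yi) ** 2 for ti, yi in zip(t, y))
                # output-layer deltas for sigmoid + squared error
                d2 = [(yi - ti) * yi * (1 - yi) for yi, ti in zip(y, t)]
                # hidden-layer deltas, backpropagated through the (pre-update) weights
                d1 = [h[j] * (1 - h[j]) * sum(d2[k] * self.w2[k][j] for k in range(len(d2)))
                      for j in range(len(h))]
                for k in range(len(d2)):
                    for j in range(len(h)):
                        self.w2[k][j] -= lr * d2[k] * h[j]
                    self.b2[k] -= lr * d2[k]
                for j in range(len(d1)):
                    for i in range(len(x)):
                        self.w1[j][i] -= lr * d1[j] * x[i]
                    self.b1[j] -= lr * d1[j]
            if err / len(data) < threshold:
                return epoch
        return max_epochs

# XOR: the classic function a single-layer perceptron cannot learn
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
net = TinyMLP(2, 8, 1)
net.train(data)
for x, t in data:
    _, y = net.forward(x)
    print(x, "->", round(y[0], 2), "target", t[0])
```

After training, the forward pass is all the "left inlet" does; the expensive part is the training loop, which is why ml.mlp isn't something you train in real time.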
The coll:
Once you've trained an mlp model, this coll is a way to save the result. I don't think the actual data in the coll are supposed to be particularly human-readable; they represent the connections, weights, and offsets for each node and layer. You can read up on how mlps work. Here's a nice explanation:
http://playground.tensorflow.org/
I wish the help file explained the arguments. My guess is that [ml.mlp 3 2 1 8] creates an mlp with 3 inputs, 2 outputs, and 1 "hidden" layer with 8 nodes.
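If that guess is right, the arguments would map to a layer structure like this. This is purely illustrative; the function name and the mapping itself are my assumptions, since the help file doesn't document the arguments:

```python
def mlp_shape(n_in, n_out, n_hidden_layers, nodes_per_layer):
    """Guessed mapping from [ml.mlp 3 2 1 8]-style args to layer sizes."""
    return [n_in] + [nodes_per_layer] * n_hidden_layers + [n_out]

print(mlp_shape(3, 2, 1, 8))  # [3, 8, 2]: 3 inputs, one hidden layer of 8, 2 outputs
```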
It also doesn't say which activation function is being used. Any individual node can only reproduce a scaled version of that function, so adding more nodes creates a network that can simulate a more complex function. The cost is longer training time. Adding more layers can also create a more expressive mlp. In my experience, though, the training cost for more layers is very high and the results aren't really much better.
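To illustrate that point (using sigmoid only as an example activation; ml.mlp may use something else): a single sigmoid node can only output a shifted, scaled S-curve, but even two of them summed already produce a genuinely new shape, a "bump" that is high in the middle and falls off on both sides.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One node: always an S-curve. Two nodes combined: a bump.
def bump(x):
    return sigmoid(4 * (x + 1)) - sigmoid(4 * (x - 1))

print(round(bump(0), 3))   # high near the centre
print(round(bump(5), 3))   # falls off away from it
```

More nodes means more such pieces to combine, hence a more complex function the network can approximate.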
In spite of what the ml.star overview says: this is supervised machine learning.
Hi MZED
Thanks for the explanation! They have just updated the ml library (still with a thin description), and I also found this one from Nick Gillian. Do you know if it is any different from the built-in ml.star one? In this one you can send the message "help" to the mlp object to read some details about the commands.
https://github.com/cmuartfab/ml-lib