Variable Order Markov Model

This is an object for Max that implements variable order Markov models for generating melodies, beats, or other musical data. It can be used for Markov models of any order. The download includes a version of Pachet's Continuator for Max. This is a variable order Markov model-based algorithm that can learn a musician's style (to some extent) and trade licks in real time. Lots of fun!

The source code for the VMM object is available on GitHub. Download the VMM object here.

Please leave questions in the comments section below.

davidestevens

Jun 27 2014 | 7:22 PM

Cool. Thanks for this - looking forward to trying it out over the next week or so.

aengus

Jun 28 2014 | 12:56 AM

Glad it might be of some use! Please contact me if you come across any bugs, unexpected behaviour, or things that aren't clear.

mikkelm

Nov 18 2014 | 9:22 PM

Thanks! I implemented this in no time in a random percussion generator.
Nice work.

Wetterberg

Nov 19 2014 | 9:40 AM

Hi Mikkel - I'd love to see that in action some time. I've yet to practically implement Markov chains for anything useful, for some reason.

aengus

Nov 20 2014 | 6:38 AM

Nice one Mikkel! If you've any SoundCloud links, etc. to post here I'd be very interested!

Wetterberg

Nov 20 2014 | 6:58 AM

yeah, me too.

Exit Only

Nov 21 2014 | 3:21 AM

It would be awesome to be able to download/fork the source on this! Anyway, good work!

Hoda Azimian

Jul 15 2016 | 2:18 PM

Hi, how do you go from A to D, or from C to G, when there is no path?

Hoda Azimian

Jul 15 2016 | 2:19 PM

If possible, could you explain the figure above to me by email?

aengus

Jul 15 2016 | 2:36 PM

Hi Hoda,

The way to read the possible continuations from the above figure is to take the sequence that's already been played and work backwards through it as far as you can, and then read off the possible next notes from those listed in curly brackets.

For example, say the notes so far have been A, C. To find out what notes can come next, start at the root (R) of the tree and follow the path backwards through the history: first go to C (the most recent note), then go to A. Then you can read off that the possible continuations are {A, D}, and you choose one of these randomly.

If the sequence of notes played so far is not in the tree, then you just go back as far as you can. For example, if C, E is the sequence so far, you would go from the root to E, then find that you can go no further (E does not connect to C), so you stop there and read off the possible continuations (just G in this case).
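
In case code is easier to follow than prose, below is a rough Python sketch of that lookup. The nested-dict tree and its continuation lists are just illustrative, not the VMM object's actual internals.

def continuations(tree, history):
    # Walk backwards through the notes played so far, going as deep into the
    # tree as we can, then read off the continuations stored at that node.
    node = tree
    for note in reversed(history):              # most recent note first
        child = node["children"].get(note)
        if child is None:                       # can't go any further back
            break
        node = child
    return node["continuations"]

# A tiny tree matching the examples above.
tree = {
    "children": {
        "C": {
            "continuations": [],                # continuations after a lone C (omitted here)
            "children": {
                "A": {"continuations": ["A", "D"], "children": {}},   # history ..., A, C
            },
        },
        "E": {"continuations": ["G"], "children": {}},                # history ..., E
    },
    "continuations": [],
}

print(continuations(tree, ["A", "C"]))    # ['A', 'D']
print(continuations(tree, ["C", "E"]))    # ['G'] - stops at E because E does not connect to C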

Hope that helps!

Hoda Azimian

Jul 15 2016 | 9:41 PM

Great! Thanks a lot.

aengus

Mar 19 2017 | 2:22 AM

Below is some background on Markov models for anyone trying to come to grips with using this object.

Basically, a Markov model tries to 'learn' what comes next after a given symbol or sequence of symbols. So if you input sequences representing letters, say 'abcd', 'cbcdf' and 'abde', then a Markov model might learn that two out of three times (about 67%) a 'c' will come after a 'b'. The order of the model is how far into the past it looks to try and guess what should come next. A first order model of sequences of letters only cares what comes after a single letter (e.g. what comes after 'b'?). A second order model takes both the current letter (in this example 'b') and the previous one into account. Thus, a second order model would say that 50% of the time 'c' comes after 'ab', and the other 50% of the time 'd' comes after 'ab'. It would also say that 100% of the time 'c' comes after 'cb', since the only time in the three training sequences above that it sees a 'cb', a 'c' comes next.
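
If it helps to see where those numbers come from, here's a small Python sketch that just counts continuations in the three example sequences (this is the idea, not the VMM object's code):

from collections import Counter, defaultdict

def count_transitions(sequences, order):
    # For every position, record which symbol followed the previous `order` symbols.
    counts = defaultdict(Counter)
    for seq in sequences:
        for i in range(order, len(seq)):
            context = seq[i - order:i]
            counts[context][seq[i]] += 1
    return counts

training = ["abcd", "cbcdf", "abde"]

print(count_transitions(training, 1)["b"])    # Counter({'c': 2, 'd': 1}) - 'c' follows 'b' two times out of three
print(count_transitions(training, 2)["ab"])   # Counter({'c': 1, 'd': 1}) - 50/50 after 'ab'
print(count_transitions(training, 2)["cb"])   # Counter({'c': 1})         - always 'c' after 'cb'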

The thing is that if you want to use a high order, you've got to give it as many examples as you can to learn from, so that it learns the possible ways to continue from a given sequence. If you give it too little data, it will tend to just copy the examples you gave it. One example was given above: if it comes across a 'cb', the only thing it knows to do next is produce a 'c', which is not particularly interesting (though it could be what you want in some circumstances). If more examples had been provided, it might have more options for what to do after the sequence 'cb'. So basically, the more input sequences you give it, the better it will capture the variety that is possible. Higher order models tend to require more data because there are more possible histories (e.g. for second order: aa, ab, ac, ad, ..., ba, bb, bc, bd, ...), and your aim would be to give it enough examples that it knows the probable continuations for all those possible histories.

Variable order models are a bit more forgiving. Say a second-order VMM comes across the sequence 'aa', which it has not seen before. Instead of just picking a random letter to come next, it temporarily reduces the order to 1, and then it knows what the options are because it has seen what comes after a single 'a' (i.e. 'b' comes after 'a' in the training sequences above).
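
Here's the same idea as a rough Python sketch: try the longest context first and drop down to shorter contexts whenever the current one hasn't been seen (again, an illustration of the principle, not how the object is implemented):

import random
from collections import Counter, defaultdict

def next_symbol(sequences, history, max_order=2):
    # Back off from the longest context to shorter ones until one matches.
    for order in range(max_order, 0, -1):
        counts = defaultdict(Counter)
        for seq in sequences:
            for i in range(order, len(seq)):
                counts[seq[i - order:i]][seq[i]] += 1
        context = history[-order:]
        if context in counts:
            options = counts[context]
            # Weighted random choice among the observed continuations.
            return random.choices(list(options), weights=list(options.values()))[0]
    # Nothing matched at any order: fall back to a uniformly random training symbol.
    return random.choice("".join(sequences))

training = ["abcd", "cbcdf", "abde"]
print(next_symbol(training, "aa"))    # 'aa' is unseen, so it backs off to 'a' and returns 'b'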

To sum up the above: variable-length sequences are fine. Providing more sequences tends to train the model better (if there is variety in the training sequences; inputting the same sequence twice is no good). Also, higher order models tend to need more training because there are more possible histories to know about.

Finally, you can have as many values as you want representing a single 'symbol' in a sequence, for either the raw symbols or the 'reductions'. However, the more values that represent a single reduction, the more possible symbols there are (e.g. combinations of note + velocity), so the more training data you might need (e.g. what can come after note C, velocity 80? What can come after note C, velocity 81? etc.).
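
For instance (made-up numbers, not taken from the help patch), if each symbol is a (note, velocity) pair, the contexts multiply accordingly:

from collections import Counter, defaultdict

# Each symbol is a (MIDI note, velocity) pair rather than a single value.
phrase = [(60, 80), (62, 80), (64, 81), (60, 80), (62, 92)]

first_order = defaultdict(Counter)
for prev, cur in zip(phrase, phrase[1:]):
    first_order[prev][cur] += 1

print(first_order[(60, 80)])    # Counter({(62, 80): 1, (62, 92): 1}) - two continuations seen
print(first_order[(60, 81)])    # Counter() - note 60 at velocity 81 has never been seen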

Year

2014

Location

Online

Author