i am trying to extend my markov stuff, beginning with the analysis part.
so far i have only two analysis modes: 1st order for note number and 1st order
for delta time.
i replay stuff from the resulting SRT mostly using external triggers (ah well, of
course not for the delta time mode^^)
any idea what other things can make sense to analyse?
things like "repeating pattern" or "tempo change" are far too difficult to
program, the effort would stand in an unacceptable relation to what you get.
any ideas? anything you once built?
question too difficult? :D
maybe the links in that topic help?
Perhaps you might find it interesting to take a look at the implementation of such problems that Intelligent Music made back in the day while working on the Jam Factory (our mutual friend Mr. Z. had… um… quite a lot to do with it). I am sure that the google will lead you to something that might (or might not) be worth considering.
it is one of the things which i built but never really use…. at least so far.
you can quickly find yourself making something which is already too overcomplicated
to function as something generic (i.e. something which can==must be combined
with another process to create musically interesting output.)
yesterday i added "2nd order", and made it work with chords.
but even when i read from the table under externally controlled rules (for example
forcing a specified note every 8 beats, then letting the chain run) the output remains pretty …
btw, i am not using a classic transition table with % values; i just hold all observed
successor notes (60 63 63 63 67 72 72). this seems better, as the data can be used for other things, too.
one day i will find out for what exactly …
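the "hold all notes" idea can be sketched in a few lines of python (a generic illustration, not the actual max patch; all names here are made up):

```python
import random
from collections import defaultdict

# Sketch of the list-based transition table: instead of storing
# percentages, each note keeps every observed successor, duplicates
# included (e.g. 60 -> [63, 63]). Picking uniformly from that list
# reproduces the same probabilities as a weighted table would.

def analyze(notes):
    table = defaultdict(list)
    for prev, nxt in zip(notes, notes[1:]):
        table[prev].append(nxt)   # duplicates encode the weights
    return table

def step(table, current):
    successors = table.get(current)
    if not successors:            # dead end: restart from a random context
        current = random.choice(list(table))
        successors = table[current]
    return random.choice(successors)

melody = [60, 63, 63, 63, 67, 72, 72, 60, 63, 67]
table = analyze(melody)

note = 60
generated = []
for _ in range(8):
    note = step(table, note)
    generated.append(note)
```

keeping the raw successor lists around also means you can reuse them for other things (histograms, interval statistics) without rebuilding anything.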
oh right, now that you mention z., there is something like that in M, isn't it … ^^
gregory, quite a while ago i tried researching jam factory and could not unearth any (useful) information. any chance you/cycling could point us in the direction of jam factory material to look at, out of interest?
There was a moderately detailed description of Jam Factory and "M" in a paper by DDZ and Joel Chadabe published in Computer Music Journal around 1988 or so. You’ll have to dig around to find it. If you don’t have a university library at your disposal, MIT Press is, I think, selling reprints.
"M" has, from time to time, been updated and revised to run on whatever the current Mac OS was, but I think the last update was prior to the Intel-ization of Macintosh. Jam Factory seems never to have been updated, which is sort of a shame. Of the two, I personally got on better with JF.
I think you should look at some models from (I don't know if it is Max Mathews or Miller Puckette) and try to understand how you can analyse simple parameters like the pitch, duration and amplitude of events. You can then get a way of classifying musical parameters, like storing what specific values taken from specific points represent in terms of musical material, and use them to make some sort of analysis of your input.

I've done something a bit simpler (histo + table) – I am still finishing it – where you classify pitch class intervals into degrees of consonance and dissonance with a coll, and then run statistical analysis on it to try to understand the degree of harmonic stability of what I am playing. You can map these sorts of things to other parameters.

The applications are many, from driving synthesis engines (take a look at Gavop, or something like that, something really recent from the IRCAM guys in which they drive Diemo Schwarz's CataRT, for an installation context), to generative/non-linear systems (audio, video, light, etc.), algorithmic composition, and composition in real time.
tiago: for now i am trying to keep signals out of the system, i am just dealing with the
plain math stuff … and i try to find the most generic algos and how one could use them
in a musical context.
i like your idea with the pitch class intervals, but i can basically do that outside and leave
the markov stuff untouched.
what could help the most atm would be a coll3d. :)
@Roman Thilenius "what could help the most atm would be a coll3d. :) "
Hmm, if I’m understanding your ‘coll3d’ need:
usually one would use a relational database for such data structures…
e.g., SQL like:
SELECT valueB FROM table2 WHERE table2key = (SELECT table2key FROM table1 WHERE table1key = valueA);
it might be worthwhile to look into using the internal max db and keeping the analysis results in some tables?
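As a quick sketch of that nested lookup, here it is with Python's built-in sqlite3 (the table and column names are just the placeholders from the query above, and the sample values are invented):

```python
import sqlite3

# In-memory database with the two tables from the example query;
# table1 maps a first key to a second key, table2 maps that second
# key to the stored value -- i.e. a two-level ("coll3d"-ish) lookup.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (table1key INTEGER, table2key INTEGER)")
con.execute("CREATE TABLE table2 (table2key INTEGER, valueB INTEGER)")
con.execute("INSERT INTO table1 VALUES (60, 1), (63, 2)")
con.execute("INSERT INTO table2 VALUES (1, 67), (2, 72)")

# The nested select, parameterized on valueA:
row = con.execute(
    "SELECT valueB FROM table2 WHERE table2key = "
    "(SELECT table2key FROM table1 WHERE table1key = ?)",
    (60,),
).fetchone()
# row is now the value reached via 60 -> 1 -> 67
```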
just my tuppence,tryin’ to help, ymmv, pax,
cfb aka j2k
It uses 2nd order transition tables, and models both durations and pitch. For me, feeding a Bartok Mikrokosmos piano piece into the analyzer and then generating MIDI output using the weights found gave back something quite recognizable as "Bartokian" to me and my composer friends (!). I was impressed.
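A 2nd-order chain over combined (pitch, duration) events can be sketched like this (a generic illustration of the technique, not the poster's actual implementation; the sample events are invented):

```python
import random
from collections import defaultdict

# 2nd-order Markov: the context is the last TWO events, and each
# context stores every observed successor (duplicates keep the weights).
# An event is a (pitch, duration) pair, so pitch and rhythm are
# modeled together.

def analyze2(events):
    table = defaultdict(list)
    for a, b, c in zip(events, events[1:], events[2:]):
        table[(a, b)].append(c)
    return table

def generate(table, steps):
    context = random.choice(list(table))  # random starting context
    out = list(context)
    for _ in range(steps):
        successors = table.get(tuple(out[-2:]))
        if not successors:                # dead end: re-seed the chain
            context = random.choice(list(table))
            out.extend(context)
            continue
        out.append(random.choice(successors))
    return out

events = [(60, 480), (62, 240), (64, 240), (60, 480), (62, 240), (67, 960)]
table2 = analyze2(events)
phrase = generate(table2, 8)
```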
SQL is currently not a solution, yet i have to admit that [coll] is of course
not the best way to hold bigger amounts of data.
bartok and his russian friends are for sure great resources, i believe that.
bach does not work as well as one might think, at least when it comes to
the rhythm part. it is just not chaotic enough; you just get destroyed patterns
out of brilliant original patterns.
but yes, it is amazing that 2nd order already works for most material.
i experimented a bit with one new idea today: i am markoving phrases now,
using bar and beat ID#s instead of notes. plenty of room for different methods.
oh, and one more thing: what should also work is a dynamic chain length. one could
for example choose randomly between 1st and 2nd order versions
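the dynamic-order idea could be sketched like this (just one possible reading of it; the 50/50 split and the fallback rule are my assumptions):

```python
import random
from collections import defaultdict

# Build 1st- and 2nd-order tables from the same data and pick the
# order per step at random; fall back to 1st order whenever the
# 2nd-order context was never observed.

def build_tables(notes):
    t1, t2 = defaultdict(list), defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        t1[a].append(b)
    for a, b, c in zip(notes, notes[1:], notes[2:]):
        t2[(a, b)].append(c)
    return t1, t2

def next_note(t1, t2, history):
    use_second = len(history) >= 2 and random.random() < 0.5  # assumed 50/50
    if use_second and tuple(history[-2:]) in t2:
        return random.choice(t2[tuple(history[-2:])])
    if history[-1] in t1:
        return random.choice(t1[history[-1]])
    return random.choice(list(t1))            # dead end: restart anywhere

notes = [60, 62, 64, 60, 62, 67, 64, 60]
t1, t2 = build_tables(notes)

out = [60]
for _ in range(16):
    out.append(next_note(t1, t2, out))
```

mixing orders this way keeps some of the 2nd-order phrasing while the 1st-order steps re-introduce variety.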