Arne Eigenfeldt's work has been on my radar for years, and I'm always glad to see a performance or installation or paper on the schedule for an event. He's a Canadian composer who creates interactive and generative music systems that he refers to as "musical metacreation." To put it another way, he creates agents that, working in community with other agents, create music. I've wanted to sit down and chat with him for a long time and to share his work with others, and the time presented itself....
It seems to me that in the time I've known you - going back to compositions like "Coming Together" in the 21teens - that what you refer to as musical metacreation has come to occupy an important role in your body of work and investigation. Was this always a goal for part of your artistic practice, or do you feel that it's something that happened gradually?
It took me a while to realize that process is an overriding concern when it comes to my practice. I always hesitated when it came to creating fixed media works, since any compositional decision made in such a work's creation seemed to suggest that it was "the right decision". I always gravitated towards ways of working that offered a myriad of possibilities at any point in time. It also took me a while to realize that my particular outlook wasn't an improvisational practice; although I have tended to work a great deal with artists who improvise, my training is entirely as a composer. And composers love to organize. So as far back as I can remember, my software was always designed so as to present options for performance – both for myself when onstage as well as for reactive artists (which have included dancers and actors along with musicians). For many years, I considered this "real-time composition", since I considered what I was doing in performance was composing, not improvising. And because there were too many strands to control in any meaningful way during performance, I felt the need to explore the potential for some intelligent help. I have no scientific background, so I have a limited ability to fully apply many of the really amazing aspects of artificial intelligence to my art-making. And when I've collaborated with scientists, they have tended to get somewhat frustrated with me, since I, as an artist, have tended to use the tools in ways that benefited my art-making practice, rather than building something that could be laddered upon by others. I won't even begin to get into notions of evaluation, which is a bugaboo for artistic/scientific collaboration. I think every single paper that I've written about any of my systems has come back with a comment by a scientific reviewer asking about the system's evaluation: "yes, but how do you know it is creating interesting music?".
One of the features of engaging in metacreation would seem to me to be that you set things in motion rather than directly engaging in what we might call "agency." My systems are rarely, if ever, "instruments" with which I perform. Instead, I like to think of them more as complex systems that I can nudge in certain ways during performance.
Nowadays, compositions of mine, if you can call them that, are collections of particular musebots (agents) that generate sound and music in certain ways, together with an environment that is either generated autonomously, controlled in performance by myself, or reactive to live input in some way. I spend a great deal of time fine-tuning the musebots themselves, and listening to how they react to one another and the environment. Because I can't control what the musebots actually do, or how they react, it is a tricky process of fixing code versus fine-tuning parameters. Did the agent do that because of faulty logic on my part, or because it was following its own intention?
For quite a while now, I've tended to create agents whose behaviour I can (kind of) pre-determine through what I call "personality traits".
From a Max perspective, this usually means creating a variety of instances of a single (very complex) patcher, with a few high-level parametric controls. These often include the following:
- Impatience: How long an agent is willing to be inactive
- Persistence: How long an agent will stay active
- Vitality: How much energy an agent has, and how active it may become
- Consistency: The amount of variation an agent will (or won't) attempt
- Compliance: How willing an agent is to restrict itself to the higher level requests
- Repose: A preference to perform in sparser, or denser, sections.
Another way of working is to combine a variety of musebots that generate sound/music in different ways, and have all of the musebots react to valence and arousal parameters. Thus, when increasing the overall arousal of a section, the different musebots will interpret that depending upon their vitality (Will it actually get busier?), their consistency (Will it continue on doing what it is doing, or change to meet the new request?), and compliance (Will it match what the request is, or will it go off and follow its own path?), etc.
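To make the interplay of traits concrete, here is a minimal sketch of how a handful of personality values might filter a global arousal request. This is purely illustrative - Arne's actual musebots are Max patchers, and the class name, trait ranges, and update rule below are all invented for the example:

```python
import random

class Musebot:
    """A toy agent whose response to a global arousal request is
    filtered through fixed personality traits (all values 0.0-1.0).
    The trait names follow the interview; the logic is invented."""

    def __init__(self, name, vitality, consistency, compliance):
        self.name = name
        self.vitality = vitality        # how active it may become
        self.consistency = consistency  # resistance to changing behaviour
        self.compliance = compliance    # willingness to follow requests
        self.activity = 0.5             # current activity level

    def on_arousal(self, requested):
        """Respond to a global arousal setting in [0, 1]."""
        if random.random() > self.compliance:
            return self.activity  # ignore the request; follow its own path
        # A consistent bot only moves part of the way toward the target;
        # vitality caps how active it can ever get.
        target = min(requested, self.vitality)
        self.activity += (1.0 - self.consistency) * (target - self.activity)
        return self.activity

bots = [Musebot("kick", vitality=0.9, consistency=0.2, compliance=0.9),
        Musebot("pad",  vitality=0.4, consistency=0.8, compliance=0.6)]
for bot in bots:
    bot.on_arousal(1.0)  # the same request, interpreted differently
```

The point of the sketch is the one made above: identical high-level requests yield divergent behaviour because each agent's traits mediate its response.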
It's often the case that when people discuss algorithmic work, they focus subtly on their own ideas rather than the result - describing in detail what the process may be, how it's mapped, and so on. But it's my sense that you're setting something in motion for the purpose of creating results you're explicitly not directly in control of yourself - you're focusing on "self organization" rather than "organization." Do you think that leads to a qualitatively different feeling about the work you're engaged in (working to let go of something rather than to possess the product of a practice)?
I'm certainly creating systems that are capable of a variety of output: that's clearly the goal. I tend to think of the recorded output(s) more as realizations of the process and set of interactions of a given instance (I do have a Buddhist practice, if that means anything) rather than a "work" in the traditional sense. I will spend weeks, if not months, creating a system, then a few weeks recording its output and posting the variety of realizations. I have a problem with naming these realizations, so I have created a phrase-generator that uses a Markov model to suggest titles to me. Another important recognition I had quite a while ago was that when I was composing fixed works (I had a career as a composer of music for dance in the 1990s), I would make many intuitive decisions at any given point, and any of those decisions would then influence how the work proceeded. If I had made a different decision at any of those junctures, the work would have been quite different. None of those decisions were the "right" decision, or the "perfect" decision, they were simply decisions made. So I began to consider generative music as a possibility to explore all possible decisions that could have been made in a given piece of music. Thus, my systems, including those that use musebots, are attempts to re-compose the same piece in many different ways.
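Arne mentions a Markov-model phrase generator for suggesting titles. His implementation isn't shown here; the following is only a minimal sketch of the idea, with an invented seed corpus, using a first-order word-to-word chain:

```python
import random
from collections import defaultdict

# Hypothetical seed phrases -- Arne's actual corpus isn't given.
phrases = ["slow light over water", "light over distant hills",
           "water under slow clouds", "distant clouds over water"]

# Build a first-order Markov model: word -> list of observed next words.
model = defaultdict(list)
for phrase in phrases:
    words = phrase.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)

def suggest_title(start="slow", max_words=4):
    """Random-walk the model to propose a candidate title."""
    title = [start]
    while len(title) < max_words and model[title[-1]]:
        title.append(random.choice(model[title[-1]]))
    return " ".join(title)

print(suggest_title())
```

Each run produces a different phrase drawn from the corpus's word transitions, which fits the role described: the system suggests, and the composer selects.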
Reading through your materials on Musebot creation, there's a real sense in which you're creating a kind of community or ecosystem for agents, as opposed to, let's say, questions of how you would personally generate and organize variety. As the creation of Musebots is something that people other than you have worked on, is it your sense that you've noticed some kinds of qualitative difference in the creation of Musebots undertaken by other people? Is there a kind of "style" of Musebot creation? If so, what forms does it take?
The musebot spec is really a simple method of sharing information. The idea was that multiple users could not only share what their agent-based systems were doing at any given point in time, but the systems could also communicate what they intended to do (which, obviously, human improvisors cannot do). That was the goal, and I was lucky enough to work with some very clever and talented folks that explored this paradigm (and we generated quite a few publications out of it). Ollie Bown once called the musebots "a conference paper generator". But what ended up happening was that for everyone else, musebots were only one aspect of their creative output, and while they were happy to devote a few weeks to concentrated musebot development and interaction, they had other musical needs in which musebots didn't fit. For me, on the other hand, musebots solved every problem I had with previous agent-based systems. I have used musebots for every creative work and system I've been involved with since 2015. (For a while, I called myself the musebot evangelist, going to conferences and spreading the word.) As a result, I've had to take musebots in directions that only I've been interested in. While this has resulted in better music (since the musebots were a tool in the music's creation rather than the focus), musebots lost their community aspect.
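The distinctive point above - that agents can announce not just what they are doing but what they *intend* to do - can be illustrated with a toy message bus. The real musebot specification defines its own network message format; everything below (class names, message fields) is invented for illustration:

```python
import time

class Broadcaster:
    """Toy message bus: agents post their current state and announce
    intentions before acting on them. (Illustrative only -- the real
    musebot spec defines its own network message format.)"""

    def __init__(self):
        self.subscribers = []

    def publish(self, message):
        # Deliver the message to every subscribed callback.
        for callback in self.subscribers:
            callback(message)

bus = Broadcaster()
log = []
bus.subscribers.append(log.append)

# An agent announces what it *will* do -- something human
# improvisors cannot communicate -- then reports what it did.
bus.publish({"agent": "drumbot", "type": "intention",
             "detail": "double density at next phrase", "time": time.time()})
bus.publish({"agent": "drumbot", "type": "state",
             "detail": "density doubled", "time": time.time()})
```

Other agents subscribed to the bus could use the "intention" message to prepare a response before the change actually happens - the capability that sets the system apart from human improvisation.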
But when I did have the opportunity to work with others for a concentrated period of time on using musebots to solve a particular problem, it was a wonderful experience to share knowledge and working methods. Since others tended to approach musebot development more as adaptation of their own methods, it was an opportunity to peek into their ways of working from a shared viewpoint.
Those ensembles tended to divide musical activities between designers and 'bots. In one ensemble, Andrew Brown's 'bot took the lead role, Ollie Bown's 'bot was the drummer, and mine played bass. It was really interesting to combine generative approaches, as Andrew's 'bot did things that no 'bot of mine would do, and Ollie's drumbot played in very Ollie-ways. Naturally, Ollie coded in Java, Andrew in PD, and I in Max.
One of the fascinating features of the computers who "taught themselves" to play Go is that they wound up discovering/inventing (not sure which word I want here....) modes or techniques of play that don't resemble what human players do - techniques of play that human players could adopt. Are there things about style or organization that you think you may have "learned" or "learned to spot" from working with Musebots?
Another story. I was involved in a project in which agents created electronic dance music by learning from a corpus of hand-transcribed music (we paid graduate students to transcribe every note, sound, drum hit of 50 Breakbeat tunes, and 50 House tunes). The generated music was ok, and I learned how important production values are in electronic dance music, as opposed to just the MIDI information.
In 2018, Ollie Bown asked me to create a musebot Trap ensemble. I listened non-stop to Trap music, and I created a dozen musebots that could generate the music fairly successfully (each bot controlled its own signal processing). But the important point was that the intelligence of any individual bot was mine (I scripted all the rules), but the intelligence of the system was the complexity of musebot interactions, reacting only to valence and arousal settings. I was amazed at the different outputs that I received by adjusting only these two values, including not only surface features (the interaction between kick and bass, for example) but also large scale formal shapes. I would get two minute frenetic tunes, and ten minute opuses that played with tension. However, computer scientists would say that the system wasn't creative, it was simply programmed by a clever programmer to sound creative. The number of such comments led me to drop out of the musical metacreation community, sadly, as it became more concerned with scientific validity over musical results.
It seems the case that some of what constitutes the things that you think of when designing Musebots has to do with strategies for reactive behavior as much as constructive behavior - techniques balanced between evaluation and generation. Thinking of those things as a non-builder of Musebots, I marvel at the idea that you're not necessarily sticking to the notion of something like "style." That seems like a pretty thorny problem. How do you approach it?
I don't think that's true. Musebots tend to be simple agents that need to know how to produce something, and that something is easiest to consider as a style. I usually think about the kind of music I want to create – what it should ideally sound like – and then I set about designing the musebot. Ambient synthbots don't need to know anything about beat structures, whereas TrapBeatBots don't need to know about timbral evolution (at least in the way that ambient synthbots would). That said, I do have a HarmonyBot that generates progressions depending upon a given corpus (including Mozart, Miles Davis, Pat Metheny, Jazz Standards...) and BassBots and SynthBots that groove on whatever progression is available to them. The more complex a bot needs to be, the more it needs to know about how it should sound. Ideally, it should know about style, and what the boundaries of that style may be; it can then freely explore that space. It needs to be predictable enough to maintain some sort of stylistic consistency, yet produce enough variation so as to surprise me (and that, my friend, is the Holy Grail of musical metacreation).
I've trumpeted the potential of stylistic mutation or recombination: combine a HouseBeatBot with a MilesDavisTrumpetBot and an IndianRagaSitarBot! It actually ends up sounding as bad as one imagines it, since there is no shared language over which the three bots can explore. More interesting was having a DrumBot share a corpus between House and BreakBeat music, in which a repetitive House pattern would be interrupted with dramatic drum fills more typical of breakbeat music. It wasn't something that I would have predicted, and it was a nice (and surprising) product of that exploration.
What works or artists have inspired you along the way - things or persons who have led you to involve yourself in this activity....
When I started teaching music history, and teaching John Cage, I realized how much his notion of process had permeated my thinking. Marina Abramović's "An Artist's Life Manifesto" is my Facebook background: the process is more important than the result (although I reserve the right to select those results I like better than others). Your (early) Cycling colleague jhno's music was very inspirational years ago, because it seemed to me very process-based, and not a set of sculpted fixed-media sequencer creations. I discovered many ambient electronic composers, like Christopher Bissonnette, Taylor Deupree, Lawrence English, Marsen Jules, and Chihei Hatakeyama, all of whom served as extended listening while I commuted to work and imagined how I might make musebots generate that kind of music.
It's clearly the case that you're attentive to the ways that self-organization arises, and I'm wondering if there's a sense in which that attentiveness has been honed by your experience of hearing human beings structure musical behaviors as they go (Outfits such as AMM or Joseph Holbrooke or The Hub come to mind here). Does anything like that inform your creative work, or is it best to have as few "human-influenced" expectations as possible?
I began my musical career as an electric bassist, performing in jazz and rock ensembles, most of it improvisational. But, as I mentioned, I was always a composer, it seemed, since I always tried to organize the others into structures I imagined. As I mentioned, I continue to enjoy working with individual improvisors, but I've found that getting more than one of them in a room results in their going places that they want to go, rather than I want them to go. In that respect, my musebot ensembles are visions of how I want virtual musicians to operate (I've long held the belief that all composers have a god complex). I can't help but worry just a little bit that perhaps the whole idea of musical metacreation might not have a great deal to do with what I might consider algorithmic composition at all, and that I could be deeply confused about the difference between creation and metacreation. Maybe an idea like "algorithmic metacomposition" might be the perfect summary or way to denote my lack of insight. How do you see the way in which more traditional views of algorithmic composition do or don't fit in with musical metacreation?
Algorithmic composition seems to me to be about creating a system that can generate musical output, where the designer has to create the rules that govern the possible outcomes. I'm less interested in rule-based systems, and more interested in designing processes. It also seems to me that algorithmic systems are meant to generate something quite specific, whereas I have only vague ideas of what I want it to sound like. I never begin with an idea that it should generate a specific drum part, harmony, bass line, soundscape; instead, I think: "what would happen if I had a 'bot that selected soundscape recordings that had metatags appropriate to a situation, added in a resonant synth that took those recordings, and tuned it to harmonies generated by my HarmonyBot? And what if the harmonies (and the soundscape selection itself) were dependent upon a valence/arousal setting derived from a video that was playing, which itself was selected from an archive that was attempting to sync with another videoBot with a similar archive...which was...etc., etc."
Putting an idea or a body of work between us as something to converse about is useful and instructive, but it can also be reductive. I don't want to reduce your musical life to being something entirely circumscribed by your Musebot work - it may not be your primary focus of activity at all. I'm curious about other areas of your musical practice, and the sense in which they've developed as divergent from the Musebots. What are you up to these days? Writing operas? Building ocean-going kayaks? Learning Esperanto?
I'm a full professor, pretty much at the top of my pay scale. I don't need to jump through the academic hoops anymore, so I'm able to now pursue those artistic endeavours that truly interest me. At the moment, those tend to be long collaborations with select artists (not one-offs), where we can dig into the artistic depth of a work, and not attempt to create something that we can publish about later. I believe this is something that you yourself have found most satisfying, albeit you discovered it long before I did. I meditate, practice yoga, and strive to live in the moment (the whole COVID-19 situation has only put a damper on getting my vinyl records from my US postal box; otherwise I've been doing extraordinarily well).
Thanks for the opportunity to reflect. It is always somewhat cathartic to navel-gaze once in a while...

Musebot Resources
- An Intro to Musebots
- A Complete Musebot Listing
- Getting Started Making A Musebot
- The Musebot Developer Kit (github)
- A multi-author paper on interacting with Musebots (NIME 2018)
For Further Viewing/Listening:
- Arne's latest collaboration with Simon Overstall - a work in progress (it was hoped to show at SMC in Italy this summer)
- 2019 collaboration with actor/dancer Kathryn Ricketts
- A solo work for Arne himself (He says, "I needed something that I could perform on my own")
- Musebots making Trap
- Musebots modelled after Miles Davis ensemble
- Musebots reacting to video
- Musebots reacting to improvising musicians
- Musebots generating entire compositions (Moments)