An Interview with Jeff Kaiser: MSP Whispers, and I Listen

    The place where Jeff Kaiser started out isn't nearly as interesting as where he's going, and where he's tarried along the way. Classically trained as a trumpet player (and the owner of one sweet quarter-tone horn, by the way), his "instrument" is now a hybridized and extended beast with Max/MSP as the coiled mainspring of the non-human portion. The result lives at the intersections of experimental composition and improvisation and the timbral and formal affordances provided by combining the instrument and the machine.
    And the hybridization just keeps going - as Assistant Professor of Music Technology at the University of Central Missouri’s Center for Music Technology, he now embraces the artist/scholar label, as well. A quick look at his recorded work ought to suggest the breadth of his output and the range of his collaborations (editor's note: I have been one of his collaborators myself, so when I use the word "ferocious" to describe his energy as an improvisor, I am speaking from some experience).
    Jeff took a little time off from his life as a performer and record label owner (his pFMENTUM and Angry Vegan labels have a rich catalog of fascinating interstitial musics) to talk about his own transformation to life as a Max user, his observations on the interplay of music and technology, and his new venture into Ableton Live...
    You’re one of those people for whom Max allowed you to jettison about a squillion stompboxes and dozens of little shorty cables – back when you began, did you start by implementing the bits of your snarly rig that you wanted to “save,” or was it a chance to start over and to dream in software?
    Somewhere there is a conference paper I wrote titled, “How I lost 150 lbs. thanks to Max/MSP.” I was in London touring with Andrew Pask, and in addition to my trumpet, I was hauling around two 75 lb. rolling cases full of cables, transformers and hardware devices. Andrew, in contrast, had his sax and a shoulder bag with his laptop; that was it. Touring had become so physically uncomfortable for me that I decided to learn Max and replicate my hardware in a more portable package. So, that was my initial goal: replication and portability. I thought I was in charge of Max, telling it what to do. But in working with my new rig, the software started making “suggestions.” (Not verbally, of course!) Max has certain affordances that encourage particular possibilities, behaviors and ways of working. Ultimately, the development of my performance rig was a mutual configuration: I would do something, Max would “suggest” something, I would learn something from the Max forum, and so on. Somebody at Cycling '74 once tweeted, “#maxisnot telling you what you make.” But in some ways, it is. And this is positive: it encourages us to dream big and then surprises and inspires us with previously unconsidered (and un-dreamed) possibilities. Professor David Borgo (University of California, San Diego) and I have co-authored an article on some of these ideas titled “Configurin(g) KaiBorg.” You can listen and watch videos of our duo, KaiBorg, here.
    In addition to – or, really, before Max entered your life, you were a ferocious acoustic improvisor (trumpet and voice, if you don’t know Jeff). Someone who sees you now is more likely to see that practice interleaved/alongside of/extended by digital processing in a way that seems so well integrated that it’s hard to tell where one begins and the other ends. How do you see your practice in terms of the boundaries or maps of the physical and the digital portions of your “kit?”
    I see my rig, or “kit,” as what might be termed a system-environment hybrid. There are visibly distinct elements that suggest boundaries: my body, trumpet, computer, hardware interfaces, the performance space, et cetera. But they don’t feel distinct when I am performing—at least not when we are all behaving in a civil manner with each other! Mihaly Csikszentmihalyi would call this “flow,” but that is really about the human experience. For me, it is something beyond a human-centric experience. I’m not saying that all of the elements are constantly equal agents in the performance, but that agency shifts and flows between all elements in the environment. Because of this, I believe that one major test of music performance software is how it behaves in an improvisational environment. One kind of behavior, typical for many compositions, is to have an automaton patch that cycles through fixed presets for a composition. But I’m interested in how flexible and responsive the software can be as an active participant in an improvisational environment that demands such from all participants.
    I can imagine that the process of developing your performance rig has been a gradual evolution, but I'm curious about how that evolution "looks" in terms of the user interface - do you think there's a way to develop a user interface that encourages you to let that agency flow, or is it just a matter of trying to migrate things to pedals and visual feedback?
    Dan Levitin in The Organized Mind has a great discussion of multitasking and attentional switching. He writes, "Switching attention comes at a high cost... Attention is a limited-capacity resource."
    As musicians, we are in a way multitasking constantly - but in the best-case scenario, our attentional focus remains on the big picture: the music. The best interfaces and software, in my mind, do not disrupt or force attentional switching but allow us to maintain attentional focus on the music. As Levitin says, "Multitasking is the enemy of a focused attentional system…Once on a task, our brains function best if we stick to that task."
    Our task is music; I work on incorporating interfaces and software that will support that task. The evolution, for me, is a constant move away from requiring the screen: the Push and SoftStep have played crucial roles in this. (I have not yet escaped the screen; the move away seems asymptotic.)
    Your doctoral research looked at the practice of improvisation across a very large population of artists worldwide – each of whom, in some way, integrates electronics/live processing as a part of their work. I’m sure that generalizing is either difficult or reductive or downright pernicious, but is there anything you can say about what you’ve seen when you think back over the hours of discussions with this community of artists?
    As a scholar, I am particularly interested in questions of agency and how experimental musicians construct value, meaning and community. The artists who participated in my research shared wonderful insights. I could not choose a favorite quote, but here are a few that I keep coming back to:
    “In music we are still under the sway of semiotics and language philosophy, which I think is pernicious because it always degrades what musicians know about music and elevates some sort of symbolic representational concept of music. So the representation becomes more important than the stuff of music. The representation is just a tool, and it’s true that writing can liberate possibilities, but it always has to reflect back to the meat of music, to the wet meat of music....” — Joel Ryan
    “All these things are alive, they just have different kind of ways of expressing it. They are alive, the moment you touch it your senses tell you that you have made a connection. And it knows you’ve made a connection because whatever you do it responds to you. So, what is that, except a living organic connection.” — Wadada Leo Smith (on technology)
    “There is a virtuosity in being quiet, in imagination, in memory, coupled with the technology which has to be clear enough that I can start using it intuitively, getting surprises out of it…where I can discover another turn of sound or technique in either the machine or me or the flute. I would call virtuosity: fantasy, and memory to use it.” — Anne La Berge (on virtuosity and technology)
    Since we’ve last talked, you’ve moved into some new areas of practice – specifically, working with Ableton Live and also including improvised visual material in your performance. In the past, you've struck me very much as a Max programmer of the "instrument builder" persuasion - Do you see the process by which the new stuff has appeared as being along a continuum with the work that preceded it, or is this a step sideways?
    I’ve been privileged to work with wonderful visual artists such as Mark Henrickson. It is an unfortunate reality that these artists are not always available, let alone in the same city. So I decided to start doing my own Jitter work, with their help and insights. My first performance featuring this work was in November and is available to view here.
    Ableton Live has been a surprise for me. I admit I was not a fan of it in the past; it felt too rigid to me. But now, using Max for Live and Push together with Live, wow! Trevor Henthorn and I have a duo, Made Audible, that works in Live, leveraging Push 2 and Max for Live devices to create database- and probability-driven electronica. Accessing large data sets via MySQL within Ableton Live is so much fun, and always surprising. We just spent two weeks at STEIM in Amsterdam working on the project and will make these Max for Live devices available for free in fall 2016 with our first album release.
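    The actual work Jeff describes happens inside Max for Live devices, but the core idea of "database and probability-driven" material is easy to sketch outside of Max. Here is a minimal Python illustration of that approach, assuming a hypothetical table of note events with per-note weights (the table and column names in the comment, and the `weighted_pick` helper, are invented for illustration, not taken from the Made Audible devices):

```python
import random

# Hypothetical rows as they might be fetched from a MySQL query such as:
#   SELECT pitch, weight FROM note_events WHERE set_id = 3;
# (table and column names are invented for this sketch)
rows = [(60, 5.0), (62, 1.0), (67, 3.0), (72, 0.5)]

def weighted_pick(rows):
    """Pick a MIDI pitch with probability proportional to its stored weight."""
    pitches = [pitch for pitch, _ in rows]
    weights = [weight for _, weight in rows]
    return random.choices(pitches, weights=weights, k=1)[0]

# Each call yields one pitch; 60 is the most likely, 72 the least.
pitch = weighted_pick(rows)
print(pitch)
```

    In a Max for Live context, the equivalent query and selection would live in a device (for example via the `js` object or an external process feeding the patch), with the weighted result driving note output in Live.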
    Those last four questions lay in a pretty straight line, and you’re someone who loves the spline, the curve, and the plummet. Is there anything you wish I’d asked you?
    It is interesting for me to look back over the last few years. If you had told me in 2005 what my artistic practice would look like today or that I would now be a professor of music technology, I would probably have been skeptical. At that time, I had virtually no experience with software. But I now view Max as essential as my trumpet: an incredible tool to rapidly develop my ideas from conception to reality, with Max adding suggestions and surprises along the way!