Artist Focus: Jeff Kaiser
Jeff Kaiser and I started playing together almost as soon as I turned up in LA at the beginning of the 2000s. The short (and grossly understated) description of Jeff is that he’s a trumpet player and composer who works in Max. The release of Jeff’s new “KaiGen” generative Max for Live devices (about which you can learn more at jeffkaiser.com) provides the perfect opportunity to catch up with him, and to give you an idea of why he doesn’t just “work in Max.”
There’s one thing that is a big part of the way that you and I work: we don’t like to rehearse. How does this all play out when you start thinking about generative music machines?
It is kind of interesting. As a performer who uses Max, I view building the plug-ins and the patches as rehearsing. As you’re building them, you’re learning how things work, you’re changing them, they are changing the way you think about them, you are building structures, creating formal ideas, pitch and rhythmic ideas, and even timbral and textural ideas. But really, you are offloading those processes and a large portion of the decision-making onto the machine. You know, some people talk about practicing their technological instrument once they’re done building it; for me, the process is so involved that by the end, I’m practiced on it and ready to go.
In regard to rehearsing, improvisation involves quite a bit of decision-making, and I think at some point decision fatigue sets in. Sometimes, the idea of going to a rehearsal can sound like taking time off to go make decision after decision… so why not offload some of that decision-making onto the machine, so you can create complex interactions where you, as the human member of the system, aren’t alone or even necessarily the primary decision maker? In fact, you are part of a system that is collectively making decisions.
Playing is so much fun! But another reason I don’t like to practice too much is that it gets rid of a lot of the surprises, and I enjoy the surprises. Especially on stage – that is, as long as a crash is not involved. I like to be on stage and have something weird happen, and just be totally thrilled with it. You may construct your technology to be surprise-free, but I love surprises – whether from humans or from my technological co-creators. With the generative stuff, after you get so many probability gates going, and the analysis by the machine, and the system is putting out stuff that influences you and you play stuff that influences it… how can you not love that? The complexity of the interaction is where it is at, where there is wonder and surprise, where there ends up being so much new and fun stuff even in the midst of just doing what you do, your same voice/vocabulary/thing. And the interactions go in so many directions: space, human, instrument, tech, sound system, audience, what you ate for lunch, what happened at work, how the drive to the gig was… But yes, some surprises aren’t that much fun, like the time I accidentally emptied my trumpet’s spit valve on your power strip right as we started… sparks literally flew.
Do you think that this idea of not practicing things that you build has made a difference in the way that you think about interface design over the years?
Yes, there has absolutely been interplay on that. When I’m building, I’m throwing everything on it, in it, and at it. The KaiGen plug-ins originally had these giant interfaces that would take up the whole effect channel in Live. Then I realized, as I was building and messing around with them, that I couldn’t control all this stuff while playing. I have to limit what I’m going to do—limit my participation and let the machine have a say. So it totally affects the UI. I started stripping away possibilities on the user end and just put focus on a few things. Yet you get a whole bunch of instantiations of simple things, and you can get great complexity. So the dozens of knobs become just a few. There are always people who say, “Oh, you could add this, and you could add that.” I say, “Well, yes you can,” with the emphasis on “you.” That is one of the joys of Max – you can add things or take them away. If you just keep adding, the risk becomes a software version of the car Homer Simpson designed…
When you’re designing these things, do you think about a more general use of them or are you just designing them for you?
I really just design them for myself — unless I’m consulting on a project — that’s why they are so idiosyncratic. Then, I just offer to share them. Sharing plug-ins and patches, for me, is a lot like running my record labels pfMENTUM and Angry Vegan – the pleasure and fulfillment I get from being a part of other people’s creativity is immense. That said, there are still a few patches I keep to myself.
Right on. I mean, I don’t think there’s any other way of rolling. I think trying to consider everyone else is for the birds. You know, like Max (ahem).
Yeah – it’s funny because when you start thinking about other people, it can be helpful, but you can also get off track. It is a balancing act. You may think you are too weird, and try to dilute what might be the most interesting thing about what you do. But there are a lot of interesting people out there making and listening to interesting music. Do your thing, get it out there, and those folks will find you.
Your musical vocabulary is epically all over the map. But you haven’t seemed to me to have been influenced a whole lot by pop. So what’s with all the drum machines, man?
You know what, they are kind of a by-product of the oddities I work with. When I use them – in particular the KaiGen-R – it doesn’t sound anything like a regular drum machine, but you can make it work in that realm! While doing my stuff, I realized if you have a kick on one and three in 4/4 or something, there’s like a 50% chance of it hitting on a quarter note… I started to think about that: drums as percentages and probabilities. Raise the impulses hitting the probability gate to 1/32 notes and drop the probability of them getting through that gate, and you can get a very interesting and varied hi-hat.
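Max’s probability gates don’t translate directly to text, but the drums-as-probabilities idea can be sketched in a few lines of Python. Everything here (function names, probabilities) is illustrative, not taken from the actual KaiGen patches:

```python
import random

def probability_gate(steps, p, seed=None):
    """Let each impulse through with probability p (the 'probability gate' idea)."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(steps)]

# One bar of 4/4 at 1/32-note resolution: many impulses, low probability,
# which yields a varied hi-hat pattern that differs on every pass.
hat_pattern = probability_gate(steps=32, p=0.3, seed=1)

# A kick that favors beats one and three: high probability only on those
# positions (steps 0 and 16 of the bar), zero elsewhere.
kick_pattern = [
    1 if (i in (0, 16) and random.random() < 0.9) else 0
    for i in range(32)
]
```

Re-running the gate each bar is what keeps the groove recognizable but never identical.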
I personally use it for all this wacky stuff. So when you hear me use the KaiGen-R drum machine, and the KaiGen-M for basslines, it won’t sound like what it does on the video tutorial. I just thought it was kind of fun, and that people might dig it. Personally, I tend to use KaiGen to trigger sample libraries of sounds I’ve made.
I was working with some students and they were using step-sequencers. I love step-sequencers, but never use them. I wanted to get the feel of a step sequencer with a greater amount of variety within the repetition. I like the idea of having some sense of repetition in the basslines that is not actually regular repetition. So you can select a scale/mode and limit the ambitus to act like a step sequencer; it just won’t have the same order of pitches, durations, and velocities every time. Those generators allowed me to have the sense of a sequencer without that literal repetition. When I do that stuff, it is using the plugs to distribute stuff throughout space in more chaotic and complex patterns, you know?
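The “sequencer feel without literal repetition” idea can be sketched as: constrain pitch to a scale and a narrow ambitus, then regenerate the step contents on every pass. This is a minimal Python illustration with assumed names and ranges, not code from the KaiGen devices:

```python
import random

# One octave of D Dorian as MIDI pitches: the limited ambitus.
D_DORIAN = [62, 64, 65, 67, 69, 71, 72, 74]

def generative_steps(scale, n_steps=8, rng=None):
    """Build one pass of an 8-step 'sequence' with randomized contents."""
    rng = rng or random.Random()
    return [
        {
            "pitch": rng.choice(scale),              # stays inside the mode
            "duration": rng.choice([0.25, 0.5, 1]),  # in beats
            "velocity": rng.randint(60, 110),
        }
        for _ in range(n_steps)
    ]

bar_a = generative_steps(D_DORIAN, rng=random.Random(1))
bar_b = generative_steps(D_DORIAN, rng=random.Random(2))
# Same length and pitch world every bar, but a different ordering each time:
# the sense of a sequencer without the literal repetition.
```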
Anyway, while goofing around with the drum machine characteristics and basslines, I added some horn hits using the harmony aspect of KaiGen. It was like, “Wow, it’s generating ’70s stuff.” It kind of surprised me and cracked me up, you know?
I’m sure it did. Speaking of the ’70s, do you remember that time we did the improvisation gig with Tom McNally at that shopping mall in Venice and you were really, really hungover, and you had turned up half an hour late to the gig? And we decided that we were gonna play disco behind you the whole time?
That was funny. I totally forgot about that.
So I guess that gig wasn’t the seed that brought all of this on.
Ha! Well, maybe…! In the ’80s when I had my Commodore 64, I would write random pitch generators in BASIC like everybody did. And I always liked that sound. So, you know – there you go. And then, just give it a little bit more complexity by putting in other possible variabilities. It just kind of keeps growing.
How much would you say that teaching and working with your students influences the things that you do on stage?
I think it definitely does. Most every semester, and most every music technology class I teach—whether it’s electronic music composition, or Max, or digital audio production, or any of those types of things—there’s a critical listening section. I ask students to bring in recordings that they believe represent high qualities in recording, mixing, production, performance, experimentation, et cetera. And they always bring the coolest things. Usually they do, I should say. Sometimes there is that moment, “Oh, REALLY, you’re bringing that in?” But, usually they introduce me to the great music that I would never hear otherwise.
Well, one of my favorites that a student brought in was Drake and Travis Scott’s “Company.” It’s got these low sub-tones. When you listen to the beginning section, to me, it’s experimental music. It’s so cool. You know – there’s that whole thing where you side-chain a kick drum to a sine wave: Drake and Scott just put the focus on the low sine wave going “duuuuuu” in there. I just love that sound. And so I’ve started to use that sort of thing with the generative stuff, where I have these low things coming in.
So, if I go download all the KaiGen plug-ins, can I set it up in an ensemble? Can I make a band gig with it with my buddies on their laptops? Does it all work together?
Yeah, well, I mean… Hmmm. It depends on what your musical goals are. Trevor Henthorn (the other half of my duo Made Audible) and I use Live and Link with this stuff and do live performances. But it is definitely experimental – more groove than usual for me, but still all twisted-up. You get all the KaiGen going, with my Sample Player and Trevor’s TrevoScrub series, and it gets to be fun. By the way—Trevor just posted his new Max For Live plugin, the TrevoScrub-TxT. It uses the shell object to access the “say” command in the terminal on a Mac, so it is like a giant Speak ‘N Spell – well, Spell ’N Speak is a more accurate description. He has it set up so you can lock it to LFOs and beats in Live and stuff. It’s very cool; he has a version of it on his webpage you can download for free. He also has a “radio station” where his plug reads live feeds off Twitter that have the hashtag “#NoBanNoWall.” And so, it grabs all the tweets and then speaks them while the KaiGen-M and the KaiGen-R are playing basslines and drums behind it. It’s 24 hours a day, and it’s just the wackiest and most fun political thing.
What sort of reaction have you gotten from people now that you’ve made these patches available for download? Have you got any idea about how people are using them?
One of the first people to do anything with the KaiGen is my colleague at the University of Central Missouri (UCM), Eric Honour, who is a fantastic composer and improvisor who works in Max. He didn’t even tell me he was doing it. He took the New Tech Ensemble at the school, and they all hacked a bunch of KaiGen together in Max with the Link package. They have eleven performers on eleven laptops that are all synced via Link, and they each have their own speaker around this big hall. They are actually gonna do something with it for the President’s Gala, here at UCM. It’s spatialized in this big hall, all these synced sounds, and I actually don’t know exactly what they’re doing. He just told me about it the other day, and I’m totally stoked about that.
Which is the one that uses the fzero~ feedback loop?
That is the KaiGen-I. So much fun – every time I open the patch I can’t believe it works. It was inspired by the work of George Lewis (Voyager), and like George, I developed it at STEIM in Amsterdam. It then really blossomed after hanging out and working with Ritwik Banerji, who has this great improvising patch written in Max called “Maxine.” You can hear that on the pfMENTUM site. I looked at his patch and thought of a different way to handle the decision-making mechanism and the probability gates, so I came up with this piano-playing patch called “MyRA,” which has now been rebuilt and renamed KaiGen-I. It is such a complex patch to run inside of Ableton that I’m working to simplify it. It improvises with you, itself, or others. It decides to listen to (or ignore) itself and others. It picks and chooses between who it’s listening to and who it’s not, who it is reacting to, or not reacting to.
It is kind of crazy: there are listening modules, self-analysis modules (for listening to themselves), and group option modules. You put these modules all over your Ableton Live patch, so it’s listening to stuff and then feeding it back to the Master. The mood changes as you mess with the fzero~ settings, how big of a chunk it is analyzing, et cetera.
You could give behaviors and “moods” to everybody in the band. I love it.
It should be out soon – I need a free week to finish it. I think I’ll get to that over spring break. I want to simplify it so it’s just drag and drop into Live.
Yeah, it’s really fun though. I use it to follow my trumpet. I have this library of sounds that I recorded in the UCSD studio, where we recorded the inside of the trumpet without me playing it. Just the valves descending and ascending, and slides going in and out. When KaiGen-I follows the trumpet, it is like clouds of these mechanical noises buzzing and fluttering around me while I’m playing.
That whole fzero~ feedback thing … That’s the gift that just keeps giving. I can listen to that all day long.
Yeah. So other people are doing it now, too?
No, I think you’ve cornered the market, mate. Of course, I use it…
I put it on in coffeehouses with my headset. I have a piano, bass, and drums setting, and it’s improvising along with the clinking of the glasses, silverware, espresso machine, blenders. I actually have recordings of that somewhere.
Yeah, it sounds amazing. And hooking all that stuff up with really nice Kontakt sample libraries – so it’s like you’re playing a Bösendorfer grand, and you’ve got the really nice upright bass and all the toys – really works a treat.
Oh, it is a treat. Those libraries are fantastic – I use the Native Instruments stuff constantly. It is funny, I’ve had some pretty heavy researchers look at the KaiGen-I and say it shouldn’t work: that it should just play octaves after it starts listening to itself. The thing is, if I was just using numbers, that is exactly what would happen. But since the path involves real audio in each feedback loop, there is a chance for the space (real or artificial) to have a say. So, the audio is analyzed, the computer makes decisions, then plays it, and then that audio is analyzed again… if it was just putting the numbers back into the loop, errors from—and changes in—analysis would not happen as much without some intervention. The audio feedback loop acts like imperfect memory, like that child’s game “operator-operator” where changes creep in. You know, whenever you’re dealing with physical space, or the artifacts and imperfections of audio analysis, it’s going to affect what partial it’s grabbing and sending along as a number. So it’s going to be slightly different each loop…
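That difference between feeding numbers back versus re-analyzing real audio can be shown with a toy simulation. This is not the KaiGen-I’s logic – just a sketch of the principle, modeling the analysis imperfections as small random errors in the tracked pitch:

```python
import random

def number_loop(pitch, passes):
    """Feed the same number straight back each pass: it never changes."""
    history = []
    for _ in range(passes):
        history.append(pitch)  # pure-number feedback repeats itself exactly
    return history

def audio_loop(pitch, passes, rng):
    """Re-analyze 'audio' each pass: small errors accumulate, like a game
    of telephone, so the loop drifts instead of locking into octaves."""
    history = []
    for _ in range(passes):
        # the pitch tracker grabs a slightly different partial each time;
        # model that as small Gaussian analysis error (in semitones)
        pitch += rng.gauss(0, 0.2)
        history.append(pitch)
    return history

exact = number_loop(60.0, 10)             # stays at middle C forever
drift = audio_loop(60.0, 10, random.Random(4))  # wanders away from it
```

The accumulated error is the “imperfect memory” he describes: each pass hears a slightly different version of what it just played.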
Jeff Kaiser is a trumpet player, improvisor and composer. A long-time resident of Southern California, he has relocated to the midwest where he is an Assistant Professor of Music Technology and Composition at the University of Central Missouri.