Artist Focus: Mari Kimura
How did you learn about Max?
How did I first learn of Max? It was from the man himself, David Zicarelli. I think he was still working at Opcode Systems – 1991, I think – and still a Ph.D. student at CCRMA. Instead of going to a classical music festival as usual during the summer, I took CCRMA’s computer music summer courses. There, I found that I was not only the only violin player, but also one of the very few women there – although I didn’t care about that much. I bumped into lots of people like Paul Lansky, who was visiting from Princeton. He saw me with a violin in the studio and said, “What are you doing here?” and then very kindly took a keen interest in my work. That was a terrifying introduction to computer music, since I didn’t know how to turn on the NeXT computer – which was, in fact, my first computer!
I was introduced to the Stanford quad system at the amphitheater that John Chowning created. Someone at CCRMA showed me a terrific little piece of software called Quadrifolio – it was a little Mac program to pan sound around by drawing the trajectory. We used two Yamaha reverb units, placing two pairs for quadraphonic diffusion, using MIDI fader information to do the panning. So you could place the four speakers at whatever distance you wanted and then record trajectory and speed – it was a very early quad trajectory system. I happened to really love that little program and was very impressed by it; it just so happened that it was David who worked on the program, so that is how we met.
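Quadrifolio's actual code isn't shown here, but the basic idea – turning a point on a drawn trajectory into gains for four speakers – can be sketched with equal-power panning. Everything below (the function name, the corner labels, the [0, 1] coordinate convention) is an assumption for illustration, not the program Mari describes:

```python
import math

def quad_gains(x, y):
    """Equal-power gains for four corner speakers (FL, FR, RL, RR).

    x, y are a trajectory point in [0, 1]: x pans left->right,
    y pans rear->front. Each axis uses a quarter-sine (equal-power)
    crossfade, and the two axes are multiplied per speaker, so the
    total power stays constant wherever the point is drawn.
    """
    right = math.sin(x * math.pi / 2)   # 0 at hard left, 1 at hard right
    left = math.cos(x * math.pi / 2)
    front = math.sin(y * math.pi / 2)
    rear = math.cos(y * math.pi / 2)
    return {
        "FL": left * front,
        "FR": right * front,
        "RL": left * rear,
        "RR": right * rear,
    }

# Centre of the room: all four speakers get equal gain (0.5 each).
print(quad_gains(0.5, 0.5))
```

Recording the drawn trajectory then just means sampling (x, y) over time and replaying the resulting gain curves, which is what sending MIDI fader data to the diffusion hardware amounts to.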
So this is when he showed me how Max works.
What is your favorite Max object?
It’s the matrix~ object.
Why the matrix~ object?
Because without matrix~ I don’t think I can perform! It’s really meat and potatoes; I’m sure it’s not a very exciting answer to you. The selector~ and gate~ objects are things of the past, and they are not useful to me on stage except for showing some examples. Without the matrix~ object, it’s really difficult to perform on stage.
So you’re using that for routing?
Well yeah, not just for routing, but I wouldn’t use anything else that doesn’t go through a matrix~ on stage.
So you must be coming up on 25-30 years of Max usage, right?
Close to it. I started with Max version 2.2, I think.
Mari, how much is Max a component of what you teach at The Juilliard School?
Max is a tool, so of course I teach it, starting from “what is a toggle object?” and stuff like that. But ultimately what we want is to make music with it.
One year, I had a student who wanted to write something – when I was talking with him, I thought to myself, “No, he doesn’t need Max – he just needs automation.” He just wanted to do a sort of karaoke, playing along with changing effects on his flute. That was like one of my first Max pieces, which I did in ’93 or ’94. That’s what he wanted to do, so I said, “Okay, let’s forget about Max,” and we just drew a curve to play along karaoke-style with effects in Ableton Live – this was before the days of Max for Live. It’s about the musical concept – how students come up with the music they want, and which tool to use to realize their idea. If it doesn’t have to involve interactivity using Max, that’s perfectly fine with me. One year, I had a student who, in the course of talking with her about her work, turned out to just want to make ‘Sound Art’-type music – so we did everything in Logic.
We then added some processing to try to add something interesting in Max and then dubbed it onto Logic. So it’s not always interactive.
Whatever gets the job done, right?
Well, whatever outcome best fits the person’s artistic goal, actually.
So, Mari, tell me about your new album.
It’s called Voyage Apollonian. The name comes from the title track of the album. The first version was created in collaboration with my friend Ken Perlin, a computer graphics artist, SIGGRAPH prize winner, and Technical Oscar winner. He’s probably best known for an algorithm called “Perlin Noise”. Ken has a great blog, and he posted a really beautiful fractal animation using a fractal called Apollonian gasket.
He created the fractal transform in a very interesting way, so I made music for it. The transformation of the fractal image was very much like a journey, so we decided to call it an Apollonian ‘Voyage’. This album consists entirely of my recent interactive pieces using motion sensors. The violin bowing motion reflects my musical expression, which is mapped to processing parameters and triggers.
There are three pieces on this album other than my own, and they are Brazilian ‘jazz’ – compositions by the Brazilian giants Egberto Gismonti, Hermeto Pascoal, and João Bosco. I’m making a lot of transcriptions for the violin, and when I perform some of them, I play them interactively when I can. For example, I can trigger a ‘vamp’ section – for as long as I want, or until I’m ready to move on. Using the motion sensor, for example, I can program Max for Live to say, “when I’m ready to finish improvising, go to the main melody section if I hold a long note or bow for a certain duration within a pitch range of X to Y.” It is very seamless to both the performer and the audience.
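The rule Mari describes – advance from the vamp to the main melody once a note inside a chosen pitch range has been sustained long enough – is easy to sketch outside of Max. This is a hypothetical illustration, not her Max for Live patch; the class name, thresholds, and the simulated pitch-tracker events are all invented:

```python
class SectionTrigger:
    """Fire a section-change cue when a pitch inside [low, high]
    has been held continuously for at least `hold_seconds`."""

    def __init__(self, low_pitch, high_pitch, hold_seconds):
        self.low = low_pitch        # lower bound of cue pitch range (MIDI note)
        self.high = high_pitch      # upper bound of cue pitch range
        self.hold = hold_seconds    # required sustain duration
        self._held_since = None     # time the current in-range note started

    def update(self, pitch, time_now):
        """Feed the latest detected pitch; return True when the cue fires."""
        if pitch is not None and self.low <= pitch <= self.high:
            if self._held_since is None:
                self._held_since = time_now
            return time_now - self._held_since >= self.hold
        self._held_since = None     # note ended or left the range: reset
        return False

trigger = SectionTrigger(low_pitch=67, high_pitch=72, hold_seconds=2.0)
# Simulated pitch-tracker output every 0.5 s: a G (MIDI 67) held long
# enough finally fires the cue on the last event.
events = [(0.0, 60), (0.5, 67), (1.0, 67), (1.5, 67), (2.0, 67), (2.5, 67)]
fired = [trigger.update(p, t) for t, p in events]
print(fired)  # [False, False, False, False, False, True]
```

In the real setup the same condition could also be combined with bowing-gesture data from the sensor rather than pitch alone, as the quote suggests.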
When did you start working with sensors? Was that immediately when you first started with Max, or did that come later?
In 2006, I was on the paper jury for the NIME (New Interfaces for Musical Expression) conference. I was reading through papers, and there was one submission from IRCAM’s Real Time Interaction Team.
They had a motion sensor called the Augmented Violin system – a six-axis motion sensor using XBee, containing accelerometers and gyroscopes. After seeing their NIME submission, I was really curious, so I contacted them and asked if I could see their sensor and learn how it works. At NIME 2006 – which was at IRCAM in Paris – I met them and started to talk to Frédéric Bevilacqua, the head of the Real Time Interaction Team. We started to collaborate in 2007. In 2010, they set up a Composer-in-Residence in Musical Research program at IRCAM, and I won the ‘competition’ and was chosen as the first one to be invited for this program. I spent three months at IRCAM working with Frédéric Bevilacqua’s team.
At IRCAM, I used their sensor called “MO” (Modular Musical Object), and this is when I started to write music with motion sensors. From 2010 to 2015, I used “MO” to perform my works. Now the technology is moving forward; until now it was a six-axis sensor, and now many people are using nine axes, adding magnetometers to the gyroscopes and accelerometers. Since 2015, I’ve been working with Liubo Borissov, a professor at Pratt Institute and one of my long-time collaborators. Liubo is a media artist and a specialist in interactive media, with a Ph.D. in Physics from Columbia University. Liubo and I are using an original, Arduino-based sensor built on Adafruit hardware. I’m currently using a prototype we now call “µgic” (pronounced ‘mugic’).
So the pieces on my new CD “Voyage Apollonian” are created using both “MO” and “µgic” sensors. In concert, I now perform all my pieces with our new “µgic”.
Wow, that’s excellent! It’s like a first version of that sensor made it onto the pieces of the release.
I think a lot of people are using sensors now. I am interested in having a device like MO and µgic, which we use not just to trigger something – as a controller, a pedal, or a Wii remote would – but, more importantly, to interpret human expression through tracking body movements. Liubo and I are now bringing the µgic sensors to my summer program “Future Music Lab” at the Atlantic Music Festival in Maine. Every year, four chosen performer/composer Laureates get to use µgic with their instruments. This has become a very important ‘lab’ for me to learn about motion using instruments other than the violin.
Right. Wow, that’s very interesting Mari. So apart from sensor development, are there any other areas that you’re kind of pursuing in regards to Max?
Because of the current political situation we’re in, traveling is going to be more difficult, and I’m getting older – I’m ancient! So maybe telematics is the way to go, now that the technology is really getting good. Initially, I was never really a big fan of it because the technology was not up to par. But now I’ve seen some excellent examples of distance teaching and performances, and I think it might finally be time for me to jump in.
That’s a really interesting view of it. I have to say I haven’t thought about it myself much, recently. But I remember trying to do some telematics stuff 10 years ago or more, and it was bad. There were always these long delays in streaming…
Yeah, it was not good…
It was very novel for a while, but – of course – when there’s no real time immediacy, it becomes uninteresting fast.
Yeah what’s the point, right?
Yeah… It’s like I might as well make something, send the files away with Dropbox or something, and wait for them to come back.
I just saw a really, really good piano lesson, with two Disklaviers, at YAMAHA Artist Service in Manhattan: a piano master teacher from New England Conservatory was giving a lesson to a student in Georgia. We were sitting in New York, and Makia Matsumura – who is an Operations and Technical Specialist at YAMAHA – had this instrument called the Disklavier CFX. Makia had set up video, and high-resolution MIDI data was being sent to Georgia via YAMAHA’s proprietary “Remote Lesson” technology to another Disklavier CFX piano. Makia said that since the video transmission rate is slower than the digital information sent to the pianos, they had a really clever sync program running. As a result, the video, the sound, and the key movements of the student playing in Georgia were reproduced ‘physically’ on the Disklavier CFX in New York in perfect synchronization. I asked the teacher how he felt. He said, “This is exactly as if the student is sitting right in front of me, and the piano sounds exactly the same as a ‘live’ lesson.” The technology is so advanced now that it was kind of spooky, in a great way. The only thing missing was the flesh-and-blood human being; everything else – the musicality, the expression, everything – was there, except that she wasn’t. I believe it is this kind of sensitive care and attention that allows technology to truly breathe life into our musical lives.
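YAMAHA’s actual sync program is proprietary and not described here, but the general technique it implies – artificially delaying the faster MIDI stream so it lands together with the slower video – can be sketched. All names and numbers below are assumptions for illustration:

```python
import heapq

class SyncBuffer:
    """Hold fast-arriving MIDI events until the measured video latency
    has elapsed, then release them in time order, so key movements on
    the local piano line up with the remote video feed. A hypothetical
    sketch of the delay-to-match-latency idea, not YAMAHA's system."""

    def __init__(self, video_latency_ms):
        self.latency = video_latency_ms
        self._queue = []  # min-heap of (release_time_ms, event)

    def push(self, event, arrival_ms):
        # Schedule the MIDI event to fire when the matching video frame
        # (sent from the far end at the same moment) should arrive.
        heapq.heappush(self._queue, (arrival_ms + self.latency, event))

    def pop_due(self, now_ms):
        """Return all events whose release time has passed."""
        due = []
        while self._queue and self._queue[0][0] <= now_ms:
            due.append(heapq.heappop(self._queue)[1])
        return due

buf = SyncBuffer(video_latency_ms=120)
buf.push("note_on C4", arrival_ms=0)
buf.push("note_off C4", arrival_ms=50)
print(buf.pop_due(now_ms=100))   # nothing released yet: []
print(buf.pop_due(now_ms=200))   # both events, in arrival order
```

A real system would also have to measure the video latency continuously and adjust the delay, but the buffering principle is the same.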
The teacher was playing the examples in New York, which were ‘physically’ playing on the Disklavier CFX in Georgia. The video and the live piano were completely in sync, so it was very immediate. It was an amazing, eye-opening, almost visceral experience – I thought, “Well, if this piano teacher was able to teach a student like that, and the student had a good experience with it, then maybe it’s finally time for all of us to jump in.” Before, I didn’t want to waste my time with it, because it was just not worth the trouble. But now it seems like a good idea.
And by the way – Makia Matsumura was my first student at Juilliard to use MSP with a Disklavier in a Juilliard concert!
Yeah, especially with gigabit Ethernet and the like now… I mean, we can send a lot of data down the line really quickly now.
Yes, so it might be getting to the era where – if they’re going to close the borders and build a wall – then we’ll just go digitally. Art should not have borders.
We’ll go over the borders and across the airwaves and through the fiber optics and satellites!
Here’s some more of Mari’s work….