In the last 20 years William Kleinsasser has received national and international recognition in competitions, conferences and festivals by pushing technology to its limits. The c74 CD Available Instruments showcases the composer’s ability to adapt digital technology to the orchestral environment.
In this interview with David Zicarelli, Kleinsasser discusses the fundamental challenges facing computer music, and connects the dots between yesterday’s tape music and his modern interactive compositions for computers and traditional acoustic performers.
Music and Computer History
What made you interested in a career writing music?
There was always music in my house. I grew up to be a performing musician but it was the turntable and tape recorder that first brought music into my home. As far back as I can remember, I have been fascinated with microphones, tape recorders, mixers and speakers. In the late 60s, when I was about 8 or 9, my parents bought a reel-to-reel tape recorder with stereo sound-on-sound recording. I began recording then and I bought this great blue book filled with radio schematics and explanations of electronics. That’s about the same time that my mother got me a banana bicycle seat – just like the Cycling ’74 logo – for the metal-flake green Stingray that my father and I built from spare parts.
I was fortunate to go through the public school system in Eugene, Oregon in the 1970s, which encouraged exploration of interests and allowed us to build our own course of study. Beginning in junior high school, I was free to take a rich array of music courses along with my other studies. Around that time, I began playing double bass and continued bass lessons for about fifteen years. Even before college I had already taken courses in composition, conducting, rock music, music history, theory and jazz ensemble while playing bass in jazz ensembles, the string chamber ensemble and youth symphony. Looking back I recognize how unusually lucky this was. By the time I was a junior in high school, I was composing music for our string ensemble and jazz band and conducting the performances. During those years, I learned that I could make music and that music could possibly offer a professional life. This was all in Oregon in the 1970s before political pressure closed down that approach to education and replaced it with more competitive and pragmatic outcomes-based models.
In my late teens, my immersion in music was of two distinct kinds: the first rooted in concert music composition and performance and the second independently exploring recording and sound manipulation. I still maintain these two streams of work.
I studied bass and composition at the University of Oregon. The University had an electronic music studio which housed an Arp 2600, a Moog modular synthesizer and two four-track tape decks, all of it feeding through a simple mixer. I took classes in electronic music learning classic analog techniques working with Allan Strange’s book.
From 1984 through 1991, I lived in Bloomington, Indiana and worked toward Masters and Doctoral degrees in composition at Indiana University. Those were intense years studying composition with Fred Fox and Eugene O’Brien and working with Harvey Sollberger as his technical assistant for Indiana University’s New Music Ensemble. I had the pleasure of attending master classes and seminars with a steady stream of visiting composers too numerous to list here. I was particularly struck at that time by Harvey Sollberger’s performance of Roger Reynolds’ Transfigured Wind for flute and computer-generated tape, and later by New Music Ensemble performances of music by York Höller, Todd Machover, and David Felder in which acoustic performance was integrated with computer-generated tape. As the sound engineer responsible for tape synchronization and mixing during performances of these works, I had the opportunity to learn this music from within. I began focusing my composition on integrating acoustic and electronic music and have continued since then.
How did you become interested in using computers?
In 1983 I remember a composition seminar at the University of Oregon when a fellow student brought in a tiny Atari computer and talked about how computers would soon replace analog synthesizers. I had never really thought about computers before then. In the early 80s, a friend showed me how he could program control of FM and additive synthesis on an Apple IIe computer. He also demonstrated how FM was implemented in his DX7. I was aware of work going on at IRCAM, UCSD, Stanford, and other similar centers at that time and had been to a concert featuring the Synclavier, all of which were beginning to influence my compositional thinking.
At Indiana University I took a computer music programming course offered by Gary Wittlich. We learned BASIC on PCs and I wrote a program that would print out all of the possible voicings of any pitch collection across the range of an orchestra. I learned how to print graphics to the screen and output sound. It took a maddening number of hours but I was drawn to it. By 1986, I had a DX7 synthesizer but it didn’t take long to hear the edges of the FM universe. I realized that programming computers could free me from the marketing and limitations of hardware – Yamaha had replaced the DX7 with the DX7II, and on and on…
I bought a Macintosh 512Ke with MIDI sequencing software, a DX7 editor/librarian, and Deluxe Music Construction Set. In those days, when you bought a computer, you went into a showroom and they served you coffee while a salesperson sat with you, answered your questions, and showed you software that came with impressively printed documentation. The Mac had less than a megabyte of RAM. Hard drives were still prohibitively expensive.
PBS aired a television program in the mid 80s on the work at IRCAM and Boulez’s Répons. It was fascinating to see and hear how the soloists, ensemble and computer all interacted in real time as the piece was performed. In an interview, Boulez described how the piece was one curve influenced by another curve influenced by another. I was taken by both the conceptual and sonic worlds of this music.
In 1986 and later in 1988, I participated in the June in Buffalo summer festivals. At these festivals I met David Felder and was introduced to his works, like Boxman, which integrate live performance, DSP, and video. In 1986, David told me about Digidesign’s software for digital audio editing on a Macintosh. This was the beginning of SoundDesigner and eventually ProTools.
In the Computer Music Studio at Indiana, we were running Sound Designer and Jeff Hass had begun working with Csound. Jeff introduced me to the idea of sample-level programming, and between 1991 and 1993, I dove into teaching myself Csound and hoped that processors would soon become fast and inexpensive enough to allow for interesting work on Macintosh or Intel computers. It was only a few years before that was true.
In 1992 I was hired to teach at Towson University in Baltimore. By 1994, I was doing real-time Csound work on Unix machines in the university’s computer center. The National Symphony invited me to present part of a piece I had written for symphony orchestra and electronic music, Millennium’s Edge II. The concert demonstrated how the symphony orchestra had evolved through the centuries by adding new instruments, the most recent being digital electronic music and computers. The electronic music for that piece was created in Csound and loaded into a digital sampler as sound files. The files were synchronized with the performance using tap tempo sync in MOTU’s Performer. I was intrigued by the possibilities of combining large ensembles with computer music, but I needed a more sophisticated hardware/software system that could handle live signal processing. By 1996, we had established a studio in the department of music at Towson supporting integrated computer music composition.
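The tap-tempo idea – deriving a live tempo from a stream of taps so that pre-rendered material tracks the conductor – can be sketched very simply. This is only an illustration of the general technique, not MOTU Performer’s actual implementation; all names here are hypothetical.

```python
# Hypothetical sketch of tap-tempo estimation: the average interval
# between the most recent taps yields a tempo in BPM, which could
# then drive the scheduling of pre-rendered sound files.

def tempo_from_taps(tap_times, window=4):
    """Estimate BPM from a list of tap timestamps (in seconds)."""
    if len(tap_times) < 2:
        return None
    recent = tap_times[-window:]                       # keep only the last few taps
    intervals = [b - a for a, b in zip(recent, recent[1:])]
    avg = sum(intervals) / len(intervals)              # mean beat duration (s)
    return 60.0 / avg

# Taps arriving every half second imply 120 BPM
print(tempo_from_taps([0.0, 0.5, 1.0, 1.5]))  # 120.0
```

Averaging over a short window smooths out small irregularities in the taps while still letting the tempo follow the performance.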
How did you become acquainted with Max and the ISPW?
Beginning in 1988, I composed several pieces integrating digital music on tape with live performance. Interactive systems were moving beyond the performer+tape model. I began searching for ways to integrate electronic music with live performance without the sync to tape issues that limited the performer in many ways. I attended the 1990 SEAMUS conference at Louisiana State University where I presented a piece, Reflective Image, for oboe, saxophone and tape. During one of the concerts, Cort Lippe presented a work for harp and computer. Cort and I began talking when we were both delayed at the Atlanta airport. He gave me a two hour crash course in DSP theory. I think it was the following year that I attended a presentation by Cort on his use of Max and the ISPW in his clarinet piece at a conference at the University of Illinois.
Before the mid 1990s, my focus was on the use of samplers and sequencers as well as Csound and Unix-based programming for my computer music work. By the mid 90s, I had composed pieces that used Max for performance control of samplers and audio file playback. One of these, Free Shadows, was for piano, Disklavier and computer, which I wrote for Dan Koppelman and Ruth Neville – duo runedako. Dan and Ruth are compelling performers; they expand their musicianship by integrating traditional skills with contemporary music, while engaging challenging technology musically. Later, in 1998, I composed Available Instruments for Dan.
In the fall of 1995, I attended the ICMC in Banff and spoke at length with Paul Koonce. Paul had begun working on an SGI and he generously introduced me to many of his programming approaches that used phase vocoding and other similar methods. Paul helped me learn the Unix system administration environment as well. By 1997, in addition to Max, I was working on an SGI O2 system at Towson with Miller Puckette’s Pd software, real-time Csound, and other DSP software. I ran Max which, through MIDI, controlled the SGI, which was doing the signal processing. I was writing Csound FFT resynthesis, pvoc, and convolution instruments, and other methods of introducing one sound’s internal components to another. It was not long after that MSP was released and the Macintosh was able to handle an integration of MIDI, DSP, and video. I can still recall the thrilling feeling of convergence in 1998 when all of these tools found common ground in technology that was readily affordable.
Back in 1993, I began planning several large integrative pieces. In 1994, I received a National Endowment for the Arts grant to compose two large works for soloists, chamber orchestra and computer. I set out in 1995 to compose these two pieces using an integrative model that I developed with the help of a grant from Towson University. These two pieces, Concerto for saxophone, chamber orchestra and computer and Double Concerto for viola, cello, chamber orchestra and computer, were completed in 1995 and 1996 respectively. I spent some time after that writing the software and finishing the computer music for the concertos. I was reconciling work in Csound, Pd, and other software into the Max/MSP patches that present the computer music in those works.
In 1997, saxophonist John Sampen presented the first performance of Concerto for saxophone, chamber orchestra and computer at Towson University during a 20th Century music festival with Paul Rardin conducting. In 1999, Paul and I produced the first performance of the Double Concerto at the Towson University festival with Christian Colberg and David Shumway as viola and cello soloists, both performers on this CD. By then I had developed, in Max/MSP, an assembly of my previous DSP and spatialization software, which was integrated into the 8-channel surround sound system in our Concert Hall. The performer+tape model had been fully replaced by an integration of live DSP and overlaid pre-composed digital audio files in a multi-channel distribution environment. This was essentially due to the work of those who created Max/MSP and made it possible for composers like me to learn its power from within our own musical worlds.
Do you think of yourself as a composer primarily in the electronic medium, or is there something more general you are trying to do in which electronics play a role from time to time?
This is a question that I have asked myself over the years. While I have created music that is for electronic playback alone (the tape piece paradigm), I think that I am essentially an integrative composer. I’m old enough to have deep roots in an approach to music that fully values excellence in traditional acoustic performance. But I am also young enough to feel, without threat, that traditional acoustic music is enriched through the integration of electronic transformation. I see this as an expansion model and not a replacement model. What I do with digital music is not the same as adding a new instrument; it’s more complex than that. I see digital music offering an entirely new layer increasing creative dimensions. The layer crosses levels and the metaphor is rich in implicit meaning and potential.
I see us living at a time in the development of music when acoustic and electronic music converge and the musical interaction of these streams offers important social and cultural metaphors. Many composers now speak of a blurring of the difference between composition and programming. Creative tools like Max/MSP have brought new concepts and methods to the musical endeavor. In a piece like my Double Concerto, there is the possibility to engage many levels of interaction, difference, integration, and transformation. I don’t see this as a question of composing electronic music or acoustic music but as composing music in general.
The fact that the piece exists both as a work for live performance and as a recorded performance on CD further resonates with that idea. When listening to the CD, a listener is confronted with the fact that the music is technologically fixed as an artifact but what is heard could not exist in any other form. When listening to the piece in a live performance the listener is offered sound that is more than what the acoustic instrumentalists are playing and the very notion of live performance is drawn into question. How can one choose which of these two engagements is more significantly reflective of our time?
It was interesting to learn about your background in recording. What were you looking for in doing new recordings of the pieces on the CD?
This was partly practical and partly musical. The only recordings I had prior to this project were the archival recordings from performances of the two pieces. In the case of the Double Concerto, this was a live recording with a remixing of some of the overlaid computer music. While it was a wonderful performance, I never intended to create a published CD with that recording and this was the understanding of all of the performers involved as well. Recording the piece anew allowed for higher quality performances from everyone, including the computer music and mix.
In performance, the piece uses microphones on every instrument. This is to bring the sound of the orchestra and soloists into the sound field of the whole piece, projecting a strongly present image to the audience, as though they’re inside the ensemble. The microphones also bring each instrument, especially the two soloists, into MSP, where they are processed and mixed back into the overall sonic fabric.
With Available Instruments, I had similar concerns. Dan had performed the piece on several occasions so I have excellent archival recordings of those performances but recording the piece for this CD allowed us to create an ideal setting. Unlike the Double Concerto, Available Instruments does not use live signal processing; the computer music was created in a studio environment and is based on recordings of the piano music. About one-third through the piece, the Disklavier, heard in the center-right of the stereo image, plays a long quiet passage that is a realization of a long stream of sketches from the composition of the piece. In this way Available Instruments offers its creative history through the mechanical Disklavier, a processed memory of its performance through the computer music, and a present-tense realization through the live piano performance. In both pieces the performances on the CD far exceed the performances of any recording I had from live concerts.
The Challenge of Interaction
I want to ask you about your concern with what you call “interaction.” I assume you are speaking about the way that the timbres of acoustic and electronic sounds interact as well as the way the performers interact with the live electronic system you’ve created, yes? Are there ways in which these two types of interaction are related in your mind? Perhaps there are other senses of interaction you had in mind?
Interaction is such a large musical word with so many important meanings–from the interaction of audiences and music to the interaction of software and imagination. I appreciate that you have focused on two of the most important ideas in these pieces. Since the electronic sounds are all born of the acoustic performances, either through studio techniques in Available Instruments, or with a mix of studio electronic music and live processing in the Double Concerto, I hope that the acoustic and electronic sounds are clearly related. I have attempted to create them with potential for audible interaction, connection, and reference. Much of what we might call meaning is bound up in those relationships. I see this as similar to the way past composers have used musical developmental processes. Electronic transformation opens up these other dimensions for development, which are not even new, since electronic music is now many decades old. The ideas of time scale and temporal reference are two important developmental dimensions in these two pieces.
The interaction of musicians with the electronic system is also worth discussing. In Available Instruments there really isn’t a performance interaction the way there is in so many current pieces that use Max/MSP. In Available Instruments, the musical flow is under the control of the performer since the electronic music is made in many tectonic layers that overlap and adjust to the performer’s tempo and timing. Since the computer music is pre-determined and digital sound files are mixed with the piano as the performer plays, the performer is not free to alter the computer music in any way other than subtle masking and one-way responses to the composite musical mixture. Playing pieces like these involves its own kind of musical interaction. Many composers, when speaking of works for instruments and tape, choose not to see this as a limitation but describe it as a unique performance practice – a different kind of musical virtuosity – that brings energy to those pieces.
The question of interaction also involves the problem of system latency. How much time do input, output, and processing take? How fast do the hardware and software need to be in order to feel like one is performing a musical instrument and not engaging in a request-response cycle?
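A rough sense of the latency floor can be had from arithmetic alone: buffered audio I/O imposes a minimum delay determined by the buffer size and sample rate, before any processing or driver overhead is counted. The figures below are illustrative, not measurements of any particular system.

```python
# Minimum input-to-output latency imposed by audio buffering alone.
# Assumes one buffer of delay on input and one on output (stages=2);
# real systems add processing time and driver overhead on top.

def io_latency_ms(buffer_samples, sample_rate, stages=2):
    """Latency in milliseconds from buffering: stages * buffer / rate."""
    return 1000.0 * stages * buffer_samples / sample_rate

# A 256-sample buffer at 44.1 kHz: about 11.6 ms round trip
print(round(io_latency_ms(256, 44100), 1))
```

Shrinking the buffer reduces this delay but raises the risk of dropouts, which is exactly the instrument-versus-request-response tension described above.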
In the Double Concerto the computer is presenting pre-recorded music but it’s also involved in near real-time processing as the music is performed. The input-output latency for the system I used was quick enough to give the impression of live processing. The processing, though, is mainly based on recording the performance and recalling parts of it in varying fidelity to the original sequential order and time/pitch scales of those parts. This means that the performers are not really playing the computer processing.
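The idea of recording the performance and recalling parts of it at varying time/pitch scales can be sketched minimally. This is an illustration of the general technique (variable-speed playback from a recorded buffer, where pitch and duration change together), not the composer’s actual Max/MSP patch; all names are hypothetical.

```python
# Illustrative sketch: record a performance into a buffer, then
# recall a segment at a different playback rate. Simple resampling
# changes pitch and duration together, like variable-speed tape.

def recall(buffer, start, length, rate):
    """Read `length` output samples from `buffer` beginning at
    `start`, stepping through the recording at `rate`
    (1.0 = original speed; 2.0 = double speed, up an octave)."""
    out = []
    pos = float(start)
    for _ in range(length):
        i = int(pos)
        if i >= len(buffer) - 1:
            break                         # ran off the end of the recording
        frac = pos - i
        # linear interpolation between neighboring samples
        out.append(buffer[i] * (1 - frac) + buffer[i + 1] * frac)
        pos += rate
    return out

recording = [float(n) for n in range(100)]   # stand-in for a recorded performance
octave_up = recall(recording, 0, 40, 2.0)    # same material, half the duration
```

Reordering which segments are recalled, and at which rates, is what varies the “fidelity to the original sequential order” of the performance.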
This raises the question of whether the computer system is itself a musical instrument that is performed as such with interactive gesture/response feedback and expression. I feel that this is one goal of my work in Max/MSP, but that goal is not the main concern in these two pieces.
I’m after a fundamentally different musical idea than the computer instrument. What I’m working toward engages the overall notion of live performance in dynamic tension with stored and recalled memories of performance. I see this as an exciting musical opportunity that raises intriguing and metaphorically resonant questions about the nature of interaction between traditional performers and new technology. In the pieces on this CD, the performers are both directly connected and significantly disconnected from the computer music. This paradox represents an important internal tension in these pieces; it’s part of the story that they tell. Another way of stating this is that if the computer system were entirely responsive to the performer’s gestures and control, then the risk is that the computer simply becomes another musical instrument or effects device. In these two pieces, I have attempted to carve out a different role for the electronic music–one in which the performers collectively vitalize a complex memory and transformation system. I see this as one response to the challenge of composing idiomatic computer music.
I am also deeply interested in approaching this whole question from the point of view of developing the computer into a musical instrument or meta-instrument. One of the most interesting challenges to convincing traditional performers of this possible future is the understanding that with nearly all traditional acoustic instruments there is a direct connection between the performer’s contact with the instrument and the resulting vibrational energy. The brass player uses her own lips as the source of vibration, the oboist feels the buzzing reed and alters its properties based on those feelings, the cellist contacts the string and, through those contacts, directly influences the sound made. With keyboard instruments and percussion, musical expression is invested in the initial strike, but it is still present in significant ways. It is this very contact that ensures that musical instruments, to be performed well, require intense, dedicated study and practice. We should listen closely to what this offers, for it’s why people spend entire lives learning to play musical instruments with care.
If we imagine a computer developed into a musical instrument capable of similar expression, we quickly come upon the fundamental difference: computer music disconnects the source of vibration from the gestural contact of the musician. This is both the challenge and potential of the instrument. With computer music the expressive gestural contact can be mapped into seemingly endless resultant sounds. There’s potential for a musical ecology that conserves effort in this model. It could involve learning how to be musically physical free from the specific limitations and contortions required by an instrument that is strongly coupled to the acoustic demands of its own sound; imagine examining how humans wish to move musically and using these motions as input to expressive instruments rather than those imposed by the construction of the instrument. These expressive gestures could be directly connected to resulting sounds in computer music and the mapping can be flexible and dynamic, unlike nearly all of our traditional musical instruments. Certainly these are not original ideas and many people are currently working on clarifying and realizing these possibilities. Embracing this future requires a change of perspective and, like most changes, this entire idea must be subject to careful, disciplined consideration lest we risk losing more than we gain. This discussion also leads to a consideration of the relationship between difficulty and quality. If it’s easy to play these new musical instruments, will they still be respected as excellent? The entire notion of what constitutes musical quality and value floats challengingly in this question.
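The “flexible and dynamic mapping” of gesture to sound can be made concrete with a tiny sketch: the same physical gesture (here a normalized 0–1 controller value) is routed to different synthesis parameters, and the routing itself can change on the fly. All names and parameter ranges here are hypothetical, invented purely for illustration.

```python
# Hedged sketch of dynamic gesture-to-parameter mapping: one gesture,
# many possible sonic meanings, with the mapping swappable at runtime.
# Parameter names and ranges are invented for this example.

def scale(value, lo, hi):
    """Map a normalized 0-1 gesture value onto a parameter range."""
    return lo + value * (hi - lo)

mappings = {
    "breath_to_cutoff":  lambda v: ("filter_cutoff_hz", scale(v, 200.0, 8000.0)),
    "breath_to_density": lambda v: ("grain_density",    scale(v, 1.0, 50.0)),
}

active = "breath_to_cutoff"
param, val = mappings[active](0.5)       # one gesture...
active = "breath_to_density"             # ...remapped on the fly
param2, val2 = mappings[active](0.5)     # same gesture, new meaning
```

An acoustic instrument fixes this mapping in its physical construction; here the coupling between motion and sound is a design decision that can change mid-performance.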
When you are working on interaction in the timbral sense, is there a way to “preview” what the result will sound like when you get both the acoustic instruments and the electronics together? This would seem to be a challenge for anyone working in this area.
This is a wonderful question because it seems simple at first but cannot be answered without broad consideration.
Composing both pieces, but especially the Double Concerto and the piece that preceded it, the Concerto for saxophone, chamber orchestra and computer, involved composing many different layers of connected music and software development. Despite the many modeling benefits offered by current technology, there is still a chicken-and-egg problem in all of this. It’s inevitable that the true musical composite exists only in the imagination for long periods of time. The process requires an optimism that the whole will emerge and be eventually realized. I find composing to be fascinating because it involves learning to seek, engage, and carry through on those optimistic projections and the pleasant surprises that come with the process.
One of the joys of working with electronic music when I was younger was that it enabled me to work directly with the sound rather than with abstract representations of performance instructions which is what music notation is all about. I continually discuss this with composition students; it is necessary to imagine and compose the sound and not just the notation of music. When I was younger, I thought of this as being like a painter. With electronic music, I could realize-while-making rather than limiting my previewing to imagined inner hearings or abstractions offered by other modeling techniques. But this contact comes with a cost. The many decades of electronic music have taught us that music which relies entirely on a single creative imagination can suffer from reduction when compared to music that is open to, and enriched by, interpretive and creative realizations by many musical imaginations. An analogy can be made to the difference between a play written for interpretive performance by directors, actors, lighting designers, costume designers, and set designers, and a film written, acted in, shot, and edited by a single person. Total creative control comes at the cost of potential for richness and flexibility over time. What is obviously missing from this analogy is the most common model of collaborative film making which engages the spectrum between the analogy’s extremes. In most film making there are creative collaborations on many different levels and the medium offers the ability to exceed the limitations of live performance. Integrating the film-making model into the live theatre model offers compelling potential. If this composite analogy were brought into the context of non-narrative music then it’s another way to describe what I am attempting in these two pieces.
So this notion of previewing, or pre-auditioning, is not merely of practical concern. If the musical world of interpretive performance is engaged, then there is no complete way to pre-audition the final result since at least part of the value is in leaving some elements open to change from performance to performance. What these open elements are, and how broadly they impact the identity of the work is a matter of creative decision which changes from person to person, piece to piece, context to context.
This question can also be answered by considering the preparation and rehearsal process for the first performance of the Double Concerto. Once rehearsals had begun, all aspects of the piece were fully composed but it remained to be heard exactly how the entire composite would fit together. In rehearsals, the performers learned how their individual music interwove and influenced the larger whole. This is also when the computer processing was fully introduced, which allowed the performers to better understand and influence that greater dynamic as well. It wasn’t until the concert itself that the total sound of the piece was realized.
If a piece is composed to involve live performance, any pre-auditioning must be clearly understood as modeling since the performer, at the time of performance, becomes responsible for creating the present-tense realization of the music. In pieces where the computer music is tightly interactive and influences moment-to-moment performance decisions, the preparation and rehearsal phases require learning the interactive nature of the expanded instrument so that it can be played freely and musically.
This entire process of creating, learning, interacting and performing pieces like these is richly flexible and can engage many different models of pre-auditioning, fixed realization, presetting, experimentation, pursued surprise, and intended, controlled outcomes. This is one of the reasons why I am attracted to it and to people who are engaging these challenges.
A lot of musicians are attracted to technology because they think it will make their work easier, but you seem to be doing ambitious projects that are made more difficult by their inclusion of technology.
Some things gain from being made easier. Some things are difficult for good reason. The challenge is discovering and continually understanding which is which. I don’t think there is anything that I have done technologically that has directly replaced one difficult process or outcome with an easier one. My aim is not to ease but rather to open opportunities for extraordinary experience. I would like to bring listeners into compelling music that is clearly inviting and resonant with the mysterious promise of more than what is immediately heard.
My wife has joked with me about the practicality of composing pieces that seem to require my actual presence for each performance. The upside of that is that I get to work with wonderful musicians in very direct ways. The downside is that it isn’t a practical path to music that out-travels and outlives its composer. There is an irony in all of this composing, performing, recording, technological development, and metaphorizing.
Daniel J. Boorstin, in his book The Creators, offers a comparison between a Roman stone building, which was intended to last, as is, for centuries, and a Japanese temple, which, made of wood, disintegrates naturally after a few decades. Boorstin offers that it is the building itself which is maintained for the Romans while it is the ability to make buildings which is maintained for the Japanese. In both cases there are techniques to ease the making but these are balanced with the greater goals of care, quality, reverence, beauty, and optimism.
Do you find the technology you’re working with now to be limiting your aspirations?
One expression of my gratitude to those of you who create these new opportunities is that, while my creative aspirations will always be limited by many factors, the musical potential that you bring to us has opened unexpected, compelling, and inspiring possibilities. Do I wish music technology were faster, less market-focused, more richly understood, less feared, more responsive to ecological perspectives? Sure. Who wouldn’t? I’d have use for a real-time 3D graphic object that offers dynamic point-of-view while rendering colorful, textured objects from live data input. I would also have great use for a polyphonic pitch-tracker that was 100% accurate regardless of the musical and ambient context. But those improvements have little to do with ease and much to do with the interaction between imaginative, talented people and the world they are born into. So we continue to do what we can with what’s available to us.