Experiences from “Welcome Sound”
Many of us are invited to perform in unique circumstances – it’s a part of the Digital Media life. Recently, we’ve been featuring some interesting examples of Max-based work, including Andrew Benson’s work with M.I.A. and Dana Karwas’ installations. So when I was asked to play with an electronic music All-Star Band, I couldn’t help but document the experience.
Rather than try to tell the story myself, I’ve asked each of the participants to write up a few paragraphs on their gear and their experience. Here are their stories:
Terry Pender: Gigs come and go, but PGT’s performance in Roosevelt, New Jersey was truly one of a kind. We were performing at bandmate Brad Garton’s home, where I’ve played and recorded on numerous occasions, so I felt right at home. With the addition of Darwin Grosse and Dan Trueman, our ranks had swelled to five, so Dan and I decided we would play acoustically, without our laptops, in order to avoid an electronic meltdown.
Since the show was close to my home, I was able to bring quite a few instruments to make sure I had plenty to explore over a four-hour performance. Besides my usual Gibson F-5 mandolin, I brought along a Martin D-35 acoustic guitar, an anglo concertina, two pennywhistles, two harmonicas and an mbira. I played these instruments through a Shure SM-81 small diaphragm condenser microphone into my Mackie 1202 mixer. From there I split my signal, sending a feed to both Brad and Gregory, who were free to sample, edit, and process my sound, as well as use it as a source of data to be mapped to various parameters on their own instruments. Brad controls the final house mix and determines the balance between the clean and processed mandolin sounds. It’s this live mapping and mixing that gives us a truly interactive format. At some point, I lose complete control over what happens to the material I generate on my instruments or how it may be mixed into the overall group sound.
The setting itself was simply magnificent. We were actually playing in a tiny clearing in the woods behind Brad’s house. It was a beautiful day and we were sitting beneath a canopy of leaves in the shade on folding chairs. There was a very long extension cord run for power, and the live sound of the mandolin and fiddle rang out above the sumptuous electronic underpinning, in subtle counterpoint to the sound of the birds and the wind rustling through the leaves.
Throughout the afternoon, a steady stream of families walked along a long, semi-circular path that wound its way in front of us. People were sitting in lawn chairs and on benches in the woods, listening for as long as they liked, taking pictures and chatting with one another – truly the way music should be enjoyed. Once they felt they had heard enough, they were free to wander off to experience another installation or performance. Each home that hosted an installation was marked by a sign with a large ear on it, and visitors young and old wandered through the town on a sonic scavenger hunt. It was a wonderful (and free) adventure for the entire family – I know my kids really enjoyed it. Why wouldn’t they? It was fun, they could run around, and they didn’t have to be quiet!
Brad Garton: Technical Description: My performance with ‘PGTGTr’ at the Roosevelt event uses an approach probably best described as ‘process improvisation’. Much of my work with digital machines and music has involved manipulation of algorithmic music-creating processes. Basically I’m a lazy guy, and letting the computer do my musical work for me is a very attractive way for me to “compose” or “perform”. I use a music language called RTcmix to build and manipulate sets of these musical algorithms. The language is embedded inside Max/MSP, as this gives me easy access to real-time control and signal-processing capabilities that don’t exist in other environments.
My performance patch consists of about 50 discrete [rtcmix~] objects loaded with different algorithmic scripts I have written, some with sliders or number-boxes attached to alter the RTcmix script parameters. Often I edit the scripts to modify them while performing. I guess the trendy way to describe this is a type of ‘live-coding’, but I tend to think of it as more concerned with overall musical trajectories than with the just-in-time crafting of individual sound objects typically associated with live-coding. My contribution to the music is to bend and steer unfolding sonic processes in response to the sounds I hear.
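Brad’s distinction – steering an unfolding process rather than crafting each note – can be sketched in miniature. The following Python toy is not RTcmix code and all names in it are invented for illustration; it just shows the shape of the idea: an algorithmic note stream that keeps running while the performer nudges its parameters, much as sliders or live script edits steer an [rtcmix~] script.

```python
# Toy sketch of "process improvisation" (hypothetical names, not RTcmix).
# A generator produces an endless stream of (pitch, duration) events and
# re-reads its parameter dict on every step, so mutating the dict mid-stream
# steers the process without restarting it.

import random

def note_process(params):
    """Yield (midi_pitch, duration_seconds) events forever.

    `params` is read each iteration, so the performer can change
    "center", "scale", or "dur_range" while the process runs."""
    while True:
        pitch = params["center"] + random.choice(params["scale"])
        dur = random.uniform(*params["dur_range"])
        yield pitch, dur

# Start a process around middle C on a pentatonic set of offsets...
params = {"center": 60, "scale": [0, 2, 4, 7, 9], "dur_range": (0.1, 0.5)}
stream = note_process(params)
opening = [next(stream) for _ in range(8)]

# ...then "bend and steer" it: shift the register up an octave, live.
params["center"] = 72
continuation = [next(stream) for _ in range(8)]
```

The design point is the one Brad describes: the performer’s gestures change the rules of the process, not the individual notes it emits.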
I also take audio inputs from the acoustic performers in the group — at the Roosevelt event it was Terry Pender’s mandolin and Dan Trueman’s hardanger fiddle — and route them through various plugins and RTcmix processing. I generally use a program like Digital Performer or Logic to mix all the signals together before sending out a stereo feed to the main PA system. Although I have a few keyboard-multisliders augmenting my [rtcmix~] objects, I don’t use any external controllers for performing. After years of lugging around various DEC, SGI, Sun, NeXT computers (with heavy monitors!), I made a conscious decision to go with as minimal a set-up as possible. Everything I do runs entirely on my MacBook laptop.
Commentary on the Afternoon: The Roosevelt event was particularly special to me. First of all, Roosevelt is where we (my family) live, and the town’s unique character and history make it an appealing place to engage community/artistic events. I’ve also been a long-time advocate for re-imagining how our work as composers can situate itself in society (one of my more widely-read youthful polemics was titled “Why I Hate Concerts”). The “house tour/sound installations” afternoon was a perfect example of how an expanded concept of artistic presentation can make for a genuinely wonderful experience. By pairing artists with homeowners in town, we were able to promote a collaborative creative ethos that endowed the entire show with a profound feeling of sharing and fun.
I often get discouraged with the all-too-facile acceptance of standard paradigms for presenting and performing music (and thereby defining creative success), especially among younger composers. The Roosevelt event reminded me how terrific an alternative approach to musical presentation can be. And within this alternative context, the chance to make music I really enjoy with good friends and musicians I truly respect and admire — heck, right in my own back yard! — well, this is what life should be.
Gregory Taylor: Gear stuff: I’m performing with my own Max instrument. Although I started my Max life making or modifying a new patch for every new situation, I’ve now got a configuration that I use whenever I perform – either solo, with PGT, or at garden parties in darkest New Jersey with large and largely acoustic ensembles. Although I probably started the new performance patch from some of the ideas I’d come to love in radiaL, this one is a collection of different “modules” that I’ll swap in and out depending on who I’m playing with and what I’d like to do. In addition to the usual loop-modification stuff, I’ve added a pair of granulators, a double delay line I can run in parallel or chained, and something that vaguely resembles the ‘buffershuffler’ Darwin created, which you’ll eventually see in Max for Live.
For this performance, I added a second recording module I could use for Dan’s Hardanger fiddle or Darwin’s dulcimer input, in addition to the normal one I’ve got for working with Terry’s usual inventive mix of mandolin and whatever he’s brought along to surprise and delight us. I removed a bunch of things, too – most notably, some of the more algorithmically-oriented modules that I often use for solo performances [LFO banks that wouldn’t surprise anyone who’s read some of my recent tutorials (link)], flocking algorithms, etc.
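The parallel-versus-chained distinction Gregory mentions for his double delay line is easy to see in a minimal sketch. The Python below is a hypothetical illustration (his actual modules are Max patches, and the function names and parameter values here are invented): two simple feedback delays that either both process the dry signal and get summed, or are run in series so the second delays the echoes of the first.

```python
# Minimal feedback delay and a "double delay" that runs two of them
# in parallel or chained. Illustrative sketch only, not Gregory's patch.

def delay(signal, time, feedback, mix=0.5):
    """Feedback delay: `time` in samples, `feedback` in [0, 1)."""
    out = []
    buf = [0.0] * time        # circular delay buffer
    pos = 0
    for x in signal:
        delayed = buf[pos]
        buf[pos] = x + delayed * feedback   # recirculate energy
        pos = (pos + 1) % time
        out.append(x * (1 - mix) + delayed * mix)
    return out

def double_delay(signal, t1, t2, fb=0.4, chained=False):
    a = delay(signal, t1, fb)
    if chained:               # series: second delay hears the first's output
        return delay(a, t2, fb)
    b = delay(signal, t2, fb) # parallel: both hear the dry signal
    return [(x + y) * 0.5 for x, y in zip(a, b)]
```

Chained, the echo times add (echoes land at t1, t2, t1+t2, ...); in parallel, the two echo patterns simply overlay, which is the audible difference between the two routings.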
Finally, I made one correct guess. I figured that adding two more people whose musical skills I really respect playing instruments I really loved would mean that I would spend a lot more time listening than making noise, and that I’d have to start thinking of things unfolding even more slowly and organically than is the case when playing with Brad and Terry. That definitely turned out to be the case. I used only a fraction of what I brought, and wound up adding some stuff to the ensemble mix [I’m thinking of the Mellotron flutes that just seemed to make sense when Terry got out the penny whistle] that I’m not sure I’ve ever done before. The nice thing about Max, as usual, is that it expands or contracts to meet my needs.
The place: From its very beginning as an ensemble, PGT has been a way for Brad, Terry, and me to work in a constructive way on some of our own feelings about performance contexts. Faced with the choice of a stage and passive audiences, we’ve gone as often as not for being “the lounge band” or working in places where other ensembles might not necessarily go. We’re also probably horribly unfashionable in terms of our love for things like relative consonance, pieces that unfold slowly in ways we don’t expect, and explicitly trying to make it hard to tell who’s doing what (virtuosity as the emergent property of an ensemble rather than a shifting stage for serial demonstrations of “chops”). So that means that we work less, but enjoy what we do more. Roosevelt is an extraordinary place with an amazing history (link) that – were I the kind of person who thought in terms of the “historical resonance” of a place – I’d be honored to perform anywhere near. In our specific “garden party” case, we had the chance to actually go out into the woods and play for hours at a time with some of our best friends in the world while people walked by and maybe tarried. Why would you *not* want to do that?
Don’t get me wrong – I love extraordinary rooms full of hushed audiences whose attention and energy somehow spins our straw into gold. But I will also always remember those moments when some configuration appeared from nowhere and my buffers were ticking along and I could just ease back on the controllers in cruise mode and stop and look straight up into the sunlight sifting through the trees while this amazing engine I was sitting in the midst of spun some kind of imaginary 21st century Americana into the warm air.
Of course, playing out in the woods also includes things that a coughing/program rattling/fidgety Merkin Hall audience doesn’t have: mosquitos, poison ivy, bees, and the possibility of a downpour that might electrocute us all. But hey – no irritant, no pearl.
Dan Trueman: My technology: lots of thin wire stretched over an oddly shaped, ornately decorated wooden box, excited by a stick with taut treated horsehair (otherwise known as a Hardanger fiddle). One of the unique features of the Hardanger fiddle is its set of 5 sympathetic strings that run under the fingerboard and through the bridge; I think of it as a built-in reverb unit with carefully tuned modes! It’s amplified with a DPA miniature mic, going through a PreSonus preamp (output split to feed Gregory and Brad), an Acoustic Image amp, and a custom hemispherical speaker (pre-Electrotap model).
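Dan’s “built-in reverb unit with carefully tuned modes” analogy maps neatly onto a classic signal-processing idea: a bank of tuned feedback comb filters, one per sympathetic string, each ringing at its string’s pitch. The Python sketch below is a hypothetical illustration of that analogy only; the tunings, gains, and decay values are assumptions, not measurements of a real Hardanger fiddle.

```python
# Sympathetic strings as tuned feedback comb filters (illustrative sketch).
# A comb filter with loop length SR/freq resonates at `freq`, so exciting it
# with the dry signal makes it "ring sympathetically" at that pitch.

SR = 44100  # sample rate in Hz (assumed)

def comb_resonator(signal, freq, decay=0.995):
    """Feedback comb filter whose loop delay sets the resonant pitch."""
    period = max(1, round(SR / freq))  # delay of ~one period, in samples
    buf = [0.0] * period
    pos = 0
    out = []
    for x in signal:
        y = x + buf[pos] * decay       # energy recirculates once per period
        buf[pos] = y
        pos = (pos + 1) % period
        out.append(y)
    return out

def sympathetic_strings(signal, tunings=(293.7, 329.6, 370.0, 440.0, 493.9)):
    """Mix the dry signal with five softly ringing tuned resonators."""
    wet = [0.0] * len(signal)
    for f in tunings:                  # five hypothetical string tunings (Hz)
        for i, v in enumerate(comb_resonator(signal, f)):
            wet[i] += v
    return [d + 0.1 * w for d, w in zip(signal, wet)]
```

Because each loop only reinforces energy near its own resonant frequency, the bank colors the sound toward the tuned pitches and lets them ring after the excitation stops, which is roughly what the strings under the fingerboard do acoustically.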
After years of playing fiddle WITH laptop simultaneously (and sensors and so on), and growing weary of it, I’ve found it exciting to return to “electronic improvisation” (or whatever you want to call it) with ONLY my fiddle. Playing with PGTGTr was really refreshing, in part because I could participate in these rich, gorgeous electronic textures while focusing my complete attention on how to contribute to and interact with the processing that Brad and Gregory were cooking up; as if it is not already enough to do just one or the other! I loved doing all of this outside, surrounded by trees and attacking spiders (simultaneously peaceful and terrifying), and wasn’t always sure if the sounds were our own or those of the woods (or processed woods). Quite honestly, I can barely remember our sets, it was all so engrossing; hearing the recordings has been a trip (did we REALLY do that??).
Darwin Grosse: I’d only performed with PGT once, and had never met (let alone performed with) Dan, but this seemed like a talented group of people that couldn’t help but create interesting work. I loaded every sound-making geegaw that I could stuff in my bag and hopped a flight to the East Coast.
Given the amount of noise that a set of electronic musicians can make, I was worried about finding a place in this quintet. Luckily, some of the players left their laptops at home, working instead with acoustic instruments to be processed by other players. In order to fit into this group dynamic, I chose to self-process a McNally StrumStick (a variant of a dulcimer) and a Native Instruments Maschine system using some quickly programmed patches. Since we would be performing for 3+ hours, I also grabbed a Fender Strat from Terry’s instrument collection as a hot-swap backup instrument.
The weather couldn’t have been more beautiful, and the location was perfect. We were set up in Brad Garton’s back yard, deep among the vegetation and underneath a canopy of 50-year-old trees. The sunlight rippled through the leaves and a gentle breeze added a subtle movement of light throughout the afternoon. I’ve come to prefer gigs with integrated visuals; in this case, nature provided the best possible backdrop for our performance.
It was clear that we all took inspiration from the surroundings, ending up with a very subtle, earthy sound. Dan and Terry’s live acoustics, combined with the real-time processing by Gregory and Brad, created a gentle shimmering sound that wafted through the leaves. Given the massive amount of noise that five electronic musicians can generate, it behooved each of us to tread lightly, listen carefully and add sounds thoughtfully. It’s a testament to the quality of all the players that we never over-played – instead creating an atmosphere that compelled people to sit, relax and enjoy the music in this gorgeous setting.
This was an incredible experience for all involved, and shows yet another venue for both interesting music making and interaction with an interested public. Hopefully, this can inspire you to imagine interesting performance options for your own work.