Articles

Noiseunfolding: An Interview with Cory Metcalf

Cory Metcalf and his collaborator David Stout make up the interactive media collective/performing ensemble NoiseFold. NoiseFold has put the notion of transcoding and cross-media synthesis at the center of their practice since forever – while I occasionally think or patch about the subject, it is the bone and sinew of every jaw-dropping NoiseFold performance, whether it's on stage or wrapping the surface of a downtown hotel in blinking eyes. I had a chance recently to talk to Cory about how the Noise originally Folded, and how those origami creases are made.

How did NoiseFold happen? Has your programming work nudged the nature of that original collaboration into different territory - and, if so, how?

In 2003 I had assisted David with a live-cinema tour using Steina and Tom DeMeyer’s Image/Ine software, which was developed at STEIM when Steina was the creative director in ’96. It’s kind of crazy, but at the time, all you could get away with in Image/Ine was layering some still images and a single 320x240 QuickTime movie. But it was really amazing that you could do live digital video processing at all.

The show was a more traditional kind of live-cinema thing with live manipulation of video and sound clips, and live musical accompaniment. There was David, myself, and a clarinetist whose name I won’t mention as he turned out to be something of a nightmare collaborator. I was doing live sound processing (with Pluggo in its earlier incarnation), David was playing the images, and the clarinetist was (supposed to be) the front man soloist. Anyway, the tour was pretty crazy; clarinet guy was a no-show for almost every concert - the result was David and me hauling some massive road cases of rack-mounted audio gear, a bunch of computers and a couple of keyboards across Europe. We were probably each carrying/pushing well over our own body weight. To make a long story short, the tour fell apart and we ended up licking our wounds at STEIM in Amsterdam. Daniel Schorno was the creative director at the time, and we spent a few weeks house sitting at his place in the Jordaan, during which time David was barking, clapping, howling, whistling and generally verbally abusing his computer as he started exploring A/V feedback in Image/Ine. That spun out into a series of installations where the machines had microphones and cameras attached, all responding to each other in these call-and-response feedback loops.

Daniel was actually the first person to show me Jitter and I had said to myself, there’s no way I’ll ever be able to figure that out. Not long before, Steina had said something like “Have you heard of this Jitter? I think that is where it’s at.”

Around the same time, I was doing a lot of extended vocal stuff with my brother, Rio, under the moniker Eugoogalizers. Think some combination of Mike Patton, Diamanda Galas, Meredith Monk, and Joan La Barbara, that kind of stuff. And I was pretty good. David was doing his SignalFire thing, which was an evolution of the feedback stuff but processing pretty crazy noise-based video as the source material for generating the sound, then folding it back in to manipulate the images, so there was this wild image-to-sound-to-image feedback looping. We started working on a performance in which I was manipulating the images with my voice, but we never actually performed it, because we moved on to the NoiseFold thing.

NoiseFold emerged pretty organically in 2005 as the result of a project I was helping David with for the Artist in Residency program at Harvestworks. David had proposed a project to combine Image/Ine, Isadora (Mark Coniglio’s real-time A/V software that David and I used a lot), and Jitter. R. Luke DuBois was the main programming guy there at the time and I guess he chose our project to work on. So we had like 100 hours of Luke’s time to try crazy stuff. What came out of it was this installation, 100-Monkey Garden, which was a simple little A-Life breeding machine that used geometry crossfades to generate novel forms. I pretty much learned Jitter just looking over his shoulder. Right after that, we took the core elements of the installation, turned it into a performable patch, and NoiseFold was born.

I was a pretty awful programmer at the time. I’d only been working with computers for a couple of years and most of what I knew I’d learned by debugging Mac OS 9 to make Image/Ine work. But I had learned a lot about analog video processing techniques and working with images, so I just made it up as I went. I’d take apart patches and kluge them together into something new, and we just kept adding to the system. I don’t think we ever did the same show twice; we were always adding something new. It’s actually still that way.

As a performing outfit, NoiseFold works with vision and sound. I wonder if you could tell us a bit about what form that takes in terms of things like real-time control and transcoding in your performance environment. Do you have any "go-to" solutions, or do you tend to create the tools you need on a performance-by-performance basis?

This has changed a lot over time. One of the things that is relatively unique about our approach is that we almost always start with the image and transcode it to make the sound. That used to be a hard and fast rule, though we do bend it a little these days, so you might hear a straight up oscillator or synth punctuating the soundscape now and then. In terms of control and composition, this means every decision has at least two parts: the image has to look good and it has to make a cool sound. We’ve thrown away lots of really cool images because we just can’t make the things sing, and vice versa. Likewise with the real-time control; when we’re looking for interesting niches of parameter space, it’s got to work both ways.

To do the transcoding itself, we have a bunch of techniques that we’ve developed over the course of the now decade-long collaboration. We do a lot with wavetable synthesis, where we basically unwrap a geometry matrix into a dynamic buffer, then scan through it at different frequencies. We’re not really concerned with a perfect or “pure” translation of the image to the sound (where every sphere sounds like a sphere), so we tune, filter and tweak to our hearts’ content. A lot of times we do one of these for each axis of the form, so you can tune them into a chord or something. We use that a lot, because it’s so direct, but we are also really interested in drifting between one-to-one sound/image relationships and removing it a few steps, so it can be evocative of the essence of form and not always mickey-mousing.
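
To make the geometry-as-wavetable idea concrete, here is a rough sketch in Python/NumPy rather than Max/Jitter: it unwraps each axis of a vertex matrix into its own wavetable and scans the three tables at different frequencies. The grid size, normalization and tuning are illustrative assumptions, not a description of the actual NoiseFold patch.

```python
import numpy as np

SR = 44100  # audio sample rate used throughout the sketch

def geometry_to_wavetables(vertex_matrix):
    """Unwrap a (rows, cols, 3) vertex matrix into one 1-D wavetable per axis,
    centered and normalized to the -1..1 audio range."""
    tables = []
    for axis in range(vertex_matrix.shape[-1]):
        table = vertex_matrix[..., axis].ravel().astype(np.float64)
        table -= table.mean()
        table /= max(np.abs(table).max(), 1e-9)
        tables.append(table)
    return tables

def scan_wavetable(table, freq, num_samples, phase=0.0):
    """Read through a wavetable at `freq` Hz with linear interpolation,
    like a phasor driving a buffer lookup."""
    n = len(table)
    pos = ((phase + np.arange(num_samples) * freq / SR) % 1.0) * (n - 1)
    lo = pos.astype(int)
    hi = np.minimum(lo + 1, n - 1)
    frac = pos - lo
    return table[lo] * (1 - frac) + table[hi] * frac

# Toy geometry: a slightly squashed sphere as a 32x32 grid of XYZ vertices.
u, v = np.meshgrid(np.linspace(0, 2 * np.pi, 32), np.linspace(0, np.pi, 32))
verts = np.dstack([np.cos(u) * np.sin(v), 1.3 * np.sin(u) * np.sin(v), np.cos(v)])

x_tab, y_tab, z_tab = geometry_to_wavetables(verts)
# Tune the three axes to a rough A major triad and sum them into one block.
block = sum(scan_wavetable(t, f, 1024)
            for t, f in zip((x_tab, y_tab, z_tab), (110.0, 138.6, 164.8))) / 3.0
```

In the Max world, one plausible version of the same arithmetic would copy a jit.matrix into a buffer~ (for instance via jit.buffer~) and read it with a phasor~-driven lookup, but that is a guess at one possible patching route, not a transcription of theirs.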

Another technique we use quite a bit is actually a lot like the Image/Ine thing: we take a scanline of the texture and turn it into sound. This gives you a really visceral sense of the connection between image and sound, especially when you have textures that have a lot of contrast or sudden changes. We do a lot of other things too, like track a particle moving through space to control a pitch shift or a filter, or look at the distance between different points or the overall scale of the shape to control some parameter somewhere.
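
As a loose illustration of the scanline idea, and of mapping a tracked value onto a parameter, here is a small Python/NumPy sketch; the texture, the row choice and the scaling ranges are all made up for the example.

```python
import numpy as np

SR = 44100

def scanline_to_audio(texture, row, freq, num_samples):
    """Treat one horizontal line of a grayscale texture as a single-cycle
    waveform and loop it at `freq` Hz (nearest-neighbor lookup, for brevity)."""
    line = texture[row].astype(np.float64)
    line = 2.0 * (line - line.min()) / max(np.ptp(line), 1e-9) - 1.0  # map to -1..1
    phase = (np.arange(num_samples) * freq / SR) % 1.0
    return line[(phase * (len(line) - 1)).astype(int)]

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Map a tracked value (say, a particle's distance from the origin)
    onto a control range such as a filter cutoff."""
    t = np.clip((value - in_lo) / (in_hi - in_lo), 0.0, 1.0)
    return out_lo + t * (out_hi - out_lo)

# Toy "generative" texture: hard-edged banded noise, which sounds buzzy.
rng = np.random.default_rng(0)
tex = (rng.random((240, 320)) > 0.7).astype(float)

audio = scanline_to_audio(tex, row=120, freq=220.0, num_samples=2048)
cutoff_hz = scale(value=0.8, in_lo=0.0, in_hi=2.0, out_lo=200.0, out_hi=8000.0)
```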

In the original instrument, there were a lot of hardwired connections: A always controlled B. Over the last several years we’ve turned the Frankenstein monster that was the old instrument—maybe it should be called the Frankenfold Monster—into a modular environment that we call the nFolder. So now anything can control anything, and we basically make a new configuration of modules for every show.

Your work together is, I think, marked pretty clearly as having a distinctive sense of "style," to use that hoary old word. In particular, I think that the visual aspect of that has a lot to do with the use of color and textures. Assuming that I'm not barking mad, could you talk a little bit about your visual programming, in that regard?

Sure, though really I think a lot of it has to do with us not being trained in this stuff. We’re still kind of hackers when it comes to the OpenGL/3D world; we’re pretty bad at lighting and materials and not very smart when it comes to shaders (in fact, until jit.gl.pass came out, we’d never even actually used a shader!). The byproduct is that we make work that avoids looking like a lot of “traditional” 3D. We got started by basically forcing our real-time video processing chops onto the 3D paradigm. Even the way we process geometry is by treating it like video matrix data.
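
As a rough illustration of treating geometry like video matrix data, the sketch below applies two video-style operations, temporal slide smoothing and a neighborhood blur, to a matrix of vertex positions; the grid and numbers are hypothetical, not taken from their patches.

```python
import numpy as np

def slide_geometry(prev, target, slide=8.0):
    """Temporal smoothing applied to a vertex matrix, much like jit.slide on
    video frames: each vertex eases toward its target position per frame."""
    return prev + (target - prev) / slide

def blur_geometry(verts):
    """Spatial blur applied to geometry as if it were an image: average each
    vertex with its grid neighbors, which rounds off sharp features."""
    out = verts.astype(np.float64)
    for axis in (0, 1):
        out = (out + np.roll(out, 1, axis=axis) + np.roll(out, -1, axis=axis)) / 3.0
    return out

# Toy usage: a 32x32 grid of XYZ vertices with some noise added to Z.
rng = np.random.default_rng(1)
x, y = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
grid = np.dstack([x, y, rng.random((32, 32)) * 0.2])
smoothed = blur_geometry(grid)
eased = slide_geometry(prev=grid, target=smoothed, slide=8.0)
```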

Something that’s probably relevant is that we started out with the idea that we would only use purely generative imagery – no real-world photographic images of any kind. Again, we are a little less pure about that these days (we’ve been doing some live camera input stuff), but it’s still a guiding tenet. So the textures we use are all just little video clips from some generative process, which might be a complex geometry animation patch that David built in Isadora years ago, or a procedural noise generator with a tinted gradient. Also, early NoiseFold was 90% black and white, so when we would bring in a little color, usually right at the end of a show, it felt really pronounced. Gradually we started adding color bit by bit, so we never really rushed into our color palette. It’s something we’ve been able to refine a lot over time.
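
For a sense of what a purely generative texture source can be, here is a small sketch along the lines of a procedural noise generator with a tinted gradient; the resolution, smoothing and colors are arbitrary choices for the example, not NoiseFold's.

```python
import numpy as np

def tinted_noise_texture(height=240, width=320,
                         dark=(0.05, 0.05, 0.05), tint=(0.9, 0.6, 0.2), seed=0):
    """Smooth a random field, then use it to blend between a dark tone and a
    single tint color, giving a mostly monochrome texture with one accent."""
    rng = np.random.default_rng(seed)
    field = rng.random((height, width))
    # Cheap smoothing: mix the field with shifted copies of itself.
    for shift in (1, 2, 4, 8):
        field = 0.5 * field + 0.25 * (np.roll(field, shift, axis=0) +
                                      np.roll(field, shift, axis=1))
    field = (field - field.min()) / (field.max() - field.min())
    a, b = np.array(dark), np.array(tint)
    return field[..., None] * (b - a) + a  # per-pixel blend between the two colors

tex = tinted_noise_texture()  # shape (240, 320, 3), RGB floats in 0..1
```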

As with any long-term collaboration, you're part of an ongoing project. It's always hard to talk about the things you're thinking WITH rather than ABOUT (and there's always the question of the "secret sauce" of new work), so I was wondering if you could talk generally about what the drivers for your new work tend to be.

Mostly I think it’s about creating a playground for exploration. David and I are lucky in that we have a lot of crossover interests but also different focuses and reference points. Both of us are inspired a lot by biology, botany, physics, and science in general, so that has been a big source of inspiration, particularly in our installation works. At the same time, we are equally interested in mysticism, alchemy and contemplating the unknown. Neither of us has a problem abstracting, or a need to make literal representations of the concepts, so they become an inspiration and an aesthetic and behavioral platform, but never feel like something we are beholden to. A core aspect of what we look for when we make something is a parameter space that seems to hide a lot of different possibilities. When we perform, we want to be able to explore and find something that we’ve never seen before.

I think one of the really interesting things about our collaboration is the dialog between me as maker, and David as user. He’s really a total power user of my programming. I’ll make something and tinker with it a bit, then hand it off and the next time I see him, he’s made like 300 presets to show me what it really does. We’ll sift through and find the ones that are the “best” and then tweak them, mine them for a range of behaviors, and map them for performance. Often a couple of those 300 will show me the key to some new technique, and out comes the next tool.

Things changed a lot when we put together the modular system. At the core of the thing are these routing matrices that expose every control parameter for scaled control. Nothing has to be hard-patched. Suddenly everything in the toolkit can talk to everything else with very little hassle. It enables the exploration of a wide range of recombinant possibilities, something David spends a lot of time doing. The fact is that David barely knows how to program Max at all, but is an expert user of the nFolder. Instead of having these one-off patches, we’ve built up a vocabulary of techniques that continue to unfold as they talk to each other.
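
The routing-matrix idea can be sketched in a few lines of Python: named sources connect to named destination parameters through scaled routes, so nothing is hard-patched. The module and parameter names below are hypothetical stand-ins, not the nFolder's actual vocabulary.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Tuple

@dataclass
class Route:
    """One patch point: scale a source value into a destination's range."""
    in_lo: float
    in_hi: float
    out_lo: float
    out_hi: float

    def apply(self, value: float) -> float:
        t = (value - self.in_lo) / (self.in_hi - self.in_lo)
        t = min(max(t, 0.0), 1.0)
        return self.out_lo + t * (self.out_hi - self.out_lo)

@dataclass
class RoutingMatrix:
    """Connects any named source to any named destination parameter through a
    scaled route, so nothing has to be hard-patched."""
    routes: Dict[Tuple[str, str], Route] = field(default_factory=dict)
    sinks: Dict[str, Callable[[float], None]] = field(default_factory=dict)

    def connect(self, source: str, dest: str, route: Route) -> None:
        self.routes[(source, dest)] = route

    def send(self, source: str, value: float) -> None:
        for (src, dest), route in self.routes.items():
            if src == source and dest in self.sinks:
                self.sinks[dest](route.apply(value))

# Hypothetical wiring: one analysis value (the geometry's overall scale)
# drives both a filter cutoff and a texture brightness, each over its own range.
matrix = RoutingMatrix()
matrix.sinks["filter.cutoff"] = lambda hz: print(f"cutoff -> {hz:.1f} Hz")
matrix.sinks["texture.brightness"] = lambda b: print(f"brightness -> {b:.2f}")
matrix.connect("geometry.scale", "filter.cutoff", Route(0.0, 2.0, 200.0, 8000.0))
matrix.connect("geometry.scale", "texture.brightness", Route(0.0, 2.0, 0.1, 1.0))
matrix.send("geometry.scale", 1.25)
```

The appeal of this kind of design is that adding a new module only means registering its parameters; every existing source can then drive it without rewiring anything.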

Our latest work using particles is a lot different, and kind of in its infancy. It’s definitely still NoiseFold, but there’s some risk again. It’s really fun coming back to a raw place that we haven’t completely figured out how to navigate yet. We can get lost and wander around, hoping to find something really unexpected and amazing.

by Gregory Taylor on April 26, 2016

Hi Cory, Thanks for this interview and the video... Some time ago I asked you about using 2 independent objects with my own patches. And it looks like you're doing this here. Would you care to elaborate? :) Thanks, Bob

PS: Not simply how to add them – how one person can control them in realtime?