An Interview with Barney Haynes
Barney Haynes has been working in the fields of reactive installation and invasive media for 10 years. As a professor in the Film/Video/Performance department at the California College of Arts and Crafts, Barney is helping to define the parameters of artwork at the intersection of computing, physicality, mechanical behavior, and appropriated dot-com detritus. In this conversation with Ben Nevile, Barney describes some of the work of his students, explains how he became entangled with the MakingThings modules, outlines some of the aesthetic lineage of his type of generative art, and gives some tips on how to make art out of what was once surplus junk.
You recently had an end-of-term event where all the students showed their work…
The class is called “Interface” because it is about the intersection of technology and art. It satisfies an interdisciplinary requirement that all students have to take, so it has this great cross section of students from all the programs at CCAC. The college offers an array of disciplines including architecture, textiles, glass, industrial design, film, graphic design, sculpture, and painting, to name a few. So we get this really amazing mix of people with different sensibilities approaching this problem from multiple vantage points.
So how did it go?
There were a lot of great projects, 35 in all. There were kinetic sculptures, media installations, circuit-bending performances, and ethernet amalgamations. Some of the work was really smart and highly crafted: refined constructions that conveyed exceptional ideas. This one student, Daimon Marchand, made this piece with cattails…
Yeah, and he had about 50 of them in a field. He set up a tracking system using a surveillance camera. So as you walk back and forth, your movement would vibrate the cattails. He mounted tiny motors on each one with solder dripped on the shaft as the offset cam…
Yeah! So as you walked back and forth you could see different cattails vibrate mirroring your position, almost like a shadow. The tracking system was programmed with Max/MSP/jitter. He put a camera on the ceiling and used different math operators to reduce the video signal down to a very simple pattern of on/offs, and each cattail was activated by the state of a pixel using the MakingThings digital out modules. Another project by Maggie Simpson featured a glass eyeball in a bell jar. As the viewer walked around the room the eyeball would follow her/his position. It was quite elegant as she completely transformed the space into a surgical theater. The piece also used jitter to track position and the Teleo Servo module to actuate the eyeball. The sound of the servo motor and the slightly spastic behavior of the eye movement induced this creepy b-movie horror film effect.
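The pixel-to-cattail mapping Barney describes, reducing the overhead camera image to a coarse grid of on/off states with one digital out per cattail, can be sketched outside of Max. A minimal Python sketch, with an illustrative grid size and threshold; the actual piece was a Max/MSP/jitter patch, so everything here is a stand-in for that patch's logic:

```python
def frame_to_switches(frame, rows=5, cols=10, threshold=0.2):
    """Reduce a grayscale frame (list of rows, pixel values 0..1) to a
    rows x cols grid of booleans: one 'digital out' state per cattail."""
    h, w = len(frame), len(frame[0])
    cell_h, cell_w = h // rows, w // cols
    states = []
    for r in range(rows):
        row_states = []
        for c in range(cols):
            # Gather the pixels belonging to this grid cell.
            cell = [frame[y][x]
                    for y in range(r * cell_h, (r + 1) * cell_h)
                    for x in range(c * cell_w, (c + 1) * cell_w)]
            # A cell is "on" when its mean brightness (e.g. a motion
            # mask) crosses the threshold.
            row_states.append(sum(cell) / len(cell) > threshold)
        states.append(row_states)
    return states

# Example: a 100x200 frame with "motion" only in the top-left cell
frame = [[1.0 if y < 20 and x < 20 else 0.0 for x in range(200)]
         for y in range(100)]
states = frame_to_switches(frame)
print(states[0][0], states[0][1])  # True False
```

In the installation, each `True`/`False` in the grid would be forwarded to one channel of the digital out module, so the viewer's position lights up a moving patch of vibrating cattails.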
There was also a nice collaboration between Andrei Pasternak and Kyle Mock: the sounds from a circuit-bent Speak & Spell were analyzed and connected to motors via the Multi IO’s PWM out, which in turn rotated 3 patterned disks mounted in a projector. There was also a performance by Guillermo Galindo that transformed gestures into frenetic, mechanically actuated clanking. The show was particularly gratifying for the MakingThings people as they got to see their hard work and vision realized in such elegant ways.
How did you guys get involved with the MakingThings people?
For the last four years I have been teaching classes at CCAC with Don Day and Todd Blair that revolve around interfacing the physical world with computers. I do the Max/MSP/jitter programming, Don teaches the electronics, and Todd teaches fabrication. Todd is also involved with Survival Research Laboratories (SRL) as a fabricator and facilitator. It was there that he met the MakingThings folks, Michael Shiloh, Anne Swabb, and David Williams. Among other things they designed, built, and programmed the control components of the machines. If you’re not familiar with SRL, their shows are like destruction derbies, but the vehicles are cruel technological amalgamations, all dangerous, all vying to be the last machine twitching. The engineer’s mandate at SRL is to build electronics that can endure a lot of damage, like figuring out how to make circuits run while being abused by a flame-thrower. As might be expected in such an environment, all the electronics have to be custom made.
Realizing that other artists would benefit from having access to these systems, they decided to use this experience building bulletproof circuitry to make I/O modules for artists, musicians, and scientists. The idea was to provide the means to access this world so that artists wouldn’t have to get into PIC programming or complicated electronics. As part of their research they asked Todd, Don, and myself to meet with them and discuss what we wanted as end users. We talked about the existing I/O modules on the market and their pros and cons. Through that conversation we realized that there was a certain affinity between us. We were really excited about their capabilities and the ideas they presented, so we decided to enter into this arrangement where we and our students would serve as the beta testers for their modules.
What could be a better relationship? I mean, as long as the products are good. Can you give me a sense of how these units compare to what else is out there?
Most of the available controllers are skewed towards input, or offer input only. The MakingThings modular system allows for custom configurations with a variety of input/output options. In addition to the Multi IO, which has analog in, digital in, digital out, and PWM, they have a number of single-purpose modules. If you are only interested in output you can purchase the digital out module. If your project requires control of beefy DC motors you can buy the 10 amp H-bridge module. They have a module that can control up to 8 hobby servos, and they have an analog in module planned for release soon. The other major advantage is the protection circuitry built into all the modules. While a certain amount of caution is necessary with any electronic gear, with these modules I experiment with a lot more confidence when connecting sensors and actuators.
The Teleo modules have a dedicated Max object for each input and output. One problem I always had with complicated control patches is that you had one object dedicated to the IO device but a number of different sub patches for computation. Of course you can send/receive values, but I find it easier to embed the specific input or output object within its attendant sub patch. The Teleo Max object has really handy features such as scaling, range, and delta, and the computation is done in the module, freeing up the computer for media hijinks. Currently they connect to the CPU with USB, but they have other modules planned such as ethernet and RC. They are connected via a network cable that has the capacity to handle 63 modules, and you can mix and match according to your needs. Plus their tech support has been superlative.
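The scaling, range, and delta features Barney mentions are simple to model in code. A hypothetical Python sketch of what such per-channel conditioning does; the class and method names here are invented for illustration and are not the Teleo API (in the real system this math runs on the module itself):

```python
class SensorChannel:
    """Illustrative model of per-channel input conditioning: scale a
    raw reading into a target range and report the change (delta)
    since the previous reading. Not the actual Teleo API."""

    def __init__(self, in_min=0, in_max=1023, out_min=0.0, out_max=1.0):
        self.in_min, self.in_max = in_min, in_max
        self.out_min, self.out_max = out_min, out_max
        self.last = None

    def read(self, raw):
        # Clamp the raw value, then map it into the output range.
        raw = max(self.in_min, min(self.in_max, raw))
        span = (raw - self.in_min) / (self.in_max - self.in_min)
        value = self.out_min + span * (self.out_max - self.out_min)
        # Delta is the change since the last reading (0.0 on the first).
        delta = 0.0 if self.last is None else value - self.last
        self.last = value
        return value, delta

ch = SensorChannel(out_min=0.0, out_max=127.0)  # e.g. a MIDI-style range
print(ch.read(512))   # value ~63.6, delta 0.0
print(ch.read(1023))  # value 127.0, delta ~63.4
```

Doing this conditioning per channel, next to the object that owns it, is what lets each sub patch stay self-contained instead of routing everything through one central IO object.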
Anyway, it was also one of our most successful shows as most of the projects not only worked, but worked well. The one problem we have – and it’s a good problem – is that students tend to be overly ambitious for a one semester class.
A few months is not enough time to learn all of the intricacies of Max, I guess.
Yeah. Even though it’s great for artists because it’s a visual program, the learning curve is still steep.
Some would say that it has to be steep to maintain the flexibility that an artist needs.
Sure. To ease into it, the first thing I teach with Max is how to cannibalize the help patches. I tell them that you may not understand what’s going on right now, but if you use this patch you get a specific result. Then when they see the possibilities they get hooked and start learning the intricacies of why this object is connected to that object… soon we’re actually going to teach a math class based around Max.
A math class? Interesting! What type of math are you going to teach?
Well, I’m one of those math challenged art types so I’m going to co-teach it next year with this math whiz from the graphic design program. Over the course of this semester I’m going to show him Max/MSP/jitter then we can identify the types of math he will teach. The inspiration for this class comes from being utterly mystified by the innards of some of the example patches. I personally want to learn how to generate different sound and visual phenomena with math. It’s also a college wide requirement so why not teach algorithms within a creative context? It won’t be limited to media production. We will show how math can be used to calculate complex motion for mechanical actuation and for designing circuits. Teaching math with Max is part of a larger goal which is to develop an art practice at CCAC that reflects the Bay Area’s art and technology scene.
What, sort of a computer-centric…
Computers certainly play a central role but the art also requires an equal dose of physicality. The courses are designed so that students learn programming, fabrication and electronics concurrently. For instance, one assignment is to measure physical phenomena and translate them into structural metaphors for media presentation. We discuss haptics or force feedback as a catalyst for causal sequencing or random permutations. We want to delve into quantifying biological data such as breathing patterns, heart rate, and how sweaty your palms might be. This data could be propagated through a network or used to influence an installation. By merging disciplines compelling hybrids are realized, so the computer is an intrinsic component but without the familiar interface of keyboard and mouse.
This gets into an interesting area that I’m still grappling with: I’m sure you’re familiar with the laptop phenomenon where you don’t know exactly what it is that you’re watching in terms of a performance. It’s interesting in a sense, it’s very punk or DADA to go up there and open up your laptop and…
Yeah! Produce this intense wall of sound with absolutely no gestures whatsoever. I think that there’s something very compelling about that. Sound implies motion. What does it mean when movement is almost eliminated? The MakingThings I/Os offer a solution by making it possible to create gestural interfaces with sensors and switches. Or one could construct mechanical orchestras like the ones Matt Heckert and Gordon Monahan have done and program them with Max. However this freedom adds to the conundrum of computer driven music, performance or art. When you can ascribe any gesture to any sound how do you create meaningful connections? Is it pattern recognition, metaphor, is it deterministic or nuanced? Do you simulate interaction or is the piece reactive? We approach these questions as research confessing that we are in the process of developing our own conclusions. That’s the exciting thing about it, it combines research and art and process.
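The question Barney raises, deterministic response versus an interpreted one, can be made concrete with a toy mapping. A hypothetical Python sketch; the pitch set, mode names, and function are all invented for illustration, not taken from any actual piece:

```python
import random

def map_gesture(intensity, mode="deterministic", history=None):
    """Map a gesture intensity (0..1) to a pitch from a pentatonic set.
    'deterministic' always answers the same way; 'interpreted' lets
    chance and the piece's history of inputs push the response off the
    literal gesture."""
    scale = [60, 62, 65, 67, 70]  # MIDI-style pentatonic pitches
    index = min(int(intensity * len(scale)), len(scale) - 1)
    if mode == "deterministic":
        return scale[index]
    # Interpreted: add random drift, biased upward when past inputs
    # have been intense, so the same gesture can draw different answers.
    history = history or []
    bias = 1 if sum(history) > len(history) * 0.5 else 0
    drift = random.choice([-1, 0, 1]) + bias
    return scale[max(0, min(len(scale) - 1, index + drift))]

print(map_gesture(0.9))                      # always 70 for this gesture
print(map_gesture(0.9, mode="interpreted"))  # may answer with a neighbor
```

The design question in the interview is exactly which of these two branches an installation should live in, or where between them.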
When you’re teaching, what kind of things are different about trying to get your students involved in interactive projects? Does this work require a different frame of mind than producing music, or producing a video, or anything else that’s not interactive?
It is a challenge. A lot of students have never been exposed to interactive work. So to break people out of that we have to immediately start talking about how meaning can be conveyed in non-linear structures. That’s why we start off with a random media or motion assignment. It forces them to consider chance and accident as form. We spend some time discussing this, and we point them toward the eu-gene list… it’s an email list, one of my favorites right now. It’s really interesting because it’s a forum about generative programming.
It’s hard to define, and a lot of the discussion centers on what is and what is not generative. Here is Phil Galanter’s definition: “Generative art refers to any art practice where the artist creates a process, such as a set of natural language rules, a computer program, a machine, or other mechanism, which is then set into motion with some degree of autonomy contributing to or resulting in a completed work of art”. One of the things you have to decide when you’re setting up these relationships, this idea of using physical phenomena to drive the media and machines, is how deterministic you want to be. If you make the exact same gesture does it repeat that gesture or is there something that interprets it and creates a counterpoint to it? Does this program interpret the heat of the room or your position within it and just mimic it back in some way, or is there some kind of generative process that’s much more unpredictable and random? If you’re going to make a user interface, commercially the idea is to make it as intuitive as possible, right? We talk about this a lot in class: what is the nature of accessibility? Is it dependent on theory or is there an entryway that hooks the user in? What is interactivity? Can meaning be apprehended in cause and effect interaction? We want it to be intuitive, but does it have to be completely deterministic…
Yeah, exactly. If you were a musician, and you’re going to perform, you’re going to want to know what you’ll get out of it. To me there seem to be all sorts of metaphoric, or fictional possibilities when things don’t exactly do what you think they’re going to do. So you can create these complex relationships that way…
Injecting some sort of randomness into the system?
Yeah, randomness or perhaps the system “remembers” how previous users have treated the thing, and reacts accordingly. As part of this discussion we have them do research projects where we point the students towards different email lists and web sites. Have you ever seen Steve Wilson’s technical artist list?
He teaches over at SF State in the conceptual design program and has this truly amazing compilation of links that point to artists working in technology. They are organized into a slew of categories. In fact, he’s written this book called Information Arts that details the trajectory of art and technology. It’s one of the most exhaustive studies of this kind of work that I’ve seen. Anyway, we point them in these directions and they do research on how artists use technology. So through that, and through the work that we expose them to, we start exploring different ways they can approach their art.
Actually they just started selling the modules this year. CCAC purchased a bunch of them and the SUDAC program at Stanford also bought some. On top of that there is so much interest that they’re just keeping pace with demand.
What’s the aesthetic lineage of this kind of work? Who are some of the people that have influenced your thinking?
There are so many.
I guess the performance art world would be a strong influence?
Yeah. Laurie Anderson obviously is somebody who… I was kind of blown away by her early work. Then there’s Survival Research that had a huge impact. Miranda July does these performance/projection pieces using multiple screens on multiple planes. She brilliantly integrates the projection space of the image with her performance in this witty and natural way. I also get a buzz out of going to The Exploratorium.
I love The Exploratorium!
Yeah, I used to hang out there a lot.
I suppose really though, what you’re doing is new. This is technology that has only been around for a handful of years, right?
The technology is evolving to the point where you don’t have to have an engineering or computer science degree. Until the advent of Max/MSP/jitter and I/O modules such as the MakingThings products, with only a few notable exceptions artists had to collaborate with engineers to realize their projects. Not that collaboration between technicians and artists doesn’t produce great art, but if you tend to work more organically, experimenting without any designated closure, it can be frustrating for everyone involved. I come from a painting background and most of my work is informed by process as opposed to premeditation. When the tools become intuitive and you’re not penalized when you make a mistake, the focus is on creative decisions rather than technical impasses.
What we are implementing at CCAC is a practice that is fluid and that embraces accident in a medium that demands a rational approach. However, I would be disingenuous in claiming that we have completely renounced logical process. Designing and building a robotic appendage with 5 degrees of freedom requires a little forethought. When Don, Todd, and I tackle a student proposal we often offer 3 different approaches. Todd is meticulous in his approach, Don will morph between Cartesian analysis and Cagian chance, and I tend to kludge or collage sections of Max patches or chunks of machines into monsters. We are melding logical procedure and artistic process with materials and technology previously unavailable to artists.
As well, the technology is new but the art is well grounded in a number of areas. There has been a subset of artists working with technology throughout the 20th century. It really picked up in the early sixties when filmmakers such as Hollis Frampton and Tony Conrad started exploring the material nature of film and the projection process. Content was denounced and effort was concentrated on the means of creating films, like Tony Conrad’s The Flicker, which had this intense physiological effect on the audience. With the advent of video, artists such as Nam June Paik and Wolf Vostell were iconoclasts questioning the sacrosanct reverence of the television by dismantling the trappings of the box and experimenting with the components. Concurrently, Jean Tinguely constructed these amazing mechanical/kinetic sculptures, some of which would self-destruct.
Because you’re making pieces that are interactive, do you find it difficult… let me rephrase. I’m a musician. When I’m working on music I’m never really sure what other people are going to think of it. At the same time, I don’t have anything missing from my process when I’m working on it, and when I feel like I’m done then I’m done, I have my own methods of evaluation. When you’re working interactively, how do you know how your piece is going to be used or how successful it’s going to be until people are interacting with it? How do you solve that problem?
I guess I embrace it. Perhaps it’s all these years working in video where, as the saying goes, it’s “never the same color twice” (NTSC). Sure, one tries to control the presentation of one’s work, but at some point when there is a snag it has to become an opportunity rather than a disaster. When I made linear work I edited a different version for each screening. In one show I realized halfway through the piece that I didn’t know which version it was. Admittedly this was very unnerving because some versions were never intended for the public.
When you engage in interactive art you have to relinquish control. It is more about the user’s experience with the art than about the artist making the art. The piece becomes successful when it transcends the original intention. What are they going to select, and in what order? In some ways I find that difficult to deal with, but it’s fascinating to watch how people experience the thing. I sometimes feel like showing them what I’ve discovered about it, and I really have to fight off the urge to program those things so that they become part of the piece. When people find the stuff themselves…
It’s their own discovery. Makes it that much more special.
Yeah. To me it makes it a far more engaging experience when there is a little more nuance.
I’m just looking at the photos of the Symbiont project you have on your website now… wow, what an interesting presentation. I think it would make me really uncomfortable.
It was also uncomfortable to present. The first time I did this piece was in Germany. I really had no idea how people would react. I had one of those moments right before the opening where I seriously questioned my sanity. But they were very receptive. I always offered the option to opt out but everyone from octogenarians dressed for the opera to art damage types took a turn in the chair. The only snag involved the nipple speaker. Everyone gets a fresh nipple, but I thought I could speed things up if I changed them in between participants. I found out right away that putting on a fresh nipple was something you had to do in front of people in order for them to feel comfortable. The participants get to keep the thing, too.
A party favor! The screens are really interesting, too. Are they projection tubes?
One of them is a tube and one of them is an LCD panel. There’s a lot of great surplus around the Bay Area, as you might imagine. When they tear apart different dot coms as they go bust you can go down and get amazing machinery like robotic arms and different kinds of linear actuators…. I think of them almost as mechanical collages. All I do is bolt together these different parts that do the specific action that I want. I don’t have the kind of fabrication skills necessary to build any of these things. It’s interesting: you cannibalize machinery, you cannibalize patches… you do these things as an entryway to this kind of work and then you can start understanding the elements, the physics of how something works or the electronics or the programming of it.
I’m fascinated by the creative process necessary to work with these kinds of materials. How do you see art in something that was surplus junk?
[laughs] Okay, I’m laughing not because of the question but because it brings up some funny memories. We go down to these surplus places to buy gear and we’re looking at this really nicely machined, beautifully articulated junk, incredible stuff that has this super-fluid motion, and it’s almost like it has this fetishistic quality to it. One can get mesmerized.
Where do you go to get your stuff?
There are a couple of places down in the South Bay near Silicon Valley. There’s this place called Triangle Machinery. To the uninitiated it’s an overwhelming experience… they have lots of cool stuff, but without an engineering background you wouldn’t have any idea what to do with it. It took me four or five trips before I even tentatively bought something. If a company goes out of business, or a manufacturing company revamps their assembly line, businesses like Triangle just come in, buy the stuff as scrap, and disassemble it into parts. In some cases they’ll leave sections intact so you can get parts like fully functioning linear actuators or robotic arms. So that’s where it gets exciting. For the piece in the movie, for example, I needed to come up with some way to put the nipple in someone’s mouth. That took a lot of iterations to make it right, to make it safe. I would go in and look for, say, a robotic arm or a rotary actuator, and it was generally never quite right so I had to futz around and adapt it to my purpose.
In one sense finding junk or surplus and mining it for its aesthetic value has been a feature of Bay Area art for a long time. Going back to the Beat era and the artwork generated in the late 50s and early 60s, a lot of it was informed by junk. There’s been this kind of funky, garbage art that has become kind of a tradition. One route is to become a master dumpster diver. I have one friend who not only found everything he needed for his projects, but also made a lot of money selling aluminum. There’s an internship that you can apply for at the San Francisco dump where you spend six months there and they basically give you access to the garbage!
That’s amazing, an internship at the dump!
You can go out there and get anything that you want, what people throw away, and make it into art. I guess recognizing value in the stuff is sort of like pattern recognition.
So spotting artistic value in the garbage is something that you can only learn from experience.
Yeah, we try to accelerate the curve by demonstrating the potential aesthetic of these things, by analyzing their behavior for alternative or subversive uses… a stepper motor can do all sorts of really interesting kinds of motions, behavioral motions. You can make it go very slow, then ramp it up and down in very sophisticated patterns. Some steppers have great sonic qualities that are almost note-like. When I program my pieces, half the decisions on speed and functionality are predicated on the sound quality of the motion. The problem with steppers is that they need expensive controllers to activate them. MakingThings is prototyping a stepper module as well.
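The ramping Barney describes amounts to scheduling the delay between individual steps: a shorter delay means faster rotation, and shaping the delays shapes both the motion and the sound. A minimal Python sketch of a trapezoidal speed profile; the timing values are illustrative, and a real driver would feed these delays to the stepper controller one step at a time:

```python
def ramp_profile(total_steps, min_delay=0.02, max_delay=0.002,
                 accel_steps=50):
    """Per-step delays (in seconds) for a trapezoidal speed profile:
    ramp up over accel_steps, cruise, then ramp back down. A shorter
    delay between steps means faster rotation."""
    delays = []
    for i in range(total_steps):
        # Distance from the nearer end of the move, capped at accel_steps.
        edge = min(i, total_steps - 1 - i, accel_steps)
        frac = edge / accel_steps  # 0.0 at either end, 1.0 mid-move
        delays.append(min_delay + frac * (max_delay - min_delay))
    return delays

profile = ramp_profile(200)
print(profile[0], profile[100], profile[-1])  # slow, fast, slow again
```

Swapping the linear ramp for a curved or oscillating one is where the "behavioral" and note-like qualities come from; the profile is as much a musical phrase as a motion plan.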
In the Symbiont the nipple speaker is based on motor vibration. The sound is produced by two little motors connected to a consumer amplifier. They vibrate your teeth and your jaw so the sound resonates inside your head. There was a sensor inside the nipple to test your engagement, so if you stopped chewing it would go into rejection mode.
The nipple. It makes me uncomfortable even to talk about the nipple.
Ha! We don’t have to go there if you don’t want. I come up with these ideas, but I also have to live with them, and because I’m always there I have to explain it and watch people wince and get uncomfortable… I’m not impervious to this stuff, that’s for sure.