
What You Hear Is What You See: An Interview with Andrew Blanton

Andrew Blanton is a media artist and percussionist. He received his BM in Music Performance from the University of Denver (2008) and a Master of Fine Arts in New Media Art from the University of North Texas (2013). He is currently an Assistant Professor of Digital Media Art at San Jose State University in San Jose, California, where he teaches data visualization, and a Research Fellow in the UT Dallas ArtSciLab in Dallas, Texas. His current work focuses on the emergent potential between cross-disciplinary arts and technology, building sound and visual environments through software development, and building scientifically accurate representations of complex data sets as visual and sound compositions. Andrew has advanced expertise in percussion, creative software development, and developing projects at the confluence of art and science.

Your early background was in percussion performance. How did you get involved in new media? Was it a natural progression for you or a departure?

I started playing the drums in 5th grade and have not stopped since. I see the use of digital tools as a very natural extension of my percussion practice. At the University of Denver, where I studied classical percussion, I focused explicitly on acoustic instruments. By the time I finished my undergrad, I really wanted to expand the sounds I could produce with percussion, and I was also getting very interested in the physical properties of sound. By augmenting physical instruments with software-based signal processing in Max, I was able to begin constructing reverberant structures that aren't possible in physical space.
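To make that idea concrete, here is a minimal sketch (in Python rather than Max, purely for illustration) of one classic signal-processing building block, a feedback comb filter, with the feedback pushed toward a decay time no physical room could produce. The delay time and feedback values are illustrative assumptions, not details from Blanton's patches.

```python
import numpy as np

def comb_reverb(dry: np.ndarray, delay_samples: int, feedback: float) -> np.ndarray:
    """Mix the input with a delayed, attenuated copy of the output (a feedback comb)."""
    out = np.zeros(len(dry) + delay_samples * 60)  # a few seconds of tail room
    out[: len(dry)] = dry
    for n in range(delay_samples, len(out)):
        out[n] += feedback * out[n - delay_samples]
    return out

sr = 44100
impulse = np.zeros(sr)   # one second of silence...
impulse[0] = 1.0         # ...with a single "drum hit" stand-in at the start
# With a 50 ms delay and 0.99 feedback, the tail rings for tens of seconds:
# a reverberant structure no physical room would give you.
wet = comb_reverb(impulse, delay_samples=int(0.05 * sr), feedback=0.99)
```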

Having the opportunity to study at North Texas in the iARTA cluster really allowed me to hone my practice in transdisciplinary ways. There I focused explicitly on cross-modal representation of the human sensorium (the sonic representation of visuals, the haptic representation of sound, etc.). At the same time, I had been studying phenomenology (principally Heidegger and Merleau-Ponty) and thinking a lot about representing these ideas both in new media art and in classical music. This continues to be a major trajectory of my research. I’m really interested in how we as humans interpret sensorium, whether expressed visually, sonically, or otherwise. So for me, the two are deeply intertwined.

You work with a wide range of hardware and software instruments and tools, both as an educator and as a visual and sound artist, with Max often playing an important role. How does Max fit into your practice?

At this point, Max is important to my practice in three primary ways. First, it feels like a very fluid environment for sketching ideas out rapidly. Second, it acts as a great glue, or really a central core, for connecting multiple environments. And finally, I have been trying to add more organization and architecture to my patches, building robust standalone applications for performance. My ideal scenario is software that I can have open and ready to play in one click, built from smaller reusable components. Because Max is an amalgam of different types of data processing (numbers, signals, and matrices), it works really well for me as a platform for making connections. For instance, if I’m using custom-built drums and microphones with Max, I can easily connect to other environments such as Processing, Unity, Maya, openFrameworks, node.js, etc.
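A common way to wire Max to those environments is OSC over UDP: a [udpsend] object in the patch streams messages to whatever is listening on a given port. Below is a hedged sketch of the receiving side in Python (standing in for Processing, Unity, or node.js); the /drum/hit address, port 7400, and the python-osc library are illustrative assumptions, not details from Blanton's setup.

```python
# Receiving drum-trigger data from Max over OSC.
# In Max, a [udpsend 127.0.0.1 7400] object would send messages like
# "/drum/hit 0.82" whenever a transient is detected.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_hit(address: str, velocity: float) -> None:
    # Hand the event off to whatever environment is listening:
    # a Processing sketch, a Unity scene, a node server, etc.
    print(f"{address}: velocity {velocity:.2f}")

dispatcher = Dispatcher()
dispatcher.map("/drum/hit", on_hit)

server = BlockingOSCUDPServer(("127.0.0.1", 7400), dispatcher)
server.serve_forever()  # blocks; Ctrl-C to stop
```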

Your recent work seems to deal a lot with visualization and sonification, sometimes by directly mapping data sets but also by modeling networked behaviors. Can you talk a little bit about your approach and some of the work that’s come out of it?

Networks really interest me lately. I have been sonifying and visualizing node-edge graphs, including human connectome data, as part of a collaborative team of artists and scientists, as well as using the physicality of the internet as a resonant chamber. In particular, I have been sending impulses through the networks and generating responses based on the edge weights of the graphs. It’s something like clapping your hands in a big concrete room and hearing the sound reflect off all the surfaces, but in this case I’m listening to the connections of the network. Because these environments are so flexible and virtual, each component of how that impulse reflects through the network is controllable. Technically, I achieve this sonically by using gen~ with custom multi-tap delay lines and feeding the data into the delay lines, or by using a general multiband effect and modulating each band with data, among other processes.
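As a rough illustration of the idea (not Blanton's gen~ implementation), one can treat every edge in a weighted graph as a delay tap: an impulse entering at one node propagates outward, and each traversed edge contributes an echo whose gain is scaled by the edge weight. The toy graph, the 120 ms per-hop delay, and the fixed traversal depth below are all assumptions made for this sketch.

```python
import numpy as np

sr = 44100
# (source, target, weight); the weight scales each echo's gain
edges = [(0, 1, 0.9), (1, 2, 0.6), (0, 2, 0.4), (2, 0, 0.5)]
delay_per_hop = 0.12  # seconds of delay per edge: an arbitrary mapping choice

def network_response(start: int, depth: int) -> list[tuple[float, float]]:
    """Walk the graph breadth-first, accumulating (delay, gain) echo taps."""
    taps = []
    frontier = [(start, 0.0, 1.0)]  # (node, elapsed delay, accumulated gain)
    for _ in range(depth):
        next_frontier = []
        for node, t, g in frontier:
            for src, dst, w in edges:
                if src == node:
                    taps.append((t + delay_per_hop, g * w))
                    next_frontier.append((dst, t + delay_per_hop, g * w))
        frontier = next_frontier
    return taps

# Render the taps into an audio buffer: a "clap" heard through the network.
taps = network_response(start=0, depth=4)
out = np.zeros(int(sr * (max(t for t, _ in taps) + 0.1)))
out[0] = 1.0  # the direct impulse
for t, g in taps:
    out[int(t * sr)] += g
```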

We talked recently about your new work Waveguide, which seems like a really good example of this investigation. Can you tell us about it?

As an extension of the idea of network resonance, I was working on sending data from Max to a node server to be able to play the audience’s cell phones in real time. I had been in conversation with artist and theorist Yvette Granata (who wrote the text for the piece) about the conceptual framing of the work. We had talked a lot about the interesting new challenge of our constantly divided attention between reality and our digital devices, both inside and outside of the concert hall. This led to the idea of taking over people's devices and asserting control over that space within the concert hall. We wanted to embed a critical discourse within the technology for the performance. Interestingly, this opens up many possibilities when all of a sudden you have access to a massive array of tiny cell phone speakers and screens in the performance space. Listening to the resonance of the network through a large participatory installation/text/performance is, for me, a way to appropriate the audience’s smartphones for a bigger communal experience, somehow exposing the interconnectivity and networked nature of these devices. This will all be presented as a new work, Waveguide, at Gray Area Theater in San Francisco on September 3rd as part of the Soundwave Biennial.
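The plumbing for a piece like this is conceptually simple: the performance patch emits events, a server relays them, and each audience phone runs a web page holding an open WebSocket. Below is a minimal stand-in sketch in Python (Blanton's piece used a node server); the ports, the /waveguide/trigger OSC address, the JSON shape, and the python-osc and websockets libraries are all assumptions for illustration.

```python
import asyncio
import json

import websockets
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import AsyncIOOSCUDPServer

phones = set()  # currently connected browser sockets

async def register(websocket):
    # Each audience phone loads a page that opens one WebSocket here.
    phones.add(websocket)
    try:
        await websocket.wait_closed()
    finally:
        phones.discard(websocket)

def on_trigger(address, *args):
    # Fan a single event from Max out to the whole array of phone speakers/screens.
    websockets.broadcast(phones, json.dumps({"address": address, "args": args}))

async def main():
    dispatcher = Dispatcher()
    dispatcher.map("/waveguide/trigger", on_trigger)
    # The Max patch would send to this port with [udpsend 127.0.0.1 9000].
    osc = AsyncIOOSCUDPServer(("127.0.0.1", 9000), dispatcher, asyncio.get_running_loop())
    await osc.create_serve_endpoint()
    async with websockets.serve(register, "0.0.0.0", 8765):
        await asyncio.Future()  # run until interrupted

asyncio.run(main())
```

On the phone side, the page would open a plain browser WebSocket to port 8765 and map each incoming message to sound or screen color, turning the audience's devices into a distributed speaker and light array.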

This has been an ongoing body of research for me. Some of the works have included visualizations and sonifications of a spiking neural network, visualizations and sonifications of human connectome data, and sonifications of star data, to name a few.

As an assistant professor at San Jose State University, you have been teaching Max in the Digital Media Art program for a little while now. What do your classes focus on, and do you think teaching Max has changed the way that you work with it?

Teaching Max has been such an interesting experience. I personally started learning Max in a very unstructured way, and it was not until I had been using Max for about three years that I got the opportunity to study with Darwin Grosse. That class helped me frame a basic set of objects and learn how to solve problems with those objects. I try to take the same pedagogical approach in my own classes. It’s always fun and surprising to see the creativity expressed when students bring fresh eyes to assignments. I think a lot of the time students can be overwhelmed by the possibilities of Max. By limiting each week to the formal introduction of a few objects while presenting conceptual and artistic problems for the students to solve, two paths can be undertaken simultaneously: first, learning the technology, and second, positioning their work not just as technical demonstrations but as working toward conceptual ends as well. Through this process we have led large-scale collaborations with the School of Music and the School of Dance at San Jose State. For instance, last semester students from my data visualization class and from the interactivity class led by my colleague Craig Hobbs worked with the School of Dance to create real-time animation for performance. My class also collaborated with Pablo Furman’s composition class to create real-time audiovisual works.

What’s next for you and where do you see your work going?

I’m really interested in the politics of performance venues - the top-down nature of the performer on stage disseminating experience to a passive audience. I’m also focused on creating work that highlights our experiences, our perceptions, as humans. This for me has been a major point of exploration in digital space. For instance, if I’m having a conversation with someone on Facebook, that interaction is limited to just the text; we miss out on all of the other social cues that we interpret as part of a conversation. In that way, technology limits our understanding of empathy and interpersonal communication. I’m really interested in building art and music that brings people together and furthers understanding of our lived experience, rather than diminishing it or refining it for data and analytics consumption. Building software that helps us understand what it means to be human is a primary goal of mine, and doing so in collaboration with the audience is ideal. The underlying question here is: can we as artists form a discourse about what all of this technology in our lives means? I’m really interested in the ways that classical music (an ancient art form) is evolving with technology, and the concert hall is the perfect place to explore that territory.

Andrew's website

by Cory Metcalf on August 9, 2016