
An Interview with Tom Hall

I've known and enjoyed Tom Hall's music for several years, but have never been in the right place at the right time to see him perform in the flesh. Over that time, though, I've heard plenty of mentions of his interest in and work on the visual aspect of his performances. I expect that there are any number of us out there who've wondered what the trajectory of that kind of practice looks like, even if we're not necessarily engaged in it ourselves. How does one start to integrate visuals with live performance? Where do the ideas come from? What parts of programming Max for audio might map easily to creating live visuals? How does that process change over time?

Tom was kind enough to take some time out from the Cycling '74 roles that many of you are more likely to be aware of and take a step or two sideways to talk about his life as an audiovisual artist....

Tom Hall [photo: Marcus Fischer]


Prior to creating your own visuals, did you work with other people in a live performance context?

Actually, in the early days - the 2000s - I didn't play live with visuals, and then projectors started being available at shows. So I guess that the newfound availability of equipment is why I started doing my own visuals. Using 'found video' helped me to create the environment I was seeking to portray.

There were two live shows I recall where I thought, "I need to have live visuals to add to the experience." One was the launch of my first album Fluere in 2007. The second was my first fully fledged A/V performance for Lawrence English's long-running Open Frame series, opening up for Tim Hecker in 2008. So yeah — I'd been playing live for almost four years by the time I did those shows.

It stayed that way for many years. My very first visual collaboration with a group of other visual artists is very recent, in fact... Mexico City in 2018.

Drift_ excerpt from collaboration with /* Pac Interactive, Mexico City, 2018


Hall's first show with visuals - Brisbane, Australia, 2007

When it became possible to work with visuals, what sorts of visual work that you’d already seen influenced the direction you wanted to pursue?

My own entry and initial forays into the art world started with photography rather than audio. Initially, my live visuals involved "found video" - both in its natural form and in manipulated forms.

It wasn't easy at first. I was one of those people in the early years of the new millennium doing battle with hard disk drive read speeds, codecs, and image resolution. It was a tricky combo that always left me feeling like I was walking a tightrope... especially when you consider that I was using my laptop to do both the audio and the visuals. There were plenty of disasters and mid-set crashes. Funnily enough, they now make up some of my best stories :-)

Prior to developing your own visuals, what previous work had you done (in Max or in some other way) with visual materials?

In terms of my live work, there wasn't much. In fact, it was a few years after getting into Max before I even looked at Jitter at all. Consequently, it’s one of my weaker sides in terms of programming, but I've been working quite a lot in recent years on bringing my visual programming up to speed.

In terms of other work, as a photographer I was very much inspired by artists like Wolfgang Tillmans and Doug Aitken, and those interests carried over into my early photography, video, and live visual experiments.

Wolfgang Tillmans

When you started, were you working with Jitter as matrices, or did you go straight to using the GPU?

I actually found Joshua Kit Clayton's jit.gl.slab and optimization thread pretty soon after I got started (thankfully). There’s no way I would have gotten very far back then without it.

I was really just shooting a bunch of abstract videos and then playing them back, crossfading between them, and applying different effect shaders - some audio-reactive, and some not.
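(An editorial aside: in Jitter, that mixing and effect processing ran on the GPU via jit.gl.slab, but the underlying logic reduces to a linear crossfade plus an effect depth driven by an audio envelope. The sketch below, in Python with numpy, is offered only as an illustration of that idea, not as Tom's actual patch; the frame sizes, function names, and mapping are all invented for the example.)

    import numpy as np

    def crossfade(frame_a, frame_b, x):
        """Linear crossfade between two frames; x runs from 0 (all A) to 1 (all B)."""
        return (1.0 - x) * frame_a + x * frame_b

    def effect_amount(envelope, floor=0.2):
        """Map an audio envelope (0..1) to an effect depth that never fully shuts off."""
        return floor + (1.0 - floor) * min(max(envelope, 0.0), 1.0)

    # Stand-in "video frames": height x width x RGB, values in 0..1
    a = np.random.rand(240, 320, 3)
    b = np.random.rand(240, 320, 3)

    mixed = crossfade(a, b, x=0.35)            # manual or automated fade position
    out = mixed * effect_amount(envelope=0.6)  # a trivial audio-reactive "effect": brightness scaling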

Horse Bazaar, Melbourne, Australia 2009

Were there any Max patches out there from other people that really inspired you or gave you a place to start?

None that I recall. It was more that I had a vision of what I wanted to do and then set about designing Jitter patches to do it. Some ran somewhat autonomously (because I had to play the sound live too), but with the ability to interject if I wanted to.

Were there any Jitter objects that you just felt an immediate connection to?

I was one of those people who really did start at Tutorial 1, but while working through the tutorials I was pulling out the pieces I knew I'd need to build my vision as I went along.

The power of getting stuff onto the GPU was even more obvious back then, so it was a matter of getting into those later tutorials to work out how to do it (there was no Vizzie back then). If you wanted things to run at a frame rate higher than 5 fps, you needed to do the legwork.

What kinds of patches did you start from?

Finding Joshua's UYVY Matrix to Texture write-up and using jit.gl.slab objects meant that I could really keep everything together. I certainly wasn't worried about hijacking code from Cycling '74's help patchers and tutorials ;-) No concerns at all — I was in Australia then, so it wasn't like they'd send people after me (hahaha!)

As you started moving into the live visuals world, what was your live performance rig like in terms of software or hardware? Did your performances tend toward a long single set, or was it divided into something like individual pieces (or “scenes”)? Did that particular way of organizing what your audience heard guide how you developed videos, or did things happen the other way around, so that your audio changed based on how you did visuals?

My live setup back when I got started was pretty much a Titanium PowerBook G4 laptop and an audio interface. That changed over time - I went to MacBook Pros, added a few MIDI controllers, then a Nord Lead 2 and a couple of analog Moogerfoogers, and, since the turn of this decade, a hardware modular system.

It was pretty much always long-form pieces with plenty of dynamics, but no “song” stops or diversions. However my work has changed musically over the years, I've always wanted to present it as an environment - something to enter into, to absorb, to be challenged by, to question, etc.

You’ve been working with live visuals for a while now. Is there anything about your approach or the Max objects you work with in your patching that has changed radically over the time you’ve been working?

If anything, it's that "less is more." As with all visual art, it's easy to keep piling things on in live visuals using Jitter: more effects, more color, more layers, and so on.

But what's harder is self-control, and asking yourself "What is it really saying?" I find these days that my patches are more succinct. As my programming knowledge has grown, I've found ways to express more with less — and, for me, that's important in delivering the message. I think this is particularly important, given our current climate of diminished attention spans and an addiction to bling that takes precedence over quality or any kind of meaning beyond 5 seconds of throwaway swipe-up entertainment. I sound like a social media hater! Let's try this: Jitter is a great tool, but you need to realize you're playing with fire and excess these days.

How “tightly” did you conceive of integrating what your audience heard and what they saw?

These days, it's pretty tight. Max/Jitter has a lot of ways to integrate event control, but I really try to be careful not to implement it in a way that makes the integration predictable or stagnant. I also seem to enjoy keeping the possibility of failure around - sometimes to my detriment. There's nothing like sitting on the edge to make you feel alive.

Is some portion of the live visuals you do generated directly from the audio? If so, what tools or procedures do you tend to favor for control?

I'm not using any 'found video' in my work these days. Instead, I'm generating everything using OpenGL. A big part of moving in that direction was to make material with more fluid movement that could keep up with the audio in terms of frame rate. It also meant that I didn't need to relive my past battles with 7200rpm hard disk drives. A return to found video is something I want to explore again in the future, given that improvements in solid state drives and newer codecs would make it a lot easier, and that 4K projectors are now starting to pop up at venues.

A lot of my visuals don't do anything at all without audio input, but they're not moving in a way where that's explicitly obvious, like pulsing to the kick drum. It might be that I'm measuring activity within certain frequency bands, smoothing that output, and then using the resulting signal to drive the overall movement, while other frequency ranges control growth and movement, dimensionality, and that kind of thing. Sometimes those processes are going on one at a time, and other times they'll overlap or run concurrently.
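(Editorially speaking, this kind of analysis chain in Max might be built from a filter bank like fffb~ or an FFT, followed by smoothing. The Python sketch below only shows the general shape of the idea; the band edges, block size, and smoothing coefficient are invented for illustration, not taken from Tom's patches.)

    import numpy as np

    def band_activity(block, sample_rate, lo_hz, hi_hz):
        """Mean spectral magnitude of one audio block within [lo_hz, hi_hz)."""
        spectrum = np.abs(np.fft.rfft(block))
        freqs = np.fft.rfftfreq(len(block), d=1.0 / sample_rate)
        mask = (freqs >= lo_hz) & (freqs < hi_hz)
        return float(spectrum[mask].mean())

    class OnePole:
        """One-pole lowpass: a common way to smooth a jumpy analysis signal."""
        def __init__(self, coeff=0.98):
            self.coeff, self.state = coeff, 0.0
        def step(self, x):
            self.state = self.coeff * self.state + (1.0 - self.coeff) * x
            return self.state

    sr = 44100
    smoother = OnePole()
    block = np.random.randn(1024)                # stand-in for one block of live audio
    raw = band_activity(block, sr, 40.0, 200.0)  # e.g. low end drives overall movement
    control = smoother.step(raw)                 # smoothed value sent on to the visuals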

Ultimately, I'm trying to create a synesthetic experience for the audience. I'm interested in creating something that sits in the middle and internally creates its own environment - one that references the 'everyday' by virtue of the algorithms, the temporal exploitation of found sound, and the synthesis I've obliterated with distortion.

What kinds of controllers did you bring to the table already by virtue of doing audio? Did you use those controllers, or repurpose them to do visuals, or did you choose new controllers?

Honestly, I never really had a controller for playing visuals. I still don't. For some reason, I just found it too cumbersome. If I was just doing visuals, I would have a controller. But since I feel in control of the sound, or can at least predict what it's going to do these days, it turns out that the sound itself is the best controller for me.

Once you’d gotten through your first show or two, did you immediately start changing what you were up to, or did you maybe start thinking about making the equivalent of an “instrument” for producing visuals rather than creating specific patches for specific pieces?

Not really. What I do is set up an intricate network of event control that follows the time since the start of my set, but which can also be interrupted and coerced into other types of manipulation by the audio. Simply put, the audio is the controller: I start the visuals and off it goes. If there's no audio, nothing happens.
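(Again as an editorial sketch rather than Tom's actual patch: the control structure he describes amounts to a timed cue list that audio activity can override. The scene names, cue times, and threshold below are all invented for illustration.)

    import time

    class SceneTimeline:
        """Follow a clock of scene cues, but let strong audio activity interrupt."""
        def __init__(self, cues):
            self.cues = sorted(cues)   # list of (seconds_from_start, scene_name)
            self.start = time.time()
            self.index = 0

        def update(self, audio_activity, threshold=0.8):
            # Audio can coerce the visuals away from the scheduled sequence...
            if audio_activity > threshold:
                return "interrupt"
            # ...otherwise keep following the time since the start of the set.
            elapsed = time.time() - self.start
            if self.index < len(self.cues) and elapsed >= self.cues[self.index][0]:
                scene = self.cues[self.index][1]
                self.index += 1
                return scene
            return None

    timeline = SceneTimeline([(0, "open"), (240, "build"), (600, "peak")])
    event = timeline.update(audio_activity=0.3)   # activity from an analysis chain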

Have you ever done video for someone else? Is that something you’d find interesting? Do you think your practice would change in a situation where you were only “responsible” for what an audience saw?

I've done bits and pieces, but they've never really included my own creative input apart from the programming choices. Artistically, I'm open to collaboration, but it really needs to be the right fit.

My practice always changes through working with others, though. To be honest, it's hard for it not to. I've had incredible creative breakthroughs when working with others, giving presentations, or even presenting workshops. What other people bring to the table is always interesting. As I get older, I know that I've developed certain perspectives (despite fighting it), and what's exciting is when someone you're working with comes in and just shatters them, flips the box on its lid. I love it.

Experience more of Tom's work on his website.

by Gregory Taylor on June 11, 2019