An Interview with Natasha Barrett of DR.OX
Some of us listen to many different types of music and are open to experimentation but, correct me if I’m wrong, sometimes the music that comes out of academic circles can be cold and dry. DR.OX is a welcome change. I had the pleasure of interviewing one half of DR.OX, Natasha Barrett, and I found her focused, enlightened and outspoken.
Tell me about your background — where you’re from, your childhood, and your education. Are you English?
I am, yes. I’m from the west of England — a small forest area called the Forest of Dean. It’s very rural, not very close to any big cities. So I lived there until I went to university in London. I spent seven years in London, so it was kind of a shock moving from a forest to living in a big city.
Were you interested in music as a child?
Yes, I was actually. I was doing the normal stuff like playing various instruments and such.
What instruments did you play?
Classical guitar was my main instrument, and also cello. Classical instruments. I took classical guitar right the way through university, taking lessons at a music conservatoire at the same time as my normal university education. And yes, I thought I was going to be a classical guitar performer! I soon found out I was not good enough, and I actually didn’t want to do that at all. [Laughs.] So then things changed quite a lot. I guess at school, prior to university, I was finding myself more and more in the dark corner of the music room where they had a few more interesting things like some old tape recorders and keyboards, which were normally dominated by a few boys, but if you went at the right time of day, there was access. [Laughs.]
Where did you go to school in London?
City University. I went there directly after finishing my school education, so I was at university at nineteen, then stayed continuously through my degrees.
Were you always really focused on music?
I took a music degree for my first degree. And the thing is, at the university I went to, it wasn’t the normal music degree. It was actually a BSc, a science music undergraduate degree. We learnt about acoustics, music technology, perception and psychology, a lot of things that you didn’t normally get in a music degree at most other universities, at least back then. Maybe you do now, but this was back in the early ’90s. In most music undergraduate degrees in England you normally didn’t have much to do with science or music technology. But now things have changed quite a bit.
The ’90s in London. How fun! So when you were in — well here they call it high school — you started to show some interest in electronics, in the stuff that the ‘guys’ were doing?
Before I went to university, when I was at normal school in the ’80s, I had a strange background because my parents would not allow any kind of pop music to be played at home. My dad was into classical music, and so all I heard, really, was classical music. But that was fine I guess – he had a good hi-fi! But then at school — I guess around the age of twelve or fourteen — I found these other things, you know? What does this keyboard do? And what was this tape recorder? I could start doing some other stuff — and it’s stuff I’d never really heard up until that time because of the restrictions at home. So I was playing with sounds without actually knowing much about what was going on in the more interesting pop music that some of my friends were getting into, like New Order, Kraftwerk and more interesting stuff that came before the ’80s pop music [Laughs], stuff that was in fact already historic! So I guess I was playing around without actually realizing what I was doing. A lot of accidental things. Then university in the ’90s started to add some clarity. Understanding the new sound technology, insight into some of the things you could do with it, without feeling you’re reinventing the wheel, if you see what I mean.
So when you went to university did you rebel, and delve into pop music at all, and go to clubs and things?
I guess a little bit. But it was a difficult time, because then — the early ’90s pop music and the leftovers from the late ’80s were — well I mean, dire. I thought it was awful! [Laughs.]
That was when grunge came into the scene.
Yeah, right. I don’t think I ever rebelled against anything. I just found things, and I thought, “Oh, wow! All right. OK.” It wasn’t really a rebellion because I was still playing cello in a classical orchestra, and I was still playing classical guitar as a main performance instrument. So the idea of rebelling in that situation doesn’t really come about. It’s a discovery. It’s about finding out things. And also, at City University I arrived at a time when echoes of composers like Alejandro Viñao and sounds made by the Fairlight system still hung in the air. So already there were interesting things left over from the previous generation of students and composers.
Did you have any strong mentors at that time?
No, not really. Discovery was what drove me, so I was listening to as much as I could. Everything was so fresh, so new. The library was quite well stocked and I was getting into most of the computer music CDs because of sound. I found what sound could be — away from an acoustic instrument — and that was a really important thing. Also, away from a keyboard and away from MIDI. Finding what else you could do when you start to work with what was actually then mainly tape techniques.
Natasha with her son
Why do you think you had this inquisitive, creative mind, and you just didn’t stay in the classical world? Do you think it’s just part of your character, or was it something about the way you were raised, or something that happened?
I don’t know. I can’t answer that question. There are a few very clear memories I have from when I was quite small. One of them was — I guess all good mums send their daughters to dance classes, and my clearest memory is the sound of the whole group of small children tap dancing. The sound in the room, this enormous boom of sixty little children tap-dancing. Things like that. I have this very clear auditory memory, but I don’t have memories of dancing. I can’t dance to save my life anymore! But sounds I seem to remember quite well. Obviously the memory is probably better than the actual reality. I’m sure I remember it in a much fuller fashion than it really was. You remember what you want to remember. My memories are often audio memories, more than visual memories.
How did you end up in Norway?
I finished my doctoral degree in England, and at that point you have three choices. Choice number one is be unemployed and poor, and try to work as a composer. Choice number two is to get a job, probably in academia and really not have much time for composing, and number three is to leave the country. So I chose number three and found a research grant to be in Norway for one year. So that’s how I first got here. And then I stayed.
Do you work as a composer? Do you have an academic job?
I don’t have an academic position. I’m freelance, composing non-commercial music.
I know, that’s why I’m here. [Laughs.] There aren’t many countries in the world where you can survive working with contemporary composition — non-commercial, experimental composition.
Definitely, not in the States.
So that’s what I do. But having said that, life isn’t just sitting and composing all day. I present master classes and I travel a lot making concerts. I’m also working with all kinds of sound related media. But the point is being able to choose, rather than having to go to a fixed job every day. So I do teach, but I can choose what to accept. And as a matter of fact, teaching master classes is interesting because each time you are challenging yourself, designing something specific for each context without repeating it each year, and normally with a bunch of interesting students. So I know I’m quite lucky, actually. [Laughs.] I think there are a lot of people, friends I have back in England who maybe wouldn’t mind having a position with a bit more flexibility. But it’s not a high-income life — though enough to live off.
That’s so inspiring! So let’s talk about Max/MSP. When did you first get exposed to Max/MSP, and start working with it?
Well, you know, I was working with Max before MSP came out. At that time I was using MIDI to control keyboards and samplers. Also in Max there was a little play object that accessed sound directly off disc. This object gave a performer some flexibility — you could trigger and overlay sounds of any duration instead of having to play with a fixed tape part. At that time this was about all the real-time audio you could do with cheaper computers other than basic sound effects, and samplers had too little memory for working with long sounds. I only have one composition for instrument and tape from that period — all my work with performers was with Max, mainly using this little play object to synchronize, in real-time, layers of pre-made material.
My first MSP piece was in ’99. I thought, “Wow.” I couldn’t afford an [IRCAM] ISPW signal processing workstation, which was the main alternative before MSP came out. So finally being able to do this stuff on an everyday computer. I mean, in software! [Laughs.] That was great, yeah. I first composed a work called “Diabolus”, which is for soprano, percussion and computer. And that was really good fun. I began discovering immediacy in electroacoustic performance that was more than using an SPX90, sampler, or some external outboard effects unit.
Do you have any favorite objects?
Well… two objects actually come to mind, which are both homemade objects. One’s called essence~, and it’s an FFT object that tracks pitch, amplitude and centroid (the average amplitude-weighted frequency within one FFT window; it’s used as a measure of the brightness of the sound). Essence~ is pretty CPU efficient as all three processes are in one object. Essence~ feeds into a second object called vecstat~. This object works out the average, the standard deviation and the average of the magnitude of the derivative for pitch, amplitude and centroid data within a user-interactive time frame. I use these objects mainly when working with live performers, to analyze what’s coming into the computer and automatically extract numerical information useful for all sorts of things. For example, as control data for other sound manipulation objects, for transformations over short and long time spans and so forth. There is in fact another object that goes with essence~ and vecstat~ called statlib. Statlib takes the output of vecstat~ and works out whether a sound type has previously been ‘heard’ by the system and how ‘far away’ the new sound is from the existing pool of sound. It constantly updates itself. The three objects were originally designed by me then programmed by someone else for one of my interactive installations. When I think about how I work with MSP, I don’t use many high-level objects. I guess I’ve always been building big patches out of simpler low-level objects. For example I like biquad~, and I also work a lot with recording live sound into buffers and dislodging time so you don’t always get immediate cause-effect actions. So not always working with immediate sound transformation, but recording and allowing the temporal domain to be shifted around quite a lot.
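[Ed. note: the analysis chain Barrett describes can be sketched in ordinary Python. This is a hedged reconstruction from her description only, not the actual externals (those are compiled code by Øyvind Hammer, and all function names here are invented): per-frame brightness as an amplitude-weighted centroid, vecstat~-style statistics over a window of frames, and a statlib-style distance from the pool of sounds heard so far.]

```python
import math

def spectral_centroid(mags, sample_rate, fft_size):
    """Amplitude-weighted mean frequency of one FFT frame's
    magnitude spectrum -- the 'brightness' measure essence~ tracks."""
    total = sum(mags)
    if total == 0.0:
        return 0.0  # silent frame: no meaningful centroid
    return sum(k * sample_rate / fft_size * m
               for k, m in enumerate(mags)) / total

def frame_stats(values):
    """vecstat~-style summary of a windowed feature track: mean,
    standard deviation, and mean magnitude of the derivative."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    diffs = [abs(b - a) for a, b in zip(values, values[1:])]
    mean_abs_deriv = sum(diffs) / len(diffs) if diffs else 0.0
    return mean, std, mean_abs_deriv

def novelty(feature_vec, pool):
    """statlib-style novelty: Euclidean distance of a new feature
    vector from the mean of the pool of sounds heard so far."""
    if not pool:
        return float("inf")  # nothing heard yet: maximally novel
    centre = [sum(p[i] for p in pool) / len(pool)
              for i in range(len(feature_vec))]
    return math.sqrt(sum((a - b) ** 2
                         for a, b in zip(feature_vec, centre)))
```

For a frame whose energy sits entirely in bin 10 of a 1024-point FFT at 44.1 kHz, spectral_centroid returns 10 × 44100 / 1024 ≈ 430.7 Hz.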
Can you talk more about how these objects came about?
They are programmed by Øyvind Hammer. He wrote a lot of signal processing software many years ago for the Silicon Graphics machines, which I used quite a lot when I first came to Norway. Øyvind programmed these three objects a few years ago, initially in Pd, then later in MSP. They were made for an installation, which is a permanent, public-space interactive work that amongst other things analyzes sound as a way to determine how the installation interacts with the visitors. I guess MSP now has other objects that do these things, but not at the time my objects were made — and then you get used to using something — you know its stability and I guess we like to stick to our favorites.
If it’s not broken, don’t…
Yeah, right. [Laughs.] But having said that, there’s always a new little twist to things coming out all the time.
Do you use other music software programs, or sequencers or anything?
Well, you know, my composing life is separated into two parts, two main things. One is live or real-time work and one is studio composition. In the studio I work quite differently than with live performance. There are many differences. Live performance needs to be completely stable. I have quite an old laptop — I’m going to buy a new one this year, but you’re not only restrained by your CPU, you’re restrained by your brainpower, thinking speed. That’s the biggest problem at the moment, at least for me. Computers are pretty fast, you can build these big Max patches, you can do so much, but does your brain work quickly enough to actually work with it live? I guess it’s a brain problem for me. [Laughs.] Am I thinking quickly enough? I use all kinds of things in the studio that I wouldn’t use live, simply because they’re maybe not stable enough, or they’re not completely real time, or I need ‘non-real time’ to consider the meaning of the result. They’re different processes: compositional process and creative process.
Do you use any type of mixers, controllers, or any alternate controller-type of gear?
Live, or in the studio? Live, I guess, because that’s what we’re talking about, isn’t it? I have this Peavey fader box, which is one of my favorites, simply because it’s small and so reliable. You get the same thing each time — you know what you’re going to get out of it. But I also use foot pedals, and I have homemade controllers made out of various sensors for different kinds of proximity interaction. I’ve also been using accelerometers and gyroscopes, but find it’s quite a big thing to wire your hands or an object with an accelerometer and a gyroscope and actually use that information in a meaningful way. I find they’re quite difficult to control. So these types of sensors I use to get ‘surprise’, but it’s not something that I would necessarily use as the main controller. So the Peavey box gives me the security, you could say, and then sensors like the light sensors and gyroscopes give the surprise. [Laughs.] Yeah, surprise is good.
I love your CD. I just find it so exciting, and one of the most exciting things to me is hearing acoustic instruments with electronic instruments, and working with Cycling ’74, you can believe I’ve heard so much computer music…
Oh, I bet. I guess you’re listening all day, if you need to.
This is just so exciting to me. I was going to ask you how you made this choice to work with both electronic and acoustic instruments, but it seems like that’s really been a long time coming, that it’s something that developed within you a long time ago.
Yes, it is. I went into a phase, and I guess I am still in that phase to some extent — where I found it problematic working with an acoustic performer. I was in this ‘acousmatic’ world, where the live, visual performer had no meaning, had no purpose. Now I’ve come round a bit from that. But even then, when I wrote a piece for instruments and electronics, I really had no care for what this person on stage looked like, and the fact that he or she was there was an annoyance more than a feature. I was only interested in the sounding result, the music, not the visual and live part of the performance. But now I’ve come round somewhat from that, and take advantage of the visual and the live part as a thing which is good, rather than a thing which is getting in the way of the listener’s reception of the meaning in the sound. I mean working with Tanja is great. I couldn’t think of working, at least in an improvisational way, with a performer who wasn’t open to their sound-world and didn’t also have a high technical skill. And she’s great. Tanja has a very good technical skill and she’s very open to sound. It means we can play together and actually work on the sound, and on what we’re doing, rather than discussing technical issues, and feeling limited by each other.
One of the exciting things to me was your use of space, silence. Something that I think a lot of young composers don’t really take advantage of. Do you consider this space as instrumentation, as a third instrument, almost?
I don’t think of space as instrumentation as such, more as tied up within the sound. We’re talking about temporal space here, yes? Rather than spatialisation? These two things are for me related, quite strongly.
It’s kind of the pauses, or the use of silence.
Yes, if we’re thinking about temporal space, I feel that first of all a sound needs a certain time to be heard, to be understood. If you cut it off then you understand it in a very different way than if you allow it to continue. It has a natural progression, and silence, even if it’s a very tiny moment of silence, changes how you listen to the next sound, and how you remember the sound before. I don’t really see it as instrumentation, I see it as being a blank canvas, you could say, which you put things into. So the silence helps you work out whether you’re defining something as a small entity, or whether you’re looking at a much larger map. If you cut out silence, then you can be looking at a very large, single entity. If you bring in silence, then I think you can start to define counterpoint more easily; you can start to define simultaneous complexity more easily, without it falling into a grey, or a greyscale. Does that make sense?
Yes, that is a very interesting point. I actually have one thing I’m confused about, about this release. Is this a studio production, or is this a recording of a live performance?
The material on that disc was a two-hour recording session, and then we took the best from that. Actually, we had two days of two hours, but one day we found we just weren’t playing well, and most of that was ditched. So these are 99 percent live improvisation takes. The cello and the computer have been recorded simultaneously on separate tracks, so that we can control the mix, because what you do live on stage, mix-wise, is maybe different to what you do for a CD [studio] production.
What did you use to mix? Did you use a software program, Pro Tools?
Yes, we recorded straight into Pro Tools and then mixed it. It was a very simple production.
Did you use any effects plug-ins on the mix?
The only plug-ins are EQ and compression. There are no effects as such [in the mixing stage]. Everything is done live, all the sound manipulation is done live in Max/MSP in real time. I don’t see much point in doing it otherwise, or else I would be composing ‘acousmatically’ in the studio, which would defeat the object of what we’re doing. The point of this, for us, was to make a disc which was the two of us ‘live’, playing together, not me composing in the studio. The only thing which isn’t 100 percent live is the actual duration. So maybe an improvisation lasted ten minutes, and we’ve taken the best chunk, which was maybe three or four minutes of that.
How much time did you spend composing before you recorded? Or how much is improvised?
All the material is improvised. You could say the composition part is making the Max/MSP patch and rehearsing together. I never used to care so much about improvising live. I wasn’t that interested in just pure improvisation. I wanted to find a way to combine improvisation with a bit more control, in the sense of a composition. So I build MSP patches where I can set up compositional structures controlled in a live way. So, for example, things to do with how sound is processed and layered, how it’s mixed and refers back to previous moments as you’re playing, to try and create a performance which isn’t just a point-to-point improvisation, but something which combines spontaneity and control in terms of the moment and in terms of the longer duration structures. I don’t know what Tanja’s going to play next, but I have to try and respond to that, and find meaning, with respect to what we’ve done a minute ago, in the improvisation. And that ties in more with this crossover between composition and improvisation. You could say that the compositional structures programmed in the MSP patch help overcome the ‘brain problem’ I mentioned earlier. But the material is completely improvised in the sense that it’s all done live. There’s no processing or remixing in the studio. It’s really just selecting the best bits.
Tanja Orning and Natasha Barrett of DR.OX
It was very inspiring for me that you’re a woman in this arena. It’s warm, it’s not this cold, electronic sound, and there’s so much emotion. It’s just very exciting to me.
Great! This is one of the reasons why, when I improvise, I want to work with an acoustic performer. I don’t think I could improvise with just electronic sounds. But working with a live performer with an acoustic instrument, you get a quality, a complexity.
That’s what makes it exciting for me, definitely. Is there anything you’d like Cycling ’74 to do? What could they do to Max that would help? Do you have a wish list?
No, I don’t have a wish list. I have more a request, a wish for them to keep working on it, you know, because Max has been tossed around so much, from IRCAM and Opcode. It needs to have a place where it is stable and it gets constant support. I have an installation, which is a permanent, public-space installation, and in Norway, if it’s state commissioned, it has to be running for twenty years — yes, twenty years, for an interactive computer installation! So I used Linux. I couldn’t use anything that was commercial, because there would be no 100 percent guarantee that in twenty years’ time the software would still exist. And when the computers need changing out, you’re going to have to recompile everything for the new system. So I think that’s the most important thing, to keep performance-based systems maintained and developed over a long period of time. How to make your work live many decades, when it’s based on a certain kind of technology…
That’s a really good point!
Interview by Marsha Vdovin for Cycling ’74.