An Interview with Icarus
Sometimes I reach for electronic music to lose my sense of time and space. Craving escape, I want to be pulled in on a journey. Icarus’ Fake Fish Distribution, an “album in 1000 variations,” generates a one-of-a-kind variation for each download, giving the listener a truly unique experience. I love the idea, and after listening to multiple downloads, the synergy of the deviations does indeed deliver!
While this may not be an entirely new concept, Icarus has used cutting-edge tools to provide a seamless creation and delivery engine, a notion that I hope others will explore. I spoke to cousins Sam Britton and Ollie Bown about their process and tools, including Ableton Live and Max for Live.
Could you first describe Icarus and what you’re doing with that project?
Icarus started out in 1997, 1998. It’s a collaboration between Ollie Bown and myself. He’s in fact my cousin, so we kind of grew up together, and started making electronic music ages and ages ago. He was the first one to have a computer, and we started mucking around with MIDI. Then I bought a sampler, and things kind of just developed out of a desire to make music together. From school bands all the way up.
I think what really cemented our interest in doing that professionally was Drum and Bass in the mid-90s. We were in our late teens, and it was just a really inspirational time. Not only for us, but for music culture in general at the time.
Photo: Steve McInerny
It’s interesting; I was chatting to Keith Fullerton Whitman the other day. He does a lot of work with modular synths. He used to go by a D&B moniker called Hrvatski. And we were just chatting about how dance music in the ‘90s was just such a predominant force in music culture.
And I think we really got a vibe off that, started producing Drum and Bass tracks in the late ‘90s and basically got signed very quickly, and then just started writing albums.
We got initially quite frustrated, I think, by what we saw as dance-music purism, but what actually turned out to be, I guess, the phenomenon of the music industry pigeonholing certain styles, certain genres and ways of making music, and selling them as a kind of done deal.
That’s kind of the classic music-industry paradigm: define a style, define a way of doing it, and stick to that. That way people know what they’re getting.
And I think we initially got frustrated with the fact that most dance music was produced as 12” singles, while much of the earlier music that had inspired us was much longer than that. We were more interested in albums, and the variation between different tracks, and how you create a bigger work.
So, beyond the composition of a track, to the composition of a body of work...
Right. So it wasn’t just about having a hit and making great dance-music tracks. It was also about articulating the form of drum and bass, just making stuff and saying, we made this, so therefore it must be drum and bass!
And I think that’s kind of defined the way that Icarus has evolved over the years. With each album, with each record, we’ve accumulated different interests, and the sound has developed in a different way.
So whereas in the late ‘90s it was kind of drum-and-bass inspired, in the noughties [‘00s] it became more, I think, organic. There’s a lot of inspiration from jazz and improvised music. And we’re exploring more ideas of process and how you program a computer in order to perform, and not only for us to perform but for it to start to generate its own patterns and its own sequences.
I’m curious as to how you apply this album concept to a collection of 'generative' music? Is it more of a composition of process?
What we've been doing in Icarus and further afield, through our work with improvising musicians and soloists, perhaps amounts to our own skewed take on a musical ‘Turing Test’: if what we are designing our computers to produce can be deemed sufficiently musical, not only to us but to the other musicians involved as well, then it must have some merit.
That's not to say that the work necessarily has any allusion to a musical 'humanism', but that it exhibits traits that humans nonetheless find interesting in a musical context. I guess, in this sense, it's perhaps perversely comparable to the fact that chat-bots on Twitter have human followers. Given a sufficiently well-defined contextual framework it's totally plausible that programmed behavior can be interesting and even captivating.
That’s so true! I never thought about chat-bots in that context.
In this respect, one of the most interesting things about using generative and algorithmic processes in musical composition is how you end up contextualizing them. What's curious is that, in our musical contexts, the idea that the more rigorously researched and well-implemented processes yield more musical results is quite often a fallacy.
We've often found that 'cruder' processes, the ones that play on context and force musical situations and allusions you could never have conceived of when programming them at the outset, are incredibly valuable. In fact, the album title Fake Fish Distribution alludes precisely to this conundrum.
During the development of one of the patches used to generate some of the rhythmic templates for the parameter variation, we decided to start using a Poisson distribution instead of our own hacked-together algorithm. But we ended up ditching it in favor of our own, which just seemed to do a better job given the context.
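Neither algorithm appears in the interview, so purely as an illustration of the trade-off Sam is describing, here is a hypothetical Python sketch contrasting a 'textbook' Poisson-distributed rhythmic density with a cruder, hand-rolled rule. The function names and the rule itself are invented for the example; the point is only that the less principled process can be easier to hear as a groove in context.

```python
import numpy as np

def poisson_hits_per_beat(mean_hits, n_beats, seed=0):
    """Number of hits in each beat drawn from a Poisson distribution
    (the 'textbook' route)."""
    rng = np.random.default_rng(seed)
    return rng.poisson(mean_hits, n_beats).tolist()

def hacked_hits_per_beat(mean_hits, n_beats, seed=0):
    """A cruder, hand-rolled rule: alternate dense and sparse beats with a
    small random nudge. Less principled, but it forces a musical shape."""
    rng = np.random.default_rng(seed)
    hits = []
    for beat in range(n_beats):
        base = mean_hits * (2 if beat % 2 == 0 else 0.5)
        hits.append(max(0, round(base) + int(rng.integers(-1, 2))))
    return hits

print(poisson_hits_per_beat(2, 8))
print(hacked_hits_per_beat(2, 8))
```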
Ollie might not agree with me entirely, but I tend to find that this type of thing is borne out all over the place in the music we're involved in, not only in the album, but in our live performances and the work we do with autonomous software and improvising musicians.
Do you have any formal music training?
No, I’m actually trained as an architect. But I stopped halfway through and gave up after the undergraduate degree, primarily because we were writing so much music and doing so much at the time, performing a lot.
And then in 2006 I went and studied at IRCAM...
Where Max was started!
Yeah. That was really interesting. When we first started out using Max as Icarus, we were just chopping up break beats.
So we were using Max as a way of generating or randomizing break beats. Then we started refining that more and more and more. We were really listening to drummers, and listening to the way in which they would do solos and vary their beats. We were trying to use Max in a way that you could get this randomness that sounded slightly organic. Maybe it didn’t sound organic, but had a kind of spirit to it.
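None of those early patches are reproduced here, but the basic idea, constrained randomization of a chopped break so that it stays recognisable while never repeating exactly, can be sketched in a few lines of Python. This is a toy analogue for illustration, not the Max patch itself.

```python
import random

def rechop(slices, bars=4, swap_prob=0.3, seed=1):
    """Re-sequence a sliced breakbeat: mostly keep the original order,
    but occasionally swap in a slice from elsewhere in the loop."""
    rng = random.Random(seed)
    out = []
    for _ in range(bars):
        bar = list(slices)
        for i in range(len(bar)):
            if rng.random() < swap_prob:
                bar[i] = rng.choice(slices)
        out.extend(bar)
    return out

print(rechop(["kick", "hat", "snare", "hat", "kick", "ghost", "snare", "hat"], bars=2))
```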
So that was our first experiments in Max, which were chopping up beats and trying to make them live in some way. So we were listening to a lot of improvising drummers and jazz drummers.
Then going to IRCAM, the first thing that really struck me was that improvising was really strangely forbidden. And there’s this weird... I mean, I wouldn’t say it’s forbidden, but it was very much like, well...
Frowned upon?
Yeah, definitely frowned upon. Improvise, that’s what other people do — those who don’t compose.
But for the record, IRCAM was amazing. I met Emmanuel Jourdan [Cycling ’74 developer] there. He was teaching Max. And that was quite funny as well, because Emmanuel, he’s like the supreme logician. And I basically learned Max from the ground up just by patching.
So, it was actually really great to just go back to basics and have someone like him sort out all of your idiosyncrasies. But at the same time, I learned a lot about my idiosyncrasies and the things that I liked doing in programming, and how I could start to formulate those in a different way that would allow for more complexity. It was a great experience!
So that was my Master’s. I didn’t really do an undergraduate degree in music or anything. And now I’m doing a PhD in composition.
Good for you!
Well, I’m very aware of the number of people who start and somehow don’t get around to finishing. I’ve been doing it for four and a half years now. And it’s part-time. So I have no idea how long it’s actually going to take me. But there you go. It’s a start, anyway.
How is your composition education affecting your generative and improvised work?
At the moment, I don't see too much of a divide. The work I'm doing with Icarus and as Isambard Khroustaliov feeds directly into my PhD, and vice versa. I've yet to sit down and write it all up, as there is so much going on at the moment, but I'm looking forward to doing that soon.
Are you able to make a living from your music?
Yeah, just about. I combine it with a lot of different things, of course, in the sense that Icarus is not the only thing that I do. I have lots and lots of other things going on, like, what do they say? “Many strings to your bow.”
How do you work with Ollie when he’s in Australia?
We’ve been doing a lot of it just using Dropbox and stuff like that. So he’ll put a project up, I’ll do some work on it and so on. Dropbox is quite fluid, because you just save the project and then carry on.
The most difficult thing is the time difference, to be honest.
It’s almost completely opposite...
Exactly. If I talk to him on Skype or whatever, he’s always just about to go to bed and I’m just getting up, or vice versa. So you’re in a completely different headspace, which is the weirdest thing. There are always hours of the day that overlap, but that headspace, you never really figure it to make such a big difference, how you are in the morning versus at night. I guess it’s obvious when you think about it. But it’s quite a weird one to negotiate.
And obviously, you’re not performing live together with him.
Right, no. I think, with all of those things, I find you always have to end up in the same room at some point. Actually, that’s probably the part where most of the productivity actually comes in, as well. But yeah, there was a lot of to-ing and fro-ing on this project.
Although we did do all of the writing for the last album, Fake Fish Distribution, here in London. And we had a residency at Steim.
I love Steim. Did you stay in their little Steim apartment? There’s all this food that people have left there for millions of years. [Laughs]
Yeah, it’s quite classic, isn’t it?
It seems like Steim would be a great place for you guys. It’s more of an audio lab environment.
It was a great experience. But that actually brings up an interesting point.
If you really want to learn music technology, you don’t necessarily have to attend a ‘music’-oriented school. There are alternatives that can provide a valuable contribution.
An important part of what we do, something that is important to Icarus but also to our respective practices, is relating the music-making process to other processes, particularly through the act of software programming. While programming, you have to think about interfaces, usability, workflow and so on. This is one of the ways that I think Max really captures people's imagination and changes the game.
What type of alternatives are you suggesting?
Well, for example, Ollie teaches Max as part of his 'Sound Design and Sonification' unit at the Design Lab at Sydney University, which is actually based in an architecture faculty. His students are design students who don't have any background in sound or music, but they're really switched onto design and new technologies. I think they can be more in tune with ideas like Fake Fish Distribution, which may not resonate so much in a conservatory setting.
Right, that makes a lot of sense. I’ve always felt that a ‘designer’s mind’ could be applied to multiple disciplines.
Exactly...
So, Fake Fish Distribution, the thousand albums. Can you talk more about that? How you got the idea and how you implemented it?
We’d always been interested in generative and algorithmic music. A lot of Icarus was written in a kind of generative, algorithmic headspace from the ground up, with us just patching and experimenting and then applying certain things that we’d read about: bricolage-type stuff, taking inspiration from academic papers and the ideas within them, and then just straight-up patching them and experimenting with them in music.
So we’d always been in that kind of a headspace—increasingly more and more so. And it got to the point where we’d done a lot of albums that were essentially live performances. We did a string of I think three albums that were almost all done from live improvisations, with us developing the software, recording the performances and then editing and manipulating them afterwards.
It got to the point where we thought, well, look, it’s one thing to do this, but it would be really nice to do another studio album. We hadn’t done one since 2004. And we just started thinking about it, “OK, well let’s definitely do a studio album.”
Then it was like, well, what does it mean to do a studio album in 2011? Which is when we started doing it. It was literally, “what were we going to do?” Were we going to put it out as a CD? Was there any point in putting it out as a CD? All these questions just flared up.
It also seemed like if we were going to do something along those lines, we’d really want to represent the kind of algorithmic processes within the music, somehow. I guess we’d seen a lot of installations, and Brian Eno’s done a certain amount of that kind of thing with his apps.
Like the Bloom app...
Exactly. It was like those ideas of generative music, they’re definitely of a lot of interest to us, and we’ve taken a lot from those kind of frameworks over the years. But, at the same time, we were definitely very clear that we wanted to write something that was an album, and that was bounded in a way.
One of the things that we felt about a lot of generative music was that, because it was completely unbounded, infinitely varying, that was actually its downfall in a certain way, because you could never familiarize yourself with it as a piece of music of finite duration.
I think one of the things we always really bugged out to is just the fact that you can put on a record and listen to the same track any number of times, and hear different things in it. Particularly when the music is quite complex.
So in a way, I guess we saw those types of unbounded, generative music as slightly problematic from our point of view, particularly with the type of music that we’re making. Hence the idea of actually creating something that had a fixed set of parameters.
So we totally arbitrarily just came up with the idea of a thousand different records. It seemed a suitably large number for us, enough that you couldn’t actually have made it all without any software input. And yet, maybe small enough that if you were really obsessive, you could actually listen to every copy. [Laughs.] I’m not sure that ever happened, though.
Can you describe the generative patch used for your 'thousand variations'?
Max for Live has a set of objects called live.path, live.object and live.remote~ that allow you to get information from, script, and control virtually everything in Ableton Live by referencing the Live Object Model.
We started off by playing around with various ways in which we could control Live using Max; triggering clips using simple number series and probability distributions, controlling various effects using streams of data derived from audio analysis. We worked our way up the hierarchy of the Live Object Model, experimenting with different ways of cross patching and chaining processes. At the same time, we were also composing the basis for the various tracks, so the processes and architecture of the patches were developed in close conjunction with the musical material.
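As one conceptual reading of "controlling various effects using streams of data derived from audio analysis", here is a small Python sketch of an envelope follower mapped onto a parameter range. The names, ranges and smoothing are invented for illustration; in the project itself this kind of thing would live inside Max for Live devices rather than Python.

```python
import math

def rms(block):
    """Root-mean-square level of one block of samples (values in -1..1)."""
    return math.sqrt(sum(x * x for x in block) / len(block))

def follow(blocks, lo=200.0, hi=8000.0, smooth=0.9):
    """Map the smoothed RMS envelope of successive audio blocks onto a
    parameter range (say, a filter cutoff in Hz), one value per block."""
    level, out = 0.0, []
    for block in blocks:
        level = smooth * level + (1.0 - smooth) * rms(block)
        out.append(lo + (hi - lo) * min(1.0, level))
    return out
```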
At a certain point, which is always pretty hard to define, you start to feel that there are enough musical elements and enough scope for variation that it's time to start arranging. At this point, the patching takes on a different nature, going from a kind of low-level web of Max for Live devices in various tracks to a timeline-based paradigm. Some of these devices send information to each other to effect control; others require human intervention in the form of 'knob twiddling' and triggering.
We centralized the timeline control and scripting of the variation in one Max for Live 'master controller' patch, into which we could integrate, using live.object and live.remote~, all of the control we required over all of the other Max for Live sub-patches and the various methods we had been using to trigger and control the larger arrangement and sequencing of the track. Broadly speaking, at this stage, and after a lot of debugging, you end up with one patch that can more or less control everything. Although practically, there almost always ended up being processes in each track that we left as autonomous agents. These were generally low-level DSP processes. From here, you can start to build an arrangement by varying parameters in Max.
As we were using Live in its clip triggering mode and not the arrange mode, we created our own timeline patch in Max that was synced with the clock in Live, from which we could arrange the tracks. An entire arrangement for a track, therefore, consisted of a set of parameters varied over time. Most DAWs offer you the possibility to control, say, volume via a breakpoint function, so you can automate a track at the mixing stage. The difference for us was that everything, including the arrangement, was controlled using breakpoint functions. This was the biggest potential bottleneck in the project and I think it's fair to say that without the extended features of Emmanuel Jourdan's ej.function object, we'd have probably gone insane!
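The master-controller patch and ej.function aren't reproduced here, but the underlying idea, that an arrangement is nothing more than a set of parameters varied over time by breakpoint functions, can be sketched in Python. The parameter names and values below are invented for illustration.

```python
from bisect import bisect_right

def breakpoint_value(points, t):
    """Evaluate a breakpoint (piecewise-linear) function at time t.
    points: (time, value) pairs, sorted by time."""
    times = [p[0] for p in points]
    if t <= times[0]:
        return points[0][1]
    if t >= times[-1]:
        return points[-1][1]
    i = bisect_right(times, t)
    (t0, v0), (t1, v1) = points[i - 1], points[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# A hypothetical fragment of an arrangement: every parameter, including
# structural ones, is just a breakpoint function of bar position.
arrangement = {
    "drums_level":   [(0, 0.0), (8, 1.0), (64, 1.0), (72, 0.0)],
    "filter_cutoff": [(0, 200), (32, 8000), (72, 400)],
}

def snapshot(arrangement, bar):
    """All parameter values at a given bar of the timeline."""
    return {name: breakpoint_value(pts, bar) for name, pts in arrangement.items()}

print(snapshot(arrangement, 16))
```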
Within our parameter timeline we also built in function generators, allowing us to generate breakpoint functions as well as draw them in and edit them. In many respects this was a practical measure as much as a conceptual one. More often than not, you know you want the track to vary in some way at a certain point and, rather than spend ten minutes drawing in some form of gestural variation, listening back to it and then re-editing it, you might just as well generate the initial variation using an algorithm you can specify and control, and then re-edit that. At some point, you end up with a pretty definitive arrangement of the track via editing all of the breakpoint functions.
The next level then kicks in, in which you have to design how the arrangement of that track, manifested as a set of breakpoint functions, is going to vary. Our aim was to produce a limited set of variants, bounded by the process of variation, and for this purpose we used interpolation. In its simplest conception, you would produce two versions of each breakpoint control function and then perform a linear interpolation between the two with 1000 steps, thus producing our 1000 versions of the track. I'm not sure we ever did exactly that, as in practice it proved too simplistic.
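Sam notes they never used exactly this scheme, but the 'simplest conception' he describes amounts to something like the following sketch, assuming the two versions of a control function share the same breakpoint times.

```python
def interpolate_versions(points_a, points_b, version, n_versions=1000):
    """Blend two breakpoint functions with matching breakpoint times,
    one variant per version number (version 0 = A, version 999 = B)."""
    assert len(points_a) == len(points_b)
    w = version / (n_versions - 1)
    return [(ta, (1 - w) * va + w * vb)
            for (ta, va), (tb, vb) in zip(points_a, points_b)]

# e.g. one control function as it would appear in download number 42 of 1000
variant_42 = interpolate_versions(
    [(0, 0.0), (32, 1.0), (64, 0.2)],   # arrangement A
    [(0, 0.5), (32, 0.3), (64, 1.0)],   # arrangement B
    version=42,
)
```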
I think Spineez of Breakout exhibits some of this simplicity in its variation, although there are a lot of other factors at play as well. But, as is, I hope, obvious from this description, there are any number of points in the patching process of building a track and an arrangement at which you can choose a different path by which to arrive at 1000 variations of each track. The whole process is modular thanks to the inherent modularity of Max itself.
Why did you choose to do this project with Max for Live?
Ableton Live with Max for Live is a great DAW platform with a whole next level you just don't see anywhere else, not only because it is scriptable (Reaper and Ardour were other options we looked at too), but because the workflow in both programs is invaluable when writing and mocking up musical ideas.
Furthermore, Live comes with a massive number of plug-ins and rock-solid DSP, and both platforms have a massive community of users, programmers and contributors. Live with Max for Live also has the advantage of being self-contained (if you don't use any third-party plug-ins), which is a massive advantage when you start to consider what it would mean to install the whole project on a host of different machines in order to render all 1000 versions, which, as a result of certain scripting limitations, had to be done in real time.
Fundamentally, there is always going to be the argument that we could have done the whole project in, say, SuperCollider or, equally, in Beads [Ollie's audio library for Processing], but there are good reasons for choosing to work within a dedicated DAW that has a graphical interface. Namely, that when you are writing the music, you don't necessarily have to concern yourself with lower-level programming, unless you want to.
What type of parameters are varied and how?
Here's one example, and in truth it's a bit incongruous, because it makes fairly minimal use of the breakpoint-function editor I described above. However, in this respect, it is also more brutally algorithmic than some of the other tracks.
In Colour Field, all 17 elements of the track are laid out over around 60 clip slots. The progression of each element is controlled by a quadratic residue sequence (QRS). The sequences have cyclical patterns whose length is determined by the seed. The QRS controls how long it is before the next sequence in line is triggered. This sets in motion different phases of the various elements, echoing a kind of minimalist approach to musical construction, where various different-length motifs are combined and juxtaposed to produce an evolving tapestry of sound.
By co-varying the seed in the QRS for each element, you vary the progression of the track in a manner that both reflects the hierarchy of the various musical elements in the arrangement and the musical structure at play, namely, the cross cutting and interplay of various rhythmic sequences and harmonic progressions.
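Exactly how the seed and the QRS are wired together inside Colour Field isn't spelled out, so the sketch below is only a rough Python illustration of the general mechanism; the prime, the per-element offsets and the mapping from residues to trigger times are all invented for the example.

```python
def quadratic_residue_sequence(prime, offset=0):
    """One cycle of a quadratic residue sequence: (n + offset)^2 mod prime.
    The cycle length is set by the choice of prime."""
    return [((n + offset) ** 2) % prime for n in range(prime)]

def trigger_times(prime, offset, step_beats=1):
    """Hypothetical mapping: each residue is how many extra beats to wait
    before the next clip in this element's sequence is triggered."""
    times, t = [], 0
    for wait in quadratic_residue_sequence(prime, offset):
        t += (wait + 1) * step_beats   # +1 so a residue of 0 still advances
        times.append(t)
    return times

# Illustratively, derive each of the 17 elements' offsets from a per-version
# seed, so every version gets a different set of phasings.
version = 137
schedules = {element: trigger_times(prime=11, offset=version + element)
             for element in range(17)}
```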
So, how much is the piece varied between separate releases?
The seed is linked to the version number, so the pattern of interactions generated between each of the elements is completely different in each version.
Varied variances... brilliant! Was there a particularly challenging aspect of the patch?
The most challenging thing about Colour Field was composing the relation between the patterns of the QRS and the patterns within the musical sequences.
There you go. It’s back to composing!
We're pretty happy with the way Colour Field turned out. Although it's a very simple mechanism, it seems to play well with the notion that there is a collision happening between two worlds: the sonic elements that make up each of the sequences and the algorithm that is controlling their larger arrangement. I also like the fact that QRSs are most commonly employed to diffuse sound reflections in acoustics, and here we're using them to diffuse musical patterns across an arrangement.
Anything you might have approached differently in hindsight?
In a general sense, I think I’d have given ourselves more time. I think we completely underestimated how long it would take to listen to a bunch of different variations, edit something and then listen to another bunch of variations and work out whether the edit you've made is better or worse. It's just an immensely time-consuming process.
There were definitely days where I think we were pushing the limits of our sanity, given the knowledge that we only had x number of days before Ollie went back to Australia, and we would be 11 hours and 10,000 miles apart once again.
What's up next for you guys?
Given that there appear to be so many bands called Icarus now, we're considering some type of plunderphonic mayhem...
Interview by Marsha Vdovin and Ron MacLeod for Cycling '74.