Eric Lyon, developer of numerous cool MSP objects including the FFTease series, has written an amazingly comprehensive new book on writing audio externals in C for Max and Pd. Eric takes you through a series of twelve examples to illustrate how to implement audio and DSP concepts in external objects. Learn more about Designing Audio Objects for Max/MSP and Pd on the publisher’s web site or on Amazon.
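If you've never seen what an audio external looks like under the hood, here's a minimal sketch of the shape a signal object takes in Pd's C API – a bare-bones gain~ that scales its input by a float. This is my own illustration rather than an example from the book (and the Max SDK versions of these routines differ in their details):

```c
// gain~ — a minimal Pd signal external (an illustration, not from the book).
#include "m_pd.h"

static t_class *gain_tilde_class;

typedef struct _gain_tilde {
    t_object x_obj;
    t_float x_f;      // dummy float required by CLASS_MAINSIGNALIN
    t_float x_gain;   // gain factor, set from the right inlet
} t_gain_tilde;

// Perform routine: runs once per signal block.
static t_int *gain_tilde_perform(t_int *w)
{
    t_gain_tilde *x = (t_gain_tilde *)(w[1]);
    t_sample *in  = (t_sample *)(w[2]);
    t_sample *out = (t_sample *)(w[3]);
    int n = (int)(w[4]);

    while (n--)
        *out++ = *in++ * x->x_gain;
    return (w + 5);  // skip past our four arguments on the DSP chain
}

// Called when DSP is switched on: register the perform routine.
static void gain_tilde_dsp(t_gain_tilde *x, t_signal **sp)
{
    dsp_add(gain_tilde_perform, 4,
            x, sp[0]->s_vec, sp[1]->s_vec, (t_int)sp[0]->s_n);
}

static void *gain_tilde_new(t_floatarg f)
{
    t_gain_tilde *x = (t_gain_tilde *)pd_new(gain_tilde_class);
    x->x_gain = f;
    floatinlet_new(&x->x_obj, &x->x_gain);  // right inlet sets the gain
    outlet_new(&x->x_obj, &s_signal);       // one signal outlet
    return (void *)x;
}

void gain_tilde_setup(void)
{
    gain_tilde_class = class_new(gensym("gain~"),
        (t_newmethod)gain_tilde_new, 0, sizeof(t_gain_tilde),
        CLASS_DEFAULT, A_DEFFLOAT, 0);
    class_addmethod(gain_tilde_class, (t_method)gain_tilde_dsp,
                    gensym("dsp"), A_CANT, 0);
    CLASS_MAINSIGNALIN(gain_tilde_class, t_gain_tilde, x_f);
}
```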
Cycling ’74 will be closed October 17th, 18th, and 19th for company-wide meetings. We will re-open October 22nd. Orders, authorization, and support inquiries will be delayed until then. Thanks in advance for your patience!
This weekend, I’m heading to Pittsburgh, PA to perform as part of the annual VIA Festival. VIA is an all-volunteer-run festival that pairs hot musical acts with current visual artists to create a unique audiovisual experience. I’m excited to be part of the show, working with the original moombahton group Nadastrom and checking out some really amazing artists along the way.
Memorable Expo ’74 artist Jeremy Bailey will also be joining the festival, along with some other really fantastic people. For my set on Saturday night, I will be trying an experiment with iPhone cameras in the crowd streaming through a complex video performance patch that has been in constant evolution since 2009. To help make this happen, I’ll be leaning on Airbeam Pro and Syphon to route the iPhone streams into Max, along with a small army of volunteer phone-camera people. If you are near Pittsburgh this weekend, come check it out.
Saturday, October 6, 2012, 8pm–1am, 6000 Penn Ave, Pittsburgh, PA.
I’ve spent the last week or so with Brian Eno and Peter Chilvers’ new iPad app Scape, and I have to say that I’m impressed – both by the application itself, and also by the experience of using it.
The video explains what’s going on quite well, and there are a couple of good pieces about the app and its creators here and here. But I’ve found myself thinking a lot about the interface and what is not explained – about the relationship between the iconic representation of elements in Scape, their function, and the process by which I’ve come to some understanding of what’s going on. (My thinking here probably has some of its origins in our focus on the idea of discoverability during the luge ride that was Max 6 development, but I’ve been fascinated by how trying to make sense of the program has slowed me down in a rewarding way.)
On one level, the app certainly is discoverable in the sense that there are intuitive models from other software threaded through the design that make it “easy” to use. But I’m interested in the parts that aren’t efficient in the “Let’s get this interface stuff out of the way so that we can get down to producing things” sense.
I think that this application shares some features with slow food in that there’s an admirable sense of reward that originates in process – interacting with the application over time, and learning by going where to go.
In the case of Scape, that winds up being about looking and listening in real time. There are no pop-up quickie hints, no chatty paperclips. You will only figure out how things work by working with them – watching and listening. The elements themselves invite scrutiny about representation – both in how they look and in the way that they’re animated onscreen while they’re “running.”
From the very first, you start wondering what the relationship is between the little animation you see and what you hear, and what the graphic space in which you’re working represents. If you skootch similar elements so that they’re in close proximity, their size sometimes alters. What does that mean? When elements approach the edges of the screen, they shrink or vanish. Huh?
All that might lead you to believe that there’s a simple one-to-one relationship between what you see and what you hear. But that’s not quite true. Some elements, when placed, appear to “stay out of the way” of similar elements. As you watch and listen, it becomes clear that there are other, more subtle rules that govern the interaction between different elements – but it’s something you’re going to hear rather than see; you need to trust your ears. Inefficient as the process is, I’ve really enjoyed it.
The pleasure of the relationship continues as you use the application: the next time you launch the app to create a scape, the interface may open to show you that a new icon has been added to your palette. What does it do? Only one way to find out – drop it into a blank scape and listen. Add another element and see if things change. Choose a new palette on the right and see how that affects things. You can leverage what you already know for this exploration, but it’s the same iterative slow-time activity. And the new element is always a small surprise to open and explore.
I expect that there are some users who’ll be driven to howlin’, fist-shaking fury by the way that Scape doesn’t explain itself (or the way it isn’t documented). I’m sure they’re dedicated and clever individuals for whom the time necessary to live with the app and to develop some kind of personal and inner map of what they think it does (acts which I’d say live at the corner of Idiosyncrasy and Virtuosity) might seem a waste of valuable time if they’re focused on doing the pragmatic thing and just want to “get on with making Enoesque audio.” (The good news for them is that I expect the “random” feature of the app will be as good as they’d be without the investment of time and attention and the effort of listening. Maybe better. Or it’ll at least save a lot of time – simply punch the button and listen for 20 seconds, then maybe toggle the moods on the right, and stardom will certainly follow.)
But I’m seriously entertaining the thought that this application is interesting and genuinely unusual precisely because of the way it’s set up to encourage developing a relationship with its interface that’s reinforced by the temporally bound experience of listening, and because of the way the application unpacks itself over time in a way that encourages the continuation of that relationship. I’ve never run into anything quite like it.
By now, my version of Scape has what I think might be a full palette (although I’d love to be surprised again), but the thing still engages me – for example, are my intuitions about how the elements interact anything more than personal ones? I expect that the answer is either “Not at this point,” or that I’ll eventually develop enough facility that my idiosyncratic readings will constitute my “style” when working with Scape. Maybe that personal or internal map of things is somehow the point – instead of something defined for me, I’m writing my own internal manual for the interface, and reworking the documentation with each new piece I get as I work with it. (Some things have escaped me and probably always will – I note that Eno says that some elements play more sparsely based on time of day, which I would probably have been the last person in the world to figure out; I have spent my time with it in the evenings only, so far.)
And, as an Oblique Strategies aficionado from waaaaay back, how can I not like the Scape Strategies?
To finish on a note more directly related to Max, I don’t think it’d be particularly difficult to add some of what I think I’m hearing in action to the Max patching I already do, honestly [another reason I’ve found working with it to be a salutary experience – the ideas aren’t that difficult or subtle; their implementation is]. Obviously, as a programmer I am going to be less successful at surprising myself – but I do think that considering probabilistic interactions outside of the boundaries of an individual bpatcher – between what I think of as the pieces of what I’m using to create the larger whole – may yield some interesting results.
And, since it might be that I’ve learned an interesting lesson, that process may… um… take a while. A good while.
Don’t worry, fans of creativity: SUE-C is still creating thoughtful teases of imagery and transforming spaces to exercise and delight your imagination muscle. She uses Max for both her live performances and recorded work to animate hundreds of different objects (paper, photos, small models, shiny things, etc.). You can see pictures of her set-up here.
Her newest work, Infinite Jest, lives both as an installation and as a live handmade film inspired by the complex and remarkable novel of the same name by the late author David Foster Wallace. As an installation, the piece turns the space into an environment made up of projected videos, a text-based soundtrack, a gaming console, and a mini tennis court, intended for the audience to walk through and play with. During the performance, the film is brought to life through the lens of live cameras that follow the manipulation of photographs, drawings, scale models, and various three-dimensional objects by visual artist and performer SUE-C, along with the lush live electronic soundtrack and vocals by AGF. Set in a slightly futuristic world, the film is an attempt to create and re-create what the character James Orin Incandenza, optics expert and filmmaker, considered to be his life’s major works. After many unseen failures, his ‘film’, an entertainment, is eventually released – and proves to be fatally seductive.
With this as a jumping-off point, longtime audio-visual collaborators SUE-C and AGF explore the expression of seduction in sound and image in collaboration with Kevin Slagle. You can see photos of the installation here.
The installation opens on October 5th at LaBoral Art Center in Gijon, Spain, and stays until February 25th. The live show is October 6th at 8pm.
SUE-C will also be accompanying Morton Subotnick at SFMOMA on November 15th for a concert featuring his most famous piece, “Silver Apples of the Moon,” with her live animation.
More about SUE-C and friends here:
While I generally leave the business of standing alone on stage and fingersquiggling on a smartphone touchscreen to others as a performance modality, I remain an unabashed fan of the idea of being able to construct elegant multitouch module interface/performance setups like this one:
The video crossed my transom almost in tandem with the announcement of the release of Brian Eno and Peter Chilvers’ most recent iPad app Scape, and together they strike me as a useful object of contemplation: thinking about the exercise of a “reverse engineering” approach when it comes to making new things in Max, and how or whether it interacts with the idea of encoding opportunities for virtuosity.
Even on those occasions where I have tried to duplicate something that moved me as exactly as I could, the result was never a precise copy. Despite that, it seems as though nearly every important thing I learned about Max programming winds up being traceable to what happens when you break down a problem into its component parts and then try to imagine what the “Max version” of those parts would be. A really interesting give-and-take usually occurs while the process is going on that, in retrospect, might be more interesting than the initial goal.
In the case of the touchpad/Eurorack example, you can actually puzzle it out – a combination of the text itself and a quick look at the front panels and jacks of the Cyclebox, ES-3, Maths, Doepfer A-132-3, and Tom Erbe’s Echophon* modules in the rack will more or less tell you what you need. So reverse-engineering this wouldn’t be too awful a proposition.
Beyond that, it’s just a question of downloading a fingerpinger external for Max and engaging in a little tweakery.
That’s the way it is when someone else breaks the conceptual ground for you; you’re freed up to find the parts you’re less comfortable with and tweak the patch, or use what you find for the next thing.
* Mr. Erbe, for those of you who may not recognize his name, is the author of the wonderful suite of SoundHack MSP external objects.
- Connect with Codebar:
Gen patchers now have a sidebar that shows you the text code generated by the patcher in the GenExpr language. This will give you a better idea of what’s going on behind the scenes in Gen. Select objects in the patch to highlight the corresponding code segment, or copy the code from the codebar to use inside of a codebox. It’s a great way to learn about the connection between visual and textual programming in Gen.
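For example (and this is only illustrative – the exact text the codebar generates for a given patch will differ), a patch that scales an input by a param corresponds to GenExpr along these lines:

```
// Illustrative GenExpr, roughly what a simple gain patch reduces to
Param gain(1);
out1 = in1 * gain;
```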
- Organize Subpatchers:
Now you can use subpatchers and abstractions to make your Gen patches easier to organize visually and to create reusable elements, just like you do in traditional Max patches. Use the gen object, with an optional filename, to nest a Gen patcher or abstraction.
- Name Your Send/Receive Objects:
We’ve added the ability to clean up some patch-cord clutter in Gen patches with the introduction of named send/receive objects. They are limited to use within a single Gen patcher, but they can make patchers easier to read and maintain when you have many objects and connections between different logical sections of your patch.
- Save Time with Loops and Branching:
Now GenExpr supports looping and branching constructs like for, while, and if/then. These make it cleaner to write code that does similar things many times, and can save CPU by evaluating code only under certain conditions.
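Here's a small sketch of how that reads in a codebox – the task itself (summing scaled copies of the input, with a bypass branch) is just an invented illustration:

```
// GenExpr codebox sketch: a loop to sum scaled copies, a branch to bypass.
n = 4;
acc = 0;
for (i = 0; i < n; i += 1) {
    acc += in1 / (i + 1);   // the "same thing many times" case
}
if (in2 > 0) {
    out1 = acc / n;         // this branch is evaluated only when needed
} else {
    out1 = in1;             // pass the input through untouched
}
```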
For more information on Gen, check out our Gen Patch-a-day series.
- Grow with Abstractions:
All VIZZIE modules can now be used like any other Max abstraction. (For example, the BRCOSR module’s abstraction is called vz.brcosr.) You can integrate VIZZIE modules into your regular patching and use them the way you’d use any Jitter object, and you can mix and match VIZZIE bpatcher modules with VIZZIE abstractions when you don’t need a module’s user interface (if you do need to see the UI, just double-click on the abstraction).
- Hit the “Stomp Box Switch”:
VIZZIE module displays still function as on/off “Stomp Box Switches”, but you can also use Max toggle objects or 0/1 messages to turn VIZZIE modules/abstractions on and off.
- Discover Presets:
You can use Max messages to VIZZIE modules and abstractions to choose presets or interpolate between any two stored presets on the fly.
- Try the New Modules:
VIZZIE now includes two new modules and abstractions: use CROPPR/vz.croppr to sample a portion of your video and move it about, or expand it to full frame size. And the WYPR/vz.wypr will come in handy for those classic screen-wipe effects (try ’em with a little CHROMAKEYR and some vz.rotatr for some serious fun).
In Max 6, Jitter Physics is a new system for programming physical simulations in any Max patcher, opening up traditional game engine features and realtime motion graphics techniques. Since it is integrated with Max, you can also use physical systems for more subversive tasks like controlling audio or video in ways that have nothing to do with physics or animation as we know it. Based on user feedback, we’ve added many useful features to Jitter Physics in Max 6.0.7.
- More Useful Collisions:
We’ve made major changes to how collisions are detected and reported, including collision force and surface-normal information, and the ability to get collision notifications from an individual object as they happen, rather than relying on the world’s report of all collisions.
- Come Together with UI handling:
We’ve improved how objects can be selected and operated on with mouse and other UI events, and how the physics system and the OpenGL system can work together.
- Physics Multiplied:
Using jit.phys.multiple, you can easily create many instances of the same physics body by passing it a Jitter matrix. In this update, jit.phys.multiple gets a set of really exciting new features, like the ability to add constraints (joints, motors, etc.) to these instances for more complex simulations.
It’s not always easy to describe what Max does or how it works to your relatives, but now I can say “look at the Bay Bridge” thanks to artist Leo Villareal and his crew.
Read more about the project:
So happy to see Alex Harker and Pierre Alexandre Tremblay’s amazing work with the convolution reverb (and many other things) packaged up and announced. If you haven’t downloaded and played with it yet, please do so!
Here are some places to find more impulse responses:
And remember that convolution isn’t just for reverbs…
Loud Objects do live improvisational electronics, soldering complex noise circuits from scratch. The performance is as visual as it is sonic – they do all the soldering on the surface of an overhead projector, and you can see the smoke and spatter as their hands manipulate the silhouettes of microchips and wire. It’s more like jazz than engineering – watching them perform you get the sense that they’re more wrangling chaotic forces than fulfilling a schematic. As a topography of wires and chips emerges, the sound becomes richer, more complex, more convoluted. At one point, they step away from the circuit, which is now generating an evolving pattern. The pattern cycles faster and faster, and at its peak, they cut the power.
A Loud Objects performance is an audio/visual event, a generative system, and a physical instrument all in one. And it is, in fact, quite loud.
See and hear Loud Objects at SFEMF.
Pomona College in Southern California has been a Cycling ’74 customer for over a decade. Beyond our appreciation for Pomona’s use of the software in several different departments, we can empathize with the school’s nerdy obsession with a certain number.
You can probably imagine that we too ascribe magical powers to a certain number similar to the one revered at Pomona, a power which often seems even greater when we are confronted with numbers on either side. I’ve pleaded with my friend Andreas Killen to write a sequel to his wonderful book 1973 Nervous Breakdown with a book about, well, you know… (as a programmer I would think of this as an off-by-one error). Whenever I go to a bakery or a meat counter that uses a take-a-number system, I always hope I will be able to tear off the number 74. In fact if the currently available number is, say, somewhere in the high sixties I might just hang out for a little while. The best possible result would be at a place like Gayle’s where they use take-a-numbers that are of the form
Have you been seeing 74 everywhere lately?
Stanford University has begun an important project to preserve the works of the great Max Mathews. Geoffrey C. Willard at Stanford University Libraries posts about it on their Digital Library Blog.
Read the complete post here, where you can hear a sample of Mathews’ work and find compilations of other early electronic music gurus for sale (album cover art at left).
I’ve always been fascinated by the ability of sensors to pick up on our brain waves and turn them into a physical outcome – like in cool old ’70s movies where the girl moves the train using alpha waves, or the new Star Wars-based toy that lets kids move the trainer remote using The Force.
Now thanks to Max and the work of artist Andrey Smirnov and his assistants, we can hear the brain and all its chattering feedback.
Way to think outside the box!
Since Labor Day is coming, you might have some spare time this weekend and might be wondering what you could do. How about a little Max project?
Here’s one simple but fun idea, based on an optical experiment I found on Brusspup’s Youtube Channel. How about making a patch that would let you do this?
No doubt you’re familiar with the wagon-wheel effect, which is what’s happening in the video above. The interaction between the frequency of the movement and the frame rate of the camera produces this cool gravity-defying effect: when the water vibrates at (or near) a multiple of the camera’s frame rate, each frame catches the drops in (almost) the same position, so they appear to hang in mid-air or drift slowly. You might be able to do something low-tech for the waterfall, but for the Max patch, you should go nerdy!
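If you want the arithmetic behind the illusion, it's plain aliasing: motion at frequency $f$ sampled by a camera at frame rate $f_s$ shows up at the apparent frequency

$$f_{\mathrm{apparent}} = f - f_s \cdot \operatorname{round}\!\left(\frac{f}{f_s}\right),$$

so water vibrating at exactly 24 Hz filmed at 24 fps appears frozen, while 23.9 Hz appears to run slowly backward.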
For instance, instead of using cycle~ to generate a low frequency, why not make your own using a fast algorithm (just like this one) in gen~?
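Here's one way that might look in a gen~ codebox – a classic parabolic approximation of a sine, driven by a ramp. The Param name and values are mine, a sketch rather than the algorithm from the link:

```
// gen~ codebox sketch: a cheap sine-ish LFO built from a ramp.
Param freq(24);                 // oscillation rate in Hz (illustrative)
ph = phasor(freq) * 2 - 1;      // 0..1 ramp rescaled to -1..1
out1 = 4 * ph * (1 - abs(ph));  // parabola approximating one sine cycle
```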
To capture the video in Jitter, jit.grab will do the work, and you can record the results using jit.qt.record and upload them to YouTube! Don’t forget to share it.
Happy Labor Day!
Recently, when I was hearing about a very exciting forthcoming stage production (you’ll just have to watch this space!) that uses Max, I was reminded of one of the last theater productions I saw that used the software: Schick Machine by the Paul Dresher Ensemble. This amazing piece continues to tour around the world (it was in Hong Kong last month and will be in Illinois early next year).
We did a series of interviews with Paul Dresher and Alex Stahl about the software behind the show when it premiered a few years ago. If you never saw these interviews at the time, they’re a fascinating look at how software development interacts with virtuosity, danger, and the demands of the stage.
“Water Light Graffiti” is a surface made of thousands of LEDs illuminated by contact with water. You can use a paintbrush, a water atomizer, your fingers, or anything damp to sketch a glowing message or just to draw. Water Light Graffiti is a wall for ephemeral messages in the urban space, without deterioration – a wall for communicating and sharing magically in the city.
Sometimes mathematics seems opaque and mysterious until someone finds a way to illustrate a concept in a unique way. Often the results are beautiful.
For each natural number n, a periodic curve is drawn starting from the origin and intersecting the x-axis at n and its multiples. The prime numbers are the positions crossed by exactly two curves: the curve for 1 and the curve for the number itself.
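One way to make the construction concrete (my formalization – not necessarily the one used to render the image): take the curve for each natural number $n$ to be

$$y_n(x) = \sin\!\left(\frac{\pi x}{n}\right),$$

which starts at the origin and crosses the x-axis exactly at the multiples of $n$. Then the number of curves passing through the point $(k, 0)$ is $d(k)$, the number of divisors of $k$ – and $k > 1$ is prime precisely when $d(k) = 2$, the curves for 1 and for $k$ itself.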
“These days you have to know everything: you have to be able to play your instrument, you have to know music theory, and you have to know Max” – Dane Orr, Sonnymoon
The most overused sentence between my friend Jon and me is, “Listen to this band and consider going to their show with me.” I’ll admit, he knows what kind of music flips my pancakes, but when we went to see the band Sonnymoon, I had no idea that at the end of the night I would be saying, “This is what I’ve been searching for.” Hive mind at its best.
Named in homage to the Sonny Rollins track “Sonnymoon for Two,” vocalist Anna Wise and producer Dane Orr are masters of the musical balancing act, an eloquent embodiment of musical talent paired seamlessly with music technology. With the recent addition of Tyler Randall on the sitar (yes, you read that correctly) and percussionist Joe Welch, the quartet will scorch the apathy right out of your body.
The setup for their live performances is carefully orchestrated, with a plethora of controllers and convoluted cables (imagine a less precarious version of Indiana Jones landing in a pile of snakes). However, the line is clearly drawn between tools and musical capabilities. They have adopted a policy of placing their laptops out of view, which, for me, is part of their magnetism. “Simplicity is key for us, because expression, control, and connecting with an audience are the most important,” Orr has said about their approach.
That philosophy also carries over into the way Sonnymoon utilizes Max. “At first we used little abstractions that we found interesting, ones we thought might open different sonic doors for us, but it was when we stopped thinking technically and started thinking artistically that we truly became friends with Max… People needn’t be scared of Max or think that you need to dedicate your whole life to computer programming to be able to use it. The same way a guitar player can dedicate some time to learning to use different pedals, people can explore using Max to expand their musical world.”
I’m glad Sonnymoon and Max have become friends. They are so similar – harmonizing technology with creativity while exposing both fragility and strength is what they do best.
For many years, Robin Fox has been at the forefront of Australia’s audio-visual practitioners, innovatively using Max and interactive design systems to engage and challenge audiences across the globe.
As expected, many of his projects involve Max in one form or another, from controlling motors that deflect lasers to taking data from wave-rider buoys in the Southern Ocean. He is best known for his synesthesia-inducing, laser-based live shows, but in recent times he has implemented some very interesting works involving live interactivity and tracking.
Commissioned by the City of Melbourne to create a Giant Theremin, Fox set about designing and implementing a full-scale, fully polyphonic version. It uses a camera tracking system to trigger sounds (in place of an electromagnetic field) and can track up to 8 individual people at any one time, day or night.
I find these particular projects interesting because they not only use Max in an innovative way within the public realm, but also encourage people and strangers to interact and step outside their comfort zones.
Lots of people use Soundflower for producing podcasts and doing various work where audio needs to get from one app to another. One great thing about Soundflower is that it is hackable, so you can customize it to your needs. A clever Soundflower user — who hosts “LuBlog” — contacted us with a great example where a group call can be recorded using Ableton Live with each individual on their own track, so the whole thing can be mixed properly in post-production.
Interested in learning more? Follow LuBlog’s detailed instructions.
One of the things that first drew me to Max was its ability to connect such a wide variety of software and hardware in meaningful and flexible ways. Anything from video game controllers to muscle tension or brainwaves becomes fair game as a performance interface.
The Resistor JelTone, created by NYC Resistor, a NYC-based hacker collective, is one of the most humorous and unexpected interfaces I have seen. Using capacitive sensing and America’s favorite summer salad ingredient, gelatin, they create novel instruments from Jell-O molds. An Arduino reads the capacitance data and passes it on to Max/MSP to generate the sound.
Check out this JelTone toy piano:
A video of one being performed live can be seen here.
Not weird enough for you? What’s the strangest or most novel instrument you have ever seen?
I – and my kids – have been fascinated with 3D films and effects lately. In fact, we have been making silly paper-based anaglyphs with red and blue markers just to see what we could do.
Some time ago, Andrew Benson put up Jitter Recipe #43, which shows how to do anaglyphs in Jitter. This opened the door to Max-based 3D that would go beyond the typical stick-poking-you-in-the-eye graphics.
David Butler, author of the imp.* tools, also did some experiments with anaglyphs created in Max. His Microfiche project is a great example of something generative and eye-bending. So don those red/blue 3D glasses and take this for a whirl!
You can’t schedule musical inspiration. Nothing is more frustrating than having an idea in your head, but not being in a position to record it before it slips away.
Since the iPhone came into my life, I do a lot better job of capturing the ephemeral. At first I just used the built-in “Voice Memos” app to quickly document the moment as best I could, tape recorder style. I have since tried a whole host of audio capture tools, including my latest love, Loopy, a cute little app that lets me quickly layer a bunch of loops.
The idea, of course, is that later I’ll take the captured audio and import it onto my computer for further processing with Max, Live, and other tools. But this is invariably where audio capture apps fail: they have great interfaces for capturing and basic authoring, but clumsy, inconvenient systems for exporting the audio to the computer.
What mobile audio apps do you use? Are there any that do a great job of integrating with a computer-based workflow?