During the month of November, I took a little journey into a new programming area: creating content specifically for the Ableton Push control device. This hardware has a unique place within the Max community due to its tight integration with Ableton Live (and therefore Max for Live), but it is also a powerful control surface in its own right.
With help from Mark Egloff of Ableton, I started with a goal: to create a device that would be a usable performance tool, but would “take over” the button grid on the Push to make it easy to manipulate in real time. I chose an 8-band EQ-like device that I called the Frequency Mixer, and created the code necessary to run it solely from the Push. See the result (along with some video).
Next up was to work directly with the Push in Max – completely outside the Live environment. Based on some information that Mark provided, I was able to determine the values needed to update the Push button matrix RGB values, and created an interesting, if rather useless, 8×8 image display. I can imagine using this to modify a program based on the display values, but have left this as an exercise for the willing Push student!
Finally, based on feedback received on YouTube, I modified the first (Frequency Mixer) project to act on other tracks in a Live set. This way, you could either mix multiple channels, or (by inverting the values) crossfade multiple tracks from a single instance of the Frequency Mixer. This is based on the use of send and receive objects that share a specific name, which is propagated through the entire Live set. See the result — a fun extension to the original device.
While I created some specific devices and projects, the implication is much greater – the Push, like many other controller devices, is an interesting playground for the creative coder. Hopefully you will find tips and techniques here that help you get more extensive use out of your Push!
“One of my early desires as a musician was to sculpt and organize directly the sound material, so as to extend compositional control to the sonic level – to compose the sound itself, instead of merely composing with sounds.”
A strange loop arises when, by moving only upwards or downwards, one finds oneself back where one started. The concept of a strange loop was proposed and extensively discussed by Douglas Hofstadter in Gödel, Escher, Bach. In it, he describes a beautifully-structured framework for exploring the question of how a sense of self arises out of something that has no self; to go from a state of meaninglessness to something that can refer to itself.
One kind of audible strange loop is called a Shepard tone. This illusion was invented by the psychologist Roger Shepard in 1964. He used a computer to create a series of tones that seems to rise forever. Jean-Claude Risset created a version of the scale where the tones glide continuously. The tone appears to rise (or descend) continuously in pitch, yet return to its starting note.
These pieces were the result of several years of collaboration with Max Mathews at Bell Labs, realized with Music V. Music V was an extension of Music III (including Music III’s innovative concept of unit generators, which pre-dated voltage control as a formal protocol), rewritten in Fortran, with added support for analog-to-digital conversion so one could manipulate digital audio directly. Music V was also distributed free of charge, at the request of Max Mathews, to stimulate research and the production of computer music. During his time at Bell Labs, Risset also compiled a catalog of computer-synthesized sounds, including FM and additive examples, for a synthesis course he gave in 1969 with John Chowning at Stanford University.
I set about implementing a Risset glissando in Max without referencing existing implementations. I deduced that one would need a master phasor, subdivided into 90-degree phase offsets, to act as the control system for the effect. Each subdivided output controls the pitch of an oscillator that moves four octaves, so the distance between the quadrature outputs is always an octave.
The second part of the effect is controlling the output level of each oscillator. The most logical way to do this is to reuse the existing phase-offset phasor outputs. Half of a cosine cycle, from 270 to 90 degrees, produces the correct shape for our purposes. So, if we reduce the magnitude of our phasor output by half and shift its phase by three quarters, we’ve achieved our goal. Now, when the ramp controlling the output pitch is at its most extreme (at either edge), the oscillator output is inaudible.
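For readers who like to see the arithmetic spelled out, here is a minimal Python sketch of the control math described above. To be clear, this is an illustration, not the actual implementation (which is a Max patch built from phasor and cosine objects); the base frequency, four-octave span, and four-voice count are assumptions chosen to match the description.

```python
import math

def risset_voices(phase, base_freq=55.0, octaves=4, n_voices=4):
    """Compute (frequency, amplitude) pairs for one master phase value in [0, 1).

    Each voice is offset by 1/n_voices of a cycle (90 degrees for four voices).
    Over one cycle a voice sweeps `octaves` octaves, so adjacent quadrature
    voices stay exactly an octave apart. A raised-cosine window silences each
    voice at the extremes of its sweep, which is what hides the wrap-around.
    """
    voices = []
    for i in range(n_voices):
        p = (phase + i / n_voices) % 1.0
        freq = base_freq * 2 ** (octaves * p)          # exponential pitch ramp
        amp = 0.5 * (1.0 - math.cos(2 * math.pi * p))  # zero at p = 0 and p = 1
        voices.append((freq, amp))
    return voices
```

Driving `phase` with a slow upward ramp makes each voice glide up four octaves while fading in and out, so a voice is always inaudible at the instant it wraps back down. This pitch/amplitude pairing per quadrature output is the same set of values the Quadrature Risset Generator module exposes as control voltages.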
My first implementation of the Risset glissando resulted in a new Beap oscillator type. These oscillators are designed to accept a 1v/oct input, so, as long as all oscillators are connected to the same master phasor output, they can be stacked and played like normal oscillators. In other words, one could think about musical structures using the normal rules of harmony.
In addition, I produced a Quadrature Risset Generator module, which, when given a 0-5v phasor input, will produce eight control voltages corresponding to a pitch and amplitude pair for each quadrature output. This module can be used with any Beap oscillator to produce Risset glissandos, or used in conjunction with quantizers to produce chromatic or diatonic Risset figures that endlessly rise or fall.
At this point, I migrated to Max for Live to produce a couple of polyphonic Risset synthesizers, one based on subtractive synthesis and one based on a simple two-operator FM group.
Here is a generative Risset Ableton Live set. This uses one Aleatoric module to generate the notes, followed by a mutating MIDI delay, followed by the FM Risset Synthesizer set to a period of 32 measures.
Some example output:
These devices have been added to my Live 9 Inspiration Suite along with some related devices like an aleatoric generator and a couple of new delay effects (click the Download .zip button). As always, the latest version of Beap, including the new Risset modules, can be found at the Beap site.
Computer music is still in its infancy and there is so much area to explore. Risset’s work has inspired me to challenge some of my assumptions about music that I thought were fixed.
This all began as a joke within the “Material Team” – since we do a visual programming language, we should have an audio podcast! We could do virtual patching with phrases like “You really need to connect the second outlet of the umenu, because that’ll give you what you really want for the midiout object.” Yuck, yuck, yuck.
But, just for fun, I decided to give it a try anyway. You know what? It turns out to be really fascinating. The reason is that you don’t talk programming; rather, you talk about inspiration, creativity, hard work, personal backgrounds and future visions. In fact, behind every stack of code is a very interesting person, and chatting with them for a podcast is a genuine pleasure.
I’ve put up the first three episodes (since that’s what is required to begin the iTunes process), and will be doing one new podcast each week until I exhaust myself. If you are interested in being interviewed, or if you know someone who should be interviewed, drop me a line and let me know.
The web page that libsyn maintains for the podcast is here: The ArtMusicTech Podcast
Many of us at Cycling ’74 are gardening enthusiasts. When this stunning project was brought to our attention, two of our favorite worlds came together. Wish we could be there to see and hear it in person. Thanks for sharing, OFL Architecture!
Francisco Colasanto, who works as assistant director at Centro Mexicano para la Música y las Artes Sonoras (CMMAS) in Morelia, Mexico, has released the first two modules of a new ebook about Max/MSP on the center’s web site CMMAS.org. The first module is free (after registering with the site) and covers the Max interface. The second module covers additive synthesis and is available for $5.00. The book will soon be available for iPad and Android.
This publication is unique in that the text is illustrated with short videos that demonstrate the concepts being described. When talking about something visual and dynamic such as a Max patcher, this makes perfect sense. I could use words to describe right-to-left ordering until I’m blue in the face but once you see it in action, you’ll understand it immediately.
Max/MSP: A programming guide for artists is off to a great start and I highly recommend you check it out and continue to watch as Francisco adds new modules.
Looking to brush up on Ableton Live skills and learn more about Max for Live? Coming up this Friday, September 27th, Seattle’s Decibel Festival will be hosting Ableton Day at the Broadway Performance Hall (on the SCCC campus). Max in the Morning will kick off Ableton Day at 11am. Clint Sand will join Ableton Certified Trainers James Patrick and Chris Schlyer in presenting a whole range of material, from entry-level to advanced.
Chris Petti of Dubspot will give a workshop from 12:45-1:30. If you’re in Seattle, this will be a great chance to pick up some new tricks and get immersed.
I just received an email from our friend Lippold Haken. Here’s a video of his amazingly expressive musical controller, with software written entirely in Max.
You can find out more about this incredible musical instrument on Lippold’s website.
Two months ago, Mira was born. Of course, that doesn’t mean that development has stopped–far from it. Since the moment it came into the world, Mira has continued growing steadily. At this point you might well be wondering just how little Mira is coming along. Well, according to babycenter.com, at two months of development “your baby will begin to move beyond his early preferences for bright or two-toned objects toward more detailed and complicated designs, colors, and shapes. Show your baby — and let him touch — a wider variety of objects.” How’s that for good news? Even better, it turns out that occasional vomiting is quite common for babies at two months old. So if Mira has been throwing up on you, that’s apparently nothing to worry about.
As for me, as a developer and new dad I’m feeling somewhat sentimental. So last week I decided to go back and take a look through the old family photo album that is the Internet. Much to my surprise, instead of cats playing the piano and women falling out of grape barrels, I actually found a slew of quite impressive videos. Turns out people have been using Mira to make some rather interesting content.
HeRunsHundreds = MIRA test drive
First, a little amuse-bouche. MrNedRush aka HeRunsHundreds offers a 4×4 drum pad built into a Max for Live device. He’s added some higher-level controls for subdividing into 1, 2, 4 or 1/2 bars (making maximally interesting patterns with minimal effort), as well as a timer bar above the buttons. He’s also added an orphaned dial off to the right, apparently connected to absolutely nothing, as a silent ode to French minimalism.
scratching in maxmsp and mira (featuring laser sounds)
Now MrNedRush gets serious. Forget all that warmup and drum pad nonsense–it’s time for some real music. It’s time, in other words, for laser sounds. There’s an awful lot of expressivity to be had here, for nothing more than a button and a slider. If there were some kind of award for most sound with the fewest objects, this man would be the clear winner. There is, of course, no such award.
HeRunsHundreds = The Knobulator in Mira
Don’t try to understand this interface. There are two giant knobs, that much is clear, but beyond that I’m at a loss. From what I can gather based on the accompanying text, the knob on the right is more of a meta-control than a control proper. Tweaking the rightmost knob rapidly jumps between different ways of shaping an audio effect. As for the knob on the left, the most we can say is that it’s labeled knobulator. So it controls knobulation, obviously, whatever the hell that is. In summation, as a logical exercise, this patch is absolutely impossible to understand. As a tactile exploration, however, it’s a glitch-groovy road trip and an absolute blast to play.
He Runs Hundreds = skinny hands wrists arms (live jam)
See–this is what I’m talking about. So often the debate around the iPad as an interface devolves into nothing more than touchscreen-bashing bloodsport. “Oh no no no,” the hardware elitists say, one hand on an APC 40, the other clutching (with extended pinkie finger) a champagne glass filled with Monster energy drink, “an iPad simply won’t do. A man must feel the knobs, he must enjoy the physicality of the slider.” And that’s fine, I can respect that. But no one ever said the iPad had to replace the hardware. Ebony and ivory, baby, why can’t we all work together? The knobs are good at being knobs, the iPad is good at being a display. As this excellent video demonstrates, the two complement and ennoble each other.
Reflections. The performer’s hand reflected on the immaculate surface of the iPad. The audio interface reflected on the desk’s polished surface. And, if you’ll excuse the painfully stretched metaphor, a certain reflection across time as well. SugarSynth, an updated version of Nobuyasu Sakonda’s original MSP Granular Synthesis patch, powers the audio. The original patch is, by technological standards, ancient, dating all the way back to the year 2000. Forget the iPad–this predates even the iPod, so to see Mira driving the new and improved patch seems like a fitting way to celebrate SugarSynth and to tie a neat ribbon around a little chunk of Max history.
MIRAnome64, a virtual monome for MIRA/iPad and Max6
Is anyone really surprised to see Julien Bayle’s name here? Outside of actual Cycling ’74 employees, the man may be the single most prolific Max contributor of all time. His work includes externals, Max for Live devices, articles, workshops and, just to cement his total dominion, a three-hundred page book. MIRAnome64, a virtual but fully functional Monome64, is his first project using Mira. The video is more of a demo than a performance but he’s nice enough to show us a bit of how it works. A very clever trick makes the magic possible: by using a mira.multitouch object in conjunction with an array of toggles, he’s able to track touches from toggle to toggle, allowing for sweeping gestures across the whole array. Nice.
Rungler—a chaotic approach to step sequencing
Thirty-seven seconds. Right when the overtones start to kick in, that’s when I know that I’m going to spend the remaining six minutes of this video in a state of ear-drugged catatonic ecstasy. The Rungler, as this video is called, is based on something called the Blippoo Box, which you can think of as similar to an analog step sequencer. There is one small difference: a step sequencer is something that you can understand and control, whereas the Blippoo Box is a living animus of fire and whim that inhabits the very bounds of human comprehension. The result, as I’m sure you will appreciate, is complex, chaotic and highly listenable.
MIRA controls x3 Machines (including Windows)
Come on, that’s pretty cool. One iPad, three machines?
Max6 and Mira performed on iPad
Finally, for a little dessert, Yasuhiro Otani demonstrates his own Mira patch. My Japanese is more than a little rusty, but from the website at eleclab.tumblr.com it looks like this patch was made as part of a workshop called the U::Gen Laboratorium. Said workshop has a mascot (apparently workshops need mascots) and that mascot is a girl holding a knife and fork. This, presumably, makes sense. Again, my Japanese is more than a little rusty. My Max, on the other hand, is quite strong, so instead of trying to figure out why this patch got made I’ll focus on how cool it sounds.
Check out this research project and exhibition that features the sonification of electrical activity from a colony of microbial fuel cells.
This past Friday Google released the source for two of its Chrome Web Lab projects that have been running at the Science Museum in London for the past year.
One of the projects, the Orchestra, makes use of Max along with a host of web technologies. For those interested in techniques for controlling Max patches via web sites, Google and user experience developers Tellart are generously providing a valuable resource. This is a great opportunity to peek behind the curtains of a Max project designed to run both online and in a high-traffic environment.
Thursday, August 29, 2013, 7-9PM at 450 Bryant, Suite 100, San Francisco, I’ll be presenting an introduction to programming in Max for Live for the Ableton User Group Meeting.
I’ll have about an hour to explain what Max is, show how it works in Live, and offer some tips on how to start building your own devices. Should be fun!
Our ever-popular Cycles series of sound libraries, produced by Ron MacLeod, is now available for download in our online store for a reduced price. Previously only available on DVD-ROM, the Cycles libraries are a unique collection of high-quality samples created with Cycling ’74 software.
Learn more and listen to excerpts from Cycles.
Tomorrow (20 July) at 4:30pm, Tom Hall and I will be presenting a 45-minute demo of Mira, the iPad controller app from Cycling ’74.
We will discuss the design ideas behind Mira, show it working in different contexts, and highlight Mira’s ability to facilitate collaboration. There will be a raffle for a free copy of Mira, too.
More details about the event can be found here:
Want to learn to create custom devices? Austin-based collective Bit Voltage is offering a new video course to help get you deeper into Max for Live. The course, developed by Nate Crepeault, introduces the basics of Max for Live from the ground up with a focus on using the Live API in a Max Device. The course comes with a Live set that includes all the course material.
For more info, visit the Bit Voltage site
Cost: $19.99 until July 14th, at which point the price goes up to $29.99
Here is a sample of the videos from this course
Today we released our first product in the app store, officially known as Mira Controller for Max (but hereafter referred to as Mira). It works on any iPad and we’re selling it for around $50. A couple of years ago, we contacted the main developer of Mira, Sam Tarakajian, on the basis of his creative tutorials, which I highly recommend you check out. We asked, “Dude837 — if that’s your real name/number — what would you want to do?”
Sam expressed interest in something mobile.
Mira is the result of a long conversation about the process of using mobile devices with Max. The result of this conversation is the simple idea that your iPad should just give you back your patch. (It’s yours after all.)
There’s no separate UI to build, no OSC messages to deal with, no networking to configure (OK, sorry, it’s networking, there will always be something to configure).
The initial release of Mira is aimed at people interested in creating new Max projects with mobile control. In addition to the iPad app, you’ll need to download a package of Max objects for our recently released Max 6.1.3. The process is simple: you arrange UI elements (sliders, buttons) on top of a new object called mira.frame that represents the screen of your device. Every mira.frame in every patch gets its own tab on the iPad. In the coming months, we’ll be adding more Max UI objects and optimizing Mira support for Max for Live users.
There’s a lot more going on here than I can explain in a few paragraphs. We’ve made a couple of videos for two perspectives on the Mira workflow. Internally we’ve been referring to these as the East Coast and West Coast videos. See if you can figure out which is which.
When most people get ready to make a presentation they turn to PowerPoint or Keynote. If they’re of the fixed-gear bicycle persuasion, maybe they reach for something like Prezi. As would become increasingly obvious over the course of his keynote address, Bill Verplank is not most people. The NIME community is known for enthusiastically embracing new technologies. For them, anything older than a Leap Motion is a relic from the Stone Age. Faced with the challenge of holding the attention of such a techno-ravenous group, Bill opted for a revolutionary piece of high-resolution, force feedback presentation hardware. You may have heard of it: it’s called the pen.
Bill’s talk, called Motors and Music, covers all the things that you would never expect to hear about during a discussion of computer music. Things like physicality. Things like emotive content and embodiment. Things like the ’80s. He starts by taking us on a tour of early experiments in computer music interaction. Bill narrates over videos of Max Mathews wiggling a string to trigger scanned synthesis and Perry Cook shaking a chain of bend sensors. It’s amazing to see just how advanced the interfaces were then, and how little has changed since. Sure, these days we might use a MacBook Air instead of a tape reel, but at the end of the day it’s almost as if we’ve lost more than we’ve gained. When I sit down to interact with a computer today, I’m lucky if I’m given a real keyboard as opposed to a simulation under glass. Watching Max Mathews and Perry Cook dancing in front of a wall of mainframes, creating music out of a chain, a string, a pen, and a coffee cup, it’s hard not to feel like we’ve lost something.
Bill now moves from archival footage to the present day. He introduces us to the Plank, one of many haptic toys crafted at the Copenhagen Institute of Interaction Design. Think of it as the atomic unit of force feedback interaction. Built out of a strip of wood and a re-purposed hard drive, it’s a piano key that pushes back.
He goes on to show us some of the projects his students have worked on using the Plank. You might think that there wouldn’t be much you could accomplish with one sensor and a single degree of freedom, but then you’d be wrong. Clearly, you’ve never played Angry Birds with a force-feedback slingshot, touched a quantum-entangled pendulum, or played Prosthetic Golf.
As Bill brings his presentation to a close, I fight the urge to rush the stage and take the Plank home for myself. I can’t remember the last time I got so excited about a piece of hardware. I didn’t feel this way when the iPad came out; instead, I remember sinking into a fog of disappointment as I realized computer interfaces were moving away from physical interaction, not towards it. We’ve lost a lot of ground since the days of Max Mathews. When the iPhone first appeared people expressed frustration at having to type by tapping on a piece of glass. It doesn’t feel real, they said. I miss having buttons I can touch, they said. Now, Siri and spellcheck have taught our fingers laziness, complacency.
So what happened to our nuanced, multimodal interaction paradigms? If you ask me, the same thing happened to interface design that happened to digital music: convenience beat quality. Given the choice between watching a poorly encoded YouTube clip and patiently downloading a high quality, DRM-laden audio file, most people would rather not wait. In the same way, people are more interested in checking their email 500 times a day from their smartphone than they are in having a two-hour jam session with a force feedback joystick. Researchers like Bill may push advances in interaction design, but it seems to me that makers of consumer electronics will always be more focused on portability and power consumption than on haptic feedback.
Of course, I can’t help but wonder what would happen if some company suddenly decided to throw their whole weight behind a new gestural controller. What would they come up with? I feel the first step would be to fortify the woefully impoverished language that we currently use to talk about gesture. Think about it: when it comes to sound we’re able to address all the nuances of spectrum, waveform, frequency domain, timbre, loudness, pitch, attack, envelope and decay, just to name a few. What language do we have for talking about gestures? Slow versus fast, maybe?
Before he leaves the stage, Bill offers us one last quote:
“Grab a hold of something and feel it push back at you and make music”
At this point one of the audience members, inspired by Bill’s august presence, asks a question about toilets.
Welcome to Daejeon
On my last day in Korea, I wouldn’t even have noticed. Nothing about staring at a heaping bowl of pickled cabbage first thing in the morning would bother me. After one week of eating kimchee three meals a day, every day, the part of me that needed two chocolate croissants and a double americano just to feel normal would finally have acclimated. On day one, however, I’m not quite there yet. After sputtering out a weak, black broth, the coffee machine advises me to “Have a nice time”. I’m giving it my best, but looking down at my bowl of rice with seaweed broth and trying to see oatmeal is more than I can manage. To any outside observer I’m sure I look like what I am: a coffee-starved software engineer very much out of his element.
“Excuse me,” I hear someone say, “you must be here for NIME.”
Getting to NIME from the Toyoko Inn requires a fifteen minute taxi ride, circling the government complex at the center of Daejeon and crossing the river into KAIST campus. Staring out the window on the way over, I’m not entirely sure what to make of my surroundings. Based on the gnashing juxtapositions all around me, I’d say the city of Daejeon seems to think it can shock me out of sleep deprivation with bewildering choices in urban planning. The humble government complex, for example, rises no more than three stories high in the center of a large public park, yet towering apartment blocks housing thousands flank the complex to the east and the west. I decide that the squat government building must be nothing more than a gateway, and that beneath the park extends a labyrinth sprawling hundreds of miles underground. Also, peering into the distance beyond two apartment buildings, I notice a strange, metallic spire. It looks like a spaceship from my vantage point, but that would be crazy, my mind must be playing tricks on me. Of course, as we get closer it looks more and more like a spaceship, until it turns out that’s exactly what it is. In an attempt to clash maximally with the drab apartment units on the south side of the river, northern Daejeon sports a giant amusement park.
I give up on trying to understand the city and opt for conversation instead. My first traveling companion is Simon Hutchinson, who when he complains about being fatigued and confused does so in a voice both energetic and lucid. He explains that he’s come to NIME to perform a piece called Shin no Shin, using an iPad to turn touch and acceleration into music. I’m tempted to spout off about Mira, but I decide that there will be plenty of time for that later at my poster session. We exchange a few notes about the iPad as a performance instrument. I wonder if Mira will be useful for musicians like Simon, or if the tools that already exist are good enough.
I’m also fortunate enough to ride with Adam and Liam of Alphasphere, who tell me about their spherical music making gadget of the same name. My description can’t do it justice, but you can think of the Alphasphere as an overgrown buckyball with aftertouch. For a more precise picture, imagine plastic rings arranged in a ball, with flexible, pressure sensitive fabric stretched over each one. You play the instrument by distorting the fabric, which the Alphasphere translates into MIDI and OSC data. As Adam describes the hardware I notice a strange tension in my fingers, my first taste of what I’m now calling NIME Complex Sigma. It’s a debilitating condition that I will encounter several times throughout the conference, characterized by acute mental anxiety and muscle twitching. The cause comes from listening to the description of a revolutionary new instrument; really, really wanting to play it and then not getting to play it.
NIME doesn’t officially start until the next day, but people like me who chose to show up early get to attend one of several workshops. I want to go to all six, but somehow the conference organizers expect me to pick just two. Of course, the whole question of which workshop to choose becomes moot when it turns out that none of us can find the building where we’re supposed to register. Each of the buildings on KAIST campus has a letter and number associated with it, which would in theory make finding a given building an easy task. However, at the center of campus the correlation between number, letter and proximity approaches zero–building E16 is right next to N4. Naturally, asking for directions is an exercise in futility, as what little Korean I know comes from watching Arrested Development. Eventually by walking in ever-widening circles we manage to find the right building. We’re a bit worried about showing up several minutes late, until we notice two crucial facts. Fact one: there is a giant mob of not-at-all-Korean looking people standing outside the building. Fact two: the man who is supposed to lead the first workshop is among them.
Sometimes people who make NIMEs forget to bring keys
When at last we manage to enter, the first thing I discover is that black coffee is not as easy to find as I would have hoped. Canned coffee drinks come easy, with vending machines at every corner offering sugary, undrinkable swill with names like Joy and Yes. But it seems I’m going to have to wait a bit longer to get a taste of something fresh roasted. Not having caffeine impairs my decision making process, which makes my second discovery all the more significant. As it turns out, two of the workshops are free, whereas the other four very much are not. So in the end, I opt for the NIME orientation workshop and for the one on making music with Web Audio.
KAIST poses an anatomical conundrum
Michael Lyons, a NIME veteran and researcher in musical interaction, leads the first workshop. His presentation does a great job of filling in the gaps in my knowledge on NIME-related topics, subjects like primary versus secondary feedback (secondary feedback is the sound an instrument makes, primary feedback is everything else it does). He also provides a thought-provoking overview of why people make NIMEs in the first place, which I find particularly interesting. Beyond techno-fetishism and fascination with the human-machine relationship, he posits that the #1 reason that people are interested in building new instruments is because of an insistence on cultural fluidity. People want new ways to make sound because they want their own tools–they don’t just accept what’s given to them. No wonder so many NIME builders use Max.
As Michael brings the presentation to a close, my mind is humming with new ideas to take back to the Cycling ‘74 think tank:
- Mapping (between input gestures and sound output) is the heart of NIME, and indeed of instrument building in general.
- MIDI is plug-and-play; OSC isn’t, because there’s no standard
- Programmability is a curse, and it’s important to have long-term versions of things
- Primary feedback (lights, vibrations) is critical for intimacy
- Music is becoming increasingly process oriented as opposed to artifact oriented. People who are not virtuosos are willing to go out in public and make music, and are eager to find a forum to do so.
Important links from that talk include:
After the workshop, we stumble back outside on legs made weak from six hours of sitting. Since I’ve gone over ten minutes without complaining, I seize on the opportunity to mope about the cloudy weather. No one around me seems to pay any attention, perhaps because they wisely understand that overcast and humid is probably an overture for rainy and chilly. For now, we take advantage of the warm weather to restore circulation to our feet and converse about all things NIME. As we chat, we’re treated to the first appearance of the chair of KAIST 2013, Woon Seung Yeo, better known as Woony. I had been told at some point (by someone very foolish) that Koreans have a cultural lacuna when it comes to sarcasm. I suspect that Woony knew this and made it his personal mission to wipe away my misconception. “I encourage you to visit the famous KAIST goose crossing, especially since I know NIME participants are all great lovers of animals,” he says. “Not in that way,” he adds. Woony’s dry and biting wit would only desiccate in the days to come.
The nicest day of the whole conference
Drawing his short opening remarks to a close, Woony directs our attention to the area behind us, where a seemingly infinite quantity of food seems to have materialized out of nowhere. “And now, enjoy the banquet,” he says. “And of course the free beer.”
Well, there you have it. Free beer and unlimited food. No points for guessing how long it took me to fall asleep after that one.
Recently, I went to the NIME (New Interfaces for Musical Expression) conference at KAIST in Daejeon, South Korea. Over the course of five days, I attended workshops in Web Audio, absorbed paper presentations on digital laughter, and watched what could only be described as a pneumatic zombie duet. I also attended not one but three banquets. For those interested in the gaps between banquets, I offer this story.
I step off the plane. Location: Incheon. Body: Exhausted. Mind: Blank. Between the 12-hour flight, the 15-hour time difference, and repeated exposure to the in-flight movie, A Werewolf Boy, I can already feel my grip on reality starting to slip away. I make my way through the airport, down to baggage claim and onto the express train for Seoul. As far as I can tell, the train was constructed in the year 2040 and brought back in time to the present day. The oleophobic seats conform exactly to every contour of my exhausted body. A flatscreen television unfolds from the ceiling above, presenting a promotional ad for a nearby civic development project. BUILDING, it promises, in blaring, positivist capitals. CIVIL. PLANT. HOUSING. Depictions of enormous glass and steel buildings, assembled by swarms of tiny robots, rise before me. Outside my window, we pass row upon row of small-scale farms, sometimes running all the way up to the train tracks. Eventually the train comes to a small bridge connecting Incheon to the mainland. Rising up out of the water I can see huge mounds of dirt and grass, looking like the backs of giant turtles lumbering towards Seoul. I am very sleepy. I decide that they probably are turtles, and I write the following poem:
POEM FOR THE TRAIN TO SEOUL
The fog helps me see the tortoises
Grinding out low channels
And the speculative egrets on long stalks
The tortoises are my cold cows
Ruminating on the countryside
And other fictions
They roar silently
Like old men, or magma
Train tracks are humming
The sound of soft gray wool
And my eyes are as heavy as the tortoises
I decide that this poem is very good, then I fall asleep. When I wake up, we’ve arrived in Seoul, where I must have boarded another train for Daejeon, though I honestly can’t remember. Neither do I remember arriving in Daejeon, finding my hotel, or making my way up to my room. Probably all these things happened, but whether they happened to me or to someone who looks a lot like me I will never know. In the morning a straight line connects my backpack to my suitcase, to a pair of shoes, to where I fell asleep, face down on a still-made bed.
→ NIME, Day 1
A few months ago we made the decision to trim down our office size and send some physical merchandise like shirts, audio libraries, and music releases to Amazon for fulfillment. We plan to add some fun new products in the future, too.
For those of you who love [have] Amazon Prime, you know what this means. There is nothing stopping you… sort of.
YouTube user Naoto Fushimi has been steadily posting some great videos demonstrating advanced, audio-reactive Jitter / GL techniques.
Follow here if you like seeing pixels move!
The good folks over at VDMX just posted an excellent video tutorial series, detailing the steps necessary to create communication links between Max and VDMX. In the video, a texture generated in VDMX is sent to Jitter, via the Syphon plugin, analyzed with jit.3m, processed with jit.gl.pix, and sent back to VDMX.
Very cool to see these two apps playing so nicely together!
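For readers unfamiliar with the analysis step in that pipeline: jit.3m reports the minimum, mean, and maximum cell values of an incoming matrix, which is a cheap way to get a control signal out of a video texture. A rough sketch of that computation, modeling one grayscale frame as a flat list of 0–255 values (Jitter does this per plane on the matrix data):

```python
def three_m(cells):
    """Return (min, mean, max) of a list of cell values --
    the same three statistics jit.3m reports for a matrix plane."""
    return min(cells), sum(cells) / len(cells), max(cells)

# A tiny stand-in for one grayscale frame's pixel data.
frame = [0, 64, 128, 192, 255]
lo, mean, hi = three_m(frame)  # lo=0, mean=127.8, hi=255
```

In the VDMX tutorial, numbers like these (driven by the incoming Syphon texture) are what end up modulating the jit.gl.pix processing.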
Using only a single stereo S/PDIF output from your audio interface, you can access up to five ES-4 gate expander modules – each of which supports eight gate outputs. That's 40 outputs! All this flexibility is easily accessed with Expert Sleepers' new native Max es4encoder~ object. It couldn't be simpler.
The five 8-bit outputs can be used in a number of different ways. Instead of eight gates, an output can carry a single 8-bit value such as pitch CV or velocity.
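The gate mode described above comes down to bit-packing: each of the five outputs carries one byte, and each bit of that byte drives one gate on the expander. A minimal sketch of that mapping (this is the concept, not es4encoder~'s actual DSP implementation):

```python
def pack_gates(gates):
    """Pack eight on/off gate states into one 8-bit value.

    Bit i of the result corresponds to gate output i on the
    expander; five such bytes give the 40 gates mentioned above.
    """
    assert len(gates) == 8, "one byte carries exactly eight gates"
    value = 0
    for i, g in enumerate(gates):
        if g:
            value |= 1 << i
    return value

# Gates 0 and 7 high, everything else low.
pack_gates([1, 0, 0, 0, 0, 0, 0, 1])  # -> 129 (binary 10000001)
```

Sending a full 8-bit value on an output instead (the pitch CV / velocity case) is just the same byte interpreted as one number rather than eight flags.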