I just received an email from our friend Lippold Haken. Here’s a video of his amazingly expressive musical controller, with software written entirely in Max.

You can find out more about this incredible musical instrument on Lippold’s website.

Two months ago, Mira was born. Of course, that doesn’t mean that development has stopped–far from it. Since the moment it came into the world, Mira has continued growing steadily. At this point you might well be wondering just how little Mira is coming along. Well, according to babycenter.com, at two months of development “your baby will begin to move beyond his early preferences for bright or two-toned objects toward more detailed and complicated designs, colors, and shapes. Show your baby — and let him touch — a wider variety of objects.” How’s that for good news? Even better, it turns out that occasional vomiting is quite common for babies at two months old. So if Mira has been throwing up on you, that’s apparently nothing to worry about.

As for me, as a developer and new dad I’m feeling somewhat sentimental. So last week I decided to go back and take a look through the old family photo album that is the Internet. Much to my surprise, instead of cats playing the piano and women falling out of grape barrels, I actually found a slew of quite impressive videos. Turns out people have been using Mira to make some rather interesting content.

HeRunsHundreds = MIRA test drive

MrNedRush

First, a little amuse-bouche. MrNedRush aka HeRunsHundreds offers a 4×4 drum pad built into a Max for Live device. He’s added some higher-level controls for subdividing into 1, 2, 4 or 1/2 bars (making maximally interesting patterns with minimal effort), as well as a timer bar above the buttons. He’s also added an orphaned dial off to the right, apparently connected to absolutely nothing, as a silent ode to French minimalism.

scratching in maxmsp and mira (featuring laser sounds)

MrNedRush

Now MrNedRush gets serious. Forget all that warmup and drum pad nonsense–it’s time for some real music. It’s time, in other words, for laser sounds. There’s an awful lot of expressivity to be had here, for nothing more than a button and a slider. If there were some kind of award for most sound with the fewest objects, this man would be the clear winner. There is, of course, no such award.

HeRunsHundreds = The Knobulator in Mira

MrNedRush

Don’t try to understand this interface. There are two giant knobs, that much is clear, but beyond that I’m at a loss. From what I can gather based on the accompanying text, the knob on the right is more of a meta-control than a control proper. Tweaking the rightmost knob rapidly jumps between different ways of shaping an audio effect. As for the knob on the left, the most we can say is that it’s labeled knobulator. So it controls knobulation, obviously, whatever the hell that is. In summation, as a logical exercise, this patch is absolutely impossible to understand. As a tactile exploration, however, it’s a glitch-groovy road trip and an absolute blast to play.

He Runs Hundreds = skinny hands wrists arms (live jam)

MrNedRush

See–this is what I’m talking about. So often the debate around the iPad as an interface devolves into nothing more than touchscreen-bashing bloodsport. “Oh no no no,” the hardware elitists say, one hand on an APC40, the other clutching (with extended pinkie finger) a champagne glass filled with Monster energy drink, “an iPad simply won’t do. A man must feel the knobs, he must enjoy the physicality of the slider.” And that’s fine, I can respect that. But no one ever said the iPad had to replace the hardware. Ebony and ivory, baby, why can’t we all work together? The knobs are good at being knobs, the iPad is good at being a display. As this excellent video demonstrates, the two complement and ennoble each other.

MIRA

Giovanni Di Stefano

Reflections. The performer’s hand reflected on the immaculate surface of the iPad. The audio interface reflected on the desk’s polished surface. And, if you’ll excuse the painfully stretched metaphor, a certain reflection across time as well. SugarSynth, an updated version of Nobuyasu Sakonda’s original MSP granular synthesis patch, powers the audio. The original patch is, by technological standards, ancient, dating all the way back to the year 2000. Forget the iPad–this predates even the iPod, so seeing Mira drive the new and improved patch seems like a fitting way to celebrate SugarSynth and to tie a neat ribbon around a little chunk of Max history.

MIRAnome64, a virtual monome for MIRA/iPad and Max6

Julien Bayle

Is anyone really surprised to see Julien Bayle’s name here? Outside of actual Cycling ’74 employees, the man may be the single most prolific Max contributor of all time. His work includes externals, Max for Live devices, articles, workshops and, just to cement his total dominion, a three-hundred-page book. MIRAnome64, a virtual but fully functional Monome64, is his first project using Mira. The video is more of a demo than a performance, but he’s nice enough to show us a bit of how it works. A very clever trick makes the magic possible: by using a mira.multitouch object in conjunction with an array of toggles, he’s able to track touches from toggle to toggle, allowing for sweeping gestures across the whole array. Nice.

Rungler—a chaotic approach to step sequencing

Johan van Kreij

Thirty-seven seconds. Right when the overtones start to kick in, that’s when I know that I’m going to spend the remaining six minutes of this video in a state of ear-drugged catatonic ecstasy. The Rungler, as this video is called, is based on something called the Blippoo Box, which you can think of as similar to an analog step sequencer. There is one small difference: a step sequencer is something that you can understand and control, whereas the Blippoo Box is a living animus of fire and whim that inhabits the very bounds of human comprehension. The result, as I’m sure you will appreciate, is complex, chaotic and highly listenable.

MIRA controls x3 Machines (including Windows)

Tom Hall

Come on, that’s pretty cool. One iPad, three machines?

Performing on iPad with Max6 and Mira

Otani Yasu

Finally, for a little dessert, Yasuhiro Otani demonstrates his own Mira + Max patch. My Japanese is more than a little rusty, but from the website at eleclab.tumblr.com it looks like this patch was made as part of a workshop called the U::Gen Laboratorium. Said workshop has a mascot (apparently workshops need mascots), and that mascot is a girl holding a knife and fork. This, presumably, makes sense. Again, my Japanese is more than a little rusty. My Max, on the other hand, is quite strong, so instead of trying to figure out why this patch got made I’ll focus on how cool it sounds.

Check out this research project and exhibition that features the sonification of electrical activity from a colony of microbial fuel cells.

This past Friday Google released the source for two of its Chrome Web Lab projects that have been running at the Science Museum in London for the past year.

One of the projects, the Orchestra, makes use of Max along with a host of web technologies. For those interested in techniques for controlling Max patches via web sites, Google and user experience developers Tellart are generously providing a valuable resource. This is a great opportunity to peek behind the curtains of a Max project designed to run both online and in a high-traffic environment.

On Thursday, August 29, 2013, from 7-9PM at 450 Bryant, Suite 100, San Francisco, I’ll be presenting an introduction to programming in Max for Live for the Ableton User Group Meeting.

Here’s the Facebook page for the event.

I’ll have about an hour to explain what Max is, show how it works in Live, and offer some tips on how to start building your own devices. Should be fun!

Our ever-popular Cycles series of sound libraries, produced by Ron MacLeod, is now available for download in our online store for a reduced price. Previously only available on DVD-ROM, the Cycles libraries are a unique collection of high-quality samples created with Cycling ’74 software.

Learn more and listen to excerpts from Cycles.

Cycles Available for Download

Tomorrow (20 July) at 4:30pm, Tom Hall and I will be presenting a 45-minute demo of Mira, the iPad controller app from Cycling ’74.

We will be discussing the design ideas behind Mira, showing Mira working in different contexts, and highlighting the ability of Mira to facilitate collaboration. There will be a raffle for a free copy of Mira, too.

More details about the event can be found here:

http://www.controllerconvention.org/

Want to learn to create custom devices? Austin-based collective Bit Voltage is offering a new video course to help get you deeper into Max for Live. The course, developed by Nate Crepeault, introduces the basics of Max for Live from the ground up with a focus on using the Live API in a Max Device. The course comes with a Live set that includes all the course material.

For more info, visit the Bit Voltage site.

Cost: $19.99 until July 14th, at which point the price goes up to $29.99

Here is a sample of the videos from this course.

Today we released our first product in the app store, officially known as Mira Controller for Max (but hereafter referred to as Mira). It works on any iPad and we’re selling it for around $50. A couple of years ago, we contacted the main developer of Mira, Sam Tarakajian, on the basis of his creative tutorials, which I highly recommend you check out. We asked, “Dude837 — if that’s your real name/number — what would you want to do?”

Sam expressed interest in something mobile.

Mira is the result of a long conversation about the process of using mobile devices with Max. The result of this conversation is the simple idea that your iPad should just give you back your patch. (It’s yours after all.)

There’s no separate UI to build, no OSC messages to deal with, no networking to configure (OK, sorry, it’s networking, there will always be something to configure).

The initial release of Mira is aimed at people interested in creating new Max projects with mobile control. In addition to the iPad app, you’ll need to download a package of Max objects for our recently released Max 6.1.3. The process is simple: you arrange UI elements (sliders, buttons) on top of a new object called mira.frame that represents the screen of your device. Every mira.frame in every patch gets its own tab on the iPad. In the coming months, we’ll be adding more Max UI objects and optimizing Mira support for Max for Live users.

There’s a lot more going on here than I can explain in a few paragraphs. We’ve made a couple of videos offering two perspectives on the Mira workflow. Internally we’ve been referring to these as the East Coast and West Coast videos. See if you can figure out which is which.

To learn more about Mira, head on over to the app store or check out our Mira product page.

NIME, Day 2


Bill Verplank takes the stage

When most people get ready to make a presentation they turn to Powerpoint or Keynote. If they’re of the fixed-gear bicycle persuasion, maybe they reach for something like Prezi. As would become increasingly obvious over the course of his keynote address, Bill Verplank is not most people. The NIME community is known for enthusiastically embracing new technologies. For them, anything older than a Leap Motion is a relic from the Stone Age. Faced with the challenge of holding the attention of such a techno-ravenous group, Bill opted for a revolutionary piece of high-resolution, force feedback presentation hardware. You may have heard of it: it’s called the pen.


Motors and Music

Bill’s talk, called Motors and Music, covers all the things that you would never expect to hear about during a discussion of computer music. Things like physicality. Things like emotive content and embodiment. Things like the ’80s. He starts by taking us on a tour of early experiments in computer music interaction. Bill narrates over videos of Max Mathews wiggling a string to trigger scanned synthesis and Perry Cook shaking a chain of bend sensors. It’s amazing to see just how advanced the interfaces were then, and how little has changed since. Sure, these days we might use a MacBook Air instead of a tape reel, but at the end of the day it’s almost as if we’ve lost more than we’ve gained. When I sit down to interact with a computer today, I’m lucky if I’m given a real keyboard as opposed to a simulation under glass. Watching Max Mathews and Perry Cook dancing in front of a wall of mainframes, creating music out of a chain, a string, a pen, and a coffee cup, it’s hard not to feel like we’ve lost something.

Bill now moves from archival footage to the present day. He introduces us to the Plank, one of many haptic toys crafted at the Copenhagen Institute of Interaction Design. Think of it as the atomic unit of force feedback interaction. Built out of a strip of wood and a re-purposed hard drive, it’s a piano key that pushes back.


El Planko

He goes on to show us some of the projects his students have worked on using the Plank. You might think that there wouldn’t be much you could accomplish with one sensor and a single degree of freedom, but then you’d be wrong. Clearly, you’ve never played Angry Birds with a force-feedback slingshot, touched a quantum-entangled pendulum, or played Prosthetic Golf.

http://ciid.dk/education/portfolio/idp12/courses/motors-music/projects/

As Bill brings his presentation to a close, I fight the urge to rush the stage and take the Plank home for myself. I can’t remember the last time I got so excited about a piece of hardware. I didn’t feel this way when the iPad came out; instead, I remember sinking into a fog of disappointment as I realized computer interfaces were moving away from physical interaction, not towards it. We’ve lost a lot of ground since the days of Max Mathews. When the iPhone first appeared people expressed frustration at having to type by tapping on a piece of glass. It doesn’t feel real, they said. I miss having buttons I can touch, they said. Now, Siri and spellcheck have taught our fingers laziness, complacency.

So what happened to our nuanced, multimodal interaction paradigms? If you ask me, the same thing happened to interface design that happened to digital music: convenience beat quality. Given the choice between watching a poorly encoded YouTube clip and patiently downloading a high quality, DRM-laden audio file, most people would rather not wait. In the same way, people are more interested in checking their email 500 times a day from their smartphone than they are in having a two-hour jam session with a force feedback joystick. Researchers like Bill may push advances in interaction design, but it seems to me that makers of consumer electronics will always be more focused on portability and power consumption than on haptic feedback.

Of course, I can’t help but wonder what would happen if some company suddenly decided to throw its whole weight behind a new gestural controller. What would they come up with? I feel the first step would be to fortify the woefully impoverished language that we currently use to talk about gesture. Think about it: when it comes to sound we’re able to address all the nuances of spectrum, waveform, frequency domain, timbre, loudness, pitch, attack, envelope and decay, just to name a few. What language do we have for talking about gestures? Slow versus fast, maybe?

Before he leaves the stage, Bill offers us one last quote:

“Grab a hold of something and feel it push back at you and make music”

At this point one of the audience members, inspired by Bill’s august presence, asks a question about toilets.

New Interfaces for Musical Excretion

–> NIME, Day 0
–> NIME, Day 1

NIME, Day 1

Welcome to Daejeon

On my last day in Korea, I wouldn’t even have noticed. Nothing about staring at a heaping bowl of pickled cabbage first thing in the morning would bother me. After one week of eating kimchee three meals a day, every day, the part of me that needed two chocolate croissants and a double americano just to feel normal would finally have acclimated. On day one, however, I’m not quite there yet. After sputtering out a weak, black broth, the coffee machine advises me to “Have a nice time”. I’m giving it my best, but looking down at my bowl of rice with seaweed broth and trying to see oatmeal is more than I can manage. To any outside observer I’m sure I look like what I am: a coffee-starved software engineer very much out of his element.

“Excuse me,” I hear someone say, “you must be here for NIME.”

Getting to NIME from the Toyoko Inn requires a fifteen minute taxi ride, circling the government complex at the center of Daejeon and crossing the river into KAIST campus. Staring out the window on the way over, I’m not entirely sure what to make of my surroundings. Based on the gnashing juxtapositions all around me, I’d say the city of Daejeon seems to think it can shock me out of sleep deprivation with bewildering choices in urban planning. The humble government complex, for example, rises no more than three stories high in the center of a large public park, yet towering apartment blocks housing thousands flank the complex to the east and the west. I decide that the squat government building must be nothing more than a gateway, and that beneath the park extends a labyrinth sprawling hundreds of miles underground. Also, peering into the distance beyond two apartment buildings, I notice a strange, metallic spire. It looks like a spaceship from my vantage point, but that would be crazy, my mind must be playing tricks on me. Of course, as we get closer it looks more and more like a spaceship, until it turns out that’s exactly what it is. In an attempt to clash maximally with the drab apartment units on the south side of the river, northern Daejeon sports a giant amusement park.

I give up on trying to understand the city and opt for conversation instead. My first traveling companion is Simon Hutchinson, who when he complains about being fatigued and confused does so in a voice both energetic and lucid. He explains that he’s come to NIME to perform a piece called Shin no Shin, using an iPad to turn touch and acceleration into music. I’m tempted to spout off about Mira, but I decide that there will be plenty of time for that later at my poster session. We exchange a few notes about the iPad as a performance instrument. I wonder if Mira will be useful for musicians like Simon, or if the tools that already exist are good enough.

I’m also fortunate enough to ride with Adam and Liam of Alphasphere, who tell me about their spherical music-making gadget of the same name. My description can’t do it justice, but you can think of the Alphasphere as an overgrown buckyball with aftertouch. For a more precise picture, imagine plastic rings arranged in a ball, with flexible, pressure-sensitive fabric stretched over each one. You play the instrument by distorting the fabric, which the Alphasphere translates into MIDI and OSC data. As Adam describes the hardware I notice a strange tension in my fingers, my first taste of what I’m now calling NIME Complex Sigma. It’s a debilitating condition that I will encounter several times throughout the conference, characterized by acute mental anxiety and muscle twitching. The cause: listening to the description of a revolutionary new instrument, really, really wanting to play it, and then not getting to play it.

NIME doesn’t officially start until the next day, but people like me who chose to show up early get to attend one of several workshops. I want to go to all six, but somehow the conference organizers expect me to pick just two. Of course, the whole question of which workshop to choose becomes moot when it turns out that none of us can find the building where we’re supposed to register. Each of the buildings on KAIST campus has a letter and number associated with it, which would in theory make finding a given building an easy task. However, at the center of campus the correlation between number, letter and proximity approaches zero–building E16 is right next to N4. Naturally, asking for directions is an exercise in futility, as what little Korean I know comes from watching Arrested Development. Eventually by walking in ever-widening circles we manage to find the right building. We’re a bit worried about showing up several minutes late, until we notice two crucial facts. Fact one: there is a giant mob of not-at-all-Korean looking people standing outside the building. Fact two: the man who is supposed to lead the first workshop is among them.

Sometimes people who make NIMEs forget to bring keys

When at last we manage to enter, the first thing I discover is that black coffee is not as easy to find as I would have hoped. Canned coffee drinks come easy, with vending machines at every corner offering sugary, undrinkable swill with names like Joy and Yes. But it seems I’m going to have to wait a bit longer to get a taste of something fresh roasted. Not having caffeine impairs my decision making process, which makes my second discovery all the more significant. As it turns out, two of the workshops are free, whereas the other four very much are not. So in the end, I opt for the NIME orientation workshop and for the one on making music with Web Audio.

KAIST poses an anatomical conundrum

Michael Lyons, a NIME veteran and researcher in musical interaction, leads the first workshop. His presentation does a great job of filling in the gaps in my knowledge of NIME-related topics, subjects like primary versus secondary feedback (secondary feedback is the sound an instrument makes, primary feedback is everything else it does). He also provides a thought-provoking overview of why people make NIMEs in the first place, which I find particularly interesting. Beyond techno-fetishism and fascination with the human-machine relationship, he posits that the #1 reason people are interested in building new instruments is an insistence on cultural fluidity. People want new ways to make sound because they want their own tools–they don’t just accept what’s given to them. No wonder so many NIME builders use Max.

As Michael brings the presentation to a close, my mind is humming with new ideas to take back to the Cycling ‘74 think tank:

  • Mapping (between input gestures and sound output) is the heart of NIME, and indeed of instrument building in general.
  • MIDI is plug and play, OSC isn’t because there’s no standard
  • Programmability is a curse, and it’s important to have long-term versions of things
  • Primary feedback (lights, vibrations) is critical for intimacy
  • Music is becoming increasingly process oriented as opposed to artifact oriented. People who are not virtuosos are willing to go out in public and make music, and are eager to find a forum to do so.

My second workshop is with Charlie Roberts, the furiously talented man behind the Control app for iPad and iPhone that was in many ways the inspiration for Mira. This workshop was advertised as an introduction to the Web Audio API, so I’m imagining that we’re going to spend the afternoon talking about the complexities of working with audio in a high-level language like Javascript, and the challenges of getting audio to run in the browser on multiple platforms. As it turns out, Charlie has basically solved all those problems already, and so instead he takes us on a three-hour tour of Gibber. Gibber is a SuperCollider-like wrapper around Web Audio that lets you build sample-accurate sequencers and synthesizers in Javascript. Oh, did I mention it’s runtime re-configurable? Anyway, it nearly melts my brain to think about how Charlie’s work could fit together with Max. Imagine a Max-like program running in the browser, with something like Gibber providing the backend to a patchable interface. Imagine using that interface to build and deploy interactive audio to the web. Or, switch your brain with me to Totally Unwarranted Speculation mode and imagine being able to turn any webpage into a programmable patch. It’s a bit of a pipe dream, to be sure, but why bother coming to NIME if you aren’t going to entertain impossible ideas?
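To give a flavor of what Gibber abstracts away, here is a minimal sketch against the raw Web Audio API (deliberately not Gibber’s own syntax, just the plumbing it wraps). Scheduling events against the audio clock, rather than with setTimeout, is what makes Web Audio timing sample-accurate:

    // A scheduled arpeggio in raw Web Audio. Gibber hides this
    // kind of plumbing behind much friendlier syntax.
    const ctx = new AudioContext();

    // Play one note at an exact time on the audio clock.
    function playNote(freq, when, dur) {
      const osc = ctx.createOscillator();
      const amp = ctx.createGain();
      osc.frequency.value = freq;
      amp.gain.setValueAtTime(0.2, when);
      amp.gain.exponentialRampToValueAtTime(0.001, when + dur); // quick fade out
      osc.connect(amp);
      amp.connect(ctx.destination);
      osc.start(when);
      osc.stop(when + dur);
    }

    // Queue four notes ahead of time against the audio clock.
    const start = ctx.currentTime + 0.1;
    [220, 330, 440, 660].forEach((freq, i) => {
      playNote(freq, start + i * 0.25, 0.2);
    });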

After the workshop, we stumble back outside on legs made weak from six hours of sitting. Since I’ve gone over ten minutes without complaining, I seize the opportunity to mope about the cloudy weather. No one around me seems to pay any attention, perhaps because they wisely understand that overcast and humid is probably an overture for rainy and chilly. For now, we take advantage of the warm weather to restore circulation to our feet and converse about all things NIME. As we chat, we’re treated to the first appearance of the chair of NIME 2013, Woon Seung Yeo, better known as Woony. I had been told at some point (by someone very foolish) that Koreans have a cultural lacuna when it comes to sarcasm. I suspect that Woony knew this and made it his personal mission to wipe away my misconception. “I encourage you to visit the famous KAIST goose crossing, especially since I know NIME participants are all great lovers of animals,” he says. “Not in that way,” he adds. Woony’s dry and biting wit would only desiccate in the days to come.

The nicest day of the whole conference

Drawing his short opening remarks to a close, Woony directs our attention to the area behind us, where a seemingly infinite quantity of food seems to have materialized out of nowhere. “And now, enjoy the banquet,” he says. “And of course the free beer.”

Well, there you have it. Free beer and unlimited food. No points for guessing how long it took me to fall asleep after that one.

–> NIME, Day 0
–> NIME, Day 2

NIME, Day 0

Recently, I went to the NIME (New Interfaces for Musical Expression) conference at KAIST in Daejeon, South Korea. Over the course of five days, I attended workshops in Web Audio, absorbed paper presentations on digital laughter and watched what could only be described as a pneumatic zombie duet. I also attended not one but three banquets. For those interested in the gaps between banquets, I offer this story.

I step off the plane. Location: Incheon. Body: Exhausted. Mind: Blank. Between the 12-hour flight, the 15-hour time difference and repeated exposure to the in-flight movie, A Werewolf Boy, I can already feel my grip on reality starting to slip away. I make my way through the airport, down to baggage claim and onto the express train for Seoul. As far as I can tell the train was constructed in the year 2040 and brought back in time to the present day. The oleophobic seats conform exactly to every contour of my exhausted body. A flatscreen television unfolds from the ceiling above, presenting a promotional ad for a nearby civic development project. BUILDING, it promises, in blaring, positivist capitals. CIVIL. PLANT. HOUSING. Depictions of enormous glass and steel buildings, assembled by swarms of tiny robots, rise before me. Outside my window, we pass row upon row of small-scale farms, sometimes running all the way up to the train tracks. Eventually the train comes to a small bridge connecting Incheon to the mainland. Rising up out of the water I can see huge mounds of dirt and grass, looking like the backs of giant turtles lumbering towards Seoul. I am very sleepy. I decide that they probably are turtles, and I write the following poem:

POEM FOR THE TRAIN TO SEOUL

The fog helps me see the tortoises
Grinding out low channels
And the speculative egrets on long stalks

The tortoises are my cold cows
Ruminating on the countryside
And other fictions
They roar silently
Like old men, or magma

Train tracks are humming
The sound of soft gray wool
And my eyes are as heavy as the tortoises

I decide that this poem is very good, then I fall asleep. When I wake up, we’ve arrived in Seoul, where I must have boarded another train for Daejeon, though I honestly can’t remember. Neither do I remember arriving in Daejeon, finding my hotel, or making my way up to my room. Probably all these things happened, but whether they happened to me or to someone who looks a lot like me I will never know. In the morning a straight line connects my backpack to my suitcase, to a pair of shoes, to where I fell asleep, face down on a still-made bed.

–> NIME, Day 1

A few months ago we made the decision to trim down our office size and send some physical merchandise like shirts, audio libraries, and music releases to Amazon for fulfillment. We plan to add some new, fun products in the future, too.

For those of you who love [have] Amazon Prime, you know what this means. There is nothing stopping you… sort of.

Mira. It’s coming. In fact, if you find yourself in Daejeon on Wednesday the 29th, then you’ll be able to play with it in person. I’ll be demoing Mira as part of NIME, the 13th international conference for New Interfaces for Musical Expression. Hope to see you there!

YouTube user Naoto Fushimi has been steadily posting some great videos demonstrating advanced, audio-reactive Jitter / GL techniques.

Follow here if you like seeing pixels move!

Max and VDMX

The good folks over at VDMX just posted an excellent video tutorial series, detailing the steps necessary to create communication links between Max and VDMX. In the video, a texture generated in VDMX is sent to Jitter, via the Syphon plugin, analyzed with jit.3m, processed with jit.gl.pix, and sent back to VDMX.

Very cool to see these two apps playing so nicely together!

Using only a single stereo S/PDIF output from your audio interface, you can access up to five ES-4 gate expander modules – each of which supports eight gate outputs. That’s 40 outputs! All this flexibility is easily accessed with Expert Sleepers’ new native Max es4encoder~ object. It couldn’t be simpler.

The five eight-bit outputs can be used in a number of different ways. Instead of eight gates, an output can carry a single 8-bit value such as a pitch CV or a velocity.
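To make the byte-level idea concrete, each expander’s eight gates map onto the eight bits of one value. Here’s a rough illustrative sketch (a hypothetical helper, not the internals of es4encoder~, which performs the actual signal-level S/PDIF encoding for you):

    // Illustrative only: pack eight on/off gate states into one 8-bit value.
    // The real es4encoder~ object handles the actual S/PDIF encoding.
    function packGates(gates) { // gates: array of 8 booleans
      return gates.reduce((byte, on, i) => (on ? byte | (1 << i) : byte), 0);
    }

    // Gates 0, 2 and 7 high -> binary 10000101 -> 133
    console.log(packGates([true, false, true, false,
                           false, false, false, true]));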

Max user Kevin Holland and his partner just launched a new music-making project called Luminth that you should check out.

Eric Lyon, author of the book Designing Audio Objects for Max/MSP and Pd, will be teaching a workshop on designing your own audio objects at Harvestworks in New York City on Saturday, May 18. For more information or to sign up, visit the Harvestworks web site.


The new (beta) Code Export feature of Gen has only been around for about a month, and is still sparsely documented, but that didn’t stop Varun Nair at the Designing Sound blog from digging in and trying it out. The tutorial goes through the process of creating and exporting a tremolo effect with Gen and then building the code into an Audio Unit plugin. It’s great to see such a clear and well-written tutorial.

Varun also gives a nice and simple overview of getting started with the Gen environment in Max. We look forward to seeing more experiments in this area, and are really excited about what people will do with Code Export. Have any experiences to share? Let us know in the comments.

Next week, a special event will be happening in Brooklyn at Roulette. Toni Dove’s “Lucid Possession” premieres April 25, 26, and 27. Those lucky enough to be in the vicinity will have the opportunity to experience this unique stage production — a “contemporary ghost story” featuring robotics, gorgeous costumes, and stunning voices and music. There are many talented Max users involved, including Todd Reynolds, Luke DuBois, and Elliott Sharp. They, Toni, and all the other artists and crew will make it a memorable experience. Don’t miss it!

Photo: Alison Hall

At the Code Control Festival in Leicester, England this past weekend we gave attendees an advance peek at some of our mobile projects. Sam Tarakajian, our principal mobile developer, showed a new iPad app, the Mira controller for Max, that makes your patch “touchable” with as close to zero configuration as possible. Mira presents a large set of Max user interface elements on the iPad exactly as they appear in your patch. It also provides access to multitouch and accelerometer data. We’ll be revealing more of this powerful addition to the Max universe as we prepare it for release in the app store later this spring.

As a possible companion to Mira, I revealed a new “hardware” project dubbed the MiraBox — in reality, nothing more than an 8 x 10 wooden picture frame stuffed with foam — that helps capture accelerometer and gyro data from the iPad. The software component of the project was prototyped entirely with Mira and Max 6. Like many others we’re interested in extracting higher-level gestures from accelerometer sensors, but in particular, we’re interested in tracking data when you touch your patch.

Matthew Davidson, the developer of the new Mono Sequencer device, gives us a quickstart primer on using this creative MIDI effect. Watch for new videos over the coming weeks!

Today we’re excited to release Max 6.1.

You can download Max 6.1 now to check out these new features:

64bit Application

  • Use more than 4GB RAM
  • Use high precision 64bit numbers in Max messages
  • Load 64bit Audio Unit and VST plugins

Live 9 Support

  • New devices
  • New Live API features
  • Performance and stability improvements

New Gen Features

  • Integrated operator reference
  • New operators and expression features
  • (Beta) Export Gen code to C++ (gen~) or GLSL (jit.gl.pix)

Performance Improvements

  • Faster application launch
  • Faster patcher load time
  • General optimizations

Complete Max 6.1.0 release notes are available here, and more discussion about what these features represent follows.

64bit Application

64bit application support is a big deal, and given how long Max has been under development in a 32bit world, it was no easy feat. Thank you all for your patience as we’ve worked to make this happen. 64bit applications allow users to take advantage of a much larger memory space, and hence more than the ~4GB of RAM we are limited to under 32bit. We’ve also been able to make infrastructural changes to support 64bit numbers when passed via Max messages, for higher precision calculations. These are features you have been requesting in Max for years, and they are finally here.

However, we’d like to balance expectations here. Since this is our first 64bit release, we will not have all of the features of the 32bit version, especially regarding Jitter and QuickTime support. QuickTime is simply not available on Windows under 64bit, where we will rely on DirectShow for movie playback (to play back QuickTime files you will need a third-party DirectShow plugin). Apple’s QTKit API on Macintosh 64bit has fewer features than the 32bit version of QuickTime, and requires a dramatic rewrite of our code base. We’ve only implemented the most basic movie playback functionality at this time on both platforms. We will continue to work on Jitter video playback and other QuickTime features in the 64bit version, but many features are not present and may never make it to the 64bit version.

Max and MSP should have nearly all the same features, except where they rely on QuickTime (e.g. PICT files are not currently supported under 64bit, and we instead recommend converting to PNG or JPG). However, 3rd party developers will need to port their objects to 64bit for them to run inside the 64bit version of Max. There is no loading of 32bit externals in the 64bit version of Max.

We will be providing an SDK for 3rd party developers in the coming days, but it will likely take some time before any particular 3rd party external becomes available. If you want to use the 64bit version and you have 3rd party dependencies, we recommend seeing whether you can replace them with core objects or abstractions until your favorite 3rd party object is available.

On Macintosh, the application comes as a single FAT bundle, by default set to run in 32bit mode. To run in 64bit mode, select the application and choose “Get Info” in the Finder. In the “General” tab there should be a check box labeled “Open in 32-bit mode”; turn this off to run in 64bit. If you want to keep separate 32bit and 64bit versions, you can duplicate your Max folder, select one of the applications, and set it to run in 64bit as described. Externals are also FAT bundles–i.e. they contain both 32bit and 64bit code.

On Windows, there are separate 32bit and 64bit installers and applications, and externals are in separate .mxe (32bit) and .mxe64 files.

Live 9 Support

Max for Live users will need to use Live 9 in conjunction with Max 6.1. Live 9 will be released on March 5th, and as you may have heard, Max for Live is now included in Live 9 Suite. The factory content will look a little different than in previous versions, and you will need to download and install the appropriate Live Packs for the content that was previously installed by default. In addition to the exciting features of Live 9, there are some great new devices in Max for Live, especially the drum synths and convolution reverb, but I recommend you visit the Ableton.com website for more information regarding Live 9 and Max for Live.

New Gen Features

Gen has some significant additions and improvements in this release. Gen now has an integrated operator reference in the sidebar to make learning and discovery easier than in previous releases. The operator set has grown, and the GenExpr language now supports recursive functions (for CPU, not GPU targets), calling gen patchers as functions, and defining functions with named parameters. But most exciting in this release is that we have a beta version of code export. This means that you can take your gen~ patchers and export them to C++ code, and take your jit.gl.pix objects and export them to GLSL code. This feature has only limited support in our initial Max 6.1 release, but over the coming months we will be working to improve the generated code, template examples, and documentation to make this feature useful for those of you who have been waiting for this capability. Note that the code export feature assumes that you are familiar with C++ and with working in a development IDE like Xcode or Visual Studio. We will be adding more code export examples and documentation in the wiki.

Thank you for continuing to inspire us with your creativity.

Happy Patching!