Our ever-popular Cycles series of sound libraries, produced by Ron MacLeod, is now available for download in our online store for a reduced price. Previously only available on DVD-ROM, the Cycles libraries are a unique collection of high-quality samples created with Cycling ’74 software.
Learn more and listen to excerpts from Cycles.
Tomorrow (20 July) at 4:30pm, Tom Hall and I will be presenting a 45-minute demo of Mira, the iPad controller app from Cycling ’74.
We will be discussing the design ideas behind Mira, showing Mira working in different contexts, and highlighting Mira’s ability to facilitate collaboration. There will be a raffle for a free copy of Mira, too.
More details about the event can be found here:
Want to learn to create custom devices? Austin-based collective Bit Voltage is offering a new video course to help get you deeper into Max for Live. The course, developed by Nate Crepeault, introduces the basics of Max for Live from the ground up with a focus on using the Live API in a Max Device. The course comes with a Live set that includes all the course material.
For more info, visit the Bit Voltage site
Cost: $19.99 until July 14th, at which point the price goes up to $29.99
Here is a sample of the videos from this course
Today we released our first product in the app store, officially known as Mira Controller for Max (but hereafter referred to as Mira). It works on any iPad and we’re selling it for around $50. A couple of years ago, we contacted the main developer of Mira, Sam Tarakajian, on the basis of his creative tutorials, which I highly recommend you check out. We asked, “Dude837 — if that’s your real name/number — what would you want to do?”
Sam expressed interest in something mobile.
Mira is the result of a long conversation about the process of using mobile devices with Max. The result of this conversation is the simple idea that your iPad should just give you back your patch. (It’s yours after all.)
There’s no separate UI to build, no OSC messages to deal with, no networking to configure (OK, sorry, it’s networking, there will always be something to configure).
The initial release of Mira is aimed at people interested in creating new Max projects with mobile control. In addition to the iPad app, there is a package of Max objects for our recently released version 6.1.3 you’ll need to download. The process is simple: you arrange UI elements (sliders, buttons) on top of a new object called mira.frame that represents the screen of your device. Every mira.frame in every patch gets its own tab on the iPad. In the coming months, we’ll be adding more Max UI objects and optimizing Mira support for Max for Live users.
There’s a lot more going on here than I can explain in a few paragraphs. We’ve made a couple of videos for two perspectives on the Mira workflow. Internally we’ve been referring to these as the East Coast and West Coast videos. See if you can figure out which is which.
When most people get ready to make a presentation they turn to PowerPoint or Keynote. If they’re of the fixed-gear bicycle persuasion, maybe they reach for something like Prezi. As would become increasingly obvious over the course of his keynote address, Bill Verplank is not most people. The NIME community is known for enthusiastically embracing new technologies. For them, anything older than a Leap Motion is a relic from the Stone Age. Faced with the challenge of holding the attention of such a techno-ravenous group, Bill opted for a revolutionary piece of high-resolution, force feedback presentation hardware. You may have heard of it: it’s called the pen.
Bill’s talk, called Motors and Music, covers all the things that you would never expect to hear about during a discussion of computer music. Things like physicality. Things like emotive content and embodiment. Things like the ’80s. He starts by taking us on a tour of early experiments in computer music interaction. Bill narrates over videos of Max Mathews wiggling a string to trigger scanned synthesis and Perry Cook shaking a chain of bend sensors. It’s amazing to see just how advanced the interfaces were then, and how little has changed since. Sure, these days we might use a MacBook Air instead of a tape reel, but at the end of the day it’s almost as if we’ve lost more than we’ve gained. When I sit down to interact with a computer today, I’m lucky if I’m given a real keyboard as opposed to a simulation under glass. Watching Max Mathews and Perry Cook dancing in front of a wall of mainframes, creating music out of a chain, a string, a pen, and a coffee cup, it’s hard not to feel like we’ve lost something.
Bill now moves from archival footage to the present day. He introduces us to the Plank, one of many haptic toys crafted at the Copenhagen Institute of Interaction Design. Think of it as the atomic unit of force feedback interaction. Built out of a strip of wood and a re-purposed hard drive, it’s a piano key that pushes back.
He goes on to show us some of the projects his students have worked on using the Plank. You might think that there wouldn’t be much you could accomplish with one sensor and a single degree of freedom, but then you’d be wrong. Clearly, you’ve never played Angry Birds with a force-feedback slingshot, touched a quantum-entangled pendulum, or played Prosthetic Golf.
As Bill brings his presentation to a close, I fight the urge to rush the stage and take the Plank home for myself. I can’t remember the last time I got so excited about a piece of hardware. I didn’t feel this way when the iPad came out; instead, I remember sinking into a fog of disappointment as I realized computer interfaces were moving away from physical interaction, not towards it. We’ve lost a lot of ground since the days of Max Mathews. When the iPhone first appeared people expressed frustration at having to type by tapping on a piece of glass. It doesn’t feel real, they said. I miss having buttons I can touch, they said. Now, Siri and spellcheck have taught our fingers laziness, complacency.
So what happened to our nuanced, multimodal interaction paradigms? If you ask me, the same thing happened to interface design that happened to digital music: convenience beat quality. Given the choice between watching a poorly encoded YouTube clip and patiently downloading a high quality, DRM-laden audio file, most people would rather not wait. In the same way, people are more interested in checking their email 500 times a day from their smartphone than they are in having a two-hour jam session with a force feedback joystick. Researchers like Bill may push advances in interaction design, but it seems to me that makers of consumer electronics will always be more focused on portability and power consumption than on haptic feedback.
Of course, I can’t help but wonder what would happen if some company suddenly decided to throw their whole weight behind a new gestural controller. What would they come up with? I feel the first step would be to fortify the woefully impoverished language that we currently use to talk about gesture. Think about it: when it comes to sound we’re able to address all the nuances of spectrum, waveform, frequency domain, timbre, loudness, pitch, attack, envelope and decay, just to name a few. What language do we have for talking about gestures? Slow versus fast, maybe?
Before he leaves the stage, Bill offers us one last quote:
“Grab a hold of something and feel it push back at you and make music”
At this point one of the audience members, inspired by Bill’s august presence, asks a question about toilets.
Welcome to Daejeon
On my last day in Korea, I wouldn’t even have noticed. Nothing about staring at a heaping bowl of pickled cabbage first thing in the morning would bother me. After one week of eating kimchee three meals a day, every day, the part of me that needed two chocolate croissants and a double americano just to feel normal would finally have acclimated. On day one, however, I’m not quite there yet. After sputtering out a weak, black broth, the coffee machine advises me to “Have a nice time”. I’m giving it my best, but looking down at my bowl of rice with seaweed broth and trying to see oatmeal is more than I can manage. To any outside observer I’m sure I look like what I am: a coffee-starved software engineer very much out of his element.
“Excuse me,” I hear someone say, “you must be here for NIME.”
Getting to NIME from the Toyoko Inn requires a fifteen minute taxi ride, circling the government complex at the center of Daejeon and crossing the river into KAIST campus. Staring out the window on the way over, I’m not entirely sure what to make of my surroundings. Based on the gnashing juxtapositions all around me, I’d say the city of Daejeon seems to think it can shock me out of sleep deprivation with bewildering choices in urban planning. The humble government complex, for example, rises no more than three stories high in the center of a large public park, yet towering apartment blocks housing thousands flank the complex to the east and the west. I decide that the squat government building must be nothing more than a gateway, and that beneath the park extends a labyrinth sprawling hundreds of miles underground. Also, peering into the distance beyond two apartment buildings, I notice a strange, metallic spire. It looks like a spaceship from my vantage point, but that would be crazy, my mind must be playing tricks on me. Of course, as we get closer it looks more and more like a spaceship, until it turns out that’s exactly what it is. In an attempt to clash maximally with the drab apartment units on the south side of the river, northern Daejeon sports a giant amusement park.
I give up on trying to understand the city and opt for conversation instead. My first traveling companion is Simon Hutchinson, who when he complains about being fatigued and confused does so in a voice both energetic and lucid. He explains that he’s come to NIME to perform a piece called Shin no Shin, using an iPad to turn touch and acceleration into music. I’m tempted to spout off about Mira, but I decide that there will be plenty of time for that later at my poster session. We exchange a few notes about the iPad as a performance instrument. I wonder if Mira will be useful for musicians like Simon, or if the tools that already exist are good enough.
I’m also fortunate enough to ride with Adam and Liam of Alphasphere, who tell me about their spherical music making gadget of the same name. My description can’t do it justice, but you can think of the Alphasphere as an overgrown buckyball with aftertouch. For a more precise picture, imagine plastic rings arranged in a ball, with flexible, pressure sensitive fabric stretched over each one. You play the instrument by distorting the fabric, which Alphasphere translates into MIDI and OSC data. As Adam describes the hardware I notice a strange tension in my fingers, my first taste of what I’m now calling NIME Complex Sigma. It’s a debilitating condition that I will encounter several times throughout the conference, characterized by acute mental anxiety and muscle twitching. The cause: listening to the description of a revolutionary new instrument, really, really wanting to play it, and then not getting to play it.
NIME doesn’t officially start until the next day, but people like me who chose to show up early get to attend one of several workshops. I want to go to all six, but somehow the conference organizers expect me to pick just two. Of course, the whole question of which workshop to choose becomes moot when it turns out that none of us can find the building where we’re supposed to register. Each of the buildings on KAIST campus has a letter and number associated with it, which would in theory make finding a given building an easy task. However, at the center of campus the correlation between number, letter and proximity approaches zero–building E16 is right next to N4. Naturally, asking for directions is an exercise in futility, as what little Korean I know comes from watching Arrested Development. Eventually by walking in ever-widening circles we manage to find the right building. We’re a bit worried about showing up several minutes late, until we notice two crucial facts. Fact one: there is a giant mob of not-at-all-Korean looking people standing outside the building. Fact two: the man who is supposed to lead the first workshop is among them.
Sometimes people who make NIMEs forget to bring keys
When at last we manage to enter, the first thing I discover is that black coffee is not as easy to find as I would have hoped. Canned coffee drinks come easy, with vending machines at every corner offering sugary, undrinkable swill with names like Joy and Yes. But it seems I’m going to have to wait a bit longer to get a taste of something fresh roasted. Not having caffeine impairs my decision making process, which makes my second discovery all the more significant. As it turns out, two of the workshops are free, whereas the other four very much are not. So in the end, I opt for the NIME orientation workshop and for the one on making music with Web Audio.
KAIST poses an anatomical conundrum
Michael Lyons, a NIME veteran and researcher in musical interaction, leads the first workshop. His presentation does a great job of filling in the gaps in my knowledge on NIME-related topics, subjects like primary versus secondary feedback (secondary feedback is the sound an instrument makes, primary feedback is everything else it does). He also provides a thought-provoking overview of why people make NIMEs in the first place, which I find particularly interesting. Beyond techno-fetishism and fascination with the human-machine relationship, he posits that the #1 reason people are interested in building new instruments is an insistence on cultural fluidity. People want new ways to make sound because they want their own tools–they don’t just accept what’s given to them. No wonder so many NIME builders use Max.
As Michael brings the presentation to a close, my mind is humming with new ideas to take back to the Cycling ‘74 think tank:
- Mapping (between input gestures and sound output) is the heart of NIME, and indeed of instrument building in general.
- MIDI is plug and play; OSC isn’t, because there’s no standard
- Programmability is a curse, and it’s important to have long-term versions of things
- Primary feedback (lights, vibrations) is critical for intimacy
- Music is becoming increasingly process oriented as opposed to artifact oriented. People who are not virtuosos are willing to go out in public and make music, and are eager to find a forum to do so.
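The MIDI-versus-OSC point is worth unpacking. A MIDI note-on is a fixed three-byte message that any compliant device can interpret, while an OSC message is a free-form address plus arguments that sender and receiver must agree on in advance. A tiny Python sketch of the contrast (the `/mySynth/freq` address is invented purely for illustration):

```python
# A MIDI note-on has a fixed, universally understood layout:
# status byte (0x90 = note-on on channel 1), note number, velocity.
midi_note_on = bytes([0x90, 60, 100])  # middle C at velocity 100

# An OSC message is just an address string plus typed arguments.
# The address "/mySynth/freq" is an arbitrary, app-specific name:
# nothing guarantees a receiver understands it, so every OSC pairing
# needs a namespace agreement that MIDI users get for free.
osc_message = ("/mySynth/freq", [440.0])

print(midi_note_on.hex())  # -> 903c64
print(osc_message)
```

Any synthesizer can act on those three MIDI bytes out of the box; the OSC message only works once both ends share the same address scheme.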
Important links from that talk include:
After the workshop, we stumble back outside on legs made weak from six hours of sitting. Since I’ve gone over ten minutes without complaining, I seize on the opportunity to mope about the cloudy weather. No one around me seems to pay any attention, perhaps because they wisely understand that overcast and humid is probably an overture for rainy and chilly. For now, we take advantage of the warm weather to restore circulation to our feet and converse about all things NIME. As we chat, we’re treated to the first appearance of the chair of KAIST 2013, Woon Seung Yeo, better known as Woony. I had been told at some point (by someone very foolish) that Koreans have a cultural lacuna when it comes to sarcasm. I suspect that Woony knew this and made it his personal mission to wipe away my misconception. “I encourage you to visit the famous KAIST goose crossing, especially since I know NIME participants are all great lovers of animals,” he says. “Not in that way,” he adds. Woony’s dry and biting wit would only desiccate in the days to come.
The nicest day of the whole conference
Drawing his short opening remarks to a close, Woony directs our attention to the area behind us, where a seemingly infinite quantity of food seems to have materialized out of nowhere. “And now, enjoy the banquet,” he says. “And of course the free beer.”
Well, there you have it. Free beer and unlimited food. No points for guessing how long it took me to fall asleep after that one.
Recently, I went to the NIME (New Interfaces for Musical Expression) conference at KAIST in Daejeon, South Korea. Over the course of five days, I attended workshops in Web Audio, absorbed paper presentations on digital laughter, and watched what could only be described as a pneumatic zombie duet. I also attended not one but three banquets. For those interested in the gaps between banquets, I offer this story.
I step off the plane. Location: Incheon. Body: Exhausted. Mind: Blank. Between the 12-hour flight, the 15-hour time difference and repeated exposure to the in-flight movie, A Werewolf Boy, I can already feel my grip on reality starting to slip away. I make my way through the airport, down to baggage claim and onto the express train for Seoul. As far as I can tell the train was constructed in the year 2040 and brought back in time to the present day. The oleophobic seats conform exactly to every contour of my exhausted body. A flatscreen television unfolds from the ceiling above, presenting a promotional ad for a nearby civic development project. BUILDING, it promises, in blaring, positivist capitals. CIVIL. PLANT. HOUSING. Depictions of enormous glass and steel buildings, assembled by swarms of tiny robots, rise before me. Outside my window, we pass row upon row of small-scale farms, sometimes running all the way up to the train tracks. Eventually the train comes to a small bridge connecting Incheon to the mainland. Rising up out of the water I can see huge mounds of dirt and grass, looking like the backs of giant turtles lumbering towards Seoul. I am very sleepy. I decide that they probably are turtles, and I write the following poem:
POEM FOR THE TRAIN TO SEOUL
The fog helps me see the tortoises
Grinding out low channels
And the speculative egrets on long stalks
The tortoises are my cold cows
Ruminating on the countryside
And other fictions
They roar silently
Like old men, or magma
Train tracks are humming
The sound of soft gray wool
And my eyes are as heavy as the tortoises
I decide that this poem is very good, then I fall asleep. When I wake up, we’ve arrived in Seoul, where I must have boarded another train for Daejeon, though I honestly can’t remember. Neither do I remember arriving in Daejeon, finding my hotel, or making my way up to my room. Probably all these things happened, but whether they happened to me or to someone who looks a lot like me I will never know. In the morning a straight line connects my backpack to my suitcase, to a pair of shoes, to where I fell asleep, face down on a still-made bed.
A few months ago we made the decision to trim down our office size and send some physical merchandise like shirts, audio libraries, and music releases to Amazon for fulfillment. We plan to add some new, fun products in the future, too.
For those of you who love [have] Amazon Prime, you know what this means. There is nothing stopping you… sort of.
YouTube user Naoto Fushimi has been steadily posting some great videos demonstrating advanced, audio-reactive Jitter / GL techniques.
Follow here if you like seeing pixels move!
The good folks over at VDMX just posted an excellent video tutorial series, detailing the steps necessary to create communication links between Max and VDMX. In the video, a texture generated in VDMX is sent to Jitter, via the Syphon plugin, analyzed with jit.3m, processed with jit.gl.pix, and sent back to VDMX.
Very cool to see these two apps playing so nicely together!
Using only a single stereo S/PDIF output from your audio interface, you can access up to five ES-4 gate expander modules – each of which supports eight gate outputs. That’s 40 outputs! All this flexibility is easily accessed with Expert Sleepers’ new native Max es4encoder~ object. It couldn’t be simpler.
The five 8-bit outputs can be used in a number of different ways. Instead of eight gates, an output can send a single 8-bit value, such as a pitch CV or velocity.
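To make the gates-versus-value idea concrete, here is the bit arithmetic involved, sketched in Python (this is just the underlying packing idea, not Expert Sleepers’ actual encoder code):

```python
def gates_to_byte(gates):
    """Pack eight on/off gate states (index 0 = bit 0) into a single byte."""
    assert len(gates) == 8
    value = 0
    for bit, on in enumerate(gates):
        if on:
            value |= 1 << bit  # set this gate's bit
    return value

# Read as eight independent gates: gates 0, 3 and 7 are high.
print(gates_to_byte([1, 0, 0, 1, 0, 0, 0, 1]))  # -> 137 (0b10001001)

# The same eight bits can instead carry one 8-bit value, e.g. a velocity.
velocity = 100
print(gates_to_byte([(velocity >> b) & 1 for b in range(8)]))  # -> 100
```

Five such bytes per sample frame is how one stereo S/PDIF stream ends up driving the 40 outputs mentioned above.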
The new (beta) Code Export feature of Gen has only been around for about a month, and is still sparsely documented, but that didn’t stop Varun Nair at the Designing Sound blog from digging in and trying it out. The tutorial goes through the process of creating and exporting a tremolo effect with Gen and then building the code into an Audio Unit plugin. It’s great to see such a clear and well-written tutorial.
Varun also gives a nice and simple overview of getting started with the Gen environment in Max. We look forward to seeing more experiments in this area, and are really excited about what people will do with Code Export. Have any experiences to share? Let us know in the comments.
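For readers curious what such an exported effect actually has to do, the heart of a tremolo is small: scale the input signal by a low-frequency oscillator. Here is a minimal Python sketch of that general technique (parameter names are ours; this is not the code Gen emits):

```python
import math

def tremolo(samples, rate_hz=5.0, depth=0.5, sr=44100):
    """Amplitude-modulate a mono signal with a low-frequency sine (LFO)."""
    out = []
    for n, x in enumerate(samples):
        lfo = math.sin(2.0 * math.pi * rate_hz * n / sr)
        # Map the LFO from [-1, 1] to a gain in [1 - depth, 1].
        gain = 1.0 - depth * 0.5 * (1.0 + lfo)
        out.append(x * gain)
    return out

# One second of a constant input comes out wobbling at 5 Hz,
# with the gain sweeping between 0.5 and 1.0.
wobbled = tremolo([1.0] * 44100)
print(min(wobbled), max(wobbled))
```

An exported C++ version would run this same per-sample loop inside the plugin host’s audio callback.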
Next week, a special event will be happening in Brooklyn at Roulette. Toni Dove’s “Lucid Possession” premieres April 25, 26, and 27th. Those lucky enough to be in the vicinity will have the opportunity to experience this unique stage production — a “contemporary ghost story” featuring robotics, gorgeous costumes, and stunning voices and music. There are many talented Max users involved, including Todd Reynolds, Luke DuBois, and Elliott Sharp. They, Toni and all the other artists and crew will make it a memorable experience. Don’t miss it!
At the Code Control Festival in Leicester, England this past weekend we gave attendees an advance peek at some of our mobile projects. Sam Tarakajian, our principal mobile developer, showed a new iPad app, the Mira controller for Max, that makes it possible, with as close to zero configuration as possible, to make your patch “touchable.” Mira presents a large set of Max user interface elements on the iPad exactly as they appear in your patch. It also provides access to multitouch and accelerometer data. We’ll be revealing more of this powerful addition to the Max universe as we prepare it for release in the app store later this spring.
As a possible companion to Mira, I revealed a new “hardware” project dubbed the MiraBox — in reality, nothing more than an 8 x 10 wooden picture frame stuffed with foam — that helps capture accelerometer and gyro data from the iPad. The software component of the project was prototyped entirely with Mira and Max 6. Like many others we’re interested in extracting higher-level gestures from accelerometer sensors, but in particular, we’re interested in tracking data when you touch your patch.
Matthew Davidson, the developer of the new Mono Sequencer device, gives us a quickstart primer on using this creative MIDI effect. Watch for new videos over the coming weeks!
Today we’re excited to release Max 6.1.
You can download Max 6.1 now to check out these new features:
64bit Application
- Use more than 4GB RAM
- Use high precision 64bit numbers in Max messages
- Load 64bit Audio Unit and VST plugins
Live 9 Support
- New devices
- New Live API features
- Performance and stability improvements
New Gen Features
- Integrated operator reference
- New operators and expression features
- (Beta) Export Gen code to C++ (gen~) or GLSL (jit.gl.pix)
Optimizations
- Faster application launch
- Faster patcher load time
- General optimizations
Complete Max 6.1.0 release notes are available here, and more discussion about what these features represent follows.
64bit application support is a big deal, and given how long Max has been under development in a 32bit world, it was no easy feat. Thank you all for your patience as we’ve worked to make this happen. A 64bit application can address a much larger memory space, breaking the ~4GB RAM limit imposed by 32bit. We’ve also made infrastructural changes to support 64bit numbers passed via Max messages, for higher-precision calculations. These are features you have been requesting for years, and they are finally here.
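The precision difference is easy to see: a 32-bit float carries roughly 7 significant decimal digits, while a 64-bit double preserves about 16. A generic illustration using Python’s struct module (not Max code):

```python
import struct

x = 0.1234567891234567  # more digits than a 32-bit float can represent

# Round-trip through a 32-bit float: the tail digits are lost.
as_float32 = struct.unpack('f', struct.pack('f', x))[0]

# Round-trip through a 64-bit double: the value survives intact.
as_float64 = struct.unpack('d', struct.pack('d', x))[0]

print(as_float32)  # roughly 0.12345679; only ~7 digits survive
print(as_float64)  # the original value, unchanged
```

The same trade-off applies to any system that moves numbers through a single-precision bottleneck, which is why double-precision Max messages matter for calculations.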
However, we’d like to temper expectations. Since this is our first 64bit release, it will not have all of the features of the 32bit version, especially regarding Jitter and QuickTime support. QuickTime is simply not available under 64bit on Windows, where we will rely on DirectShow for movie playback (to play back QuickTime files you will need a third-party DirectShow plugin). Apple’s QTKit API on 64bit Macintosh has fewer features than the 32bit version of QuickTime, and adopting it requires a dramatic rewrite of our code base. At this time we’ve implemented only the most basic movie-playback functionality on both platforms. We will continue to work on Jitter video playback and other QuickTime features in the 64bit version, but many features are not present and some may never make it to 64bit.
Max and MSP should have nearly all the same features, except where they rely on QuickTime (e.g. PICT files are not currently supported under 64bit; we recommend converting to PNG or JPG). However, 3rd-party developers will need to port their objects to 64bit before they can run inside the 64bit version of Max; the 64bit version cannot load 32bit externals.
We will be providing an SDK for 3rd-party developers in the coming days, but it will likely take some time before any particular 3rd-party external becomes available. If you want to use the 64bit version and your patches depend on 3rd-party objects, we recommend seeing whether you can replace those dependencies with core objects or abstractions until your favorite 3rd-party object is available.
On Macintosh, the application ships as a single FAT bundle, set by default to run in 32bit mode. To run in 64bit mode, select the application in the Finder and choose “Get Info”. In the “General” section, uncheck the box labeled “Open in 32-bit mode”. If you want to keep separate 32bit and 64bit versions, duplicate your Max folder, then set one of the two applications to run in 64bit as described. Externals are also FAT bundles, i.e. they contain both 32bit and 64bit code.
On Windows, there are separate 32bit and 64bit installers and applications, and externals are in separate .mxe (32bit) and .mxe64 files.
Live 9 Support
Max for Live users will need to use Live 9 in conjunction with Max 6.1. Live 9 will be released on March 5th, and as you may have heard, Max for Live is now included in Live 9 Suite. The factory content will look a little different than in previous versions, and you will need to download and install the appropriate Live Packs for the content that was previously installed by default. In addition to the exciting features of Live 9, there are some great new devices in Max for Live, especially the drum synths and convolution reverb, but I recommend you visit the Ableton.com website for more information about Live 9 and Max for Live.
New Gen Features
Gen has some significant additions and improvements in this release. Gen now has an integrated operator reference in the sidebar to make learning and discovery easier than in previous releases. The operator set has grown, and the GenExpr language now supports recursive functions (for CPU, not GPU, targets), calling gen patchers as functions, and defining functions with named parameters. Most exciting in this release is a beta version of code export: you can export your gen~ patchers to C++ code and your jit.gl.pix objects to GLSL code. This feature has only limited support in our initial Max 6.1 release, but over the coming months we will be working to improve the generated code, template examples, and documentation to make it useful for those of you who have been waiting for this capability. Note that code export assumes you are familiar with C++ and with a development IDE like Xcode or Visual Studio. We will be adding more code export examples and documentation in the wiki.
Thank you for continuing to inspire us with your creativity.
If you follow the Max Gen forum, you might be forgiven sometimes for thinking that the only people using gen~ are command-line-codeophiles busy downloading stuff from DSP archives and dropping them into a codebox object. While that’s awesome, I have a particular “burden on my heart” – as we say in the part of the U.S. my family hails from – for those who love them some graphic patching. The ever-delightful Johan van Kreij may have excited you at some near-future point by showing you the process whereby he uses connect-the-box programming to make something amazing. The only way to tell whether or not you’ll be excited and grateful is to have a look at it for yourself, of course.
Max-enthusiast and Expo ’74 presenter Jeremy Bailey has a message for people who contribute to Kickstarter campaigns (specifically, for his own campaign).
You’re the best, Jeremy!
Code Control Festival is Europe’s biggest Max meetup. Phoenix Cinema and Arts Centre in Leicester will be hosting its 3rd international conference for artists, musicians, students and teachers to explore Cycling ’74’s Max software, a toolbox for developing unique sounds, stunning visuals and engaging interactive media.
This year Phoenix, in association with Cycling ’74, invites applications to its Code Catalyst Award fund. The Catalyst Award represents an excellent opportunity for artists and practitioners to design and create new work or get free tickets to the events. The deadline for submissions is Friday 8th February.
Guest speakers will include Cycling ’74 CEO and founder David Zicarelli, Cycling ’74 developers Sam Tarakajian and Jeremy Bernstein, and Eric Lyon with more to be announced.
Festival dates: 22nd – 24th March 2013
I got to spend a day at NAMM, and it was a great chance to spend some time with our friends. Here’s a little picture of the folks at the Livid booth, showing great excitement over their new Base product. This is a really nice controller – no moving parts, realtime positional feedback and just the right size for backpacking. They were also spotlighting the Alias 8 controller, which seems purpose-built for making Live sing.
Spending time with Livid also reminded me how much I love the OhmRGB Slim, which seems to be the perfect combination of over-the-top features with grab-it-and-go size. This was a great opportunity to talk smart, have fun and feed the GAS (Gear Acquisition Syndrome)!
Contrary to what you may have heard, the 2013 NAMM show wasn’t entirely about the rise of beautifully dirtied analog in the form of the Moog Sub Phatty, Dave Smith’s marvelous Prophet 12 (which you should imagine as a hybrid cross of parts of the Tempest, the Poly Evolver, and the redesigned Prophet), the shrinking (in terms of size and price) of the Korg MS-20, or the return of the Buchla Music Easel (yes, really).
I’m a Max guy, so I prowled the trade floor looking for controllers I could repurpose and come to love. This year, it was sufficiently rewarding to actually lure me away from hardware synthesis fun, gawking at analog video modular systems and nifty modestly sized modeling amps. I have escaped from the Trade Show Floor to tell thee, if somewhat idiosyncratically.
In particular, the Controller Pilgrimage means that I did several things in no particular order: I nerded out about matters such as the ability to fluidly play two-finger trills and mordents on the Ableton Push grid pads – which feel really, really good. I enjoyed the near-perfect size and feel of the Livid Instruments Base. And I think that the QuNexus from Keith McMillen Instruments may offer the first good solution to something I’ve messed with for years in my Indonesian-influenced work – the ability to have a physical interface allow for different playing techniques related to metallophones (alternating striking and damping, grabbing the bottom of a saron key to “silence” it, etc.).
There was one “outside of the box” encounter I wanted to mention, since it might not get quite as much mention as the above – I guess that it really was an “outside of the box” encounter quite literally, since I ran across the object in the laser and LED-stuffed Arena area of the Convention center (Yes, I went to walk on the video floor, too). There, amid the fog and laser-drawn vector graphic squiggles, I met the Alphasphere.
We all love it that the internets bring us images of things we might desire, but there’s no substitute for the real experience of the real thing. Okay, maybe there is a sort of substitute: here’s a great video our pals at Sound on Sound shot at the last Musik Messe that ought to give you a sense of it.
While a quick walk-by on the way to see the video floor struck my inattentive eye as something like a scaled-down ball sensor with hard transducer pads, the real item was far more compelling. The biggest surprise was that the surface of the circles that cover the sphere – rather than being some kind of dark hard surface, as I might have expected – was a lovely, soft stretched membrane that felt as much like a slightly loosened drumhead as anything else. The sense of feel and control when this surface was stroked or hit or pressed upon was a great experience.
In addition to the layout of the circle/drumheads as a series of 8 differently sized pads wrapped horizontally around the spherical surface (meditate on that as a topology rather than a grid for a few moments and see if you don’t get some interesting ideas), I found myself using the feel of the “spaces” between the pads as a way to traverse the surface.
While the triggering demos they had in the booth to demonstrate the software that comes with the unit were a lot of fun, I was struck by the notion that the configuration of controllers on the unit – stripped of the intention of its creators and laid open as collections of MIDI-producing outputs – has some really compelling physical and tactile features that I’ve never encountered elsewhere. (Oh yeah – the hardware design includes internal LEDs that you can control and turn on and off for visual feedback.)
If you’re one of those people who worries that the dominance of the iPad in the control surface world de-emphasizes the aspect of real touch sensitivity as a part of instrument or controller design, I think you’ll be really intrigued.
P.S. You’ll never guess what software was used during the prototyping phase of their design….