In this final article in the DMX series, I’m going to walk through the process of creating the DMX system we used to make the video below.
We needed to write a very specific, hardware-focused Max patch to get this to work. My hope is that some of the ideas and techniques used to create this system will be useful in your explorations of your own DMX hardware-based projects.
For this system, I used the following gear:
- LanBox LCX: The LanBox is my go-to DMX interface; it’s bomb-proof and loaded with functionality.
- American DJ Revo 4: This is a DMX-controlled, 256-channel LED projector. It can output 4 grids of 16x16 LEDs in RGB and white.
- Microsoft Kinect: The Kinect is great for detecting bodies in a 3D space.
- Max 6: of course!
If you use different DMX hardware, you’ll have your own system for this.
I also decided to use some more of Jean-Marc’s awesome software to manage the video data from the Kinect. In particular, I wanted to light up just one “blob” of activity in the camera’s view, in order to keep the show from getting crowded out with lights. For this I used objects from cv.jit.
I downloaded all of these objects and placed them in a folder inside my Cycling ’74 folder in my Max application folder.
The basic idea in this system is that we select a body in part of a three-dimensional space, create a video of the detected body, then convert the two-dimensional spatial data of that video into the one-dimensional data of the DMX packet controlling the projector.
Let’s start at the top of the patch and work our way down.
The master metro in the patch drives the whole show. It is set to 20 fps, which matches the frame rate of my LanBox. When working with DMX, there is never any reason to have an update rate faster than the frame rate of the system.
The Kinect-specific code is addressed first. We want to be able to open the Kinect as a video input device in Max, and also adjust the Kinect’s viewing angle. Most importantly, we want to set up a range slider so that we can “focus” the Kinect on objects at a certain distance from the device. Inside the kinect_code subpatcher, you can see how this all comes together.
Without going into extensive detail, this code adjusts the view from the Kinect to “see” at the range we wish to capture, then analyses this video for the presence of bodies, or “blobs”. We then label our blob, select only the first blob for viewing, and then create a video stream with only that blob in it.
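Outside of Max, the same “focus on a depth range, then keep only one blob” step can be sketched in Python with NumPy and SciPy. This is a rough analogue, not the cv.jit code from the patch; the frame data and depth range are stand-ins.

```python
import numpy as np
from scipy import ndimage

def isolate_first_blob(depth_frame, near, far):
    """Keep only the largest connected region within a depth range.

    depth_frame: 2D array of depth values (a stand-in for the Kinect image).
    near / far:  the range-slider limits used to "focus" the camera.
    Returns a boolean mask containing just the selected blob.
    """
    # Mask out everything outside the depth window we care about
    mask = (depth_frame >= near) & (depth_frame <= far)

    # Label connected regions ("blobs") in the mask
    labels, n_blobs = ndimage.label(mask)
    if n_blobs == 0:
        return np.zeros_like(mask)

    # Keep only the largest blob, so the show isn't crowded with lights
    sizes = ndimage.sum(mask, labels, range(1, n_blobs + 1))
    biggest = int(np.argmax(sizes)) + 1
    return labels == biggest
```

The key design point matches the patch: everything downstream only ever sees one blob’s worth of pixels.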
Next up, we need to convert the two-dimensional video data into a one-dimensional packet that will control the display of our LED projector. This is where the jit.iter object comes in handy. The code after it “spreads out” the coordinates of the 16x16 video into a one-dimensional array with 256 elements.
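The “spreading out” is just row-major flattening. As a minimal sketch in plain Python (assuming row-major ordering; the actual ordering depends on how the patch unrolls the matrix):

```python
def cell_to_channel(x, y, width=16):
    """Map a 2D cell coordinate to a 1-based DMX channel index.

    Assumes row-major ordering, so cell (0, 0) lands on channel 1
    and cell (15, 15) lands on channel 256.
    """
    return y * width + x + 1  # DMX channels count from 1

def flatten_frame(frame):
    """Turn a 16x16 frame (a list of rows) into a 256-element list."""
    return [value for row in frame for value in row]
```

For example, `cell_to_channel(0, 0)` gives channel 1 and `cell_to_channel(15, 15)` gives channel 256, covering the whole 256-channel packet.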
The Revo 4 projector has a particular channel mapping, which I figured out by reading the manual and goofing around with sending it various DMX messages (welcome to the world of DMX programming!).
The channel mapping looks like this.
In the format_chans subpatch, you can see how I used the coll object to provide this mapping for 32 channels, and this mapping can be repeated right up to channel 256.
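The repeat-the-block idea can be sketched in Python. Note that `BASE_MAP` below is a made-up placeholder, not the real Revo 4 table from the manual; only the repetition scheme is the point.

```python
# Hypothetical illustration of repeating a 32-channel mapping block
# across all 256 channels, the way the coll-based lookup does in the
# patch. BASE_MAP is a placeholder, NOT the actual Revo 4 mapping.
BASE_MAP = list(range(32))  # 32-entry placeholder table

def map_channel(i):
    """Look up channel i (0-based) by repeating the 32-channel block."""
    block, offset = divmod(i, 32)
    return block * 32 + BASE_MAP[offset]
```

With a real table, you would replace `BASE_MAP` with the 32 values from the manual and the same lookup covers all 256 channels.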
Finally, I use another coll object to set up the DMX packet that will be sent out on my DMX network. I use the incoming, out-of-sequence channel/value pairs to update the value for each channel via the nsub message to coll.
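The same bookkeeping, sketched in plain Python as a stand-in for the coll object: keep a 256-slot packet and overwrite single channels as out-of-order pairs arrive.

```python
class DMXPacket:
    """Minimal stand-in for the coll-based packet store in the patch."""

    def __init__(self, size=256):
        self.channels = [0] * size  # one byte per DMX channel

    def nsub(self, channel, value):
        """Update one channel in place, like coll's nsub message.

        channel is 1-based, as DMX channels are; values are clipped
        to the 0-255 range a DMX slot can hold.
        """
        self.channels[channel - 1] = max(0, min(255, value))

    def dump(self):
        """Return the whole current packet, as the master metro does."""
        return list(self.channels)
```

Because each update only touches one slot, it doesn’t matter what order the channel/value pairs arrive in; the dump always reflects the latest value for every channel.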
The project’s master metro also drives the dump message to the coll DMX_packet object, which sends the current packet out and on to the udpsend object via our LanBox specific packet-formatting object.
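As a final sketch, here is the last hop in Python: the packet going out as a UDP datagram. The host, port, and bare-bytes framing here are assumptions for illustration; a real LanBox needs its own message format wrapped around the channel data, which the patch’s packet-formatting object handles.

```python
import socket

def send_packet(channels, host="192.168.1.77", port=4777):
    """Send a DMX packet as a raw UDP datagram.

    host/port and the bare-bytes payload are placeholders; the
    LanBox-specific framing from the patch is not reproduced here.
    """
    payload = bytes(channels)  # 256 channel values, one byte each
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
```

In the patch, this step corresponds to the dump from the coll feeding the udpsend object once per metro tick.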
And there you have it! My thanks to Tom Hall and Janeva Zentz for coming over and making the video happen.