One of the most feared and respected objects in the Jitter collection, jit.expr arrived on the scene as part of Jitter 1.5.
Since a lot of people are interested in what the process of porting a Max patch for use in Max for Live looks like, I thought I’d use this tutorial as an opportunity to go over the steps I took to convert my waveplayah patch into a Max for Live device, waveplayah.amxd.
In my last LFO tutorial, I took the basic LFO module I’ve been working with in the previous tutorials, added some new extensions, and created a nice little patch called the waveplayah that used a summed set of the LFO modules to drive the playback of the contents of a buffer~.
A while back, I wrote a series of four tutorials based around the idea of how you could generate and organize variety in Max patches.
While many people are looking at Max for Live as a great way to integrate their favorite hardware controllers, build really unique effects, and add variety to their productions, I was eager to explore what could be done with video inside of Max for Live.
Coming up with ways to get information about the physical world into Max is one of the most fun aspects of working with the software. Whether it is for video processing, sound creation, or any other type of output, physical interactions provide a space for much more interesting relationships to develop. Unfortunately, many ways to get this information into Max require the user to get comfortable with connecting wires to circuit boards and understanding basic (and sometimes not-so-basic) electronics. For this reason, camera-based interactivity can be pretty enticing: the startup cost is reasonably low, and plugging in a camera is usually a user-friendly process. In this article, I will share a couple of basic techniques for using affordable webcams to gather data in MaxMSP/Jitter.
In this installment of the Video Processing System, we're going to tackle two big hurdles that Jitter users often find themselves coming up against. The first thing we will add is an improved, high-performance video player module built around the poly~ object. This will allow us to load a folder full of videos and switch between them quickly and efficiently. The second addition is a simple recording module to capture our experiments. Since we are using OpenGL texture processing to manipulate the video, it is a little bit more complicated than just using jit.qt.record, but not by much.
Lately, I've been working on some "classic" OpenGL programming within Jitter, and I've been using jit.gl.sketch to do that work; it is very close to the OpenGL syntax that you find in most books, and is fairly forgiving in terms of incoming data type. However, I got very tired of editing message boxes once the programs grew a little bigger, though I still wanted replaceable parameters like those a message box provides.
I'd like to share some really simple things that have worked for me that I hope you'll find useful, or that may provide a starting point for your own investigations.
In this installment, we'll be working on some more advanced ninja tricks - creating the beginnings of a control/preset structure with assignable LFOs, and building a GPU-based video delay effect. These two parts will bring our system to a much more usable level, and allow for much more complex and interesting results. Ironically, most of what we are really doing in this installment is just an extension of bread-and-butter Max message passing stuff.
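At its core, a video delay like the one described here is a ring buffer of frames: write the current frame in, read out the frame from N steps back. Here is a minimal Python sketch of that idea (the placeholder string "frames" and the `frame_delay` helper are illustrative assumptions — in the actual patch the frames live on the GPU as textures):

```python
from collections import deque

def frame_delay(frames, delay):
    """Delay a stream of video 'frames' by `delay` steps using a ring buffer.
    Frames here are just placeholder values; in the patch they are textures."""
    buf = deque([None] * delay, maxlen=delay)  # preloaded with empty frames
    out = []
    for f in frames:
        out.append(buf[0])  # frame from `delay` steps ago (None until filled)
        buf.append(f)       # deque with maxlen drops the oldest automatically
    return out

print(frame_delay(["f1", "f2", "f3", "f4", "f5"], delay=2))
# -> [None, None, 'f1', 'f2', 'f3']
```

The GPU version follows the same pattern, except the "buffer" is a pool of textures that get cycled rather than copied.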
In our last article, we began to create our processing system by putting the essential structure in place and adding our input handling stage. In this installment we are going to add a Gaussian blur and color-tweaking controls to our patch.
In this, the final episode of our guitar processing extravaganza, we are going to step away from making effects and focus on performance support. For a system as complicated as this, performance support means two things: patch storage and realtime control. Thus, we will learn to create a preset system and manipulate the various on-screen controls with an inexpensive MIDI footpedal system.
At this point, we have a pretty useful guitar processing "rack", but it could use a little spice. This spice will come from two additional processors: a looping delay unit, and a basic reverb system. Also, to help keep the output useful, we will drop a limiter on the back end of the entire rig.
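The heart of a looping delay unit is a delay line with feedback: each output sample mixes the dry input with a delayed copy, and that delayed copy is fed back into the line so the echo repeats and decays. This is a minimal Python sketch of the concept (the parameter names and values are illustrative assumptions, not the actual patch):

```python
def feedback_delay(signal, delay_samples, feedback=0.5, mix=0.5):
    """Basic looping delay: mix the dry input with a delayed copy,
    feeding the delayed copy back into the delay line."""
    buf = [0.0] * delay_samples
    pos, out = 0, []
    for x in signal:
        wet = buf[pos]                   # sample from delay_samples ago
        out.append((1 - mix) * x + mix * wet)
        buf[pos] = x + feedback * wet    # feedback keeps the echo looping
        pos = (pos + 1) % delay_samples
    return out

# An impulse produces echoes every `delay_samples` samples, decaying
# by the feedback amount each pass.
print(feedback_delay([1.0] + [0.0] * 5, delay_samples=2))
# -> [0.5, 0.0, 0.5, 0.0, 0.25, 0.0]
```

In MSP this is what a tapin~/tapout~ pair with a feedback connection does, with the delay time expressed in milliseconds rather than samples.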
Between the tutorials, Jitter Recipes, and all of the example content, there are many Jitter patches floating around that each do one thing pretty well, but very few of them give a sense of how to scale up into a more complex system. Inspired by a recent patching project and Darwin Grosse's guitar processing articles, this series of tutorials will present a Jitter-based live video processing system using simple reusable modules, a consistent control interface, and optimized GPU-based processes wherever possible. The purpose of these articles is to provide an over-the-shoulder view of my creative process in building more complex Jitter patches for video processing.
In the last article, we added some basic tonal effects: distortion/overdrive and EQ/filtering. This time, we will expand our virtual effects rack to include both a phase shifter and a full-featured modulating digital delay. As we add these effects, you will begin to see why a DIY effects system can trump any commercial product.
Now that I've got a nice generative patch and a way to hear it, I thought it'd be nice to make a few improvements and extensions that would let me begin to specify larger structures - to generate instructions to my generative patch, as it were. While I'm sure that the world is full of people who want ways to have the same thing happen again and again, I'd like to do this in ways that offer a little more freedom than that. This short tutorial will add a modest number of these kinds of changes.
In the last article, we did a lot of setup - we got input/output handling in place, and added a compressor to the processing chain as an example of an “effect module”. In this article, we will continue adding effects, including a dual overdrive module and a three-stage EQ/Filter module. With these additions we will further explore Max 5’s user interface options, as well as take a look at some of the “tweaks” that make Max/MSP functions a little more guitar-faithful.
Last time out, we created the LFOur, a generative patch composed of a quartet of synchronized LFOs whose output we can use to make noise. While it's interesting to watch how the different LFO configurations make combinatoric waveforms and it's restful and instructive to watch the sliders flick and rock, it would be nice to have something to connect it to. This tutorial includes some patches that will do just that.
In an earlier article, Andrew Benson and Ben Bracken went through the process of connecting a guitar to a Max-based processing system, and creating a few guitar-oriented effects patches. In this series of articles, I will be building a Max-based guitar processing "rig", and will give you the opportunity to look over my shoulder as I design and implement this system.
I'm personally a lot more interested in the ability to synchronize processes in Max using time values that resemble musical note values to create control structures that can be easily time synced. This tutorial is about making one of those kinds of modules - a quartet of synchronized LFOs whose outputs I can sample individually for several kinds of data (triggers for waveform start, LFO outputs that I can sample at variably synchronized rates, and a nifty summed waveform I can use for more exotic kinds of control).
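The summed-LFO idea can be sketched in ordinary code. Here is a minimal Python illustration of a bank of phase-synchronized LFOs sampled at a common time point, with their normalized sum as an extra output (the sine shapes, the `lfo_bank` helper, and the specific note-value-like rates are assumptions for illustration — the actual patch offers multiple waveforms and tempo-synced rates):

```python
import math

def lfo_bank(t, freqs, phases=None):
    """Sample one LFO per entry in `freqs` at time t (in beats) and
    return the individual outputs plus their normalized sum."""
    phases = phases or [0.0] * len(freqs)
    outs = [math.sin(2 * math.pi * (f * t + p)) for f, p in zip(freqs, phases)]
    return outs, sum(outs) / len(outs)

# Four LFOs at rates resembling whole, half, quarter, and eighth notes,
# all driven by the same clock so they stay synchronized.
outs, summed = lfo_bank(t=0.25, freqs=[0.25, 0.5, 1.0, 2.0])
```

Because every LFO reads the same clock, sampling any of them individually (or sampling the sum) at a musically synced rate stays locked to the transport.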