The Video Processing System, Part 4
In this installment of the Video Processing System, we're going to tackle two big hurdles that Jitter users often find themselves coming up against. The first thing we will add is an improved, high-performance video-player module built around the poly~ object. This will allow us to load a folder full of videos and switch between them quickly and efficiently. The other module we will add is a simple recording module for capturing our experiments. Since we are using OpenGL texture processing to manipulate the video, this is a little more complicated than just using jit.qt.record, but not by much.
Download the patches used in this tutorial.
Download all the patches in the series updated for Max 7. Note: some of the features in this system rely on the QuickTime engine; the AVF and VIDDLL video engines do not support all of them. With either of those engines, you may see errors related to loadram or have limited access to the file-recording features.
A Better Video Player
Anyone who uses Jitter for video playback long enough ends up building a poly~- or JavaScript-based movie-player bank. The implementation we will look at here is based on one that I have used many times in performance situations. The reason for building such a module is the drop in frame rate that can happen when you send the read message to jit.qt.movie. By preloading a whole mess of jit.qt.movie objects with different videos, you can bypass the lag caused by loading a new file from disk. To do this, we will make a poly~ abstraction that includes a jit.qt.movie object and several messages to control the movies. To see what this abstraction looks like, open up the vps.pmovie file. This patch provides an interface to the jit.qt.movie object through various messages. When the "file" message is sent, it loads the specified file using an "asyncread" message. Once the file is loaded, it triggers a "getduration" message, which allows us to calculate times for the "loadram" message. As currently set up, the patch automatically loads the first and last 2 seconds of each video into RAM (at the assumed 600 timescale, that's 1200 time units). This helps prevent the framerate drop that often happens when a video loops back to the beginning.
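If you prefer to see that logic in script form, here is a minimal sketch of the same idea written for Max's js object. Treat it as an illustration rather than the actual vps.pmovie implementation: the function name is made up, the read is assumed to behave synchronously inside js, and the loadram argument format (start time and duration, in movie time units) is my assumption, so check the jit.qt.movie reference before leaning on it.

```javascript
// Hypothetical js-object sketch of the vps.pmovie idea:
// load a movie, then preload its first and last 2 seconds into RAM.
var mov = new JitterObject("jit.qt.movie");

function file(name) {
    mov.read(name);                  // load the movie (assumed synchronous in js)
    var dur = mov.duration;          // total length in movie time units
    var two = mov.timescale * 2;     // 2 seconds: 1200 units at a 600 timescale
    mov.loadram(0, two);             // head of the movie (assumed start/duration args)
    mov.loadram(Math.max(0, dur - two), two); // tail of the movie
}
```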
To use our handy poly~ abstraction, we need a host patch that interacts with it in the right way. To see how that works, check out the vps.mbank file. Without getting too far into the details of how this patch functions, let's look at the basic idea. Clicking the "Load" button triggers an Open... dialog that allows you to choose a folder. Once a folder is chosen, its filepath is set as the "prefix" for two umenus that are set to autopopulate. The umenu on the right will be used to load each instance of our poly~ abstraction; the one on the left will be our performance control for choosing which movie to play. The subpatch called "populations" creates the correct number of poly~ "voices" and then iterates through the file list, loading a movie into each voice. Finally, the "menu-controls" subpatch provides an interface for stepping through the umenu using triggers. All of this is hosted inside a bpatcher in our main VPS-4 patch so that we can interact with the controls without having to see the entire patch.
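To make the loading step more concrete, here is a hedged js-object sketch of what the "populations" subpatch accomplishes: count the movies in the chosen folder, resize the poly~, and then address each voice in turn with a target message followed by a file message. The function name, the "MooV" filetype filter, and the assumption that the js outlet is patched straight into the poly~ are all illustrative choices, not details lifted from the actual vps.mbank patch.

```javascript
// Hypothetical sketch: load one movie into each poly~ voice.
// Assumes this js object's outlet is connected to the poly~ inlet.
function populate(folderpath) {
    var f = new Folder(folderpath);
    f.typelist = ["MooV"];                  // only look at QuickTime movies
    outlet(0, "voices", f.count);           // one poly~ voice per file (assumed)
    var voice = 1;
    f.reset();
    while (!f.end) {
        outlet(0, "target", voice);         // address a single voice
        outlet(0, "file", folderpath + "/" + f.filename); // load a movie into it
        voice++;
        f.next();
    }
    f.close();
}
```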
Obviously, this module might not do exactly what you want, but it is fairly easy to add more movie controls by altering the vps.pmovie patch. This should provide the basics that you can build on.
Keeping a Record
Eventually, you'll get to a point where you really like some of the video coming out of your video processing patch and want to record some of it to disk. Since the patch began as a live performance tool, there hasn't been much discussion of recording so far. However, since this is Jitter, we can always add another module. One caveat I must mention: it will be difficult to get full frame rates while recording the output of this patch to disk, especially since we are doing most of the processing on the GPU. That said, you can often get perfectly acceptable results if sync is not of vital importance.
In order to get our video from the GPU back into CPU-bound matrix territory, we need a way to rasterize the texture. This is easily done with the jit.gl.asyncread object, which reads the contents of an OpenGL scene asynchronously and provides a rasterized image matrix. We can then connect that to jit.qt.record to record our video. As an added bonus, I've stuck in a simple interface for choosing the video codec and framerate for recording.
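Since that codec-and-framerate interface mostly boils down to formatting jit.qt.record's write message, here is a small js-flavored sketch of that step. Again, this is a sketch under assumptions: the function names are invented, the js outlet is assumed to feed the jit.qt.record object, and the write argument order (filename, frame rate, codec, quality) is my reading of the jit.qt.record reference rather than something taken from the VPS-4 patch.

```javascript
// Hypothetical sketch: start and stop a jit.qt.record from js.
// Assumes this js object's outlet is connected to jit.qt.record.
function startrecord(filename, fps, codec) {
    // Assumed write arguments: filename, frame rate, codec, quality
    outlet(0, "write", filename, fps, codec, "max");
}

function stoprecord() {
    outlet(0, "stop"); // close and finalize the movie file on disk
}
```

In a patch, the umenu choices for codec and framerate would typically end up as arguments to this same write message before recording begins.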
That concludes this installment of the Video Processing System. Stay tuned for more...
by Andrew Benson on September 23, 2009