Best Practices in Jitter, Part 1

    Jitter was first released in 2003. At the time, it provided some of the most comprehensive and intuitive tools for working with video and image processing in realtime. But in the years since that initial release, the computing landscape has changed, and changed dramatically. This series will focus on ways to maximize performance and efficiency in Jitter by laying out some current best practices. In this first article we're going to look at the following topics:
    • Why textures are important and how to use them
    • How to efficiently preview an OpenGL scene or video process in your patch
    • When to choose a matrix over a texture
    • How to minimize the impact of matrices on system resources

    Matrices or Textures?

    One of the most important changes to computers since Jitter's release is the general move away from ever-faster CPUs toward faster GPUs, the graphics processing units in your system. Jitter has the tools to leverage your machine's power, but you'll need to learn some new techniques to access them. Much of what was once handled by the CPU in linear sequence in matrix form can now be done in parallel on a GPU in a fraction of the time using textures. For most users, switching to textures will yield significant (and immediate) performance gains. However, there are certain kinds of processing — computer vision, for example — that rely on analyzing images to detect features or objects. Those kinds of operations still require the use of matrix data, so it’s important to understand both approaches going forward.

    Textures and Shaders

    What is a texture and why is it important? For the purposes of this article, you can think of textures simply as images stored in buffers and processed by shaders on the GPU. To take advantage of textures you need to use specific Jitter objects. The following objects provide support for textures in Jitter:
    • The jit.gl.texture object is simply a storage space for a texture. It handles the uploading of matrix data to the texture’s buffer on the GPU, along with fast copying of textures sent to its input. You can think of the jit.gl.texture object as the GPU equivalent of the jit.matrix object.
    • The jit.gl.slab and jit.gl.pix objects provide the core of any GPU image processing patch. Both of these objects encapsulate a texture buffer and shader program — they can take a matrix or texture as input to fill the buffer and then use a specified shader to process it in a variety of ways.
    Shaders are simply image filters that are processed on the GPU. The jit.gl.slab object loads and runs Jitter shader files. Although jit.gl.pix uses Gen-based patching to do its work, jit.gl.slab and jit.gl.pix are otherwise identical in terms of performing GPU-based processing. While it is possible (and rewarding) to write your own shaders or to do your own GPU-based processing using the jit.gl.pix object (as described in these introductory and advanced tutorials), many Jitter matrix processing objects have corresponding Jitter shaders or Gen patchers that you can use as drop-in replacements for the Jitter objects you know and love.
    A list of object equivalents can be found here.
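    To give a concrete flavor of Gen-based processing, here is a minimal sketch of GenExpr code (the textual form of a Gen patcher) of the kind you could place in a codebox inside jit.gl.pix to build a simple brightness-style filter. The parameter names here are invented for illustration and are not from the article:

```genexpr
// minimal jit.gl.pix codebox sketch (illustrative, not from the article):
// scale and offset the incoming pixel color per cell
Param brightness(1.);
Param offset(0.);
out1 = in1 * brightness + offset;
```

    Because this runs per pixel on the GPU, the same few lines process a 4K frame in parallel rather than cell by cell on the CPU.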

    Using jit.movie, jit.playlist, and jit.grab with textures

    The jit.movie, jit.playlist, and jit.grab objects that you commonly use as input sources for your Jitter patches can all be configured to output textures rather than matrices by setting the @output_texture attribute to a value of 1. In many cases, the performance gains from switching to texture processing will be substantial. For example, on a mid-range laptop, playing back a 4K video through a simple jit.brcosa filter shows a startling difference between matrix and texture processing.

    Vizzie modules and textures

    As of Max 8, all Vizzie effect modules process and output textures automatically. Opening these up and looking at the contained shaders and Gen files can be a great way to deepen your understanding of texture processing.

    Reworking Matrix-based Patches

    So now that you know why you’d want to use textures instead of matrices, the next question is how. For most matrix-based patches, all that’s required is replacing the jit.window object with jit.world, setting the @output_texture attribute to 1 on any video input objects, and then replacing matrix processing objects with their jit.gl.slab or jit.gl.pix equivalents.
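    As a rough sketch, the conversion of a simple playback-and-filter chain might look like the following. The object names are from this article's discussion (td.brcosa.jxs is one of the shader equivalents that ships with Jitter); your patch will differ:

```
CPU (matrix):  jit.movie → jit.brcosa → jit.window

GPU (texture): jit.movie @output_texture 1 → jit.gl.slab @file td.brcosa.jxs → jit.world
```

    The processing chain keeps the same shape; only the data stays on the GPU from the source object onward.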

    Viewing your work

    [ edit - With the release of Max 8.1.4 the following section is no longer relevant. Users are advised to use jit.pwindow for all previewing needs. ]
    One of the most common elements in any Jitter patch is the preview window, usually in the form of jit.pwindow. We use them for everything from checking the state of the image at a particular point in an effect chain to previewing the final output during a performance. And if you are like most of us, they are probably scattered across your patchers and subpatchers. The problem is that without the proper settings these preview windows are often inefficient and have a strong impact on overall performance.
    We could walk you through how to add a second shared render context to efficiently view texture data, but there’s a much easier drop-in solution — simply replace all jit.pwindow objects in your patch with a Vizzie VIEWR module. In many cases, this is all you need to do to considerably speed up an existing patch. The VIEWR module provides high-performance texture previewing, whereas the jit.pwindow object requires a matrix readback to display textures. Another great thing about the Vizzie VIEWR module is that it’s simply a patcher abstraction, and therefore modifiable to suit your needs. To make a new customized preview window based on the VIEWR module, open the vz.viewr patcher (search the File Browser for vz.viewr.maxpat), edit it and resave it to your user Library folder, create a new bpatcher and load your customized patch, and save the bpatcher as a snippet. Here's an example.


    Given all the benefits of using textures, you may wonder why the matrix objects are still included in Jitter. The reason is that certain techniques and operations are only possible using matrices. Things like computer vision (the cv.jit family of objects, jit.findbounds) and analysis (jit.3m, jit.histogram) are only possible using matrix objects on the CPU. Matrices are also used extensively to store and process geometry data for jit.gl.mesh. There may also be cases where the matrix objects simply work better or match a desired look better than their texture equivalents. Therefore the trick is knowing how to use matrix objects effectively.

    Minimize Uploads and Readbacks

    There are two basic rules to follow to get the best performance out of your patch when working with textures and matrices:
    1. Minimize data copies from the CPU to the GPU (uploads) and ensure they happen as early in your processing chain as possible. When possible, use the @output_texture attribute.
    2. Minimize data copies from the GPU to the CPU (readbacks) and ensure they happen at the smallest dimensions possible.
    An example of a data upload is loading an image file with the importmovie message of jit.matrix and sending that directly to a jit.gl.texture object. Once the image data is uploaded to the texture object, it should only be sent to texture processing objects (jit.gl.slab / jit.gl.pix) or sent to some geometry for display (jit.gl.videoplane, for example). For a real-world example, the following patch, posted by user Martin Beck in this epic thread on glitch techniques from the Jitter forum, demonstrates the concept.
    The matrix operations happen early in the chain; the results are uploaded to the GPU, and from that point forward all processing is done using texture objects.
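    The upload pattern described above can be sketched as a signal chain (object names as discussed in this article; the actual forum patch is more elaborate):

```
jit.matrix (importmovie) → matrix ops on the CPU → jit.gl.texture → jit.gl.slab / jit.gl.pix → jit.gl.videoplane / jit.world
```

    Everything to the left of jit.gl.texture runs on the CPU; the single upload happens there, and everything downstream stays on the GPU.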
    For an example of best practices with readbacks, check out the following example patch demonstrating color tracking with the jit.findbounds object. In the patch, jit.grab is set to output a texture at full resolution. In order to perform the color tracking operation with jit.findbounds, the texture must be read back into a matrix. The optimal object for this is jit.gl.asyncread (asyncread stands for asynchronous readback, because the output matrix is delayed by a single frame from the input texture). We take the additional step of downsampling the texture on the GPU using a jit.gl.texture object with the adapt attribute disabled (@adapt 0) and the dim attribute set to the smallest size necessary for accurate detection.
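    A sketch of that readback chain looks like this. The @dim values below are illustrative stand-ins, not the values from the example patch:

```
jit.grab @output_texture 1 → jit.gl.texture @adapt 0 @dim 80 60 → jit.gl.asyncread → jit.findbounds
```

    The downsampling happens on the GPU before the readback, so the expensive GPU-to-CPU copy moves only a tiny matrix.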
    Doing readbacks efficiently
    An additional example of using readbacks efficiently is shown below. In this example of a basic luminance displacement effect, we use the GPU to flip and downsample the input texture as necessary for our geometry. We also feed the same input texture, unaltered at full resolution, to a jit.gl.material object to use as the diffuse color texture.
    An efficient luma displacement effect

    • May 14 2019 | 8:41 pm
      Sweet. Fantastic summary and resource.
    • May 15 2019 | 5:41 pm
      Thanks for this really well explained resource!
    • May 15 2019 | 8:06 pm
      This is great
    • May 26 2019 | 10:39 pm
      This article is interesting but not so easy to understand. I don't get the cv.jit.shift.draw part. The object doesn't work anymore with textures (inside it, jit.matrixinfo and jit.change don't understand them). The object doesn't help here. I would like to see a much deeper example with the objects that don't understand textures. Maybe someone could help me understand what to do with these texture-disliking objects.
    • May 27 2019 | 6:40 pm
      as mentioned in the article, some techniques are best left to matrix objects. that being said, the shift.draw abstraction is simply drawing lines with jit.lcd. this *could* be translated to the GPU if there is a desire, but considering this object is merely a utility and demo of using the output of the cv shift operator, it's probably not worth the effort.
      however the grab output and chromakey operation could absolutely benefit from a move to the GPU, especially if the output intends to utilize HD resolutions for display. in this case asyncread comes into play to transform back to a matrix in order to process with the cv objects. here's my translation of the cv.jit.shift help patcher for efficient processing on the GPU where possible:
    • May 28 2019 | 3:19 am
      Useful and concise. Congrats!
    • Jul 02 2019 | 8:45 pm
      Is it possible to do parallel processing on multicore CPUs with jit.gen e.g. by using [poly~ JITTERPATCH @parallel 1 ] ?
    • Jul 03 2019 | 4:51 pm
      poly~ parallel only affects audio processing.
    • Jul 10 2019 | 6:52 pm
      Great thread and patches!
    • Dec 01 2019 | 5:37 pm
      This is really timely, thanks. So is it possible to create nurbs (as in jit.gl.nurbs) as a texture vs a matrix?
    • Dec 02 2019 | 4:14 pm
      hi Bob, are you referring to the control-matrix to jit.gl.nurbs? that must be sent as a matrix, so if you need to convert from a texture you would use jit.gl.asyncread as described in this article.
    • Feb 02 2020 | 8:05 pm
      The Luma displacement patch (rutt-etra.maxpat) yields an unmodified image for me. The Z displacement simply shifts the entire plane. The light can be seen affecting the scene. The only thing I changed from the example is swapping a playr for the grab.
    • Feb 03 2020 | 5:33 pm
      looks like this is broken with the gl3 engine correct?
      i believe it has to do with the @rect attribute breaking the object in certain cases.
    • Feb 03 2020 | 6:53 pm
      Correct - Jitter Tutorial 34 is also broken - you can't apply the texture to the gl geometry. Is there a pinned gl3 thread for bug reports?
    • Jun 25 2020 | 9:17 am
      Could you confirm that since Max version 8.1.4 it is better to use jit.pwindow than vz.viewr for texture display?
    • Jun 25 2020 | 12:20 pm
      @RobRamirez. yes i was referring to the control-matrix to jit.gl.nurbs. thanks for this elegant solution.
    • Jun 26 2020 | 10:49 am
      Rob, could you also confirm that jit.gl.asyncread doesn't accept a matrix input anymore in the current Max version and GL3?
    • Jun 26 2020 | 5:30 pm
      Thanks for the reminder Federico, I will update the doc to reflect the new 8.1.4 features (you are correct that pwindow is all that's needed now).
      I'm not sure what you're referring to with asyncread, why would you send a matrix to asyncread?
    • Jun 26 2020 | 7:06 pm
      Sorry Rob, I got confused, I was thinking for some reason that jit.gl.asyncread was reading a matrix into a texture but it doesn't make any sense at all.
    • Sep 18 2020 | 1:40 pm
      I have a question about the color tracking patch. I'm curious if there is some reason the pix object is not a gl.pix and the asyncread just comes after that? Would this have a different result in the pwindow? I have done that and it seems to work the same, but I have been known to make mistakes and overlook things.
    • Sep 18 2020 | 3:12 pm
      seems like that would work fine.
    • Sep 18 2020 | 11:14 pm
      great thanks again, just wanted to get the official word.