Tutorials

Best Practices in Jitter, Part 1

Jitter was first released in 2002. At the time of its release, it provided some of the most comprehensive and intuitive ways of working with video and image processing in realtime. But times have changed; in the years since the initial release of Jitter, the computing landscape has changed, and changed somewhat dramatically. This series will focus on ways to maximize performance and efficiency in Jitter by laying out some current best practices. In this first article we're going to look at the following topics:

  • Why textures are important and how to use them

  • How to efficiently preview an OpenGL scene or video process in your patch

  • When to choose a matrix over a texture

  • How to minimize the impact of matrices on system resources

Matrices or Textures?

One of the most important changes to computers since Jitter's release is the general move away from ever-faster CPUs toward faster GPUs (the graphics processing units in your system). Jitter has the tools to leverage your machine's power, but you'll need to learn some new techniques to access them. Much of what was once handled by the CPU in linear sequence in matrix form can now be done in parallel on a GPU in a fraction of the time using textures. For most users, switching to textures will yield significant (and immediate) performance gains. However, there are certain kinds of processing — computer vision, for example — that rely on analyzing images to detect features or objects. Those kinds of operations still require the use of matrix data, so it's important to understand both approaches going forward.

Textures and Shaders

What is a texture and why is it important? For the purposes of this article, you can think of a texture simply as an image stored in a buffer and processed by shaders on the GPU. To take advantage of textures you need to use specific Jitter objects. The following objects provide support for textures in Jitter:

  • The jit.gl.texture object is simply a storage space for a texture. It handles the uploading of matrix data to the texture’s buffer on the GPU, along with fast copying of textures sent to its input. You can think of the jit.gl.texture object as the GPU equivalent of the jit.matrix object.

  • The jit.gl.slab and jit.gl.pix objects provide the core of any GPU image processing patch. Both of these objects encapsulate a texture buffer and shader program — they can take a matrix or texture as input to fill the buffer and then use a specified shader to process it in a variety of ways.

Shaders are simply image filters that run on the GPU. The jit.gl.slab object loads and runs Jitter shader files, while the jit.gl.pix object uses Gen-based patching to do its work; otherwise, the two are identical in terms of performing GPU-based processing. While it is possible (and rewarding) to write your own shaders or to do your own GPU-based processing with the jit.gl.pix object (as described in these introductory and advanced tutorials), many Jitter matrix-processing objects have corresponding Jitter shaders or Gen patchers that you can use as drop-in replacements for the Jitter objects you know and love.
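As a rough illustration of what Gen-based processing looks like, here is a hypothetical jit.gl.pix codebox in the spirit of jit.brcosa. The parameter values and the order of operations are our own sketch, not the object's exact implementation:

```
// Sketch of a brightness/contrast/saturation filter in a jit.gl.pix codebox.
// Hypothetical constants; in a real patch these would be params.
brightness = 1.2;
contrast = 1.0;
saturation = 1.0;
c = sample(in1, norm);                               // input pixel (RGBA)
luma = c.r * 0.299 + c.g * 0.587 + c.b * 0.114;      // Rec. 601 luminance
c = mix(vec(luma, luma, luma, c.a), c, saturation);  // fade between gray and color
c = (c - 0.5) * contrast + 0.5;                      // contrast pivoted at mid-gray
out1 = c * brightness;
```

Because this runs per pixel on the GPU, every pixel is processed in parallel rather than in the linear sequence a matrix object would use.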

A list of object equivalents can be found here.

Using jit.playlist, jit.movie and jit.grab with textures

The jit.playlist, jit.movie and jit.grab objects that you commonly use as input sources for your Jitter patches can all be configured to output textures rather than matrices by setting the @output_texture attribute to 1. In many cases, the performance gains from switching to texture processing will be substantial. For example, on a mid-range laptop, playing back a 4K video through a simple jit.brcosa filter shows a startling difference between matrix and texture processing.

Vizzie modules and textures

As of Max 8, all Vizzie effect modules process and output textures automatically. Opening these up and looking at the contained shaders and Gen files can be a great way to deepen your understanding of texture processing.

Reworking Matrix-based Patches

Now that you know why you'd want to use textures instead of matrices, the next question is how. For most matrix-based patches, all that's required is replacing the jit.window object with jit.world, setting @output_texture 1 on any video objects, and then replacing each matrix object with its jit.gl.slab or jit.gl.pix equivalent.
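For instance, a matrix-domain [jit.op @op *] combining two sources might be replaced with a two-input jit.gl.pix. A minimal sketch of the equivalent codebox:

```
// Sketch: multiply two textures, a GPU stand-in for [jit.op @op *]
// (assumes a jit.gl.pix object created with @inputs 2)
a = sample(in1, norm);   // first texture input
b = sample(in2, norm);   // second texture input
out1 = a * b;            // per-pixel multiply
```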

Viewing your work

[ edit - With the release of Max 8.1.4 the following section is no longer relevant. Users are advised to use jit.pwindow for all previewing needs. ]

One of the most common elements in any Jitter patch is the preview window, usually in the form of a jit.pwindow. We use them for everything from checking the state of the image at a particular point in an effect chain to keeping a preview of the final output visible during a performance. And if you are like most of us, they are probably scattered across your patchers and subpatchers. The problem is that without the proper settings these preview windows are often inefficient and can have a strong impact on overall performance.

We could walk you through how to add a second shared render context to efficiently view texture data, but there's a much easier drop-in solution — simply replace all jit.pwindow objects in your patch with Vizzie VIEWR modules. In many cases, this is all you need to do to considerably speed up an existing patch. The VIEWR module provides high-performance texture previewing, whereas the jit.pwindow object requires a matrix readback to display textures. Another great thing about the Vizzie VIEWR module is that it's simply a patcher abstraction, and therefore modifiable to suit your needs. To make a new customized preview window based on the VIEWR module:

  • Open the vz.viewr patcher (search the File Browser for vz.viewr.maxpat).

  • Edit it and resave it to your user Library folder.

  • Create a new bpatcher and load your customized patch.

  • Save the bpatcher as a snippet.

Here's an example:

MY.VIEWR.zip

Matrices

Given all the benefits of using textures, you may wonder why the matrix objects are still included in Jitter. The reason is that certain techniques and operations are only possible with matrices. Computer vision (the cv.jit family of objects, jit.findbounds) and analysis (jit.3m, jit.histogram) require matrix data on the CPU. Matrices are also used extensively to store and process geometry data for jit.gl.mesh. There may also be cases where the matrix objects simply work better or match a desired look better than their texture equivalents. The trick, therefore, is knowing how to use matrix objects effectively.

Minimize Uploads and Readbacks

There are two basic rules to follow to get the best performance out of your patch when working with textures and matrices:

  1. Minimize data copies from the CPU to the GPU (uploads), and make sure they happen as early in your processing chain as possible. When possible, use the @output_texture attribute.

  2. Minimize data copies from the GPU to the CPU (readbacks), and make sure they happen at the smallest dimensions possible.

An example of a data upload is loading an image file with the importmovie message of jit.matrix and sending the result directly to a jit.gl.texture object. Once the image data is uploaded to the texture object, it should only be sent to texture-processing objects (jit.gl.slab / jit.gl.pix) or to some geometry for display (jit.gl.videoplane). For a real-world example, the following patch, posted by user Martin Beck in this epic Jitter-forum thread on glitch techniques, demonstrates the concept.

The matrix operations happen early in the chain, the results are uploaded to the GPU and from that point forward all processing is done using texture objects.

For an example of best practices with readbacks, check out the following example patch demonstrating color tracking with the jit.findbounds object. In the patch, jit.grab is set to output a texture at full resolution. In order to perform the color tracking operation with jit.findbounds, the texture must be read back into a matrix. The optimal object for this is jit.gl.asyncread (short for asynchronous readback, because the output matrix is delayed by a single frame from the input texture). We take the additional step of downsampling the texture on the GPU using a jit.gl.texture object with the adapt attribute disabled (@adapt 0) and the @dim attribute set to the smallest size necessary for accurate detection.
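A keying stage ahead of the readback can be sketched as a jit.gl.pix codebox that masks pixels near a target color, so the small readback matrix hands jit.findbounds a clean mask. The target color and threshold below are our own assumptions, not values from the example patch:

```
// Sketch: keep only pixels near a target color before readback,
// so jit.findbounds has a clean mask to locate
target = vec(1., 0., 0.);   // assumed target: red
thresh = 0.3;               // assumed tolerance
c = sample(in1, norm);
d = abs(c.r - target.x) + abs(c.g - target.y) + abs(c.b - target.z);
out1 = (d < thresh) ? vec(1., 1., 1., 1.) : vec(0., 0., 0., 1.);
```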

Doing readbacks efficiently

An additional example of using readbacks efficiently is shown below. In this example of a basic luminance displacement effect we use the GPU to flip and downsample the input texture as necessary for our geometry. We also feed the same input texture unaltered at full resolution to a jit.gl.material object to use as the diffuse color texture.

An efficient luma displacement effect
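The flip step mentioned above can be done entirely on the GPU; here is a minimal jit.gl.pix codebox sketch (the downsampling itself would come from the object's dimensions, not the code):

```
// Sketch: flip the input texture vertically by inverting the y sample coordinate
out1 = sample(in1, vec(norm.x, 1. - norm.y));
```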

BP1-patches.zip
Download the three example patches


Learn More: See all the articles in this series

by Rob Ramirez and Cory Metcalf on May 14, 2019

slo ~|•:

Sweet. Fantastic summary and resource.

Doug MeatLoaf:

Thanks for this really well explained resource!

Dante:

This is great

cri:

This article is interesting but not so easy to understand. I don't get it with cv.jit.shift.draw — the object doesn't work with textures anymore (the jit.matrixinfo and jit.change inside it). The jit.gl.asyncread object doesn't help here. I would like to see a much deeper example with objects that don't understand textures. Maybe someone could help me understand what to do with these texture-disliking objects.

Rob Ramirez:

as mentioned in the article, some techniques are best left to matrix objects. that being said, the shift.draw abstraction is simply drawing lines with jit.lcd. this *could* be translated to jit.gl.slab/jit.gl.pix if there is a desire, but considering this object is merely a utility and demo of using the output of the cv shift operator, it's probably not worth the effort.

however the grab output and chromakey operation could absolutely benefit from a move to the GPU, especially if the output intends to utilize HD resolutions for display. in this case asyncread comes into play to transform back to a matrix in order to process with the cv objects.

here's my translation of the cv.jit.shift help patcher for efficient processing on GPU where possible:

Max Patch
Copy patch and select New From Clipboard in Max.

Jean-Francois Charles:

Useful and concise. Congrats!

Martin Beck:

Is it possible to do parallel processing on multicore CPUs with jit.gen e.g. by using [poly~ JITTERPATCH @parallel 1 ] ?

Rob Ramirez:

poly~ parallel only affects audio processing.

TConnors:

Great thread and patches!

Bob:

This is really timely, thanks. So is it possible to create nurbs (as in jit.gl.nurbs) as a texture vs a matrix?

Rob Ramirez:

hi Bob, are you referring to the control-matrix to jit.gl.nurbs? that must be sent as a matrix, so if you need to convert from a texture you would use jit.gl.asyncread as described in this article.

Shane:

The Luma displacement patch (rutt-etra.maxpat) yields an unmodified image for me.
The Z displacement simply shifts the entire plane. The light can be seen affecting the scene. The only thing I changed from the example is swapping a playr for the grab.

Rob Ramirez:

looks like this is broken with the gl3 engine, correct?

i believe it has to do with the @rect attribute of jit.gl.slab breaking the object with certain cases.

Shane:

Correct -
Jitter Tutorial 34 is also broken - you can't apply the texture to the gl geometry. Is there a pinned gl3 thread for bug reports?

Federico-AmazingMaxStuff:

Could you confirm that since Max version 8.1.4 it's better to use jit.pwindow than vz.viewr for texture display?

Bob:

@RobRamirez. yes i was referring to "the control-matrix to jit.gl.nurbs". thanks for this elegant solution.

Federico-AmazingMaxStuff:

Rob could you also confirm that jit.gl.asyncread doesn't accept a matrix input anymore in current Max version and GL3?

Rob Ramirez:

Thanks for the reminder Federico, I will update the doc to reflect the new 8.1.4 features (you are correct that pwindow is all that's needed now).

I'm not sure what you're referring to with asyncread, why would you send a matrix to asyncread?

Federico-AmazingMaxStuff:

Sorry Rob I got confused, I was thinking for some reason that jit.gl.asyncread was reading a matrix into a texture but it doesn't make any sense at all.

Dante:

I have a question about the color tracking patch. I'm curious if there is some reason the pix object is not a gl.pix and the asyncread just comes after that? Would this have a different result in the pwindow? I have done that and it seems to work the same, but I have been known to make mistakes and overlook things.

Rob Ramirez:

seems like that would work fine.

Dante:

great thanks again, just wanted to get the official word.

Bill 2:

Does Apple's "unified memory" make any difference to the need to avoid matrix readbacks?