Poor Jitter/GPU performance using LFO for motion
I’m experiencing less-than-smooth Jitter motion when panning, rotating and/or zooming still images via an LFO at reasonably slow speeds. Even though I have a decent recent Mac system with a screaming graphics card, I would swear the performance is little, if any, better than the 10-year-old system it replaced.
In the end, the images I want to process are larger than 1080p; however, downscaling them in Photoshop to 1080p or even smaller doesn’t really seem to make much difference in motion smoothness. I understand that real-time output might not be as smooth as frame-by-frame rendering, but even at very slow modulation speeds the behavior is inconsistent: even when it runs fairly smoothly, there are sporadic catches or stutters in the motion.
Simple patch attached uses PLAYR > PANNER > TRANS4MR > jit.world > jit.gl.syphonserver, with OSCIL8R doing the slow mod. Output size is 1920 x 1080 @ 30fps. You can see an example of my results in this short test vid (Vimeo wasn’t kind to the compression but you can still see the motion anomalies): https://tinyurl.com/2dsw2xyf
System:
- Max 8.1.10
- Mac Catalina OS 10.15.7
- 2018 Mac mini (3.2 GHz 6-Core Intel Core i7) w/ 32 GB
- Plenty of USB 3.1 SSD space.
- Radeon RX Vega 64 (8 GB).
- Main monitor is up to 3840 x 2160, second monitor for jit.world full-screen display is 1920 x 1080
- Max video prefs are set for viddll (as recommended)
- Apple Software Renderer v 2.1 APPLE-17.20.22, GLSL v 1.20.
It seems that I get slightly smoother jit.world display on 1080p monitor 2 and Syphon recordings if I keep the main interface monitor at lower resolutions and/or hide the Max interface window (less than ideal workflow). I have a hard time believing this should even be a factor with this class of GPU?
Syphon-related docs advise that “the framerate you drive your jit.gl.syphonserver instance also controls the rendering speed of the attached clients. Lowering your framerate is beneficial to the whole system”. I’m sending “loadmess fps 30” to jit.world. Is this sufficient system-wide? PLAYR and jit.world outputs to jit.fpsgui still hover around 60 fps, and Syphon recordings are usually 30.xx fps rather than exactly 30.
Research indicates that Mac GPU driver versions are tied to OS updates, and that Big Sur uses later GPU drivers. I’ve held at Catalina to let other software and plugins catch up, but I’m probably willing to go to Big Sur now… however, I’ve also read of Max folks having issues with Big Sur. As part of this smooth-motion issue, I’d appreciate advice on Big Sur’s GPU performance benefits versus any downsides.
Also, LFO ramp waves seem to produce smoother motion than sine or triangle. Overall, should I be looking at some other LFO architecture with smoother output for this task?
Note that I’m reasonably comfortable with Max at large if I need to go outside Vizzie… just using these prefab modules as starting points.
Thanks so much!
something to try for still images is swapping PLAYR for a simple jit.gl.texture (reading image files via the file attribute). can't say for sure if it will make a difference but PLAYR may not be optimized for still images.
another thing I noticed is that OSCIL8R doesn't sync its output to the renderer, which can cause subtle stutters. I'll file a ticket for this, but in the meantime you can try using jit.time objects (e.g. jit.time.saw) for your LFOs. these objects will sync output to the render context.
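here's a rough sketch of the kind of chain I mean (the image filename and the scale range are just placeholders, not recommended settings):
[jit.gl.texture @name still_tex @file yourimage.png]
[jit.gl.videoplane @texture still_tex]
[jit.time.saw] -> [scale 0. 1. -0.5 0.5] -> (position $1 0. 0.) message -> [jit.gl.videoplane]
the jit.time ramp gets scaled into a pan range and drives the videoplane's position attribute, all inside your jit.world context. you'll likely want to adjust the jit.time object's rate settings to taste (check its reference page for the exact attribute names).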
Thanks, Rob... I really appreciate your response.
While I'm happy to know that jit.gl.texture and jit.time are a more solid foundation, sadly, my motion tests of your patch mod using higher-res images are only marginally smoother, if at all. I've posted a new syphoned 30fps vid using the same 1920x1080 image as input:
Related thoughts:
This recording was made with my main ui monitor set at 3000 x 1692, jit.world monitor at 1080. Tests w/ main monitor at 1080 and Max ui hidden appear to have smoother display, but no discernible improvement on recording.
Even though I have an attrui setting fps 30 on jit.world, jit.fpsgui still shows jit.gl.texture and jit.world at approx. 60 fps. That would seem to be great performance... unless that speed is causing the motion to bog down when I only need to display and record 30 fps. I'm unclear on how else to control the fps, if that's practical. (qmetro doesn't seem to apply to jit.gl.texture or jit.world.)
Still unclear if upgrading from Catalina to Big Sur might improve things... and if safe/recommended at this time in general.
Thanks again!
if on Mac you must disable displaylink (@displaylink 0) on your jit.world to decouple the fps from the display refresh rate. you may want to disable vsync as well (@sync 0).
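as a minimal sketch (the 30 fps value just mirrors what you're already sending via loadmess):
[jit.world @displaylink 0 @sync 0 @fps 30]
or send the messages "displaylink 0", "sync 0" and "fps 30" to the jit.world you already have.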
hopefully that helps alleviate the stutters.
I doubt there will be any differences in performance between Catalina and Big Sur. The latest Max 8 is fully compatible with Big Sur.
@displaylink 0 and @sync 0 have made a big improvement. Still not sure I'm completely out of the woods re stutters, but will stress test for a while and report back re any further issues.
Thanks so much!
Thx. I’ve been doing a lot of testing. Setting jit.world @displaylink 0 and @sync 0 has improved smoothness. However, I still get jit.time-driven motion stutters and a drop in frame rate in very simple patches with 1080p source images and output… unless I hide the ui on the main ui monitor. In fact, when I open a test patch before loading an image, the default 256 x 256 grayscale texture grid stutters significantly, even with jit.time set at a crawl.
So it appears that the ui display is either eating into GPU performance, isn’t in sync with the output on the other display, or both.
Is there a way to have Max utilize the Mac Mini’s onboard GPU for the main ui monitor and the external GPU for the full-screen 1080p output monitor? (I found a very old thread that said running a texture across two GPUs is a recipe for disaster… and, indeed, the ui ideally needs to have some small thumbnail source displays such as included in CROPPR.)
It seems like I’m still really missing something, as it must be standard practice to have a decent-sized ui display running along with a separate dedicated output display.
One thing that keeps trying to ring a bell is that different GPUs are optimized for different tasks, and my specific task is manipulating pixel-based source material rather than generating textures from polygons, etc. My RX Vega 64 8GB is a pretty serious card, but I’m wondering whether it doesn’t excel at this task, or whether there’s some driver setting I haven’t found on my Mac that would better optimize it for the task.
I found a thread and ran jitter_benchmark-v1.0121 with these results:
CPU: 414.3
GPU Geometry 1: 57.7
GPU Geometry 2: 93.1
GPU Pixel Shaders: 324.4
Thanks!
Hi, Jeff. Obviously, Rob should have a more definitive and trustworthy answer for you, but here’s my opinion based on previous experience and asking and reading here in the forums:
- Unfortunately, Jitter shares the main low-priority Max thread in which the Max UI elements are also drawn. I think it was an understandable design decision when Jitter was built, but a terrible one in hindsight. As the years pass and multicore processing becomes the norm, this problem only gets more evident.
- The ideal situation would be for Jitter processing/rendering to have its own thread, and/or for UI drawing to be taken out of the main thread. However, I know that this is easier said than done… in Max 9, Rob? ;-)
- Right now, the common workaround to this problem is to run two separate instances (applications) of Max, separating the user interface from Jitter processing/rendering. Communication between the two applications can be accomplished through UDP or TCP packets (UDP/Open Sound Control is usually sufficient).
ah you may be on to something re: VIZZIE GUI. I'm not sure how things work with external GPUs on Mac but yes, if your main patch is on one GPU and the main display context on the other, then things will get screwy with VIZZIE modules that display shared textures in the GUI.
If they are both using the same GPU then you're probably fine, but you are going to be running into the standard "UI and render context sharing the same thread" issues that all Jitter projects must deal with.
The "standard" solution to these issues (whether it's the same GPU or multiple GPUs) is to separate your UI from your Jitter engine, and run each in separate instances of Max, passing messages via OSC, and sharing textures (if needed) via Syphon/Spout. This will mean forgoing Vizzie modules for your own customizations, but fortunately the guts of Vizzie modules are easily extracted (for the most part).