Am I going down the wrong path here? (Is Jitter suitable for this project?)

James Corley's icon

For two years I've been meaning to create a music video in the style of the Autechre Gantz Graf music video or the unofficial Plyphon video. I've dabbled with audiovisual work at uni, but we used the ArtMatic software rather than Jitter, and it was pretty tedious to use, especially for creating long, complex animations like that of Gantz Graf.

Now, for this project I was planning on using Jitter and sending data over UDP from Ableton Live for sequencing, creating the whole sequence by extensively morphing a single patch. But the more I look into the approach, the more I think it may be a totally nightmarish route to try (like exporting the video, being able to export different layers to composite later, etc.).

So I started looking at using Blender or Cinema 4D plus After Effects, but I have no clue how to use Blender, and Cinema 4D costs a lot.

So I'm asking to see if anyone here has any thoughts on this:

Is using Jitter for this style of project doable, or just insane? (I don't mind getting technical with patching.)

Otherwise, is something like Blender or Cinema 4D more suitable for such a task?

I would be animating all elements by hand through keyframes or automation rather than using audio-reactive visuals.

Anyone here have any experience or even insight into this?

P.S. I'm not asking how to create such visuals in Jitter, but rather how feasible such a project is over the long term.

Martin Beck's icon

After having a look at the video I would summarise my opinion as follows:

a) The main element in the video seems to be the audio-reactive extrusion of a gridshape. Modeling this via keyframe automation would not give the expected results. The extrusion by itself could be made like here, e.g.
https://www.youtube.com/watch?time_continue=1148&v=5DdGRHDVJE0
https://www.youtube.com/watch?v=afZPymIXg20
https://www.youtube.com/watch?v=-_v7yEAWnwg
but it may require some more sophisticated adaptations if the basic shape is as complex as in the Gantz Graf example.
Audio reactivity is quite easy to accomplish if it is sufficient to only react to transients in the audio, e.g. have a look here (my post from 23 May 2019):
https://cycling74.com/forums/how-to-make-those-stunning-videogame-glitches-!/

b) Automation via udpsend / udpreceive between Ableton Live and Max works well for me, e.g. for automating camera movements (a rough sketch of that kind of setup follows after point c).

c) The shader for the lighting might be a little challenging. I think the Gantz Graf video was made with TouchDesigner, which ships with this kind of shader by default. With Max, especially with the GL3 package (currently in public beta, which allows importing Shadertoy shaders), this should be possible, but may require some more extensive searching. Max has everything on board to adjust lighting, but it lacks some basic world-editor features that make Blender and Cinema 4D convenient. In those programs there is a world editor that lets you position your objects, cameras and lights, and the properties of these objects are arranged in "tidy" windows next to this editor. The final render view has its own window. In Max everything is boxes and wires and looks like code, and you more or less only have the final rendering window, which is perfect for audio-reactive patching but may make you lose the overview of your scene if the number of objects gets too big. In my opinion, though, Gantz Graf is below this threshold.
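To make point b) a little more concrete: I send the automation from Max for Live via udpsend, but the same idea can be sketched from any OSC-capable sender. The snippet below is a minimal, hypothetical example, assuming the python-osc package and a [udpreceive 7400] feeding [route /camera/pos] on the Max side; the address and port are arbitrary placeholders, not anything from my actual setup.

```python
# Minimal sketch: stream camera automation to Max over UDP/OSC.
# Assumes the python-osc package (pip install python-osc) and a
# [udpreceive 7400] -> [route /camera/pos] chain on the Max side;
# the OSC address and port are arbitrary placeholder choices.
import math
import time

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)

start = time.time()
while time.time() - start < 10.0:          # run for 10 seconds
    t = time.time() - start
    # slowly orbit the camera around the origin
    x, y, z = 4.0 * math.cos(t * 0.2), 1.0, 4.0 * math.sin(t * 0.2)
    client.send_message("/camera/pos", [x, y, z])
    time.sleep(1.0 / 60.0)                 # ~60 updates per second
```

In Max the routed list would then just drive the camera position of [jit.gl.camera] or whatever parameter you want to automate.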

James Corley's icon

Thank you for such a detailed answer!
Some of my concerns did come down to the kind of world view that Blender or C4D may give, but my thinking is that despite the lack of a strong world and object overview, I'd much prefer to stare at Max patches than try to model and animate things in Blender!

So whilst I'm hoping Jitter could definitely handle the Gantz Graf scene (the world seems small), in the plyphon video the world seems a lot larger, with the camera constantly moving across the scene rather than panning, cutting and jumping around a central shape as in Gantz Graf.

If camera movements can be automated from Live over UDP as you say, then the camera movements don't seem too hard to achieve, and the large scene can seemingly be achieved in many ways, but the number of objects could cause an issue (although, as I said, I'm more comfortable with Max than anything else, as Blender makes my brain melt over just a single shape).

In terms of the shaders, I'd love to do them in Max, but I was planning to export and do a bit of post-processing in After Effects, which leads to my other concern: exporting.

I know I can render the view output using Syphon, which I've used before, but only for short clips and low-quality experiments. How successfully would I be able to render out the composition/animation from Jitter? In Blender the non-real-time render easily allows for good quality and standard frame rates, but would real-time recording of the Jitter output introduce artifacts or stutters when recording to disk? And might there be a way to record multiple passes of the scene to capture different layers for compositing?

Martin Beck's icon

The complexity of the plyphon video is another story. It might be possible with several [jit.gl.multiple @glparams position scale] and a sophisticated way to create the position matrix and the scale matrix. The GPU instancing that comes with GL3 might also speed up several things. Regarding recording, you presumably know this tutorial https://cycling74.com/tutorials/best-practices-in-jitter-part-2-recording-1 , which also covers non-real-time rendering.
But as explained for the Gantz Graf video above, I believe these kinds of videos are made with audio-reactive real-time software. You can automate the slow transitions and the camera by keyframes, but the tight timing of objects appearing coincident with transients in the audio is not easy to align with non-real-time techniques.
Regarding post-processing and compositing, you can render the same scene, or just a subset of the objects, by using several [jit.gl.node] objects.
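Just to illustrate what "a sophisticated way to create the position matrix" could mean offline: the hypothetical sketch below computes per-instance positions and scales and writes them out as a coll-style text file. You would still have to get that data into matrices on the Max side (e.g. via jit.fill), and the grid layout and file name are placeholders, not anything from the actual videos.

```python
# Hypothetical sketch: precompute per-instance position and scale rows
# for something like [jit.gl.multiple @glparams position scale].
# Writes a coll-style text file ("index, x y z sx sy sz;") that could be
# read back in Max and pushed into matrices (e.g. with jit.fill).
# The grid layout and file name are placeholders only.
import math

N = 16                                  # instances per side
with open("instances.coll.txt", "w") as f:
    index = 0
    for i in range(N):
        for j in range(N):
            x = (i - N / 2) * 0.5       # spread instances on a grid
            z = (j - N / 2) * 0.5
            y = 0.25 * math.sin(i * 0.7) * math.cos(j * 0.7)
            scale = 0.1 + 0.05 * abs(math.sin(i + j))
            f.write(f"{index}, {x:.3f} {y:.3f} {z:.3f} "
                    f"{scale:.3f} {scale:.3f} {scale:.3f};\n")
            index += 1
```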

James Corley's icon

Ah, I was familiar with the tutorial, but it has now shed some extra light on certain areas, thanks!

Non-real-time rendering seems useful for certain things, and the creator of the Gantz Graf video (Alex Rutterford) says in this interview: https://warp.net/updates/alex-rutterford-on-the-creation-of-the-gantz-graf-video
that it was almost entirely keyframed by hand using spreadsheets of data and then rendered in non-real-time. Could be fun to implement the same process using coll or dict! (Or to try recording the real-time data from Ableton into a format that can then be run non-real-time...?)
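As a first stab at the dict idea, a hypothetical little script like the one below could turn a spreadsheet export into JSON that a [dict] can import, keyed by frame number. The column names and parameters are just placeholders for whatever the patch ends up exposing.

```python
# Hypothetical sketch: convert a spreadsheet export (CSV) of hand-made
# keyframes into JSON that a Max [dict] can import, keyed by frame number.
# Expected CSV columns: frame, param1, param2, param3 (placeholder names).
import csv
import json

keyframes = {}
with open("keyframes.csv", newline="") as src:
    for row in csv.DictReader(src):
        keyframes[row["frame"]] = {
            "param1": float(row["param1"]),
            "param2": float(row["param2"]),
            "param3": float(row["param3"]),
        }

with open("keyframes.json", "w") as dst:
    json.dump(keyframes, dst, indent=2)
```

A frame counter driving the non-real-time render could then look up each frame number in the dict and route the stored values to the patch parameters.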

Also, as another interesting point, this is the creator of the plyphon video commenting on the process (from this forum):

About the video, no automated work at all. Everything is modeled, animated and synchronized by hand, frame by frame, using ears for sound reverse engineering. A true pain in the ass but I'm happy within the final result, I don't know if it could be officialized or not, just trying to work at my best. I gladly dedicate it to Autechre

Martin Beck's icon

Thanks for the insights about the keyframe technique involved. I stated above that you might use UDP for sending automation data between Live (Max for Live) and Max; I use this mainly for slowly evolving parameters. If you really want to do keyframing, I would suggest slicing your audio to MIDI in Ableton Live (I mean the built-in function) and sending the MIDI to Max. In Max you could trigger envelopes from the received MIDI to do automation on small time scales, e.g. like here https://cycling74.com/forums/mc-function-scrubbing-triggering-and-convenience , or make use of the ease package.
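If you also want those slice triggers available for the non-real-time render, one hypothetical offline route is to export the sliced MIDI clip from Live and convert the note-ons to frame numbers. The sketch below assumes the mido package and a 30 fps render; the file name and frame rate are placeholders.

```python
# Hypothetical sketch: turn a MIDI clip exported from Live (e.g. the result
# of "Slice to New MIDI Track") into frame numbers for keyframed triggers.
# Assumes the mido package (pip install mido) and a 30 fps render;
# the file name and frame rate are placeholders.
import mido

FPS = 30
elapsed = 0.0
trigger_frames = []

for msg in mido.MidiFile("slices.mid"):
    elapsed += msg.time                      # msg.time is delta seconds here
    if msg.type == "note_on" and msg.velocity > 0:
        trigger_frames.append(round(elapsed * FPS))

print(trigger_frames)                        # e.g. feed these into a coll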

Rob Ramirez's icon

If you're already familiar with Jitter, then the "keyframe by hand" technique is absolutely something you could achieve with it, with your level of success being proportional to how deep down the rabbit hole you want to go. As Martin mentioned above, the recording best-practices article provides a good starting point for non-real-time export from Jitter. You would simply replace the gesture recorder output with whatever data structure you are using to keyframe (coll, dict and pattrstorage are all possibilities). Then the trick is charting the song at whatever frame rate you're recording at (typically 30 or 60 FPS).

I would start with a very basic, short (few seconds) song segment, chart it out by frame rate, and see if you can map it to some basic parameters of a Jitter patch; then, using the example from the article, export it and see how you did. I'm sure it will take some iterations to work out the kinks, but I have no doubt it's achievable.
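As a back-of-the-envelope illustration of the charting step (the tempo, meter and frame rate below are made-up example values), converting musical positions to frame numbers is just arithmetic:

```python
# Back-of-the-envelope sketch of "charting the song at the frame rate":
# convert musical positions (bar, beat) to frame numbers for keyframing.
# BPM, time signature and frame rate are made-up example values.
BPM = 120
BEATS_PER_BAR = 4
FPS = 30

def bar_beat_to_frame(bar, beat):
    """Bar/beat are 1-based; returns the frame index at the chosen FPS."""
    beats_elapsed = (bar - 1) * BEATS_PER_BAR + (beat - 1)
    seconds = beats_elapsed * 60.0 / BPM
    return round(seconds * FPS)

# e.g. an event on bar 2, beat 3 at 120 BPM / 30 fps lands on frame 90
events = [(1, 1), (2, 3), (5, 2.5)]
print([bar_beat_to_frame(b, bt) for b, bt in events])
```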

James Corley's icon

@Martin Ah yes, I was reading that thread the other day when looking at creating a keyframe system (for audio at the time), so that's definitely a way I'll consider. I'll have to try a few tests to see how far I can get.

@Rob I'm comfortable with Max and MSP, but Jitter (and Gen) are like a dark art to me, so no matter which path I choose to take I will have a learning curve ahead of me. If you're referring to being able to score out keyframes in a text file (e.g. time (ms) param1 param2 param3), then that would make composing the visuals a lot more feasible, as it seems in Jitter I will be working with a much lower-level system than in other graphics software? Again, I'll have to do some tests to figure out how deep I want to dive.

@both My current thinking at this time is this (correct me if I'm wrong):

Jitter is relatively low-level: shaders have to be custom, and you're really working with data rather than graphics or models. You also have to be careful about which objects you use, because not all of them use the GPU for processing; and if you have real-time data sequencing the patch, that data has to be stored for non-real-time rendering, a process that presumably has to be created/patched manually.

Blender is an absolute beast, but terrifying to learn (for me at least; I prefer creating graphics with code and numbers!). It is non-real-time but able to create scenes in a very intuitive environment, with the ability to script further in Python and use nodes for various processing options. My main reservation is that it is not Max-based, and with most of my creative and academic life being Max-based (patching > social life), I'd love a Max-esque environment to do such a thing... and then I found

TouchDesigner, which looks low-level enough to interest me, but has enough built-in options that I won't need to reinvent the wheel. Could be interesting...

But I digress.

I am definitely going to do some small experiments in each, but at this point I'm concerned I'll drive myself mad programming shaders and shapes in Jitter and then making sure the system works, whereas another option might make the process simpler, so my struggles would be less technical.

One question I do want to raise, though, is the workflow/mindset people approach Jitter with. For example, when building in Max/MSP, I build patches which can then be used to make a variety of different results; it's not, so to speak, "piece-per-patch". Do you approach Jitter in the same way, almost building systems/software to be used across multiple different pieces and projects? Or do you prefer to start a patch and have that patch focus on a single piece?