
Best Practices in Jitter, Part 2 - Recording

For our second best practices in Jitter article, we wanted to take a crack at answering one of the most frequently asked questions on the forums and in the socials: How do I record my Jitter output? Seems like a simple question, but as is often the case with Max, simple questions have complex answers. We’ll start with the basics, and work our way to more advanced (and costly) solutions.

What to Record

Before we dive into the many ways to record Jitter output, let’s take a step back and first identify what we’re trying to record. In most cases the answer is: “exactly what I see in my window.” OK, sure, but we need to locate exactly which object or patch-cord in our patch is drawing to the window. If the content is a chain of textures or matrices, it’s simply a matter of taking the output from the last object in the chain. However, if the content is OpenGL geometry objects (jit.gl.mesh, jit.gl.gridshape, etc.), or a scene post-processed with jit.gl.pass, it’s not always easy to identify. Fortunately the latter case has an easy solution: jit.world. If your patch’s render context and window are handled by the jit.world object (which they should be), then you simply need to enable @output_matrix or @output_texture, depending on which recording technique you are using (more on that below). This will send the window content out jit.world’s first outlet as a matrix or texture. Now might be a good time to brush up on the difference between matrices and textures, texture output, or even what a texture is.

Matrix Recording

Basic video recording in Jitter involves sending matrices to a recorder object. Choices include the Vizzie modules RECORDR and AVRECORDR, and the Jitter objects jit.record and jit.vcr. Which object you use depends on your needs. Let’s start with Vizzie.

As with most Vizzie modules, RECORDR and AVRECORDR are simply wrappers around Jitter objects, in this case jit.record and jit.vcr. RECORDR is for video-only recordings, and AVRECORDR is for audio and video. These modules are designed for simplicity, and they succeed in that goal. They handle both texture and matrix inputs from any Jitter object, not just Vizzie modules. Plug in your inputs, set your recording directory (by default, the desktop), hit the record button and you’re off.

Basic recording with the RECORDR module

With the Vizzie modules the only user-adjustable settings are the codec and, in the case of RECORDR, the fps. The codec setting has a significant impact on the processing power needed and on the size and quality of the output file. On Mac the default is prores4444, an excellent choice if quality is of primary importance or if an alpha channel is needed. Switching to prores422 brings the file size down at the cost of the alpha channel, and switching to h264 brings the size down even more at the expense of extra processing and some quality loss. On Windows the default codec is h264. For alpha channel support, huffyuv and animation are both options, but for high quality and low processing cost use huffyuv. This codec would be the Windows default, but it's incompatible with most movie players and requires the AVI container format rather than MOV. Converting to a more compatible format is a possibility, and the steps to do this are described in the final section of this article.

The Vizzie modules will handle most basic recording needs, but there are some situations where you may want to use the underlying Jitter objects instead, for example to gain more control over when and where recordings are written, or to change the video engine in use. There are two points to consider when switching to jit.record or jit.vcr. First, if your video source is a texture, you must send it through a jit.gl.asyncread object to convert it to a matrix, or simply use jit.world @output_matrix 1, depending on your patch's needs.

The second point is understanding the interplay between real time mode, the source fps, and the recording fps. If recording in non-real time mode (@realtime 0, the default for jit.record), in most cases you will want the recording fps (the jit.record @fps attribute) to match the source fps (the number displayed when a jit.fpsgui is attached to your video source). If these numbers don't match, the recorded movie may play back in unexpected ways. For example, if you are outputting frames at 60 fps and recording the output at 30 fps, the recorded movie file will play back at half speed (although this could be desirable in some cases). Use the jit.world @fps attribute with @sync 0, or the qmetro @interval attribute, to adjust your input fps, and the jit.record @fps attribute to adjust your output fps. If real time mode is enabled, the fps attribute has no effect on the output file and the recording will play back at the same frame rate as the source material.
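
A little frame arithmetic shows why. In non-real time mode every frame received is written to the file, so ten seconds of output at 60 fps produces 600 frames; a file stamped at 30 fps takes 600 / 30 = 20 seconds to play those frames back, hence the half-speed result. Matching the recording fps to the source fps (600 / 60 = 10 seconds) keeps playback true to what you saw.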

Recording an animated gif with jit.record at 15 frames per second

Matrix recording using built-in objects is useful and convenient for many situations, especially as a means to capture a short video of an idea. As our colleague Andrew Benson wisely says:

There's a bunch of ways to make this just right, but in the heat of the moment, you may just find yourself winging a jit.gl.asyncread into jit.record.

However, problems such as dropped frames, audio and video losing sync, compression artifacts, and sporadic recorder errors can occur due to limited processing resources or limitations in the recording objects. If so, you may get better results by outsourcing the job to external software or hardware that specializes in video recording.

External Software

Personally, 99% of the time I will use external software to record my output. This may sound complicated to achieve, but it is actually quite simple once you get the hang of it. We’re going to look at two workflows, one for Mac and one for Windows.

On Mac, the best tool I’ve found for recording is Syphon Recorder. To make recordings with Syphon Recorder you must first install the Syphon package from the Package Manager. With the package installed, you simply create a jit.gl.syphonserver object, enable @output_texture 1 on your jit.world, and connect the world’s output to the syphonserver’s input. Set the @layer attribute of the syphonserver to some high value so that it draws last, ensuring that it captures all the objects in the scene. Finally, make sure the fps you are rendering at matches the fps set in the Syphon Recorder preferences.

Basic recording with Syphon Recorder

Recording audio from Max with Syphon Recorder requires the additional step of installing Soundflower. Once installed, open the Max Audio Status window and set the Output Device to Soundflower, then set the audio input to Soundflower in the Syphon Recorder preferences.

Sending audio to Syphon Recorder via Soundflower

As an added benefit, both Syphon Recorder and Soundflower are free software. If you require more features, ScreenFlow and Screenflick are two options I’ve seen recommended by Jitter users. With these solutions, the video source is pulled directly from the render window and therefore the syphonserver object is not required.

We can trace a similar workflow on Windows. Instead of installing Syphon from the Package Manager, we install the Spout package; instead of adding a jit.gl.syphonserver to our patch, we add a jit.gl.spoutsender. Now here’s where things start to get hairy. There are several software options for recording screen output on Windows, mostly geared towards gamers. The one I’ve had the most success with is another free program called OBS Studio.

You can use OBS much the same as the Mac screen capture software mentioned above, but in order to decouple the capture from the render window via Spout, you need one more piece of software called SplitCam (also free). SplitCam will take the Spout source from Jitter and broadcast it as a webcam device which OBS can detect for recording. In the OBS Sources window, add a new Video Capture Device source. In the device menu select SplitCam Video Driver, and adjust the capture settings as desired. To capture audio from Max, no additional software is needed; simply enable the Desktop Audio source in the mixer interface.
[ edit - Install the Spout Plugin for OBS to record Spout texture streams from Jitter. ]

[ edit - An astute reader notified me about Spout Recorder, which will record video streams directly from the spoutsender object. This may be a simpler workflow for video-only recordings from Spout. ]

Recording a Spout source with OBS and SplitCam

External Hardware

If the above workflows leave you unsatisfied, it may be time for a dedicated hardware unit for recording your output. These devices allow your machine to offload the entire process of encoding and writing to disk. Perhaps equally important, recordings are made directly from the display rather than from a texture capture, preventing the discrepancies in visual appearance that often occur between display and capture, especially with line and point drawing. I asked two of my colleagues to provide some insight into this pro workflow.

Here’s Tom Hall on the Atomos Ninja V:

Over the years I’ve used the same or a similar setup to my colleagues, either recording directly in Jitter or by using Syphon. That was until recently, when I got hold of an Atomos Ninja V. The Atomos Ninja V is actually something that gets widely used in the film industry for recording off-camera b-roll and the like, but how does it work with Jitter? You can think of it basically as an HDMI recorder (it records to an attached SSD). It has an HDMI input and an HDMI output (a direct loop-through from the input) and works just like an external monitor for your computer (albeit a bit smaller). Simply take the HDMI output of your machine and connect it via HDMI cable to the Atomos Ninja V, it’s that easy. When your Jitter patch is ready to record, drag the Jitter window onto the Atomos “monitor”, set it to full screen, hit record on the Atomos and you’re away. You’ll find that because the graphics are off the machine you’re working on, you can record at significantly higher frame rates and resolutions, and free up a lot of headroom for other Max work happening concurrently.

It’s not the most cost-effective method if you purchase one outright, but it does provide consistent results at showreel-level quality. Alternatively, you can rent these via Kitsplit or similar services for as little as $10 per day (24 hrs), so if you have a Jitter piece you need to record at high resolution/frame rate, renting an HDMI recorder might be a good option.

Cory Metcalf adds BlackMagic HyperDeck Studio Mini and Intensity to the list:

A lot of my work relies on feedback and live input, making a frame-by-frame capture setup a non-starter. Similarly, the CPU expense of recording in realtime is something I’d rather spend on extra vertices or layers of processing. Probably my favorite way to go is recording directly to an Atomos Ninja or Blade or a BlackMagic HyperDeck Studio Mini. These outboard recorders can handle pretty much any resolution or framerate you throw at them and don’t cost you any more processing than outputting to a second monitor or projector. They even offer loop-through so you don’t have to use an extra port.

An added benefit is that they support multi-channel audio capture (depending on the model) and come in HDMI and HD-SDI flavors. For live performances, all I do is pass my video through to the capture box and send a submix or program out from the sound board and I get a frame-accurate recording of the sound and image, exactly as it appears to the audience. The obvious downside is cost, as most of them cost more than your Max license. Another similar option is to use a second computer with a capture card like a BlackMagic Intensity. You can then capture into Max or directly into your favorite video editor.

If dropping several hundo on outboard gear isn’t in the cards right now, you might be interested in the cheaper options. Typing something like “HD video capture” into Amazon or eBay will turn up several options, such as this HDMI capture device for $60. You get what you pay for, but these devices should work as advertised and give you many of the benefits described above for a fraction of the cost.

Non-Real Time Recording

Earlier in this article I described jit.record’s realtime attribute and why it’s useful. For our final workflow exploration we will use jit.record in non-real time mode to write a 4K render to disk. This approach can produce stunning high-definition renders without the need for external software or hardware, but it is significantly more complicated to execute.

If your patch simply processes a movie file through some static effects, then non-real time recording is just a matter of sending the framedump message to a jit.movie. A frame is output, processed, encoded, and saved to the output file, rinse and repeat as fast as the machine can go. However, if your source material is generative, or if you want to record dynamic parameter changes, a different approach is needed.

The basic idea is that we record parameter changes in real time at some fixed rate (e.g. 30 fps). Our output image size is set to some manageable dimensions (e.g. 1280 x 720) that allow the CPU and GPU to process our patch without hiccups at that fixed rate. After the parameter recording is complete, we increase our output image dimensions (e.g. to 4K), enable jit.record with @realtime 0, and start playback of the parameter recording made in the previous step. For each frame of the parameter recording, the parameter data is output and sent to the relevant objects; the image is output, processed, encoded, and written to disk; and then the next frame of parameter data is output, rinse and repeat as fast as the machine can go. The end result is an Ultra HD rendering of a Jitter scene with real time patch parameter manipulations.
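
To make the record/playback loop concrete, here is a minimal sketch of the idea as it could look inside a [js] object. This is an illustration only, not the method the included patch uses (the patch is built from jit.matrixset and pattrstorage, as described below), and it handles just a single float parameter:

// parameter-sketch.js, loaded into a Max [js] object
inlets = 1;
outlets = 1;

var frames = [];       // recorded parameter snapshots
var current = 0.0;     // latest value received from the patch
var playhead = 0;
var recorder = null;   // Task that samples at a fixed rate

// cache the latest parameter value (e.g. from a number box)
function msg_float(v) {
    current = v;
}

// "record 1" starts sampling the parameter at 30 fps, "record 0" stops
function record(onoff) {
    if (onoff) {
        frames = [];
        recorder = new Task(function () { frames.push(current); }, this);
        recorder.interval = 1000.0 / 30.0; // milliseconds per frame
        recorder.repeat();
    } else if (recorder) {
        recorder.cancel();
        recorder = null;
    }
}

// during the non-real time render, each bang outputs the next recorded
// value, so the render loop can step through as fast as the machine allows
function bang() {
    if (playhead < frames.length) {
        outlet(0, frames[playhead]);
        playhead++;
    }
}

// rewind playback to the first recorded frame
function reset() {
    playhead = 0;
}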

The included patch is intended as a proof of concept, and contains 3 modules (RECORDER, PARAMETER-RECORDER, and PARAMETER-PLAYBACK) that can be dropped into an existing patch. A jit.matrixset is used to record parameter data, and pattrstorage is used to expose parameters for recording. Only single-value parameters (e.g. number boxes, sliders) are supported in the current state. The patch content is simply an animated jit.gl.bfg. The process for rendering a 4K animated scene is as follows:

  1. Open the PATCH-AND-PARAMETERS sub-patcher to expose the controls. Play around and get a feel for the patch and what you want to record.

  2. When ready to record an animation, enable the toggle on the PARAMETER-RECORDER sub-patcher. Animate the parameters and then disable the toggle to stop the recorder. The patch defaults to a maximum of 5 minutes of recording at 30 FPS.

  3. Click the toggle on the RECORDER sub-patcher. This will do a few things: set the image resolution to 4K, disable the jit.world renderer, and enable playback of the most recent parameter recording.

  4. Once the parameter playback is complete, disable the toggle and your 4K file will write to the Desktop.

Non-real time 4K recording

Please be aware that the default Mac/avf codec is prores4444, which will create huge files at 4K resolution. On Windows/viddll the default codec is h264; if high quality renders are desired, I recommend the huffyuv codec (although this will also create a huge file on the desktop). As an optional final step, compress the output file to a more manageable size and codec. This step can be triggered directly from your Max patch using the shell object and FFmpeg.

Some form of the following command will convert the source file to an h264-encoded mp4 file (the -pix_fmt yuv420p flag keeps the output compatible with most players):
ffmpeg -i source.mov -pix_fmt yuv420p -c:v libx264 converted.mp4

FFmpeg is a powerful tool that will greatly enhance your ability to manipulate video and audio files. The multitude of commands and options can be daunting, so a cheat sheet is essential. Encoding time can be decreased significantly by using hardware acceleration, if your machine supports it. The included patch suggests two h264 hardware encoders to try, one for Mac and one for Windows with Nvidia GPUs.
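
For reference, those hardware-accelerated variants would look something like the commands below. The encoder names are my assumption of what the patch suggests (run ffmpeg -encoders to confirm what your build supports):

Mac (VideoToolbox):
ffmpeg -i source.mov -pix_fmt yuv420p -c:v h264_videotoolbox converted.mp4

Windows with an Nvidia GPU (NVENC):
ffmpeg -i source.mov -pix_fmt yuv420p -c:v h264_nvenc converted.mp4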

Converting a movie file with FFmpeg and the shell object

bp2-patches.zip (application/zip, 27.15 KB)
Download the example patches used in this tutorial

Learn More: See all the articles in this series

by Rob Ramirez on June 26, 2019 04:51

Creative Commons License
Dante

June 26, 2019 | 1:26 PM

This series is really helpful, thank you!

Spa

June 26, 2019 | 4:43 PM

On Windows 10 64-bit, I just capture the Max GL window directly in OBS, and OBS records it to disk as mp4. It can even scale it. It works fine and is very simple...

Herr Markant

June 26, 2019 | 5:40 PM

No need for SplitCam, you can record directly from the "SpoutCam" into OBS. And there is also a new proper Spout Recorder: https://www.lightjams.com/spout-recorder.html

yaniki

June 26, 2019 | 6:23 PM

Nice tips. Thank you very much. And I didn't know that Soundflower is alive... this was really surprising ;-)

Btw: on new Macs, when using ScreenFlow, it is now a bit complicated to record sound from the MSP chain - ScreenFlow's driver doesn't work really well nowadays. So the trick with Soundflower is even more useful ;-)

Rob Ramirez

June 26, 2019 | 7:50 PM

thanks for the feedback everyone.

it's (briefly) mentioned in the article, but the point of the spout->splitcam->OBS workflow is to decouple the window size from what's being recorded. I was having trouble getting a 1080p recording on my laptop since that is larger than my actual display size, hence the need to use Spout.

I did not know about Spout Recorder, so that's great news! I'll add a mention of this in the article. although it does not seem to record audio, so I think the OBS workflow is still useful.

zipb

June 26, 2019 | 8:05 PM

Quite helpful. Thanks!

Daniel Maruniak

June 30, 2019 | 4:19 AM

I love this article. Very helpful. I am getting an error message whenever I instantiate jit.spoutsender with this message. Any insights?

cerval

June 30, 2019 | 7:48 AM

Thanks Rob - great explanations for an all time subject.

I got a question, maybe somebody can help me out: I am on Windows 8.1 and would like to know if there are any positive experiences using Max together with a Blackmagic Intensity Shuttle USB 3.0. I would prefer using this standalone gear instead of the second computer with a Blackmagic capture card that Cory mentioned.

https://www.blackmagicdesign.com/de/products/intensity

Reading the horrible user experiences for this piece of hardware on Amazon tells me just one thing: do not buy this capture device under any circumstances.

Are there any opinions around on our forum, backed up by your own practical attempts to use it in combination with Max?

Thanks!

Rob Ramirez

June 30, 2019 | 6:17 PM

@DANIEL MARUNIAK, make sure that Max and any additional apps you are using Spout with are set to use your dedicated GPU: https://www.addictivetips.com/windows-tips/force-app-to-use-dedicated-gpu-windows/

Daniel Maruniak

July 1, 2019 | 6:52 AM

@ROB RAMIREZ that would be the high performance GPU vs. the power saving or system default?

Daniel Maruniak

July 1, 2019 | 6:59 AM

and also @ROB RAMIREZ thank you. Honestly I have a feeling that the computer I am using might be somewhat outdated. perhaps I can upgrade it.

Rob Ramirez

July 1, 2019 | 3:31 PM

yes, some Windows machines include an integrated GPU and a discrete / dedicated / high-performance GPU, usually Nvidia or AMD. On my laptop apps use the integrated GPU by default, so users must override that setting using the GPU-specific settings software (Nvidia Control Panel in my case).

if you want to simply test Spout capabilities on your machine, then download the Spout software and open the SpoutControls app.

whether or not your GPU(s) support Spout is one thing, but you must make sure all Spout-sharing apps are using the same GPU.

Julien Bayle

July 3, 2019 | 11:20 AM

Nice series, indeed.

I'm always and still annoyed by the Syphon Recorder way, as it seems that I have to make my rendering window HUGE if I want a big texture to be captured.

I mean: if I use a reasonable window size (800 x 450, for instance), even if I use dim 1920 1080 and output the texture to syphonserver, Syphon captures only 800 x 450.
I attached a snapshot: I reduce my Max rendering window, and here is what Syphon sees.

I knew about the @matrix_mode_async attribute, but here it doesn't seem to work as I understood and expected.

I'm sure I made some mistakes somewhere.

Rob Ramirez

July 3, 2019 | 4:32 PM

the included patch (and patch image in the article) demonstrates how to decouple window size from capture size using jit.world @output_texture 1 @dim capture_width capture_height. alternatively use jit.gl.node @capture 1 @adapt 0 @dim capture_width capture_height.

if that's not working then it's either a patching error or a bug. no idea which without seeing a patch. matrix_mode_async has nothing to do with this since that affects matrix output from jit.world, and in the case of syphon recorder you should be outputting a texture.

Pedro Santos

July 3, 2019 | 4:52 PM

Julien, when I use it, I don't see the problem you're describing. The only "connection" seems to be the aspect ratio changes, not the resolution...

Max Patch
Copy patch and select New From Clipboard in Max.

Rob Ramirez

July 3, 2019 | 4:59 PM

and if these aspect ratio changes are undesirable, the solution is to add a jit.gl.camera to the same context that is capturing (the jit.world in the patch above), thereby overriding the root jit.gl.camera which is bound to render-window size / aspect ratio.

Julien Bayle

July 4, 2019 | 10:32 AM

So, here are 3 patches.
I'm using a global jit.gl.node to capture the Jitter objects I need to render, then I'm capturing this group and applying post-processing to it. This last part, obviously, is required in my process.

first way

Max Patch
Copy patch and select New From Clipboard in Max.


second way

Max Patch
Copy patch and select New From Clipboard in Max.


third way

Max Patch
Copy patch and select New From Clipboard in Max.


Each way leads to the same issue. Actually, I only need to record for festival directors, curators, and eventually to document my work. As long as it works live, that's okay, but I think I'm missing something and not using the right approach.

Pedro Santos

July 4, 2019 | 11:23 AM

In the example I posted, jit.world's internal jit.gl.node was the one capturing the scene.
In your example you're using an external jit.gl.node object that doesn't know the resolution at which you want to capture the scene to texture. By default, it uses the rendering window's resolution. Your examples work if you explicitly configure the intended resolution by adding "@adapt 0 @dim 1920 1080" to your jit.gl.node.

Julien Bayle

July 4, 2019 | 11:42 AM

I can't avoid the external jit.gl.node unless I do all my post-processing (pixel processing) treatments between my world + syphon; that can be OK for video recording purposes only, but not for the live performance. I'd prefer, indeed, to keep my system exactly as it is for the live performance and just enable/disable the Syphon part when I need it.

I understand your explicit resolution setup, that totally makes sense and I didn't need it before.
I'll try it, but I know it will work. Thanks :-D

Julien Bayle

July 4, 2019 | 3:43 PM

I can confirm it works.
That default (and very logical) behavior tricked me.

Here is the patch for further reference for people.
Thanks Pedro

Max Patch
Copy patch and select New From Clipboard in Max.

Now I'd like to know which format you use for capturing?
The idea is to use the one that is friendliest to the CPU during recording.
Uncompressed seems to be OK (as long as we have storage space).

Herr Markant

July 6, 2019 | 9:03 PM

I just bought this "AGPtek HDMI HD" device, just for testing purposes, but I really like it!
It records 1080p at 30 fps, h264 at approx. 17 Mbit/s (Baseline Level 4), with 192 kbit/s 48 kHz audio.
It feels very cheap, but it does exactly what it should, without any performance loss!

So thanks for this tip!

Linden

September 12, 2019 | 12:05 PM

Hello, I have a question about the parameter recording: how can I include, for example, the data of jit.gl.camera?
If I work with anim.drive I can see the position of the camera in the attrui, but I don't find a way to include any of this in autopattr because anim.drive doesn't spit out the position... great series and helpful!!

Rob Ramirez

September 12, 2019 | 4:50 PM

hi Linden, anim.drive modifies the transform attributes of a jit.gl.camera (position and orientation, i.e. rotate, rotatexyz or quat).

you simply need to grab those values with getattr and store them in parameter-enabled number boxes in the recorder, and then output them back to the camera in the player.

basic patch to get you started:

Max Patch
Copy patch and select New From Clipboard in Max.

Linden

September 14, 2019 | 1:28 PM

many thx

srokasonic

November 24, 2019 | 8:21 PM

Thank you Rob for these amazing tutorials, they are really helpful. After reworking my patch in a number of ways, I finally made a decent GL version for my simple needs, and now to render video I'm trying to incorporate your 'parameter recorder to 4K'. It all works perfectly... except when I try to use [jit.gl.pix @gen xfade] with a umenu loading files between two matrices: it constantly scrambles the order of the files I load, and sometimes it loads them but they don't start. It's quite baffling. I've been banging my head against a wall trying to figure out what I've got wrong, so I'd love any suggestions.

srokasonic

November 25, 2019 | 2:43 PM

So it seems like I cannot get the jit.movie to record the playback correctly for the parameter recording, and then for the recording of the parameter recording. I'm a relative tyro in Jitter so I'm sure there's something simple I'm missing, but with this patch I tried to use [gettime] to dump the time and then read it back into jit.movie when playing back the parameter recording. It didn't exactly work. How can I record the frames and have frame $1 bang them out on playback?

Max Patch
Copy patch and select New From Clipboard in Max.

Rob Ramirez

December 2, 2019 | 4:48 PM

here's how I would record and play back the current frame of a jit.movie using the parameter-recorder system:

Max Patch
Copy patch and select New From Clipboard in Max.

the patch queries the framecount and multiplies this by the position value to get the current frame when parameter-record is enabled. then for playback this frame value triggers the frame_true message to output the desired frame at the current time (frame_true is used instead of frame for cases where the encoded file doesn't have keyframes on every frame, as is the case with the chickens movie).
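
as a concrete example of that math: with a 900-frame movie, a recorded position of 0.5 becomes 900 * 0.5 = frame 450, so playback sends frame_true 450.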

srokasonic

December 10, 2019 | 7:50 PM

Thanks Rob! I attempted this in a few ways and I think I was having problems because of a loadram chain I had and some other things, but in the end got it sort of working with this patch

Max Patch
Copy patch and select New From Clipboard in Max.

It seems like my processor is still having trouble handling the load though, even in non-real time. It goes back to the CPU in your recorder patch, right? Anyway, your patch looks a bit more elegant, so I'll try incorporating that and let you know how it goes.

srokasonic

December 18, 2019 | 1:49 AM

Two questions Rob:
1. Why do you have the parameter recording playing the video back at rate 0.25?

2. You did not include the dim change. I'm assuming a dim change won't help improve a video file already recorded at a lower dim?

Sabina

May 12, 2020 | 4:02 PM

Hi, just wanted to comment that I have been using a Blackmagic assistant 12G as an external hardware device, with excellent results. The feature I love the most: it's ideal for audio-reactive stuff, with perfect audio-visual sync and no audio delay. Even 8 audio channels at a time. No pain, so far.....

Martin Beck

October 28, 2020 | 9:41 PM

Hello Rob,

I am trying to use the proof-of-concept PARAMETER-RECORDER patch with several hundred parameters, and I want to use it to record multisliders, which currently fails as described in this thread:
https://cycling74.com/forums/parameter-recorder-for-multislider-non-real-time-rendering/replies/1#reply-5f9867ca053b1714c857715a
I would like to know:
a) Is it better, especially regarding recording performance, to do a JavaScript implementation instead of patching lots of [iter] objects? What will limit the number of parameters that can be recorded at, let's say, 60 frames per second (my target is something like 500 float values per frame, including the multislider values)?
b) What are the benefits of using a jit.matrixset containing 1-dimensional matrices over using a 2-dimensional jit.matrix?

Rob Ramirez

October 29, 2020 | 3:48 PM

hi Martin, I don't have any definitive answers for you. The design in the article was chosen to get something simple working that demonstrates the basic concept: writing parameters to some storage mechanism at a fixed rate, to be played back non-realtime with high-res image output and capture.

I think javascript would be a great path to explore. I don't see any benefits a 1D matrix has over a 2D matrix for data storage, if it fits with your design.

the redesigned mtr object may be another avenue to explore for this.

Martin Beck

November 2, 2020 | 10:43 PM

Hi Rob, [mtr] looked very promising but besides that [mtr] in Max 8.1.8 has some very confusing characteristics like these

and I think I found the next problem, which looks like a show-stopper for replacing the PARAMETER-RECORDER. I have several [pattrstorage] objects and an [autopattr] that manage approx. 200 parameters in my Jitter patch. The [pattrstorage] objects are used for parameter interpolation experiments and preset management. When I use [mtr @bindto param1 param2 ........ param108] to record a subset of these parameters, it looks like [mtr] steals the clients from the [pattrstorage] objects. If I open the pattrstorage client window, all parameters that were bound to [mtr] are missing. If I delete the [mtr], they appear again in the client window. Does this mean [mtr @bindto] cannot be used in parallel with [pattrstorage]?

Edit: added patch

Max Patch
Copy patch and select New From Clipboard in Max.

Antoine Goudreau

February 3, 2021 | 9:19 PM

Hiya, this is somewhat related and maybe a little unnecessary, but has anyone managed to start Syphon recording from within a Max patch, i.e. sending a message to start instead of manually clicking on record? Might be useful for timing sequences!

Rob Ramirez

February 4, 2021 | 5:53 PM

in the past I experimented with controlling Syphon Recorder via AppleScript (triggered via the shell external). it worked well enough, but I remember there being some glitches that kept me from pursuing it fully.

I think a better route to explore is OBS and the obs-websocket plugin

I've been meaning to explore this a bit, so let me know if this is of interest.

Antoine Goudreau

February 5, 2021 | 5:16 AM

Hiya Rob, thanks for the reply.

I've found an AppleScript to start Syphon also, though I'm very surprised there aren't any API calls we could make to the Syphon server.

I would definitely be interested in an OBS solution!

Rob Ramirez

February 5, 2021 | 8:44 PM

Ok here it is, interested to hear your experience - https://cycling74.com/forums/control-obs-recorder-via-n4m-script
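
[ edit - For those who don't want to click through: the core idea of that patch is driving OBS from Node for Max over the obs-websocket protocol. Here is a minimal sketch of the same idea, assuming the current obs-websocket-js package (v5 API) with OBS's websocket server enabled; the address, password, and handler name are placeholders: ]

const maxApi = require('max-api');                        // Node for Max bridge
const OBSWebSocket = require('obs-websocket-js').default;

const obs = new OBSWebSocket();

// address and password are whatever you set in OBS under
// Tools > WebSocket Server Settings
async function connect() {
    await obs.connect('ws://127.0.0.1:4455', 'your-password');
    maxApi.post('connected to OBS');
}

// send "record 1" / "record 0" from the patch to start/stop recording
maxApi.addHandler('record', async (onoff) => {
    await obs.call(onoff ? 'StartRecord' : 'StopRecord');
});

connect();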

Antoine Goudreau

February 19, 2021 | 5:33 AM

Hi Rob, realized I never gave you any feedback!
Just wanted to say, I used this on a project recently and I was pleasantly surprised! Very organized patch with many useful widgets; it worked perfectly, and OBS running simultaneously was still lightweight enough to be useful. :)
Recording started a few frames late relative to the initial trigger of the animation within Max, but I think that's to be expected and can easily be edited out anyway.

Thanks for your work!

Antoine Goudreau

February 19, 2021 | 5:35 AM

Have developed my own little patcher to easily insert into other projects also!

Rob Ramirez

February 19, 2021 | 4:52 PM

would love to see what you came up with if you feel like sharing

Antoine Goudreau

February 20, 2021 | 6:23 PM

Here you go!

Kept most of your core patcher intact; you send all messages to the first inlet of the patcher. I added an "open" message as an option to open my ~/Movies folder, though this is set to my personal folder; I don't know if there is a generic user path I can use with ;max launchbrowser.

Two outlets: 0/1 for checking the status of the recording, and the very useful file checker that acts as a kind of dump.

Obviously, all the relevant files have to be in the same folder as this patcher to be able to read the Node file.

obsnoderec.maxpat
Max Patch

Rob Ramirez

April 18, 2023 | 3:35 AM

Posting some bits here from Fragment Flow creator Paul Fennell on recording realtime output for those that desire more helpful hints. First, check out the recording methods guide.

And here are some updates Paul recently posted on the facebook group:

I'm getting pretty pristine results with OBS using the Spout2 plugin and "Custom Output (FFmpeg)" to record DnxHR/DnxHD files - but I'm on quite powerful hardware so your results may vary.

There’s actually some misinformation in that guide which I need to correct. You can in fact record using DnxHR/DnxHD in OBS using “Custom Output (FFmpeg)” as the recording type, which is the method that I’m using now when opting for screen capture (on a Ryzen 9 5950x + RTX 3080). The file sizes are huge, but again the quality of the recordings is visually flawless.

OBS Settings

These are the encoder profile options for DnxHR - I'm using dnxhr_sq (2) for the smaller file size:
-profile <int>
dnxhd = 0
dnxhr_444 = 5
dnxhr_hqx = 4
dnxhr_hq = 3
dnxhr_sq = 2
dnxhr_lb = 1

dnxhr_hqx (4) would be broadcast quality and is probably the one you'll want to opt for, but they're all much better than H.264 or H.265 to my eyes, and they perform much better in editing. Just be prepared to invest in some HDs.

I'd still opt for my Atomos Ninja for professional production work though, for performance reasons etc.

srokasonic

October 29, 2023 | 3:54 PM

Hi Rob, thanks again for all this great info. I've been using the non-realtime recorder with the jit.movie patching you shared for a few years, but only with 1280 and 1920 dim videos, which works perfectly. Recently I started using actual 4K videos, and when I record the parameters at 1280 and then try to render at 4K it naturally changes the xy dims quite substantially. I solved this by simply multiplying the dims by 3, but I found that doesn't work when using jit.rota. It seems like there's a more complex equation for extrapolating dims between 1280 and 4K with jit.rota, but my simple brain can't figure it out. I've searched the forums for answers and came up empty-handed, so I'm asking here for help...