
Streaming Tips - The Jitter Edition

In this article we’re going to continue exploring some options for streaming content from your Max patch to the internet. This time we’ll look at how to use Jitter texture output as a streaming input source, and go over some techniques for creating and sending to a virtual camera device for use in webcam apps.

Check out the other streaming article - Tips for Streaming

Download the patches used in this article: streaming-patches.zip (52.68 KB)

Basic Video Streaming

On both platforms we will again let OBS manage the streaming I/O. OBS is a fantastic open source project with tons of functionality either built-in or available via plugins. Additionally, we’re going to use Syphon/Spout texture sharing to send our output to the streaming app. OBS on Mac supports Syphon sources natively, and on Windows there is a plugin to support Spout sources.

The Setup

Note - On Windows machines with dual graphics cards, care must be taken to ensure all apps are set to use the high performance GPU rather than the integrated GPU.

Previously we looked at how to stream to a single destination (Twitch) via OBS. I was recently clued in by C74 homie Chris Martin on how to turn that into multiple destinations via a service called Restream, which the Coaxial crew uses for their Experimental Half Hour show.

Syphon/Spout usage is detailed in the Best Practices for recording article, and we’ll use many of the same concepts to stream Jitter content. In both situations we must decide which patch elements to send down the pipes and how they should be composited, and then capture those elements to a texture that feeds the server object (a jit.gl.syphonserver on Mac, or a jit.gl.spoutsender on Windows).

In my recent lockdown-related streaming experiments I implemented a very basic background-separation and monochrome effect that I streamed out of OBS to Restream, which then went live on Facebook and YouTube simultaneously. Let’s look at the steps necessary to make that happen.

A quick note before jumping in: HD video streaming requires a beefy machine with a beefy GPU and a fast internet connection. Mileage will vary wildly depending on these specs.

The Patch

The recording article mentioned above starts by asking “What to Record”, and we must again decide on the what before moving to the how. In the provided patch I’ve mashed a few different sources and effects together just to keep things spicy. The core of the patch is a simple foreground video echo effect.

We start with a jit.grab camera source (output_texture enabled, of course), run it through a background-subtraction algorithm, and then pipe it into some Vizzie effects. We also implement a mesh-shatter effect to demonstrate how geometry can be captured and composited with texture streams via jit.gl.node @capture 1. Throw in a glitchy little ISF transition and some parameter animation via jit.mo and an ATTRACTR module to polish things off. To run the patch, make sure you've installed the ISF Max package and ISF library files, or substitute the transition object with a Vizzie XFADR.
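For reference, a classic matrix-based take on background subtraction can be sketched roughly like this (a minimal illustration of the general idea, not necessarily how the provided sub-patcher works; the threshold value is an arbitrary starting point, assuming float32 matrices):

[jit.grab]
   |
   +--> [jit.matrix bg]       <- bang once to store a frame of the empty scene
   |        |
[jit.op @op absdiff]          <- live frame (left inlet), stored background (right inlet)
   |
[jit.op @op > @val 0.15]      <- threshold the difference into a foreground matte

On the GPU, the same absdiff-and-threshold logic can live in a jit.gl.pix gen patcher so the whole chain stays in texture land.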

Patch running almost entirely on the GPU

Now that we’ve got a sparky live video patch to play with, let’s send it off. The basic rule of thumb with Jitter texture streaming is that any object that outputs a texture can be patched to a Syphon/Spout server object. If you want to send off geometry, capture it to a texture with jit.gl.node; if you want to send the entire contents of your jit.world window, enable output_texture and connect the server object to its leftmost outlet.
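In its most minimal form the hookup is just this (a sketch; the server names are arbitrary labels that client apps will see, and attribute names may vary slightly between package versions):

[jit.world @output_texture 1]
   |
[jit.gl.syphonserver @servername Max]    <- Mac
-- or --
[jit.gl.spoutsender @sendername Max]     <- Windows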

The Stream

Once your patch has passed off its texture through the sender server, any client software running on your machine can grab the stream; Max is pretty much out of the picture from this point. OBS streams out to Restream, and Restream handles sending to any other desired platforms. After logging in to Restream, services can be added to the streaming session in the Destinations section.

Adding destinations via Restream web interface

For OBS setup, follow the instructions outlined in the previous streaming article. Copy the RTMP URL and stream key from Restream and enter them in the Stream section of the OBS Settings window. Adjust any stream and video settings (e.g. bitrate, dimensions) as desired for your setup. Add a video source to your OBS scene by clicking the plus menu under the Sources section, selecting Syphon Client on Mac or Spout2 Capture on Windows, and setting Max as the source (making sure you’ve installed the plugin linked above if on Windows). If more than one sender server is running on your machine, open the source properties by clicking the gear icon to select the appropriate server.
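If you're unsure where to start with those settings, some ballpark values (check your streaming service's current recommendations, as these are only rough starting points): in Settings > Video, a 1280x720 output at 30 fps; in Settings > Output, the x264 encoder on a fast preset with a bitrate around 3000 kbps (roughly 6000 kbps if you push to 1080p60) and a keyframe interval of 2 seconds.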

OBS running a Syphon client source on Mac and a Spout client source on Windows

Note that it is possible to stream from Max via an OBS Window Capture source instead of using Syphon or Spout, but in my testing the frame rate dropped significantly. It also requires the Jitter window dimensions to match the streaming dimensions, making it a less viable solution in most cases.

One last note: OBS makes it dead simple to make a local, disk-based recording of your stream, which is generally a good idea. By default it will match the recording settings to the stream settings, and therefore run more efficiently.

Virtual Cameras

Ok, so we’ve achieved our goal of streaming Jitter content through some different streaming services, but we live in the age of the Zoom meeting. How does one liven up their daily teleconferencing? For this we are going to need to create a virtual camera device to pipe Jitter through. Fortunately there are free programs on each OS that manage this device and receive Syphon and Spout texture streams.

The setup is exactly the same as above, except instead of sending to OBS, we will send to our virtual camera device. On Mac with CamTwist, make sure you’ve set the video size and framerate in the preferences window before enabling your input. These should match the dims and framerate of your texture output.
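For example, if you capture at 720p, something like this on the Max side (a sketch; jit.gl.node's @dim sets the capture texture size when adapt is off):

[jit.gl.node @capture 1 @dim 1280 720 @adapt 0]
   |
[jit.gl.syphonserver @servername Max]

with CamTwist's preferences set to 1280x720 at the matching framerate.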

Once you've updated your preferences, select Syphon as the source in the section titled Step 1, add effects if desired in Step 2, and then select your Max server as the source in Step 3. That’s it: you can now select CamTwist as your video device in your teleconferencing app (e.g. Zoom, Skype, Google Meet, etc.).

Sending Jitter to Skype via CamTwist and Syphon

On Windows we're going to run the Spout installer (note this is different from the Spout Max package), which will install a utility called Spout Cam. After installing, you will be able to select Spout Cam as the camera in Zoom. Spout Cam should match the dimensions of your texture input by default. As mentioned above, you must make sure all apps are using the high performance GPU for this to work.

Sending Jitter to Zoom via Spout Cam

Not all webcam apps are able to detect and use Spout Cam directly, e.g. Skype did not in my testing. For these cases we can install the SplitCam app to act as a shim. SplitCam creates its own virtual camera device, and is also able to receive textures from Spout Cam. So with SplitCam running and its input set to Spout Cam, we are able to send directly to Skype.

Sending Jitter to Skype via SplitCam and Spout Cam

Background Subtraction

In this final section, let’s go even further down the rabbit hole and investigate how to improve the background subtraction. The patch above does a decent job of this, but it requires controlled conditions to be effective. Any change in lighting or shadows, or any small movement of the camera, causes the effect to break down. Modern camera apps typically use smarter algorithms, likely built on machine learning and extensive datasets, to handle the segmentation. To up our game we can install the popular (and free) Snap Camera app and search for a chromakey lens.

Before we dive deep into this, let me say that in my experiments I encountered different problems on both platforms. The following is presented primarily as an exercise in problem solving, rather than a practical solution. If that's not interesting to you, feel free to skip this section.

What you'll need:

  • Snap Camera
  • OBS, with the NDI plugin and NDI runtime installed
  • the NDISyphon app (for the Mac workaround described below)

First, launch Snap Camera and enable the chromakey lens of your choosing. On Windows, jit.grab can directly receive the output from Snap Camera, but I was unable to get it working directly on Mac (not sure why). To work around this we can again make use of OBS, but this time, instead of piping Jitter to OBS, we pipe OBS to Jitter using something called NDI. Install the NDI plugin and runtime linked above, then in OBS create a new Video Capture Device and set the device to Snap Camera. Click on the Tools menu, select NDI Output Settings, and click the checkbox next to Main Output. Next, launch the NDISyphon app; in the NDI Clients section you should see your OBS source. Click on it and enable it. Finally, in your Max patch you can receive the chromakeyed image with a jit.gl.syphonclient object.
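On the Jitter side, the receiving end is essentially a one-object affair (a sketch; the actual server name depends on what NDISyphon publishes, so check the object's available-servers listing if it doesn't show up):

[jit.gl.syphonclient @servername NDISyphon]
   |
(texture out -> wherever the background-subtraction sub-patcher's output used to go)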

Snap Camera to OBS to NDI to Jitter

If you want to take this monstrosity one step further, we can actually use OBS both to send from Snap to Jitter via NDI and to send from Jitter to a streaming service. The trick is to enable Studio Mode by clicking that button in the lower-right section of the interface. Then, in the NDI Output Settings, select the Preview Output instead of the Main Output. Finally, set the Snap source as the preview and the Syphon source as the program.

Using OBS Studio Mode to both send NDI out to Jitter and receive a Syphon stream from Jitter

Despite the ridiculousness of this setup, it does actually work. As proof, I live-streamed a little test clip using my patch above, replacing the background-subtraction sub-patcher with the Snap Camera chromakey lens output. Check it out here.

In Practice

You might be wondering how all this streaming stuff works out in practice. For that I'm going to turn it over to our good buddy Sam Tarakajian, who recently participated in an "online micro festival" called More Kicks Than Friends (Sam's bit starts around minute 38) presented by Max for Live master Ned Rush.

I wanted to perform something with my friend Alex, but of course the challenge was that we couldn't jam together in person. Our solution might sound a bit convoluted, but I was very pleasantly surprised by how well it turned out. First, Alex streamed his musical improvisation to me over Twitch. I kept that stream tab open in my browser, and fed the sound into Max using Loopback. If you don't want to buy Loopback, you could just as easily use Soundflower or BlackHole.

Once I had the sound in Max, I did some audio analysis to drive a couple of envelope followers, adding some audio reactivity to my Jitter patch. Then, so Alex could get a sense of what I was doing (albeit with some latency), I used exactly the technique that Rob describes: Max to Syphon to OBS to Twitch. Since Ned was asking for prerecorded video, I also used Syphon Recorder to record video locally (Rob describes using OBS for this, which would have been pretty much the same result). After we finished, we edited Alex's local audio together with my local video, and sent the result off to Ned. Luckily Syphon Recorder can record audio as well, which helped Alex edit the video properly.

On my machine I was simultaneously generating live visuals with Jitter, performing GPU accelerated face tracking, streaming video from Twitch, streaming video to Twitch, and recording video to disk. I'm amazed it was able to work at all, let alone as well as it did. But far from being a huge technical headache, all of the streaming tools we used were surprisingly easy to use. I'm looking forward to trying more streamed performance in the future.
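As an aside, the envelope followers Sam mentions can be as simple as this (a minimal sketch with arbitrary values; plenty of other object choices would work):

[adc~ 1 2]              <- Loopback / Soundflower / BlackHole input
   |
[average~ 100 rms]      <- running RMS over the last 100 samples
   |
[snapshot~ 30]          <- sample the envelope every 30 ms
   |
[scale 0. 0.3 0. 1.]    -> drive a Jitter or Vizzie parameter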

by Rob Ramirez on April 21, 2020

Pedro Santos

Hi, Rob (and Cycling74).
Thank you for this article. It's a useful and extremely relevant topic currently.
I've been testing approaches like these for a while, and while it is feasible to do some of the audio and video routing (much more than in the recent past!), it's still somewhat cumbersome, unfortunately, both on Mac and Windows, and some of the tools we depend on seem to be abandoned (for instance, the last blog entry on CamTwist's site is from April 2013).

I think much of the blame here resides with Apple and Microsoft, which don't cater to out-of-the-ordinary situations, like sharing audio streams between applications easily, or routing certain input devices to different outputs, etc. Concerns in newer OSes over security and privacy have even aggravated some of these problems. The thing is, nowadays most people teaching via videoconferencing are trying to circumvent some of these bottlenecks. For instance, it's ridiculous the amount of work it takes in macOS to record a QuickTime desktop video or a realtime videoconference with the audio that's being played by the computer (the output is not available as an input). We have to install a separate virtual driver and then, if we also want to record/stream with the microphone, aggregate both inputs in another virtual device.

What does Cycling74 have to do with this? I understand that it's not its responsibility or even one of its priorities, but I would love to use and recommend Max to others as a sort of essential tool when it comes to routing different control protocols like MIDI or OSC, audio, or video/texture streams. I think it is well positioned to occupy that space, and that there's a market for it. I mean, if we go to Rogue Amoeba's site and buy some of their useful utilities (Audio Hijack, Loopback, Farrago), we would probably end up paying around $200 alone, and even agreeing that they're great products, they don't compare to Max's broader range of features and programmability. So, these are some ideas about the features I wish Max had in order for it to be positioned as the de facto multimedia patchbay:

  • Be able to access several audio drivers/devices at the same time, without the need to aggregate devices in a virtual device outside Max. I would like to be able to do exactly the same as we do with a MIDI device (even including letter abbreviations), and have something like "adc~ device_x 1 2". Although I understand the additional problems with audio (samplerate, clock synchronization, ...), it's already done when separating input and output drivers, and other applications are able to do it.

  • Incorporate something like your "old" Soundflower in the Max installation and integrate it seamlessly into the environment. You were one of the first companies to realize the need for this kind of feature and develop solutions for it. Apple never expanded its implementation of the IAC Bus (MIDI) into an audio routing system, and Max could be the ideal patchbay (and much more).

  • In video, Syphon/Spout were a godsend, but unfortunately they're not a solution for every situation. The world is using Skype/Teams/Zoom videoconferencing all day long, and these platforms are only able to access video/webcam drivers. It would be very useful to have a "webcam out" Max object that would expose the video stream to the OS and applications through a virtual webcam driver.

  • Additionally, more official support for certain popular hardware devices would be desirable, rather than relying so much on the community to eventually develop externals. As great as the community is, many of these externals end up not being updated after a while (Wacom tablets, Wiimotes, the list goes on...), and official support would have a greater chance of assuring those updates. A middle ground would be for some community externals that proved popular to later be taken over by Cycling74, depending on the will of the original developer. I'm sure most of them wouldn't mind and might even like that option.

Once more, these suggestions come from my opinion about what I think Max could also be. Most of these features would directly benefit its current users, but they also have the potential to attract new ones, with a different profile.

Cheers, Cycling, keep up the good work (and stay safe)!

Dante

awesome article, I almost missed this one, learned stuff from the patch too 🦦

Fred Mercure

Hello, thanks for compiling all those ways to stream from within Jitter.
Do you or anyone reading this have a way to properly use Syphon?
I can't get it installed.
Got the patches, though. Thanks again.

Rob Ramirez

hi Fred Mercure, use the Package Manager to install Syphon - https://cycling74.com/articles/introducing-the-max-package-manager

Fred Mercure

Thanks rob.

Rob Ramirez

Another useful tool from our friends at Troikatronix - https://community.troikatronix.com/topic/6742/syphon-virtual-webcam

More options for sending from Jitter to Zoom or Google via Syphon.

Fab Max

Unfortunately I tried for hours to record the Spout output with OBS. First I tried it with SplitCam and got only a black screen, from both the capture window and the Jitter window. Then I installed the OBS Spout plugin, and while I am able to select the Spout source, the screen in OBS stays black. Any hints on how I could solve this? (I am only trying to record the video with sound; the SpoutRecorder freeware only gives a maximum 640x320 resolution; Max4Live.)
Thanks in advance!!!

Oni Shogun

very, VERY nifty indeed.
all four thumbs up.