Limits to jit.matrixset?

Jan 5, 2011 at 9:08pm


So I’m working on a program that captures a series of images and plays them back continuously while capture is on-going.

I am using jit.matrixset to capture the frames, and then using a counter -> outputmatrix $1 to playback.

In theory, and for small sets, it seems to work OK. However, this is a long duration application, and I am concerned about memory handling. How realistic is it to have tens of thousands of frames stored in a jit.matrixset?

Also, how reasonable is it to resize the jit.matrixset on the fly, by sending it new matrixcount values? Is this something that can be done every time a new frame is added?

[Pasted Max patch]
#54269
Jan 6, 2011 at 4:53am

You can probably work out how big a frame of video is in memory and then extrapolate to see how many frames you could fit in memory..but it's probably going to be way less than that calculation..I'm not savvy enough to tell you why, though. I'd say past 10 seconds or so you're probably pushing it..so into the minutes might be too much.
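To make the extrapolation concrete, here's a quick Python sketch of the arithmetic (the frame dimensions and the 2 GB budget mentioned later in the thread are assumptions, and real-world overhead means the practical limit will be lower):

```python
# Back-of-envelope estimate of how many uncompressed frames fit in memory.
# Assumes 4-plane char (ARGB) matrices, the typical Jitter video format.
def frames_that_fit(width, height, planes=4, budget_bytes=2 * 1024**3):
    """How many raw frames fit in a given memory budget."""
    bytes_per_frame = width * height * planes  # 1 byte per plane per cell
    return budget_bytes // bytes_per_frame

# 640x480 ARGB is ~1.2 MB per frame, so a 2 GB budget holds ~1700 frames
print(frames_that_fit(640, 480))
```

At 30 fps that's under a minute of full-rate video, which lines up with the "into the minutes might be too much" guess above.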

Changing matrixcount on the fly doesn't seem to work..I tried it in something I was working on and it reset the entire cache of images..so you can't expand and contract it on the fly, although that would be cool.

#195296
Jan 6, 2011 at 5:18am

I wonder if there is a better way…

I am basically trying to allow playback of a timelapse movie as it is created. I’ve tried saving an image sequence, and then loading the images in sequence using a umenu, but that is very slow.

#195297
Jan 6, 2011 at 9:20pm

your only options are storing the frames in memory (jit.matrixset or a 3D jit.matrix), writing the frames to disk and re-reading them, or a combination of the two.

my guess would be a combination of the two is the way to go.
obviously it's faster to store in memory, but this is limited by your available memory (or 2GB, whichever comes first).

#195298
Jan 7, 2011 at 2:00am

I'm trying jit.matrixset, and it seems to work fine, but I'm having a hard time getting my head around the others.

Is using a 3D jit.matrix more efficient than a matrixset?

One thing I was thinking of trying was converting the incoming RGBA stream to YUV – presumably I could then use a jit.matrixset with half the horizontal resolution – is that correct? That would double the number of frames I could store.
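For what it's worth, the arithmetic behind the YUV idea can be sketched like this (assuming UYVY, where two pixels share one 4-plane char cell, so the matrix is half the horizontal size of the ARGB version; the 640x480 dimensions are just an example):

```python
# Storage comparison: full-width ARGB vs half-width UYVY matrices.
def frame_bytes(width, height, planes=4):
    """Raw size of one 4-plane char matrix frame."""
    return width * height * planes

argb = frame_bytes(640, 480)        # full-width ARGB frame
uyvy = frame_bytes(640 // 2, 480)   # UYVY packs 2 pixels per cell
print(argb, uyvy, argb // uyvy)     # UYVY is half the size per frame
```

So yes, under that assumption the same memory budget holds twice as many frames.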

How would I go about writing individual frames into a 3D matrix? I think I understand how I could use jit.submatrix to read out frames…

Finally I don’t quite understand how a combination would work, bearing in mind that I want to be able to playback frames that may have just been recorded.

#195299
Jan 7, 2011 at 4:17am

interesting thread, am looking forward to seeing what comes from it…

For the 3D matrix, I think Robert means that you set the first two dimensions (x and y) to the video resolution, then make the z dimension as big as the total number of frames you want to store (or “plenty big” to accommodate). Then you use jit.fill and jit.spill to store/recall each frame, where each one is one z index. So you wouldn’t use jit.submatrix, you’d just read matrix “slices”. Remember that the matrix is still 4-plane char, but in 3 dimensions rather than the typical 2 used for video.
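Conceptually, the z-slice scheme described above looks like this in plain Python, with a flat buffer standing in for the Jitter matrix (the dimensions are made-up examples, and this only illustrates the indexing, not any actual Jitter API):

```python
# Treat a 3D matrix as a stack of 2D frames, one per z index.
W, H, PLANES, DEPTH = 320, 240, 4, 100
frame_size = W * H * PLANES
buf = bytearray(frame_size * DEPTH)   # the whole 3D "matrix"

def write_frame(z, frame_bytes):
    """Store one frame at slice z of the stack."""
    buf[z * frame_size:(z + 1) * frame_size] = frame_bytes

def read_frame(z):
    """Recall the frame stored at slice z."""
    return bytes(buf[z * frame_size:(z + 1) * frame_size])

write_frame(5, bytes([7]) * frame_size)   # store a dummy frame at z=5
```

Each frame lives at a fixed offset, so storing and recalling any slice is constant-time.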

If there’s a faster way to store and recall them than jit.fill/spill, that would be good to know.

A combination of the two might be to use jit.matrixset to store a sampling of the frames, then read from there, but that seems like an extra step that’s not needed (?)

Using an image sequence with [umenu] is definitely slow; every time you load using jit.qt.movie there is a noticeable pause, even if it's a single image. The exception is to load a movie of frames, then use "frame $1" to jump around…this won't stutter or lag at all.

#195300
Jan 7, 2011 at 12:41pm

Robert, can I assume from your post that Max is limited to using 2GB of RAM?

#195301
Jan 7, 2011 at 6:13pm

@strontiumDog – yes

#195302
Jan 7, 2011 at 6:30pm

i’m not sure if jit.matrixset is “more efficient” than a 3D matrix. you would have to benchmark, but my hunch is that they’re pretty equal.

you can write frames using @dstdimstart/end and access them using jit.submatrix.
i’m sure there are other ways as well.
here’s a basic example, which creates a 10 frame delay, but can be modified to achieve your goals.

[Pasted Max patch]
#195303
Jan 9, 2011 at 2:20am

I found this thread: http://cycling74.com/forums/topic.php?id=24099 and specifically an incredibly helpful post by Andrew Benson, that seems to be the way to go.

Basically writing each frame as a .jxf file and asynchronously reading it for playback. If you have enough disk space, you can store a *lot* of video and access it easily.
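A minimal sketch of that frame-per-file scheme, in Python (the file naming is my own invention, and raw bytes stand in for .jxf matrix data; a real patch would use Jitter's own read/write messages):

```python
# Disk-based frame store: each captured frame goes to its own numbered
# file, and playback reads any frame index back at random.
import os
import tempfile

folder = tempfile.mkdtemp()  # stand-in for the recording folder

def frame_path(n):
    """Hypothetical naming scheme: zero-padded frame index."""
    return os.path.join(folder, f"frame_{n:06d}.bin")

def record(n, frame_bytes):
    with open(frame_path(n), "wb") as f:
        f.write(frame_bytes)

def play(n):
    with open(frame_path(n), "rb") as f:
        return f.read()

record(0, b"\x01" * 16)
record(1, b"\x02" * 16)
```

Since every frame is an independent file, playback can freely read frames that were recorded moments earlier while capture keeps writing new ones.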

#195304
Jan 10, 2011 at 6:01pm

Thanks for digging that up, Gian Pablo. Reading over this thread, I kept trying to find that example patch on my local drive and couldn’t. You’ll definitely need a reasonably fast external drive to make this work right. You might also experiment with writing out standard image formats if you need to eventually export your buffers to QT. My testing was pretty limited.

#195305
Jan 11, 2011 at 5:48am

Thx Andrew, it’s working really well.

The good thing is that for timelapse, the playback framerate is not critical; 15fps looks good.

I get the impression that read/write of jxf files is much quicker than jpg.

#195306
Jan 11, 2011 at 8:49pm

yeah, jxf files are uncompressed binary matrix data, so it should be pretty snappy if disk bandwidth/speed isn’t an issue. Raw image files might also be cool, but still have to be imported and converted to matrix data. I’m going to bet that disk speed and bandwidth (not CPU) are going to be the bottleneck with this approach, but if you can manage at whatever FPS you are getting, cool. Like I say, this is all pretty theoretical, as I’ve not really done many practical experiments.

#195307
