Limits to jit.matrixset?
So I’m working on a program that captures a series of images and plays them back continuously while capture is ongoing.
I am using jit.matrixset to capture the frames, and then a counter -> outputmatrix $1 for playback.
In theory, and for small sets, it seems to work OK. However, this is a long duration application, and I am concerned about memory handling. How realistic is it to have tens of thousands of frames stored in a jit.matrixset?
Also, how reasonable is it to resize the jit.matrixset on the fly, by sending it new matrixcount values? Is this something that can be done every time a new frame is added?
----------begin_max5_patcher---------- 1149.3ocyY0zjiZCD8r8uBUT43DWHP7UNksx47KXqs1Bi05QaAHuBQlIYq8+ d.IvC1Cfa.aMbXfxBglW+5tepawO2twZO+UZgE5OPeFsYyO2tYiZn5A1z76M VYwuljFWnllUFsnH9H05I8yjzWkpwKnxp4g9Mb6SxKy3kxTpT8d3lQYGTylu +6+tSP6T0yS9umnZjXYg9Ryi9FOWlGmodf0mDr3zNqOKuc4c5L8B1+olN1Ym cyvmhkIOyxO9UAMQp+eD3XW8XjSnW8Me+5q3vc1nuT+J+Z615KOAjTpvxdpn eK2oGK2aPKmkKsdBYsON+3jIA7zIAak06Fnt4oYD6YRBYkoRVQJ6vTXB21oV G+vxqVPMxTPAGTCkACRdpafBWvn4xXIimW+b6wHnjJB5PrLt6XivNtZv3E8F llA87clb2oWX4G3u.mdbrfY8mDzhVyuiE3FRTNVh5lcykyuUujSODgajsdYz AH5aX+4xD4zWpLt2ofjQkBdUPHXADe+AYmYk+zqHBdmWjGNLXX1okVzJIdgK JIRxOdLkBmBBFUIYxd5Vaw2o0JVfsLQUQ+PyoJBvqFD0gIVnz39RorRWBrWM BXf8Dcqt1Zaxq857z4GnBfJrdprpH.ofMs5.vej0AzPMDcrdj6xpCneksDdY tjJPXz.rB48rBN5VYCs+cQhNPdxat7jqu5Vf6xJUXfPnpcGoSJ1AG9QF6zru X61iKqFxg3D36Gh8+HIChairhtnImcdlqfZLYkTPcaDA9dTP83pIvURbVWJI MbTS8RKTIY.NRhRQ6gG9fGsLaCE6f0sa33zIAZ17xb51.aCraCnkcf61c4ha aXhEQcuqgh3o6BTG7haudOiZqcZM0OkDeRVJnn+T9rnDYCuH5As5pE+q5E+R m4iNdFGp86Z9a1AzI7rLZsb0Ub1eUAcQLptkdAOsnehxtm1jGHrw8bby2DG2 2XIJCn6kyrWMNR3obwMm4v77PMu2Tytl8zxjA2p68KbOgi5dHmOOiqV+E215 +DKNaoIJ+yeW2a+3ot.7M.a1cT9Lv65NampnlN2WyaNOjc30mDhqKXsNOCbP HimqiUASjf6vQfLh33Oj6NJh2i7I1HRHbUQm0mpntZYrcvCoQJ9IZ9hS2bWT 6DCjE52IKzy4sChn4s1eriZp8YgT6qUSuc75nhe5MiHuCE2QWPRJuftXe.4Q 3CvNccBjwcB3tGc7c0InEjaOa4E4DtYAUETYkYGhHnjmiESV.AudDPZZehzT VUP+hsJbXkxxu9q6oVw5wujMK3khjVap8aTgdyUefVHY4miT+7aGcWmI8L6v A5Ee9kL1gS7pMma.w.9VvXJ.BlbMJlpO5gaio.yxStPvjmY4IHPxwnPp9X1u IlvQlESPBwq+vWlDS9Pvjg4oPn7D1bocPhwMqhINZ8ohCRwTAbh4vzJz0ABR 1lMqCBj9.nI7svjY2XAGt9hl.kzEZVLAJbp104ZNMbHaqXVVBBjtx89nwTzp ik.UwqQQDY0gHuUGhfTr6RZSo5G+Z6+yymjsI -----------end_max5_patcher-----------
You can probably work out roughly how big a frame of video is in memory and then extrapolate to see how many frames you could fit… but the real limit is probably going to be way less than that calculation. I’m not savvy enough to tell you why, though. I’d say past 10 seconds or so you’re probably pushing it… so getting into the minutes might be too much.
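That extrapolation is easy to do explicitly. Here is a quick back-of-the-envelope sketch in Python; the 640x480 resolution is an assumption for illustration, and the frames are 4-plane char (ARGB), i.e. 4 bytes per pixel:

```python
# Rough memory budget for storing raw video frames in RAM.
# Assumptions: 640x480 frames, 4-plane char (ARGB) = 4 bytes per pixel.

WIDTH, HEIGHT, PLANES = 640, 480, 4

bytes_per_frame = WIDTH * HEIGHT * PLANES          # 1,228,800 bytes, about 1.2 MB

# With a 2 GB process ceiling, the most frames that could possibly fit is:
max_frames = (2 * 1024**3) // bytes_per_frame      # 1747 frames

# So "tens of thousands of frames" at this size clearly will not fit in RAM,
# and the practical limit is lower still once Max itself needs memory.
print(bytes_per_frame, max_frames)
```

Even this ideal ceiling of ~1747 frames is well short of tens of thousands, which is why the disk-based approaches discussed further down become attractive.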
Changing matrixcount on the fly doesn’t seem to work… I think I tried it in something I was building and it reset the entire cache of images… so you can’t expand and contract it as you go, although that would be cool.
I wonder if there is a better way…
I am basically trying to allow playback of a timelapse movie as it is created. I’ve tried saving an image sequence, and then loading the images in sequence using a umenu, but that is very slow.
your only options are storing the frames in memory (jit.matrixset or 3D jit.matrix), writing the frames to disk and re-reading, or a combination of the two.
my guess would be a combination of the two is the way to go.
obviously it’s faster to store in memory, but this is limited by your available memory (or 2GB, whichever comes first).
I’m trying jit.matrixset, and it seems to work fine, but am having a hard time getting my head around the others.
Is using a 3D jit.matrix more efficient than a matrixset?
One thing I was thinking of trying was converting the incoming RGBA stream into YUV – then presumably I can use a jit.matrixset with half the horizontal resolution – is that correct? That would double the amount of frames I could store.
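For the arithmetic behind that claim: Jitter’s packed YUV format (UYVY, 4:2:2) stores two pixels in one 4-byte cell, so the matrix has half the horizontal dim but still 4 char planes. A quick check, again assuming 640x480:

```python
# RGBA vs. UYVY storage for a single frame, assuming 640x480.
# UYVY (4:2:2) packs two pixels into one 4-byte cell (U, Y1, V, Y2),
# so in Jitter it is a 4-plane char matrix at half the horizontal dim.

W, H = 640, 480

rgba_bytes = W * H * 4               # 4 bytes per pixel
uyvy_bytes = (W // 2) * H * 4        # half-width matrix, still 4 planes

assert uyvy_bytes * 2 == rgba_bytes  # exactly half the memory...
print(rgba_bytes, uyvy_bytes)        # ...so double the frame count in the same RAM
```

So yes, as far as raw storage goes, UYVY doubles the number of frames that fit in the same amount of memory (at the cost of chroma resolution).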
How would I go about writing individual frames into a 3D matrix? I think I understand how I could use jit.submatrix to read out frames…
Finally I don’t quite understand how a combination would work, bearing in mind that I want to be able to playback frames that may have just been recorded.
interesting thread, am looking forward to seeing what comes from it…
For the 3D matrix, I think Robert means that you set the first two dimensions (x and y) to the video resolution, then make the z dimension as big as the total number of frames you want to store (or "plenty big" to accommodate). Then you use jit.fill and jit.spill to store/recall each frame, where each one is one z index. So you wouldn’t use jit.submatrix, you’d just read matrix "slices". Remember that the matrix is still 4-plane char, but in 3 dimensions rather than the typical 2 used for video.
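The z-slice idea above can be sketched in plain Python (this is a conceptual analog, not a Max patch; all sizes are made up for illustration). One big preallocated buffer stands in for the 3D matrix, and each frame occupies one slice of it:

```python
# Conceptual analog (plain Python, not Max) of using one 3D matrix as frame
# storage: each frame occupies one "slice" of a single preallocated buffer,
# indexed by frame number. Dims are tiny assumed values for illustration.

W, H, PLANES, DEPTH = 8, 6, 4, 100

frame_bytes = W * H * PLANES
store = bytearray(frame_bytes * DEPTH)   # one flat allocation, like one big matrix

def write_frame(index, frame):
    """Copy a frame into its slice -- like writing one z index."""
    store[index * frame_bytes:(index + 1) * frame_bytes] = frame

def read_frame(index):
    """Read a frame back out -- like recalling one z slice."""
    return bytes(store[index * frame_bytes:(index + 1) * frame_bytes])

frame = bytes(i % 256 for i in range(frame_bytes))
write_frame(7, frame)
assert read_frame(7) == frame
```

The point of the single allocation is that writing a frame is just a copy into a fixed offset, with no per-frame allocation or resizing, which is also why resizing such a store on the fly is awkward.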
If there’s a faster way to store and recall them than jit.fill/spill, that would be good to know.
A combination of the two might be to use jit.matrixset to store a sampling of the frames, then read from there, but that seems like an extra step that’s not needed (?)
Using an image sequence with [umenu] is definitely slow; every time you load using jit.qt.movie there is a noticeable pause, even if it’s a single image. The exception is to load a movie of frames, then use "frame $1" to jump around… this won’t stutter or lag at all.
Robert, can I assume from your post that Max is limited to using 2GB of RAM?
@strontiumDog – yes
I’m not sure if jit.matrixset is "more efficient" than a 3D matrix. you would have to benchmark, but my hunch is that they’re pretty equal.
you can write frames using @dstdimstart/end and access them using jit.submatrix.
I’m sure there are other ways as well.
here’s a basic example, which creates a 10-frame delay, but it can be modified to achieve your goals.
----------begin_max5_patcher---------- 858.3ocyX00aaBCE8YxuBKz1SKMBa9Hj8T+UrWlppffap6B1YfYMqU6+9veP BoCRIXpWTUMhKNly83y83axqybbSY6wktfuB9Nvw40YNNxPh.N56cbyS1uda RobZtT7yrzmbmqdDGumKCuKY8O.D.o4AzpbVEeKlK+TPcTUH9u2gUuRWWvc5 GQxjqS8Zei+plU4AFkSSxky18a3hrDZRyy1kvW+Hgt49B7ZtZ4PddK7lCPKk WBWIFg0iGdIh0qj7hb8fhIc.rDZCVQhX+Y1LwvbyHka.vKjOHTd2TRrgTRn7 Rfm0njzJNmQuvzOMgto67O7b4nJ4hUY5gwCqyI4Bb51d+bMgMU6u9ib+02Wl u9pz1e4+aI+WlNIOxPJwy1RdNayls3oJ8gmIEgJstR2iPMie3J9bLufUWbMU k0nw5zqI.MMXQUeUdJtn6zGcts34miEBMToG1RMbwzvXjG43xxjM3+Qevd3g RLG3U+2mfSQq.HS8EUJjkgRpYoETH8QMYk7LRdIOongelCTwvzrZgLHpN1AK usDJdMqhxaKrLhHgl0.g1MEpOd0F0ZOQ3K18Lglwd9BK3DUaGYgcE3RLkmvI LZ6DCFISrnXwEO8PWbGL5L7iRfErrkeTzj17QeBpBbRFHi8BtXQN6WSQw1n6 wrsYbbr0J054PJgtorJMOgWP1CtstBSWdAA21xfx6BEU0K68p07T4UKBLxrS yBzEWHn0LxmpZrVjP3.RzOtpkdzDRuTbAPHB5qo8f2476l+6qisfwV9Do21U Nrdi6X7voir3fsfzQHG5+auYVcg9xpUVqr3LNK+jKraI3o28.YHKEXj9YhoI s6a.X8iIEM9udfaqJwptdDtwusmHulPGaIB1YGQASN4CMi70sfCCjMZFFZ.4 K+fxz8M+tbx2tH9o6Hkrph0MYZyWbFb78mgK4Dpr6m1Sx+jI8HIKCKedC9xI Y6X0VcZP.tqS8wPwTvPfDxpPR7qL9tXRz170FMIvMzdzT7PwzUFMEaW0T3PT S1ER9CARgVESnUWezDbHPxtBbzP15htLH46uZQXcu2v.UOUGuyTvNHoucOuA MjMU65s6ehsc2HJv5H55hihF.hrqgEb5KDMUZOjcM6Bo3qNS8A0BrAHp9l+L 6uveKFi3 -----------end_max5_patcher-----------
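The idea behind that patch can be sketched in Python (again as a conceptual stand-in, not Max code): a fixed-length ring buffer where each new frame pushes out the oldest, so the output is always N frames behind the input.

```python
from collections import deque

# Sketch (in Python, standing in for the patch above) of an N-frame delay:
# a fixed-length ring buffer where each new frame pushes out the oldest.

DELAY = 10                      # the example patch delays by 10 frames

buffer = deque(maxlen=DELAY)    # oldest frames fall off automatically

def process(frame):
    """Return the frame from DELAY steps ago, or None while still filling."""
    delayed = buffer[0] if len(buffer) == DELAY else None
    buffer.append(frame)
    return delayed

outputs = [process(i) for i in range(15)]
# outputs: ten Nones while filling, then frames 0..4 emerge 10 steps late
```

For the original poster’s use case the same structure works if the buffer is made large enough to hold the whole recording, with the read index driven by a counter instead of being fixed at the oldest slot.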
I found this thread: http://cycling74.com/forums/topic.php?id=24099 and specifically an incredibly helpful post by Andrew Benson, that seems to be the way to go.
Basically writing each frame as a .jxf file and asynchronously reading it for playback. If you have enough disk space, you can store a *lot* of video and access it easily.
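The mechanics of that approach are simple to sketch in Python (a conceptual analog: raw binary files stand in for .jxf files, and the file-naming scheme is an assumption for illustration). The recorder writes each frame to its own file; the player can read back any frame by index, including one that was just written:

```python
import os
import tempfile

# Sketch of the disk-based approach: dump each frame as a raw binary file
# (standing in for .jxf) and read it back on demand for playback.
# The directory and naming scheme are assumptions for illustration.

frame_dir = tempfile.mkdtemp()

def frame_path(index):
    """Zero-padded names keep frames sortable on disk."""
    return os.path.join(frame_dir, "frame_%06d.raw" % index)

def write_frame(index, frame):
    """Record side: write one frame to its own file as it is captured."""
    with open(frame_path(index), "wb") as f:
        f.write(frame)

def read_frame(index):
    """Playback side: read any frame back, including one just recorded."""
    with open(frame_path(index), "rb") as f:
        return f.read()

frame = bytes(range(256))
write_frame(0, frame)
assert read_frame(0) == frame
```

Because each frame is its own file, storage is bounded only by disk space, and playback can trail recording by any amount; the trade-off, as noted below, is that disk bandwidth becomes the bottleneck.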
Thanks for digging that up, Gian Pablo. Reading over this thread, I kept trying to find that example patch on my local drive and couldn’t. You’ll definitely need a reasonably fast external drive to make this work right. You might also experiment with writing out standard image formats if you need to eventually export your buffers to QT. My testing was pretty limited.
Thx Andrew, it’s working really well.
The good thing is that for timelapse, the framerate on playback is not critical, 15fps looks good.
I get the impression that read/write of jxf files is much quicker than jpg.
yeah, jxf files are uncompressed binary matrix data, so it should be pretty snappy if disk bandwidth/speed isn’t an issue. Raw image files might also be cool, but still have to be imported and converted to matrix data. I’m going to bet that disk speed and bandwidth (not CPU) are going to be the bottleneck with this approach, but if you can manage at whatever FPS you are getting, cool. Like I say, this is all pretty theoretical, as I’ve not really done many practical experiments.