Timelapse: rapidly playing back still frames from a folder containing a million images

Duffield's icon

Hey guys,

This is a semi-repost, but I'm really trying to figure this out and the process has been frustrating. I originally asked in this thread: https://cycling74.com/forums/playing-back-still-images-fast-without-loadram/ however, the answer of "compile them into a movie" is a bit ridiculous, as at 24 fps it would be a twelve-hour movie. I am doing a bit of random access based on some filtering (i.e., by image date / time range), so it's ideal that the images stay in folders and aren't just compiled into a movie. There are, once again, approximately 1.3 million "frames" per channel, so no, loading into RAM is not an option.

Adding to the complication, I'm trying to play back four channels of time lapse at once. Originally I thought I was working with tens of thousands of images, but after talking more with my client it's more in the millions. I'm trying to play back as quickly as possible (ideally 24 fps), but if it has to be 12-15 fps to play back four channels, so be it. Source images are 1280x720 .png files.

WHAT I'VE TRIED:
That being said, making JXF files for each frame is out of the question, as at 3.3 MB per frame I calculated that I would need something like a 30 TB hard drive... which isn't happening.
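(For anyone checking the math, the back-of-envelope runs something like this. I'm assuming uncompressed 4-plane char matrices, so the exact per-frame size depends on plane count and header overhead, but it's tens of terabytes either way:)

```python
# Rough storage math for writing every frame out as an uncompressed JXF.
# Assumptions: 4-plane char matrices at 1280x720, roughly 1.3 million
# frames per channel, 4 channels.
bytes_per_frame = 1280 * 720 * 4     # 4 planes, 1 byte each: about 3.7 MB raw
frames_per_channel = 1_300_000
channels = 4

total = bytes_per_frame * frames_per_channel * channels
print(f"{total / 1e12:.1f} TB")      # about 19.2 TB, same ballpark as above
```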

I have also tried the [importmovie] approach with a 4-plane char matrix; that was really slow.

WHAT SEEMS BEST SO FAR BUT IS TERRIBLY FLAWED:
If I use a jit.movie with read and @output_texture 1, two channels run fine. When it gets to three, things get chuggy (see patch below).

NOTE:
I am using a MacBook Pro: 2.5 GHz Intel Core i7, 16 GB RAM, NVIDIA GeForce GT 750M. My client may be running this on a Mac Pro, but what I worry about is spending $5000 on a computer that still won't run it right. If it requires two computers, or that is what's recommended, so be it; it's just that setting up a four-channel installation on one computer is less of a headache than on two.

Any feedback would be sooooooo appreciated as I've been at this for a while.

Thanks in advance.

hz37's icon

You probably don't want to hear this, but I would turn these pictures into several movies. I agree that one 12-hour movie might be too much, but why not make a bunch of movies? You can still skip to certain parts within those movies. Your client's happiness is all that counts, it will be cheaper and faster this way for sure, and you can be certain the movies play back at 25 fps or whatever you need. I have made an instructable about how to compile still frames into a movie in FCPX, if that is any help:

Good luck!

Floating Point's icon

I have no experience in this sort of thing, but I do have an idea, perhaps naive. Make your images into a (bunch of) movie(s). Make a database of the images with their tags etc., referenced to the movie frames that contain them, so you know where each frame is; that way the frames become pointers to the images you want to load. Sections of the movie(s), or different movies, would correspond to the virtual 'folders' you would ideally want, so there is some relevance to the way the frames are ordered. You may want to use Max's SQLite extension as a database...
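Something like this, sketched with Python's sqlite3 just to illustrate the shape of it (the table and column names are made up):

```python
# Minimal sketch of the pointer idea: each image's metadata maps to the
# compiled movie that contains it plus the frame offset inside that movie,
# so a date-range query returns jump targets instead of image files.
import sqlite3

con = sqlite3.connect("frame_index.db")
con.execute("""CREATE TABLE IF NOT EXISTS frames (
    filename TEXT PRIMARY KEY,
    shot_at  TEXT,     -- date/time parsed from the filename
    movie    TEXT,     -- which compiled movie contains this image
    frame    INTEGER   -- frame number inside that movie
)""")
con.commit()

# a date-range query then yields (movie, frame) pointers to seek to
rows = con.execute(
    "SELECT movie, frame FROM frames "
    "WHERE shot_at BETWEEN ? AND ? ORDER BY shot_at",
    ("2014-06-01 00:00", "2014-06-02 00:00")).fetchall()
```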
OK just read your previous post -- this idea is probably too flaky to implement.

You may want to look into something like Processing as an alternative to Max, as I believe many video programmers prefer it over Jitter. It may have what you're looking for...

Duffield's icon

This is a project where all my small-scale tests worked, but it's blowing up as I scale up, and I always try to start with foresight when I plan / take on a project.

@HZ37 Yeah, it's not what I wanted to hear, haha, but thanks for your input. And thanks for the tutorial, although I'm already familiar with FCPX and Premiere :D

@Floating Point
I have actually, really tediously, made a kind of virtual database that points to specific frame numbers based on parameters. I ended up using a bunch of [zl] objects, as I found trying to build an SQL database with over a million entries a bit shoddy, especially as I would need a way to automate the generation of said database. Admittedly I had never worked with SQL, so getting the data into the database was an issue, and I resorted to just taking the filename strings, breaking them apart, and using that as the "filter" for certain parameters. I understand SQL; it's just that even a test of building a simple database with over a million entries in Max via [uzi] and [defer] took over an hour (actually, as we speak it's still building... so I haven't tested how fast it actually is at querying one entry out of a million). I have a text file that I'm generating via a Bash script that grabs a folder and writes all the file names into a .txt. Annoyingly, I can't seem to generate a [coll] file with over a million entries from within Max and have to use [text]... which I wouldn't mind, except the [text] object doesn't send any notification when it reaches its last entry.
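(For reference, the filename-splitting idea amounts to something like this in plain Python. The filename pattern here is hypothetical; the real one depends on how the cameras name their stills:)

```python
# Sketch of the filename-as-database idea: parse a timestamp out of each
# line of the Bash-generated file list, then filter by date range in memory.
from datetime import datetime

def parse_timestamp(name):
    # e.g. "cam1_2014-06-01_13-45-00.png" -> datetime(2014, 6, 1, 13, 45)
    stamp = name.split("_", 1)[1].rsplit(".", 1)[0]
    return datetime.strptime(stamp, "%Y-%m-%d_%H-%M-%S")

with open("channel1_files.txt") as f:          # the Bash-generated list
    files = [line.strip() for line in f if line.strip()]

start, end = datetime(2014, 6, 1), datetime(2014, 6, 2)
selected = [n for n in files if start <= parse_timestamp(n) <= end]
print(f"{len(selected)} frames in range")
```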

Technically, my overly complicated zl setup is all working, and it works with videos; I just hit this stupid wall with the video file sizes, which was a total oversight on my part. I worry about the time it will take to load if it has to switch between videos, as in my experience, whenever a video loads, Jitter stops dead in its tracks. The client asked me to work in Max as she knows it best; however, I am really regretting not approaching it with something like Processing. Admittedly, my Max Fu faaaaar surpasses my Processing skill. Technically, maybe I could do the folder handling / playback in Processing and then send to Max via Syphon, but this feels messy and also throws away all the work I've done.

That said, I'm still super open to any suggestions. I'm just upset that this is going abysmally this late in the game.

Floating Point's icon

it may not be too late to look at Processing to do the pure video rendering engine stuff and do all the interface in Max: something like max_as_interface -> osc -> processing -> syphon -> max_as_display, so Processing is entirely in the background (= happy client)
but i'm not the one to proffer advice as I've never used Processing-- just bumping the thread for you....:-)

MrMaarten's icon

Hi Duffield

I can't test your patch extensively at the moment, but I can share some ideas and a patch of what has worked for me.

Max Patch
Copy patch and select New From Clipboard in Max.

Here is a patch I use to write and read sequences to and from disk:

I have almost the same MBPr, and reading sequences from the internal SSD is not a problem. I can write one stream and read two sequences back without problems (at 640x480).

I haven't tried 4 streams of semi-HD as you describe. But I have tried two HD streams in the past: they were plenty fast (25 fps). The bottleneck might be the resolution, or some I/O access thing on the SSD, or a bottleneck in Max.

Some ways to reduce I/O are:
- use PNG or JPEG instead of JXF: the files are smaller, and the decoding work the CPU has to do should be light enough (JPEG would be smaller still, so faster to read from disk).
- use the fastest hard disk with the best connection to the computer. If you can, spread the load over multiple hard disks. You can also put two SSDs in RAID 0 (this way the I/O is spread and the speed is doubled); in OS X this is easy to do with Disk Utility. You could also try two fast disks over USB 3 or Thunderbolt.
- make your pictures as small as possible and scale them back up again in Max. There is sometimes a point where you don't see a difference! It also depends on how far the audience is from the screens, and on the type of footage.
- first try to get 4 streams (with different access points) working reliably, to see where the bottleneck is. A quick read-speed test outside Max, like the sketch below, can help isolate disk I/O.
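A minimal version of that test (Python with Pillow assumed; the path and sample size are placeholders):

```python
# Time how fast one channel's PNGs can be read *and* decoded, to see
# whether disk I/O alone caps the achievable frame rate.
import glob
import time
from PIL import Image

paths = sorted(glob.glob("/path/to/channel1/*.png"))[:500]  # sample 500 frames

t0 = time.perf_counter()
for p in paths:
    Image.open(p).load()        # .load() forces the full decode
elapsed = time.perf_counter() - t0
if paths:
    print(f"{len(paths) / elapsed:.1f} frames/sec from this disk")
```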

I don't know enough about Processing to shed any light on that...

Hope this helps...

dtr's icon

I was going to reply along the lines of MrMaarten: what is the bottleneck exactly? Reading the files from disk? A patch inefficiency? Rendering?

To me it sounds like this should be possible if it's running off a fast SSD.

LSka's icon
Max Patch
Copy patch and select New From Clipboard in Max.

Have you tried loading images straight into a texture?
I made this quick and dirty patch and, on my system (which is exactly like yours), it reaches 13 fps.
A good optimization hint: the texture and the rendering context should be the same size, so the GPU doesn't waste time resizing.

There are a couple of "but"s...:
1. I generated some random PNGs (about 500) at 1280x720, and each was about 3.7 MB, so I guess your client's will be circa the same size, or they'll be compressed, which adds some CPU stress (and also forces you to load all the images from some very fast drives..)
2. The GUI goes completely crazy, even with the qmetro turned off; I guess this is related to creating 4 rendering contexts in one patch. You could try building 4 standalones, each with its own rendering context. That would also help optimize the threads' distribution.

Another suggestion that comes to mind: create a sort of "buffer" into which a number of frames are loaded ahead of time, to be played back later (see the sketch below).
Maybe this other project of mine can help you in this approach: https://cycling74.com/forums/sharing-is-looping-simple-opengl-video-looper/
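The shape of that buffer, sketched in plain Python rather than Max (the file path is a placeholder; in a patch this would be a pool of preloaded players or textures):

```python
# A background thread prefetches frames from disk into a bounded queue,
# while the playback loop only ever consumes frames already in memory.
import glob
import queue
import threading

def prefetch(paths, buf):
    for p in paths:
        with open(p, "rb") as f:
            buf.put((p, f.read()))   # blocks while the buffer is full
    buf.put(None)                    # sentinel: end of sequence

paths = sorted(glob.glob("/path/to/channel1/*.png"))  # placeholder path
buf = queue.Queue(maxsize=48)        # ~2 seconds of headroom at 24 fps
threading.Thread(target=prefetch, args=(paths, buf), daemon=True).start()

while (item := buf.get()) is not None:
    name, data = item                # decode/draw `data` at display rate
```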

Andro's icon

Nice patch, LSka.
Just a suggestion.
Using 4 jit.gl.videoplanes at a resolution of 5120x720 and then running that into MadMapper could work faster; combined with a Matrox TripleHead2Go you can then send everything out to separate monitors, while still running everything with one jit.gl.render and one jit.window.

LSka's icon

Yes, Andro, you're right. I was just concerned about the "4-channel installation" issue and cooked up a quick solution that anyway (correct me if I'm wrong) could help in a multi-GPU setting (if I understand correctly what I read here: https://cycling74.com/forums/rendering-multiple-gl-views-on-a-mac-pro-4-x-nvidia-geforce-gt-120/)

If you're going the single card -> Matrox TripleHead way, then Andro's solution (with MadMapper, or all inside Max using just the four videoplanes / cornerpins) is the way to go.

Andro's icon

Here's a "fake image buffer".
I think with zl.rot (only going forward) it would be possible to use that number to have other images already loaded before the videoplane is enabled.

Max Patch
Copy patch and select New From Clipboard in Max.

It's just a part, but I hope it can help you out.

Duffield's icon

Hey Guys,
I'm eternally grateful for the help, guys. I took a quick look at some of this, but I'm going to go through it all tonight in greater detail when I have a proper chance, and will have a detailed response. :)

I just thought I'd toss out a quick note on a couple of things:

@MrMaarten - I will try this out tonight, but as with LSka's comment, I'm wondering if I can make some sort of buffer, either with the images or with .jxf files. One thing I'm curious about: I thought .png would load faster than .jpeg, since .png compression is less lossy and so takes fewer resources to decompress. I could be wrong, though. I thought I did a comparison test and noticed an improvement with .png over .jpeg, but I kind of forget at this point.
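(Rather than trusting my memory, a throwaway timing test on the actual footage would settle it; something like this in Python, with Pillow assumed:)

```python
# Compare PNG vs JPEG decode speed. The stand-in image below is uniform
# black, which compresses unrealistically well: swap in a real frame.
import io
import time
from PIL import Image

src = Image.new("RGB", (1280, 720))

for fmt in ("PNG", "JPEG"):
    blob = io.BytesIO()
    src.save(blob, format=fmt)
    raw = blob.getvalue()
    t0 = time.perf_counter()
    for _ in range(50):
        Image.open(io.BytesIO(raw)).load()   # force the full decode
    dt = time.perf_counter() - t0
    print(f"{fmt}: {len(raw) / 1e6:.2f} MB, {50 / dt:.0f} decodes/sec")
```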

@dtr - The bottleneck is reading files.

@Andro - There are four jit.windows because it's going to be a 4-channel surround projection. I had considered something similar down the road, but hadn't really figured out a way to get one window fullscreen across 4 monitors. Yeah, I know you can scale a jit.window in Max, etc., but without fullscreen and hiding the menu I have always found that cumbersome (unless there's an approach I've missed). As always, if that's what it takes I'll deal with it, but a less complicated setup is always ideal. I'd also like to avoid using another piece of software like MadMapper, as it's an additional cost and I haven't used it (although I hear it's fantastic and want to learn more about it).

@LSka Still running at about 10 fps, with an unusable UI while it's running. I modified the patch a bit (pasted below, just as an updated reference for these forums :) ), as the openfolder prefix wasn't connected to channels 2-4. I am on an SSD and have a 2.5 GHz Intel Core i7, whereas the Mac Pro this may well run on is at 3.5 GHz. Whether or not that is the difference I need is TBD. I was wondering about constructing some sort of buffer, and that was the next stage of what I was about to start investigating.

"You could try building 4 standalones, each with its reendering context. That would also help optimizing the threads’ distribution."
By standalones do you mean export the Max patch into 4 standalone applications, or just separate patches? As a worst case scenario, once again I am entertaining the idea of networking potentially two computers, I just don't know whether that will be equally if not more expensive with a more complicated setup then having one work horse.

I'll report back soon! I hope that whatever solution I find is helpful to others with similar projects in the future! As always, if there's somehow some magic solution, I'm open to suggestions.

Max Patch
Copy patch and select New From Clipboard in Max.

LSka's patch updated for reference:

Andro's icon

I would say that using one render context, 4 jit.gl.videoplanes, a Matrox, and MadMapper would be far easier than using two computers, and cheaper.
MadMapper works incredibly well and is cheap for what it does. Very powerful as well. The demo is free; it just has a watermark.
A new computer would buy you a few Matrox units and a few copies of MadMapper, if you do the math. ;-)
Plus there's no super headache from setting up multiple patches and making them work together.
You can also chain Matrox units together: configure each one while plugged into the computer, then unplug it, power it from an iPad charger, and it'll be fully powered.
With two TripleHeads you'd have 5 possible outputs at lower resolution, or 4 at higher resolution.
LSka's patch would be 3 times faster with one render destination, so I think that's the way to go.

Andro's icon

Sending a texture to Syphon and then to MadMapper doesn't even need the jit.window to be visible. No fullscreen needed.
So you send your huge texture to Syphon.
This comes into MadMapper.
MadMapper sees your Matrox as one screen (the Matrox hardware splits the signal).
Matrox 1: output 1 to beamer 1, output 2 to beamer 2, output 3 to Matrox 2.
Matrox 2: output 1 to beamer 3, output 2 to beamer 4.
I have tested this with live VJ shows 8 hours straight and it works perfectly.
You just create a capture square in MadMapper per videoplane and send it out of each monitor. Capture zones can be specified in pixel measurements, so it's accurate.

Duffield's icon

Hey Guys,

Still cracking at it, but I thought I would post my results so far.
Just to restate: I'm developing on my MacBook Pro (2.5 GHz Intel Core i7, 16 GB RAM, NVIDIA GeForce GT 750M), but the final project can / will likely run on a Mac Pro (3.5 GHz, 16 GB RAM, dual AMD FirePro D500). I am still trying to optimize as much as possible so that I'm not just depending on the Mac Pro's extra power, in case even that can't handle it for some reason. The project will be exhibited in another country, so I'm trying to eliminate as many variables as possible beforehand.

@Andro : It looks like the client will be using one of the new Mac Pros with six video outputs, so I think the Matrox would be unnecessary (unless there's something I'm missing?). See my test notes for info relating to some of the suggested approaches. I'm doing the dev on a single-card machine.

MY TESTS (which I like to post in case once again it helps someone in the future :) ):
I tried a test on my MacBook Pro using a single [jit.window], four [jit.gl.videoplane]s, and a single [jit.gl.render], just to see if performance got a boost from eliminating the three other [jit.gl.render]s. There was no, or a negligible, boost.

I also did a test, assuming the above would boost performance on the more powerful machine: with an external monitor plugged in via HDMI, I tried to extend a single [jit.window] across an extended desktop. I could not: whenever I increased the size of the [jit.window] past the coordinates halfway onto monitor 2, it just repositioned itself on that monitor. I also tried matching my MacBook screen's resolution to the monitor's using [jit.displays], and the issue persisted. I understand that if I use Syphon into MadMapper this would not be an issue, as the [jit.window] does not even have to be visible... I'm just wary of pitching another $500 CDN piece of software (as great as it is / looks!) on top of a $6000 machine just to stretch a window over multiple monitors / projectors. That said, if it's cheaper than two computers and provides an advantage, then I would definitely recommend it.

@LSka Just as a little performance comparison for everyone's reference, on a MacBook Pro, between the patch you posted (loading directly into [jit.gl.texture] -> [jit.gl.videoplane]) and my original approach ([jit.movie @output_texture 1] -> [jit.gl.cornerpin]):

My patch: output at about 8-9 fps (which is shitacular).
Your patch: output at about 8.5-9.5 fps (which is still shitacular, but a slight boost).

As a general note: when I have a heavier patch, specifically with video, like this four-channel one, once it has been turned on, the Max UI becomes slow to the point of uselessness (i.e., 5-10 seconds just to make a new object before typing in its name) until I restart the program, even after the [qmetro] is turned off. This does not appear to be an issue in Max 6... which is really frustrating.

Thanks again guys! On to trying the buffer tests to see if that helps at all...

Roman Thilenius's icon

why can't you first filter the files while they are still on disk and then load only the ones you want to use into RAM? wouldn't you have to have them in RAM anyway at some point?

Andro's icon

How are you playing the movies?
With jit.gl.hap?
It sounds to me like your bottleneck is CPU-related; shifting everything to the GPU should give you massive boosts compared to using 4 render contexts.
I still think you're approaching this the wrong way with one long jit.window or one extra screen.
The system has to be built for how it's going to be used.
If you have a new Mac Pro with 6 outputs then that's great!
MadMapper should pull this off no problem. I use one render context with lots of heavy GPU processing at 5120x720 pixels, at 50 fps.
This gets mapped onto over 45 surfaces in real time.
Not using a jit.window and only sending the texture to MadMapper gives you a huge performance increase.
Another strategy is to replace every float number box with the f object, and every integer number box with an int object.
Drop message boxes like "param $1" and replace them with a prepend object (message boxes updating are killer for the CPU!).
umenu is also a GUI object; try replacing it with a coll holding your file names, and just send a random int for the file to load.
Basically you want to replace every item in Max that needs to be redrawn; the moment you have over 50 of them, even the best patch will stutter.

Do you also have this in presentation mode ??

I still believe this system doesn't need to be rendered from movie files (which requires more work from the CPU).
A buffer system for preloading images, and just using a lot of jit.gl.videoplanes, should work.

dtr's icon

If the bottleneck is in the reading of files, why is there all this talk of rendering methods? Is it confirmed that your patch runs smoothly with a limited number of images loaded in RAM? Only then can it be concluded that the reading of files is indeed the bottleneck.

A piece of advice: test/develop this on the actual machine that is going to run the installation. It's pretty much guaranteed that differences in system specs between your MacBook and the Mac Pro will throw up issues, for example in the rendering department (NVIDIA mobile GPU vs. dual AMD GPUs).

Andro's icon

After modifying your updated patch to run with one render context and 4 jit.gl.videoplanes, I can confirm the bottleneck is purely the reading of the files AND populating the umenu object with a large list.
I added a speedlim 160 object so that the random object doesn't cause output on every tick of the metro; basically it's running at a quarter of the speed of the qmetro 40.
Results so far:
speedlim 160 = 23 to 25 fps
speedlim 80 = 7 to 8 fps
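(The arithmetic behind "a quarter of the speed": qmetro 40 fires every 40 ms, i.e. 25 bangs a second, and speedlim lets at most one bang through per interval:)

```python
# Rate math for the speedlim experiment (ms values from the patch).
qmetro_ms = 40                 # qmetro 40 -> 1000/40 = 25 bangs/sec
for speedlim_ms in (160, 80):
    rate = 1000 / speedlim_ms  # max triggers per second after speedlim
    print(f"speedlim {speedlim_ms}: {rate:.2f} loads/sec "
          f"({rate / (1000 / qmetro_ms):.0%} of the metro rate)")
```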

I also created 4 instances of the file loader to load files from 4 different folders on the computer; this gave a very slight increase in speed compared to all 4 umenus reading from the same folder (even though they're reading from 4 different locations on the hard drive :P )

Max Patch
Copy patch and select New From Clipboard in Max.

Does your client need a strobe effect??
If not, speedlim is a quick and far easier solution. Still, after running the patch for 2 minutes, Max slows almost to a standstill if I so much as create a comment box and start typing into it.

Duffield's icon

Hey Guys,

Sorry for the slower reply; I'm doing the multiple-contract freelance hustle. I've been hacking / testing a couple of things, but I may take @dtr's advice and see where I'm at when I get the Mac Pro, as it has better specs and can output six video channels, four of which will be used. I'll have to wait two weeks for the machine though :( but I will update everyone with my solution / results in case it helps people in the future.

@dtr - I am using [jit.movie] with @output_texture 1, as I heard it is pretty much comparable to [jit.gl.hap] in that it outputs to the GPU. Perhaps I should compare the two? Currently (as my patch below demonstrates), I'm just reading files straight into a [jit.gl.texture].

@Andro - I'm hoping to use four render contexts vs. one long window, as I see no performance boost as of yet, and it just makes more sense. I'm really trying to avoid going into MadMapper; as mentioned before, an extra piece of software is just kind of annoying, and costs more. It's kind of irritating that sending it to another program would give me a huge performance increase :( Thanks so much for the GUI tidbits, although I'm not using a crazy number of message boxes. As a note, I have removed [umenu] from my final patch, as I found that with several thousand images it caused Max to hang... and since the final patch has four channels, if I had four [umenu]s with over a million images in each, I can't imagine it working. Instead, I am using a [text] object containing the names of all the files for a specific folder; that way I avoid [umenu] altogether. FOR THE RECORD: I originally used [coll], but found that filling / dumping a million entries led to crashes, whereas it didn't with [text]. Might be useful info for others trying something similar. Also, the effect is time lapse from still images, nothing stroboscopic.

@Roman Thilenius - I am trying to conceive of a way to load multiple still images into RAM per channel. One thing I have noticed with [jit.movie] vs. Max 6's [jit.qt.movie] is that the loadram message is not available with the AVF engine, although I guess [jit.gl.hap] still has it. I'm still working on a way to load several frames into RAM by loading them into separate video players, building on @Andro's "fake video buffer" patch. My implementation isn't totally working yet, so I haven't posted it, and I may wait and see *how bad* it is on the more powerful machine. That said, I'm going to finish my version of the fake video buffer out of sheer curiosity, and if it works out I'll be sure to post it :)

Attached is the portion of a much larger patch that deals with playing back the images. The difference between this and my final patch is that the final one has a bunch of [zl]s filtering a per-channel .txt file listing all the file names in the folder assigned to that channel. The project involves 4 cameras that have been taking stills for a time lapse for about a year, from four different perspectives on a location.

Max Patch
Copy patch and select New From Clipboard in Max.

Thanks so much again, guys, for all your suggestions! I'll hopefully have an update on my solution in a couple of weeks!

Rob Ramirez's icon

hi duffield, i would also recommend trying your patch in 32-bit mode with the QT video engine (either set @engine qt in the jit.movie box, or change the Max preference Video Engine to qt). it might be that the qt engine reads still images faster than avf, and it's very easy to test.