I’ve been working on a video step sequencer — one that works exactly like an audio step sequencer, but uses short video clips instead of audio samples. There are up to 8 possible slots for loading videos — and four different ways to arrange those videos in the window (full screen, vertical stripes, horizontal stripes, and random rectangles).
I’ve attached a version without any video footage along with this post — to load footage into the sequencer, drop a couple folders with video clips in them into the footage folder; the sequencer should then load them automatically when it next opens. If you’d rather just jump in and work with some pre-supplied video, you can download this version from my webspace:
In general, the functionality is there, but I find there’s more lag in loading/playing back the video files than I would like. Does anyone have any suggestions as to how I can speed up playback? It becomes clear, particularly when there are multiple videos going, that playback is laggy and irregular. Is it just an issue of having 8 QTs/videoplanes going, and there’s no way around it? Or are there some steps I could take to streamline the process a bit? I’d love whatever thoughts you might have.
Maybe you could use matrices to store the video? Then it would be uncompressed in RAM, which should give you an advantage.
I guess RAM becomes an issue, as there is a 3GB (or is it 2GB?) limit. Not sure how much video that is in minutes, but maybe not enough for you. But maybe you could load it into the matrix asynchronously.
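To put a rough number on "how much video that is in minutes", here is a back-of-the-envelope sketch in Python. The frame size, frame rate, and 4-byte ARGB pixels are assumptions (matching a typical char jit.matrix), not figures from the thread:

```python
# Rough estimate: seconds of uncompressed video that fit in the ~3 GB
# address space of a 32-bit process. All parameters are assumptions.
width, height = 640, 480        # assumed clip dimensions
bytes_per_pixel = 4             # ARGB char matrix
fps = 30                        # assumed frame rate
ram_limit = 3 * 1024**3         # ~3 GB

bytes_per_second = width * height * bytes_per_pixel * fps  # ~35 MB/s
seconds = ram_limit / bytes_per_second
print(round(seconds))  # roughly 87 seconds of footage
```

So at standard definition you get well under two minutes of uncompressed footage in RAM, which is why it may not be enough for a large clip bank.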
In my latest (m4l) project I store my video on a Ram Disk and frame dump it into a jit.matrixset when it is loaded.
If it is triggered before it is all dumped I use jit.qt.movie but once all the frames are dumped and the matrixset is full I use that as the performance seemed better (for scratching).
Seems a bit overcomplicated, but it’s the best performance I could get. The framedumping was so I could load a video whilst playing another without the dreaded stutter – maybe this isn’t so much of an issue for you, as the videos can be loaded in advance.
Similarly I found a ram disk avoided the stutter that ‘loadram’ was causing on my system. But not sure this is an issue for you.
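The fallback logic described above (play from jit.qt.movie until the framedump finishes, then switch to the jit.matrixset) can be sketched as a tiny state machine. This is an illustration of the idea only; the class and names are hypothetical, not part of the actual patch:

```python
# Sketch of the fallback scheme: while frames are still being dumped
# into the matrixset, read from the movie object; once the dump is
# complete, switch to the RAM-resident matrixset for better performance.
class ClipPlayer:
    def __init__(self, total_frames):
        self.total_frames = total_frames
        self.dumped = 0                  # frames copied so far

    def dump_frame(self):
        """Called once per frame as the framedump progresses."""
        if self.dumped < self.total_frames:
            self.dumped += 1

    def source(self):
        """Which object to read the next frame from."""
        if self.dumped < self.total_frames:
            return "jit.qt.movie"        # dump not finished yet
        return "jit.matrixset"           # all frames cached in RAM

p = ClipPlayer(total_frames=3)
print(p.source())        # jit.qt.movie
for _ in range(3):
    p.dump_frame()
print(p.source())        # jit.matrixset
```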
I downloaded your file – it looks good! I couldn’t notice the lag myself, as it’s by nature quite glitchy anyway. Maybe you don’t need to worry about it!
Or maybe you could combine all the videos together and just extract the relevant bits?
Just thinking out loud!
instead of using 8 snippets, you could have these 8 videos in one file and just jump to the right positions.
if something causes a delay when triggering, so that playback starts late, you can solve this problem by
implementing a logical delay into your sequencer routine, in order to compensate for that time.
not sure how to do this for video, but perhaps you’d just add 12 frames before each snippet (which
could be copies of frame 1) and always start actually playing the video 0.5 seconds before it is sent
to the display: think "tracks", and not "sampler".
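The "logical delay" idea above amounts to shifting every video trigger earlier by a fixed pre-roll, so the first real frame lands exactly on the beat. A minimal sketch, assuming a 500 ms start-up latency (the function and variable names are illustrative, not from any actual patch):

```python
# Sketch of pre-roll compensation: schedule each padded clip to start
# PRE_ROLL_MS early so playback reaches the real first frame on the beat.
PRE_ROLL_MS = 500   # assumed start-up latency (~12 frames at 24 fps)

def video_trigger_times(beat_times_ms, pre_roll_ms=PRE_ROLL_MS):
    """Return the times at which to start each padded clip so that
    the padding has elapsed exactly when the beat arrives."""
    return [t - pre_roll_ms for t in beat_times_ms]

beats = [1000, 1500, 2000, 2500]      # sequencer step times in ms
print(video_trigger_times(beats))     # [500, 1000, 1500, 2000]
```

The clips then need the corresponding half-second of padding at their head, as suggested, so that what is on screen during the pre-roll is just a held copy of frame 1.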
gavspav — the framedump/RAM disk idea is interesting, but the problem is sync sound — I’m using both the audio and the video from the clips to create audio/visual rhythms. This is why accurate triggering is so important — I’m aiming to create on the fly audio/visual "beats" in the same way that you might with a normal audio step sequencer.
Roman — given that I’m working with a huge bank of video clips, I think combining them all into a single clip might be cumbersome. The other problem is that this wouldn’t allow me to play multiple videos at once, the way the current version of the sequencer does. However, the delay idea seems worth exploring — I’d have to create a second set of clips with the extra space, but jumping in a file that’s already playing might cut down a fair amount of the lag time. I’d be curious to know if anyone had played with this idea in video/Jitter before, and how well it worked.
thanks for the suggestions!
Yes I’m using sound too. I load the sound into a buffer object which can be driven with phasor~ or groove~.
I take the playhead info, divide it by the sample and frame rates, pass it through a change object and use this as the index of the jit.matrixset.
Provides ‘perfect’ sync.
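The playhead-to-frame conversion described above can be written out explicitly. A minimal sketch, with assumed sample and frame rates, where the filtering step mirrors Max's [change] object:

```python
# Sketch of the sync scheme: derive the video frame index from the audio
# playhead position so picture follows sound. Rates are assumptions.
SAMPLE_RATE = 44100.0   # audio sample rate (assumed)
FRAME_RATE = 25.0       # video frame rate (assumed)

def frame_index(playhead_samples):
    """Convert an audio playhead position (in samples) to the
    corresponding video frame index for the jit.matrixset."""
    seconds = playhead_samples / SAMPLE_RATE
    return int(seconds * FRAME_RATE)

def changed_frames(playhead_positions):
    """Emit a frame index only when it differs from the previous one,
    like Max's [change] object, so frames aren't re-read needlessly."""
    last, out = None, []
    for p in playhead_positions:
        f = frame_index(p)
        if f != last:
            out.append(f)
            last = f
    return out

# 44100 samples = 1 second = frame 25 at the assumed rates
print(changed_frames([0, 500, 1000, 44100]))  # [0, 25]
```

Because the frame index is derived directly from the audio playhead, the video can never drift relative to the sound, which is what makes the sync 'perfect'.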
I would suggest that if you are aiming for optimum triggering performance then you need to separate audio and video files anyway.
Mind you, I’ve been wrong before…
Hey good job MBroach, it looks great!
Definitely you should add @vol 0. to your jit.qt.movie objects and handle the sound in a parallel chain.
Even sfplay~ objects would work better. But it’s not worth trying spigot~ for your project.
You could also try these optimisation methods, using @colormode uyvy and @unique 1:
----------begin_max5_patcher---------- 1239.3oc6Y00biZCE8Y6eEZ3k8EuLHg3q9T5S8k1eAc5jQFTbTBHQ.YGmty9 eu5CriSVCVI0FyNsdl.FAQbtGe08dNv2lOyaoXKs0C7Kf+DLa12lOalYH8.y 5NdlWEYadIo0bYdRwpUkTuE1SwWWIVKKoRyIgciZGR9RM0NydLtzC7WcmslH yumwWcaCMWZu.bpevBPTbfdW39s6+WXEl6sX4CeEG6sedZHUTIs4VJmrrzbu BdEWL9dXoG66ymq2rvwvjSeVc21cqjzsFj58.S5+LiWHdFTstTxpKonuJpo7 e62ONkf5iRVR3q7V.7bfWRiLLBLFp2g5iXh1gf6DbIWQMlS8qMLR4gmok82l y.Q9WZ1RBVBV9I4Ey9g3l3H+X0mzjEfDjMkIdH9IYxQO5joUk9skjk+P1D3l 6XkTPdt+5W17BpY0RheISR8eXaK3FxZonhHY4f.vMErp1bh5hQ9.n+GjtUX3 1Uk2pwz5F5ISHg1LxD6ZTbLdHFOL87P4nyNkugUPE0kDN8HD+aHWYCg2dmno RQBsTI.8I3W0j0v15L2lXSlSRGjaylpoyMTdAsYLKOB6JONHggCllkGAk.ZC osmNpXmJSp4I0e144zDVrMCKAMHeAmb70Spt8MB0OzeP0GmrURGsDXnkgqog QWuZZUz1VxJ5OvKMTRwGjRFtPjs5dGeDlXVck1CeDNIqw+jzuRrgQUUyKH0R .DbyZN6o0T82xEkpB5hBJP2a8xUPuKsBanQT5v4U3q2xsbQUEUoP+8DYO85B 5OhsINQ1BxHa.mzSCrOpb0zKe.KumB1qlhHU+Hubsj1BjjGUayIMTf3Nf9p9 xePxUkhpYaokfMBQgP7kcSWIiSyEq4xCyiblBwo1UccYMclgvGmDg+TnoENs 0zhvFeCgHmzzN4J2YUQziMqPW0ObxdjH0BvzDHLJRSNpvJKNLQ8cbzfz0jMC 8HV.fWcK.n.apXB1EK.nqG2NVRQPYVO8Xyi9HLcHoHg+uTj94Qqj+c6FVKRH dRtn8cO2M3EzXIzxOYt7b2PA+j3DeDHrvLGbhihmnNwGvGty8QOsGbq9Nn8Y 5FmLHSk7eGO3czhUDV7fhJPWwmq3E40eXi8TGd6GnrQ4sezmAI3mzQH9fPqW CgAWOCg0JYf6uIKWY5J2UXSC4.+T8F7B8g6w9w4fkj7GW0nr+UbXJwwJYZS3 sOJtPy8AGk0yO66vFaEWnlhRV9iGN8mWBnVqykKU5eE7CArEiIYVJwtYOZGU Vqyj7onM7mk1LSlwH+6d2nFDoG+sbYqXcS9tU66pbCP6wTAsUx3FB8vKRUDE 7JvumUTP4GxJErV8RaS.EbzeU+P3IXhgG3Ivi173XgmTW3mjwCOYtfmzQCOg N.GsezwBNPWReFW3L0nmSU8Q6xarviSoOiHdvtfmwCNtT7ILazvC1ozmSU74 .4.A9vn3z3v8ZBhvYAn2pKnhUTKTBk6Z3lg7SCUehUc4UlKQY3zXn11Thezg C8JJOuge3IBe8qS18v+.ASCF0Pk7Fc3klo20Eq5CtTgYv+1v7LimSUSe75.i QtPOiWMKb3DCOtTCEOdJ3vwtfmwqGrSBbSGU3bwZwXsKQpq2PaZ6lSCRTlLe vVELdg4PF2dnYF8ZnaX6t9n45Y66y+GvVTf+E -----------end_max5_patcher-----------
Florent — Thanks for your suggestions. I’ll definitely give those a try, see how well it speeds things up.
gavspav — do you have a demo patch that shows your method? I’m sure I could figure it out with a little trial-and-error, but if you’ve got something rigged up already, I’d love to see it. Definitely sounds promising.
I’ve just put this together to at least let you know what I’m on about.
There is no framedumping here, just the a/v sync.
You need to load the video and corresponding audio/wav files manually I’m afraid.
For the demo I just used a pwindow – sorry!
FWIW I made a glitchy m4l video shuffler a few years ago, before I knew this method! It just uses jit.qt.movie, as I remember.
----------begin_max5_patcher---------- 1533.3ocyZs0aaaCE9YmeEDF6gsgrLdS21asCnCCnEcXqnurNTHaw3vBYQCI pljVz9aeR7PoDmXWKRo35WjyQRw767wyc5Oe1r4KT2Hpli9Mz+hlM6ymMal4 Vs2XlUd170o2rLOsx7ZyWpVuVTnmeN7Ls3Fs49uoTtZknr69E0qkE4Bs4ehX u4lT8xqjEqdeoXoFVTJ6B74njf1qAj1qT7EXz+c2WipV288fs2UlYVR0hO7K AIcq3kpBcQ5Zg4QOqTlle+mTI+j4ID5E3169kyNq8x4iTq+GcZoF8F4ZARVf dUkqpeRRqJGiMjPnGpezQQ8avwB22aAkiPLaqA3CnczcncAGV6fuA8saDvhN W1tMgluHsX079kZB3fE0ZspvY66XfCLevocW2CGP1AGv1qlZUQe1OEW27c+H qY4N0N590tXtQ6hLtuL5EAeyc3cocDO2gmxc184b+RUZF5Y0YRE5ExbA5OKz Jzyqu7RQ46JdVQF5sxLw8e1qR0kxadWwO95KQ+tptrRfzWIqPUWopyyPKDnz ZsZcpVj8ScqVtrPrTUWXVRtiVWLt07xbMvD.Ij5R.Dd7203m+g7ihJzeopjZ op.81z7ZA54B80BQABiRaHYxNYJly4YLAin7.fvLocZYOGnpvuqT0qpy0xM4 2hVbK5khhU5qPpFyrb4FmiIEYhEyXXvrA6ddGN2ctnCPkhpF8KscCeufp6hW Q2FBC9hxFLi96FWQ2YOiaFKf4O6wNNYs2cTd8ctTCUmIgPpbFCbhNfJui.87 Sg.86gPxahyaxk5Jq.lr1BbBwdvJXOXkIuxlOH0Wr4ZYQl5ZmcFnFmAdBjGJ DRDwhcoJOFeupZagbSZENKupg7b2kmFYzRLcPUxx1gNx7XitU6sUydj7DpJR 2zTrh9qMpmq0BRgRcYwFafXl6dCLe7Ftrw6cZ4l0hppz6LR5ImlkdSsdsoFO zOPbmeBfp+M1RIP1iXW3GZrOFQGAyl1.H.uTIzMoGvHNpwSqDE19mwX2GJ.T dVHPRQwt25HMzCtpQOdOnG1.OO4L2O23oEbgmlRr.9ccU5lqF0mNqqjqJZd7 QfV9UDAiwNyKLHpCChSyi7fW3mDgf1q0h6VJbn07Xe6PmROQrT7ZDTrPXnZv G9LCJB6DZFT9QBbXvpLr2ChiPNkHg8L7phKUeEszi9gYgPBFJsa1.eauj3GS PIdG1nggxkUlO6uwP9iiRIOkhM4oKENGGFF2Yb7c1atUoS3IZgNKLC+ySqLh sGZXJcIt6ExGiQ1j6FJ9TV5RmSHACOgBm0ALLt1qCe1Ir4SX6NxUe02l9isi qxiZLNoK8ZUoR8Qw2vDmcPtgBV5Qgdjnwet47mDVZeQFyUpMH26AjDBFOFBh 6QOfjnu2gF2GiHWuQUpWq9nT3Q2wbhIVPLzbLwchg8jQLl2wb5CO3fxMpQ68 2lspT0kK6VgtiwDcmpjIpzxBy7uu+K0PYHR+KckLKSTb+XfqkYaTM0TYAAAG 2VlBIB3JbDzpLYuyLevnkL.zt86bPvl.kTYs9iR5kFKVYCkYc.rrn6Qr.xmB dscVuGlWibCqVtjvfRWCgS2ZaI3i1YFDXkFqlLDKDpqJBXNaOUEqZ0JcLncN YTztE6L53Q6VAA1MXSb0bNFb2L4bXQQ8RiDqrngXFD5FZ4TNz8EL7FaAEaKw femP7vcHwwcF6ORhAoXnD7ik3A1iG1LgDizHYmGn36lcZO5CWXmN7YmtNKnW ZrnM9IXujx3.aFtE2N9v9rgD1+Apz.PanAeTK2x6kFKZGRDnGnRC.sF+X6QR 0gc73i.QGRwJOPkFpk.yNIe.6FowhV5PrDBbEslj9L6uGQK1CFeI.sQtCODX cLj.O.BuCCagkP6kFK0xGB0RcqpUK9X1eBAvH5MRiDsjAj3jR8JhPGXsVES. 
XoSeE1cEUCy389RikXildvZqEk10zRRuzD3fcPrxbs9InhInacVTPuzQHZ.g 3WzfD3GrPHtWZjfcHUy3XjKFEZzBG2cDUcRSPrfClAyO2qXS513fdgsfZivW N6+gs.Mc5 -----------end_max5_patcher-----------
Oh I just found the basis of the framedump method in my email archives from Dave@Cycling74 Support.
Hope all this stuff is some use.
----------begin_max5_patcher---------- 861.3ocyWssbaBCD8Y6uBML8QWORxHtzmZ+N5jIC1n3nTC3BxMtMS92qjVnV NAG.AI0ODGuqvnyd3b1U7z7YdqKNxq7PeA8czrYOMe1LSJchY0wy7xRNtYWR k4x7x4OVr9AuEvRR9QoIsDsCgaxleHSjuiKM+BRcx8IxM2Kx2daIeiD1w.1R 1BDgfWhWfVQ0AT02Q2b59TbP1bin0YEolcTghOuJtYKuqHWlmjwMK8sRQxtl Uf6f7264vl54s.4IxUP3Fqeak3Ol0IzkXc1mmOW+whdxJY7ppjs7WQKh7T9Q zmHsxLzKyLgwZJgtxvLr.8mjnKxLjVXlPWXlojRdCgxZGEJgQNKTXNJTVmju 8ifV1TbHWxKQXUg0tOh0oZAyz+KxGBtH832B836.8n8PK.V5izRskKuqTgPC iMTe0JR.Hd.pJzAiE9J0XkxuiWtq3wg5sn9QVTBzq4MTOsPIzf+2TxkzJFgR 5gr8HhpILZvsga3FUKGsPAGMb8BkckpWJU6JGcxJgJ4IoNJdHQlAUDFqC0yp VHHrislM+8APTOHjK+obYVwuDb0vGUGZeL5qIGjEUxjRoyG6wOBHKZGbVKiy nNPYpx31rDYo33aPbjkrXFIJbB8fZYEp3dM+MT6GPTf1Jzgd0jfqTumVRAOK p3R8DejOZy8Ikn.kzxOxQEEMDbggQCWQQd2jTtQbZFZ+ipiOO7AZ0jQLzrlA bBMbPrwplMccxlerU0pLO0dWaueD5FqB0rYd6D4u70qLHVm+7pup3P4llaWy 61nFa0f4TdkTjmHEE41Wj5faH5+tn6Eooby535LYhz8EpilUChHncMLOiQLT ic.EC4nmhNqpV3VYf6nLnmcMcVEgLKfaELRnpaWzITeQ8zSrFGeBp5uORjR6 CPYNQpAQVjZPz3e9y5AVItI.hswZ7Df0v2ArBVNpOn.fVymEDDdpJLAisJ76 i5HzopfBu2zofoPczqlbCFq0pCqfwhU7z2IaEEdeKfWIg9uNp9r0PGjyC.uZ .90A99VNCcvXa3f6yyIraEOLywNZJPamOoBFFZorHKzZGMAMy6hZoXm.asnv NZrfkM8lfFtjQsMAmEccXB5y4CnL2JdvyaGM1SyzmCInOn6vG7RiIVirheoE PE777+hmH2Wn -----------end_max5_patcher-----------
gavspav — it took me a while to get around to it, but I finally had a chance to implement your jit.matrixset and audio sync methods — and it works so, so much better now. Much less lag in triggering — particularly on the audio side, where it was most noticeable anyway. Thanks so much for your help!
Great. If you turn it into an M4L device and share it, let me know.