A couple questions about my 'Butterfly Net'
I’m the one who posted the ‘Can Jitter do this?’ post from before. Now that I’ve gotten into this a bit, I have some deeper questions about memory and some other things.
I spent 6 hours today going through the tutorials, and Jitter is pretty powerful, wow. Well documented too. I’ve had some fun playing, but I still have no idea what these ‘giant pulsating donut’ things are that you all joke about. Well, someday, maybe. :>
Ok, so here is my text-drawing of how I am planning to set up my ‘speed up a party in real-time’ video. Mind taking a look, and giving a critique? I’d really appreciate it.
jit.qt.grab – grabs the video footage
|__jit.qt.record – records all the footage for safekeeping
jit.matrix – stores the video data as a buffer (this will work, right?)
Then this matrix receives a bang, causing it to dump all the data downstream to…
jit.qt.record – this saves the footage to disk
jit.qt.movie – this will be set to play back the saved movie at 10X
|__when the movie is done playing, somehow it sends a bang to the jit.matrix above, causing it to dump all the data again.
And then I’d be done!!
Ok, so here is what is worrying me:
1. Can a matrix store video like this? The matrix seems like a memory ‘buffer’, but I haven’t gotten it to work that way, so maybe I’m misunderstanding it.
2. If it does store the video, is it storing it in RAM? I have a 1 GHz PowerBook with 1 GB of RAM running 10.4.6, and I think my computer will die once the footage gets long. How might I deal with this?
3. Is there a way to get the matrix to ‘dump data’ with a bang like I’ve shown? Or am I thinking idealistically?
Finally, does anyone know how to make a movie send a ‘bang’ when it is done playing? I tried to see if it does this naturally, using the ‘LED’, but it didn’t seem to. This is key to the process repeating itself.
Quickly, as I am about to fall asleep:
To know when a movie is finished playing, enable the loopreport attribute on
jit.qt.movie, and watch the rightmost (dumpout) outlet for the loopnotify message.
jit.matrix can definitely act as a buffer. On the forums there is a
patch that shows how to make a ‘video buffer’ (search for that).
Basically, a [jit.matrix 4 char 320 240] stores ONE frame of video
(alpha, red, green, blue: the ‘4 char’ part) at 320 pixels by
240. That’s the matrix part, the 320-by-240 grid of pixels. If you want to
store more than one frame, you need to add a ‘3rd dimension’.
So a [jit.matrix 4 char 320 240 100] stores 100 320×240 ARGB video
frames. You have to use the srcdimstart and srcdimend messages to
move through it, which still confuses me to this day.
Fortunately someone (Wesley Smith) wrote a sexy wrapper for this,
called xray.jit.3dbuffer.pat in his Xray objects.
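If it helps to see the idea outside Max: the 3-D matrix trick is essentially a ring buffer of frames. Here is a rough Python/NumPy sketch of that concept (the class and method names are mine, not anything from Jitter; the axis order is a NumPy convention, not Jitter’s internal layout):

```python
import numpy as np

class FrameRingBuffer:
    """Conceptual analogue of a 3-D jit.matrix used as a video buffer:
    N ARGB frames stored in one array, written to in a circle."""

    def __init__(self, frames=100, height=240, width=320, planes=4):
        self.buf = np.zeros((frames, height, width, planes), dtype=np.uint8)
        self.frames = frames
        self.write_pos = 0

    def record(self, frame):
        # write one frame into the next slice, wrapping around when full
        self.buf[self.write_pos % self.frames] = frame
        self.write_pos += 1

    def slice(self, start, end):
        # roughly what srcdimstart/srcdimend do: read a sub-range of frames
        return self.buf[start:end]
```

Once you fill the buffer, new frames overwrite the oldest ones, which is why a fixed-size matrix can only ever hold the last N frames of video.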
At first glance your methodology seems correct, but I think you might
want to remove one of the jit.qt.records (why record the same data twice?).
As a matter of fact, I know someone who wrote a pretty kick ass video
step sequencer using almost just these principles. It works very
well, and is quite fun to watch :)
v a d e //
A dumb question: how does one use these posted lists of command-type things?
#N vpatcher 10 59 1091 845;
#P origin 0 2;
#P window setfont "Sans Serif" 9.;
#P window linecount 1;
#P newex 340 102 40 196617 t clear;
What does one do with this stuff? I see it throughout the forums, but I don’t even know what to call it, so I haven’t been able to search for it. I’m assuming it can recreate a patch?
PS. Vade, thank you for being a super-star~ I will update on my progress, this is cool.
Copy the text and paste it into a new patcher; Max will rebuild the patch from it.
i’m the one who’s writing the "kickass video step sequencer". :) it’s
coming along pretty well. once i work out the last few bugs in my
logic, i’m going to post it to the list and my site.
p.s. yes, thank you, vade, for your long-standing, inspiring
commitment to super-stardom. Where other stars are hesitant to tread,
you tread firmly, steadfastly, upon super ground.
On May 19, 2006, at 9:29 AM, Dan Winckler wrote:
> p.s. yes, thank you, vade, for your long-standing, inspiring
> commitment to super-stardom. Where other stars are hesitant to tread,
> you tread firmly, steadfastly, upon super ground.
Don’t go making me jealous, come on. ;)
So I’ve looked through a lot of the forums, and though I see allusions to video buffering, I am still pretty regularly confused as to how this would operate. I see how xray.jit.3dbuffer keeps a little bit of video in frames (though I’ll admit I couldn’t get the 3dbuffer example to play on my computer or pick up footage from my camera). Would anyone be willing to share a real example where video buffering is used to store more than a minute of video?
For now I’m going to try to just record straight to disk and send that to playback. I think this will require that I prepend all the successive video I take.
matrices are grids of numbers. nobody ever said that a grid had to
be two-dimensional.
a one-dimensional grid of numbers is a single row in an Excel sheet.
a two-dimensional grid of numbers is an Excel sheet, or the building
blocks of an image. this is how a two-dim matrix can hold an image.
a three-dim matrix is multiple images. if they’re in a row, they can
look like a series of movie frames.
so if [jit.matrix 4 char 320 240] is one frame, [jit.matrix 4 char 320
240 30] is thirty frames, or a second or two.
wes’ objects are wrappers for putting multiple images into a single
three-dimensional matrix and making it act like a video buffer.
does that make more sense?
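The same progression can be sketched in NumPy terms. (The axis ordering here is just a NumPy convention I chose for the illustration, not Jitter’s actual memory layout.)

```python
import numpy as np

# 1-D grid: a single spreadsheet row of numbers
row = np.zeros(320, dtype=np.uint8)

# 2-D grid of ARGB cells: one 320x240 video frame
frame = np.zeros((240, 320, 4), dtype=np.uint8)

# a stack of frames: thirty of them, i.e. a second or two of video
clip = np.zeros((30, 240, 320, 4), dtype=np.uint8)
```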
It seems like you are getting a little ahead of yourself. Start with
the jit.matrixset object, which allows you to create a video buffer
pretty simply, and then write out to disk as a QT movie. This part
should be pretty straightforward. No matter how you slice it, you’re
going to run out of RAM eventually, so you’ll want to write to disk
occasionally. The next problem will be coming up with a playback setup
that plays all your saved sequences in order before playing through the
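The buffer-then-flush pattern described above (keep a bounded chunk in RAM, write it out when full) can be sketched like this. This is a generic illustration in Python, not jit.matrixset’s actual API; the class name and file naming scheme are mine:

```python
import os

class ChunkedRecorder:
    """Keep a small in-memory buffer of frames; when it fills, flush
    the chunk to a numbered file on disk so RAM use stays bounded."""

    def __init__(self, out_dir, chunk_size=30):
        self.out_dir = out_dir
        self.chunk_size = chunk_size
        self.buffer = []   # in-memory frames (here: raw bytes per frame)
        self.chunks = []   # paths written so far, in playback order

    def record(self, frame_bytes):
        self.buffer.append(frame_bytes)
        if len(self.buffer) >= self.chunk_size:
            self.flush()

    def flush(self):
        # write the current buffer out as one chunk file, then empty it
        if not self.buffer:
            return
        path = os.path.join(self.out_dir, "chunk_%04d.bin" % len(self.chunks))
        with open(path, "wb") as f:
            for frame in self.buffer:
                f.write(frame)
        self.chunks.append(path)
        self.buffer = []
```

Playback would then walk the chunk files in order, which is the “plays all your saved sequences in order” part of the suggestion above.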