VJing with Ableton and Jitter
I would like to do a hybrid DJ/VJ set. I've thought of a few scenarios, and I also have a question.
Which would be the most reliable system?
1. Entirely in Max 6
I would create two "decks" that play video and sound, plus a mixer to crossfade between them. The challenge here is the huge amount of work needed to assemble everything, since I'm still a fairly novice user.
2. Ableton Live to play audio, sending MIDI/OSC signals over to Max 6 to trigger video.
In this scenario I take advantage of all of Ableton's assembled audio components. The only problem is I don't understand how a slowed-down music track would also slow down the video in Max.
3. Max for Live for the whole set.
I have my qualms about the stability of Max for Live and Jitter. Feel free to tell me it's no riskier than option #2.
Now my question pertains to option #2: how do you send a message from Ableton to Max 6 to trigger a bang for a video?
Any tutorials or patches are much appreciated.
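For the mechanics of option #2: Max's [udpsend]/[udpreceive] objects speak OSC over UDP, so anything that can emit an OSC packet can trigger a bang in the video patch (e.g. via [udpreceive 7400] → [route /video/trigger] → [bang]). A minimal Python sketch of what such a packet looks like on the wire (the address "/video/trigger" and port 7400 are just hypothetical examples):

```python
import socket
import struct

def osc_message(address, *args):
    """Build a minimal OSC packet (address pattern + int32 args) by hand.

    OSC strings are null-terminated and padded to 4-byte boundaries;
    the type-tag string starts with ',' followed by one 'i' per int arg.
    """
    def pad(b):
        return b + b"\x00" * (4 - len(b) % 4)  # pad to a multiple of 4

    msg = pad(address.encode())
    msg += pad(("," + "i" * len(args)).encode())  # type-tag string
    for a in args:
        msg += struct.pack(">i", a)               # big-endian int32
    return msg

# hypothetical: Max patch listening with [udpreceive 7400] on this machine
packet = osc_message("/video/trigger", 1)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 7400))
```

In practice you would not hand-roll packets from a Max for Live device; a [udpsend 127.0.0.1 7400] object in the M4L patch does the same thing. The sketch just shows that the link between the two processes is a plain OSC/UDP message.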
The best is option 1, but I am biased.
For you it sounds like you should use option 3.
This guy has done all the work for you already:
I do your option 2. Works great.
You could easily use the Max for Live API to send commands to the jit.qt.movie object in the Max instance to slow the video when the audio slows.
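The mapping itself is just a ratio: if you know the tempo the video clip was made for, the rate to send to jit.qt.movie (where 1.0 is normal speed) is Live's current tempo divided by that original tempo. A minimal sketch, assuming a hypothetical original tempo of 128 BPM:

```python
def playback_rate(live_tempo, clip_original_bpm):
    """Video playback rate that keeps a clip locked to Live's tempo.

    jit.qt.movie's 'rate' message treats 1.0 as normal speed, so a
    track slowed from 128 to 96 BPM should play at 96/128 = 0.75.
    """
    return live_tempo / clip_original_bpm

# e.g. a M4L device observing the song tempo could compute:
rate = playback_rate(96.0, 128.0)
print(rate)  # 0.75 -> send "rate 0.75" to jit.qt.movie
```

On the Live side, a live.observer watching the song's tempo can recompute this whenever the tempo changes and forward the resulting "rate" message to the Max instance.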
IMO it depends on your hardware and setup; my pick would be 1 as well, though I'm a die-hard Live user.
This weekend I hooked up all my gear (mostly USB devices, including audio) in Win7 32-bit and had the latest Live crash on me several times. Up until now I blamed my hardware/OS, because I know Windows often reacts a bit funny to USB devices. Add ASIO4ALL, which is pretty stable but often commented on as "if you have native ASIO drivers, get those instead", and I ended up puzzled. Looking back, I should have concluded differently, considering that nothing happened as long as the hardware was connected but not used; but alas, talking after the fact is always easy.
2 weeks ago I upgraded my Reason 4 (which has no sound support) to Reason 6 (which does).
This weekend I hooked up all my gear with Reason 6 and started playing. Obviously Reason couldn’t do much with my APC40 but it still reacted.
3 hours straight without any crash whatsoever.
With Live, written with all due respect, I can't reach 15 minutes. If I hook up all my gear, load a demo song and start playing, it crashes within 15 minutes.
If, on the other hand, I skip my USB audio devices and use the internal stuff Windows provides (even through ASIO4ALL), there is no problem.
So as much as I hate to say this: when in doubt use Max.
If Live proves stable enough, then 2 or, better, 3 may be the best (easiest?) solution.
Apart from that….
For the record; crash reports and additional data have been sent to Ableton.
I'm working towards 2 as a solution, as I think it's the most stable going forward; I'd hate for visuals to crash M4L and stop any other processing M4L is doing (e.g. audio, controller work). Also, the nice thing about 2 is that if you wish to offload the visuals onto another laptop, it's easy to do, since you are already sending commands to a separate process.
Thanks for the responses so far. I still have a lot to consider! Definitely taking a closer look at v-module and 0xf8.
I am trying to make the same design decision you are, Mr. Tunes.
I find myself leaning towards option 2. I like the idea of having Max as the brains of the performance and sending commands. It is so much easier to handle sensors, map data, and create relationships in Max.
I do have one question: can you send an audio signal from Max for Live to Max? I would like to send audio-rate control signals, like a clock pulse or phasor, to the Max brain to use in various processes.
I would be interested in hearing any new feedback.
Hello, I think you can use ReWire to go from Ableton Live to Max.
You can try out the scenario and see how you like it. CellDNA is a "Max patch", and I have M4L plugins that tie Ableton to CellDNA to trigger videos and effects:
You can even extend CellDNA with your own patches – this could save you a lot of time because you don't have to program your own video playback engine!
I use udpsend and udpreceive to send messages to the video patch (and from the video patch back to Live).
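For the return direction (video patch back to Live), the receiving side just needs to pull the address pattern out of each incoming OSC packet and route on it. A minimal Python sketch of that listener side (the port 7401 and address "/clip/done" are hypothetical examples, matching a [udpsend 127.0.0.1 7401] in the video patch):

```python
import socket

def osc_address(packet):
    """Extract the address pattern from a raw OSC packet: it is the
    first null-terminated string, e.g. b'/clip/done\x00...' -> '/clip/done'."""
    return packet.split(b"\x00", 1)[0].decode()

# hypothetical: listen where the video patch's [udpsend 127.0.0.1 7401] points
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 7401))
sock.settimeout(0.1)
try:
    data, _ = sock.recvfrom(1024)
    print("video patch says:", osc_address(data))
except socket.timeout:
    pass  # nothing received within the timeout
```

In a real rig the Live side would of course be a [udpreceive 7401] object inside an M4L device rather than a script; the sketch only illustrates what travels over the wire.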