Anti-Keyframe VJ Application
I posted a message about this subject earlier but received no
response at all. Perhaps it helps to cut the crap and put it in a PDF.
////////////////////////////////// Important notice start
Please check out this fancy PDF:
////////////////////////////////// Important notice ends
I would really like some comments from people that work with VJ’ing
or patch VJ applications. Do you recognise the problem I pose?
Vade posted a message April 1st about DBV on a thread about modular
patches. DBV is a good example of the trend I describe: "eyes fixed
on preview screens". No discredit to the DBV people, it is some
really fine work you've done, but the concept resembles what I'm used
to seeing in commercial VJ software products based on a modular approach.
> however someone HAS released a very nice, simply user extendible
> environment for video mixing : DBV)
As I have a background in playing music I find it natural to work
towards the instrumentation of the VJ tool. Does this sound familiar?
Anyone out there?
I agree with Greg, very few "VJ’s" strictly use their computers anyways…
Almost every program that I’ve seen or used has had MIDI compatibility,
allowing the user to control with whatever instrument they please.
As I see it, the preview screens are usually not that helpful…
But on top of all this, we're dealing with VISUAL media… it doesn't really
make a lot of sense to take your eyes off of it.
PS: Carl Carlsen? Is that your real name? Why isn't there anyone on this
board named Homer Simpson… or C. Montgomery Burns?
> Vade posted a message April 1st about DBV on a thread about modular
> patches. DBV is a good example of the trend I describe: "eyes fixed
> on preview screens". No discredit for the DBV people, it is some
> really fine work you’ve done, but the concept resembles what Im use
> to seeing in commercial VJ software products based on a modular
I have to agree 100% with Gregory here. He hit the nail on the head.
I find once I spend enough time with a patch (especially the one I
built), I ‘get what I want when I press the shiny button’ and can
spend most of my time really listening to the music, and watching the
dialog between what I am doing and what the musicians are doing
(amusingly enough sometimes with my eyes closed and head bobbing),
and not staring at the preview screen trying to get something to
happen that should have happened 4 bars earlier, etc *.
(IMHO), this comes down to knowing your tools. I've watched people use
off-the-shelf software (modul8, for example), and do things with it
that I haven't seen done anywhere else. Last night I had the pleasure to
watch my friend Ilan play a set with Photoshop. Fucking incredible.
He knows his tools so very well, and just made it dance and sing all
while playing live. I'm not sure what my point is here exactly, but a
good craftsman doesn't blame his tools. Some craftsmen build their
own, but it simply comes down to knowing your tools in and out, being
able to anticipate, and generally having your shit together.
Interestingly, I generally don't use a MIDI keyboard or alternative
input device besides my mouse and keyboard. I'm not sure what that
says, but I've spent a lot of time honing my UI to do exactly what I
want it to. The only reason I chose a modular framework for my patch
(and why most others do as well) is that it makes it so much easier
to extend the full application.
But the bottom line is, you need to choose an approach that fits
your style. I've only glanced over that PDF, but it seems you have a
system that works well for you. It's just a matter of taking the
time to fully explore it, and getting it to do tricks you didn't
anticipate it being able to do. That's when it gets fun.
echoing Greg yet again, YMMV++
*(Gregory, this means I get the dancing strobing torus exactly how I
want it :)
v a d e //
> *(Gregory, this means I get the dancing strobing torus exactly how
> I want it :)
Non problemo est. I’ll just be in the other room watching that guy
do the Photoshop set….
I’d have to say that my main "point of contention" would be that the patch allows little room for concentrated narrative advancement. While the artist may free themself to listen and respond more physically to the music, they’re sacrificing the ability to respond to it in an intellectual manner. Navigating a sea of ideas with this program seems nearly impossible. Putting clip selection on very generalized autopilot is not a good way to tell a story. This is all couched in my personal taste, which leans toward the figural and away from the hyperkinetic.
That said, I like the idea of "duking it out" with a controller as a very physical form of interaction with the music. Obviously this approach is going to be way more effective at a show with quick beats and changes than at an ambient dronefest.
My personal approach has been to automate the hell out of audioresponsive elements, allowing me to put those layers on autopilot and focus more on the selection of ideas. I actually see your concepts working more effectively in an A/V band environment. Select the song and then focus on the performance. Combine performers, and you’ve got a show.
As to the "eyes fixed on preview screens." I don't see how that precludes listening. I'm pretty much funneling information from the sound environment, and translating it through a complex interface into video. DJs cue with headphones, VJs cue with previews. Now, if your complaint was that VJs tend to get focused on their computer instead of the people/physical environment around them, that seems more valid than implying that they aren't listening because they're looking.
As far as physical interfaces go, I'm usually selecting ideas with a laptop touchpad, but physically performing more with a midi controller and video mixer. A game controller is compact, but is sort of reliant on being held in both hands. I set up a PS2 controller once with visualjockey, but found it a bit like trying to drive a stick shift and eat a crumbly taco.
I have a friend that helped out with the XBox music mixer program that included a video toy. It worked in a similar manner with the machine going fullscreen, and all the effects and clip selection done without navigation. The limiting factor there was the ability to seek out specific clips, and the small number of effects/lack of ability to combine effects for different feels.
Ideas for improvement.
I'd add audio analysis so you don't have to tap in the beat, and maybe some sort of ability to link/automate some of the effects to the beat.
If you want to get really fancy, you could try to make a tool that analyzes a large video clip for perceived qualities that fit the different categories of clips, then chops the clip into subclips and organizes them that way. Kind of the scrambled hackz methodology, but applied to clip organization.
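For what it's worth, that clip-organization idea is easy to prototype outside of Max. Here is a minimal Python sketch that chops a frame sequence into fixed-length subclips and buckets each one by two cheap perceptual proxies, mean brightness and frame-to-frame motion energy; the thresholds, category names, and synthetic "video" are all invented for illustration:

```python
import numpy as np

# Sketch of automatic clip bucketing: chop a long clip into fixed-length
# subclips and sort each into a category using two cheap perceptual
# proxies: mean brightness and frame-to-frame motion energy.
# Thresholds and category names are made up for illustration.

def categorize(frames, subclip_len=10):
    buckets = {"dark": [], "calm": [], "busy": []}
    for start in range(0, len(frames) - subclip_len + 1, subclip_len):
        sub = frames[start:start + subclip_len]
        brightness = sub.mean()
        motion = np.abs(np.diff(sub, axis=0)).mean()
        if brightness < 0.2:
            buckets["dark"].append(start)
        elif motion > 0.1:
            buckets["busy"].append(start)
        else:
            buckets["calm"].append(start)
    return buckets

# Synthetic "video": three tiny 8x8 greyscale subclips, values in [0, 1].
rng = np.random.default_rng(0)
frames = np.concatenate([
    np.zeros((10, 8, 8)),        # black subclip       -> "dark"
    np.full((10, 8, 8), 0.6),    # static grey subclip -> "calm"
    rng.random((10, 8, 8)),      # noisy subclip       -> "busy"
])

print(categorize(frames))  # {'dark': [0], 'calm': [10], 'busy': [20]}
```

On real footage you would swap the synthetic frames for decoded video and probably add more proxies (dominant hue, edge density), but the bucketing skeleton stays the same.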
could you explain how he did a photoshop set?
just all photos? in jitter?
is that it?
Gregory says, and the rest seems to agree:
> I’m not sure I’d be in such a hurry to present your particular
> solution as any kind of binary opposition to current practice.
I may have presented the problem as black and white, and of course in
practice it's a combination. However, "my personal experience" is
limited to what I've seen and I don't yet have a full survey, not
even in Denmark where VJ'ing is still somehow an exotic phenomenon,
all due to my "mileage" as Gregory beautifully pinpoints. This is
also why I posted this thread.
But it sure did trigger a clear opinion with many useful side-stories
=) Thanks to all of you for that.
Joe says about the applications (as Gregory puts it) I am "dissing":
> Almost every program that I’ve seen or used has had MIDI
> compatibility, allowing the user to control with whatever
> instrument they please.
Vade sums up:
> [...] comes down to knowing your tools.
First of all, I'm not dissing any particular application. Yes, I am
aware of the MIDI compatibility features and the ones that resemble
the function of the hi object in Max. I've tested the newest Modul8,
for instance, and I was very happy with the control assignment
feature. I just haven't seen it in use, and just moving the
interaction to a physical interface still doesn't answer the problem
of selecting video live, eyes off the preview screen.
Leo says about the Anti-Keyframe application:
> While the artist may free themself to listen and respond more
> physically to the music, they’re sacrificing the ability to respond
> to it in an intellectual manner. Navigating a sea of ideas with
> this program seems nearly impossible. Putting clip selection on
> very generalized autopilot is not a good way to tell a story.
True, that is the trade-off of my solution. The storytelling is mostly
done in the process of producing the clips. I know it's a rude way of
selecting clips, best suited for rude music (highly dynamic
IDM stuff). True, true.
> DJs cue with headphones, VJs cue with previews. Now, if your
> complaint was that VJs tend to get focused on their computer
> instead of the people/physical environment around them, that seems
> more valid then implying that they aren’t listening because they’re
I believe this is the spot: the performance situation together with a
live band, electronic or acoustic, not a DJ. The application gives
you the opportunity of being a closer member of the band. When
improvising in a band, eye contact and awareness of your
surroundings are important. Also, regarding the performance situation,
the worst-case scenario would be an audio-visual laptop concert where
the audience feels like they are watching an abstract TV show (I think
I'll stop the "dissing" here). Thank you for describing what I couldn't.
… and Leo continues:
> My personal approach has been to automate the hell out of
> audioresponsive elements, allowing me to put those layers on
> autopilot and focus more on the selection of ideas [...] with a
> laptop touchpad, but physically performing more with a midi
> controller and video mixer.
I really like the approach you propose. The fast audio-responsive
visual "sugar-coating" effects are better suited to computer
automation than to storytelling. The Anti-Keyframe approach is to
have a four-way fader where each single fader controls the level of
effect automation in a specific style and combination. It is not
customisable, but that's only because I'm a design student and not a
programmer/developer. Further development of Anti-Keyframe will
probably be directed towards my own usage.
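For anyone curious what a four-way fader of automation styles amounts to, here is a rough sketch: four style functions blended by fader weights. The style functions and their names are invented for illustration; the actual Anti-Keyframe patch surely does this differently:

```python
import math

# Four automation styles; each fader sets how much its style contributes.
# Style functions and names are invented for illustration.
styles = {
    "strobe": lambda t: 1.0 if int(t * 8) % 2 == 0 else 0.0,    # hard on/off
    "pulse":  lambda t: 0.5 + 0.5 * math.sin(2 * math.pi * t),  # smooth LFO
    "drift":  lambda t: (t % 10.0) / 10.0,                      # slow ramp
    "freeze": lambda t: 1.0,                                    # hold steady
}

def mix(faders, t):
    """Weighted blend of the automation styles at time t (seconds)."""
    total = sum(faders.values())
    if total == 0:
        return 0.0  # all faders down: no automation
    return sum(faders[name] * fn(t) for name, fn in styles.items()) / total

faders = {"strobe": 0.0, "pulse": 0.0, "drift": 0.0, "freeze": 1.0}
print(mix(faders, 0.0))  # 1.0: only the "freeze" style contributes
```

The point of the normalization is that sliding one fader up shifts the character of the automation rather than just adding more of everything.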
… and Leo continues with ideas for improvement:
> I’d add audio analysis so you don’t have to tap in the beat
I have been struggling with bonk~, fiddle~ and home-made stuff for this
purpose, but I never got it right. How do you, for instance, know if
it is the first or the last bar? I would really like to see an
example of stable and CPU-friendly beat tracking.
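One common starting point for stable, CPU-friendly tempo estimation is autocorrelating an onset-energy envelope. A toy sketch follows; it finds the period of a synthetic pulse train, says nothing about which bar you are in, and a real signal would need onset detection and smoothing first:

```python
import numpy as np

# Toy tempo estimation: autocorrelate an onset-energy envelope and take
# the lag with the strongest self-similarity. This does not answer the
# downbeat/first-bar question; it only estimates the beat period.

def estimate_period(envelope, min_lag=10, max_lag=200):
    """Return the lag (in envelope frames) with the strongest correlation."""
    env = envelope - envelope.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]  # lags 0..n-1
    return min_lag + int(np.argmax(ac[min_lag:max_lag]))

# Synthetic envelope: a pulse every 25 frames (~120 BPM at ~50 fps).
envelope = np.zeros(500)
envelope[::25] = 1.0

print(estimate_period(envelope))  # 25
```

The min/max lag bounds keep the search inside a plausible tempo range, which is also what keeps it cheap.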
> you could try [...] Kind of the scrambled hackz methodology, but
> applied to clip organization.
Yes, this could be fun (and time-consuming) to experiment with. Maybe
I will look into something like that. Thanks for the reference, fun.
//////////////////////////////////// gossip starts
Joe also says:
> PS: Carl Carlsen? Is that your real name? Why isnt there anyone
> on this board named Homer Simpson…or C. Montgomery Burns?
Hmm, can't help it. I actually have a cousin named (oh my God here it
comes…) Carla Carlsen =) What a cruel world, blame the parents.
Anyway, I'm happy about my name ;-)
//////////////////////////////////// gossip ends
Greg, Vade and Joe: what is your approach to selecting video clips
live, storytelling vs. expression?
Let's turn from dissing to discussion.
On Apr 10, 2006, at 5:25 AM, Carl Emil Carlsen wrote:
> Greg, Vade and Joe. What is your approach on selecting video-clips
> live, storytelling vs. expression?
What’s all this "vs." stuff, then?
storytelling is such a ridiculous self-imposed box.
insisting that visual work incorporate storytelling is like insisting
that melody or harmony always be present in music.
> What’s all this "vs." stuff, then?
Ha! =D Didn't see it in that perspective. Sorry…. I'll try again:
what is your approach to the problem of selecting video clips live?
What do you choose to focus on in a performance situation and why?
On Apr 10, 2006, at 10:02 AM, Carl Emil Carlsen wrote:
>> What’s all this "vs." stuff, then?
> Ha! =D Didn’t see it in that perspective. Sorry…. I’ll try again:
> What is your approach on the problem of selecting video-clips live?
> What do you choose to focus on in a performance situation and why?
what is the art/performance/installation/whatever that you are going
for? when coding in max, i always have some final outcome, some piece
in mind, and write the patch around the specific piece.
-if you have a work in development that requires selecting clips,
then what is that work?
-why do you need to select clips?
-are you responding to music? are you using the video to conduct an
orchestra of DJs?
-are you interested in public displays of epilepsy?
-if you are a storyteller, is the projector your "voice"?
-is it a narrator?
-what is it that you are trying to say?
-what do you want others to perceive and experience?
-what do you want them to be aware of?
-what do you want them to not notice?
>> What is your approach on the problem of selecting video-clips live?
>> What do you choose to focus on in a performance situation and why?
> what is the art/performance/installation/whatever that you are
> going for? when coding in max, i always have some final outcome,
> some pice in mind, and write the patch around the specific piece.
I think most of us can agree on that. There is a huge diversity of
approach in the area of visual music. The "performance situation",
for this specific thread, is perhaps (in my personal view) about
controlling visuals "like it was a musical instrument" to become a
coherent part of the band playing, acoustic or electronic as it may be.
I can only answer for myself:
> -are you interested in public displays of epilepsy?
Ha! =D …..hmmm, could be. It depends on the music.
> -what is it that you are trying to say?
Usually I'm trying to support the mood/feeling/atmosphere/state of
mind in the specific tune. Why am I saying this… I mean:
to communicate a specific feeling.
I’m going to respond to all of the below…
First, you don't necessarily need to cue with a preview. You can
use text. Whether or not that’s as good is up for argument, but I
find it workable and easier on the computer. (Sometimes the
practical trumps the perfect, even for me) Think of it like this –
you play an instrument, and you press a key and a note is played.
You don’t have a "preview" of that note, except that tone in your
head that you’ve learned from playing the instrument over and over
again. (Practicing your instrument/vj tool is key)
Second, video is more complex than sound. Think about how much more
space it takes to store a video file than an audio file: it's about
10MB/minute for a WAV and at least 100MB/min for a lightly-compressed
video file. (Let's just say a factor of 10 or more, for the sake of
argument, even though it's very unspecific.)
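The WAV figure is easy to verify: 44.1 kHz, 16-bit stereo works out to roughly 10 MB per minute.

```python
# Uncompressed audio data rate: sample_rate * bytes_per_sample * channels.
sample_rate = 44100       # Hz, CD quality
bytes_per_sample = 2      # 16-bit samples
channels = 2              # stereo

bytes_per_minute = sample_rate * bytes_per_sample * channels * 60
mb_per_minute = bytes_per_minute / (1024 * 1024)
print(round(mb_per_minute, 1))  # 10.1 -> roughly "10MB/minute for a WAV"
```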
If it takes that much more data simply to represent the video itself,
then what about the symbolism associated with it, the theme, the
content, the overall color, or any of its other second-order properties?
With sound, you can get pretty far filtering frequencies in 2D, but
video is 3D (at least, because the notion of time is different as
well from sound). My point is that it’s pretty tough to create a
complex VJ tool that *doesn’t* require the performer to sit and stare
at their laptop. My patches are usually so complex that they resemble
an airplane cockpit, which I think is a good metaphor. The pilot
can’t control all airplane systems at once, nor does he/she need to,
but he/she can take a single control off autopilot at any time and
manually manipulate it, or change the parameters of the automatic
system that control it.
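The cockpit idea, taking a single control off autopilot and releasing it again, reduces to a very small pattern in code. A sketch, with names that are illustrative rather than taken from any real VJ app:

```python
import math

# Each parameter follows an automatic process until the performer grabs
# it, and can be released back to automation at any time.

class AutoParam:
    def __init__(self, auto_fn):
        self.auto_fn = auto_fn  # automation: a function of time -> value
        self.manual = None      # None means "on autopilot"

    def grab(self, value):      # performer takes manual control
        self.manual = value

    def release(self):          # hand the control back to the automation
        self.manual = None

    def value(self, t):
        return self.manual if self.manual is not None else self.auto_fn(t)

# An LFO-driven brightness parameter:
brightness = AutoParam(lambda t: 0.5 + 0.5 * math.sin(t))
print(brightness.value(0.0))  # 0.5, from the automation
brightness.grab(1.0)
print(brightness.value(0.0))  # 1.0, manual override wins
brightness.release()          # back on autopilot
```

A patch full of these gives you the cockpit: everything runs itself until you reach for one lever.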
So my advice is to think of video control as a higher-order level of
control than sound. Unless you're generating raw video (colors,
lines, points) and building images from scratch (which i am certainly
not brushing aside as uninteresting), you’ll need to have a lot of
intelligence built into the software, so that you’re manipulating
"concepts" and "processes" that act on the video, rather than the
videos themselves. Which is what I think you’re getting at with
Antikeyframe, and the 3 modes of videos.
"Storytelling" is what happens when people draw connections between
the processes and concepts, that’s all. How interesting the "story"
is depends on how deep your concepts and processes are.
I’ve seen some impressive performances that consisted of simple
geometric shapes flashing across the screen, building up and lightly
interacting with each other. If you can project a white square, and
hold the audience’s attention such that when it partially obscures a
blue line it acts like a release, then you have been successful.
On Apr 10, 2006, at 8:39 AM, evan.raskob wrote:
> So my advice is to think of video control as a high-order level of
> control than sound.
Well, at least we can all be comforted by the predictable
urge for visualists to privilege visual material in the same
manner as audio people do with the noise they make….
except for those moments when what you want to do is tell a story,
which is after all a long standing human tradition.
no mystery there.
On 10 Apr 2006, at 23.50, joshua goldberg wrote:
> storytelling is such a ridiculous self-imposed box.
On 10 Apr 2006, at 17:39, evan.raskob wrote:
> (Practicing your instrument/vj tool is key)
There we have it again. Know your tool =)
> Second, video is more complex than sound. Think about how much more
> space it takes to store a sound file than an audio file, [...]
I don't see this as THE big difference. I see the gap between the
linear/non-linear nature of video clips and that of music/sound as
the biggest problem for coherence.
> My point is that it’s pretty tough to create a complex VJ tool that
> *doesn’t* require the performer to sit and stare at their laptop.
> My patches are usually so complex that the resemble an airplane
> cockpit, which I think is a good metaphor.
I came to think of a musical parallel to the cockpit: the church
organ. That instrument always scared me =) So many buttons. I guess
this is the discussion about whether the tool should be able to do
everything or only specific tasks, which is all about preference: do
you want to play the guitar or the organ? John Maeda, a visual artist
and user of Processing, proposed in a blog interview that big
applications like Photoshop should be sold bit by bit, so you only
pay for the features you need. Modo, an upcoming 3D-modelling
program, is completely modular so you can show/hide everything and
create your own complete setup. This is what Max has been able to do
for years, if you had the guts to look into it. I hope this
"customisable simplicity" is a general trend for future software, but
I won't be the judge of that. I'm just one of those "simplicity is
bliss" people.
> you’ll need to have a lot of intelligence built into the software,
> so that you’re manipulating "concepts" and "processes" that act on
> the video, rather than the videos themselves. Which is what I
> think you’re getting at with Antikeyframe, and the 3 modes of videos.
That was what I thought of. But I would like to know about other
categorisation/structuring concepts and processes.
> If you can project a white square, and hold the audience’s
> attention such that when it partially obscures a blue line it acts
> like a release, then you have been successful.
True, we shouldn't forget the audience in our striving to become visual musicians.
On 10 Apr 2006, at 17:50, Gregory Taylor wrote:
> On Apr 10, 2006, at 8:39 AM, evan.raskob wrote:
>> So my advice is to think of video control as a high-order level of
>> control than sound.
> Well, at least we can all be comforted by the predictable
> urge for visualists to privilege visual material in the same
> manner as audio people do with the noise they make….
Yes, it is predictable… but a difficult, and yet so simple, vision.
what a discussion.
BTW, Carl, I wasn't making fun of your name. Carl CarlsOn is a character
on the Simpsons… the urban Lenny.
I never insisted that any work contain storytelling. I'm just stating a limitation of this tool, which presents itself as somehow replacing a paradigm. If the author had expressed a particular opinion about ideas being presented non-linearly, then maybe I wouldn't have raised my complaint. As is, it seems like he's more focused on expressiveness, audio response and interactivity. I don't think one has to preclude the other just because the navigation could use a little spicing up. A better out-of-the-box browsing solution would probably result in a variety of applications being more usable, instead of the various ways we all try to cobble things together using ubumenu or worse.
"higher-order" as in x^2 vs. x^3 – some people might consider
anything "higher" to be a good thing, but i think it’s arguable
whether or not one prefers linear vs exponential relationships.
and of course video is better than audio! ;)
otherwise why would you be on this list…
On Apr 11, 2006, at 7:08 AM, evan.raskob wrote:
> and of course video is better than audio! ;)
> otherwise why would you be on this list…
I’m on all the lists.
>> and of course video is better than audio! ;)
>> otherwise why would you be on this list…
yea, you not only hear the bugs, you also see them :-)
Acousmatic derives from the Greek /akousma/, a word pertaining to
something heard. It was originally used by Pythagoras to describe the
manner in which he delivered his lectures.
Pythagoras stood behind a black curtain so that his students could only
/hear/ the content of his lecture – the source was unseen.
In doing this Pythagoras forced his listeners to concentrate their
mental faculties on the content of his lecture.
Then in the XXI century, the meaning shifted as people took to hiding
behind smaller curtains of aluminium with a luminous apple on them,
so that they could hide from their audience and concentrate their
mental faculties on complex pieces of software, trying to get a grasp
of what the hell is happening with that weird patch they worked on
till too late at night the day before.
A little history is a dangerous thing, isn't it? A word is worth a thousand pictures.
> Then in the XXI century, the meaning shifted as people used to hide
> behind smaller curtains of aluminium
> with a luminous apple on it, so that they could hide from their
> audience and concentrate their mental
> faculties on complex pieces of software, trying to get a grasp of
> what the hell is happening with that weird
> patch they worked on till too late at night the day before.
A luminous Apple? Pretty platformocentric, aren’t we?
Performing with a patch one wrote the night before? Ah, so
*that’s* the problem, then. The Greeks had a word for that
thing, too. Hubris. :-)
the thing about video performance is that the performance is limited
to an extremely small portion of the audience's field of vision. the
performer's actions are totally peripheral. so your gestures are
strictly ergonomic and personal – there's no need for theatrics
(unless you are filming yourself ;) ).
if the video is part of something larger – say a band or stage
performance, then this changes, as the audience’s visual attention now
has competing space.
which brings me to the biggest frustration of performing video
instead of audio: audio easily surrounds the audience – even coming
from a single speaker. the setup issues and price point of
surrounding an audience with video are basically unattainable for an
individual artist doing a small show.
right now, my favorite "vj" app is a relatively simple patch that
captures short loops of live video from multiple cameras, controlled by
midi faders and a saitek game controller. having spent a lot of time
with clips and the limitations thereof, i joined with an electronics-
based duo – chided for their visual boringness – and simply capture
their "boringness" and transform it… now if I could only set up
multiple screens…. :)
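That capture-loops idea is essentially a ring buffer of recent frames. A minimal sketch, where frames are just placeholder strings; a real patch would hold jit.matrix data instead:

```python
from collections import deque

# Keep the last N captured frames in a ring buffer; "freezing" it yields
# a short loop ready to transform. deque(maxlen=N) silently drops the
# oldest frame whenever a new one arrives.

class LoopCapture:
    def __init__(self, loop_len=8):
        self.buffer = deque(maxlen=loop_len)

    def push(self, frame):   # called once per captured frame
        self.buffer.append(frame)

    def freeze(self):        # snapshot the current loop
        return list(self.buffer)

cap = LoopCapture(loop_len=4)
for i in range(10):          # simulate a live camera feed
    cap.push(f"frame{i}")
print(cap.freeze())          # ['frame6', 'frame7', 'frame8', 'frame9']
```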
i like to think of myself as an artist who creates a running series
of visual experiments which can be enjoyed for two or three seconds
at a time by people who will occasionally glance at it. the beauty
of wallpaper is that when it surprises, it’s REALLY GREAT.
It seems like the thread is finally dead.
I think I got the discussion I was looking for and a lot of great hints
I hadn't thought of. Thanks to all of you for contributing. I hope
there will be future discussions like this.