Hi. Can someone point me to the part of the Jitter documentation that describes recording a live audio track into a QuickTime movie? (jit.qt.grab handles this nicely, since it expects a live source.) I think I'm overlooking an obvious way either to a) create an audio track in the new QuickTime movie and get audio into it (jit.poke?), or b) create the tracks as separate entities and combine them somewhere downstream. I'd like to keep this as close to a one-step process as possible. I'm building a very simple screencasting patch around jit.desktop, and I want an easy way to offer simultaneous audio input alongside the display capture, without making it too cumbersome for the person who will use it.