I am really seduced by GPU processing, especially right now
because I work on multimedia installations and want to free up as
much CPU as possible for the audio part. It seems full of potential, but
I find the learning curve quite steep. For example, the amazing
examples by Andrew are so elegant, but there are lots of features in
them I can't find in the documentation, for instance the @capture
attribute, which can save…
This is not a complaint; I'm just a stuck coder who needs help
to go further…
What I do right now is design the patch with Jitter
matrices, then port it to slabs. I know I will probably drop this
approach quite soon, but my OpenGL licks are nowhere near as good as my
matrix ones.
I don't want to be given the patch; I want to know where to find
a good tutorial/reference for that whole world of OpenGL objects. Is
there something I should read after the documentation to help me
design my really simple patches?
Any pointer is welcome.
ps For example, right now I want to move a jpeg around, to use it as
a mask on an incoming film. I managed to put my jpeg as a texture on
a jit.gl.gridshape plane object, then position and resize the plane.
But I have erasing problems: when the new size is smaller than the
previous one, we can see leftovers of the previous frame…
It’s good to hear that you are interested in this. I strongly recommend
grabbing a copy of the orange book, OpenGL Shading Language, by Randi J.
Rost. You will want to pay special attention to chapters 3, 5, and
16. I tend to keep it near my desk and refer to it often.
The same could be said for the red book as well. This book goes a great
distance in demystifying the OpenGL pipeline, and explains the more
low-level jit.gl.sketch commands.
The important thing to remember with GLSL is that you are dealing with
each pixel individually. A shader program doesn't have memory of its
own. This can make shaders alternately very straightforward or very
troublesome to implement. If you are having trouble implementing
something specific, I encourage you to post to the list.
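To make the per-pixel idea concrete, here is a minimal GLSL fragment shader sketch of the kind of mask operation described above. The sampler and varying names here are my own illustration, not code from anyone's patch; a real jit.gl.slab shader would wrap a program like this in Jitter's shader description file:

```glsl
// Sketch of a masking fragment shader. Each fragment is computed
// independently from its own texture samples -- the shader keeps no
// state between pixels or between frames.
uniform sampler2DRect film;   // incoming movie frame (slab input 0)
uniform sampler2DRect mask;   // the jpeg used as a mask (slab input 1)
varying vec2 texcoord0;       // texture coordinate from the vertex stage

void main()
{
    vec4 src = texture2DRect(film, texcoord0);
    vec4 m   = texture2DRect(mask, texcoord0);
    // Scale the film by the mask's red channel: nothing here depends
    // on any pixel other than the current one.
    gl_FragColor = src * m.r;
}
```

The point is just that gl_FragColor is computed from nothing but the current fragment's inputs, which is exactly why shaders are so easy for pointwise effects and so awkward for anything that needs neighbors or history.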
If there is any specific aspect of jitter and OpenGL that you feel could
be explained in more detail, please let us know. I can always add it to
my agenda for weekly examples.
I have checked out your example too. It is a very nice patch showing how
to use @capture and slab to generate effects out of 3D geometry.
However, I am working on something where I want to render a scene into a
texture and use it in later rendering. I have been told about the
"to_texture" message of jit.gl.render and have checked out the example in
the Jitter tutorial. But I still could not figure it all out, and it
always produces the error message
"warning: method screen_grab called on invalid object".
Is it possible to have a more detailed example of using this message?
to_texture is currently broken, to everyone's dismay. We all patiently
await c74's fix. Currently there is no way to render a whole scene to
a texture on the GPU.
this makes baby jesus cry.
v a d e //
But why would the example "jit.gl.render.radialblur" still manage to use
it, then?
I am using to_texture regularly. The destination context size must
match the texture size exactly.
Can you provide an example patch? Wesley stated it wasn't working:
"No there was a big discussion in November and render to texture does
not currently work at all. I agree that this (excuse my french)…"
patiently awaiting 1.5.3
Apologies for the mis-info if it indeed works. I'd love to get this working.
v a d e //
Yes, the radial blur example works, but I was referring to the
@capture attribute of jit.gl.render. The to_texture message isn't
very flexible and I find it very cumbersome. Personally, I like to
render to a pbuffer. Ideally we could also get textures from the
depth buffer.
sorry for any confusion, still waiting for 1.5.3,
Way to make baby jesus feel like an asshole! I forgot all about that.
Sorry Wes for spreading mis-info ;)
v a d e //
Thx. I'll try it out.
By the way, what do you mean by getting a texture from the depth buffer?