When I play a video through jit.gl.nurbs or render what exactly is the geometry representing? Where does it get its information from? Sound / visual?
What do you mean by ‘play a video through nurbs’? Do you have an example patch? Are you using the video input as a ctlmatrix for the nurbs object?
In that case, the pixel values are the 3D points from which the nurbs surface is built. The four planes of your video are used as x, y, z and w, where w is the weight (to compute the mesh, nurbs takes a grid of 3D coordinates as input and draws a smooth surface through them, much like a bezier curve).
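To make the mapping concrete, here's a minimal NumPy sketch (not Jitter code; the array names are illustrative) of how a 4-plane matrix is read as a grid of NURBS control points:

```python
import numpy as np

# Hypothetical stand-in for a 4-plane float32 Jitter matrix (planes = x, y, z, w).
# Shape: (planes, rows, cols) -- here 4 x 120 x 160, like the 160 x 120 movie output.
rows, cols = 120, 160
video = np.random.rand(4, rows, cols).astype(np.float32)

# Each cell of the matrix becomes one control point:
x, y, z, w = video                       # planes 0-2 are 3D position, plane 3 is weight
points = np.stack([x, y, z], axis=-1)    # (rows, cols, 3): a grid of 3D control points

print(points.shape)   # (120, 160, 3)
print(w.shape)        # (120, 160): one weight per control point
```

This is why a 160 x 120 ctlmatrix gives you 19200 control points: every pixel is one point on the control grid.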
Yup. In your example you’re using the 160 x 120 output of the movie as the ctlmatrix, so you get a nurbs with a lot of control points. Why does the shape look like that? That’s just the video data. Keep in mind it won’t look like a grid but collapse to a dot when your video is all black. If you want to ‘extrude’ your video, you should mix the z-coordinate of a gridshape with the luma of the movie.
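The extrusion idea can be sketched like this (again a NumPy stand-in, not a patch; in Max you'd use jit.gl.gridshape with matrixoutput, jit.rgb2luma, and jit.op to do the same thing):

```python
import numpy as np

rows, cols = 20, 20

# Flat grid like jit.gl.gridshape's geometry output: x, y span [-1, 1], z starts at 0.
gy, gx = np.mgrid[-1:1:rows * 1j, -1:1:cols * 1j]
gz = np.zeros((rows, cols))

# Hypothetical RGB frame, already downsampled to the grid size, values in [0, 1].
frame = np.random.rand(rows, cols, 3)

# Rec. 601 luma weights -- what jit.rgb2luma computes in a patch.
luma = frame @ np.array([0.299, 0.587, 0.114])

# 'Extrude': displace z by luma; scale controls the extrusion depth.
scale = 0.5
gz = luma * scale

control = np.stack([gx, gy, gz], axis=-1)
print(control.shape)   # (20, 20, 3)
```

Bright pixels push their grid point forward, dark pixels leave it flat, so the surface becomes a height field of the video instead of raw pixel coordinates.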
The tri_strip in your output is still 20 x 20 vertices, since you’re fading between a 20 x 20 matrix and the 160 x 120 movie (the latter is scaled down to match).
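That fade can be sketched as follows (a NumPy stand-in for what jit.matrix resampling plus jit.xfade do; a nearest-neighbour downsample here is an assumption, Jitter's default interpolation may differ):

```python
import numpy as np

# Two geometry matrices to crossfade between.
small = np.random.rand(20, 20, 3)     # 20 x 20 gridshape geometry
big = np.random.rand(120, 160, 3)     # 160 x 120 movie used as control data

# Downsample 'big' to 20 x 20 first, as a jit.matrix with dims 20 20 would,
# so the result stays 20 x 20 regardless of the fade position.
ri = np.linspace(0, big.shape[0] - 1, 20).astype(int)
ci = np.linspace(0, big.shape[1] - 1, 20).astype(int)
big_small = big[np.ix_(ri, ci)]

xfade = 0.3                           # 0 = all 'small', 1 = all 'big'
mixed = (1 - xfade) * small + xfade * big_small
print(mixed.shape)   # (20, 20, 3)
```

The fade never changes the vertex count; it only blends the positions, which is why the tri_strip stays 20 x 20.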
I see you’re using a lot of pwindows. Great for testing, but keep in mind they take up resources.