Mar 24 2010 | 11:12 am
    When I play a video through jit.gl.nurbs and render it, what exactly does the geometry represent? Where does it get its information from? Sound? Visuals?

    • Mar 24 2010 | 11:58 am
      What do you mean by 'play a video through nurbs'? Do you have an example patch? Are you using the video input as a ctlmatrix for the nurbs object?
      In that case, the values of the pixels are the 3D points around which the nurbs surface is created. The 4 planes of your video are used as (x, y, z) and w, the weight (to calculate the mesh, nurbs takes a grid of 3D coords as input and wraps a smooth surface around it, much like a bezier curve).
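      The pixel-to-control-point mapping described above can be sketched in plain Python. This is only an illustration of the idea, not Max/Jitter API; the function name and the dict layout are made up for clarity.

      ```python
      # Hypothetical sketch: a 4-plane Jitter matrix read as NURBS control
      # points, planes 0-2 -> (x, y, z), plane 3 -> weight w.
      # make_control_points is an illustrative name, not a Max object.

      def make_control_points(matrix):
          """matrix[row][col] is a 4-tuple of floats (one value per plane)."""
          points = []
          for row in matrix:
              for px in row:
                  x, y, z, w = px  # planes 0-3 of the cell
                  points.append({"pos": (x, y, z), "weight": w})
          return points

      # A tiny 2x2 "video frame": all-black pixels collapse every control
      # point to the origin, which is why an all-black frame renders as a dot.
      black = [[(0.0, 0.0, 0.0, 1.0)] * 2 for _ in range(2)]
      pts = make_control_points(black)
      assert all(p["pos"] == (0.0, 0.0, 0.0) for p in pts)
      ```

      A 160 x 120 frame would give 19,200 such control points, which is why the resulting surface tracks the video data so closely.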
    • Mar 24 2010 | 12:18 pm
      Sorry, I'm a bit new to Max. Here is a screenshot of my patch. I think I am using the video as a control matrix for my nurbs object.
    • Mar 24 2010 | 12:22 pm
      Here is the geometry I am getting from my patch.
    • Mar 24 2010 | 1:46 pm
      Yup. In your example you're using the 160 x 120 output of the movie as the ctlmatrix, so you'll have a nurbs with lots of control points. Why does the shape look like that? That's just the video data. Keep in mind it won't look like a grid but like a dot when your video is all black, since every control point collapses to the origin. If you want to 'extrude' your video, you should mix the z-coord of a gridshape with the luma of the movie.
      The tri_strip in your output is still 20 x 20 vertices, since you're fading between 20 x 20 and 160 x 120 (the latter is scaled down).
      I see you're using a lot of pwindows. They're great for testing, but keep in mind they take up resources.
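      The 'extrude by luma' idea can be sketched outside Max like this. Assume a flat grid of (x, y, z) points, roughly what jit.gl.gridshape would produce for a plane, and replace each z with the luma of the matching video pixel. The function names are illustrative; luma uses the common Rec. 601 weights.

      ```python
      # Hedged sketch of luma extrusion: flat grid in, displaced grid out.
      # All names are illustrative, not Max/Jitter API.

      def luma(r, g, b):
          # Rec. 601 luma weights for normalized 0.0-1.0 RGB.
          return 0.299 * r + 0.587 * g + 0.114 * b

      def extrude(grid, frame, scale=1.0):
          """grid[i][j] = (x, y, z); frame[i][j] = (r, g, b), same dims."""
          out = []
          for grow, frow in zip(grid, frame):
              out.append([(x, y, scale * luma(*px))
                          for (x, y, _z), px in zip(grow, frow)])
          return out

      # 1x2 example: a black pixel stays flat, a white pixel pushes z up.
      grid = [[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]]
      frame = [[(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]]
      res = extrude(grid, frame)
      assert res[0][0][2] == 0.0
      assert abs(res[0][1][2] - 1.0) < 1e-9
      ```

      In a patch you'd typically do the equivalent with jit.unpack/jit.pack (or jit.expr), feeding the luma plane into the z plane of the gridshape matrix before it reaches the nurbs or mesh object.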