How can I get Normals out of

    Apr 16 2009 | 6:57 pm
    Hi all,
    I'm trying to work out how much of a cube's face is facing the camera. I've been told that the normals for the x, y and z of each plane will give me this information. How do I get this information out? I'm still pretty new to Jitter and am struggling to work out some of the more complex stuff people have been posting on the matter. Any help would be greatly appreciated.

    • Apr 16 2009 | 8:36 pm
      Geometry Matrix Details
      Video in Jitter is typically represented by 4-plane char data, but how is the geometry data being represented? Each vertex in the geometry is typically represented as float32 data with 3, 5, 8, 12, or 13 planes.
      Planes 0-2 specify the x, y and z position of the vertex.
      Planes 3 and 4 specify the texture coordinates s and t.
      Planes 5-7 specify the normal vector nx, ny and nz used to calculate the effects of lighting on the geometry.
      Planes 8-11 specify the red, green, blue, and alpha vertex color.
      Plane 12 specifies the edge flag e.
      The output matrix of the object has 12 planes, but since we are not applying a texture to the geometry, and lighting is not enabled, the texture coordinates and normal vectors are ignored.
      taken from tutorial 37 in jitter
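      To make the plane layout quoted above concrete, here is a minimal Python sketch of a single 13-plane vertex. The actual values are made up for illustration; in a patch this data would come out of a geometry object with matrixoutput enabled:

```python
# Hypothetical single vertex from a 13-plane float32 geometry matrix,
# following the plane layout quoted from Jitter tutorial 37.
vertex = [
    1.0, -1.0, 0.5,       # planes 0-2: x, y, z position
    0.0, 1.0,             # planes 3-4: s, t texture coordinates
    0.0, 0.0, 1.0,        # planes 5-7: nx, ny, nz normal vector
    1.0, 1.0, 1.0, 1.0,   # planes 8-11: r, g, b, a vertex color
    1.0,                  # plane 12: edge flag e
]

position = vertex[0:3]  # slice out planes 0-2
normal = vertex[5:8]    # slice out planes 5-7
print("position:", position)
print("normal:", normal)
```

      In a patch, jit.unpack would split these planes out of the matrix in the same way.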
    • Apr 16 2009 | 9:30 pm
      Thanks for that. I understand a little more now. Do I need to double up my gridshapes and set the second set's matrixoutput attributes to 1? What do I use to unpack/receive the plane information?
      Thanks a lot
    • Apr 16 2009 | 9:33 pm
      I suppose it's a silly question, but I don't understand this: "Planes 5-7 specify the normal vector nx, "
      Can anyone give a hint?
    • Apr 16 2009 | 10:31 pm
      If you don't understand these basic things, you may want to do the Jitter tutorials in order.
    • Apr 17 2009 | 5:25 am
      i don't think you will find much information on vertex normals in the jitter documentation. however google and wikipedia are both good homeboys.
    • Apr 17 2009 | 6:18 am
      Sorry, I was replying to Jamie. You might want to learn more about how geometry works in Jitter before you go on to modifying normals, especially since modifying just the normals of an object is somewhat unusual. You would usually modify both the geometry and the normals, and probably the texcoords too, but it all depends on what you're trying to do. Look up jit.unpack; these are the things you will probably be dealing with. Good luck!
    • Apr 17 2009 | 1:22 pm
      It's not that I want to modify the normals; I've been told that a sum of the normals from each plane would give me the amount of each side that is visible. I am trying to use a 3D cube as a set of volume faders, so that each side has a sample assigned to it and the volume of that sample depends on how much of its face is visible. Isn't this very similar to the way the lighting works, only rendering what is visible to the camera? Everything that I have read in the tutorials is about inputting this data as a way to manipulate the shape; I just want to output the data so that it can manipulate the audio.
      Thanks for the support
    • Apr 17 2009 | 6:11 pm
      The problem is that matrixoutput sends the data out in local space, not world space (so it's always as if position is 0 0 0 and rotatexyz is 0 0 0).
      you will have to apply these transformations yourself to the normals you output.
      my guess is there is a much easier way to get the same result, perhaps just using the rotatexyz attribute of your gridshape.
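      The transform described above can be sketched in Python. This is a hypothetical example, assuming rotation only about the y axis and a right-handed, OpenGL-style convention; it rotates a local-space face normal into its world-space direction:

```python
import math

def rotate_y(v, deg):
    """Rotate vector v about the y axis by deg degrees (right-handed)."""
    a = math.radians(deg)
    x, y, z = v
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

# Local-space normal of the face that starts out toward the camera (+z).
front_normal = (0.0, 0.0, 1.0)

# After rotating the cube 90 degrees about y, that face's world-space
# normal points along +x (i.e. the face is now side-on to the camera).
print(rotate_y(front_normal, 90.0))
```

      A full solution would chain the x, y and z rotations in the same order the renderer applies them.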
    • Apr 19 2009 | 2:30 pm
      At the moment I am using the y rotation to calculate which side is facing the camera, using if objects to split the input rotation value into the different sides. This is fine when working in 2D, using just the rotation from left to right, but it becomes really complicated when using the x axis as well. I'm sure there is a better way of doing this, but I am unsure of how to go about it. I could post up the patch if it would help, though it's quite a big and complicated one.
    • Apr 20 2009 | 1:57 pm
      I don't see why you would use normals to do this either, as the object shape supposedly doesn't change. Just the rotate or rotatexyz attribute should give enough information on how much of a plane you see.
      It's a lot more difficult when things go 3D. I don't know exactly, but usually it looks a bit similar to this (x and y are amounts of rotation from rotatexyz):
      Front face amount: cos(y) * cos(x)
      Side face amount: sin(y) * cos(x)
      Top face amount: sin(x)
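      Those formulas can be sketched in Python as a rough check. This assumes rotatexyz-style angles in degrees and ignores the z rotation entirely, so treat it as an approximation rather than a verified solution:

```python
import math

def face_amounts(x_deg, y_deg):
    """Approximate visibility of the front, side and top faces of a cube
    rotated by x_deg and y_deg (degrees), per the formulas above."""
    x = math.radians(x_deg)
    y = math.radians(y_deg)
    front = math.cos(y) * math.cos(x)
    side = math.sin(y) * math.cos(x)
    top = math.sin(x)
    return front, side, top

print(face_amounts(0, 0))    # no rotation: front face fully visible
print(face_amounts(0, 90))   # quarter turn about y: side face fully visible
print(face_amounts(90, 0))   # quarter turn about x: top face fully visible
```

      Clipping negative values to zero would map these directly onto fader levels, since a negative amount just means the face is turned away from the camera.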
      But you can probably find it correctly somewhere on the internets.