Jun 12 2008 | 1:31 pm
Aloha,
I was wondering if it is theoretically possible to make a vertex displacement shader that is applied recursively? (am I saying this right?)
e.g.: the shader gets fed the value 0. 0. 0. and alters it to 0.1 0.1 0.1; this value is then fed back into the same shader, which spits out 0.2 0.2 0.2
I was thinking along the lines of: named matrix > jit.gl.mesh with shader & matrixoutput > named matrix.
But apart from jit.gl.mesh not supporting matrixoutput, I wonder whether it would even be possible if it did. If so, there's a good reason to have matrixoutput working with jit.gl.mesh after all :)

• Jun 12 2008 | 2:38 pm
if you want to feedback a mesh, just use a jit.matrix full of vertices before pushing it into jit.gl.mesh. then do a normal feedback using named matrices, xfade, whatever.
you can feedback fragment shaders, see the recipes for examples.
also see this patch
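For reference, the fragment-shader feedback mentioned above boils down to a slab along these lines (a minimal sketch; `tex0`/`tex1`/`texcoord0` follow common jit.gl.slab conventions, while the `fade` parameter is hypothetical):

```glsl
// Hypothetical feedback slab: mixes the current input with the
// previous output, which the patch feeds back in as a second input
// via a named texture.
uniform sampler2DRect tex0;   // current input texture
uniform sampler2DRect tex1;   // previous output (the feedback tap)
uniform float fade;           // crossfade amount, 0..1
varying vec2 texcoord0;
varying vec2 texcoord1;

void main()
{
    vec4 current  = texture2DRect(tex0, texcoord0);
    vec4 previous = texture2DRect(tex1, texcoord1);
    gl_FragColor = mix(current, previous, fade);
}
```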
On Thu, Jun 12, 2008 at 4:31 PM, Brecht Debackere wrote:
• Jun 12 2008 | 2:58 pm
That's indeed how I've been doing it now, but when using several jit.expr objects with different expressions and weights, it becomes slow very quickly. That's why I was looking for a way to move what the jit.expr objects are doing from the cpu to the gpu, where I think they should be.
• Jun 12 2008 | 3:11 pm
As far as I know (so, not further than my nose), you can't read vertex shader results back to the cpu, but in theory you should be able to process the geometry on the gpu, as a float texture, using fragment shaders. In practice, this still doesn't work reliably... see:
best, nesa
• Jun 12 2008 | 3:17 pm
what's the mesh size? it shouldn't be slow, and you get a lot of flexibility with jit.matrix. shaders work differently than matrices: things happen per vertex (and in parallel), and no vertex has any knowledge of the others; that's what makes them fast.
if you post a patch I'd like to see where it's going, as I have jit.expr problems of my own.
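To illustrate the per-vertex model: a displacement vertex shader only ever sees the single vertex it is handed, so any state has to come in from outside, e.g. via a uniform (a minimal sketch; the `amount` parameter is hypothetical):

```glsl
// Minimal displacement vertex shader (legacy GLSL): each invocation
// runs in parallel on one vertex and cannot read its neighbours or
// its own previous output.
uniform float amount;   // displacement strength (hypothetical parameter)

void main()
{
    vec4 pos = gl_Vertex;
    // displace along the normal; the result is rasterized, not fed back
    pos.xyz += gl_Normal * amount;
    gl_Position = gl_ModelViewProjectionMatrix * pos;
}
```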
On Thu, Jun 12, 2008 at 5:58 PM, Brecht Debackere wrote:
• Jun 12 2008 | 3:19 pm
CHOIR:
Hello nesa!
On Thu, Jun 12, 2008 at 6:17 PM, yair reshef wrote:
• Jun 12 2008 | 4:16 pm
I've been doing some tests with 100x100 matrices (it "should" be possible to have 10 times as many with a decent framerate, say at least 20 fps, in theory). Ultimately I would like to have that amount of textured planes, but for now it's just points/lines.
On 12 Jun 2008, at 17:17, yair reshef wrote:
• Jun 12 2008 | 4:33 pm
it seems you're talking about two different things
On Thu, Jun 12, 2008 at 7:16 PM, Brecht Debackere wrote:
• Jun 12 2008 | 5:09 pm
Hi Brecht, I believe this is possible using geometry shaders, which are a LOT more work to put together. Using standard vertex/fragment shader programs, you're not going to be able to get recursion. If you post a patch, maybe we can help you find ways to optimize your processes and move things into a vertex program where possible. Keep in mind that 100x100 is 10,000 vertex points, and you might also be encountering drawing bottlenecks.
Let us know how you get on.
Andrew B.
• Jun 12 2008 | 6:40 pm
I'll get back on it once I get some stuff cleaned up. I know it is 10,000 vertex points, but looking at some graphics card tests, the triangle/primitive count of a number of tested games goes at least 70x higher. I'll see where it ends up.
> it seems you're talking about two different things
not really, it's basically the same matrix calculations, whether I use the result to draw points or use it as positions for the planes.
On 12 Jun 2008, at 19:09, Andrew Benson wrote:
• Jun 12 2008 | 7:13 pm
It's certainly possible to render around 100,000 points in realtime and dynamically. When you start rendering planes at this size, you start to become fill-rate bound. For triangle meshes, I can reliably do around 70,000 points dynamically, and on workstation GPUs around 150,000. It's of course higher with less intensive shaders and more static data.
The way to get this rate of throughput is via render-to-vertex-buffer. Jitter currently does not support this, although it is planned for a future release. Also, geometry shaders can do what's called transform feedback, but again Jitter does not currently support this, though it probably will in the future.
To give you an idea of how it works with the RTVBO path: you use float textures as buffers of vertices and process them like normal textures with slabs. Then, when you want to turn them into geometry, you flip a few switches on the GPU to send the float texture data into a VBO, which you can then pass to a vertex shader or draw with the fixed-function pipeline.
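In that scheme, one iteration step is just a slab pass over the position texture; something like this hypothetical sketch, where each texel's rgb holds a vertex's xyz and the shader nudges it every frame (the `offset` uniform is an assumed parameter, matching the 0.0 -> 0.1 -> 0.2 example at the top of the thread):

```glsl
// Hypothetical iteration pass for a render-to-vertex-buffer setup:
// the float texture stores positions (xyz in rgb); each pass
// displaces them, and the output becomes next frame's input
// (and, with RTVB, the contents of the VBO).
uniform sampler2DRect tex0;    // positions from the previous pass
uniform vec3 offset;           // per-frame displacement (hypothetical)
varying vec2 texcoord0;

void main()
{
    vec3 pos = texture2DRect(tex0, texcoord0).rgb;
    pos += offset;             // e.g. 0.1 per frame: 0.0 -> 0.1 -> 0.2 ...
    gl_FragColor = vec4(pos, 1.0);
}
```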
wes