That's kind of what Electric Sheep is doing with iterated function
systems. That system works by having 14 equations, each with its own
weight, where the weights all add up to 1. It morphs between functions by
changing relative weights. The only trouble with doing that in a
vertex shader is how many equations the shader can take before it
becomes slow. I bet it could handle 3 on a good card, maybe more if
they're not very complicated.
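To make that concrete, here is a rough sketch (not from the original post) of what a weighted blend of a few parametric equations could look like in a GLSL vertex shader. The equations themselves, the w0-w2 uniform names, and the assumption that u/v arrive in the xy of a planar input mesh are all illustrative:

// three example parametric surfaces; each returns a point for a given (u, v)
uniform float w0, w1, w2;   // blend weights, kept summing to 1 by the patch

vec3 eq0(float u, float v) { return vec3(cos(u)*sin(v), sin(u)*sin(v), cos(v)); }              // sphere
vec3 eq1(float u, float v) { return vec3((2.0+cos(v))*cos(u), (2.0+cos(v))*sin(u), sin(v)); }  // torus
vec3 eq2(float u, float v) { return vec3(u, v, sin(3.0*u)*cos(3.0*v)); }                       // rippled plane

void main()
{
    // u and v arrive in the xy of the incoming (planar) vertex
    float u = gl_Vertex.x;
    float v = gl_Vertex.y;

    // weighted sum of the three surfaces; animating the weights morphs the shape
    vec3 p = w0*eq0(u, v) + w1*eq1(u, v) + w2*eq2(u, v);
    gl_Position = gl_ModelViewProjectionMatrix * vec4(p, 1.0);
}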
On 9/13/07, vade wrote:
> Nice one :)
> I just picked up the Orange Book today, it's been super helpful already. I've
> been able to properly antialias some of the procedural texture generators
> I've made.
> Might be interesting to put in two formulas and add mix() to morph between
> vertex targets. Hmmm :)
> On Sep 13, 2007, at 9:39 PM, Wesley Smith wrote:
> Hi folks,
> Here's a shader and patch I made following the "Calculate Normals In
> Shader" blog entry on http://tonfilm.blogspot.com/ . It should be
> quite easy to plug in other equations from
> http://local.wasp.uwa.edu.au/%7Epbourke/surfaces_curves/ .
> This one implements the Tranguloid Trefoil equation:
> v a d e //
Try reducing the dimensions of the input matrix. I set it quite high.
BTW, what graphics card do you have? Radeon 9700? If so, all of the
trigonometric stuff might kill it, and if the shape takes up a large
portion of the screen, the per-pixel lighting won't help either.
I couldn't let Wes have all the fun, so here is a version that does
Spherical Harmonics. I also threw in my special normal-dependent
pixel-disposal sauce to make it that much more interesting. Try
animating the delta value and changing the fig/thresh params. Also, the
sphere param mixes between a sphere and the distorted shape, and of
course you can extrapolate. For those of you not running the latest and
greatest hardware, I recommend lowering the jit.matrix dims. It does
look really nice at 500x500, though.
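The patch itself isn't quoted in this digest, but the vertex-shader core of a spherical-harmonics surface might look roughly like this, using one common parameterisation from the Paul Bourke pages linked earlier. The m0123/m4567 uniform names, the abs() guard on pow(), and the direction of the sphere mix (1. = plain sphere) are assumptions, not necessarily what the posted shader does:

uniform vec4 m0123;     // first four harmonic exponents
uniform vec4 m4567;     // last four harmonic exponents
uniform float sphere;   // mixes toward the undistorted sphere; extrapolates outside 0..1

void main()
{
    // phi in [0, pi], theta in [0, 2pi], arriving in the xy of the planar input vertex
    float phi   = gl_Vertex.x;
    float theta = gl_Vertex.y;

    // radius from the spherical-harmonics surface formula;
    // abs() keeps pow() defined when the sin/cos terms go negative
    float r = pow(abs(sin(m0123.x*phi)),   m0123.y)
            + pow(abs(cos(m0123.z*phi)),   m0123.w)
            + pow(abs(sin(m4567.x*theta)), m4567.y)
            + pow(abs(cos(m4567.z*theta)), m4567.w);

    vec3 unit = vec3(sin(phi)*cos(theta), cos(phi), sin(phi)*sin(theta));
    vec3 p = mix(r*unit, unit, sphere);   // sphere = 1. gives the plain sphere

    gl_Position = gl_ModelViewProjectionMatrix * vec4(p, 1.0);
}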
I'm not usually a big supporter of mathematical surfaces, but I guess
I'm beginning to see the light.
Try lowering the dimensions of the shape matrix to something reasonable
(say 50x50) and see if that helps. It could just be that
the shader pushes the limits of your graphics card. I'll have a look at
it later and see if I can streamline the code a little.
Andrew Benson wrote:
> oops, that patch had a minor issue. This one will work better.
I have just one issue with it: the texture params "fig" and "thresh"
control an effect that comes out quite jagged - is there a way to
antialias there? That would make it even more awesome.
Unfortunately, this effect makes use of the "discard" method, which is
either on or off. Basically, it's just throwing out pixels if the
output of a formula falls below a threshold. If you don't care about
depthbuffering, you can use a
smoothstep(thresh, thresh+fwidth(goop)*fade, goop) and apply that to the
output alpha instead of discarding.
If you do want to keep the depthbuffering and need antialiasing, you
could always render to a double-size texture and do averaging
downsamples, as has been discussed previously.
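For reference, the smoothstep variant can be as small as this fragment-shader snippet. It assumes goop, thresh, and fade mean the same as above, that litColor already holds the lit colour, and that blending is enabled on the Jitter side; those names are illustrative:

// soft threshold instead of a hard discard: fade the fragment out near the edge
float edge = smoothstep(thresh, thresh + fwidth(goop)*fade, goop);
gl_FragColor = vec4(litColor.rgb, litColor.a * edge);   // note: depth is still written for faded fragments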
Here's a random question about this... not to provoke an argument,
but simply because I'm always curious about the "best" way of doing things:
Why do all this in a shader, why not use jit.expr to calculate
geometry for surfaces and then the normals? Is there a benefit to
first calculating a normal-distribution-based coordinate spread for a
plane, and then using that as input into a shader?
I have a friend who does a lot of OpenGL programming, and he swears
by using only vertex arrays (especially cached) and standard opengl
blend modes. But he doesn't know shaders well, so I feel like there
could be something he's missing.
On Sep 17, 2007, at 3:16 PM, evan.raskob [lists] wrote:
> Here's a random question about this... not to provoke an argument,
> but simply because I'm always curious about the "best" way of doing things:
I guess my general practice is to let the system breathe... So stuff
as much as you can onto the GPU, until it starts to choke.
> Why do all this in a shader, why not use jit.expr to calculate
> geometry for surfaces and then the normals? Is there a benefit to
> first calculating a normal-distribution-based coordinate spread for
> a plane, and then using that as input into a shader?
It is simple, and since the vertices are sent only once - there's
almost no performance hit.
Also, as far as I know, shaders are incapable of creating new vertices.
> I have a friend who does a lot of OpenGL programming, and he swears
> by using only vertex arrays (especially cached) and standard opengl
> blend modes. But he doesn't know shaders well, so I feel like there
> could be something he's missing.
He's missing a lot! Many of the patches posted lately on the list
would be very hard, if not impossible, to recreate on the GPU without
shaders.
> > Why do all this in a shader, why not use jit.expr to calculate
> > geometry for surfaces and then the normals? Is there a benefit to
> > first calculating a normal-distribution-based coordinate spread for
> > a plane, and then using that as input into a shader?
Imagine doing this:
x = 2 sin(3 u) / (2 + cos(v))
y = 2 (sin(u) + 2 sin(2 u)) / (2 + cos(v + 2 pi / 3))
z = (cos(u) - 2 cos(2 u)) (2 + cos(v)) (2 + cos(v + 2 pi / 3)) / 4
with jit.expr at 30fps on a 50x50 grid. Your framerate will crawl,
if not die, if you do it on the CPU. On the GPU you can warp the
coordinate range and other fun stuff dynamically for smooth
animations. jit.gl.mesh does use VBOs under the hood for sending the
planar data so that is also fast.
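For anyone who wants to try it, those three lines drop into a vertex shader almost verbatim. A sketch, assuming u and v arrive in the xy of the planar input mesh already scaled to the [-pi, pi] range by the patch:

void main()
{
    float u = gl_Vertex.x;
    float v = gl_Vertex.y;
    const float TWO_PI_3 = 2.0943951;   // 2*pi/3

    // Tranguloid Trefoil, straight from the equations above
    vec3 p;
    p.x = 2.0*sin(3.0*u) / (2.0 + cos(v));
    p.y = 2.0*(sin(u) + 2.0*sin(2.0*u)) / (2.0 + cos(v + TWO_PI_3));
    p.z = (cos(u) - 2.0*cos(2.0*u)) * (2.0 + cos(v)) * (2.0 + cos(v + TWO_PI_3)) / 4.0;

    gl_Position = gl_ModelViewProjectionMatrix * vec4(p, 1.0);
}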
> It is simple, and since the vertices are sent only once - there's
> almost no performance hit.
> Also, as far as I know, shaders are incapable of creating new vertices.
Not entirely true. With Geometry shaders, you can generate quite a
large number of vertices on the GPU.
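As a rough illustration (this needs EXT_geometry_shader4-era hardware, and a host that lets you set the input/output primitive types and max output vertex count, e.g. via glProgramParameteriEXT), a geometry shader like this takes each incoming triangle and emits a second, shrunk copy that never existed in the vertex data:

#version 120
#extension GL_EXT_geometry_shader4 : enable

void main()
{
    // centroid of the incoming triangle
    vec4 c = (gl_PositionIn[0] + gl_PositionIn[1] + gl_PositionIn[2]) / 3.0;

    // pass the original triangle through
    for (int i = 0; i < 3; i++) {
        gl_Position = gl_PositionIn[i];
        EmitVertex();
    }
    EndPrimitive();

    // emit a new, smaller triangle generated entirely on the GPU
    for (int i = 0; i < 3; i++) {
        gl_Position = mix(c, gl_PositionIn[i], 0.5);
        EmitVertex();
    }
    EndPrimitive();
}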
> > I have a friend who does a lot of OpenGL programming, and he swears
> > by using only vertex arrays (especially cached) and standard opengl
> > blend modes. But he doesn't know shaders well, so I feel like there
> > could be something he's missing.
VBOs are definitely best practice for dispatching data to the GPU, but
shaders are required for complex rendering effects. Blend modes are
nice but limited. For example, you can't do per-pixel lighting with
just VBOs and blend modes.
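To illustrate the per-pixel lighting point, this is the kind of fragment shader the fixed pipeline can't replicate with blend modes: the normal is interpolated across the triangle and then renormalised and lit per fragment. It assumes a companion vertex shader that writes the eye-space normal and light direction into the two varyings; the varying names are illustrative:

varying vec3 normal, lightDir;   // written by the vertex shader in eye space

void main()
{
    vec3 N = normalize(normal);     // renormalise after interpolation
    vec3 L = normalize(lightDir);
    float diffuse = max(dot(N, L), 0.0);
    gl_FragColor = vec4(gl_LightSource[0].diffuse.rgb * diffuse, 1.0);
}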
I've been following this thread for a while and it is really fun.
This is my first attempt at 3D OpenGL, so I am having quite a time
with it. I sat and watched Dan Vatsky at Share the other week and we
were wondering why it isn't possible to morph a 3D shape he had
designed in Maya in Jitter. And you guys are pointing us somewhere
near that direction now!
Basically I wrote the shader to mix() between two of the shapes
documented on the website mentioned at the beginning of this thread.
For the param "sphere", 0. shows the owl shape and 1. shows the mobius
strip. After some tuning of the incoming points, each formula seems to
work and the fade does too. What I am having
difficulty with is the lighting and adding a video texture.
Am I right to assume that the lighting and texturing are done per
point on the shape (and for only one "side"), not for the shape as a
whole, and that this is why the lighting and texturing don't quite work?
Surprisingly I could figure out the math, but I am fairly new to
this. Any explanation on what is happening with lighting and
texturing and how it might work would help.
Thanks again for all the sharing and the great ideas on the list.
How do you imagine the video on the form looking? Is it mapped to the
surface directly? Is it reflection mapped? For the former, you will
have to generate texture coordinates in some fashion. The simplest
way is linearly with the input points. For the latter, just take the
fragment shader portion of the refract.reflect shader I posted last
week and it should work.
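For the "linearly with the input points" case, generating the coordinates can be a couple of lines inside the vertex shader's main(), assuming the incoming planar mesh carries u/v in its xy over a [-pi, pi] range (adjust to whatever range the patch actually sends):

// remap the incoming u/v range into [0, 1] and pass it along as a texture coordinate
const float PI = 3.14159265;
gl_TexCoord[0] = vec4((gl_Vertex.xy + vec2(PI)) / (2.0*PI), 0.0, 1.0);
// the fragment shader can then sample with texture2D(tex, gl_TexCoord[0].st)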