I spent a few hours creating a patch to explain my problem here, and eventually found the solution myself. That's no reason not to post about it anyway, in case someone else encounters something similar.
So the patch basically grabs the webcam and uses luminance as a heightmap for a jit.gl.mesh, after some processing: a slide and some blur.
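For context, the "slide" step is a per-pixel temporal low-pass filter of the form y[n] = y[n-1] + (x[n] - y[n-1]) / slide. A minimal NumPy sketch (not Jitter code, and the `slide` function name is just illustrative) of that smoothing behavior:

```python
import numpy as np

def slide(frames, slide_amount=4.0):
    """Temporal low-pass over a sequence of frames:
    y[n] = y[n-1] + (x[n] - y[n-1]) / slide_amount."""
    out = []
    y = None
    for x in frames:
        x = np.asarray(x, dtype=np.float32)
        # First frame initializes the filter state; later frames ease toward the input.
        y = x.copy() if y is None else y + (x - y) / slide_amount
        out.append(y)
    return out

# A step input settles gradually toward the new value:
frames = [np.zeros((2, 2))] + [np.ones((2, 2))] * 3
smoothed = slide(frames, slide_amount=2.0)
print(smoothed[-1][0, 0])  # 0 -> 0.5 -> 0.75 -> 0.875
```

Higher slide values mean slower response, which is what smooths out frame-to-frame jitter in the heightmap.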
I tried many different approaches to get the best performance I could (in my main project the video source also textures the mesh, and there are a lot of other GL objects), and then realized that the output of a jit.slide was very different from that of a [jit.gl.slab @file tp.slide.jxs]: the slab's output was annoyingly ugly (the mesh was no longer smooth, but full of tiny stairs), even after 6 passes of Gaussian blur. The weirdest thing was that the problem went away if I passed my video source through a jit.matrix object.
In fact the solution had nothing to do with CPU->GPU or GPU->CPU transfers, or with misusing a shader; it was only a matter of matrix types. The ugly results appeared whenever the slide object (either jit.slide or its slab alternative) was working on a char matrix, which has far lower precision (only 256 values per plane) than a float32 matrix. Obvious in hindsight. I spent a day figuring it out, and I feel quite dumb right now.
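To see why 256 levels produce stairs on a shallow heightmap, here is a quick NumPy sketch (again, not Jitter code; the gradient range is a made-up example) comparing a float gradient with the same data snapped to 8-bit, char-style values:

```python
import numpy as np

# Hypothetical shallow luminance gradient, like a gently sloped heightmap.
height = np.linspace(0.0, 0.05, 1000, dtype=np.float32)

# char-style path: snap to 256 levels (0..255), then scale back to 0..1.
as_char = np.round(height * 255.0).astype(np.uint8)
back = as_char.astype(np.float32) / 255.0

# The float path keeps a distinct value per sample; the char path collapses
# the gradient onto a handful of flat steps -- the "tiny stairs" on the mesh.
print(len(np.unique(height)), len(np.unique(back)))
```

This is also why blurring after the fact barely helps: the quantization has already destroyed the intermediate values, so keeping the chain in float32 from the start is the real fix.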
Here is the patch to reproduce the problem and solve it by yourself ;)