I made a lot of mxj externals and patches with quickhull3d.1.4.jar years ago (Max 4 patches, yeepah!).
I used it only in 2D, flattening the z dimension.
Perhaps I can send you the mxj I developed as an interface to the library...
My work was mainly about 'breaking' an image apart depending on the uniformity of its color values, then creating a graphic redesign of the Voronoi polygons in a matrix, and then Jitter GL meshes.
If you want to use it strictly for 3D weights, it's probably better to start from scratch.
With the mxj interface, I think I could output the Voronoi of 20 moving points easily at 30 fps on a 2009 Mac laptop.
The problem is that the cost of the Voronoi/Delaunay calculation grows quickly with the number of points (O(n log n) for good algorithms, much worse for naive ones).
The realtime problem I had was calculating those 100 points of interest in an image at 30 fps, and then feeding them to the mxj.
I can send you some mxj if you want, but they are really 'cascading' for matrix output, using the Mpolygon Java lib for storage.
If you just want to feed 20 points to quickhull3d.1.4.jar and get the polygons out, it should be simpler to start from scratch.
I said 20 points, but my images were more like 200 points.
It's very important to know which case you are in if you're going to use it for animation:
_ a well-defined set of points > then you only use quickhull, and it should be fast.
The points will move slowly from frame to frame > and so will the Voronoi polygons.
I absolutely recommend this solution: generate and control your points for the Delaunay triangulation > Voronoi.
_ if you need to generate those weighted points (from an image, for example):
In my experience, the main realtime problem came not from quickhull but from the algorithm generating those points.
See this interesting thread for sampling algorithms:
What happens is that a subtle difference between 2 frames can lead to a totally different set of points being generated by the sampling algorithm. Then quickhull will generate a totally different set of polygons from those points: a completely different 'mosaic' for each frame.
Then it will flicker like hell...
I never found a way to generate a 'softly evolving' set of points between frames...
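One thing that might help with that flicker (just a sketch of an idea, not something from my old patches — all the class and method names here are made up): match each point of the previous frame to its nearest neighbour in the newly sampled set, and only move it a fraction of the way there, so the point set drifts instead of jumping.

```java
import java.util.Arrays;

// Sketch: stabilize a resampled 2D point set between frames by moving each
// previous-frame point partway toward its nearest neighbour in the new frame.
// Hypothetical names, not from the mxj patches discussed above.
public class PointStabilizer {

    // Move each old point a fraction `alpha` (0..1) toward the nearest new point.
    static double[][] stabilize(double[][] prev, double[][] next, double alpha) {
        double[][] out = new double[prev.length][2];
        for (int i = 0; i < prev.length; i++) {
            double[] target = nearest(next, prev[i]);
            out[i][0] = prev[i][0] + alpha * (target[0] - prev[i][0]);
            out[i][1] = prev[i][1] + alpha * (target[1] - prev[i][1]);
        }
        return out;
    }

    // Brute-force nearest neighbour (fine for a few hundred points).
    static double[] nearest(double[][] pts, double[] p) {
        double best = Double.MAX_VALUE;
        double[] bestPt = pts[0];
        for (double[] q : pts) {
            double dx = q[0] - p[0], dy = q[1] - p[1];
            double d = dx * dx + dy * dy;
            if (d < best) { best = d; bestPt = q; }
        }
        return bestPt;
    }

    public static void main(String[] args) {
        double[][] prev = {{0, 0}, {10, 10}};
        double[][] next = {{1, 0}, {10, 12}};   // sampler jittered slightly
        double[][] smooth = stabilize(prev, next, 0.5);
        // each point has moved halfway toward its nearest new neighbour
        System.out.println(Arrays.deepToString(smooth));
    }
}
```

You would then feed the stabilized set (not the raw sampled one) to quickhull, so the polygons move continuously even when the sampler is noisy.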
Actually, they will come from a depth-map movie that will be playing, and its luminance values will distort a jit.gl.mesh.
It means that I'll need to recalculate the convex hull at each frame of the movie.
Actually, there are also cheap and glitchy solutions:
- finding the furthest points in both directions along each axis (6 points in total) and linking these points with 'things' (lines, etc.)
- reducing the grid and calculating the convex hull of the reduced set of points
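The first trick is just a min/max scan per axis. A sketch in plain Java (class and method names are only for illustration):

```java
// Sketch of the 'cheap' trick: keep only the 6 extreme points of a 3D cloud
// (min and max along each axis) instead of running a full hull.
public class ExtremePoints {

    // points is N x 3; returns indices {minX, maxX, minY, maxY, minZ, maxZ}.
    static int[] extremes(double[][] points) {
        int[] idx = new int[6];
        for (int axis = 0; axis < 3; axis++) {
            int lo = 0, hi = 0;
            for (int i = 1; i < points.length; i++) {
                if (points[i][axis] < points[lo][axis]) lo = i;
                if (points[i][axis] > points[hi][axis]) hi = i;
            }
            idx[2 * axis] = lo;
            idx[2 * axis + 1] = hi;
        }
        return idx;
    }

    public static void main(String[] args) {
        double[][] cloud = {
            {0, 0, 0}, {5, 1, 1}, {-3, 2, 2}, {1, 9, -4}, {2, -7, 8}
        };
        // single O(n) pass, regardless of how many points the cloud has
        int[] e = extremes(cloud);
        System.out.println(java.util.Arrays.toString(e));
    }
}
```

Those 6 indices are what you would link with lines (or whatever 'things'); it is glitchy precisely because the extremes can swap between frames, but it is as cheap as it gets.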
I suppose you will use depthkit.
I'd be interested to know how they encode the depth in an mp4 video codec.
Talking of depth maps of humans and skeletons:
Just a thought on a way to 'voronoi' the 3D data.
I think the Delaunay points could be the sum of:
_ the skeleton points from the Kinect, as the main points of the grid (to give consistency between frames)
_ the intersections of lines going from the skeleton points to the border of the usermap (at specific angles; e.g. the neck node 'sends' 2 lines at 30° up-left and up-right, and you take their intersections with the usermap outline)
It should result in a first simple triangulation, and the way triangles are created should be fairly consistent between frames.
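A rough sketch of that ray idea, assuming the usermap is a simple boolean mask (everything here — names, the screen-space conventions, the pixel-stepping — is an illustrative assumption, not Kinect API code):

```java
// Sketch of the ray idea above: from a skeleton joint, walk along a given
// angle until the usermap (a boolean user/background mask) ends, and take
// that border pixel as an extra Delaunay point.
public class BorderRay {

    // mask[y][x] == true means "inside the user". Returns {x, y} of the last
    // inside pixel along the ray, or null if the start is already outside.
    static int[] castToBorder(boolean[][] mask, int sx, int sy, double angleDeg) {
        if (!mask[sy][sx]) return null;
        double rad = Math.toRadians(angleDeg);
        double dx = Math.cos(rad), dy = -Math.sin(rad); // screen y grows downward
        double x = sx, y = sy;
        int lastX = sx, lastY = sy;
        while (true) {
            x += dx; y += dy;
            int ix = (int) Math.round(x), iy = (int) Math.round(y);
            if (iy < 0 || iy >= mask.length || ix < 0 || ix >= mask[0].length
                    || !mask[iy][ix]) {
                return new int[]{lastX, lastY}; // border reached
            }
            lastX = ix; lastY = iy;
        }
    }

    public static void main(String[] args) {
        boolean[][] mask = new boolean[20][20];
        for (int y = 5; y < 15; y++)
            for (int x = 5; x < 15; x++)
                mask[y][x] = true;                  // a 10x10 'user' blob
        int[] hit = castToBorder(mask, 10, 10, 0);  // ray straight to the right
        System.out.println(hit[0] + "," + hit[1]);
    }
}
```

For the neck node you would call this twice (e.g. at 120° and 60° in this convention) and add both hits to the skeleton points before triangulating; since the joints move smoothly, the hits should too.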
Depthkit possibly contains the usermap but does not retain the skeleton (by the way, I think that's an omission in their format that will rule out some future processing).
Well, you will see during your development. But keep in mind what I told you about my experience:
If you don't create a proper logic for selecting points consistently between frames, you will get a completely different set of points for each frame, and that will result in a flickering effect.
It can be fun, but also very limiting graphically.