Any implementations of qHull in Max ?


    Feb 04 2019 | 6:50 pm
    Hi there, before trying to understand the time cost of my own implementation of this, do you know of any qHull implementation in Max that takes x,y,z matrices as input?

    • Feb 05 2019 | 3:33 am
      I did many mxj objects and patches with quickhull3d.1.4.jar years ago (Max 4 patches, yeepah). I used it only in 2D, flattening the z dimension. Perhaps I can send you the mxj I developed as an interface for the library... My work was mainly about 'breaking' an image depending on the uniformity of its color values, then creating a graphic redesign of the voronoi polygons in a matrix, and then Jitter GL meshes.
      If you want to use it for strictly 3D weights, it's perhaps better to start from scratch.
    • Feb 05 2019 | 8:45 am
      Hi SPA, interesting. Was it "real-time"?
    • Feb 06 2019 | 10:42 am
      With the mxj interface, I think I could output the voronoi of 20 changing points easily at 30 fps, on a 2009 Mac laptop. The problem is that the cost of the voronoi/delaunay calculation grows quickly with the number of points. The real-time problem I had was calculating those 100 points of interest in an image at 30 fps, and then feeding them to the mxj. I can send you some mxj objects if you want, but they really 'cascade' for matrix output, using the Mpolygon Java lib for storage.
      If you just want to feed 20 points to quickhull3d.1.4.jar and get the polygons out, doing it from scratch should be simpler.
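For the 2D case discussed above (z flattened away), a from-scratch hull really is small. Here is a minimal sketch in Python (an mxj version would be the same logic in Java); it uses Andrew's monotone chain rather than quickhull proper, but for a small point set the output hull is the same:

```python
def cross(o, a, b):
    """2D cross product of vectors OA and OB; > 0 means a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # drop the duplicated endpoints where the two chains meet
    return lower[:-1] + upper[:-1]
```

In an mxj the same loop would read the x/y pairs out of the incoming matrix and write the hull vertices back into an output matrix.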
    • Feb 06 2019 | 6:34 pm
      Gosh, that's a small set of points. I'm curious about the performance of this exact same code as a C++ external that processes a matrix and renders the convex hull as another matrix.
    • Feb 06 2019 | 6:59 pm
      https://github.com/akuukka/quickhull seems interesting and is public domain. Implementing it with the Max SDK could be nice. Trying to find a hole in my schedule for that :-/
    • Feb 06 2019 | 11:22 pm
      I said 20 pts but my image was more like 200 points. It's very important to know how you are going to use it for animation:
      _ With a well-defined set of points, you will only use quickhull, and it should be fast. The points will move slowly from frame to frame, and so will the voronoi polygons. I absolutely recommend this solution: generate and control your points for the delaunay triangulation > voronoi.
      _ If you need to generate those weighted points (from an image, for example): in my experience, the main real-time problem was not coming from quickhull but from the algorithm that generates the points. See this interesting thread for sampling algorithms: https://codegolf.stackexchange.com/questions/50299/draw-an-image-as-a-voronoi-map
      What happens is that a subtle difference between two frames can lead to a totally different set of points being generated by the sampling algorithm. Then quickhull will generate a totally different set of polygons from those points: a completely different 'mosaic' for each frame. Then it flickers like hell... I never found a way to generate a 'softly evolving' set of points between frames...
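One way to damp that reshuffling (not something from this thread, just a hedged sketch) is to make the sampling deterministic per grid cell: each cell always contributes the same candidate position, and a point only appears or disappears when that cell's weight crosses a threshold, instead of the whole set being re-drawn every frame. A hypothetical Python sketch, where `weight(x, y)` is an assumed importance function (e.g. local color variance of the image):

```python
import hashlib

def cell_point(ix, iy, cell=32):
    """Deterministic jittered sample position inside grid cell (ix, iy).
    The hash depends only on the cell index, so it is stable across frames."""
    h = hashlib.md5(f"{ix},{iy}".encode()).digest()
    jx, jy = h[0] / 255.0, h[1] / 255.0
    return ((ix + jx) * cell, (iy + jy) * cell)

def sample_points(weight, width, height, cell=32, threshold=0.5):
    """Keep one fixed candidate per cell whose weight passes the threshold.
    `weight` is a hypothetical callable returning importance in 0..1."""
    pts = []
    for iy in range(height // cell):
        for ix in range(width // cell):
            x, y = cell_point(ix, iy, cell)
            if weight(x, y) >= threshold:
                pts.append((x, y))
    return pts
```

Between two similar frames, only the cells whose weight crosses the threshold change, so the voronoi mosaic evolves instead of flickering.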
    • Feb 07 2019 | 3:02 pm
      Hi SPA,
      actually, they will come from a depthmap movie; as it plays, its luminance values will distort a jit.gl.mesh.
      It means that I'll need to recalculate the convex hull at each frame of the movie.
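A sketch of that per-frame input stage, assuming the depthmap frame arrives as rows of 0-255 luminance values (function name and scaling are hypothetical):

```python
def depth_to_points(depth, scale=1.0):
    """Map a grayscale depth frame (list of rows of 0..255 luminance)
    to (x, y, z) points, with z taken from the luminance."""
    pts = []
    for y, row in enumerate(depth):
        for x, lum in enumerate(row):
            pts.append((float(x), float(y), lum / 255.0 * scale))
    return pts
```

The resulting point list is what would be handed to the hull computation on every frame.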
      Actually, there are also cheap and glitchy solutions:
      - finding the furthest points in both directions along each axis (6 points in total) and linking these points with 'things' (lines, etc.)
      - reducing the grid and calculating the convex hull of the reduced set of points
      let's see...
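The first cheap solution above (furthest points in both directions along each axis) is essentially a min/max per axis; a small Python sketch:

```python
def extreme_points(points):
    """Furthest points in both directions along each axis (up to 6 points)
    for a list of (x, y, z) tuples -- the cheap stand-in for a full hull."""
    extremes = []
    for axis in range(3):
        extremes.append(min(points, key=lambda p: p[axis]))
        extremes.append(max(points, key=lambda p: p[axis]))
    # duplicates can appear when one point is extreme on several axes
    seen, unique = set(), []
    for p in extremes:
        if p not in seen:
            seen.add(p)
            unique.append(p)
    return unique
```

It is a single pass per axis over the point set, so it stays cheap even at full grid resolution; the glitch is that the 6 points are only a very coarse stand-in for the real hull.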
    • Feb 07 2019 | 5:11 pm
      I suppose you will use depthkit. I'd be interested to know how they encode the depth in an mp4 video codec.
      Talking of depth maps of humans and skeletons: just a thought on a way to 'voronoi' the 3D data. I think the delaunay points could be a sum of:
      _ the skeleton points from the Kinect, as the main points of the grid (to give consistency between frames)
      _ the intersections of lines going from the skeleton points to the border of the usermap (with specific angles, e.g. the neck node would 'send' two lines 30° up-left and up-right and take their intersections with the usermap outline)
      It should result in a first simple triangulation, and the way the triangles are created should be fairly consistent between frames.
      Depthkit eventually contains the usermap but does not retain the skeleton (by the way, I think that's an omission in their format that will block some future processing). Well, you will see in your development. But keep in mind what I told you about my experience: if you do not create a proper logic for selecting points consistently between frames, you will get a completely different set of points for each frame, and that will result in a flickering effect. It can be funny, but also very limiting graphically.
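The "lines from skeleton points to the usermap border" idea could be sketched as a ray march through the binary usermap; all names here are hypothetical, since the thread only describes the idea:

```python
import math

def ray_to_border(mask, start, angle_deg, step=1.0):
    """March from `start` along `angle_deg` through a binary usermap
    (mask[y][x] truthy = inside the user) and return the last inside pixel,
    i.e. the intersection of the ray with the usermap outline."""
    dx = math.cos(math.radians(angle_deg))
    dy = -math.sin(math.radians(angle_deg))  # image y axis points down
    x, y = start
    last = start
    while 0 <= int(round(x)) < len(mask[0]) and 0 <= int(round(y)) < len(mask):
        if not mask[int(round(y))][int(round(x))]:
            break
        last = (int(round(x)), int(round(y)))
        x += dx * step
        y += dy * step
    return last
```

Calling this from, say, the neck joint at 60° and 120° (the "30° up-left and up-right" of the post) would yield two stable outline points per frame, since the skeleton joints themselves move smoothly.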
      Keep us informed of your result.