Luminance-based layering of frames stored in the 3rd dimension of jit.matrix

    Nov 04 2015 | 9:21 pm
    This feels like it's going to be a bit esoteric to explain, but I'll give it a shot. I'm working on a NIME project wherein I take surveillance photos of the audience's faces while I'm on stage, then display and sonify the captured portraits. What I'm hoping to achieve is to have all of those portraits blend together, not by averaging, but in the sense that all of the images are stored in a 3-dimensional array (imagine 25 portraits stacked on top of each other), and then I could feed in something like a 2-dimensional jit.bfg or jit.noise object and have the luminance value of each pixel determine which layer of the portrait pile that pixel displays. For example: if the luminance value of a pixel is 0.0, it displays the pixel at that location from the portrait at the bottom of the pile; a luminance value of 1.0 displays the pixel at that location from the portrait at the top of the pile; and a luminance value of 0.5 displays the pixel at that location from the 12th or 13th portrait in the pile, depending on how the value is interpolated across the total dim size of that 3rd dimension.
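The per-pixel selection described above can be sketched on the CPU with NumPy. Everything here is a stand-in: the dimensions, the random "portraits," and the random luminance map (which would come from jit.noise or jit.bfg in the actual patch) are all assumed for illustration.

```python
import numpy as np

# Hypothetical dimensions: 25 stacked portraits, 64x64 pixels each.
H, W, N = 64, 64, 25

# Stack of portraits: the last axis is the pile (bottom = index 0, top = N-1).
stack = np.random.rand(H, W, N).astype(np.float32)

# Luminance map in [0, 1], standing in for jit.noise / jit.bfg output.
lum = np.random.rand(H, W).astype(np.float32)

# Map each pixel's luminance to a layer index: 0.0 -> bottom, 1.0 -> top.
idx = np.rint(lum * (N - 1)).astype(np.intp)

# For every pixel, pull the value from its selected layer.
out = np.take_along_axis(stack, idx[..., None], axis=2)[..., 0]

assert out.shape == (H, W)
```

With N = 25, a luminance of 0.5 quantizes to layer index 12, i.e. the 13th portrait counting from the bottom, matching the example in the post.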
    This gif is a visual example of what I'm kind of trying to create. Note: I'm not trying to recreate this gif exactly, it's just the closest visual reference I've found for getting my idea across visually:
    I'm using the jit.matrix trick where you have a 3-dimensional matrix whose 3rd dimension is used as a frame buffer, with individual frames accessed via jit.submatrix -> @offset 0 0 $1. This is working fine, but after that I'm kind of stuck. The only idea I've come up with is a complex and inefficient poly system in which the total voices correspond to the dim size of the 3rd dimension: the luminance 'mask' is chopped into layers like a topographical map, each portrait is paired with its topographical slice, and then they are all added together.
    There's got to be a simpler way to achieve this. Any ideas? I've got the 3rd-dimension frame buffer example here:
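For reference, the "topographical" workaround described above can be sketched as a masked sum: quantize the luminance mask into one band per portrait, mask each portrait by its band, and accumulate. It produces exactly the same image as direct per-pixel indexing, but costs N full-frame passes per output frame (the dimensions and random data below are assumptions for illustration).

```python
import numpy as np

H, W, N = 64, 64, 25
stack = np.random.rand(H, W, N).astype(np.float32)  # pile of portraits
lum = np.random.rand(H, W).astype(np.float32)       # luminance mask

# Quantize luminance into N "topographic" bands, one per portrait.
idx = np.rint(lum * (N - 1)).astype(np.intp)

# Masked-sum version: each layer contributes only where its band is active.
out = np.zeros((H, W), dtype=np.float32)
for k in range(N):
    out += stack[:, :, k] * (idx == k)

# Identical result to indexing the stack directly, in a single pass.
direct = np.take_along_axis(stack, idx[..., None], axis=2)[..., 0]
assert np.allclose(out, direct)
```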

    • Nov 04 2015 | 11:28 pm
      hey matt, check out the subtex.3d.maxpat example patch, as i'm pretty sure this does what you're describing. it shows how to fill a 3d texture, and then display parts of that texture based on the luminance value of a map texture. it uses a shader called td.plane3d.jxs to perform this mapping. the example patch uses a simple gradient to do the mapping, but you could plug any matrix into that (e.g. bfg output).
      the important caveat is that your source images must be converted to POT textures, as opengl does not support rectangular 3d textures. the example patch has this set to 256x256, but there's no reason that can't be much larger.
      check out the patch, and let me know if you have any questions (make sure you enable usedstdim by clicking the toggle, i'm not sure why that's not enabled in this).
      you could probably recreate something like this with jit.gen, in order to get over the POT limitation
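Conceptually, what the shader-based mapping in that example does amounts to treating luminance as a normalized z coordinate into the 3D texture and blending between the two nearest slices. Here's a hedged CPU sketch of that idea in NumPy — the actual td.plane3d.jxs shader's internals aren't shown in this thread, and the dimensions and data below are assumptions:

```python
import numpy as np

H, W, N = 64, 64, 25
stack = np.random.rand(H, W, N).astype(np.float32)  # stand-in 3D texture
lum = np.random.rand(H, W).astype(np.float32)       # map texture luminance

# Treat luminance as a normalized z coordinate into the slice stack.
z = lum * (N - 1)
lo = np.floor(z).astype(np.intp)
hi = np.minimum(lo + 1, N - 1)
frac = (z - lo)[..., None]

a = np.take_along_axis(stack, lo[..., None], axis=2)
b = np.take_along_axis(stack, hi[..., None], axis=2)

# Linear blend between adjacent slices, like hardware filtering along z.
out = ((1.0 - frac) * a + frac * b)[..., 0]
assert out.shape == (H, W)
```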
    • Nov 05 2015 | 12:09 am
      Oh wow, this is totally what I was trying to achieve, and I'll get a performance bonus since this pushes a large part of it over to the GPU. Thanks for sharing this, Rob! I'll post back if I run into any issues.
    • Nov 09 2015 | 1:53 am
      This patch has been exactly what I needed for my project! I am curious, though, whether there is a way to pop and push frames after they're written to the object. I'd love to have each incoming frame push out the oldest stored frame, with every subsequent frame shifting its position, like you could normally do with an array. At the moment the only solution I have is to rewrite all the frames every time a new frame comes in. Any ideas?
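One common way to avoid rewriting every frame is a circular (ring) buffer: keep a write pointer that wraps modulo N and remap read order relative to it, so "pushing" a new frame is a single slice write and no stored frame ever moves. A minimal sketch, assuming the same stack layout as above (in a Jitter patch, the equivalent would be a counter driving the write offset into the 3rd dimension rather than this Python code):

```python
import numpy as np

H, W, N = 64, 64, 25
buf = np.zeros((H, W, N), dtype=np.float32)
head = 0  # slot the next frame will overwrite (i.e. the oldest frame)

def push(frame):
    """Overwrite the oldest slice in place; no other frames move."""
    global head
    buf[:, :, head] = frame
    head = (head + 1) % N

def logical(k):
    """Physical slice index of the k-th oldest frame (k=0 oldest, N-1 newest)."""
    return (head + k) % N

# Fill the buffer, then push one more frame: only one slice is rewritten.
for i in range(N + 1):
    push(np.full((H, W), float(i), dtype=np.float32))

assert buf[0, 0, logical(N - 1)] == float(N)  # newest frame
assert buf[0, 0, logical(0)] == 1.0           # oldest surviving frame
```

The luminance-to-layer mapping then just goes through `logical()` (a hypothetical helper name) so that layer order always reads oldest to newest, regardless of where the write pointer currently sits.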