This feels like it's going to be a bit esoteric to explain, but I'm going to give it a shot. I'm working on a NIME project wherein I take surveillance photos of the audience's faces while I'm on stage, then display and sonify the captured faces. What I'm hoping to achieve is the ability to have all of those portraits blend together, not by averaging, but in the sense that all of the images are stored in a three-dimensional array (imagine 25 portraits stacked on top of each other). I could then feed in something like a 2D jit.bfg or jit.noise matrix and have the luminance value of each pixel determine which layer of the portrait pile that pixel is drawn from. For example: a luminance value of 0.0 would display the pixel at that location from the portrait at the bottom of the pile, a luminance value of 1.0 would display the pixel at that location from the portrait at the top of the pile, and a luminance value of 0.5 would display the pixel at that location from the 12th or 13th portrait in the pile, depending on how the value is scaled to the total dim size of that third dimension.
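(To make the mapping concrete, here's the per-pixel logic in numpy rather than Max; array names, dimensions, and the round-to-nearest-layer choice are all my assumptions, just a sketch of the idea.)

```python
import numpy as np

# Hypothetical stand-ins: 25 grayscale portraits stacked along axis 0,
# plus a 2D luminance "mask" (what jit.noise / jit.bfg would supply), 0..1.
depth, h, w = 25, 240, 320
stack = np.random.rand(depth, h, w)
lum = np.random.rand(h, w)

# Map each pixel's luminance to a layer index: 0.0 -> bottom, 1.0 -> top.
idx = np.rint(lum * (depth - 1)).astype(int)

# For every (y, x), pull the pixel from the layer that idx selects.
out = np.take_along_axis(stack, idx[None, :, :], axis=0)[0]
```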
I'm using the jit.matrix trick where you have a three-dimensional matrix whose third dimension is used as a frame buffer, with individual frames accessed via jit.submatrix -> @offset 0 0 $1. This is working fine, but after that I'm kind of stuck. The only idea I've been able to come up with is a complex and inefficient poly system in which the total number of voices corresponds to the dim size of the third dimension: the luminance 'mask' is chopped into layers like a topographical map, each layer is paired with its topographical slice of the portrait pile, and then they are all added together (sketched below).
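(Again in numpy rather than Jitter, here's what I think that topographic version computes, under the same assumed names as the sketch above: one binary mask per layer, multiply, sum. It gives the same image as the direct lookup, it just takes one pass per layer instead of one pass total, which is why it feels so wasteful.)

```python
import numpy as np

depth, h, w = 25, 240, 320
stack = np.random.rand(depth, h, w)   # the portrait pile again (hypothetical)
lum = np.random.rand(h, w)            # the luminance mask
idx = np.rint(lum * (depth - 1)).astype(int)

out = np.zeros((h, w))
for k in range(depth):                # one "voice" per layer of the pile
    out += stack[k] * (idx == k)      # keep only pixels whose slice is layer k
```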