Changing the colors of a jit.gl.multiple in real time
I am trying to feed the output of my jit.gl.pix into a jit.matrix and then into the jit.gl.multiple color array. I notice that there is no output when I feed the jit.gl.pix into a jit.matrix. Is this new behavior for Max 9? I am pretty sure I used this technique a lot before.
Looking at my patch will help make everything more clear :) Thanks for the help
It was not working because:
- you were not banging the source [jit.matrix] on each frame, because your [qmetro] was disabled
- the first [jit.gl.pix] was just generating a texture from normalized texture coordinates (the [norm] operator inside it), regardless of the image imported into the [jit.matrix] above it; see the sketch after this list. I removed it, assuming you want the image you imported with 'importmovie'.
- the last [jit.matrix] was not connected to [jit.gl.multiple]
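For reference, here is roughly what that first [jit.gl.pix] was doing, rewritten as a GenExpr codebox (a reconstruction, not your patch's exact contents, which used the patcher-style [norm] operator):

```
// GenExpr codebox equivalent of the removed [jit.gl.pix]:
// it never reads in1, so whatever you import into the [jit.matrix]
// above is ignored; the output is just a red/green coordinate gradient.
out1 = vec(norm.x, norm.y, 0., 1.);
```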
Furthermore, some optimizations could be done:
- Don't use [qmetro]; use the bangs from [jit.world] instead. You'll get exactly one bang per rendered frame, no more, no less, at the right time, and that's all you need here.
- Sending a [jit.matrix] to a jit.gl object requires transferring the image from RAM to the VRAM of your GPU; sending a texture from a jit.gl object back to a [jit.matrix] requires the opposite transfer. This is very inefficient, and you want to avoid it as much as possible. Given that [jit.gl.multiple] only takes matrices as input, you don't need [jit.gl.pix] at all here: you can replace it with a [jit.gen] containing the exact same operators (see the sketch after this list).
- I also did a bit of cleaning, and added a [jit.time] to easily change your time attribute.
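To make the [jit.gl.pix] to [jit.gen] swap concrete, here is a small GenExpr codebox that runs unchanged in both (a sketch; the 'amount' param and the gradient tint are illustrative, not taken from your patch):

```
// In [jit.gl.pix] this outputs a texture; in [jit.gen] it outputs a matrix,
// so the whole chain stays in the matrix domain and the result can feed
// [jit.gl.multiple] directly, with no texture-to-matrix readback.
Param amount(1.);                    // 0 = untouched, 1 = fully tinted
tint = vec(norm.x, norm.y, 1., 1.);  // gradient from normalized coords
out1 = in1 * mix(vec(1., 1., 1., 1.), tint, amount);
```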
Old habits die hard :)
Thank you so much for helping me out. Long-time Max user here.
I became so accustomed to using jit.gl.pix because I love its efficiency for generating textures. I never realized the technical inefficiency of passing the output from gl.pix into a matrix.
Is there an efficient way to convert the texture output into a matrix output, or would you recommend using gen for shaders?
It is an issue for me when I create a shader and want to use its output to modulate jit.gl.mesh position points, for example.
Usually it's fine to go from matrix to texture, especially when you just loadbang them and that's it.
I believe the most costly operation is the other way around, from texture to matrix, but maybe that's simply because you usually pull more data in that direction (like a 4K, 4-channel texture that you want to bring back into the matrix domain) than the other (like pushing a geometry matrix with a few thousand points). You can use [jit.gl.asyncread] to do this conversion without impacting your performance too much.
[jit.pwindow] handles textures just fine (except for a bug when they are in subpatchers being edited, I believe), so no need to use asyncread for that.
If you want to play with geometry efficiently on the GPU, it quickly becomes harder than playing with textures. Using jit.gl.pix to process geometry is already kind of a workaround, and the only way to do it efficiently (staying on the GPU) is to send the result as a height map texture to jit.gl.material, which is a quite limited approach. Otherwise, you have to go the hard way with jit.gl.tf and jit.gl.buffer, and/or jit.gl.shader, which requires writing your own shader code in the GLSL language.
But if your project is small enough and/or you have enough GPU/CPU power, it's fine to generate geometry in the texture domain, then read it back to a matrix and send it to a jit.gl.mesh or jit.gl.multiple. If it works, it works! But depending on the amount and kind of processing in place, it might be more efficient to stay in the matrix domain rather than paying the cost of a texture-to-matrix readback.
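If you do stay in the matrix domain, here is a minimal sketch of that route, assuming a 3-plane float32 position matrix and a [jit.gen] sitting right before your [jit.gl.mesh] (the 'phase'/'depth' params and the ripple math are just illustrative):

```
// [jit.gen] codebox, banged once per frame from [jit.world]:
Param phase(0.);   // drive this from your time source
Param depth(0.2);  // displacement amount
p = in1;           // xyz of the current vertex
z = p.z + depth * sin(p.x * 6.2832 + phase);  // simple ripple along x
out1 = vec(p.x, p.y, z);
```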