GPU Frame Delay w/ 3D Textures - Calculating Maximum Buffer Size + Resolution
Hey all,
I've been reverse engineering a GPU-based frame delay module from the subtex.3d.maxpat example and it's working really well. I've been led to believe that all of the dimensions need to be powers of 2 (i.e. 32, 64, 128).
I've been playing with different resolutions for the xy dimensions (texture resolution) and the z dimension (total frames stored), and at certain points Max just crashes (e.g. a z dimension of 128 frames at an xy resolution of 2048).
Is there a reason for this, or a way to calculate at what point it would crash? I assumed it had to do with video memory, but I can make multiple texture buffers at high resolutions and see no slowdown or crash; it's only when I go over some seemingly arbitrary value on a single texture buffer that it crashes. Is there a resolution limit on individual 3D texture buffers? It looks like it when I open the OpenGL Status window and check the Texture tab (MAX_3D_TEXTURE_SIZE). Is there a way to override or adapt that? I'm using a pretty hefty video card in my Windows desktop, but the max size seems similar to the one on my OSX laptop.
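For reference, here's the back-of-envelope math I've been using to guess the memory footprint of one of these buffers. This is just a sketch and assumes 4 bytes per texel (8-bit RGBA); float32 textures would be 4x larger:

```c
#include <stdio.h>

/* Rough estimate of the video memory one 3D texture needs.
   Assumes 4 bytes per texel (8-bit RGBA); float32 RGBA would be 16. */
static unsigned long long tex3d_bytes(unsigned long long w,
                                      unsigned long long h,
                                      unsigned long long d,
                                      unsigned long long bytes_per_texel)
{
    return w * h * d * bytes_per_texel;
}

int main(void)
{
    /* 2048 x 2048 x 128 frames, char RGBA: the combo that crashes for me */
    unsigned long long bytes = tex3d_bytes(2048, 2048, 128, 4);
    printf("%llu bytes (~%.1f GB)\n", bytes, bytes / 1e9);
    /* prints: 2147483648 bytes (~2.1 GB) */
    return 0;
}
```

By that math the 2048 x 2048 x 128 case is a single ~2 GB allocation, which I'm guessing could be the problem even on a card with plenty of total VRAM.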
This may not be so much a problem to be solved as curiosity about how video memory and 3D textures work. You can play with the patch I made below to get a sense of what I'm talking about.
hey Matt, no easy answer here. It's crashing in the glTexImage3D submission, so likely a vendor limitation (even though the reported MAX value suggests otherwise). searching for "glTexImage3D max size" might elucidate somewhat.
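for what it's worth, outside of Max you can ask the driver whether it claims to accept a given size before committing any memory, via the GL_PROXY_TEXTURE_3D target. rough sketch in plain OpenGL C (assumes a current GL context and an initialized loader like GLEW, and note that some drivers optimistically accept proxy requests, so a real allocation can still fail):

```c
#include <GL/glew.h>
#include <stdio.h>

/* Returns 1 if the driver claims it can create a 3D texture of the given
   size/format, 0 otherwise. A proxy submission allocates nothing; the
   driver zeroes the reported width if the request is unsupported. */
static int can_alloc_3d_texture(GLsizei w, GLsizei h, GLsizei d,
                                GLenum internal_format)
{
    GLint got_width = 0;
    glTexImage3D(GL_PROXY_TEXTURE_3D, 0, internal_format,
                 w, h, d, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0, GL_TEXTURE_WIDTH,
                             &got_width);
    return got_width != 0;
}

/* usage, assuming a GL context is already current */
void report(void)
{
    GLint max_dim = 0;
    /* same number the OpenGL Status window's Texture tab shows */
    glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &max_dim);
    printf("MAX_3D_TEXTURE_SIZE: %d\n", max_dim);
    printf("2048x2048x128 RGBA8 ok? %d\n",
           can_alloc_3d_texture(2048, 2048, 128, GL_RGBA8));
}
```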
trial and error is probably your only friend here, but others might have more experience. my take is that unless you have a very specific reason to use 3d textures, i would embrace the flexibility of something like the jit.gl.textureset example.