How do I get a JPEG image from a memory address into a matrix?
I’m working with the Point Grey Ladybug2 camera and the libdc1394v2 example code (Mac OS X), and I can get the 24 JPEG images that compose the six 4-plane frames of video written to disk.
I feel like I’m close but am missing something vital. Are there any functions (or combinations of functions) in the Jitter SDK that already interpret JPEG data stored at an address into a matrix? If not, what intermediate step might be required? Would I have to populate the matrix cell by cell? Or is there something in the Apple SDK that would decode the JPEG data into a form I can populate a matrix with?
Any and all suggestions appreciated.
We don’t provide anything for decompressing JPEG data from RAM. If it is an uncompressed image pointer, you can often wrap the memory directly with JIT_MATRIX_DATA_REFERENCE; search the SDK for that string for examples. This is the strategy I would suggest (i.e. avoid JPEG compression altogether, as I’m reasonably certain the camera can provide you with uncompressed data).
Otherwise you can write the compressed JPEG data to a file and import it with jit.matrix’s import method.
If you have some code excerpts with specific questions, we might be able to help you further.
Thanks, I’ve been trying to figure out how to initialize the camera properly with uncompressed images. Finally got it.
Please forgive any stupid mistakes, I’m learning as I go.
I’ve been attempting to write to a matrix just as you’ve explained in the following post:
So I create the Matrix like so:
Then in _new:
info.flags = JIT_MATRIX_DATA_REFERENCE;
info.planecount = 1;
x->Uname = jit_symbol_unique();
m = jit_object_new(_jit_sym_jit_matrix, &info);
m = jit_object_method(m, _jit_sym_register, x->Uname);
jit_object_method(m, _jit_sym_getdata, &frame); // likely wrong; the frame in memory might be frame->image
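For reference, here is a sketch of what I believe the intended setup looks like, assuming the Jitter SDK's data-reference path (untested; FRAME_WIDTH, FRAME_HEIGHT, and frame->image are placeholders for the camera's actual dimensions and the libdc1394 frame buffer). The key difference from the excerpt above is that the matrix's data pointer is *set* with the data method rather than retrieved with getdata:

```c
t_jit_matrix_info info;
void *m;

jit_matrix_info_default(&info);
info.flags = JIT_MATRIX_DATA_REFERENCE;  /* wrap existing memory, don't allocate */
info.type = _jit_sym_char;
info.planecount = 1;                     /* raw single-plane camera data */
info.dimcount = 2;
info.dim[0] = FRAME_WIDTH;               /* assumed camera dimensions */
info.dim[1] = FRAME_HEIGHT;
info.dimstride[0] = 1;                   /* byte strides for the wrapped buffer */
info.dimstride[1] = FRAME_WIDTH;

m = jit_object_new(_jit_sym_jit_matrix, &info);
jit_object_method(m, _jit_sym_register, x->Uname);

/* point the matrix at the camera's frame buffer (set data, not getdata) */
jit_object_method(m, _jit_sym_data, frame->image);
```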
Would you point me in the right direction on how to set up an outlet in Max that lets me pass out a matrix, and how to pass this matrix out on a bang? I just don’t see any examples that pass a matrix out an outlet without the MOP/Jitter class.
Full code attached.
Sorry to bump, but I’m really stuck here. Is it possible to output a matrix without the Jitter MOP? Or should I rewrite this external?
If I need to create the Jitter MOP class (I’m assuming jit.noise would be the one to duplicate), how do I break up the code to initialize the camera? Do I put the initialization commands in the Max wrapper and then tell the Jitter class which address to look for the image at?
Thanks, any feedback would be helpful at this point.
Never mind. I’ve started using a MOP. What’s the best way to send open and close messages to initialize and close the camera? Should I just create a method that’s called from within a message handler in the Max wrapper?
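In case it helps anyone later, here is a sketch of the wrapper-side pattern I’m describing (untested; max_foo_open/max_foo_close and t_max_foo are placeholder names, while max_jit_class_addmethod_usurp_low and max_jit_obex_jitob_get are standard Max/Jitter SDK calls):

```c
/* In the Max wrapper's main(), after max_jit_class_wrap_standard(): */
max_jit_class_addmethod_usurp_low(maxclass, (method)max_foo_open,  "open");
max_jit_class_addmethod_usurp_low(maxclass, (method)max_foo_close, "close");

/* Each handler just forwards the message to the Jitter object,
   which owns the dc1394 camera handle. */
void max_foo_open(t_max_foo *x)
{
    jit_object_method(max_jit_obex_jitob_get(x), gensym("open"));
}

void max_foo_close(t_max_foo *x)
{
    jit_object_method(max_jit_obex_jitob_get(x), gensym("close"));
}
```

The Jitter class would then register matching "open" and "close" methods with jit_class_addmethod, and those are where the camera initialization and teardown actually happen.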
Okay, I’ve got a working prototype, but it’s painfully slow. The biggest issue is the Bayer conversion, which doesn’t work, likely because the image is rotated 90 degrees from the orientation the libdc1394 conversions expect. I imagine we could speed up performance if we did the color conversions on the GPU. Is it possible to do this within the external? Or should I output the raw image and create jit.gl.slab objects to apply the Bayer filter?
If anyone knows of a slab that does Bayer filtering, can you post it here please?