using an entire image as a 'pixel'
Hi,
I'd like to divide a matrix up into something like 16x16 and fill each pixel with an image selected at random from a number of possibilities, e.g. 4 images. My idea was to use jit.noise (1 float32 16 16), then look at the value for each pixel: if 0 to 0.25 use img 0, if 0.25 to 0.5 use img 1, etc. The problem is that I can't use the 'setcell' message to jit.matrix to put an image, rather than a set of values, into a cell -- obviously, since a whole image is larger than one pixel!
Anyone have a better idea of how I can divide a matrix into a number of 'pixels' and randomly distribute any of a number of images into those pixels?
Cheers,
Jay
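The thresholding step you describe is just a floor: multiply the 0.-1. noise value by the number of images and round down. A minimal sketch of that mapping as a [js] function (the function name is made up for illustration):

```javascript
// Map a 0.-1. value (e.g. one cell read from jit.noise) to an image index 0-3.
function whichImage(v)
{
    // 0-0.25 -> 0, 0.25-0.5 -> 1, 0.5-0.75 -> 2, 0.75-1.0 -> 3;
    // min() keeps v == 1.0 from falling out of range
    return Math.min(Math.floor(v * 4), 3);
}
```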
You're on the right track. The matrix that you finally display will of course have to be much larger than 16x16; each of those 256 cells (what you're calling 'pixels') has to hold a full image at whatever resolution you want. So, make a named jit.matrix object (with @usedstdim 1) whose dimensions are 16 times that resolution (say, 1920x1440 if you want each image to be 120x90), and place your images in it using the dstdimstart and dstdimend attributes. (See Jitter Tutorial 14 for more about that.) Here's an example of how to do that. I'll leave the process of choosing the image locations up to you. (I'd just use a couple of "uzi 16 0" objects, multiplying the count output of one by the image width and of the other by the image height.)
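Something along those lines can also be scripted in a [js] object. This is an untested sketch, not the patch itself: it assumes the four sources are 120x90, 4-plane char matrices named img0 through img3 (placeholder names), calls the big output matrix "mosaic", and assumes that JitterMatrix's frommatrix() honors the usedstdim/dstdimstart/dstdimend attributes the same way a jit.matrix object does when a matrix arrives at its inlet.

```javascript
// Untested sketch: tile a 1920x1440 output matrix from four 120x90 source
// matrices (assumed to be 4-plane char, named img0-img3 -- placeholder names).
var cellw = 120, cellh = 90;    // resolution of each "pixel" image
var cols = 16, rows = 16;

// the big named output matrix: 16 * 120 = 1920 wide, 16 * 90 = 1440 high
var out = new JitterMatrix("mosaic", 4, "char", cols * cellw, rows * cellh);
out.usedstdim = 1;              // write only into the dstdimstart/dstdimend region

// references to the four (pre-existing) source image matrices
var imgs = [];
for (var i = 0; i < 4; i++)
    imgs.push(new JitterMatrix("img" + i));

function bang()
{
    for (var row = 0; row < rows; row++) {
        for (var col = 0; col < cols; col++) {
            // pick one of the four images at random
            // (same as thresholding a jit.noise cell into 0-3)
            var which = Math.floor(Math.random() * 4);

            // aim the destination region at this cell...
            out.dstdimstart = [col * cellw, row * cellh];
            out.dstdimend   = [(col + 1) * cellw - 1, (row + 1) * cellh - 1];

            // ...and copy the chosen image into it
            out.frommatrix(imgs[which]);
        }
    }
    outlet(0, "jit_matrix", out.name);   // e.g. on to a jit.window
}
```

Banging the js object regenerates the whole 16x16 tiling; the nested loops stand in for the pair of uzi objects mentioned above.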
Wouldn't it also be possible to move "inward" along a different dimension of the matrix? I've never explored this, but it's one of the ways I'd imagined storing and retrieving multiple images in a single matrix.
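It should be: for example, the four 120x90 images could live as "layers" along the third dimension of a single 120x90x4 matrix. Here's a slow but straightforward cell-by-cell sketch in [js] (untested; the matrix names are placeholders, and region copies with usesrcdim/usedstdim would likely be faster, though I haven't checked how Jitter maps a 2D copy into a 3D region):

```javascript
// Untested sketch: four 120x90, 4-plane char images stored as layers
// along the third dimension of one 3D matrix.
var store = new JitterMatrix("imgstore", 4, "char", 120, 90, 4);

// Copy an existing named 2D matrix (e.g. "img2") into layer 'layer' of the store.
function storeImage(srcname, layer)
{
    var src = new JitterMatrix(srcname);   // reference to the existing named matrix
    for (var y = 0; y < 90; y++) {
        for (var x = 0; x < 120; x++) {
            var v = src.getcell(x, y);     // the four plane values (ARGB)
            store.setcell3d(x, y, layer, v[0], v[1], v[2], v[3]);
        }
    }
}

// Pull layer 'layer' back out into an existing named 2D matrix.
function retrieveImage(dstname, layer)
{
    var dst = new JitterMatrix(dstname);
    for (var y = 0; y < 90; y++) {
        for (var x = 0; x < 120; x++) {
            var v = store.getcell(x, y, layer);
            dst.setcell2d(x, y, v[0], v[1], v[2], v[3]);
        }
    }
}
```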