Splitting ARGB planes
I’m new to Max and am working on my first project.
I’d like to take a live webcam feed and split the ARGB channels so that I can view each plane individually in its own jit.pwindow within the patch. From there, the dominant colour will generate tones. The problem is that I can’t seem to split the ARGB channels after I get the webcam feed in. I’ve applied jit.matrix to the webcam feed, then unpacked it, and then tried calling the individual planes using the jit.matrix object again, which doesn’t seem to work. I’ve also tried using jit.findbounds, which doesn’t seem to work for me either.
I don’t know how to post the Max patch within my post here, so I’ve attached it as a separate doc. If anyone could help, that would be great.
The way it is now, I’d remove the extra jit.matrix (it looks like you’re resizing to a one-dimensional matrix with 4 planes and a width of 16). Then, after jit.unpack, you should have 4 separate planes. I’ve never used jit.findbounds.
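For intuition outside Max, here’s a rough NumPy sketch of what jit.unpack does to a 4-plane ARGB matrix. This is a stand-in, not Max code; the frame dimensions and the value 200 are made up for illustration:

```python
import numpy as np

# Dummy 4-plane ARGB frame shaped (height, width, plane), standing in
# for the webcam matrix; the red plane is filled just for illustration.
frame = np.zeros((240, 320, 4), dtype=np.uint8)
frame[..., 1] = 200

# jit.unpack equivalent: one single-plane matrix per channel,
# in ARGB order.
alpha, red, green, blue = (frame[..., i] for i in range(4))

print(red.shape)  # each plane keeps the original width and height
```

Each unpacked plane can then feed its own jit.pwindow (or, in this sketch, its own display).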
You can post using Copy Compressed from the Edit menu, then just paste into the browser.
hey there…I’ve actually made something similar, if not exactly, to what you are trying to make..check it out: http://vimeo.com/953967
It does some math on the different R, G and B values and determines which color is there..but I decided to go super simple and stick with RGB..doing intermediate colors (purple, yellow, etc.) would require much more accuracy with both camera white balance settings and other calibration..my installation frequently falls out of calibration with even a slight color change. I split my image into 8 ‘lanes’ so that each area triggers a different note..blue is low notes, green is middle and red is high
you’ll want to split everything into the 4 planes like you already have..then send all of those into separate jit.3m objects and probably use the ‘mean’ output to get the average color coming in..then stare at the numbers you’re getting out of that
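As a sketch of what jit.3m’s ‘mean’ output gives you (NumPy stand-in, not Max code; the frame here is just random noise):

```python
import numpy as np

# Hypothetical ARGB frame; jit.3m's "mean" outlet is just the average
# cell value of each plane.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(240, 320, 4), dtype=np.uint8)

# One mean per plane, in ARGB order.
a_mean, r_mean, g_mean, b_mean = frame.reshape(-1, 4).mean(axis=0)

# Ignore alpha; compare r_mean, g_mean and b_mean to find the
# dominant color coming in.
print(r_mean, g_mean, b_mean)
```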
if you wanted to re-pack the color planes..you can just use jit.pack..but if you haven’t quite gotten unpacking down..that’s just jit.unpack
laserpilot: when you speak of these lanes, are you talking about jit.matrix? I’m not quite understanding the lanes…
I managed to get the RGB values split and did some math on the values, but I’m not sure my scaling is correct. There might be a flaw in my math, possibly my logic! Anyone wanna take a peek?
----------begin_max5_patcher---------- 2635.3oc0cs0aiiaE94jeEDFEK1AaVCdmT8ghsW.5KEXA5C8khhEx1ZbzVao rxxyjtK1+6UhG4DYaIKpXJR6LSRLoUrOee5imygW8u83CyVj+ZxtYn+H5eid 3ge6wGdvTUcEOzT9gYaiec4l3clKa1x7saSxJm8D7bkIuVZpu74zcnultYCZ WR1Jzp7soYwYknk4ax2Wf9R7l8InxbT4yInc6W7Rb4xmqJDWB+QqSxRJhKqu jrjcnEw6RVgxyPlW1r8aWjTb3sbSZVxx78Yl2WdSkoqLVQ9he96IX5gK8y4Y kYwaSLO2etHMdS6mYW5uZdFBcNto5p2ozrMIkFrRdux78kGp8vkZPPZ15epH YYIvfDEo5UBIkx5eQDZSopWcz+o9u42e7w5e7jkLcVxWqfyYD81jxhbDGiwy 5B6D2fcZmXmzO1iT.zqe8P.OPw0Hu4OAdUJ+euj.W+rEwYqm8gHlsI61EuN4 LlYmQrUzIsfCKsnw0+hC5B8knkOFkbbKjVHOJZJaKPGD3bidPfGVOjV6T4Zj E8zd4W2F+B56IHBBinBQWbjxMbjXrhCt.XGso4BvXWljbs5PGH0QCxkXxcf7 PDH4AUy.4AMTxCYfjGMHWhuG7dvCj7.DDBfplN0wm2jWYRcAbRXcdHjVqNpv P7DpOLRCiJoKZhE3nK7PIPz5v593dQffCb7kvIPDAM4TgjeOnOz3vFfgymV4 QeinveYy9jNnCkJP8tmEAxEk9TUiCA8euHIIqKTqCDpoTwzi5+YxptvbTntS qZAYN4pfbuN+Tgx4GKRzxseX89k75KEn+PJA8cU+jZ9IqKtxQcDgMVWfGZyS LTFgv7dLREKTIQQu+jI7.ISNvUATlPCk2D0skJ4aqjIeWkHo5a1m5hnHgxUh psDAOcilS2jyOmVNeabYQ5qHNZ4ywEUobJ6fe3iMrDYtHRPzJmnlDLHlKlAM jrPOUAreB.Vsnx4b1KwK+uHdWDkzgDEezDkDxGi1zG+aXdRcCvS7H1MOOoCK OofYn5VQOsOqWlRL8tnDWhoLyhGG7mKUikorujS4zWJReuiWsHSIYxIyKzSK ILknbpgFYWWWstfZ54zck4qKh2VE7qJEp00A+P+P79x7kaRphERP+PVdw13M Un7PguDuomg7WRCYLRF13RioLtzXBkeZq1apmAMgAVU9YxpunxZmWrS6v9As PQ5tkwaLu+34QSXpo8OVmgjknUZk6BVhnBJMw32IzTPUSjHwMEM067WShBpq IgXNq5KE2Azz0O+18NI+gji3J58.GQIAkjXr6ARhD1rhh32RjT2oh1zoFL7u tDZrImC4V5whY5IHgIsdw2b9ub5jDUcQcwXhP1+ENIxnzzx2WemNe1w1lD20 TBRkgD4TX0k3FjOPyk4u8+NnAVPaxbTtPinIyao8z2CbquYFOnILJ.+w2twt XAsSrDB+VmeBZ+xThac5In8GiIuQnmOXROrnflzSyv4FpzcF430xwgLpeyzW 1vVL1TMdsrtF0CN4FPm.yLIORaw729TqucaFgockPHmFRoArRYbS9flIJ6qo Yqx+ZWI9NyMi1WjI2cBrbnfcnkb3an9XMNJC0ZbTBylPCm3403nTFLTaZMSn RuuFGk7vdmF.6UtFG6Cx+sCaB3+Z89xD8uh6VsKhB190EZzCKdIxUdiu2rCE zvtGLwrPuKpR+rYcYg9S0KeuOg9lu4sRrOUuivyLKtujM6RP0W52VuH+N9Yo vy1859iEpkGZCASgYhUEMUqoqKGOj413gfWvoMd3HS5kDHGD.izbikwmprcu vZcYRCIJtzlGCVbAFLqX2ZKxkdOB.proJfrqKMDSEny..8QYWPmpy.fK6jf7 9P.dsSYGHF.mEvVHivmH+DCgIrSc8AgHmVOek4qWuoyb90i.KWZaeZTaJ5gl 
r1Dy2osAWtIeWW.jEpilDPwRfewhlryfitcv+KvA1Bqyo6KTtj.QBAabIIES 4o0xEh68KkyWWDu.wnXDk2EAITgZOlCZllSxGrmVic80jJ+kN6vrPFV4CDQC VHrd8TsQGpdTA.mQUdqGU8025+Q5WRL8qdeA5GKV24LDyrXrhWTk445h78Yq ZSHKVCGkRFPfmqUbA4opGPE3Hc8CTDkjqpts2B7SRJ3vtOl1z0DwDNrh7nKy INYWW1LxZTA34kGhzlDXe.Tv4PXAJwC.swKPPwodJwIkARV3XH5CBTy0ZN59 N4zFz7FVW+wneWkeskGd4NbPlgd+sbUxtpN5EWllm05hLG+cstpmSWsJIqsG lsoqdIuxobiUzyMCaMpSe+5ypvd0ppGNrAMpS3yo1lpOowFzlLFN0e1jzVah 3Mapdrhsyl738NaD40Ghd9zlT1XSZuZSZaz30mlb9jmD1XS9UOorxlX9kmX1 XS9M5hxJaxuwVN4sqaaR621c0m.eCZSm3Xcx8EXkFm6Waxp6c90+jxl3cZ+p wU1DuS4W8Dyl3cJOm8K0Vaxe4zQrlm7WNcLa7Oo7bNc1ZR96VmvVSxe24nV4 If6UZRaqI4QAtvVaxiNwsoycJOGryJGA9MwWtMNLE9M.L2FMtvuA631nwE9U OIsI.L2u1jvF8jjdCduyucZwHUXCcuSdzEctMczrzHeB9gYwb1dhYNwxk0K7 6mP7lOeRf0x+wkDDnDgVekBXx8td7RGBupABk1BuFThmqivjHCxwMO5BHWDE 0F4v516jRZAr7OpdoYLFg0TkKfOYH3qGHr842t4ueOuWPqUs.sPqmSjRoV0T kK7AMDvjizWsTHLlLb5l0tj2Z0gm1VcBAok1qoTSqNgR4+VcXe0pSHa2p6PI snEx8aCN7D2fSIeqz0lmMyB06HC2ypZUQNrAlg8o6YU4iTB3izEgvvpDX6Fw vvRPoUAVDryFLtNLEt1jHH135XjIJKnbv0FrD8aU5Z64pMRkQNEfbMwDSzb7 PzrktOsJWX1C4uhNRoBmSOytOsJWX2C4sgLxrdq2qgmZ2mVkKr6Am+5QpSXR 9o18YU4hFiCpSXeL6lyeqE4Q1sCZVxrQeOxNhT+w3wIdvOqJWX1CJuG4L.S4 7yr6SqxE18fx6wx2XwY18oU4B48P7Mi8wraE8M48Q1sKj2QVD0gMxABjHoPB TvVMnUIWXsC0XjMx7oHUzXKqsUIWXsCpIFYaPMukw1pfKr0AWEKircGjJ2g. gBWYqbaFieVzGxVossUGz9hX0TIooejToYG5QJrwuZWf17Aei4xnbG.CalI5 wF8FqM8Hf.ayiVkt1Q3vlXerQl3ur4L7EVZkvYmvwkf6.M3PPbBLFx07HENv Bf8RffDQae2Hh5DXPcqxQaFXlKBilrAafA0S8IlL1PkG0o3VEb.myGxVGYXR fJan41k7hKR4U3gTveqv0xq1LOWzQFRGbS274cvaO9pYUqllvQZpQFyi.awC 3SJ5iKz7oQ.HjkWuyCFYBPwAySz1Vcv.XJmBa02LNUMcnnY6mwZuWzXSBJrx A3n6WyQt.aF0x1EnX96vvT3jsTwi+9i+eV7FgVB -----------end_max5_patcher-----------
By lanes I mean I segmented my video so that I could figure out the dominant color on smaller segments as opposed to just the overall. using jit.scissors and jit.glue
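In array terms (a NumPy sketch, not Max; the frame and the solid-blue lane are invented for illustration), the lane idea is just slicing the matrix into vertical strips and taking a mean per strip:

```python
import numpy as np

# Hypothetical RGB frame; jit.scissors does roughly this split,
# producing one sub-matrix per lane (jit.glue would reassemble them).
frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[:, :40, 2] = 255  # make the leftmost lane solid blue

lanes = np.array_split(frame, 8, axis=1)        # 8 vertical lanes
lane_means = [lane.reshape(-1, 3).mean(axis=0)  # (r, g, b) mean per lane
              for lane in lanes]

print(len(lanes), lane_means[0])
```

Each lane’s mean then drives its own note decision, instead of one decision for the whole image.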
I don’t understand why you have those zmaps there…they just convert one way..and then convert right back? Seems really unnecessary. I’ve never actually used zmap..I’m so used to scale..but I guess zmap is nice for lists.
I’m also not sure why you’re adding together the min, max and mean values for all of the colors. On the one hand I can’t think through how it would hurt..but on the other hand I can’t see how it would help.
All of those histograms and extra jit.pwindows seem unnecessary for debugging as well..or at least, I would take them out when it comes to presentation time..they’ll blast your fps like mad.
Here is how I figure out my values for my inverse color organ..let’s see if I can keep it making sense:
My program works by sounding a different octave based on which dominant color it sees in each of the 8 segments of the screen. (I do the following 8 times, once per note area.) It takes the feed from jit.qt.grab, sends it into jit.3m, then unpacks the ‘mean’ output (the average value of each plane). I ignore the alpha channel.
Then I send the R, G and B values into a series of if statements for each color. This ensures that two notes do not sound at the same time; only the dominant one sounds. For example..when deciding about blue..I have to test whether it is greater than green and greater than red separately. Each comparison sends a 1 if true, 0 if not. I then add those 0s and 1s together and send the sum into a [select 2 1 0]. If select sees a 2, a note sounds; if it sees a 0 or 1..nothing happens. This is all repeated for green and red, where you have to test (in two separate if statements..or that’s how I do it) whether the mean value is greater than both the other ones.
I ran into problems when I tried to compute things like ‘is green greater than red and blue’ in one step, because it’s difficult to scale properly..it’s easier to just ask ‘is green greater than red’ and ‘is green greater than blue’..if both are true, sound the green note!
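The pairwise-comparison logic above can be sketched in Python (the function name and return values are hypothetical; in the patch this is if objects feeding [select 2 1 0]):

```python
def dominant_note(r, g, b):
    """Return which color wins, or None on a tie.

    Mirrors the two separate if statements per color: a color sounds
    only when it beats BOTH of the others, i.e. its two comparison
    bits sum to 2 (the [select 2 1 0] step).
    """
    score_r = int(r > g) + int(r > b)
    score_g = int(g > r) + int(g > b)
    score_b = int(b > r) + int(b > g)
    if score_r == 2:
        return "red"    # high notes
    if score_g == 2:
        return "green"  # middle notes
    if score_b == 2:
        return "blue"   # low notes
    return None         # sum was 0 or 1: no note sounds

print(dominant_note(30, 120, 80))  # green beats both red and blue
```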
oh also..you’ll most likely need a way to offset your webcam’s tendency to color balance..they like to try and adjust their white balance which will throw everything off
my program has a ‘tolerance’ addition that offsets a camera’s color tendencies. In addition I have a way to normalize everything and say ‘this color..what you’re seeing right now..is white..so R, G and B are all equal’..that is essential, or else you’ll start off with unbalanced values that are too hard to predict
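One way that normalization step could look (a minimal sketch with invented function names, assuming you capture the R, G and B means while the camera points at something white):

```python
def calibrate(white_means):
    # white_means: (r, g, b) means captured while the camera looks at
    # a white reference. Returns per-channel gains that equalize them.
    target = sum(white_means) / 3.0
    return tuple(target / m for m in white_means)

def normalize(rgb_means, gains):
    # Apply the stored gains to every later reading before comparing
    # channels, so the comparisons start from a balanced baseline.
    return tuple(m * g for m, g in zip(rgb_means, gains))

# Example: a camera that reads white as slightly red-heavy.
gains = calibrate((200.0, 180.0, 160.0))
balanced = normalize((200.0, 180.0, 160.0), gains)  # all three now equal
```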
hey thanks, this is really helpful information. I’ve cut out the unnecessary stuff that you mentioned. I like your point about the white balance, so I’m going to try to work something into the patch.
Really appreciate all your help. This is my first Max patch, so I’m kind of learning the program as I go (hence all the newb mistakes) :)