Forums > MaxMSP

Light spectral information into Color synthesis

September 12, 2013 | 10:45 pm

Hello again forum,

I’ve been posting a lot lately, circling around an idea I’ve been trying to develop. I think I’ve finally found the right words, and hopefully this time I’ll find the right answers.

I’m interested in finding methods to translate light spectra into tristimulus values within Max/MSP; the idea is to process spectral information in real time and output colors. Does anyone know of any work that has done something like this with Max before? Do I have to go through all the steps (spectrum -> tristimulus values -> color pixel), or are there any objects or existing works that can help me skip one of them?

Any help is welcome,

Thanks again


September 13, 2013 | 5:30 am

Don’t take this the wrong way, but Max is a programming environment… so program it, dude! If you want something else to do it for you, there is plenty of software out there. The good news is that what you are talking about is pretty straightforward. If I understand you correctly, you want to convert intensity/wavelength data into a renderable color format for your computer monitor or lighting equipment, right?

The best way to handle this is to first convert the spectral data to XYZ. The XYZ system was specifically designed for this sort of thing. It all comes down to defining the reflective context of those spectral values; since there are lots of things that can reflect light, this context is usually generalized as a fixed reference known as an illuminant.

The good news is that the wavelength values of visible light have already been converted to XYZ and are available in tabular format. In other words, for every wavelength value there is a corresponding X, Y, and Z value.

Ideally, you want to sample your spectral data in steps of 5–10 nm or finer. Here is a link that will output a CSV file with the corresponding XYZ values for all visible (and some non-visible) wavelengths at 0.1 nm resolution.

http://cvrl.ioo.ucl.ac.uk/cmfs.htm

All you need to do is load that data into tables, colls, or similar Max objects so that you can input wavelengths and output XYZ values. It’s very straightforward. If you don’t know how to do that, search the tutorials, or ask for specific help getting the data into a table or coll.
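If it helps to see the lookup-and-weighting logic outside of Max first, here’s a rough Python sketch. The CMF entries below are approximate values from the CIE 1931 tables, and the three-sample input spectrum is made up purely for illustration — a real patch would use the full table from the link above.

```python
# Sketch: spectral power distribution -> XYZ via color matching functions.
# The CMF table here is a tiny illustrative subset; real tables from
# cvrl.ioo.ucl.ac.uk cover the whole visible range at fine resolution.

# wavelength (nm) -> (xbar, ybar, zbar), approximate CIE 1931 entries
CMF = {
    450: (0.3362, 0.0380, 1.7721),
    550: (0.4334, 0.9950, 0.0087),
    650: (0.2835, 0.1070, 0.0000),
}

def spectrum_to_xyz(spectrum, step_nm):
    """Weight a measured spectrum (wavelength -> power) by the CMFs and sum."""
    X = Y = Z = 0.0
    for wl, power in spectrum.items():
        xbar, ybar, zbar = CMF[wl]
        X += power * xbar * step_nm
        Y += power * ybar * step_nm
        Z += power * zbar * step_nm
    return X, Y, Z

# A made-up three-sample spectrum, just to show the shape of the data:
xyz = spectrum_to_xyz({450: 0.2, 550: 1.0, 650: 0.5}, step_nm=100)
```

In Max this is exactly what the coll lookup plus a few [* ] and [+ ] objects would be doing per wavelength.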

The next step is to specify a color space and convert those XYZ values. Keep it simple and convert to sRGB, which will work with monitors, lighting equipment, cameras, etc. I recently made some abstractions to calculate RGB to XYZ, but not the other way around… it’s not difficult, though: a simple linear conversion followed by nonlinear companding (for perceptual uniformity). Numbers in, numbers out.
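The “linear conversion followed by nonlinear companding” can be sketched in a few lines of Python. The 3×3 matrix values below are the standard sRGB/D65 ones, and the sketch assumes XYZ is scaled so that Y = 1 is white:

```python
# Sketch of XYZ -> sRGB: a 3x3 linear transform (D65 white point)
# followed by the sRGB companding curve.

def xyz_to_srgb(X, Y, Z):
    # Linear transform with the standard sRGB matrix
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z

    def compand(c):
        c = min(max(c, 0.0), 1.0)  # clip out-of-gamut values
        if c <= 0.0031308:
            return 12.92 * c
        return 1.055 * c ** (1 / 2.4) - 0.055

    return compand(r), compand(g), compand(b)

# D65 white (X, Y, Z ~ 0.9505, 1.0, 1.089) should land near (1, 1, 1):
white = xyz_to_srgb(0.9505, 1.0, 1.089)
```

In a patch this is three [expr] objects (or one [jit.expr]) plus the companding branch per channel.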

If you start working on the spectral to XYZ conversion, I will make an abstraction that converts XYZ to sRGB (and then Roman Thilenius will probably come around and make a better one).

So that’s how I would generate RGB values from spectrographic data in real time. XYZ/RGB can also easily be converted to hue, saturation, luminance, chroma, intensity, and so on. I don’t recommend using jit.colorspace, as it seems geared more toward video work, and I have found some of its output values to be grossly incorrect in several of the conversion formats. Stick with good old mathematical expressions and tables, collections, sliders, number boxes, etc.

Whatever you decide to do.. good luck with your project!


September 18, 2013 | 9:58 am

Hello Metamax,

Thank you so much again for your time and interest. Your advice has been most useful and enlightening.

I already started on the spectral-to-XYZ conversion, but I’m stuck at the list step: the CSV file I downloaded comes in an odd format that [coll] doesn’t recognize. I’m now trying to solve it through XML external objects, but so far I haven’t been able to recall the data properly. Do you have any ideas? I’m working with 1 nm steps, so manually reformatting the CSV is not a good option.

On another subject, do you have that XYZ-to-sRGB conversion at hand? I would like to do some concept testing with a small sample list.

Thanks a lot in advance!


September 19, 2013 | 12:57 am

Open the CSV file in Excel (or any program that can display it as a table). Copy a single column and paste it into a .txt file. Use the included [p load-a-coll] to load your colls with the contents of the .txt file. Be sure to select ‘save data with patcher’ in the coll inspector.

The included subpatch will also load a coll with multiple columns if you save the CSV as a space-delimited file (all columns separated by spaces, not commas) and then change the extension back to .txt.
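If you’d rather script the reformatting than go through Excel, a few lines of Python will turn the CSV rows straight into coll syntax (`index, data;`). This assumes the file has wavelength, X, Y, Z as the first four columns, like the CVRL download:

```python
# Sketch: convert "wavelength,X,Y,Z" CSV rows into coll-style "index, data;" lines.
import csv
import io

def csv_to_coll(csv_text):
    """Return coll-formatted text for rows of wavelength,X,Y,Z."""
    out = []
    for row in csv.reader(io.StringIO(csv_text)):
        if len(row) < 4:
            continue  # skip blank or malformed lines
        out.append(f"{row[0]}, {row[1]} {row[2]} {row[3]};")
    return "\n".join(out)

coll_text = csv_to_coll("450,0.3362,0.0380,1.7721\n550,0.4334,0.9950,0.0087")
# Save coll_text to a .txt file, then send [coll] a "read" message to load it.
```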

XYZ to sRGB conversion patch [p xyz2rgb] is included.

Enjoy!

– Pasted Max Patch –

September 19, 2013 | 1:09 am

Here is an alternative approach using the human cone spectral sensitivity functions (http://www.cvrl.org/cones.htm). This approach doesn’t require any XYZ-to-RGB conversion, but it also outputs a different ‘interpretation’ of wavelength values, based directly on the range of our cone receptors. Notice how the values go to black at the extremes rather than red and blue.
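The weighting logic is the same as with the CMFs, just using cone sensitivities and driving R, G, B directly from the L, M, S sums. A rough Python sketch (the table entries below are approximate illustrative values, not the full cvrl.org data):

```python
# Sketch: weight each wavelength's power by L, M, S cone sensitivities
# and map the sums directly onto R, G, B.

# wavelength (nm) -> (L, M, S), approximate illustrative values
LMS = {
    450: (0.0498, 0.0870, 0.9553),   # short wavelengths: mostly S
    550: (0.9786, 0.9997, 0.0868),   # middle: L and M dominate
    650: (0.2270, 0.0452, 0.0000),   # long: mostly L
}

def spectrum_to_rgbish(spectrum):
    """Map a spectrum (wavelength -> power) onto an L/M/S-driven pseudo-RGB."""
    L = M = S = 0.0
    for wl, power in spectrum.items():
        l, m, s = LMS[wl]
        L += power * l
        M += power * m
        S += power * s
    # Direct mapping L->R, M->G, S->B; at the spectrum edges all three
    # sensitivities fall toward zero, so the output fades to black.
    peak = max(L, M, S, 1e-9)
    return L / peak, M / peak, S / peak
```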

And just so you’re aware.. this is all closely related to my own interests and work, so it was no big deal to share it with you. Your question actually motivated me to investigate all of this more closely so thanks!

Btw, you will notice that all of the values in both patches are at 0.1 nm resolution. That’s for my own use. You can easily replace the colls and tweak everything to 1 nm resolution.

– Pasted Max Patch –

September 19, 2013 | 10:06 am

.


September 23, 2013 | 9:21 am

Hello Metamax,

This is me again, abusing your knowledge :P. I’ve been doing a lot of thinking about how to approach the sound-to-color mapping, and now that the nm-to-XYZ thing is resolved, this is what I have in mind:

(I know you are already aware of the introductory information, but I want to be as clear as possible about my ideas to avoid confusion, so here we go.)

In sound, frequency is perceived as pitch and amplitude as loudness, while the number of waves sounding simultaneously and their relationships to each other are perceived as timbre: the greater the number of simultaneous frequencies, the more ‘chaotic’ the timbre (e.g. white noise); the smaller the number, the purer the timbre (e.g. a sine tone).

In color, oversimplifying, we could say frequencies are perceived as spectral hues and amplitudes as brightness, while the number of waves vibrating simultaneously and their relationships to each other mostly determine saturation: the greater the number of simultaneous frequencies, the less saturated the color (e.g. white); the smaller the number, the purer the color (e.g. a spectral color). BUT simultaneity can also be perceived as a mix of different colors; it all seems to depend on some sort of relation between amplitude and harmony(?).

And this is where my questions begin. I would like to read a sound spectrum as if it were an electromagnetic spectrum (for now, please forget about the octaves issue), mapping frequency to spectral hue, amplitude to brightness, and using the number of individual waves and their relationships to each other to determine the saturation. But in order to do this, I need to know how color is synthesized from a spectrum:
How do I know whether a frequency with a certain amplitude should be considered part of the synthesis or disregarded?
How many peaks do I take into consideration?
Are there any methods around to do this?

Again, any pointers would be most appreciated.

Thanks again and best regards


September 24, 2013 | 8:52 am

The correlation between pitch and colour is a tricky one, and as you point out, it’s complicated by perception not being easy to simplify – understatement of the year!

Having said that, my approach for this project would be to do a spectral analysis of the sound and use the FFT data to map to hue / saturation / brightness. Depending on how confident you are with spectral processing, you may find it easier to look around for FFT objects to do some of the "heavy lifting". AFAIK, there is no official way of doing this, and you will most likely have to make the connections between sound and light a personal creative choice. There are some general principles that you outlined, but I have not yet seen or heard of a "scientific" approach in this regard.

I’d love to be wrong though!
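Just to make the “personal creative choice” point concrete, here is one arbitrary way FFT peak data could be mapped to hue/saturation/brightness (Python sketch; every range and weighting here is an invented choice, not a standard):

```python
# Sketch: one arbitrary mapping from FFT peaks to hue/saturation/brightness.
# Inputs are (frequency_hz, magnitude) pairs from your analysis.

def peaks_to_hsb(peaks, f_min=20.0, f_max=20000.0):
    """Map a list of (frequency, magnitude) peaks to (hue, sat, bright) in 0..1."""
    if not peaks:
        return 0.0, 0.0, 0.0
    total = sum(mag for _, mag in peaks)
    # Hue: magnitude-weighted mean frequency, scaled into 0..1
    mean_f = sum(f * mag for f, mag in peaks) / total
    hue = min(max((mean_f - f_min) / (f_max - f_min), 0.0), 1.0)
    # Saturation: fewer simultaneous peaks -> purer color
    strongest = max(mag for _, mag in peaks)
    saturation = strongest / total
    # Brightness: overall energy (clipped)
    brightness = min(total, 1.0)
    return hue, saturation, brightness
```

A single sine peak comes out fully saturated, while many competing peaks wash the color toward grey — which roughly matches the timbre/saturation analogy you described.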


September 26, 2013 | 6:53 am

Hello Justin,

Thanks for your answer. I’m actually already doing what you’re suggesting: taking a signal, analyzing it in real time, and producing colors out of pitches (thanks to the help of Mr. Metamax).
Regarding my last post, perhaps I wasn’t clear enough, but the thing is that I’m not tracking a single pitch but rather a harmonic noise, which is why I want to take different simultaneous pitches into account. The idea is to take different pitches, group them according to their harmonic relations, take that spectral data, and then synthesize different colors according to the frequencies present in a given group. My current question is: in the spectrum of visible light, how is simultaneity handled? We already know the coexistence of red and blue gives us magenta, but what’s the mathematical analysis behind this? What wave behavior do I have to look for in order to produce such a mixture?
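My current guess, from what I’ve read so far, is that it comes down to linearity: spectra of simultaneous lights superpose by simple addition, and since the XYZ weighting sums are linear, the XYZ values of the two lights should just add. A quick Python sketch I put together to test the idea (using approximate single-wavelength CMF entries standing in for narrow-band red and blue lights):

```python
# Sketch of "red + blue = magenta": light spectra superpose additively,
# and because the XYZ sums are linear, the XYZ of simultaneous lights add.

red_xyz  = (0.2835, 0.1070, 0.0000)   # approx. CMF entry near 650 nm
blue_xyz = (0.3362, 0.0380, 1.7721)   # approx. CMF entry near 450 nm

mix = tuple(r + b for r, b in zip(red_xyz, blue_xyz))

# Chromaticity (x, y) of the mixture: a point on the straight line between
# the chromaticities of the two components -- for red + blue, a purple.
X, Y, Z = mix
x = X / (X + Y + Z)
y = Y / (X + Y + Z)
```

Can anyone confirm whether that linearity is really all there is to it?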

Thanks for your help again guys


September 30, 2013 | 7:18 am

bump :(

