As the OCZ neuro headband thingy is out of retail and somewhat unfindable on eBay, does anyone have tips on other devices that can pluck brainwaves from my head and deposit them ever so gently into Max?
I recently bought the MindSet from NeuroSky. It's 200 bucks and only has one electrode.
It's simple to set up, and there's a Max external that outputs the MindSet's data: the EEG bands up to gamma (~40 Hz), the raw wave, and two special values for attention and relaxation.
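For anyone who wants the data outside of Max: the headset streams NeuroSky's published ThinkGear serial format, where each packet payload is a list of (code, value) rows. Below is a rough Python sketch of a payload parser based on that documented format; treat the exact codes as something to verify against NeuroSky's spec for your firmware.

```python
def parse_payload(payload):
    """Parse one ThinkGear payload into a dict of values.

    Handles the common single-byte rows (signal quality, attention,
    meditation) and the multi-byte RAW and EEG-power rows.
    """
    values = {}
    i = 0
    while i < len(payload):
        code = payload[i]
        i += 1
        if code >= 0x80:                       # multi-byte row: next byte is the length
            vlen = payload[i]
            i += 1
            data = payload[i:i + vlen]
            i += vlen
            if code == 0x80:                   # raw EEG sample, 16-bit signed big-endian
                values['raw'] = int.from_bytes(data, 'big', signed=True)
            elif code == 0x83:                 # 8 EEG bands, 3 bytes each, big-endian
                bands = ('delta', 'theta', 'low_alpha', 'high_alpha',
                         'low_beta', 'high_beta', 'low_gamma', 'mid_gamma')
                for name, j in zip(bands, range(0, 24, 3)):
                    values[name] = int.from_bytes(data[j:j + 3], 'big')
        else:                                  # single-byte rows
            value = payload[i]
            i += 1
            if code == 0x02:
                values['poor_signal'] = value
            elif code == 0x04:
                values['attention'] = value
            elif code == 0x05:
                values['meditation'] = value
    return values
```

In real use you'd first find the 0xAA 0xAA sync bytes and verify the checksum before handing the payload to this function.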
The reason I bought it was that it's rather cheap and has an open API for developers.
But I'm not sure I'd recommend it. The (preprocessed) EEG band values seem to jump around pretty randomly. I don't have any practical experience with EEG, though, so maybe I just don't know how to interpret the data properly.
The so-called eSense values for attention and relaxation, which are computed by a closed proprietary algorithm, seem to work quite well.
I guess, if you want to do some serious stuff, you might want to check out Emotiv's EPOC headset, which looks a lot more attractive...
I have an Emotiv headset, and there is a free bridge program for it called "mindyourOSC" that sends OSC messages quite easily to Max.
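If you'd rather see what such a bridge actually puts on the wire, OSC messages have a simple binary layout: a null-padded address string, a null-padded typetag string, then big-endian arguments. Here is a minimal stdlib-only decoder sketch; the "/EMO/exc" address in the test is a made-up placeholder, not mindyourOSC's real address pattern.

```python
import struct

def parse_osc_message(data):
    """Decode a simple OSC message (address + typetags + arguments).

    Supports float ('f'), int ('i') and string ('s') arguments, which
    covers what an EEG-to-OSC bridge would typically send.
    """
    def read_string(buf, offset):
        end = buf.index(b'\x00', offset)
        s = buf[offset:end].decode('ascii')
        offset = (end + 4) & ~3          # strings are NUL-padded to 4-byte boundaries
        return s, offset

    address, pos = read_string(data, 0)
    typetags, pos = read_string(data, pos)
    args = []
    for tag in typetags.lstrip(','):
        if tag == 'f':
            args.append(struct.unpack('>f', data[pos:pos + 4])[0])
            pos += 4
        elif tag == 'i':
            args.append(struct.unpack('>i', data[pos:pos + 4])[0])
            pos += 4
        elif tag == 's':
            s, pos = read_string(data, pos)
            args.append(s)
    return address, args
```

In practice you'd just point a ready-made OSC library (or Max's `udpreceive`) at the bridge's port, but decoding a packet by hand is a good way to check what addresses and value ranges it sends.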
I've not used it with Max, only Pure Data. It worked all right, but how well it works depends on what you want to do: these headsets don't have great accuracy for conscious control. I used the output of the headset to control sound timbre.
The facial-expression detection worked 90% of the time right out of the box.
The emotional suite has algorithms that are supposed to detect meditation, long-term excitation, short-term excitation, frustration and concentration. It does this OK, but it is hard to say how well or how accurately.
The suite where you can consciously move a box is flaky but works; it is something one has to train (both the algorithm and yourself). I connected it to Minecraft and played three half-hour sessions. I can't say it offered a smooth learning curve, and the accuracy was maybe 60-70%.
I think these devices are interesting, but the application has to be something novel (something you wouldn't want to use a joystick for, for example), because if it is just about moving around, the learning curve is too steep for the casual gamer.
I'm going to make some sort of patch/application in Max to try to discern how well the emotional suite works, but I haven't had time yet.
I'm reposting from another thread on this list re obtaining brainwaves from the Emotiv system (Research SDK).
Regarding MindYourOSC: it only gives you very abstracted data about mental/emotional state, and it sends this info very slowly, around once a second, which is not fast enough to feel very interactive.
I need to obtain alpha, theta and beta rhythms for a concert in a couple of weeks, and am looking for someone to build an FFT filter that will give me this data...
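The FFT filter being asked for here is essentially a band-power estimate: transform a window of raw EEG, then average the spectral power falling inside each rhythm's frequency range. A minimal NumPy sketch, assuming you already have raw samples and know the sampling rate (the EPOC reportedly streams at 128 Hz; verify for your hardware):

```python
import numpy as np

def band_powers(signal, fs, bands=None):
    """Estimate average power in standard EEG bands via an FFT.

    signal: 1-D array of raw EEG samples
    fs:     sampling rate in Hz
    """
    if bands is None:
        bands = {'theta': (4, 8), 'alpha': (8, 13), 'beta': (13, 30)}
    windowed = signal * np.hanning(len(signal))    # taper to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2  # power spectrum
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}
```

Fed a 2-second window of a 10 Hz test tone, the alpha value dominates theta and beta, which is a quick sanity check before pointing it at real headset data. The exact band edges vary between authors, so treat the ranges above as one common convention rather than a standard.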
Here's some info on a hack of MindYourOSC that transmits the raw data.
Any suggestions how we can progress this further?
A friend of mine, David Morris-Oliveros, has built a version of MindYourOSC that just transmits the raw EEG data from the headset (the sum of all the contacts, I think). Windows only, sorry!
Install note: first of all, try running Emotiv\files\MindYourOSCs.exe.
If it gives you an error, install Emotiv\vcredist_x86\vcredist_x86.exe,
then try the executable again.
If that fails, you will need to install Emotiv\Setup.msi,
and then run it from
C:\Program Files\CoD\Mind Your Own OSC Raw\MindYourOSCs.exe.
Finally, np-epoc doesn't work on my system: partly because it's said to work only with the Research and Developer editions, but mainly because my OS (Mac OS X 10.6.8) doesn't like the library extension (which should be built for 10.6)...
Hi all, since then I actually found a post explaining how to obtain the EEG data from the consumer headset. I tried it and it works: you need to find the encryption key for your specific headset. It worked on Windows XP and 7, and also on Mac and Linux, so you don't need their "weird" software.

I don't feel right releasing the method myself, but I can point at sites where it is explained: it's called "emokit", and GitHub is a good place to start searching. I'm available to explain the process to anyone who wants details of my experience.

I now use it in SuperCollider, as Max is kind of buggy with time scheduling. I have some patches for FFT analysis in case... but first you need to find your raw data. Good luck. A
Maybe you are interested in checking out my latest performance involving EEG detection, butoh dance and multiple visuals.
It was presented at the Institute of Sonology, the Royal Conservatory, Den Haag, on 2nd February.
The sound is all generated live in SuperCollider from the EEG signal, and the visuals in Jitter.
I missed these posts, but in case anyone has a NeuroSky-based Bluetooth headset (like the BrainBand or MyndPlay), here is that link to the router that Trent Brooks wrote for me: