(Muscle) Audio to sensor/control data
I did a quick search but didn’t find anything on the forum, which I guess makes sense as it’s built in PD.
Basically I’ve been looking at the Xth Sense sensor:
It’s essentially an electret microphone that you attach to your forearms.
Seems quite ingenious as that’s a very cheap way to generate complex data.
Now, the patch is written in PD, and I don’t know PD at all, but I’m curious whether anyone has come across this, or something like it, in terms of turning what I imagine is very quiet and complex audio into meaningful data streams.
Has Marco made the PD patch available; could you "translate" from there? I spoke to him at a conference last year about the broader issues, and he did concede that custom microphones and some hardcore filtering were required to reduce the abundant noise – a problem common to any "biometric" input, I guess. At that time he was using actual audio from muscle/tendon against bone/each other, as both control data and audio sources. Not sure if this helps, though.
That’s the thing, I wouldn’t even be able to translate a simple PD patch, much less something more complicated. I was speaking to some colleagues at my school and they were thinking of working out a port for the patch (for Max), but had similar problems. None of them knew PD.
Hey Rodrigo, I’m following Marco’s work with the biosensors (I’m trying to get a third channel working in his software). Since at first I wasn’t familiar with PD, I used Soundflower (or something similar) to get the audio from PD to Max. The only problem is that if you’re working with more than one audio channel, PD will send all the channels mixed together, so in Max you’ll have just one channel. I was trying to solve this in PD but didn’t have time to finish the work; I hope to soon.
Hope it helps
I have an Xth sense kit (but haven’t done much with it). Have you looked at PD? It’s very similar to max. Wouldn’t be hard to translate one to the other. Though I think there’s custom objects/externals in the loop, now that I think of it. Building an audio/control data bridge between the PD patch and Max would be the way to go I guess.
I wouldn’t want to have both environments open as my max patch hits the CPU pretty hard as it is. It would also be great to use the control data for my own processes.
Did you not find it useful or why did you not use it?
I had a quick look at Marco’s software, and what it mostly does is emphasize the low frequencies (under 80 Hz or even less; you can set the value in PD), so what you hear (and see, since you can see the waveforms in the software) are only the low frequencies (he used pitch switching to do this). Marco also made some libraries for muscle tracking (so, for example, the software understands whether you keep contracting a muscle or not), but I haven’t looked at the libraries at all. Then everything is sent to a mixer so you can control the volume of each channel. Doing some pitch switching in Max is not that hard (and I’ve also tried using some normal pickup/contact microphones instead of the Xth sensors, and the signal was pretty good).
I was thinking of translating the PD software to Max as well (while testing Marco’s software I ran into various issues, and I’d like to work with something I can fully control).
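For anyone who wants to prototype that low-frequency emphasis outside of PD or Max, here’s a minimal Python/NumPy sketch (my own, not a port of Marco’s patch; the one-pole filter and the 80 Hz cutoff are stand-ins for whatever filtering the actual software uses):

```python
import numpy as np

SR = 44100      # audio sample rate
CUTOFF = 80.0   # Hz; the thread reports the XS keeps roughly this range and below

def one_pole_lowpass(x, cutoff=CUTOFF, sr=SR):
    """Simple one-pole low-pass: keeps the sub-80 Hz 'muscle rumble',
    attenuates everything above the cutoff."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / sr)
    y = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc += alpha * (s - acc)  # y[n] = y[n-1] + alpha * (x[n] - y[n-1])
        y[i] = acc
    return y

# Demo: a 40 Hz "muscle" component plus an unwanted 2 kHz component
t = np.arange(SR) / SR
low = np.sin(2 * np.pi * 40 * t)
high = 0.5 * np.sin(2 * np.pi * 2000 * t)
filtered = one_pole_lowpass(low + high)

# The 40 Hz component survives; the 2 kHz component is strongly attenuated
print(np.std(filtered), np.std(one_pole_lowpass(high)))
```

In practice you would want a steeper filter (e.g. a higher-order Butterworth), but even this first-order stage shows how much of the interesting signal sits below 80 Hz.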
What is pitch switching? Do you mean pitch shifting?
If so, is it doing overlap-add shifting?
It would be amazing if you could translate the software, or at least the input/library software!
> Did you not find it useful or why did you not use it?
I played around with it very briefly and didn’t instantly get something useful for my current work. Then got too busy with stuff to delve in deeper. I really should give it another try sometime. (Marco just gave a workshop at STEIM here in Holland but too bad I was too busy for that)
Did you buy the kit complete? (as in all the components) I hate having to order individual things from different manufacturers.
Yes, sorry, SHIFTING (blame my telephone’s autocorrect!)… I’m thinking about translating the software for my final project at uni… and I have bought a complete kit but I’m still waiting for it (I ordered it in November; it was ready last month and it will take another month to be delivered), so in the meantime I’ve built a couple on my own (they don’t look very nice but they work)… Also, you can try with contact microphones; the signal isn’t the same but they work pretty well too…
> Did you buy the kit complete? (as in all the components) I hate having to order individual things from different manufacturers.
I got a complete kit. Takes 15 minutes or less to assemble.
I can shed some light on what’s going on in the Xth-sense and help solve some of your immediate problems.
The software is actually an impressive piece of patching and porting it is not trivial. It is conceived as an instrument so even though the methods are simple the way it all ties together is quite complex. Basically you have your audio inputs that are downsampled to the message domain so that you can use several extracted features to control processing parameters that affect the incoming audio signals. There is a sequence creator so you can re-route control parameters to different objects during any given piece and store presets. You also have a preset friendly audio mixer so you can fade in/out different sound streams. The main patch generates different canvases which are blank PD patches. There, you patch at a bit of a high level since every effect and feature stream gets named automagically (it is a bit like patching using Jamoma modules). A simple patch workflow would be as follows:
1. Create a new canvas/preset.
2. Select the incoming audio.
3. Create a processing object (say you want one arm’s sound pitch shifted using one of the included modules).
4. Create an incoming extracted feature (let’s say we use the amplitude envelope of the other arm to control the transposition).
Now you have a simple patch where you generate very low frequency sound with one arm and you can transpose it with the other.
You can now create another preset with some crazy delay parameters as a different stream that fades in after you’ve passed a threshold on one of the arms a number of times and so on until you have a piece.
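The core of that workflow (audio downsampled to the message domain, with an extracted feature driving a processing parameter) can be sketched outside PD. Here is a rough Python/NumPy illustration of my own; the block size, the 60 Hz control rate, and the envelope-to-semitones mapping are all assumptions for the example, not values from the actual XS software:

```python
import numpy as np

SR = 44100
BLOCK = 735  # 44100 / 735 = 60 control-rate values per second (an assumed rate)

def amplitude_envelope(x, block=BLOCK):
    """Downsample an audio signal to the 'message domain':
    one RMS value per block of samples."""
    n = len(x) // block
    frames = x[: n * block].reshape(n, block)
    return np.sqrt((frames ** 2).mean(axis=1))

def envelope_to_transposition(env, max_semitones=12.0):
    """Hypothetical mapping: a 0..1 envelope scaled to 0..12 semitones
    of transposition for the other arm's pitch shifter."""
    return np.clip(env, 0.0, 1.0) * max_semitones

# One second of a slowly swelling 50 Hz "muscle" signal
t = np.arange(SR) / SR
arm = np.abs(np.sin(2 * np.pi * 1.0 * t)) * np.sin(2 * np.pi * 50 * t)

env = amplitude_envelope(arm)
semitones = envelope_to_transposition(env)
print(len(env))  # 60 control values for one second of audio
```

In the XS itself this feature stream would be routed by name to whichever module you patched into the canvas; here it is just a plain array.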
Marco has planned to set up a team to do a proper port of the software, but it will take time, and more than a port I reckon it will need to be a fork, as there are significant differences in the way PD and Max work. Also, he doesn’t know Max, so it is difficult to get much done without physically meeting and going module by module. However, in his working version of the patch there is OSC streaming of the features he is using. I do not know if this is included in the current ‘release’ version or if it is only in his personal patch.
In any case, help is at hand and if you have FTM installed (and everyone should) you can do the feature extraction in Max like so:
-- Pasted Max Patch, click to expand. --Copy all of the following text. Then, in Max, select New From Clipboard.----------begin_max5_patcher---------- 3629.3oc6cszjiZjD9bO+JXk8gYbzVA0SJ1SiOr1G2H1Yu4XiIPhpUyZDHKP 83Yc3429VO.Df3QgZAhtaMGlVfPTY9kYkYVUkYU+46tawp3+fmrv5ua8qV2c 2e9t6tScK4MtK656Vr06OVG5kndrEqi2tkGkt3d82kx+iT08+GQOwCi2wshe v5gfvT9dtuURvlHuvj7GN5v13Cog7T0qxN6tODGk9f2Zt7dnR2KI3+otGftL +Q24kt9wfnMedOecplpwTj3qsvNX4ef.W0eHKss9OY+n.eEAFu5+9iD6EkZf HuspFXwOsOvKrDQFDkSi.489q28N4+c+yDg9jBLr9487e+.OZ8Ws9YELIXmQ EeHJ7AZyz3CrK7A6dEwmBMneNNLL9K78iIrPTesETo0.HciJNWQT4eeXejku mUR7gHeq3n+V+nRAB.6BAbT8Tz8W.1NZkjlA.H5JB.+ycoAwxNN+z1cgAO7U Kc+nA.C1KQTfKE2NV.frkPBzEcuEAoQDfBQ.rlQD.XBPjGBiEuilYTX2xa8S l90cbMCJeWdBr0ZwJOg8lBVpI8BkghLXfX2kdA.9JWu.IPgAoWPe0pV.YB9G hbgTSUNrekqbPvNCS4f7pU4.65XrQC7TnW7P51ka4IsD4IH6te2gnzC6B4so D31N1bDNDM0mE70d9CedcbzSkeW61ySDZmdRcwJgjoB2fnibkhU8q.LYbHkf ruKHJPqLCrN5sQzXIqi0jAnqn9LpIDBXesv8HQm2GRvje98u2aUh02C9f0OX 88vOrnI4IBOP4YtZ32ktOXyF9duvPU6+ZSLCw3KhXtiX3LrItHhYzMwbihYD Zrkxl0BWDgL75Ij2dHLMHILv+3.+V6E4GH9ONTyD1KAXhsqf0sWBsEwh.keB QnLpvILnDZzriOQzBB3MXUPXP5WKq5jHdvzupUcH0abm7Fm4vvHfpwE+E4be IBpbiep508kUwJdwz7WrCR9O4qyFIBn.K+jCiAQjpu3heII+WRjTj54YPGLQ QbJZqFdztKBpTKRNNPfHfGWGpH7NW1RDzk4RNsgw4MLVf9.pr4jQIYqDI1N1 DVMrPzIJ02K0qLZWdjkEALui68aqiCi2WzBtLgH99F+TMQcoXEpSvn7WGz0V PnpWhHLMJQgwBfRFOa4WmPUP79DZl4+P6VjArhG3HxiEHfhPIxFCj8Ku0YX1 2YHyQ5Utyf6sNC25Lb86Lni2351W.YequPS8Eh3eQ.OmLeGaijQH6EY8Q8Zu Hi40x05i9AasRR2y8JlRfvfH95XQnxkU+6edB5J34J5vcLXTFPgY5II.5zxb MO3YIIW938D2WF6tnc+rWpHfzUGR0qs0cEfYWA3WOfb0MUhm2BBo7gRZlPBe SHcMDRYiDzLYD7kuLpso288wYyu6GpMAuIWpUGC.zqmLU6DDS5b8wlhozrE8 0evhtr64EwPkO8r81ocbGkSOjsd0RUqSDnEsO.anyjvDLY2tWh45VqQ.oJ1W qWzJH37lPu.pmYJizKbl.0hqOhP.FCHnqGfjD4sK4w3zuYgMGW5eAi5GXf1J OWNzNW+vqGvrSkjN5zyoEOJnABMJW684dOCcPJyr50guMzgdtgIqZV99S7lG DxeRFsSbTom9tEk5kqL0Qc0SBsZ89PnRJ0BrdSX75ei6WIT.e9CC4cHhYH53 yq.hJ+W0mLHp7jxWuc8NDl94lEPU+97zHpwurQr8tEa1G3GGIIhJ+R4syate UtJxprqoLcqdBQOuF9wowwgq71+TPRvpPdE4fPO1KJXqWJOMPSOP6heWv1c6 
CzwgVbOdjm3c7Xx58w54jGT8adpguwm+TvZ9WB7SeT8tNhVURSvR5MU5gU49 05ooTDyAuVszXhYk1SPAkxEBjmJWUSnq1LubZOE6ReQd.nxe0h7aqCk8tbCM mCbTMv1pVejZzoVqB98Cd9hgbHD4siako0F8d05HIbT9r0IAlKntGpSRcA2E 0Zol6VzbDMWHPq1x.1iZTCKGXOvT6KKXeC3oikGDP04DhiBiECxlhnN.2Ff6 iqQ3hZsZ8EIr0AsXda0vpEdxJFtNl+fbECWzlVQwb0bpVwuvi3O40pZArLsz vZGdgzXp3e+zw4m0G6zw1293IFrJz80UiZpyX9TspyhnrjRsitiEKa1f6NV5 K5Yb9Uw09T+abL+Ujhir7rPgth06hoMFniefULYu1YeBTCka2fx.j5UFDYe8 VA5QRx5UneyF7YZCFNclfgSqE3hkI4EnE3Wd8XgCqCaQlwcM6v1uKPkN1G2F 6ysBi+xNwOz5iOXqV+pJ92aZ9tubB.SAejF1sw0m66SPexaSejuHsPmGAzTX hd.s0kwFM8VTxWhnjyiWx3njQN2hR9EuO27dql5zEa+Ryo6iAadrjW2YkS2b zO2qqcedcwfadcew30ESokVoA5RJgQcAihSWyaJi74Fx812pCWp6MGtWBGtv A5u0AeyeauV6Ep19kGiEy5i+t0bxjedXVYl70JAcYxmwdaJ10PqYqMSq8tfY KBduqWEEXDNNVKX0yjYyTpLmaguj41rnlLmawWDt83ZLKMgzxx0pHW422LLj DeX+5baE48tspR497jzfnhkb+WOtXP0dvGC78qtD2JjNveWbPTZFApSC5dEb CktkNiLhtkcrlQzsL3Ein6FXPinarCYor5Ab0KWNs3hQhgv.CYH47jLiDDxA RZDcimWJPxY8vLEe37htolZnYdomfHlpeaOunaSMPhlWF1Q1irAxQhtglZXu AOtWU51z9kzYFdab+RmYEcCFY0anKN2kOor+exUleNWwPU9A3LaXny0OUMAD D2MCkcSAmMnx2YSX7JuvrrAtX.6Eo2ZioE6XkKsc7UskFuW9ZJ5237cVdVBg wO9PwVznrtfiiJ83FTzWm6dMFzdI.BcvPYRrhWxbot.nbFX.KITLSVxun1xU dxUbu1RAbgweo.1B3IGAtjwG4.x81RHg.PmAxQu1Hmb4etZPGABy2CyFLxME 6Un2J8+IdSgIqD+zaZrWoZ+mN6p8+oo19G+xY7QoomJ.JKS7zzmJi.hvHRV6 kshkKcyt6pMUeYTw+T8.N8SFp3omUbTmkUIjcyxyqucfmrhH8pZ4AwtY44sl kmbEOSr7ftEyyqOKOYEq80MjGmaFddqY3ISuC51ucGJ65sQRryJ+fYn6ykAX yi.se4U261F5dkXEV4x5b60lcladoWjRhGo2+jfX2yuj3a9cbqj3uUR7CuZY xRdOaCy.kAWM75cGCqOIErSStzISwVgpsnqp+gPd2YvNhZKhQQ4VVF8.Zjxf cSECLzRUjMBerNN8V1BygDX+v1UEF5ubocprG48Mk3+msW518T+L7V2izT3L JWZdpa6rUzQXXRDWSMMqx4I+sZ48LqkWsLfBKjADlXXHPvHk35Cr8LI60Ksw B1QYiM22aE5ZKL4W3oVpMQw2mrMNN8QK4Pe9vv2FSJEcRaq8CSmc5YlUO8nG 4DXcNzyqKrai.63YQbOJHlsxlkK939+Rm3k8qRKU0vowonVUCWQOw1D0zJ.v 3pie4RURql1RkMOAZw7j2pj3vCo7VsLggmoJw0u7ZNcKua.AC1mNiY6SU5Cd P.D07VfWqKs+fA6oAQ8DCQ1aC+axIP6xCo5CPTCwTWEjxv8Boy+JD0ncf3gT sPvy0NnIEJFVqRqs+.z6.aHbGB.6YdECAGmJF54Lh+A0Svzg8SHW0cAumcQE 
oORaYFxrzAWkM+KuuTL6myHFGOPF2YvL9mzgs2EuOJUajo4Z77JUiIllpwyq JEw3DjddUWNvWjfsoUawLqXKLEqOyZaAlk+4DzR0zvAEiegT8NiUVbap1OYd UtQFS2f4U4FALsrcHuLKGCvkri6sb5e1kxLuIyFXW0AXfqdbcNpAVyHM0ZCN eXJGdJ71Q80PpZf7S7lfHwSHDXVIC8rMu6S9FlZzE31lP8JyDH6U6QZd1zXn 2vw5AEnWwi4kuIzunWliuiZSwP64rj1RfqamfxTbvljFuYSQp.zB223hP2OO BJ3t14Q1Kob97M4IYoRc2JeMBGIGX.7MGXMoTqlN0u0seoZIngTy.nTrO8S8 HqyVB3LYs9PWxosJHoHca2GeHxuR9487sJskmx2+sAZUp+yNIZ1BGojlt5KP sX7crc634u9aWFOx4AuL.2OYGbPNcdj7MAtj2I5.0SXWqh26qyrzNJBSrZcq zGJRY65ZLbKBVPCJtPxbiifZ4jdS+hRyNRdnsvS3WD7DJKr.MOkEQHkzBOAO KdR8ppNe5ZVr9rVjwnmtacUdRHZaeInxybxjTz7DTbBnaJE4X.EgbmejDZXj DzAluaMnz0QEW8LIVpIhTpyjBfxCeNKPOzTejTE+95j10ARv5.ip5suF06ls 5l5k+RC7pqtDrUuJFSORC6gj.LygZP0xGnCTFHFADQqJq74d7pmIOIqSid0d Hvgn935PwN.SYLQTDxAcvTOH0Ac5U5KDw+lewkfk6SyBfGlpkVwOaTMHaVow 3jckNhXDTwF5XgeVrAlZ.aHG3eIw6X2CAiLflHrgAsTaEzh0.Gr3hofVwiQm 4LFBnYHRwESACAcNKv2VW4WDmSuRm0pxIpL6hofM.Spg+p69js4KBOoc0pEl VKzDaRwop6hgFs0zN9zjQ8JvSKMAMgll1gFXBLQYyNRBAOG6YYiL.CuTCL.Z z.CrmT3CZRDzU2W7FeZxHKDS6HPglzaDMwxNSisa5HI.yTmfCflxFICRm.9X Jn3pofZgSq8LfIy0gyvhhvQsgTBYL8e.EWMEzJ5rnUWkIXpawESAoBXOCbEm CuW.bEZpSsoKTRfsoyA2DRSl3O84DIo3h+5c+ePKGcXN -----------end_max5_patcher-----------
It is still up to you what sound processing modules you develop and how you map the filtered signals but it is a head start.
I’m hoping to meet Marco in London in a few weeks and see where he is going with the Xth-Sense. I know he has been working on a wireless version, and I have a wee PCB design to simplify manufacturing. As for components, I got mine from Farnell:
I had resistors, capacitors, female jack connector and project boxes lying around so I didn’t purchase those but you can get at Farnell as well.
I have not done the sewing and have been using mine with electrical tape.
Last but not least, I’ve found a better option for the silicone cases, which will be included in the next official version of the Xth-sense.
They’re 11 quid for a sheet of 72. You can find them at: http://www.fingameblobs.co.uk/index.php?id=buy (Part: W. PD.2210 recessed), available in clear and black! These provide a better signal-to-noise ratio, and you only have to drill a hole, making them easier to make.
Hope you find this info useful.
Tons of info!
It was mainly the hardware and feature extraction that I was interested in, to use in my own patch/processes. I only saw the PD patch (screens of it anyways) recently. I had no idea it had the spawning/reconfiguration stuff, which is great.
Interesting to hear about a wireless version being developed. I would imagine, especially since it’s working with messages, that the downsampling could happen on an embedded circuit, which could pipe the data over using an XBee (or Bluetooth if you’re fancier, or that new TI all-in-one WiFi chip that’s supposed to come out soon, if you’re even fancier!).
The posted patch looks interesting/useful. I didn’t even think I had FTM installed, but as it turns out I do. I just hate taking on dependencies, as I never want to have to completely refactor or abandon code when deprecation eventually comes.
I know the guys at Goldsmiths were looking at both Bluetooth and XBee, but I’m not sure what the next official version will be. I have to say, though, that I quite like the simplicity of audio and long cables. I’ve worked with the BioMuse EMG bands and a Bluetooth Arduino for some years and have run into a few problems now and then. Also, the Xth-sense going directly into a DigiTech Whammy pedal is a blast!
I generally share your point on dependencies but have fallen for the ftm objects. Having matrices that are not assumed to be video and having static/dynamic named variables makes for very clean patching.
Looking forward to seeing what you create with the Xth Sense.
I’ve messed with XBee a bunch, and although the xbee communication itself seems fine/reliable, I REALLY hate serial connectivity in Max.
I’ve never had anything crash my computer (much less Max) so hard. Like hard locked up.
ftm is also scary in how big it is. As far as I know, I don’t need anything in it, so it would take me a while to learn what I need, then more to implement it….
Thanks for your help with all of this!
(off-topic) I’m using SenseStage’s MiniBee (an Arduino+XBee variant). The lightest version of their supporting software suite uses a simple GUI-less Python app that handles the XBee communication and sends OSC to Max (or whatever). I’m receiving data at 60 Hz without a hitch.
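The bridge pattern that app uses (serial in, OSC out) is simple to sketch. The frame format and OSC address below are hypothetical, purely for illustration; the real MiniBee protocol differs, and in a real bridge you would read lines with pyserial and send the result with an OSC library such as python-osc:

```python
# Hypothetical frame format: "nodeID,val,val,val\n" with raw 10-bit ADC values.
# (Not the actual MiniBee wire protocol; this only shows the bridge pattern.)

def parse_frame(line):
    """Parse one serial line into (node_id, list of 0..1 floats)."""
    parts = line.strip().split(",")
    node = int(parts[0])
    values = [int(p) / 1023.0 for p in parts[1:]]  # scale 10-bit ADC to 0..1
    return node, values

def to_osc_address(node):
    """Build an OSC-style address per sending node (my own naming convention)."""
    return f"/minibee/{node}/data"

node, vals = parse_frame("2,512,1023,0\n")
print(to_osc_address(node), vals)  # one address per node, scaled values
```

With two senders and one receiver, each node just gets its own address, and Max picks them apart with a plain `route` object on the other end.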
You never have a problem with the python app getting hung up on serial?
I ended up abandoning my XBee experiments when I got funding to buy an x-IMU (which blows my xbee/arduino system out of the water), but do want to use xbee stuff for other projects.
I don’t recall having any issues with it. I initially had some communication with SenseStage about getting high frequency data transmission stable, with some firmware updates etc. After that, no problem. Except for when the battery runs out of juice ;)
I’m using two senders and one receiving node.
Sorry for the noise on this topic, but I came across this link to an electromyogram sensor: https://getmyo.com/. In the past I have of course used the BioFlex sensor: http://infusionsystems.com/catalog/product_info.php/products_id/199
But I hope a dedicated Arduino development will come along…
Hi all! This is my first post here :)
First, thanks (and especially to Miguel)! I’m glad of such interest in the instrument.
Second, please note that the Xth Sense (XS) uses muscle sounds, or MMG (mechanomyogram). This is quite a different signal from what the Biomuse and the Myo are based on. The XS captures muscle sounds, i.e. mechanical vibration, with a custom mic and no skin contact; the Biomuse and the Myo capture electrical voltage from skin-contact electrodes. Therefore, the XS amplifies sounds from your body and produces control data from them. The Biomuse and the Myo only produce control data.
Unfortunately, a piezo mic won’t report the frequency range of the electret condenser mic used by the XS (which is indicated in the publicly available parts list at the XS website, along with tutorials, etc. http://res.marcodonnarumma.com/projects/xth-sense).
That said, a small team of Max/MSP developers has emerged spontaneously, and we have started porting the XS software; Miguel is with us too. Progress will be slow. As Miguel said, the instrument is way more complex than it looks; there’s a lot going on under the hood to keep usability very high, even for beginners. Besides, none of us is paid to do this, so people have other priorities. We plan to have it ready around autumn, but that’s only a plan, I cannot guarantee it :)
We are looking for expert Max/MSP programmers who would like to spend a little time collaborating with us on the porting. Please contact me by PM if you’re interested or know someone who might be.
Another side of the story is the license. The XS software is GPL licensed (free software: http://www.gnu.org/licenses/quick-guide-gplv3.html), and this won’t change. The GPL license poses some issues in terms of porting to a commercial platform such as Max. If anybody knows the Max license well, we could use some further info.
We do have a different team working on this. A working prototype exists, and will be tested soon with a large-scale biophysical dance project; we are all quite excited about it. So expect an open source, DIY wireless Xth Sense sometime this year!
Also, don’t want to do propaganda here, but for all XS related questions, etc, feel free to post at our Users Forum (kindly hosted by Create Digital Music):
Since I sometimes work with Atau, I’d be very interested in helping.
Thanks for chiming in.
It’s cool that there’s a port happening as there are quite a lot of Max users who would be into it (assumption). It would be great just to have the feature extraction side of things ported over, as the core idea (muscle audio to control/audio data) is interesting, and useful beyond the scope of the XS project in specific.
My max is ok, but not good enough to help with something like this (hence the starting of this post), otherwise I’d be into it.
Thanks @mortimer59! Will write to you by PM.
@rodrigo, I see your point. It is my belief, though, that the Xth Sense needs to be ported in the most transparent way, and "in full", which simply means making the Xth Sense, as a musical instrument, available to the Max community. This means also porting the custom mapping methods I developed and, more generally, the way a musician composes music with it, which is dictated by the design of the GUI and the way you interact with the software.
Of course, once the port is complete, people are free to reverse-engineer it and take what they need most. Which is what is happening as we speak with the Pure Data version and the hardware components.
Glad of this discussion!
I’m probably not an expert in Max programming, nor in Pd, but if I can help with this wonderful project (for example, French translations of a user guide or something like that), it would be a pleasure!