Live-coding continues to grow as a musical performance format, with Algoraves and similar shows popping up all over the place. Graham Wakefield (Cycling '74) and Charlie Roberts (author of the popular Gibber environment) have teamed up to bring the rapid, expressive interface of live-coding to other environments, allowing performers to leverage the DSP and other capabilities those environments offer.
Gibberwocky began as a Max for Live project that exposed Ableton Live's features to coding with the Gibber environment. Last week at NIME, Graham and Charlie introduced a new Max package that offers the same kind of integration with Max features, including signal processing in Gen, the Max transport, and message passing of all sorts.
Our interest in musical gesture led to a focus on continuous modulation in gibberwocky: declaring modulation graphs, changing them over time, and creating visualizations depicting their state are all heavily prioritized. This dual emphasis on pattern manipulation and modulation positions gibberwocky somewhat uniquely among live-coding environments.
Here's Gibberwocky in action:
If you'd like to learn more about Gibberwocky and live-coding in general, I highly recommend reading over the NIME white paper accompanying the package, and installing the Gibberwocky package from the Max Package Manager.
- May 24 2017 | 1:16 am
wow. fantastic. love it. This is the salon music of now. (applause, chatter, clink of glasses) And what a great interface too!
- May 24 2017 | 2:22 pm
There's a quick intro video to Gibberwocky for Max here:
- May 24 2017 | 3:29 pm
that looks fantastic!
- May 24 2017 | 3:33 pm
Graham, your intro video is set to private, just FYI.
- May 24 2017 | 4:48 pm
Nice, and reminiscent of something I wrote about 20 years ago for live-coding in Max: a rather tersely-notated dynamic pattern sequencer. First written in Pyrite (the SuperCollider precursor), then C, then Java, then Python. The latest (hardly documented) version is here: https://github.com/cassiel/pulse-sequencer and there's some output here:
This is the stuff you had to write in the original version. As I said: terse.
[ TangramFM;
. . pnames, trigger_Velocity trigger_Pitch fan emit_MAIN input;
. . snames, tail.SHORT tail.OK prefix_2 prefix_1 velocities tail tail_1 tail_2 prefix P2 randomParams P P1 P0 random_1 random_0 input_patt;
. . [ pulses;
. . . input, pbind.sequence input_patt fan spell:00 empty empty;
. . . emit_MAIN, pbind.emit notes 0 0 0 100;
. . . fan, pbind.fanout trigger_Pitch trigger_Velocity emit_MAIN;
. . . trigger_Pitch, pbind.sequence P pitch:notes spell:11 empty empty;
. . . trigger_Velocity, pbind.sequence velocities velocity:notes spell:11 empty empty;
. . ];
. . [ sequences;
. . . input_patt, sbind.assemble prefix tail;
. . . random_0, sbind.random randomParams;
. . . random_1, sbind.transpose random_0 const:1;
. . . P0, sbind.capture 59 61 64 54 66;
. . . P1, sbind.transpose P0 const:7;
. . . P, sbind.assemble P0 P1 P2;
. . . randomParams, sbind.assemble 1 127;
. . . P2, sbind.transpose P0 const:12;
. . . prefix, sbind.select prefix_1 prefix_1 prefix_2;
. . . tail_2, sbind.assemble spell:0.0...0. spell:1..00... spell:0.0..00.;
. . . tail_1, sbind.assemble spell:0.0..00. spell:00000.1. spell:0.0..10.;
. . . tail, sbind.assemble tail.OK;
. . . velocities, sbind.assemble 120 80 random_1;
. . . prefix_1, sbind.assemble spell:111.00..;
. . . prefix_2, sbind.assemble spell:1.1100..;
. . . tail.OK, sbind.select tail_1 tail_2;
. . . tail.SHORT, sbind.assemble spell:0.00....;
. . ];
. ];
- May 24 2017 | 6:16 pm
I was skeptical at first, but this is the "on the run" moment of our generation.
- May 24 2017 | 10:11 pm
Wow, this is awesome! As a musician who is also a programmer, I have always been cynical of live-coding systems, but this looks really well thought out, with a focus on creating a realtime connection with the code. Making it... expressive.
- May 25 2017 | 11:59 am
Hi Nick, big fan of your work... thanks for these precedents, they look really interesting. Sorry we missed them for the paper(s), but we'll be sure to mention them in any future writings. And thanks to everyone else for the words of encouragement! - Charlie
- May 25 2017 | 2:40 pm
Yes, but all the code annotations would break. I need to spend a solid chunk of time abstracting the annotations away from language-specific / implementation-specific details; they're fairly fragile at the moment. It would be a lot of fun to have a pull-down menu with all the different languages that compile to JS and let people choose whatever they wanted to use.
That said, the newest versions of JS are getting pretty close to CoffeeScript, certainly in functionality. JS is becoming much nicer in my opinion; the last two years have brought some big improvements in usability, and CoffeeScript was the inspiration for many of them.
If anyone is interested in hacking around, we published the communication spec so that other systems / languages could use the Max object:
https://gist.github.com/charlieroberts/a0a4234646f4ab06b5a07dbe969b6b6a
- May 25 2017 | 3:05 pm
Oddly enough, I was contemplating doing something similar (cueing events ahead of the current playback time) to embed the pulse sequencer inside Max for Live, but rather than have it run (almost) real-time I was going to make it render out MIDI content directly into MIDI clips, in effect recording the generated data a bar at a time. I believe Sam Aaron's Sonic Pi also cues events some time into the future (rather necessary on a Raspberry Pi 1). Meanwhile, maybe I can target your system at the websocket layer. (The websocket implementation looks like a good call - I've done MIDI-over-websockets for Max, to/from a Heroku server from various locations around the globe, and it's pretty robust.)
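The bar-at-a-time rendering idea described above can be sketched in a few lines. This is a hypothetical illustration, not the pulse-sequencer's actual API: `renderBar` and its pattern format are invented for the example. It just timestamps one bar's worth of generated note events ahead of the playhead, so they could be written into a MIDI clip before the bar is reached:

```javascript
// Sketch: render one bar of note events ahead of the playhead.
// The pattern format (array of {pitch, velocity} or null per step)
// is made up for this example.
function renderBar(barIndex, { bpm = 120, beatsPerBar = 4, pattern }) {
  const beatMs = 60000 / bpm;                 // one beat in milliseconds
  const barMs = beatsPerBar * beatMs;         // one bar in milliseconds
  const barStartMs = barIndex * barMs;        // absolute start of this bar
  const stepMs = barMs / pattern.length;      // spacing between pattern steps
  const events = [];
  pattern.forEach((step, i) => {
    if (step) {
      events.push({
        pitch: step.pitch,
        velocity: step.velocity,
        timeMs: barStartMs + i * stepMs,      // timestamped into the future
      });
    }
  });
  return events;
}

// Render bar 1 of a simple 4-step pattern at the default 120 bpm.
const events = renderBar(1, {
  pattern: [{ pitch: 60, velocity: 100 }, null, { pitch: 64, velocity: 80 }, null],
});
```

Rendering a whole bar ahead trades immediacy for timing accuracy: events land exactly on the grid regardless of scheduler jitter, at the cost of edits taking effect a bar later.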
- May 25 2017 | 7:29 pm
Thanks Charlie for posting the websocket spec. That's the spec for gibberwocky for Ableton Live; the spec for gibberwocky for Max is a little different, so I've posted that here:
- May 25 2017 | 7:45 pm
Yeah, the websockets have been really stable, and one beat of advance is more than enough to cover local network jitter. Using websockets also gave us multi-client support almost for free. It would be fun to try more remote connections.
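The "one beat of advance" claim is easy to make concrete with a little arithmetic (nothing here is Gibberwocky API, just tempo math): one beat at a given bpm works out to enough milliseconds of headroom to absorb typical local-network jitter, which is usually well under 50 ms.

```javascript
// Headroom gained by scheduling events one beat ahead of "now",
// as a function of tempo. 60000 ms per minute / beats per minute.
const beatLeadMs = bpm => 60000 / bpm;

console.log(beatLeadMs(120)); // 500 ms of lead at 120 bpm
console.log(beatLeadMs(180)); // ~333 ms even at a fast tempo
```

Even at Algorave-friendly tempos, the lead time stays an order of magnitude above typical LAN jitter, which is why remote and multi-client setups work so well.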
- May 25 2017 | 7:55 pm
I was testing by playing a MIDI keyboard into Max from the UK via Chrome websockets to/from a Heroku server in the US, and while there was a bit of latency, it was remarkably robust. We rolled out clients to galleries in Ireland, Finland, Australia and Japan and that all went well - I think some of the browsers were just left running 24/7 for several days.
Have you tried any cross-continent gibbering? A while ago I worked with a choreographer to live-code a dance piece in London while she was in New York. We simultaneously executed commands in the same running code base. (That was Clojure network REPLs, which I suspect are websocket-like.)
Btw, I love your in-browser indications of beat and waveform alongside the JS. Again, reminds me of Field, which has escape sequences to drop graphics into Python source code: http://openendedgroup.com/field/OverviewBanners2.html
- May 25 2017 | 7:57 pm
Yes, Field has been a pretty amazing source of inspiration for a while!
- May 25 2017 | 7:59 pm
Really glad to hear that... sometimes it feels as if Marc and I are the only users. (And I've not touched it for quite a while now.)
- Jun 01 2017 | 3:03 am
Hey! The shortcut for stopping all sequences doesn't seem to work on a non-US keyboard... (I have a French one.)
- Jun 01 2017 | 5:25 pm
Another thing: in the demos of the gibberwocky script editor, "tutorial 1: basic messaging" says:
* To start make sure you open the patch:
* gibberwocky_tutorial_1-4
but there doesn't appear to be such a patcher inside the gibberwocky Max package.
- Jun 04 2017 | 11:45 pm
Hi, I'll look into the keyboard issue. In the meantime you can use:
Gibber.clear()
... to stop all the running sequences (I'll add this to the reference as well). In regards to the tutorials, they're all designed to work with the help patch that comes with the gibberwocky object; I just missed changing the text in that particular example. Thanks for pointing both of these issues out! - Charlie
- Jun 05 2017 | 3:20 pm
Thanks! I was hoping for a Gibber.clear() but didn't know if it existed. (Btw, I opened an issue on GitHub for the same thing; where's the best place for this kind of report / feature request?)
- Jun 05 2017 | 3:21 pm
Maybe Gibber needs a Gitter?
- Jun 05 2017 | 6:13 pm
Discussion about Gibber / gibberwocky is already really fragmented between the mailing list, the Slack channel (https://livecode.slack.com/messages/C1NEL9Z61), discussions on GitHub, discussions here, discussions on llllllll.co, etc. But Gitter looks interesting! I guess I'd need to make a Gibber organization (this is way overdue regardless) and then have a public chat for that repo? I'd like to avoid separate chat channels for all the different versions and repos of gibber / gibberwocky.
Anyways, vichug, for right now the best place for bugs / feature requests is definitely the GitHub repo. For more general questions, Slack, the Gibber mailing list (http://lurk.org/groups/gibber/), or this forum are all great.
- Jun 08 2017 | 4:44 pm
Just a quick update for folks: we've pushed a minor update that fixes some bugs and documentation, available now through Max's package manager. Also want to point out that the source has moved to a new location -- please send any bugs / feature requests via the issues page here:
Thanks!
- Feb 12 2018 | 9:45 pm
This crashes Max 8. I insert an Arp.amxd Max MIDI effect, then insert an object and change it to "gibberwocky", and Max 8 crashes.
- Feb 12 2018 | 10:19 pm
Better yet, https://github.com/gibber-cc/gibberwocky.live