#CarbonFeed directly challenges the popular notion that virtuality is disconnected from reality. By sonifying Twitter feeds and pairing individual tweets with a physical data visualization in public spaces, the work allows viewers to see and hear the environmental cost of online behavior and of the physical infrastructure that supports it.
#CarbonFeed works by ingesting real-time tweets from Twitter users around the world, listening for tweets that match a customizable set of hashtags. The content of each incoming tweet drives a real-time sonic composition. An installation-based visual counterpart, in which compressed air is pumped through tubes of water, provides a physical manifestation of each tweet.
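The hashtag-matching step described above can be sketched as follows. This is a minimal illustration, not the installation's actual code: the hashtag set, function names, and tweet shape are assumptions introduced here.

```javascript
// Hypothetical set of tracked hashtags; in the work this set is
// customizable per installation.
const TRACKED_HASHTAGS = new Set(['#climate', '#energy']);

// Return true if the tweet text contains any tracked hashtag.
// Matching is case-insensitive, since hashtags are case-insensitive
// in practice on Twitter.
function matchesTrackedHashtags(text) {
  const tags = text.toLowerCase().match(/#\w+/g) || [];
  return tags.some((tag) => TRACKED_HASHTAGS.has(tag));
}
```

Tweets that pass this filter would then be handed on to the sound- and air-generation stages.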
Tweets are parsed in Node.js, which then sends OSC messages to both Max/MSP, for electronic sound synthesis, and Processing, for controlling the release of compressed air. Max/MSP uses information about each tweet to dynamically generate its sound.