Theoretical question: Shared rendering contexts across computers


I was wondering whether Jitter will support shared rendering contexts across computers at some point in the future?

Now that Google has announced Project Tango, I believe that in the near future phones will be able to share contexts, so people will be able to play group games and the like in augmented reality, using 3D glasses as an individual camera view into a common 3D space.

So I was thinking that if Jitter could share contexts between phones at some point in the future, it could revolutionize live performances. The whole performance space (and beyond) could become a 3D augmented reality space where people could walk around and observe 3D meshes from various perspectives. Now add surround sound to that and you have my dream setup :)
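
Just to clarify what I mean by "shared context": since an actual OpenGL context can't literally span machines, I imagine the closest approximation would be for every device to keep an identical local scene and only share state (camera poses, mesh transforms) over the network. Here's a minimal Python sketch of that idea, purely as a thought experiment and not anything Jitter currently offers; the UDP port, message format, and function names are all made up for illustration.

```python
# Hypothetical illustration: each machine renders the shared scene locally
# and only broadcasts/receives viewer camera poses over the LAN.
import json
import socket
import time

BROADCAST_ADDR = ("255.255.255.255", 9000)  # assumed LAN broadcast port


def make_sender():
    """UDP socket that broadcasts this viewer's camera pose to the LAN."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    return s


def send_pose(sock, viewer_id, position, orientation):
    """Share one viewer's camera pose; every peer renders the same scene
    locally and simply places this viewer's camera where the pose says."""
    msg = json.dumps({
        "viewer": viewer_id,
        "t": time.time(),
        "pos": position,       # [x, y, z] in the shared world space
        "quat": orientation,   # [x, y, z, w] rotation
    }).encode("utf-8")
    sock.sendto(msg, BROADCAST_ADDR)


def make_receiver():
    """UDP socket that listens for poses from the other machines."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", BROADCAST_ADDR[1]))
    s.setblocking(False)
    return s


def poll_poses(sock):
    """Drain any pending pose packets; returns {viewer_id: latest pose}."""
    poses = {}
    while True:
        try:
            data, _ = sock.recvfrom(4096)
        except BlockingIOError:
            break
        pose = json.loads(data.decode("utf-8"))
        poses[pose["viewer"]] = pose
    return poses
```

In other words, the "shared" part is really just synchronized scene state, and each phone or computer does its own rendering from its own viewpoint. That seems like the kind of thing that could already be prototyped over a local network today, which is why I'm curious whether something along these lines is on the roadmap.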