poly~ strategies for myUniverse project
As some followers and precious helpers here have noticed, I'm building something huge (I'm not flattering myself anymore; it really is huge for me, and it fits my needs exactly, no less, no more; size doesn't matter, right?!)
Basically, for those who didn’t follow:
myUniverse = 3D Space with a cam and moving objects emitting sound
My cam is the place where ears are.
My objects emit sounds.
The cam is moving, rotating with total freedom.
A Java Core instantiates a pre-made abstraction for the visual part.
Each object in the 3D space (with all its references kept in the Java Core) is associated with a "visual unit".
A visual unit is an abstraction instantiated once per object: as many instances as there are objects.
I'd like to take the same approach with the sound,
but the poly~ concept naturally seems more interesting here.
Whatever my choice, I'd need to be able, at any time, to associate an object with a sound unit (a voice, a poly~, or an abstraction).
The reason: I need to send messages (sound triggers, master clock, mute/unmute, distance & angles for the Doppler effect, etc.) to the sound unit.
I can also add that if you are VERY far from an object and cannot see it, that object is disabled, and its associated sound unit is muted/disabled too…
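Whatever the final choice, the common piece is a registry in the Java Core that maps each 3D object to its sound unit and forwards messages to it. Here is a minimal sketch of that idea in Java; all the names (SoundUnit, SoundRegistry, the 100-unit distance threshold) are hypothetical, not part of the actual project:

```java
import java.util.HashMap;
import java.util.Map;

// A sound unit stands in for whatever the audio side turns out to be
// (a poly~ voice, a whole poly~, or a plain abstraction).
class SoundUnit {
    boolean muted = false;
    double distance = 0.0;
    double angle = 0.0;

    // Position update coming from the 3D engine
    void update(double distance, double angle) {
        this.distance = distance;
        this.angle = angle;
        // Mute units that are too far away to hear (threshold is arbitrary)
        this.muted = distance > 100.0;
    }
}

// Maps object ids (as known to the Java Core) to sound units,
// so triggers, mute/unmute, and spatial data can be routed per object.
class SoundRegistry {
    private final Map<Integer, SoundUnit> units = new HashMap<>();

    void register(int objectId, SoundUnit unit) { units.put(objectId, unit); }

    SoundUnit get(int objectId) { return units.get(objectId); }

    void onObjectMoved(int objectId, double distance, double angle) {
        SoundUnit u = units.get(objectId);
        if (u != null) u.update(distance, angle);
    }
}
```

The point of the indirection is that the registry does not care which of the three options below is behind a SoundUnit, so the decision can be changed later without touching the Java Core.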
SO, here are my ideas.
1/ one poly~ per sound type
In that case, let's imagine I have 10 objects of type X and 4 of type Y; I'd have one poly~ for type X and one for type Y.
It could fit, but I wouldn't have polyphony per object (which can be fine for drones).
It would be a bit tricky to associate objects with voices…
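That "tricky" bookkeeping for option 1 is essentially a voice allocator per sound type: each object gets a stable voice index inside that type's single poly~, and messages are prefixed with poly~'s `target` message to reach the right voice. A sketch, with hypothetical names:

```java
import java.util.HashMap;
import java.util.Map;

// One allocator per sound type (X, Y, ...): hands out voice indices
// inside that type's single poly~.
class VoiceAllocator {
    private final int maxVoices;
    private final Map<Integer, Integer> objectToVoice = new HashMap<>();
    private int next = 1; // poly~ voices are 1-indexed

    VoiceAllocator(int maxVoices) { this.maxVoices = maxVoices; }

    // Returns the voice index for this object, or -1 if the poly~ is full.
    int assign(int objectId) {
        Integer existing = objectToVoice.get(objectId);
        if (existing != null) return existing;
        if (objectToVoice.size() >= maxVoices) return -1;
        int voice = next++;
        objectToVoice.put(objectId, voice);
        return voice;
    }

    // The routing prefix to send before a payload: "target <voice>"
    String targetMessage(int objectId) {
        int v = assign(objectId);
        return v < 0 ? null : "target " + v;
    }
}
```

So with 10 objects of type X, one `VoiceAllocator(10)` keeps the object-to-voice mapping out of the patches themselves.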
2/ one poly~ per object
It means A LOT of poly~ objects.
I could instantiate each poly~ directly inside my pre-made object abstraction… a dream to handle things that way: I instantiate all my objects from the Java Core, and everything is self-contained.
It seems the easiest option, though probably not the best one for performance.
I'd have full polyphony for each object, which could be nice in the near future.
I'm very tempted by this option.
I'll be able to create and place a lot of objects soon after my GUI is finished (the GUI will give me a way to compose, change object properties, etc.), so I'll probably test with a lot of objects…
BUT it means very strong coupling between the objects' visuals and sounds. If I wanted to split things across more than one machine, it would constrain them a bit (by the way, with some protocols built over OSC, I guess it wouldn't be that hard).
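To give a feel for what such an OSC protocol could look like: one decoupling approach is to give each object a stable id and have the visual machine send per-object addresses like `/obj/12/trigger`, `/obj/12/mute 1`, or `/obj/12/spat <distance> <angle>`. The scheme and names below are purely illustrative, not an existing protocol:

```java
// Tiny formatter for a hypothetical per-object OSC address scheme.
// A real setup would hand these strings to an OSC library; here we
// only build the textual form to show the shape of the protocol.
class OscAddress {
    static String of(int objectId, String verb, Object... args) {
        StringBuilder sb = new StringBuilder("/obj/")
                .append(objectId).append('/').append(verb);
        for (Object a : args) sb.append(' ').append(a);
        return sb.toString();
    }
}
```

With such a scheme, the sound machine only needs the object id to route a message to the right sound unit, so visuals and audio can live on different machines.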
3/ no poly~
One sound generator per visual abstraction.
Same as 2, but without polyphony (maybe a bit less CPU-hungry, since there's no concept of voices…).
A strange case, by the way.
What would be your first idea?
ANY ideas would be interesting :)