I'm working on an installation built on a few simple concepts:
- elements traveling across the screen (gl.gridshape > gl.multiple)
- each element has a [poly~] voice, and its position, orientation, etc. drive modifications of that voice
The objects' movements drive the whole system: the traveling elements are the root cause of the sound alterations.
So I'm spilling some matrices (one coordinate plane from the positions, one from rotatexyz, etc.) and sending that data to the appropriate voice of my [poly~] to alter the sounds.
As soon as I start feeding the [poly~] like that, performance drops dramatically. That part is definitely my bottleneck.
I have 64 voices, and on every update I need to change one value in each voice of my poly.
About the poly itself: each voice is currently just a [cycle~] with two [*~] to map the visual position to the stereo position.
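For what it's worth, here is the panning math those two [*~] are doing, sketched in Python (not Max code, just an illustration). The function names and the equal-power variant are my own additions: a plain linear pan (left = 1 - x, right = x) dips about 3 dB at the center, while an equal-power pan keeps perceived loudness roughly constant as an element travels across the screen.

```python
import math

def pan_gains(x: float) -> tuple[float, float]:
    """Map a normalized x position (0 = left, 1 = right) to (left, right)
    stereo gains using equal-power panning."""
    x = min(max(x, 0.0), 1.0)          # clamp position into [0, 1]
    theta = x * math.pi / 2            # sweep a quarter circle
    return math.cos(theta), math.sin(theta)

# At the center, both channels sit near 0.707 (-3 dB) instead of 0.5.
left, right = pan_gains(0.5)
```

In the patch this would just mean shaping the pan value before the two [*~], rather than multiplying by x and 1 - x directly.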
I even played with the audio settings (signal vector size and I/O vector size), since that can help when you're close to the critical limit. It didn't really help here; at least, it wasn't enough.
Are there known limitations when feeding a poly like this? I don't mean the number of voices, but rather deferring all the messages entering the poly, or something else along those lines?
I used jit.change, jit.qball, and deferlow to limit the number of messages; it helps a tiny bit, but not enough.
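To make the message-thinning idea concrete, here is a small Python sketch of per-voice change gating with a threshold (the role [jit.change]/[change] plays in the patch). The function names and the threshold idea are my own illustration: a value is only forwarded to a voice when it differs from the last value that voice received by more than some epsilon, so near-static elements stop generating traffic entirely.

```python
def make_change_gate(threshold: float = 0.0):
    """Return a gate that passes a value for a given voice only when it
    differs from the last value sent to that voice by more than
    `threshold`. Returns the value to send, or None to drop it."""
    last = {}  # voice number -> last value actually sent

    def gate(voice: int, value: float):
        prev = last.get(voice)
        if prev is None or abs(value - prev) > threshold:
            last[voice] = value
            return value   # forward to this poly~ voice
        return None        # drop: change too small to matter

    return gate

gate = make_change_gate(threshold=0.01)
gate(1, 0.50)    # first value for voice 1: passes
gate(1, 0.505)   # within threshold: dropped
gate(1, 0.60)    # big enough change: passes
```

With 64 voices, raising the threshold even slightly can cut the message rate a lot, at the cost of coarser parameter motion.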
Any experience or ideas to share?
Thanks in advance!