I hope you are all doing well on this first day of the year.
The idea: a square shape, some points inside that square, and a line moving from one side to the other, "reading" the points.
When the line hits a point, that point's coordinates are sent out to the sound engine and a sound is produced.
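To make the intended behavior concrete, here is a minimal sketch (in Python, purely illustrative — the actual thing would live in a Max patch) of the scanline hit test I have in mind. The point coordinates, tolerance, and step count are all hypothetical placeholders:

```python
EPSILON = 0.01  # hit tolerance; tune to taste (hypothetical value)

# Hypothetical point set inside a unit square: (x, y) pairs.
points = [(0.2, 0.7), (0.5, 0.1), (0.8, 0.4)]

def sweep(points, steps=100, epsilon=EPSILON):
    """Sweep a vertical line left-to-right across the square and
    report each point once, the first time the line crosses its x."""
    fired = set()
    events = []
    for i in range(steps + 1):
        x = i / steps  # current scanline position
        for idx, (px, py) in enumerate(points):
            if idx not in fired and abs(px - x) < epsilon:
                fired.add(idx)
                events.append((px, py))  # would trigger a sound here
    return events

events = sweep(points)
```

Each point fires exactly once, in left-to-right order, which is roughly the event stream I'd want to feed the sound engine.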
I'm using Jitter for the visual side of things.
My only question is:
- Should I have a single system that handles both the visuals and the coordinate-match detection in Jitter? jit.gen would do the comparisons at high speed, and I'd send the matching coordinates out to my sound engine.
- Or should I decouple things: keep the visuals on their own, and have a second system (coll-based maybe, or even techno~ or seq~) in which each visual object has a corresponding event, etc.?
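For the decoupled option, the core idea is that the visual side only reports *which* object was hit, and a separate lookup table (coll-style) maps that ID to sound parameters. A tiny illustrative sketch, with entirely hypothetical IDs and parameters:

```python
# Hypothetical coll-style event table: object ID -> (pitch, velocity).
event_table = {
    0: (60, 100),
    1: (64, 90),
    2: (67, 80),
}

def on_visual_hit(obj_id):
    """Called when the scanline hits visual object obj_id;
    looks up the sound event associated with that object."""
    pitch, velocity = event_table[obj_id]
    return pitch, velocity  # would be routed to the sound engine
```

The appeal of this split is that I could reorganize or replace the sound mapping without touching the visual/detection patch at all.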
Any ideas or opinions?