The cam, the object ... a trigonometric love story
Hi there,
Is there an object or routine (native or not) to calculate in one pass:
- distance between a cam and an object
- azimuth, elevation
?
trig is CPU-expensive
a LUT can be used, a (truncated) Taylor series too...
but I just wanted to ask before reinventing the wheel :D
anyone would help me with my rusty brain ?
cartopol & poltocar are 2D only
but I guess I can make some pre-calculations in order to use them
any help would be totally appreciated :)
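Not a native object, but for reference, the LUT idea mentioned above can be sketched in plain Java (the class name and table size are my own choices; a small sine table with linear interpolation):

```java
// Minimal sine lookup table with linear interpolation.
// A sketch of the LUT idea from the question, not a tuned implementation.
public class SineLut {
    static final int SIZE = 4096;
    static final double[] TABLE = new double[SIZE + 1]; // one extra entry for interpolation
    static final double STEP = 2 * Math.PI / SIZE;

    static {
        for (int i = 0; i <= SIZE; i++) TABLE[i] = Math.sin(i * STEP);
    }

    // Approximate sin(x) for x in [0, 2*pi).
    public static double sin(double x) {
        double pos = x / STEP;
        int i = (int) pos;        // table index just below x
        double frac = pos - i;    // interpolation fraction
        return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i]);
    }
}
```

With a 4096-entry table the interpolation error is well below 1e-6, which is plenty for panning/attenuation work; whether it actually beats the native `Math.sin` on a given CPU is something to benchmark, not assume.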
Hi dtr,
thanks for your answer.
In particular, I wanted to know whether you always do that kind of calculation in jit.gen.
I know it is very fast!
I'm doing some calculations inside my JAVA Core: no trig, only distances, and optimized (I mean, no sqrt called directly, etc.).
Maybe I should use jit.gen only for that kind of calculation.
I don't know.
fyi, I need to calculate this in order to know a, e, d relative to my cam for ALL objects (ALL = only the active zone, which means in my case those I'm near enough to be within their sound range)
Maybe there is some native things to do that.
Anyway, I'll now dive into jit.gen (finally), even if only for calculations =)
just a question
what input type should the script expect ?
Hey,
In my current project I try to do all the geometry math (kinect skeleton, particle system, etc) with matrix operations. I used to store my data as lists in coll's and do the math on the lists. Matrices are definitely faster.
I'm making a habit of putting more complex operations that require a chain of jit.expr's etc. in one jit.gen. First and foremost because of tidiness. I'm sure there are performance gains but I haven't really been doing comparisons. Would be interesting to do.
I haven't done enough java to be able to compare.
About input type, here's the abstraction it's used in, should be self-explanatory:
... it makes me think...
I'm not the kind of guy who likes text coding more than anything else, but I chose to have my JAVA Core as a brain.
It works fine (but I still have to add the calculation of the relative position of objects to my camera position/orientation)
I'm using HashMap in JAVA, making fast and tight loops etc
But it makes me think that, of course, I could do everything with Max objects,
storing stuff in coll, dict, etc
I guess, as usual, I'd move to that progressively IF things go bad with my current stuff.
About your jit.gen, I'd only need to enter x,y,z/orientation for my cam and the position of my object.
I'd do that for each object (= abstractions too, in my case)
I'd need to matricize my 9 parameters and tweak the codebox
going to do that right now.
Thanks for pointing me to this jit.gen universe that I can use for "pure" calculation too :)
I'm rusty a bit here.
I have a vector (camDirectionX, camDirectionY, camDirectionZ) that represents my camera direction of view. I have a (camX, camY, camZ) that is my camera position.
Then, I have an object placed at (objectX, objectY, objectZ)
How can I calculate, from the camera's point of view, the azimuth & elevation of my object?
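For reference, the math being asked for can be sketched in plain Java (my own names, not Max code; this assumes Y is the world "up" axis and builds a camera basis by projection, similar to the vec 0. 1. 0. approach that comes up later in the thread):

```java
// Azimuth/elevation/distance of an object relative to a camera.
// Assumes Y-up. Note this breaks down when the view direction points
// straight up or down (world-up and forward become parallel).
public class Aed {
    // Returns {distance, azimuth, elevation}, angles in radians.
    public static double[] aed(double camX, double camY, double camZ,
                               double dirX, double dirY, double dirZ,
                               double objX, double objY, double objZ) {
        // Vector from camera to object.
        double vx = objX - camX, vy = objY - camY, vz = objZ - camZ;
        double dist = Math.sqrt(vx * vx + vy * vy + vz * vz);

        // Normalized forward axis from the view direction.
        double fl = Math.sqrt(dirX * dirX + dirY * dirY + dirZ * dirZ);
        double fx = dirX / fl, fy = dirY / fl, fz = dirZ / fl;

        // right = worldUp x forward, with worldUp = (0, 1, 0)
        double rx = fz, rz = -fx;
        double rl = Math.sqrt(rx * rx + rz * rz); // 0 when looking straight up/down
        rx /= rl; rz /= rl;

        // camera-local up = forward x right
        double ux = fy * rz, uy = fz * rx - fx * rz, uz = -fy * rx;

        // Object coordinates expressed in the camera basis.
        double lx = vx * rx + vz * rz;            // right component
        double ly = vx * ux + vy * uy + vz * uz;  // up component
        double lz = vx * fx + vy * fy + vz * fz;  // forward component

        double azimuth = Math.atan2(lx, lz);
        double elevation = Math.atan2(ly, Math.sqrt(lx * lx + lz * lz));
        return new double[]{ dist, azimuth, elevation };
    }
}
```

So an object dead ahead gives azimuth 0, one to the camera's right gives +90 degrees, and elevation measures the angle above or below the view plane.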
My abstraction doesn't take into account a pivoted reference point like you need with your cam. I might be wrong here but it seems to me you could:
- calculate the A&E of camDirectionXYZ in relation to camXYZ
- calculate AE of objectXYZ in relation to camXYZ
- subtract the first AE from the 2nd
But then I s*ck at trigonometry so that might be b*llsh*t ;)
I managed to matricize my vector values but I didn't succeed in using the .x .y .z parts in the codebox
I guess my matrices aren't correctly designed.
I missed something totally here.
would you spot my bunny's error...? (why did I write bunny? I absolutely don't know, it sounded nice :)
for your formula, yes.
it sounds right for sure, and making the whole thing depend on the direction-of-view vector should do the trick
The problem is it's expecting the 'jit.matrix 3 float32 1 1' format. XYZ is stored in 3 planes, not 3 rows.
So no need to unpack etc. A 'setcell 0 0 val $1 $2 $3' will do.
and that famous dtr just taught me how it worked!
big thanks.
I totally get it now. I'm just not familiar enough with Jitter, basically.
but I like matrices :)
You're welcome!
Btw, I think you'll only see performance advantages in using matrix operations if you also process big chunks of data stored all together in matrices. In the particular AE case I process single coordinates in jit.gen but in other parts for example all skeleton tracking joints or particle system data gets processed in 1 jit.gen operation. That's where the potential speed gains lie.
In my case, to explain better, I'll have around 200 objects (basically abstractions containing visual & sound part).
I'm instantiating them by my JAVA Core.
In those abstractions, there are some receive objects, corresponding to broadcast busses or specific objects busses.
My cam position & orientation are sent on a broadcast bus to ALL objects and each object has to calculate ae (and d) according to the cam.
the info is sent when:
- the position of an object changes (some objects are moving, naughty objects!)
- the position or orientation of the cam changes
Maybe it means I wouldn't get much benefit in that case... a lot of little matrices being processed.
But I already saw performance improvements with only 32 objects, compared to the Java Core which calculated that for each object in an ugly loop and fired the results to all objects...
I ended with that.
It works better in my case... I mean it fits better, because I had to create matrices, then send them into jit.gen, then unpack them at the output, then use the values. A bit ugly, in my case, I repeat...
BTW, I'm worried about performance...
of course, distance & orientation calculations using trig take a while
maybe even a pure JAVA Core using pre-calculated trig could fit...
I don't know, and I worry :-/
How about 1 [expr] and 1 [vexpr]?
You can even do it in 1 expr if you like.
totally right and better.
just writing that to say that in my case, maybe, it would be more appropriate to do it outside of the jit.gen/matrices world.
maybe, I wrote...
all my objects are abstractions.
I mean
Each object, each abstraction is instantiated with some parameters at load time (and when I'm composing in this system).
it means I have to keep track of all the particular objects' parameters somewhere.
Maybe, indeed, it would be better & more efficient to have ALL parameters outside of those abstractions and to calculate ALL (a,e,d) outside, then use the results by firing messages to each abstraction (where my poly~ are, and also my system to activate/deactivate the OB3D depending on distance, to save a bit of CPU)
What do you think about that?
Sounds to me like you'd gain in both data structure legibility and performance by storing data externally. For example, in the AED case we've been discussing, if you keep all your objects' position data in one matrix, you can calculate all AED's with one operation (jit.gen, jit.expr,...). And since you need to calculate D first anyway, you can choose to skip the AE's of objects further away than a set threshold.
Also, if your data matrices become really big and the operations complex, you can potentially have your GPU do the math (jit.gl.pix, jit.gl.slab) to offload your CPU.
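dtr's distance-first filtering could look like this outside Max (hypothetical names; positions stored as flat x,y,z triplets, the way they would sit in a 3-plane matrix; azimuth here is the simple world-space atan2, without camera orientation, to keep the sketch short):

```java
// One pass over all objects: compute every distance, but only do the
// trig (azimuth) for objects within 'range'. Out-of-range objects get NaN.
public class BatchAed {
    public static double[][] process(double[] pos, double camX, double camY,
                                     double camZ, double range) {
        int n = pos.length / 3;
        double[][] out = new double[n][2]; // {distance, azimuth} per object
        for (int i = 0; i < n; i++) {
            double dx = pos[3 * i]     - camX;
            double dy = pos[3 * i + 1] - camY;
            double dz = pos[3 * i + 2] - camZ;
            double d = Math.sqrt(dx * dx + dy * dy + dz * dz);
            out[i][0] = d;
            // Skip the trig entirely for objects out of sound range.
            out[i][1] = (d <= range) ? Math.atan2(dx, dz) : Double.NaN;
        }
        return out;
    }
}
```

The point is the shape of the loop: distance is cheap and computed for everyone, the expensive angle only for the survivors, which is exactly the threshold trick described above.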
Hi dtr,
thanks for your precious leads (I just posted this: https://cycling74.com/forums/strategies-for-azimutelevationdistance-calculations-with-a-lot-of-objects in order to keep things clearer for occasional readers; but let's continue here if you prefer)
I got your point.
the gpu/cpu stuff is the way I need to continue now.
If I really got you, I can put all my positions in a matrix
That matrix would have to be fed by my Java core, which is the interface with my GUI (no perf problem here, I'm using the GUI only while composing, not at performance time at all).
That Java core also feeds all my abstractions.
So ok, I have that 'HUGE' matrix (I don't even know how I would organize all my triplets in that matrix, but I'll see that)
That huge matrix has to be processed to calculate all my distances.
BUT, my main question is: each abstraction has to know a, e, d at all times...
In my small head, it would mean:
- I have 2 matrices, one with ALLLLL positions, the other with the cam position & orientation.
- I trigger the calculation when the cam changes, and when one of the object positions changes (some move by themselves)
- I get a new matrix as a result; I split that matrix and send a bunch of messages to my abstractions.
does it make sense ?
what I save by pushing the calculations to the matrices/GPU side, wouldn't I lose it again through that messaging step...?
(just to add some info: doing only the distance calculations + firing messages from the JAVA Core, for only 64 objects, gives an fps of... 19, compared to 30 fps when I do that in the abstractions themselves using expr.)
An alternative to sending messages is having the abstractions read from the big matrix themselves. You 'bang' your abstraction when you want them to process and it looks up its little chunk of data in the matrix (getcell x y). All the abstractions will contain a link to your (named) matrix but there's only one actual matrix held in memory. You'd have to test but I guess that's more efficient than the message route.
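The "one named matrix, many readers" idea can be sketched outside Max too (hypothetical Java names; a single shared store that each object indexes into by its global ID, instead of receiving copies by message):

```java
// One shared store, analogous to a single named jit.matrix that many
// abstractions read from with 'getcell'.
public class SharedPositions {
    private final double[] data; // x,y,z per object, stored flat

    public SharedPositions(int numObjects) {
        data = new double[numObjects * 3];
    }

    // Equivalent of 'setcell id val x y z' - the central writer updates a slot.
    public void setCell(int id, double x, double y, double z) {
        data[3 * id] = x;
        data[3 * id + 1] = y;
        data[3 * id + 2] = z;
    }

    // Equivalent of 'getcell id' - each reader grabs only its own chunk.
    public double[] getCell(int id) {
        return new double[]{ data[3 * id], data[3 * id + 1], data[3 * id + 2] };
    }
}
```

Only one copy of the data lives in memory; the "bang" just tells each reader when to look, which is the efficiency argument being made here.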
ok.
if I understood correctly:
- I put all my object positions in a huge matrix (3 planes for x, y, z, I guess, each cell being one object)
- each time I need to calculate (basically when a distance changes, which means: cam movement AND object self-movement), I fire the calculation. But it would mean everything gets processed every time the cam is moving.
SO
I'll feed the matrix with my JAVA Core.
I'll message all my abstractions with just a bang when the cam changes (moving objects will bang themselves)
I'll try this RIGHT NOW :)
I'm using a global ID for each object, referenced in my JAVA Core and in my abstractions (passed as a parameter); within one session the ID is relevant & constant... it will be the cell index in my matrix
thanks again , posting after some tests.
ok, I have that (not so huge) matrix with all positions
indeed, I can access it from within all abstractions.
when my cam is moving, a bang is sent to... ALL abstractions, and they grab their data from the global huge matrix.
should I process that with jit.gen too ..?
I mean, the split is already made, each abstraction has to calculate only its data..
Where is your AED calculation going on? Centrally, before the abstractions get banged, or in the abstraction itself?
in the abstraction right now, and I just realized that's stupid of me :-/
I guess, as you clearly explained, the HUGE benefit is to process everything at the same time.
But that calculation will be triggered every time a change in my camera occurs (I could reduce the amount of calculation, maybe using some kind of snapshot object or whatever)
I guess I should have:
- objects positions matrix
- cam current position matrix
- resulting aed matrix
it's the aed matrix that would be queried inside the abstractions at any calculation change...
no?
Yes indeed :)
If necessary you can add more data (more planes) to your objects matrix, for example 'changed' or 'in range' flags to select which ones get processed in a jit.gen operation. (Or keep that in a separate matrix, whatever you prefer.)
if I got it, the benefits of all these matrix manipulations are both helping the cpu by making the gpu work a bit more AND saving memory by globalizing data.
ok about plane.
I got it too.
in my case, indeed, it would be nice to centralize the "in range" too.
until today, it was calculated inside each abstraction too.
the only thing that frightens me a bit: I'll bang all abstractions at ANY change of the cam (orientation or movement)
but indeed... all my calculations will be done in one go by that sharpened katana named jit.gen...
(now trying to basically grab a triplet (3 planes indeed) in one message to a jit.matrix..)
ok.
things are a bit better.
but not enough :-(
maybe I could reduce the calculation amount... I mean, reduce the calculation frequency...
I don't know
I'm at that point where I'm a bit desperate :D
not really.
It's just: I'm not making any sounds or any tricky visual stuff... and I have an fps around 22.
hardcore time
> if I got it, the benefits of all these matrix manipulations are both helping the cpu by making the gpu work a bit more AND saving memory by globalizing data.
GPU is only involved with shader processing (jit.gl.slab or jit.gl.pix).
jit.gen and jit.pix run on the cpu but the gain is that one matrix op on a large matrix is more efficient than a ton of single expr calculations.
To me personally, the central storage in named matrices also is a benefit, but others might find other ways better.
About bangs, did you consider running everything off 1 master clock (qmetro)? In my system everything is triggered in sequence by the master qmetro: processing of skeleton data, sensor inputs, geometry generation, openGL rendering, etc. Runs steadily at 30 fps, though I'm looking to optimize. Got a couple of big bottlenecks here and there.
The only use of a master clock here is currently to inform my sound objects related to sequencing, letting them know which is the current step.
I don't quite get you here about qmetro.
How could it help?
In my case, I currently can't see any other bottlenecks except the number of elements to display and that aed stuff...
Grrr..
> I don't quite get you here about qmetro.
> How could it help?
To synchronize all processing with the rendering framerate: all calculations occur only when a new frame is to be produced, and there is a clear sequence of events. With complex chains of stuff banging stuff there might be superfluous, framerate-eating stuff going on.
Of course, one might choose to decouple core processing from rendering. What's more handy depends on the application. It's a choice between event or framerate driven approach.
Attached is a screenshot of my top level patch with the master clock and its distribution outlined in blue.
@julien
qmetro is used to not clog up the scheduler for expensive calculations. This is mostly used for 3D rendering and video processing.
It's still not clear to me what your setup looks like. I only saw dtr's patch above and yours as an image. I did notice some things with the image though. I think you might be confusing cell coordinates and planes in your use of the jitter matrix. If you want a matrix that holds vectors of size 3, use a 3-plane matrix. For example, [jit.matrix 3 float32 1 1] will hold 1 vec3. [jit.matrix 1 float32 1 3] could be looked at as a single vec3 but the convention in jitter is to understand it as a scalar matrix with 3 values.
hi wesley,
I'm okay now with jit.matrix and plane(s) in order to trigger calculations.
Imagine I have 30 abstractions with a jitter gridshape or the like inside (the fact that the Java instantiates them isn't important here because it happens only once)
I have now 3 matrices:
- one for all object positions (fed at init time; some cells change when my moving objects move)
- one for the cam position (basically a triplet x, y, z for now; later with orientation stuff)
- the last one for the azimuth/elevation/distance of each object relative to the cam
Each abstraction needs to know its aed in order to process it and react (based on distance, basically deactivate the gl object inside the abstraction concerned).
Now I'm firing the calculation with the global qmetro on the left, instead of doing it only IF a position change (object OR cam) occurs.
I'm also using the qmetro trigger inside each abstraction to grab the relevant cell in the resulting AED matrix... in order to process it in the abstraction, as said before.
Does it make sense ?
Where could I improve things ?
Actually, performance sucks and, as I wrote, I haven't yet done any object design, only colored spheres without lighting :p
to add some info:
if I remove the triggering of the calculation without removing the triggering of the getcell in each abstraction (which doesn't make sense, except for testing), I get to 29 fps
if I remove the getcell AND the calculation, I'm around 31 fps.
with both (which is the required process), I'm around 22, and if I move... OMG... 17, 18 fps.
I see a jit.print (1st image) and a numbox (2nd image). Having those updating every frame is another of these classic framerate eaters. What's it like when you remove all that kind of stuff (also jit.pwindows if any)?
Btw, your qmetro is set at 30fps so it will never go much higher.
Btw2, what's your computer's specs like?
I see quite a few things that will improve performance. There are probably more, but I can only see the parts of the patch in the image. Here are my suggestions:
- don't use @precision fixed. Your inputs are float32, so you're incurring a cost converting to and from fixed point. Also, the math you're doing very much depends on floating-point-level precision. Fixed-point math is inherently inaccurate, and for these kinds of calculations it will be very inaccurate.
- send the camera position as a parameter, not a matrix. use [param campos] instead of an input
- your GenExpr can be simplified to:
out = length(in2-in1);
- don't use @precision fixed.
whoops, my bad...
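Wesley's point about fixed-point inaccuracy can be illustrated outside of gen: fixed precision stores values as scaled integers, so nearby coordinates quantize to the same number (a sketch with an assumed 8-bit fractional scale; Jitter's exact fixed format may differ, but the principle is the same):

```java
// Illustrates why fixed point hurts this kind of math: values are
// snapped to a grid, so two close positions can collapse to the
// same number and their distance becomes exactly zero.
public class FixedVsFloat {
    static final int SCALE = 256; // assumed 8 fractional bits

    // Round-trip a value through a fixed-point representation.
    public static double toFixedAndBack(double v) {
        return Math.round(v * SCALE) / (double) SCALE;
    }

    public static double distance(double ax, double bx) {
        return Math.abs(bx - ax);
    }
}
```

Two positions 0.0008 apart in float land on the same fixed-point grid cell, which is exactly the kind of error that wrecks azimuth/elevation math near small angles.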
dtr,
ok for jit.print & numbox... I just wanted to log things, and as Heisenberg said, if you observe, you disturb :p
about qmetro: how stupid I am... of course. Putting 60 Hz helps!
but should I trigger the calculation at that rate too? I guess so.
computer: MacBook Pro i7 2.2 GHz, 8 GB RAM, AMD Radeon HD 6750M with 1 GB VRAM.
OS: OSX 10.7.4
wesley,
ok, got it about why to use @precision float32.
ok, I'll work on [param]... I guess I can use a vector (I mean a packed list) to send that...
ok about the GenExpr. I'll have to add the azimuth/elevation (trig) calculation...
testing it right now
MAXI thanks to both of you. it is sincerely precious to have you around in my adventure :)
Catching up with this thread late, but I'm happy to help optimize the Java code.
Something maybe helpful to know re: hashMap:
Iteration over collection views requires time proportional to the "capacity" of the HashMap instance (the number of buckets) plus its size (the number of key-value mappings). Thus, it's very important not to set the initial capacity too high (or the load factor too low) if iteration performance is important. (http://www.xyzws.com/Javafaq/linkedhashmap-vs-hashmap/77)
Also, your hashing function may affect the performance of the hashmap because of collisions. Have you implemented hashCode and equals?
I've done DSP in Java and found it to be only slightly slower than C (~1.5-2x), so I'm guessing there's some room for improvement in performance.
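To make those two points concrete (sizing the map for iteration, and overriding hashCode/equals on a custom key), a small sketch with made-up names:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class ObjectRegistry {
    // A custom key class must override BOTH hashCode and equals;
    // otherwise lookups fall back to identity and logically-equal
    // keys will miss.
    static final class ObjectId {
        final int zone, index;
        ObjectId(int zone, int index) { this.zone = zone; this.index = index; }

        @Override public boolean equals(Object o) {
            if (!(o instanceof ObjectId)) return false;
            ObjectId other = (ObjectId) o;
            return zone == other.zone && index == other.index;
        }
        @Override public int hashCode() { return Objects.hash(zone, index); }
    }

    // Size the map for the expected ~200 objects so iterating it doesn't
    // walk a sea of empty buckets (capacity is roughly size / loadFactor).
    final Map<ObjectId, double[]> positions = new HashMap<>(256, 0.75f);
}
```

Without the two overrides, `positions.containsKey(new ObjectId(1, 2))` would return false even right after putting that same logical key, since each `new` instance would hash differently.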
lol @ heisenberg :D
dtr, I'm kidding a bit in order to make my hard work more insanely fun :D
Hey Peter, nice to see you here :)
actually, my JAVA Core is only used for:
- instantiating all abstractions and keeping references to them in a hashmap
- storing/retrieving all data to an xml preset file (not finished, unfortunately)
I wouldn't consider using the pattern calculation + message firing from the JAVA Core.
Especially because I can easily get it now with the matrices stuff outside of both JAVA & abstractions
I didn't implement hashCode & equals (I don't really know what they are...)
btw1: doing everything in java would be a dream. not because I'm in love with java, but because of centralization
btw2: all dynamic calculations (I mean, the ones that aren't important for the JAVA Core), i.e. the current calculations made in matrices/gen, seem to be... very efficient, and they aren't a pain to handle. It's like another abstraction made especially for that task
(btw3: fighting with the param stuff in the jit.gen... )
What's the problem with param? Parameter smoothing?
No... just how to bind it inside the genexpr codebox...
Big storm here. I'm only connected on my mobile for a couple of minutes, and I still don't have Max installed on my Android phone :-/
Have a look at this example for how to do params with codeboxes:
thanks Wesley.
it is totally clear & intuitive.
our storm passed...
now re-diving into that, and I have to say that thanks to this thread alone, performance has improved terrifically!
so yes. it is better!
now:
- a 60 Hz qmetro triggering each abstraction to grab its portion of the resulting (AED) matrix
- the calculation (still) triggered by camera position changes, but sent to the jit.gen with precision float32 and using a parameter.
performance is quite stable while moving the cam, around... 25 fps, no more, but no less.
I'm still not deactivating/activating OB3D according to distance because I have a problem in my distance calculation
I attached my current proto.
OoooPs and sorry
my patch was wrong; it's now ok. I mean, that wasn't the correct screenshot (I removed the numbox, jit.print, etc.)
and performances go down :-(
still 16 fps.
I can't track down the bottleneck; I'm not doing anything other than what's explained here.
Maybe it's about time to post your patch so we can poke in it? ;)
dtr, I probably missed something at that toooo late hour of the night.
it works fine :)
I double-checked this morning and it seems to be ok
I just popped a video onto youtube (to prove to myself: "YES, at one moment, it worked!!!!")
http://www.youtube.com/watch?v=UB4oEgKOmtk is the video
my only question about those calculations now is: is the qmetro triggering both the position matrix AND the result matrix in each abstraction the most efficient way?
- object positions can change, indeed... but only for moving objects (maybe separate them into 2 categories... I mean 2 matrices?)
- the cam position changes all the time
i don't see a problem in that.
ok dtr.
maybe, I should begin to add hardcore things into my objects
progressively.. as I'm doing for each part of that system.
I also have to finish the ae part of my aed calculation.
maybe that part should be done elsewhere, in order to calculate AE only for the currently activated objects (those within a particular range)
I was thinking this could be done in the same jit.gen as distance but using that value to select whether AE is calculated or not for the given cell. But I see there's only a switch function in gen, no gate. Not sure how this could be done then.
Wes, any hints?
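In plain code, the missing "gate" is just a branch, while a gen-style switch amounts to computing both values and selecting one. A hedged sketch of the difference (names are mine, not gen operators):

```java
// switch vs gate, outside of gen:
// - select() mimics a switch: both inputs already exist, a flag picks one.
// - compute() is a true gate: the expensive trig is skipped entirely
//   when the object is out of range.
public class GatedAe {
    // Switch-like selection: returns a if flag == 1, b if flag == 0.
    public static double select(double flag, double a, double b) {
        return flag * a + (1 - flag) * b;
    }

    // Gate-like: only objects within range pay for the atan2.
    public static double compute(double d, double range, double dx, double dz) {
        if (d > range) return 0.0; // skip the trig entirely
        return Math.atan2(dx, dz);
    }
}
```

In a per-cell matrix operation you typically only get the switch form (everything is computed, then selected), which is why a distance threshold saves less than it looks like it should; a real skip requires branching per object.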
yes dtr.
BUT, my range will be different depending on object types.
It means, maybe, I should split the matrices into sub-matrices etc. etc.
ugly, and dangerous from a performance point of view. I mean, there wouldn't be much improvement compared to my matrices being split, the tasks involved, etc...
I will probably use the same range for each object.
(firing one more question about trig :p)
hello maxers,
here is what I made, finally.
it calculates azimuth/elevation/distance for a matrix of object positions, relative to the cam position & orientation.
I get a curious behaviour when the cam is facing straight up or down, but I guess it's the projection I'm making with vec 0. 1. 0.
I'll probably have to improve it later.
but it works fine & does its job.
just wanted to share it with you.
Here is the jit.gen code:
congrats :)
about the strange behavior up and down, see the paragraph 'coordinate singularity' at: https://en.wikipedia.org/wiki/Mathematical_singularity
dtr, I got the point.
implementation will be hardcore, I guess, because I'd have to change my whole algorithm to use... quaternions.
I'm not sure about what I'm writing here but, if I need more degrees of freedom (I read a bit about the gimbal lock concept), I need more "variables"
argh..
yeah i've recently been toying with quat rotations as well...
you know what?
I'll probably end up using only the azimuth value...
I wanted to use ambipanning~, but I'm not satisfied.
a basic % of the sound, calculated according to azimuth + distance, will fit my needs.
often, we walk a path because we need to see what happens, then we walk back, or, from another point of view, we just rejoin the previous path a bit further on :D
about quats, I'd have to use them.
Robert showed me how to retrieve quat values by sending getquat to jit.anim.node
As usual, the main job will be to know what I can do with them :D
a little update about that, and a video:
http://julienbayle.net/2012/06/18/testing-doppler-effect-attenuation
doppler effect is easily tweakable.
I'll probably keep some object-specific parameters. Indeed, I need some objects to have a more constant frequency.
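For reference, the textbook Doppler formula this kind of effect is built on, for a moving source and a static listener (a sketch of the physics, not the actual patch's implementation; names are mine):

```java
// Classic Doppler shift for a moving source and a static listener.
// radialVelocity > 0 means the source moves toward the listener
// (perceived pitch goes up); negative means it recedes (pitch drops).
public class Doppler {
    static final double SPEED_OF_SOUND = 343.0; // m/s in air at ~20 C

    public static double shiftedFrequency(double freq, double radialVelocity) {
        return freq * SPEED_OF_SOUND / (SPEED_OF_SOUND - radialVelocity);
    }
}
```

The "easily tweakable" part is exactly this ratio: scaling the radial velocity (or substituting a fake speed of sound) exaggerates or softens the effect per object, which lines up with keeping object-specific parameters for the ones that need a steadier frequency.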