A core calculating, a lot of objects (=abstractions) living... strategies ?

    Jun 06 2012 | 9:28 am
    I have a Java core. It instantiates pre-made abstractions and stores references and all their info at init time (basically when I'm creating, removing, or modifying objects, and when it has to store/load presets).
    The bunch of objects almost lives on its own: each object is able to trigger its own events (sound triggering, etc.).
    I have a question. All those objects have parameters. The Java core, in order NOT to be the bottleneck, shouldn't be the thing calculating, for instance, the distances between the camera and all my ob3D objects as soon as a movement occurs.
    Because each object needs to know the distance between the camera and itself, in order to autonomously trigger an event (here, a music note), how would you do that?
    1/ Distributed calculation. The basic idea would be to put a distance calculator in each object.
    2/ Centralized calculation + massive propagation. The second idea is to trigger the calculation inside the Java core once (but VERY often, as soon as a movement occurs), and to propagate ALL distances to each object at calculation time.
    Indeed, in the first case there is no need to propagate: the Java core only needs to know the initial positions (some objects are moving, but their new positions only matter for the sound calculations).
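    A minimal sketch of the two strategies, just to make the contrast concrete (all class and method names here are hypothetical, not from the actual project):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: each object can compute its own squared distance
// (strategy 1), or the core can compute all of them in one pass (strategy 2).
class Ob3D {
    double x, y, z;
    Ob3D(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }

    // Strategy 1: distributed, each object calculates for itself.
    double distSqTo(double cx, double cy, double cz) {
        double dx = x - cx, dy = y - cy, dz = z - cz;
        return dx * dx + dy * dy + dz * dz;
    }
}

class Core {
    List<Ob3D> objects = new ArrayList<>();

    // Strategy 2: centralized, the core computes every distance on a
    // camera move and propagates the results to the objects.
    double[] onCameraMove(double cx, double cy, double cz) {
        double[] distSq = new double[objects.size()];
        for (int i = 0; i < objects.size(); i++) {
            distSq[i] = objects.get(i).distSqTo(cx, cy, cz);
        }
        return distSq; // each result would be pushed to its object here
    }
}
```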
    What would you advise, gurus?

    • Jun 06 2012 | 4:11 pm
      How are you doing distance? I'm assuming Euclidean, and in 3D? The reason I ask is that that's potentially a lot of sqrts. If you're using some threshold to trigger events, I'd express the threshold in terms of distance² rather than computing distance = sqrt, e.g.
      boolean inRange = (x-a)*(x-a) + (y-b)*(y-b) + (z-c)*(z-c) < thresholdSquared; // 3 multiplies and 3 adds
      instead of
      boolean inRange = Math.sqrt((x-a)*(x-a) + (y-b)*(y-b) + (z-c)*(z-c)) < threshold; // ack.
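      A self-contained version of that comparison (illustrative names, not from the thread), showing the squared form agrees with the sqrt form for non-negative thresholds:

```java
// Illustrative check that comparing squared distances against a squared
// threshold is equivalent to the sqrt-based test, without the sqrt cost.
class RangeCheck {
    // 3 multiplies and 3 adds, no sqrt.
    static boolean inRangeSquared(double x, double y, double z,
                                  double a, double b, double c,
                                  double thresholdSq) {
        double dx = x - a, dy = y - b, dz = z - c;
        return dx * dx + dy * dy + dz * dz < thresholdSq;
    }

    // The slower, sqrt-based equivalent.
    static boolean inRangeSqrt(double x, double y, double z,
                               double a, double b, double c,
                               double threshold) {
        double dx = x - a, dy = y - b, dz = z - c;
        return Math.sqrt(dx * dx + dy * dy + dz * dz) < threshold;
    }
}
```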
      Can you describe a little bit more what your system looks like? Is the threshold for triggering events always the same, or is it different for different objects?
      My suspicion is that it's going to be faster to just go ahead and do the calculation on the objects in your system, provided there's not an insane number of them (100,000...), as the branching instructions (testing if in range) could end up being just as slow (since the squared distance is only three multiplies and three adds). If you did it with centralized calculation you could use one of the linear algebra libraries out there; they're pretty damn fast these days.
      If you end up populating a bigger universe, maybe you have some function that takes into account the maximum velocity of your camera and gathers all the possible in-range objects for the next n frames, though this gets nastier when camera and objects are moving.
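      That gathering idea could look something like this (a hypothetical sketch; the parameter names and the flat position arrays are assumptions, not from the thread): any object that could possibly come into range within the next n frames, given a maximum camera speed, is kept as a candidate.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical conservative pre-filter: keep any object that *could* enter
// trigger range during the next `frames` frames, assuming the camera moves
// at most `maxSpeed` units per second.
class Culling {
    static List<double[]> candidates(List<double[]> positions, // each {x, y, z}
                                     double cx, double cy, double cz,
                                     double range, double maxSpeed,
                                     double dt, int frames) {
        double reach = range + maxSpeed * dt * frames; // worst-case approach
        double reachSq = reach * reach;                // compare squared, no sqrt
        List<double[]> out = new ArrayList<>();
        for (double[] p : positions) {
            double dx = p[0] - cx, dy = p[1] - cy, dz = p[2] - cz;
            if (dx * dx + dy * dy + dz * dz <= reachSq) out.add(p);
        }
        return out;
    }
}
```

      As noted, this gets nastier when the objects move too; their maximum speed would have to be added to `reach` as well.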
      Have you done some profiling? In either of your cases, there's going to be function calls between the core and the object, and that's probably where you could have a bottleneck, but I'm also a bit rusty as to what optimizations the compiler can make. How is it running now?
    • Jun 06 2012 | 8:04 pm
      Hi Peter, thanks for your answers & questions too :)
      About the distance calculation: I'm not using square roots but indeed distance². I didn't optimize that part much; in my Digital Collision iOS app (built with openFrameworks, which is C++) I used Taylor series with bitwise operators, and it works VERY fast and is totally OK.
      In the case of myUniverse, I'm afraid that wouldn't be precise enough for big numbers and distances. So yes, distance² is the way to go, I totally agree... and what you're saying reassures me :)
      To describe it a bit more: each type of object does almost the same job. The interface with the global system is the same (send/receive using the simplest possible messaging system, with broadcast buses, etc.).
      Some objects move on their own, which means their distances have to be recalculated every time, even if the cam doesn't move. Some objects no longer move, which means I can fire the calculation for them only when the cam moves.
      The number of objects wouldn't be insane! Counting my sound-emitting objects plus my sound modifiers and visual modifiers, I'd have at the VERY maximum 200, no more. I don't know for sure, because I'll be programming/composing inside that universe and I don't know how far I'll go, but say, if I go insane, 300 maximum.
      The trick about the maximum speed is a bit "dangerous", I guess, especially regarding side effects.
      Actually, I haven't profiled yet. I don't have enough objects, and I'm currently designing a GUI to create/move/modify my objects in that universe. Quite soon I'll have a way to place all the objects on a huge map, and then I can run some tests.
      Another important point: some objects are roughly spherical, others more like very long lines. For the somewhat cubic or parallelepiped ones, I approximate them by a sphere; as long as the ratio of the smallest dimension to the greatest is down to about 1/3, I'm okay with a sphere. So it means I have two cases, directly defined by my objects, and each object is, by design, in one of the two.
      So in one case the distance is Euclidean (point to point); in the other, I have to calculate a segment-to-point distance. This can be done with this kind of optimization, I guess: http://www.softsurfer.com/Archive/algorithm_0102/ , plus avoiding sqrt and all the trig stuff (using lookup tables or whatever).
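      For reference, a standard sqrt-free point-to-segment squared distance, along the lines of the softsurfer algorithm linked above (names are illustrative):

```java
// Squared distance from point P to segment AB, with no sqrt and no trig.
// Clamps the projection of P onto AB to the segment's endpoints.
class SegDist {
    static double pointSegmentDistSq(double px, double py, double pz,
                                     double ax, double ay, double az,
                                     double bx, double by, double bz) {
        double vx = bx - ax, vy = by - ay, vz = bz - az; // segment direction v
        double wx = px - ax, wy = py - ay, wz = pz - az; // A -> P
        double c1 = vx * wx + vy * wy + vz * wz;          // w . v
        if (c1 <= 0) return wx * wx + wy * wy + wz * wz;  // closest point is A
        double c2 = vx * vx + vy * vy + vz * vz;          // |v|^2
        if (c2 <= c1) {                                   // closest point is B
            double ux = px - bx, uy = py - by, uz = pz - bz;
            return ux * ux + uy * uy + uz * uz;
        }
        double t = c1 / c2;                               // projection onto AB
        double qx = ax + t * vx, qy = ay + t * vy, qz = az + t * vz;
        double dx = px - qx, dy = py - qy, dz = pz - qz;
        return dx * dx + dy * dy + dz * dz;
    }
}
```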
      I'll go for calculating in my Java core and propagating the results to the objects:
      - a cam movement triggers a calculation + propagation to ALL objects
      - an object's own movement triggers a calculation for that object only (a cheap optimization, but still)
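      That update policy can be sketched like this (hypothetical names; positions and results are flattened into arrays purely for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical update policy: a camera move refreshes every object,
// an object's own move refreshes only that object.
class DistanceCore {
    final List<double[]> objects = new ArrayList<>(); // each {x, y, z, distSq}
    double camX, camY, camZ;

    void refresh(int i) {
        double[] o = objects.get(i);
        double dx = o[0] - camX, dy = o[1] - camY, dz = o[2] - camZ;
        o[3] = dx * dx + dy * dy + dz * dz; // the "propagated" result
    }

    void onCameraMove(double x, double y, double z) {
        camX = x; camY = y; camZ = z;
        for (int i = 0; i < objects.size(); i++) refresh(i); // ALL objects
    }

    void onObjectMove(int i, double x, double y, double z) {
        double[] o = objects.get(i);
        o[0] = x; o[1] = y; o[2] = z;
        refresh(i); // only this object
    }
}
```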
      What do you think about this?
      In case of hardcore behaviors, I could also go for a grid optimization. Each object belongs to a grid cell, and I know the sound-emitting range of each object, so I'd trigger the distance calculation ONLY for the cells adjacent to the cam (or one cell further out, etc.). The problem is that the objects' dimensions would ideally all be the same, but by cheating a bit this could prune the huge calculation tree quite a lot.
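      A uniform-grid sketch of that pruning (entirely hypothetical: the cell size, the packed hash key, and integer ids are assumptions for illustration):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical uniform-grid pruning: objects are bucketed by cell, and only
// the camera's cell plus its 26 neighbours are checked.
class Grid {
    final double cellSize;
    final Map<Long, List<Integer>> cells = new HashMap<>();
    Grid(double cellSize) { this.cellSize = cellSize; }

    // Pack three 21-bit cell coordinates into one long key.
    static long key(long gx, long gy, long gz) {
        return ((gx & 0x1FFFFF) << 42) | ((gy & 0x1FFFFF) << 21) | (gz & 0x1FFFFF);
    }

    void add(int id, double x, double y, double z) {
        long gx = (long) Math.floor(x / cellSize);
        long gy = (long) Math.floor(y / cellSize);
        long gz = (long) Math.floor(z / cellSize);
        cells.computeIfAbsent(key(gx, gy, gz), k -> new ArrayList<>()).add(id);
    }

    // Ids in the camera's cell and all adjacent cells.
    List<Integer> near(double x, double y, double z) {
        long gx = (long) Math.floor(x / cellSize);
        long gy = (long) Math.floor(y / cellSize);
        long gz = (long) Math.floor(z / cellSize);
        List<Integer> out = new ArrayList<>();
        for (long dx = -1; dx <= 1; dx++)
            for (long dy = -1; dy <= 1; dy++)
                for (long dz = -1; dz <= 1; dz++) {
                    List<Integer> c = cells.get(key(gx + dx, gy + dy, gz + dz));
                    if (c != null) out.addAll(c);
                }
        return out;
    }
}
```

      The cell size would have to be at least the largest sound-emitting range for the adjacency check to be safe, which is exactly the "same dimensions" cheat mentioned above.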
      As you wrote, "how is it running?" is the main question. There will be no optimizing just for the pleasure of optimizing here, at least not on this project :)
    • Jun 06 2012 | 10:22 pm
      I'd definitely recommend unit testing. Figure out your worst-case scenario (everything moving) and see how it does with 200 objects. Build a test unit that's just moving things around; if you can handle your worst case, you're in business. You also don't have to update these objects 500 times a second, just often enough to keep up with the framerate... at 30 fps that's ~33 ms of time resolution, which is very reasonable sound-wise.
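      A minimal harness for that worst-case test might look like this (a sketch, assuming randomized positions and a camera at the origin; nothing here is measured or from the thread):

```java
import java.util.Random;

// Hypothetical worst-case micro-benchmark: n objects, one full squared-
// distance recompute per frame, timed with System.nanoTime().
class WorstCase {
    static long timeOneFrame(int n) {
        double[][] pos = new double[n][3];
        Random r = new Random(42);
        for (double[] p : pos) {
            p[0] = r.nextDouble(); p[1] = r.nextDouble(); p[2] = r.nextDouble();
        }
        long t0 = System.nanoTime();
        double sum = 0;
        for (double[] p : pos) {
            double dx = p[0], dy = p[1], dz = p[2]; // camera at origin
            sum += dx * dx + dy * dy + dz * dz;
        }
        long elapsed = System.nanoTime() - t0;
        if (sum < 0) throw new IllegalStateException(); // keep sum live for the JIT
        return elapsed;
    }
}
```

      Comparing the elapsed time per frame against the ~33 ms frame budget would show immediately whether 200-300 objects is anywhere near a problem.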
      I'd say just solve it in the most readable, logical fashion, then profile and see where you are. Also, I don't know if you've checked out the Java 3D libraries, but someone may already have come up with a smart implementation of a lot of these things, which could save a bunch of time.
      If anything, I expect it's going to be the signal processing that could get intensive without a good muting scheme, especially if you have doppler shifts. (though this isn't a problem if you're synthesizing the sound...)
    • Jun 06 2012 | 10:38 pm
      About the update. Actually, I trigger distance calculations like this:
      - the cam moves: a calculation is triggered for ALL objects, and the results are propagated to all of them from the Java core
      - an object moves: a calculation is triggered for that object only, requested by it from the Java core, which sends back the result
      Indeed, I could "limit" the time resolution inside the Java core (using a timer thread, I guess) but, as you wrote, I'll only go there if there's a problem.
      The first tests will be quite important to see where I am.
      About the Java 3D libraries, I haven't really checked them out (yet). The lightweight stuff I made seems solid, precisely because it is quite light.
      About the global storage, I didn't explain what I did, finally. I'm using a HashMap to store the top-level class, called myObject. That one contains the MaxBox reference (to the abstraction) and around 20 variables (= properties, in my case). Maybe I should instead create/declare attributes on the MaxBox objects themselves... but my stuff works very well right now, especially in terms of readability and logic.
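      Roughly, the storage described above could be sketched like this (hypothetical field and key names; the real myObject class surely differs):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical version of the storage scheme: a HashMap keyed by object
// name, each entry holding the MaxBox reference plus its ~20 properties.
class MyObject {
    final String maxBoxRef;                             // reference to the abstraction
    final Map<String, Object> props = new HashMap<>();  // the ~20 properties
    MyObject(String maxBoxRef) { this.maxBoxRef = maxBoxRef; }
}

class Storage {
    final Map<String, MyObject> objects = new HashMap<>();
    void add(String name, MyObject o) { objects.put(name, o); }
    MyObject get(String name) { return objects.get(name); }
}
```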
      Then there's the only part I haven't touched yet: the sound. Intuitively, using an external, powerful engine like SuperCollider seems the safest option. No need to create and protect threads in Max 6; everything comes protected by design, I mean, by separating the binaries :p
      About the muting scheme: indeed, it will come directly from two things: the distance between cam & objects AND the range. The main rule is: if I'm outside the limit of an object's range, I don't hear it. There will be an envelope to make the range non-linear, but the one constraint is that the furthest point of the range has a volume of exactly ZERO.
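      A minimal gain curve satisfying that constraint might look like this (a sketch; the power-law envelope shape is an assumption, any curve that hits zero at the range edge would do):

```java
// Hypothetical gain curve: volume is exactly zero at (and beyond) the edge
// of the object's range; the exponent shapes the non-linear envelope.
class Gain {
    static double volume(double dist, double range, double shape) {
        if (dist >= range) return 0.0;     // outside the range: muted
        double v = 1.0 - dist / range;     // linear falloff, 1 at cam -> 0 at edge
        return Math.pow(v, shape);         // shape > 1 = steeper near the edge
    }
}
```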
      Doppler shifts will HAVE to occur. AFAIK, they come naturally if I use delays correlated with the distance between cam/objects AND the relative speed. I haven't studied that part yet, and yes, this will only be applied to synthesized sounds. Do you have any leads about Doppler shifts I could follow in these cases?
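      The delay-from-distance part is simple to state (a sketch of the standard propagation-delay relation, not of any particular implementation): reading a delay line with a delay proportional to distance produces the Doppler shift "for free" as the distance changes over time.

```java
// Propagation delay from distance: delay = distance / speed of sound.
// Feeding this as a (smoothly interpolated) delay-line read position
// yields a Doppler shift whenever the distance changes.
class Doppler {
    static final double SPEED_OF_SOUND = 343.0; // m/s, at ~20 degrees C

    static double delaySeconds(double distanceMeters) {
        return distanceMeters / SPEED_OF_SOUND;
    }

    static double delaySamples(double distanceMeters, double sampleRate) {
        return delaySeconds(distanceMeters) * sampleRate;
    }
}
```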
    • Jun 06 2012 | 10:59 pm
      (I created a specific thread/post on the forum about doppler stuff : https://cycling74.com/forums/doppler-effect-some-leads-needed )
    • Jun 07 2012 | 11:04 am
      To be honest, I'm not very much into max-java, so this might be a stupid idea. But couldn't you put all the object positions into a matrix and then calculate the distances with the gen object, resulting directly in a volume (0. - 1.)? That seems very fast to me. And for big objects you could use some kind of offset in a 5th plane (distance - radius of object = new distance).
      Redistributing the values could happen via a forward object, using the cell coordinates as indexes.