Data Structures
Hi Max Community,
I have a general question regarding Max 5's capacity to instantiate dynamic data structures. For example, I'd like to associate numerous (+/- 100) addressable sensor nodes with virtual nodes in a data structure within Max. Each of the physical sensor nodes would communicate with the others via transceivers, and their relations to each other would change over time: connecting, modifying connection weights, and disconnecting. I'd like the data structure in Max to update its virtual nodes accordingly.
I'm a beginner Java person and I know that I can instantiate such a structure in Java as a graph of some sort. I also understand that I could embed this data structure within Max, since it contains a JVM. Ideally, however, I'd like to skip this step and just manifest the nodes in Max 5 directly, and have them relate dynamically as a function of the state of the physical sensor nodes. Is this feasible without having to author a bunch of new Max objects?
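For reference, here's the kind of bare-bones weighted graph I have in mind (a minimal Java sketch; the class and method names are just illustrative, and inside Max something like this could live in an mxj external):

```java
import java.util.HashMap;
import java.util.Map;

// A minimal weighted-graph sketch: node IDs map to their neighbors,
// and each neighbor entry carries the current connection weight.
public class SensorGraph {
    // adjacency: nodeId -> (neighborId -> weight)
    private final Map<Integer, Map<Integer, Double>> adjacency = new HashMap<>();

    public void connect(int a, int b, double weight) {
        adjacency.computeIfAbsent(a, k -> new HashMap<>()).put(b, weight);
        adjacency.computeIfAbsent(b, k -> new HashMap<>()).put(a, weight);
    }

    public void disconnect(int a, int b) {
        if (adjacency.containsKey(a)) adjacency.get(a).remove(b);
        if (adjacency.containsKey(b)) adjacency.get(b).remove(a);
    }

    public static void main(String[] args) {
        SensorGraph g = new SensorGraph();
        g.connect(5, 87, 0.75);   // nodes 5 and 87 now linked, weight 0.75
        g.connect(5, 87, 0.20);   // re-connecting just updates the weight
        g.disconnect(5, 87);
    }
}
```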
Furthermore, can Jitter be used to associate pixels to these virtual nodes such that a graphic can be made to change as a function of the nodal dynamics?
I'm pretty new to Max 5, but its potential is obvious and I'm enjoying the tutorials greatly; I just want to make sure I'm going down the right path. Any suggestions would be very welcome.
Thanks! 8)
Lots of ways to display the nodes: lcd and OpenGL are two of them. I'd use OpenGL with jit.gl.sketch (a rough code sketch of the message sequence follows below):
Straightforward mappings (depending on how the data is sent in):
node addresses --> positions of small circles (or spheres if 3D)
node connections --> a line from (A,B) to (C,D) (connecting two nodes)
connection weights --> linewidth setting before a connection is drawn, or maybe changing the alpha from faint to opaque, or both
a bit trickier:
node colors/sizes --> how many connections a node has (unless you've already determined this beforehand)
other information...?
Nodes can be drawn in 3D pretty much as easily as in 2D, so if you want to represent sensors that aren't in a plane, no problem. You'll want to send a rotatexyz message to either the jit.gl.sketch or the jit.gl.render (the overall scene). Also use a jit.gl.handle so you can easily rotate, move, and scale the scene with the mouse and control keys. Each node can be a small circle, sphere, or other primitive shape. For this many nodes you might use wireframe mode (poly_mode 1 1) with a low shapeslice count (like 16), which will render a lot faster than solids. (You can put cool textures on the shapes too, but that will eat even more power...)
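To make the mappings above concrete, here's a rough sketch of the kind of per-frame message sequence you'd end up sending to jit.gl.sketch. It's plain Java that just prints the messages (in a patch they'd come from message boxes or an mxj external's outlet), and the positions and weight are made-up numbers:

```java
import java.util.Locale;

// Prints one frame's worth of jit.gl.sketch commands: a circle per node,
// then a connection whose line width and alpha both follow the weight.
public class SketchCommands {
    public static void main(String[] args) {
        float[][] pos = { {-0.5f, 0.2f, 0f}, {0.4f, -0.3f, 0f} };  // two made-up nodes
        float weight  = 0.75f;                                     // their connection weight

        System.out.println("reset");                  // clear sketch's command list
        for (float[] p : pos) {                       // one small circle per node
            System.out.printf(Locale.US, "moveto %.2f %.2f %.2f%n", p[0], p[1], p[2]);
            System.out.println("glcolor 1. 1. 1. 1.");
            System.out.println("circle 0.05");
        }
        // connection: line width and alpha both follow the weight
        System.out.printf(Locale.US, "gllinewidth %.2f%n", 1f + 4f * weight);
        System.out.printf(Locale.US, "glcolor 0.5 0.8 1. %.2f%n", weight);
        System.out.printf(Locale.US, "linesegment %.2f %.2f %.2f %.2f %.2f %.2f%n",
                pos[0][0], pos[0][1], pos[0][2], pos[1][0], pos[1][1], pos[1][2]);
    }
}
```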
The main question is: how are you doing the processing of the nodes/connections/other information? Is it already in a usable format? That would make things easier on the Max side, since Max would then essentially be a display without additional processing. If you can send packets of data that contain:
node number // node location // number of connections // index of nodes it's connected to
for each node, things will be straightforward. If the node locations don't change, it's even easier: you'd have a coll (see the coll object) of XYZ coordinates for each node's index. Then when you want to draw a connection between #5 and #87, you'd create your line command from the coordinates of each. Simple.
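A minimal sketch of that flow in plain Java, assuming the packet layout proposed above (the coords map plays the role of the coll, and the printed linesegment message is the jit.gl.sketch command):

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

// Parses packets of the form: id x y z connectionCount neighbor1 neighbor2 ...
// (just the layout sketched in this thread, not a standard) and emits a
// linesegment command for each connection whose endpoints are both known.
public class PacketParser {
    // node id -> XYZ; this map plays the role of the coll in the patch
    static Map<Integer, float[]> coords = new HashMap<>();

    static void handlePacket(String packet) {
        String[] t = packet.trim().split("\\s+");
        int id = Integer.parseInt(t[0]);
        float[] xyz = { Float.parseFloat(t[1]), Float.parseFloat(t[2]),
                        Float.parseFloat(t[3]) };
        coords.put(id, xyz);
        int nConnections = Integer.parseInt(t[4]);
        for (int i = 0; i < nConnections; i++) {
            float[] b = coords.get(Integer.parseInt(t[5 + i]));
            if (b != null)  // both endpoints known: emit the draw command
                System.out.printf(Locale.US,
                        "linesegment %.2f %.2f %.2f %.2f %.2f %.2f%n",
                        xyz[0], xyz[1], xyz[2], b[0], b[1], b[2]);
        }
    }

    public static void main(String[] args) {
        handlePacket("5  -0.5 0.2 0.0  0");      // node 5, no connections yet
        handlePacket("87  0.4 -0.3 0.0  1  5");  // node 87, connected to node 5
    }
}
```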
This is not to say that it won't take some time getting used to the basics if you're new to Max. Try making a small version with (say) 5 nodes, to get used to drawing and the render commands. If you use a coll for the positions and are able to access the information for your commands, you're getting there.
Using blend_enable for jit.gl.sketch is highly recommended. Your nodes and lines can be as transparent as you want, which 1) allows you to see better in tangled networks and 2) looks cool as hell. Doing things like "pulsing" the lines and nodes when events happen is also possible: have on and off states represented by colors and switch when appropriate with the glcolor command. Simply changing the alpha would also work great for this. Maybe you want user-settable colors, line weights, and node sizes? No problem... you can set anything you want.
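The pulsing can be as simple as switching the alpha of the glcolor command between an on and an off state, e.g. (toy sketch, arbitrary values):

```java
import java.util.Locale;

// Event "pulsing": the same glcolor command with the alpha switched
// between an on and an off state.
public class Pulse {
    public static void main(String[] args) {
        boolean eventActive = true;  // e.g. flipped when a node fires
        float alpha = eventActive ? 1.0f : 0.15f;
        System.out.printf(Locale.US, "glcolor 1. 0.4 0.2 %.2f%n", alpha);
    }
}
```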
The jit.gl.sketch help file will be your best friend, probably 90% of your drawing-related questions are answered there. The rest is getting the data in and creating/using/updating your coll.
I can envision what this *could* potentially look like, and it's completely awesome!
BTW what are you using for sensors and their communication? ZigBee?
Best of luck and come back if you get stumped. We can help!
--CJ
The easiest way I can think of to represent the positions of all of these sensors would be to set up a blank jit.matrix, scale the position information to match the jit.matrix size, and then turn on the pixels corresponding to those positions. In other words, if your sensors report their position as an x,y pair where x and y range from 0 to 100, and your jit.matrix is 800x800, just multiply the x,y positions by 8 to scale them appropriately. This will give you a very simple, 2-dimensional, ugly starting point for visualizing the virtual sensors.
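That scaling step in miniature (a toy Java sketch; the setcell message it prints is what you'd send to the jit.matrix, and the clamp just keeps a reading of 100 from landing one cell past the edge):

```java
// Map sensor positions in 0..100 onto an 800x800 matrix: multiply by
// matrixDim / range (= 8 here), then clamp to the last valid cell index.
public class ScaleToMatrix {
    public static void main(String[] args) {
        float sensorX = 37.5f, sensorY = 62.0f;   // made-up reading, 0..100
        int matrixDim = 800, range = 100;
        int cellX = Math.min(Math.round(sensorX * matrixDim / range), matrixDim - 1);
        int cellY = Math.min(Math.round(sensorY * matrixDim / range), matrixDim - 1);
        // in the patch this becomes a message to the jit.matrix:
        System.out.printf("setcell %d %d val 255%n", cellX, cellY);  // setcell 300 496 val 255
    }
}
```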
Another thing to think about is how often you are getting position information about the sensors. This will define how you refresh your display. If you get the position of every sensor at the same time, then you can just clear out the jit.matrix and repopulate it with pixels each time. However, if you're getting independent streams of information from each sensor, then you'll have to manually turn pixels on and off, which is much more involved.
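Here's the "everything at once" refresh in miniature: a clear message followed by one setcell per sensor (a toy sketch with made-up positions; in a patch these messages would go straight to the jit.matrix):

```java
import java.util.HashMap;
import java.util.Map;

// Full-refresh strategy: wipe the matrix, then set one cell per sensor.
public class RefreshDisplay {
    public static void main(String[] args) {
        Map<Integer, int[]> cells = new HashMap<>();  // node id -> cell x,y
        cells.put(5,  new int[]{300, 496});
        cells.put(87, new int[]{120,  40});

        System.out.println("clear");   // wipe every cell to 0
        for (int[] c : cells.values())
            System.out.printf("setcell %d %d val 255%n", c[0], c[1]);
    }
}
```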
Once you've mastered the pixel trick, you can fairly easily modify your patch to use the OpenGL methods described in the post above.
Good luck
Wow, that is a lot of very useful info. In addition to the "how to" suggestions, I also really liked the alpha value suggestion. I'm a big fan of using transparency in my other work to express depth and the idea of superimposition.
I've looked at ZigBee, but I think I might start a little more from scratch. For this first iteration each physical node (I like the idea of starting with 5, btw) will be fixed in position and hardwired to a digitizer (or an Arduino to start). Each node will transmit in the audio range via a voltage-controlled oscillator to the other nodes. They will then have a rough receiver and a phase comparator to discern which nodes are transmitting in phase at any given time. I'm hoping that MSP, with a little logic and some spectrum analysis, can take the nodes that register a phasic match and sort through them to find which nodes are in fact in phase with each other. These in-phase relations are then translated into weighted connections between the virtual nodes, which then update the graphics in Jitter, as you've aptly commented on. Why I'm interested in phasic relations is a longer story that I'll post on a website soon.
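Roughly, the mapping from phase match to connection weight that I have in mind looks something like this (just a sketch of the idea in Java, since that's what I know; the actual phase values would come from the MSP analysis):

```java
// One illustrative way to turn "how closely in phase are nodes A and B"
// into a connection weight: cosine of the phase difference, clamped to 0.
public class PhaseWeight {
    static double weight(double phaseA, double phaseB) {
        double w = Math.cos(phaseA - phaseB);  // 1 = in phase, -1 = opposed
        return Math.max(0.0, w);               // no connection past quadrature
    }

    public static void main(String[] args) {
        System.out.println(weight(0.10, 0.15));    // nearly in phase -> ~0.999
        System.out.println(weight(0.0, Math.PI));  // opposed -> 0.0
    }
}
```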
Thanks, CJ, for your input and feel free to comment further on the above info. I'm making a simple animation to clarify this project. I'll post it when done.
Cheers!
JB
Swieser1,
I will definitely use your scaling suggestion. Just yesterday I was wondering how I'd map a small number of nodes to a larger number of pixels. I also like your idea of updating the whole display on every iteration. I think it will make the patch much cleaner and easier to debug if I have to track down an error.
Feel free to comment on my reply to CJ if something strikes you.
Thanks!
JB
Depending on how complex your dataset is going to eventually become, I would strongly recommend learning to use a jitter matrix as a data storage container. This will allow you to use the optimized operations on your data and easily pass that information along to a visualization module.
One technique I use often is to store x, y, z, and weight values in different planes. Each cell of the matrix can then represent one node. With a little conditioning, you can pass this matrix to one of the OpenGL objects as a geometry matrix.
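A bare-bones sketch of that layout, assuming the JitterMatrix class from Max's Java (mxj) API (the names and the 100x1 size are just illustrative):

```java
import com.cycling74.jitter.JitterMatrix;

// Plane-per-parameter storage: one 100x1 float32 matrix, 4 planes
// (x, y, z, weight), one cell per node. Needs Max's Java libs on the
// classpath, i.e. this would live inside an mxj external.
public class NodeMatrix {
    public static final int NUM_NODES = 100;  // +/- 100 nodes, per the thread

    JitterMatrix nodes = new JitterMatrix(4, "float32", NUM_NODES, 1);

    void setNode(int id, float x, float y, float z, float weight) {
        // one cell holds all four values for this node
        nodes.setcell2d(id, 0, new float[]{ x, y, z, weight });
    }
    // With a little conditioning (e.g. splitting off the first three
    // planes), this matrix can drive an OpenGL object such as
    // jit.gl.mesh as a geometry matrix, as described above.
}
```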
Lots of fun stuff you can do. Try giving the Jitter tutorials and online recipes a look for ideas. Also, the Noisy Matrix article on this site contains some techniques for storing parametric data.
Once you come up with something more concrete, it will be easier to give you more specific advice.
Best,
Andrew B.
Sounds very interesting using the phasing of audio to determine the weightings. Looking forward to reading more about the ideas behind the project, and seeing how the structure evolves! The parallels with neural nets are really intriguing too.
Been tinkering around with this and have some ideas, will post them soon. Take a look at some of the examples in the Jitter-examples folder, if you haven't already. The ones dealing with jit.gl.mesh could be particularly applicable: using matrices to control where the mesh points are in space. While it won't handle the connections between nodes as-is, the appearance and controls it provides should whet your appetite for the possibilities and what it could eventually look like. Plus, using generated matrices to change the point positions is a lot easier than using colls, so I take back my earlier suggestion: andrewb's note to use matrices for storing data is much better. They manage all the indexing for you, you can visualize them easily, and there are lots of pre-made calculations you can run on them without iterating through the data sets manually.
Am going to tinker around more tomorrow in between "real work". (I just wish this WAS my real work!) I'll post an example illustrating the drawing functions, as I have some pre-made elements which could save you time or give ideas. Can't wait to see what such a network will look like...!
--CJ
This is fantastic stuff everyone, thank you. I look forward to the day when I can be as helpful to others. I'll keep you all posted on my progress and look forward to your examples, CJ...I kind of had a feeling matrices would be in my future. I suppose I'll have to take that discrete math class after all 8)
Well, I have my work cut out for me.
Cheers,
Josh