Dictionaries or Buffers for large sets of data at high write/read frequency?
Hello there!
I am working on a project where I receive a large set of data through OSC messages: sensor data + custom depth camera data + custom computed values that add up to thousands of values.
I have set up a Node.js script through which I receive, compute and organise the data before storing it in a Dict every 16ms.
Now, it might sound like a silly question, but I am wondering if I should have approached the process differently, like storing the information into multichannel buffers. I am only now questioning how efficient the write and read process is with Max dictionaries through Node.js compared to using buffers. Though I would probably still need some kind of OSC routing/v8 scripts to store the data in the correct buffer, would using buffers be 'faster'?
Does any of you have a suggestion, or have you found yourself in a similar situation?
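For reference, a minimal sketch of the dict-style approach described above (the OSC address names are hypothetical): each incoming message updates one key in a plain object, and that object is what would get pushed into the Max Dict every 16ms from the node.script side.

```javascript
// Sketch of the dict-style storage described above (address names are
// hypothetical). Each incoming OSC message updates one key; the whole
// object is what would be written to the Max Dict every 16ms.
const frame = {};

function onOscMessage(address, values) {
  // e.g. address = "/sensor/1/accel", values = [x, y, z]
  frame[address] = values;
}

// Simulate a burst of messages for one 16ms frame
onOscMessage("/sensor/1/accel", [0.1, 0.2, 0.3]);
onOscMessage("/sensor/1/gyro", [1.0, 2.0, 3.0]);

console.log(Object.keys(frame).length); // 2 entries this frame
```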
Ciao!
The first thing to determine is whether the amount you are storing is enough to matter. How much are you writing every 16ms? If it's not much, then that frequency is probably low enough that you don't need to worry anyway. I would encourage you to do some tests.
If you are writing enough to matter, writing raw numbers into buffers can be much faster, depending on how it is done. In the Max SDK (the underlying C code) the buffer writing functions allow objects to lock a buffer once and then do direct sequential memory writes into the buffer for some block of memory before unlocking the buffer. I use this in Scheme for Max to allow writing large chunks of numbers from a Scheme vector into a Max buffer. This is definitely faster than writing to a dictionary, and is also much faster than writing to a Max array, as those are actually arrays of Max atoms as opposed to blocks of numbers (i.e., C arrays). But again, whether the speed difference actually matters for your use case is the most important question.
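A rough JS analogue of the block-write idea (this is not the Max SDK, just an illustration): copying a whole chunk into a preallocated flat numeric array with one `set()` call is the typed-array equivalent of locking a buffer and writing a block sequentially, versus updating a dictionary one key at a time.

```javascript
// Illustration of block writes into a flat numeric array (a rough JS
// analogue of locking a buffer~ and writing a block; not the Max SDK).
const buf = new Float32Array(4096);   // preallocated "buffer"
const chunk = Float32Array.from({ length: 1000 }, (_, i) => i * 0.5);

// One block copy: the whole chunk lands at offset 0 in a single call.
buf.set(chunk, 0);

console.log(buf[999]); // 499.5
```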
If you really need speed and are ok with putting some work in for it, doing so in a C or C++ external so you can write in blocks to buffers is probably the way to go. You could look at the Scheme for Max source code for the vector->buffer functions if you wanted examples.
Thanks Duncan for your response!
Yeah, I think I still need to understand, as you pointed out, whether the speed actually matters. Unfortunately I am not super familiar with either C or C++, so that could definitely be an option to try, but probably only after clearly understanding the needs.
I'll check out the Scheme for Max source code in any case (amazing work btw!)
Thanks again!
It sounds like what you want to store is just numerical data. Although buffer~ is designed for audio data, you can really just think of it as an array of floats, and MSP does not need to be on for peek~ to work, so I would opt to just stash my data that way. In this example patch, it appears that you can easily stash a couple thousand numbers in a buffer~ in less than a millisecond.
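For a rough sense of scale outside Max, one can time writing a couple thousand floats into a preallocated typed array in Node. This only stands in for the buffer~/peek~ example patch; the actual numbers inside Max will differ, but the order of magnitude is similar.

```javascript
// Time writing a couple thousand floats into a preallocated typed array.
// Stands in for the buffer~ write in the example patch; real timings
// inside Max will differ.
const buf = new Float32Array(2000);

const t0 = process.hrtime.bigint();
for (let i = 0; i < buf.length; i++) buf[i] = Math.random();
const t1 = process.hrtime.bigint();

console.log(`wrote ${buf.length} floats in ${Number(t1 - t0) / 1e6} ms`);
```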
Hello Christopher,
Thanks a lot for your reply and example patch! It is really useful to check the time difference!!!
Yeah, that was my thought: that I could easily store many, many float values inside a buffer in a matter of fractions of a millisecond.
The issue that I have is that I receive messages from different OSC address patterns (one sensor sends on 5 addresses and we have from 3 to 9 sensors connected + other data coming from a set of depth cameras).
Thus, I could store each of these streams in a separate buffer, but keeping in mind that some of the addresses send information related to different features of the sensors, that would result in a big headache when you need to access specific features of the data.
I still kind of ended up using dictionaries for the moment, since I haven't yet understood when and where speed matters, as Duncan suggested. Probably I could opt for buffers for some of the data, though, trying to overcome the need for clarity by creating a good logic for storing variable lengths and good sample-index documentation.
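One way to keep that clarity problem manageable (addresses and sizes below are hypothetical) is a single layout table mapping each OSC address to an offset and length inside one flat buffer; the map itself then doubles as the sample-index documentation for reading features back out.

```javascript
// Hypothetical layout map: each OSC address gets a fixed offset and
// length inside one flat buffer. The map itself doubles as the
// "sample-index documentation" for reading features back out.
const layout = {
  "/sensor/1/accel": { offset: 0, length: 3 },
  "/sensor/1/gyro":  { offset: 3, length: 3 },
  "/depth/centroid": { offset: 6, length: 2 },
};
const buf = new Float32Array(8); // total of all lengths above

function store(address, values) {
  const slot = layout[address];
  if (!slot || values.length !== slot.length) return; // unknown or bad size
  buf.set(values, slot.offset);
}

function read(address) {
  const slot = layout[address];
  return buf.subarray(slot.offset, slot.offset + slot.length);
}

store("/sensor/1/gyro", [1, 2, 3]);
console.log(Array.from(read("/sensor/1/gyro"))); // [ 1, 2, 3 ]
```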
Thanks a lot for your support!