Dictionaries or buffers for a large set of data at a high frequency of writes and reads?
Hello there!
I am working on a project where I am receiving a large set of data through OSC messages: sensor data + custom depth camera data + custom computed values that add up to thousands of values.
I have been setting up a Nodejs script through which I receive, compute, and organise the data, which is then stored in a Dict every 16 ms.
Now, it might sound like a silly question, but I am wondering if I should have approached the process differently, for example by storing the information into multichannel buffers. I am only now questioning how efficient the write and read process is with Max dictionaries through Nodejs compared to using buffers. Though I would probably still need to work with some kind of OSC routing/v8 scripts to store the data in the correct buffer, would using buffers be 'faster'?
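To make the comparison concrete, here is a minimal sketch (the names and frame size are assumptions, not the poster's actual code) of the same frame of incoming values stored two ways: as a dict-style plain object keyed by name, and as a preallocated Float32Array indexed by slot, which is closer to what a multichannel buffer layout would look like on the Node side.

```javascript
// Assumed frame size -- "thousands of values" per 16 ms frame.
const NUM_VALUES = 1000;

// Dict-style storage: flexible named keys, but every write is a
// property store on a plain object.
const frameDict = {};
function writeDict(values) {
  for (let i = 0; i < values.length; i++) {
    frameDict["sensor_" + i] = values[i]; // hypothetical key naming
  }
}

// Buffer-style storage: a fixed layout decided up front, so every
// write is a plain indexed store into a typed array.
const frameBuf = new Float32Array(NUM_VALUES);
function writeBuf(values) {
  for (let i = 0; i < values.length; i++) {
    frameBuf[i] = values[i];
  }
}

// Simulated incoming frame of values.
const incoming = Array.from({ length: NUM_VALUES }, (_, i) => i * 0.5);
writeDict(incoming);
writeBuf(incoming);
```

The trade-off is that the typed-array version requires agreeing on a fixed index layout in advance (which slot holds which sensor), whereas the dict keeps the self-describing key names.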
Does any of you have any suggestions, or have you found yourself in a similar situation?
Ciao!
The first thing to determine is whether the amount you are storing is enough to matter. How much are you writing every 16ms? If it's not much, then that frequency is probably low enough that you don't need to worry anyway. I would encourage you to do some tests.
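A quick way to run such a test is a micro-benchmark in Node itself. This is a rough sketch (the frame size and key names are assumptions; substitute your real counts): if one frame's worth of writes comes out well under 16 ms for both variants, the storage choice likely doesn't matter for throughput.

```javascript
// Assumed numbers: 2000 values per 16 ms frame, 1000 frames simulated.
const FRAME_SIZE = 2000;
const FRAMES = 1000;

const values = new Array(FRAME_SIZE).fill(0).map((_, i) => i + 0.25);

// Time dict-style writes (plain object with string keys).
let t0 = process.hrtime.bigint();
const dict = {};
for (let f = 0; f < FRAMES; f++) {
  for (let i = 0; i < FRAME_SIZE; i++) dict["v" + i] = values[i];
}
const dictMs = Number(process.hrtime.bigint() - t0) / 1e6;

// Time typed-array writes (indexed stores into a preallocated array).
t0 = process.hrtime.bigint();
const buf = new Float64Array(FRAME_SIZE);
for (let f = 0; f < FRAMES; f++) {
  for (let i = 0; i < FRAME_SIZE; i++) buf[i] = values[i];
}
const bufMs = Number(process.hrtime.bigint() - t0) / 1e6;

console.log(`dict writes: ${(dictMs / FRAMES).toFixed(4)} ms/frame`);
console.log(`typed-array writes: ${(bufMs / FRAMES).toFixed(4)} ms/frame`);
```

Note this only measures the Node-side write cost; moving the data into an actual Max dict or buffer adds its own overhead on top, so it gives a lower bound rather than the full picture.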
If you are writing enough to matter, writing raw numbers into buffers can be much faster, depending on how it is done. In the Max SDK (the underlying C code), the buffer writing functions allow objects to lock a buffer once and then do direct sequential memory writes into the buffer for some block of memory before unlocking the buffer. I use this in Scheme for Max to let one write large chunks of numbers from a Scheme vector into a Max buffer. This is definitely faster than writing to a dictionary, and is also much faster than writing to a Max array, as those are actually arrays of Max atoms as opposed to blocks of numbers (i.e., C arrays). But again, whether the speed difference actually matters for your use case is the most important question.
If you really need speed and are ok with putting some work in for it, doing so in a C or C++ external so you can write in blocks to buffers is probably the way to go. You could look at the Scheme for Max source code for the vector->buffer functions if you want examples.
Thanks Duncan for your response!
Yeah, I think I still need to understand, as you pointed out, whether the speed actually matters. Unfortunately I am not super familiar with either C or C++, so it could definitely be an option to try, but probably only after clearly understanding the needs.
I'll check out the Scheme for Max source code in any case (amazing work btw!)
Thanks again!