Thread safety/blocking when node scripts update max dicts?


    Feb 17 2019 | 9:29 pm
    Hi, I'm wondering if there are any online resources, or if anyone knows, what the threading issues are with regard to node processes reading and updating dicts in Max. As in:
    - will the node process block Max execution until the dict is updated?
    - does the dict get updated as an atomic operation as far as Max is concerned?
    - any resources on this stuff that someone can point me at?
    thanks!

    • Feb 18 2019 | 11:02 pm
      Hi Iain,
      Continuing from your other question: node.script sends the dictionary as serialized JSON, which on the Max side is parsed and turned back into a dictionary. That dictionary is then either output or set by id (via setDict), or used to update just the values at the supplied path for the given dictionary id (via updateDict). All of this is done in a threadsafe way using mutexes on the Max side, and there is no blocking across processes: the operation is asynchronous, Max is never blocked, and the dictionary is only updated (or output, if you're not updating an existing dictionary) when the values are swapped in after the incoming dictionary message has been parsed.
      Please note that while this is threadsafe and will not crash, race conditions are still possible if you're updating the same dictionary with multiple keys at once from multiple places. If you want some kind of transactional system, you will need to program that logic yourself: signal to the other clients that you are beginning a transaction and that they should stop all modifications to the shared data store, wait until you hear from them that they have stopped, update the values you desire, then let the other clients know they can modify as before (with each of them honoring the same protocol), or use some functionally equivalent transactional approach.
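      One part of this you can handle entirely inside your own node script is serializing your own read-modify-write cycles, so two overlapping getDict/setDict round trips in the same process can't interleave. Here is a minimal promise-chain mutex as a sketch (the maxApi calls in the usage comment are from Node For Max; the dict name "myDict" is just illustrative, and note this does NOT coordinate with other Max clients touching the same dict; for that you still need the signaling protocol described above):

```javascript
// A minimal promise-chain mutex: each call to withLock(fn) runs fn only
// after every previously queued fn has finished, so read-modify-write
// cycles in this process never interleave.
const withLock = (() => {
	let chain = Promise.resolve();
	return (fn) => {
		const run = chain.then(fn);
		// Keep the chain alive even if fn rejects.
		chain = run.catch(() => {});
		return run;
	};
})();

// Usage inside a node.script (maxApi calls shown for context):
// const maxApi = require("max-api");
// await withLock(async () => {
//   const d = await maxApi.getDict("myDict");
//   d.count = (d.count || 0) + 1;
//   await maxApi.setDict("myDict", d);
// });
```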
      Hope this helps! Joshua
    • Feb 18 2019 | 11:17 pm
      Also, in case it's helpful, if you look at the source code inside /Applications/Max.app/Contents/Resources/C74/packages/Node\ For\ Max/source/lib/exposed/max-api.js
      you can see that the way updateDict() works is that it requests the dictionary from Max (a copy, serialized over the socket), changes the value, and then sends a whole new dictionary back. So if you are updating lots of values in your code, you might want to do the same kind of thing, but set all the values before sending the dictionary back. This could be for both efficiency reasons as well as race conditions and asynchrony (which is maybe why you are asking these questions in the first place?).
      const updateDict = async (id, updatePath, updateValue) => {
      	if (!id) throw makeError("Missing name for setDict request. Please make sure you provide a (valid) name.", ERROR_CODES.INVALID_PARAMETERS);
      	if (!updatePath) throw makeError("Missing path value for updateDict request.", ERROR_CODES.INVALID_PARAMETERS);
      
      	const dict = await getDict(id);
      	const newDict = loSet(dict, updatePath, updateValue);
      	await setDict(id, newDict);
      
      	return newDict;
      };
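      Following that suggestion (set all the values locally, then send the dictionary back once), a batched version could look something like this. This is a sketch: batchUpdateDict and setPath are illustrative names, setPath stands in for the lodash-style loSet used by max-api, and the maxApi calls are commented out so the snippet is self-contained outside of Max:

```javascript
// Set a value at a dotted path like "a.b.c" on a plain object, similar to
// what max-api does internally with lodash's set() (loSet).
const setPath = (obj, path, value) => {
	const keys = path.split(".");
	let node = obj;
	for (let i = 0; i < keys.length - 1; i++) {
		if (typeof node[keys[i]] !== "object" || node[keys[i]] === null) {
			node[keys[i]] = {};
		}
		node = node[keys[i]];
	}
	node[keys[keys.length - 1]] = value;
	return obj;
};

// One round trip instead of one per key: fetch the dict once, apply every
// update locally, then send the whole dictionary back once.
const batchUpdateDict = async (id, updates /* { "path.to.key": value } */) => {
	// Inside a node.script you would uncomment the maxApi lines:
	// const maxApi = require("max-api");
	// const dict = await maxApi.getDict(id);
	const dict = {}; // placeholder when running outside Max
	for (const [path, value] of Object.entries(updates)) {
		setPath(dict, path, value);
	}
	// await maxApi.setDict(id, dict);
	return dict;
};
```

      Besides saving round trips over the socket, batching like this narrows the race-condition window to a single swap on the Max side, instead of one per key.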
    • Feb 19 2019 | 2:00 am
      Hi Joshua, thanks for your help, that all sounds very promising. What I want to do is work on data structures in a ClojureScript app running as a node process (ideally), and then get content back into Max without hosing Max playback or ending up with garbled data from partial updates, so it sounds like I'm covered. Am I correct in understanding that Max won't block access to the data structure on the Max side, except for the atomic update when the new deserialized dict from the node process is swapped in for the existing dict? As in, Max takes its internal mutex for updating the dict only after all receiving is done, and just for the swap?
    • Feb 19 2019 | 2:02 am
      Also, on a related note, does Max do the same kind of mutexing when moving data between dicts and colls with the push_to_coll and pull_from_coll messages? I was thinking I probably want actual sequence playback running off colls for speed, with some kind of unit-of-work dirty tracking in place to update the coll from the dicts when data has changed. Does that make sense?
    • Feb 19 2019 | 2:14 am
      Technically a mutex is held for every addition to the new dictionary, but only for the duration of each insertion (which shouldn't block any other threads, since it is a mutex that relates only to the dictionary itself), and then once more for the swap, only for the duration of the swap.
      It's similar for the coll: the coll's mutex is held per insertion, for the duration of that insertion rather than for the whole operation, just as it is each time an element inside a coll is added or modified.
      I think you'll be fine for what you describe, but please let us know any issues you encounter.
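      The unit-of-work dirty tracking Iain describes could be sketched like this on the node side (all names here are illustrative, not part of any API; the actual coll update would still be triggered in the patcher, e.g. via dict's push_to_coll message after the flushed values have been set):

```javascript
// Track which top-level keys of a working data structure have changed,
// then flush only the changed entries as one batch. The idea is to do
// many cheap local edits, and only push a consolidated update to Max
// (and from there to the coll) when the unit of work is done.
class DirtyTracker {
	constructor() {
		this.data = {};
		this.dirty = new Set();
	}
	set(key, value) {
		this.data[key] = value;
		this.dirty.add(key);
	}
	// Returns just the changed entries and clears the dirty set; the
	// caller would setDict once with these, then fire push_to_coll.
	flush() {
		const changed = {};
		for (const k of this.dirty) {
			changed[k] = this.data[k];
		}
		this.dirty.clear();
		return changed;
	}
}
```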