timeout error in long N4M process
Hi,
I'm working on a Max patch that needs to trigger a long-running process in a Node for Max script.
My process ends with a timeout error ("Max API Message Request timed out. id: u39633390420").
Is there a way to fix this? (I suppose this timeout is a protection, but is there a way to override it?)
I've attached the patch.
You'll need to add a long WAV sound file (mine is 6 min long) to the directory and change the filename in the path message.
Regards,
Jérémie
It looks like the solution is to change the reqTimeout parameter to something longer than 3000 in the file /Applications/Max.app/Contents/Resources/C74/packages/Node\ For\ Max/source/lib/api/index.js (line 27).
Is there a way to set this parameter directly from the Node script (to avoid modifying core Node For Max files)?
Hi there,
looking at the patch and code, I think the issue is actually in your application code and not necessarily the timeout itself. Maybe we need to make it clearer that max.post is an asynchronous call, but your for-loop is basically blocking, and all the bi-directional communication is only "verified" as successful once you leave the loop. With that in mind, even a bigger timeout would only be a guess, since the right value depends on the input size / length of the loop.
Instead, you should be able to fix the error by altering your code to await the call to post. The easiest approach here is async / await. I'm adding the adjusted code below to help others without the need to download the patch (if that's problematic, please let me know):
const max = require('max-api');
const essentia = require('essentia.js');
const load = require('audio-loader');

max.addHandler("path", async (pathname, combine, frameSize, hopSize, ratioThreshold, sampleRate, threshold) => {
    load(pathname).then(async function (buffer) {
        await max.post("loaded");
        const channel = buffer.getChannelData(0);
        const inputSignalVector = essentia.arrayToVector(channel);
        await max.post("vectorized");
        const onset = essentia.SuperFluxExtractor(inputSignalVector, combine, frameSize, hopSize, ratioThreshold, sampleRate, threshold);
        await max.post("onsets done");
        const ticks = essentia.vectorToArray(onset.onsets);
        await max.post("toArray");
        for (let c = 0; c < ticks.length; c++) {
            await max.outlet(c, ticks[c]);
        }
    });
    await max.post("loading");
});
Hi Florian,
Thanks a lot for your answer and solution !
If I understand correctly: I thought the problem was coming from the long call to "SuperFluxExtractor", but in fact it was due to the time taken by the communication between Node and Max via the max-api and all the max.outlet() calls. With the await on max.outlet(), all the calls still get made, but the results arrive when they can... without a timeout.
Am I right?
Regards,
Jérémie
Hi Jérémie,
the long call to SuperFluxExtractor shouldn't really be a problem. The timeout does not apply to the callback of a handler, so even if your code in there takes minutes to run, that's fine, and it really should be.
The issue is that your for-loop is flooding the message queue, and the communication basically times out. There are probably ways for us to handle that case: buffer internally within the API, introduce more complex forms of parallel execution and messaging, and thereby mask the potential issue for the API user. However, given that we are dealing with interprocess communication, the understanding that basically everything is asynchronous by nature is important for the application code as well: you cannot rely 100% on messages being output / printed by the node.script object in order unless you use await with your calls to max.post and max.outlet. In your specific case the await ensures that each call within the for-loop completes fully sequentially and the results / messages are sent and received as expected.
The timeout that you tracked down in the internals of the max-api is in place to ensure that the communication between the two processes is stable at this point. The way we ensure that might change in the future, so I'd refer to the user-facing API and its asynchronous nature.
Does this clear things up a bit? Let me know if you have further questions.
Florian
Hi Florian,
I have a similar problem, perhaps you have an idea ...
I am running SBCL (Common Lisp) scripts through Node.js via a spawned child process (see code below).
The scripts run fine — in my code I actually write the (quite heavy) result into temp files — but I also need Node to print stdout and stderr in the Max console somehow, to be able to debug from Max.
Both solutions I tried below (post or outlet) end up producing the same "Request timed out" error beyond a certain length of calculation (around 4 seconds).
I don't know if this is related, but before the timeout the expected lines print in the Max console, yet some of them are aggregated in groups (see screenshot) while they are really printed separately in Lisp.
I'm not sure if your solution involving async / await would help in my case — I'm really not trained on the sync/async topic... I'd really appreciate any guidance on this! Thanks in advance,
Julien
const maxAPI = require("max-api");
const { spawn } = require('child_process');

const sbcl = spawn("/usr/local/bin/sbcl", [
    '--core', "/Users/juvince/Documents/Max 8/Packages/MOZ/sbcl/moz.core",
    '--script', "/private/tmp/8701_tmp-in.lisp"
]);

maxAPI.addHandler('run_sbcl_script', () => {
    sbcl.stdout.on('data', (data) => {
        // maxAPI.post(`stdout: ${data}`);
        maxAPI.outlet("stdout " + data);
    });
    sbcl.stderr.on('data', (data) => {
        // maxAPI.post(`stderr: ${data}`);
        maxAPI.outlet("stderr " + data);
    });
    sbcl.on('close', (code) => {
        // maxAPI.post(`child process exited with code ${code}`);
        maxAPI.outlet("SBCLout");
    });
});

maxAPI.addHandler('pkill', () => {
    sbcl.kill();
    maxAPI.post('pkilled');
});
Hi Julien,
if you check the N4M API docs you'll see that post and outlet are in fact asynchronous functions. So I'd start with the async / await approach, which should in theory solve your issue (let me know if it doesn't).
For your script at hand the changes should be pretty minimal:
const maxAPI = require("max-api");
const { spawn } = require('child_process');

const sbcl = spawn("/usr/local/bin/sbcl", [
    '--core', "/Users/juvince/Documents/Max 8/Packages/MOZ/sbcl/moz.core",
    '--script', "/private/tmp/8701_tmp-in.lisp"
]);

maxAPI.addHandler('run_sbcl_script', () => {
    sbcl.stdout.on('data', async (data) => {
        // await maxAPI.post(`stdout: ${data}`);
        await maxAPI.outlet("stdout " + data);
    });
    sbcl.stderr.on('data', async (data) => {
        // await maxAPI.post(`stderr: ${data}`);
        await maxAPI.outlet("stderr " + data);
    });
    sbcl.on('close', async (code) => {
        // await maxAPI.post(`child process exited with code ${code}`);
        await maxAPI.outlet("SBCLout");
    });
});

maxAPI.addHandler('pkill', async () => {
    sbcl.kill();
    await maxAPI.post('pkilled');
});
Hi Florian,
Thank you for your quick reply.
I copied your solution, but for some reason the timeout now happens even earlier 🙁
I read your remark to Jérémie about outlet and post being asynchronous, but to be honest I don't really understand what that means. All I understand for now is that spawn is also asynchronous
(https://nodejs.org/api/child_process.html#child_process_asynchronous_process_creation)
so I'm not sure whether that makes them incompatible somehow.
If you see another solution that'd be super helpful.
Thanks again !
Julien
Hi Julien,
I have a few ideas about what might be going on and what we could do here, but it gets slightly detailed, and it might be easier for us to handle it with access to the actual code / project.
Would you mind getting in touch with our tech support and providing your patch, JS script, etc.? Please mention this thread in your inquiry so it gets passed on correctly right away.
Thanks
Florian
Hi Florian,
Sure, I understand, will send the last beta of my package MOZ, with some explanations
Thanks !
Julien
Just getting back to this publicly once more in case anyone else stumbles across this in the future looking for help on a similar issue.
Julien's issues arose mostly from the fact that listening to the I/O of the spawned child process does not give line-by-line reads on the data listener, which creates inconsistent prints in the Max console while more or less flooding the IPC messaging.
One could get around this by using readline.createInterface and posting line by line; however, the simplest option in this specific scenario is to use the inherit option for the I/O when spawning the child process. Node For Max already takes care of properly piping stdin / stdout / stderr for every node.script instance, and one can consume and direct the output via the right outlet of the object with the help of a route object. Consult the stdin-out tab of the helpfile and the guide on this topic for further details.
Thanks again for the huge help Florian ! With the inherit option everything works beautifully.
There's one other thing I haven't figured out so far: how to stop a spawned process more efficiently.
Whether I use the "script stop" message (to node.script) or the process.exit() / process.kill() calls (in Node.js itself), the process keeps running and printing for a little while (up to 3-4 seconds) before stopping for good.
I wonder if this has to do with the asynchronous nature of Node For Max plus how spawning works in general.
Because of that, the interaction becomes much less reactive than with the shell object, which lets you pkill a process immediately and restart a new one with literally no latency.
If you have any suggestions on this topic I'm interested !
Hi Julien,
Without looking into all the details, one pointer is for example that process.kill() sends a SIGTERM signal by default (which can be overridden); also, writing to stdout/stderr might be buffered, etc.
Generally, none of that is necessarily specific to or introduced by N4M, and I'd advise hardening and testing your process management straight from the CLI using a local install of Node.js. Once that works as expected, you can connect the script with calls to the API provided by N4M.
Hi Jérémie,
thanks for reaching out on this again, and sorry for the delay in my response. First off, we will be looking into ways to improve the API in order to make the max-api module a bit friendlier when it comes to async processing.
I cannot straightforwardly reproduce the issue you are seeing (is it the same timeout error?), but I took a stab at your script, which might help. A few things I noticed that might relate to the issues you are seeing:
* the reject calls in getContent do not return, so you end up executing code that I assume isn't meant to run
* when downloading big zip files it might be better to stream the download to a temp dir on disk and then unzip from there, instead of loading the entire content into memory by continually storing the buffers in an array
* packages like node-fetch, axios, etc. already provide a set of abstractions on top of the native http module, which might come in handy here (with arraybuffer support for downloads if streams aren't what you want / need)
Please see the attached files and the inline comments. I'd appreciate hearing back if that helped with the behaviour you were seeing or if you still run into specific errors related to the Node For Max IPC.
Thanks, Florian
Hi Florian,
Thanks a lot for your reply, it works! (I'll do some more tests in the next few weeks and let you know if I discover other problems.)
The only thing I had to fix is the "remove" function, which didn't remove directories (I had to add "rmdir(p)" after iterating over all the files in the directory).
Regards,
Jérémie