JS SQLite - lags when writing to a large db
I'm using the SQLite object in a JS script to insert 3-element data rows into an SQLite database file and running into lag issues within Max.
Each data row is relatively simple – a timestamp and 2 data values. I've set up a trigger in the SQLite database that essentially turns the database into a ring buffer, holding the most recent 2.5 million entries.
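A simplified sketch of the setup (schema and names here are illustrative, not my actual ones):

```js
// js object: open the db and set up the ring-buffer trigger
var sqlite = new SQLite;
var result = new SQLResult;
sqlite.open("datalog.db", 1); // second arg: file on disk rather than in memory

sqlite.exec("CREATE TABLE IF NOT EXISTS samples"
    + " (id INTEGER PRIMARY KEY AUTOINCREMENT, ts REAL, v1 REAL, v2 REAL);", result);

// on every insert, delete anything older than the newest 2.5 million rows --
// this is the extra work that runs on each write
sqlite.exec("CREATE TRIGGER IF NOT EXISTS ringbuffer AFTER INSERT ON samples"
    + " BEGIN DELETE FROM samples WHERE id <= NEW.id - 2500000; END;", result);

function insertRow(ts, v1, v2) {
    sqlite.exec("INSERT INTO samples (ts, v1, v2) VALUES ("
        + ts + ", " + v1 + ", " + v2 + ");", result);
}
```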
Entries are inserted every 2 seconds. Problem is, each new insert to the SQLite database locks Max up for around 100ms, which means UI elements in Max such as sliders stutter every 2 seconds as a result.
I realise that everything in a JS object gets executed in the low-priority queue, so tricks like delay 0, pipe 0 or using a Task object won't work. Since Max UI updates are also processed in the low-priority queue, what can be done?
Is there any way to process the SQLite query in a separate thread?
Should I just give up the convenience of JS and write my own SQLite wrapper in Java/C?
I'd say moving things to Java/C (and probably a faster db than SQLite, e.g. MySQL) is your best bet. 2.5 million entries is a lot for SQLite. JavaScript in Max is always single-threaded and there is no way to move things out of the low-priority queue. If you stay inside JS you could try to execute your db queries with a Task object. I believe the callbacks from a Task run asynchronously...
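Untested, but I mean something along these lines (insertRow() stands for whatever function wraps your exec() call):

```js
// sketch: drive the insert from a Task inside the js object
var t = new Task(insertRow, this);
t.interval = 2000; // ms between repeats
t.repeat();        // no argument = repeat indefinitely; t.cancel() stops it
```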
J
maybe a quick and easy way would be to make your sql interface a separate max standalone with a udp or tcp port and that way the standalone will run on a separate thread; send your data entries/queries from max patch to/from the standalone
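e.g. the main patch sends "insert <timestamp> <v1> <v2>" to [udpsend 127.0.0.1 7400], and inside the proxy standalone an [udpreceive 7400] feeds a js object along these lines (port, selector and schema are placeholders – I haven't actually built this):

```js
// dbwriter.js inside the proxy standalone, fed by [udpreceive 7400]
var sqlite = new SQLite;
var result = new SQLResult;
sqlite.open("datalog.db", 1);

// called for each "insert ts v1 v2" message arriving over UDP; the blocking
// db work now happens in the proxy's process, not in the main patch
function insert(ts, v1, v2) {
    sqlite.exec("INSERT INTO samples (ts, v1, v2) VALUES ("
        + ts + ", " + v1 + ", " + v2 + ");", result);
}
```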
Good idea! It may not even have to be a standalone – separate top-level patches run in separate threads, right?
Thanks Jan and Terry, all good suggestions.
My js/sqlite calls are happening in a bpatcher, as required by the design. I'll give the JS Task object another go with the asynchronous callbacks – I was using execute() to fire off single calls as determined by a metro, and that did not eliminate the lag.
The separate SQLite standalone proxy does sound like the easiest approach without requiring too much of a rewrite. The project is released as a standalone application, so having another alongside it should be fine, although I wonder about the system memory consumption of 2 concurrent standalones.
Picking at Jan's suggestion a little further: how might a single standalone launch separate top-level patches? Definitely worth a try.
there would be a lot of unnecessary duplication of resources with two standalones – if it's for some sort of distribution, probably better to go with Jan's suggestion of the Task approach (at least write some test code to see if it would work)
Unfortunately, Task does not work.
I tested both the execute() and repeat(1) methods to trigger the SQLite insert query; with repeat(1), the callbacks were tested at intervals ranging from 10 to 200ms. Both exhibit the same lag. The source of the lag is the database query itself – in particular the ring-buffer trigger that executes upon each db write.
Re: testing the other possibility (top level patchers in separate threads), how can that be done for standalones?
There's also the option of writing my own externals (Java/C) to handle SQLite in a separate thread, but I don't want to go there unless I really have no choice.
re top level patchers in separate threads in standalones, I think they have to be separate standalones, not just separate files within the one standalone...
but...
I've never used these objects (thread.fork, thread.join)-- I don't even know if they would apply to a js object and they're pretty old (2009), but they might be worth checking out:
http://cnmat.berkeley.edu/content/explicit-thread-level-parallelism-event-domain-computation-maxmsp-threadfork-and-threadjoin
thread.fork/join would have been a great possibility; unfortunately, sending a symbol (either by itself or as the first item in a list) to thread.fork immediately crashes Max 7.
Interestingly, sending a list of integers/floats results in only the first element of the list making it to the JS object without crashing. Likewise if the first element is an integer/float.
So close! Thanks for the suggestion though. Until I get a bit more time to grind away and optimise the project, I'll go ahead with a second standalone for now. I'm really hoping to avoid writing an external just for this, but I suppose I don't have any other alternatives now.
Regards
I haven't downloaded those thread.fork/join objects, but will have a look at your patch later.
2 things--
did you try launching Max in 32-bit mode? if not, that might be the cause of the crash (long shot)
if you are on a Mac, maybe download MySQL and build a database via the shell object (which I believe is a Mac-only external; again, I've never used it myself)
[edit: probably not viable if it's to be a distributable standalone]
yup, 32/64-bit didn't matter.
I'm now trying out the MySQL approach via mxj and a JDBC driver. This is in lieu of SQLite, which I have no particular need to stick with now that it's no longer worth pursuing in JS:
http://www.mdhosale.com/mysqljbdc
looks like you've exhausted all possibilities within the sqlite environment – when you've got the system running, maybe post the outcome in this thread; I'd be curious to know if it fixes your problem
Indeed I have, thanks Terry for running through this with me.
After some initial tests, simple db UPDATEs (one per second) using MySQL and mysqljdbc seem lag-free – however this is also the case with JS/SQLite.
The key culprit in my case was the SQLite trigger that handled the logic for the circular buffer. The agonising bit about all this testing with MySQL is that it's given me an idea: scrap the SQLite trigger and implement the circular buffer partially in JS. My 'doh!' moment, right there.
I might end up sticking with JS/SQLite if taking out the trigger improves performance sufficiently for my needs.
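Roughly what I have in mind – a sketch only, assuming id becomes a plain INTEGER PRIMARY KEY so that old rows can be overwritten in place:

```js
// ring-buffer bookkeeping done in JS instead of a trigger
var CAPACITY = 2500000;
var writeIndex = 0; // would need restoring on reload, e.g. from the newest ts

var sqlite = new SQLite;
var result = new SQLResult;
sqlite.open("datalog.db", 1);

function insertRow(ts, v1, v2) {
    // recycle primary keys: once writeIndex wraps, the oldest row is replaced
    sqlite.exec("INSERT OR REPLACE INTO samples (id, ts, v1, v2) VALUES ("
        + writeIndex + ", " + ts + ", " + v1 + ", " + v2 + ");", result);
    writeIndex = (writeIndex + 1) % CAPACITY;
}
```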
But with all that said, since I've gone through the tedium of setting up a working MySQL db and the JDBC driver/mxj, I'll go ahead and test MySQL performance – eventually.
This might become 'plan B' for me.
OK, scrap that 'doh!' moment. After further tests I have given up completely on JS → SQLite.
The good news is, MySQL is handling large-rowcount databases in Max quite well.
By default, the mxj MySQLJDBC object processes requests in the queue thread, so there are observable, proportionally increasing penalties in the form of UI lag when retrieving large datasets (i.e. > 4000 rows). The thread blocks, however, are much, much shorter than with JS/SQLite. Reading off 500 rows at a time, for example, introduced very little lag in the low-priority thread.
Also, while the MySQL/JDBC approach is decidedly faster, everything is still tempered by how the SQL query is written (e.g. avoiding large LIMITs and OFFSETs is a helpful rule of thumb). This holds regardless of SQLite/MySQL, although I'm sure some database experts here can offer more precise commentary.
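To illustrate the rule of thumb with a made-up paging query (same SQLite/SQLResult objects as earlier; the pattern applies to both engines):

```js
// slow: a large OFFSET makes the engine walk and discard all skipped rows
sqlite.exec("SELECT ts, v1, v2 FROM samples ORDER BY id"
    + " LIMIT 500 OFFSET 100000;", result);

// faster: seek directly on the indexed key and page from there
sqlite.exec("SELECT ts, v1, v2 FROM samples WHERE id > 100000"
    + " ORDER BY id LIMIT 500;", result);
```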
Hope this helps someone else along the way.
all grist for the mill; emerging out of the rabbit warren, did you ever consider doing all this in Jitter? storing and retrieving a structure such as a timestamp with two data points from within a matrix would be simple, and setting it up as a circular buffer wouldn't be hard...
Yes, I'm also using a Jitter matrix and buffer~, with the latter dumping out raw files as secondary backups – that has been quite handy. Indeed, the circular buffers for those two objects are a quick thing to do. To keep things light they only hold non-timestamped data for realtime processing (just chars / normalised data), and in turn get populated by database loads, on demand.
I have to incorporate a database for persistent storage so I can make backups outside of Max, and also extract temporal data for future analysis. It's the nature and requirements of the project that shaped this eventual structure.