Handling large csv files with [coll] / hints or alternatives
Hello everyone!
I am working on a project where I need to handle very large CSV files.
A row of data would look like this:
1, 0400.wav -0.13030312276463407 -0.8619091109806594 -0.09349469537032416 -0.27364718172646590;
and the number of entries could be tens of millions.
I am using the [coll] object to handle these CSV files. It works fine up to a point, but above roughly 5 million rows it simply won't read the file.
I am not sure whether the limit is the file size in bytes or the number of listed entries.
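To try to narrow down which limit I'm hitting, I put together a quick Python sketch that reports both the row count and the byte size of a file in this one-entry-per-line format (the file name and values below are made up for the demo; you'd point `file_stats` at the real file):

```python
import os
import tempfile

def file_stats(path):
    """Return (row_count, size_in_bytes) for a text file."""
    size = os.path.getsize(path)
    with open(path) as f:
        rows = sum(1 for _ in f)
    return rows, size

# Demo on a small synthetic file in the same "index, name floats... ;"
# style that [coll] reads; a real test would use the actual CSV path.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    for i in range(1000):
        f.write(f"{i}, {i:04d}.wav -0.130 -0.861 -0.093 -0.273;\n")
    demo_path = f.name

rows, size = file_stats(demo_path)
print(rows, "rows,", size, "bytes")
os.remove(demo_path)
```

Comparing the numbers from a file that loads against one that doesn't might show whether the cutoff tracks the row count or the byte size.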
Has anybody else ever run into this kind of need?
Or do you know of any alternatives to [coll] for such an application?
Thanks a lot