Discussion of article "Handling ZIP Archives in Pure MQL5" - page 10

 


 
Forester #:

I filtered out the (unpacked) files that exceed a certain size (for different files, from 1.7 GB up to 2,136,507,776 bytes, i.e. almost MAX_INT = 2,147,483,647, and an array cannot hold more elements) and that are truncated on output. It turned out that all of them were flagged as erroneous at:

That is, the output value = 0.
But CZip does not check for this, so I added code that zeroes the output array size on error.
Now my functions can determine with a 100% guarantee whether a file was unpacked successfully.
Before that, I checked that the JSON file ended correctly with }\r\n, but that solution is not universal: it seems several files out of ~1000 happened to be cut off at an intermediate line and were accepted as successfully decompressed, even though their data is incomplete.
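The old tail check mentioned in the quote can be sketched in MQL5 roughly like this (a hedged sketch, not the exact code from the post):

```mql5
// Sketch of the old, non-universal completeness check:
// verify that the decompressed JSON buffer ends with "}\r\n".
// As noted above, this can pass even when the file was cut off
// at an intermediate line.
bool EndsWithJsonTail(const uchar &data[])
  {
   int n=ArraySize(data);
   if(n<3)
      return(false);
   return(data[n-3]=='}' && data[n-2]=='\r' && data[n-1]=='\n');
  }
```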

New version of the function:

The new code is highlighted in yellow.

Perhaps the developers should also reset the array to zero size, because truncated data is hardly of use to anyone and may lead to hard-to-spot errors.
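The zeroing convention described above can be sketched as follows. This is a hedged illustration only: CMyZipEntry, UnpackToArray() and UncompressedSize() are hypothetical placeholder names, not the real CZip API.

```mql5
// Hypothetical wrapper illustrating the "zero the array on error" idea.
// CMyZipEntry, UnpackToArray() and UncompressedSize() are placeholders.
bool UnzipChecked(CMyZipEntry &entry, uchar &data[])
  {
   if(!entry.UnpackToArray(data))           // decompression itself failed
     {
      ArrayResize(data, 0);                 // report the error as size 0
      return(false);
     }
   // Truncated result: fewer bytes than the ZIP header promised.
   if(ArraySize(data)!=(int)entry.UncompressedSize())
     {
      ArrayResize(data, 0);                 // zero the output array size
      return(false);
     }
   return(true);
  }
```

With this guard, a non-empty output array is only ever a fully unpacked file.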

There is also a link to a large file that can't be unpacked properly. For such files I set the size to 0, and the calling program takes this 0 to mean there was an error and another archiver must be used.
Perhaps you can think of something better than 0.

https://quote-saver.bycsi.com/orderbook/linear/BTCUSDT/2025-05-09_BTCUSDT_ob500.data.zip
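On the caller's side, the "size 0 means error" convention looks roughly like this (a sketch; MyUnzip() is a hypothetical wrapper around the CZip-based unpacking, not a real function from the article):

```mql5
// Caller-side handling of the "size 0 means error" convention.
// MyUnzip() is a hypothetical wrapper; the file name is from the link above.
uchar data[];
MyUnzip("2025-05-09_BTCUSDT_ob500.data.zip", data);
if(ArraySize(data)==0)
  {
   Print("unpack failed or truncated -- fall back to another archiver");
   // e.g. shell out to an external unzip tool for this file.
  }
```

Returning an explicit bool or error code from the wrapper, instead of overloading the size, would be one alternative to the bare 0.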

 
Thank you. I've downloaded the file and will look into it.