Any updates on this? I am running into errors
Jacobie, Stanislav Korotky has published a pack that contains this zip library here, with some fixes.
I have tested it, and so far it is working pretty well.
I came across an archive that CZip could not decompress, while 7-Zip and the Windows archiver handle it without problems.
I printed out the size of the compressed data - it turned out to be tens of megabytes less than the archive itself (and there is only one file in it).
CompressedSize: 76964920
I started looking for where it is calculated and found it in the FindZipFileSize() function. Then I experimented...
It turned out that if you return the whole span up to end_size as the data size, the archive unpacks correctly. Apparently, when unpacking, the code determines the end of the data itself rather than relying on the value returned by this function; the main thing is that the value must not be smaller than the real one. You could leave it like that, but then the function becomes useless, which seems unlikely to be intended - and perhaps other archives would still fail...
One more experiment showed that if you comment out the lines
the archive starts unpacking as well, and the amount of data is close to 100%.
CompressedSize: 106638447
It turns out that the archive contains the byte sequence uint cdheader = 0x504b0102; as part of the compressed data, not as a marker of its end. Did you make a mistake with this signature? I did find it in an Internet search, so maybe it should be handled in some other way instead of cutting the data at it - here it cut off about 30 MB.
I can send the archive file to you in a private message if you are interested in figuring it out. The function that works with this file is in \Include\Zip\Zip\Zip.mqh.
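The Zip.mqh source is not quoted in this thread, but judging by the description FindZipFileSize() scans the raw bytes for the next header signature. A minimal sketch of that idea (my own reconstruction under that assumption, not the library's actual code) shows why such a scan can cut the data short:

//+------------------------------------------------------------------+
//| Hypothetical reconstruction of a signature-based size search.    |
//| 0x504b0102 is the central-directory signature "PK\x01\x02".      |
//+------------------------------------------------------------------+
uint NaiveFindZipFileSize(const uchar &data[],const uint start,const uint end_size)
  {
   const uint cdheader=0x504b0102;
   for(uint i=start; i+3<end_size; i++)
     {
      // assemble 4 bytes in file order to compare with the signature
      uint value=((uint)data[i]<<24)|((uint)data[i+1]<<16)|
                 ((uint)data[i+2]<<8)|(uint)data[i+3];
      // problem: these four bytes can legitimately occur INSIDE the
      // deflate stream, so the size returned here may be too small
      if(value==cdheader)
         return(i-start);
     }
   return(end_size-start);   // no signature found: take everything to the end
  }

If 0x504b0102 happens to appear inside the compressed stream, a search like this stops early - which matches the roughly 30 MB difference between the two CompressedSize values printed above.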
Another error with another file. This time, commenting out the lines helped
I.e. the signature uint header = 0x504b0304; also occurs inside the archive contents, and the archive is successfully unpacked by 7-Zip, Windows and this corrected version of CZip.
Since both loop exits are now disabled, the loop has become superfluous; it can be deleted and the value returned directly:
There is obviously some defect in this function. After all, the condition will never be fulfilled, because when the loop exits at the end of the data, size equals end_size, not one less.
As a result, I shortened the function to one line:
The program with this version of the function has already downloaded 110 archives, 12 GB in total, and successfully unpacked all of them.
If anyone has problems with unpacking, you can try this version of the function.
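The shortened one-liner itself is not quoted above; as a rough sketch of the idea (names and signature here are my assumptions), it presumably just reports the whole remaining span and lets the decompressor find the real end of the data on its own:

//+------------------------------------------------------------------+
//| Hypothetical simplified FindZipFileSize(): no signature search,  |
//| just treat everything up to end_size as the compressed size.     |
//+------------------------------------------------------------------+
uint FindZipFileSize(const uchar &data[],const uint start,const uint end_size)
  {
   return(end_size-start);
  }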
The file should contain 1.8 billion char elements, but after unpacking it is cut to 1.5 billion - some data is lost. It is strange that it is truncated there, since arrays can hold up to 2147483647 elements.
The terminal function produces truncated data.
There is nothing to be done about it...
I thought it might be possible to decode in whole KB/MB blocks - I tried 1024, 1024*1024 and 1024*1024*10 - it didn't work.
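The built-in function is not named in the post; presumably it is CryptDecode() with CRYPT_ARCH_ZIP (my assumption). A minimal sketch of such a call on an already extracted compressed stream, which is where the truncation described above would show up:

// Sketch: decompress a raw compressed stream with the terminal's
// built-in CryptDecode(). "packed" is assumed to already hold the
// compressed bytes extracted from the ZIP container.
void DecodeSketch(const uchar &packed[])
  {
   uchar unpacked[];                     // decompression result
   uchar key[]={1,0,0,0};                // CRYPT_ARCH_ZIP needs no encryption key;
                                         // this placeholder follows common usage
   int size=CryptDecode(CRYPT_ARCH_ZIP,packed,key,unpacked);
   if(size<=0)
      Print("CryptDecode failed, error ",GetLastError());
   else
      Print("Unpacked ",size," bytes");  // for ~1.8 billion elements this result
                                         // comes back shorter than expected
  }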
Is there any way to use the Windows archiver? For example, using WinExec to unpack into a file and then read it line by line - that way the automation would remain, though not for the Market. Otherwise I will have to save the archives, decompress them manually and only then process them. That would be inconvenient - no automation ((
You can, obviously. What's the problem? Or maybe I misunderstood? Console archivers such as UnRAR.exe, UnZip.exe, etc. have been around for a long time.
I can assume that the signatures still need to be searched for, but not inside the body of the archive - rather between files, if there are several of them. The lengths probably have to be recorded somewhere in the archive....
In general, my solution is specific to my task with one file per archive; if there are several, something else may be needed.
I printed out the compressed and uncompressed sizes that should be stored in the headers.
The files that I downloaded have 0 0 there. If size=0, the FindZipFileSize() discussed above is called.
I created an archive containing one file using a regular archiver. Sizes in the header:
46389587 376516461
And another archive with 2 files, including the one added to the first archive:
46981880 314725045
46389587 376516461
Both have the sizes written in the headers, and FindZipFileSize() was not called.
The files I downloaded (with 0 0 in the sizes) were apparently created by software that does not write the sizes into the headers. Perhaps my shortened FindZipFileSize() is universal after all.
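A quick way to check which case a particular archive falls into is to read the first local file header directly: in the ZIP format the compressed and uncompressed sizes sit at offsets 18 and 22, and when bit 3 of the general-purpose flag is set they are left as 0 0 and written after the data instead. A rough sketch (the file name is a placeholder; FileOpen works inside the MQL5\Files sandbox):

// Print the general-purpose flag and the sizes stored in the first
// local file header of a ZIP archive.
void PrintFirstLocalHeader(const string name="archive.zip")
  {
   int h=FileOpen(name,FILE_READ|FILE_BIN);
   if(h==INVALID_HANDLE){ Print("cannot open ",name); return; }
   uint signature=(uint)FileReadInteger(h,INT_VALUE);     // 0x04034B50 = "PK\x03\x04"
   if(signature!=0x04034B50){ Print("not a local file header"); FileClose(h); return; }
   FileSeek(h,6,SEEK_SET);
   int flags=(int)FileReadInteger(h,SHORT_VALUE);         // bit 3 set => sizes are 0 0 here
   FileSeek(h,18,SEEK_SET);
   uint csize=(uint)FileReadInteger(h,INT_VALUE);         // compressed size
   uint usize=(uint)FileReadInteger(h,INT_VALUE);         // uncompressed size
   PrintFormat("flags=0x%04X compressed=%u uncompressed=%u",flags,csize,usize);
   FileClose(h);
  }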
I set up unpacking via 7-Zip (UnZip.exe has not been updated since 2009; it does not even mention Windows 7 support).
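For reference, the usual way to drive a console archiver from MQL5 is a WinAPI call such as ShellExecuteW from shell32.dll (an alternative to the WinExec mentioned earlier); it requires DLL calls to be allowed, which is exactly why this route is not suitable for the Market. A rough sketch with placeholder paths:

#import "shell32.dll"
int ShellExecuteW(int hwnd,string operation,string file,
                  string parameters,string directory,int show_cmd);
#import

// Extract an archive with the 7-Zip command-line tool.
// The 7z.exe path and the file names here are placeholders.
bool UnzipWith7Zip(const string archive,const string out_dir)
  {
   string params="x \""+archive+"\" -o\""+out_dir+"\" -y";
   int res=ShellExecuteW(0,"open","C:\\Program Files\\7-Zip\\7z.exe",params,NULL,0);
   // note: ShellExecuteW does not wait for 7z.exe to finish, so the caller
   // has to wait for or check the extracted file itself
   return(res>32);   // values above 32 mean the call was started successfully
  }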
I compared the speed:
This CZIP library:
2025.06.14 20:59:06.758 Archive successfully opened. Total files: 1.
2025.06.14 20:59:07.345 UnZipped
587 ms
7-zip:
2025.06.14 21:00:07.312 Start unzip by 7-Zip.
2025.06.14 21:00:09.274 UnZipped by 7-Zip. File size: 428.22 MB
1962 ms
And this is only unzipping the file and writing it to the SSD; you still have to read the file back from the disk line by line.
Total time from start to end of parsing:
Total Time: 10s 709ms
and
Total Time: 12s 892ms
The difference is 2s 183 ms.
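The figures above are taken from the log timestamps; the same intervals could also be measured directly in code, for example with GetMicrosecondCount() (an illustrative sketch, not how the numbers above were obtained):

// measure how long the unzip step takes
ulong start=GetMicrosecondCount();
// ... call the unzip routine here ...
ulong elapsed_ms=(GetMicrosecondCount()-start)/1000;
PrintFormat("UnZipped in %I64u ms",elapsed_ms);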
In general, it is preferable to use this CZIP library for speed, and fall back to other archivers only when the file is too large.
For me, ~1000 files at ~2 seconds each means about 33 minutes of savings. Actually more, since this example uses the smallest file at 428 MB, and CZIP unzips files up to ~1.7 GB.
Anything larger (up to 4 GB) is handled by 7-Zip. So overall there is 1-1.5 hours of savings.
Tested CZIP with my edits - it unpacked 600+ files totalling about 100 GB without errors.