Another Backtester Limitation & Script Problem.

 
A. If I prepare an FXT file with 3 years' worth of ticks (53 million) up to the 1st of Dec this year, the backtester just stops around the 5th month of this year.
If I prepare smaller FXT files the tester completes the entire date range. This seems odd, as 53 million is well short of the integer wrap value.
It seems arbitrary that it stops at all. Have I missed something, or is this (yet another) bug?

B. If I use the csv -> fxt script supplied on this site and try to import the HST file it generates, the HST file seems to be complete garbage (zero values, weird dates).
 
Craig:
1. 53 million seems a bit too much to me; it should be around 6-8 million for a pair over 3 years. I'd guess 5,000-10,000 ticks per day is normal.
2. You may have an error in the CSV file (e.g. improper (8-bit) encoding in one or more parts, unexpected spaces, a wrong time format...) so the FXT file did not compile okay. The FXT file should be larger than the raw CSV file; you can roughly check it this way.
3. You cannot really use the csv2fxt file as is; it still contains some errors. The FXT-generating part is okay, though, so I guess the problem is in your CSV file.
 
1. 53 million seems a bit too much to me; it should be around 6-8 million for a pair over 3 years. I'd guess 5,000-10,000 ticks per day is normal.
>> I don't know, as I have little experience in these matters; when compared on screen with the live feed, the data certainly looks correct.

2. You may have an error in the CSV file (e.g. improper (8-bit) encoding in one or more parts, unexpected spaces, a wrong time format...) so the FXT file did not compile okay. The FXT file should be larger than the raw CSV file; you can roughly check it this way.
>> I wrote a C++ program to filter out the spaces etc.; the FXT file is slightly bigger than the CSV file, and the processed tick count matches exactly.
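The space-filtering step can be sketched like this in C++ (the three-field date,time,price layout and the function name are assumptions for illustration, not the actual program):

```cpp
#include <optional>
#include <sstream>
#include <string>

// Strip stray whitespace from one raw tick line and require exactly
// three non-empty comma-separated fields (date, time, price).
// Returns std::nullopt for malformed lines so the caller can count them.
std::optional<std::string> clean_tick_line(const std::string& raw) {
    std::string compact;
    for (char c : raw)
        if (c != ' ' && c != '\t' && c != '\r')
            compact += c;

    std::stringstream ss(compact);
    std::string field, out;
    int fields = 0;
    while (std::getline(ss, field, ',')) {
        if (field.empty())
            return std::nullopt;   // missing value
        if (fields++)
            out += ',';
        out += field;
    }
    return fields == 3 ? std::optional<std::string>(out) : std::nullopt;
}
```

Counting the rejected lines gives the "processed tick count" check mentioned above almost for free.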

3. You cannot really use the csv2fxt file as is; it still contains some errors. The FXT-generating part is okay, though, so I guess the problem is in your CSV file.
>> What are the errors in the csv2fxt script? Why doesn't Metaquotes fix it??

>> I'm not sure how you're reaching your conclusion. If I generate a 2-year file, the backtester works fine. The two-year file contains
>> the same data range (5th month of 2006) as the one that stops, so logically it would seem to be a size problem (to me).
 
1. If you think it is a size error, try to delete the ticks that have the same price as the previous one. This will trim your size a lot. The data can look correct and still contain errors that are not visible. You won't spot even 10,000,000 time/tick/false-tick errors in a 53,000,000-tick file. I had 1,000,000 errors in my 8,000,000-tick file, although I thought it was okay when I first saw it running. This is a tricky database for sure! :)
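The trimming suggested above amounts to a one-pass filter; a minimal sketch (the Tick struct here is illustrative, not the FXT record layout):

```cpp
#include <vector>

struct Tick {
    long   time;    // unix timestamp
    double price;
};

// Keep a tick only when its price differs from the last kept tick's price;
// runs of repeated prices add no information to the tester's bars.
std::vector<Tick> drop_repeat_prices(const std::vector<Tick>& in) {
    std::vector<Tick> out;
    for (const Tick& t : in)
        if (out.empty() || out.back().price != t.price)
            out.push_back(t);
    return out;
}
```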

2. This is not true. E.g. if tick #2 is earlier (a time error) than tick #1, the compiler ignores tick #2. As the compiler writes the file while running, and not only on successful compilation, you can have a working FXT file that does not cover the whole raw CSV. I only told you to look at the CSV file because I would never have imagined what sort of errors can be in it.
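A quick pre-check for those time errors might look like this (a sketch; how the tester's compiler actually skips ticks is only inferred from the behaviour described above):

```cpp
#include <cstddef>
#include <vector>

// Count ticks whose timestamp goes backwards relative to the previous one.
// If the FXT compiler silently drops these, a non-zero count explains an
// FXT file that covers less than the raw CSV.
std::size_t count_out_of_order(const std::vector<long>& times) {
    std::size_t bad = 0;
    for (std::size_t i = 1; i < times.size(); ++i)
        if (times[i] < times[i - 1])
            ++bad;
    return bad;
}
```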

3. I don't know what the specific bug is; when I saw the total mess in the HST file, I completely rewrote that function. It was easier than looking for the bug.

If you compile the second half, does it run okay as well, or did you only try the first half?
 
1. I think you mean remove the duplicates; this was also done in the C++ program. How did you spot your errors?
2. I will adjust the C++ program to look for out-of-order ticks as well, and will let you know what happens.
3. Was this the header-writing function or the bar function?

The CSV file I have goes back to 1998. I split it into intervals as such: [2006], [2006-2005], [2006-2004]; only the last one fails to complete.
This means the data where the 3rd file stops is common to all 3 files.

I should point out that this CSV file is not from the Gain Capital site; I have had first-hand experience of the amount of c**p contained in those files.
 
I spot the errors by writing a script to check for anything I can think of. You are right that the Gain Capital data is not good enough as is; too much c**p inside.

I rewrote both WriteHstHeader() and WriteBar(). I had no patience to look for the error, and HST files are not that important anyway if you have tick data.
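For anyone hand-rolling those two functions, the legacy (build-400-era) HST layout as I understand it is sketched below; treat the field names and sizes as an assumption to verify against a known-good history file before trusting it:

```cpp
#include <cstdint>

#pragma pack(push, 1)        // the file format has no padding
struct HstHeader {           // 148 bytes, written once by a WriteHstHeader()
    int32_t version;         // 400 in the old format
    char    copyright[64];
    char    symbol[12];      // e.g. "EURUSD"
    int32_t period;          // timeframe in minutes
    int32_t digits;
    int32_t timesign;        // database creation time (unix)
    int32_t last_sync;
    int32_t unused[13];
};

struct HstBar {              // 44 bytes, one record per bar from a WriteBar()
    int32_t time;            // bar open time (unix)
    double  open, low, high, close, volume;
};
#pragma pack(pop)

static_assert(sizeof(HstHeader) == 148, "header must be 148 bytes");
static_assert(sizeof(HstBar) == 44, "bar record must be 44 bytes");
```

If the sizes or packing are wrong, every later record shifts, which would produce exactly the zero values and weird dates described earlier.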

Do you have the max amount of bars in the terminal's options set to the maximum?
 
The max amount of bars is set.
HST files would be good for hand-checking data; I don't suppose you are willing to share?
 
Check email. ;)