Gogetter EA - page 11

 

I think you still need to improve your modeling quality, as I've told you before (on page 6). Try to reread and do everything as it was written here: http://www.strategybuilderfx.com/showthread.php?t=15309

 

oy, ever feel like you're talking to a blonde?

Tatyana,

Your answer shows that you still do not understand my question, but you may be getting closer.

I do not want my expert code checked. I do not believe the problem is in my code. I believe the problem is in your platform. I am looking for a way to verify that your platform does not have a data processing bug in it.

The question is about recalculating.

Your explanation about data being modeled afresh because of new quotes can't possibly account for the vast difference between these two tests.

The data being tested is stored in the History Center, correct? What new quotes will arrive in this HISTORICAL data? I am testing a specific date range starting at 2005.09.09 because that is where my one-minute data in the History Center begins. The only 'new quotes' are those added at the present time, at the end of the data file.

These tests show vast variance long before the test gets anywhere near the present time. Before the tester has covered even one day of this history data file, the two tests are modeling differently. That is the day of 2005.09.09. No new quote is entered in that day; why would it be? There are no NEW quotes for a day many months ago, only new quotes for the present day.

The fact that it is modeling the same historical data file differently is what has me concerned. It should model the same data exactly the same way on each fresh test. That is why I want to know how to verify that it is processing the data consistently.

If the data were changing because of new quotes for this one day many months ago, it would have to be changing a great deal to make as much difference as these tests show. If you are going to insist that the data is changing (VERY unlikely), then what evidence can you give me that this is the case? Where would it get these new historical quotes from to update a past day? I had to go to great lengths to download and install historical data, and now you expect me to believe that the platform is doing this itself for many days in the past just because I check the Recalculate box?

Furthermore, when I look at the data file in the History Center, it shows the same number of saved records for that specific day after the test as it did before the previous test, so no new quotes were added between the two tests for that day. It shows new quotes for the present day but not for any previous day. I simply cannot verify your explanation, nor is it logical. I am not stupid. If I could believe that data variation was responsible, I would not be asking the question I am asking.

Once again, this is why I am asking to know ...

HOW TO VERIFY THAT THE TESTER MODELS THE SAME WAY EACH TIME, USING THE SAME DATA AND THE SAME EA CODE WITH THE SAME SETTINGS!

Please answer THAT question and that question only. If you cannot answer that question, then please refer me to a technical assistant who IS qualified to answer it, so that we may track down this bug without further diversion.
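In the meantime, here is the kind of independent check I have in mind: a minimal sketch (my own idea, not any official MetaQuotes tool) that fingerprints every file in the tester's history folder before and after a run, so any change to the modeled data becomes visible. The install path is only an example; adjust it to your own terminal folder.

import hashlib
import os

# Assumed install path (example only); adjust to your own terminal folder.
HISTORY_DIR = r"C:\Program Files\Interbank FX Trader 4\tester\history"

def snapshot(directory):
    """Return {filename: (size_in_bytes, sha256)} for every file in the folder."""
    result = {}
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    digest.update(chunk)
            result[name] = (os.path.getsize(path), digest.hexdigest())
    return result

before = snapshot(HISTORY_DIR)
input("Run the strategy test now, then press Enter...")
after = snapshot(HISTORY_DIR)

# Report every file whose size or hash changed across the test run.
for name in sorted(set(before) | set(after)):
    status = "same" if before.get(name) == after.get(name) else "CHANGED"
    print(f"{status}: {name}  {before.get(name)} -> {after.get(name)}")

If two runs that produce different trade results also show identical hashes for the modeled data, then the variation has to be coming from the tester itself, which is exactly my point.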

Thank you,

Aaragorn

ps.

as you suggested...

I have read this article: https://www.mql5.com/en/articles/1511. It does not answer the question.

I have made a post on this forum: http://forum.mql4.com/3906. There have been no replies.

MetaQuotes HelpDesk (Tatyana) wrote:

> Hello Aaragorn,

>

> Sorry for delay.

>

> 1. Please try to uncheck the Recalculate field.

> The matter is that every time you launch the expert testing with enabled "Recalculate" option, the data will be modeled afresh.

> Since the new quotes have already come by this moment, the data modeled based on these new quotes will be different.

>

> 2. Unfortunately, we cannot check your expert code. Please try to refer to our community at http://forum.mql4.com/

>

> 3. Please refer to https://www.mql5.com/en/articles/1511

>

>

> Best regards, Tatyana Vorontsova

> MetaQuotes Software Corp.

> www.metaquotes.net

>

> ----- Original Message -----

> From: "Aaragorn"

> To: support@metaquotes.net

> Sent: 2006.08.25 00:37

> Subject: Bugtrack (MetaTraderDataCenter, 4.00)

>> I have not received any reply to my last 3 emails to support@metaquotes.ru. This is my question:

>>

>> Three things must be verified as stable for the strategy backtester to work:

>> 1- the data itself

>> 2- the EA code

>> 3- the way the platform processes the data

>>

>> I have done two strategy tests on the same EA and gotten very different results each time.

>>

>> I can verify that the EA code didn't change in each test.

>> I can assume that it used exactly the same historical data from the history center because the date range was not changed either.

>>

>> How can I verify that the platform is processing the data exactly the same way in each test?

>> My results seem to suggest that it is not processing the data the same way each time. See this link for details of my results:

>> https://www.mql5.com/en/forum/general

>>

>> I have already read these articles: https://www.mql5.com/en/articles/mt4/tester/

>> I do not see anything in any of the articles to help answer this question about how the platform processes the data and how I can verify its stability.

 

Language is always a problem with MetaQuotes. It takes about 3 to 4 emails to ensure that they understand the problem correctly. Sometimes they are in denial too, and that is frustrating.

 
asmatic:
I think you still need to improve your modeling quality, as I've told you before (on page 6). Try to reread and do everything as it was written here: http://www.strategybuilderfx.com/showthread.php?t=15309

Did you not see where I answered you on page six that I have already done all those things? https://www.mql5.com/en/forum/general

I also invite you to do this and show me your success getting better modeling quality on this EA. Please don't ask me to repeat myself to you again. I already answered you.

 
Maji:
Language is always a problem with MetaQuotes. It takes about 3 to 4 emails to ensure that they understand the problem correctly. Sometimes they are in denial too, and that is frustrating.

I have come to the same conclusion. I don't know how to be more direct or more specific. I guess some frustration is the price of progress sometimes.

 

...

> MetaQuotes HelpDesk (Tatyana) wrote:

> Hello Aaragorn,

>

> Sorry for delay.

>

> 1. Please try to uncheck the Recalculate field.

> The matter is that every time you launch the expert testing with enabled "Recalculate" option, the data will be modeled afresh.

> Since the new quotes have already come by this moment, the data modeled based on these new quotes will be different.

Isn't that a good enough answer? I mean, if the missing data is made up or modeled differently each time new data is available, running a test over the same time period would obviously give you different results ...

Why do you believe this is not the reason for the problem?

Patrick

 

Aaragorn,

I have tested your expert all day and here is what I see:

If the platform is connected and I select Recalculate etc., I can run the test over and over and over, and I will still get the same result.

If I close the platform, restart it, and I'm not connected, I will get a much different result with the same settings; but I can run the test over and over and I still get the same result.

If I restart the platform and get connected, I get the same result as my prior tests while connected ...

So yes, the values are different with the same settings depending on whether I'm connected or not ... would you mind checking whether it is the same issue you are having?

 
Mistigri:
> MetaQuotes HelpDesk (Tatyana) wrote:

> Hello Aaragorn,

>

> Sorry for delay.

>

> 1. Please try to uncheck the Recalculate field.

> The matter is that every time you launch the expert testing with enabled "Recalculate" option, the data will be modeled afresh.

> Since the new quotes have already come by this moment, the data modeled based on these new quotes will be different.

Isn't that a good enough answer? I mean, if the missing data is made up or modeled differently each time new data is available, running a test over the same time period would obviously give you different results ...

Why do you believe this is not the reason for the problem?

Patrick

Because we are talking about HISTORICAL data, not current data. AND because I can see no evidence in the data file that any new quotes are being added. There ARE no new quotes being added to the past data, unless it's not only adding them but also erasing them after the test so that they don't appear in the History Center. How likely is that?

To be completely clear: yes, it adds the current, most recent quotes. But it doesn't go back to 2005.09.09 and add new quotes to that day. It doesn't go back to 2005.09.14 and add new quotes to that day either. The only new quotes which are added are relative to today, MONTHS later. Do you see what I'm saying?

Why do you believe that it's going to go back in the history data and fill in all the blanks that exist every time I click Recalculate? Why would those blanks suddenly become available, yet not show in the History Center after the test? I simply can't verify this shallow assumption that new quotes are miraculously being filled in clear back to the start of the date range. There is no evidence. It doesn't wash. That's why. Show me the evidence. Show me in the History Center where these 'new quotes' have filled in anything but the most recent data, because it's not doing that in my account.
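For anyone who wants to check this for themselves, here is a minimal sketch of the count I keep describing. It assumes the classic old-MT4 history file layout (a 148-byte header followed by 44-byte bar records: a 4-byte time plus five doubles); the path and symbol are only examples, so verify the layout against your own files before trusting the numbers.

import struct
from datetime import datetime, timezone

# Example path/symbol only; point this at the M1 history file you actually test.
HST_PATH = r"C:\Program Files\Interbank FX Trader 4\history\EURUSD1.hst"
DAY = "2005.09.09"

HEADER_SIZE = 148                 # assumed classic MT4 header size
RECORD = struct.Struct("<i5d")    # time, open, low, high, close, volume (44 bytes)

count = 0
with open(HST_PATH, "rb") as f:
    f.seek(HEADER_SIZE)
    while True:
        raw = f.read(RECORD.size)
        if len(raw) < RECORD.size:
            break
        bar_time = RECORD.unpack(raw)[0]   # first field is the bar's Unix time
        if datetime.fromtimestamp(bar_time, tz=timezone.utc).strftime("%Y.%m.%d") == DAY:
            count += 1

print(f"{DAY}: {count} bars in {HST_PATH}")

Run it before and after a test with Recalculate checked: if 'new quotes' really were reaching that day, the count would have to change, and in my account it does not.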

 

I have gotten different results this afternoon, after the market closed, than I did before the market closed. But you see, that only adds to my sense that the backtester is not processing the historical data file the same way. That may be one variable which is changing (being connected to the server, or whether the market is open or not), but those are instabilities which by rights should not impact the outcome of a backtest on historical data which is not changing.

That still leaves how to account for the test result which went to plus 1 million. Why does it not repeat THAT outcome?

The point is that either it processes the data EXACTLY the same EVERY time or it does not. What do we conclude?

In all your tests today, what did you do to verify that the data file you are using is not changing? If you have verified that it is not changing and you still get different results, what does that tell you? That there is some other variable which impacts the test? I am just thinking logically and trying to eliminate guesswork and assumptions. Guessing at what is going on will never resolve this. It has to be verifiable.

I apologize that I am out of time for today. I'll check back here later this evening.

 

Well, let me say that I only get 2 different results, that's it ... not 10 different results.

I get one result when I'm connected and I get a different result when I'm disconnected. The tester uses the files from:

C:\Program Files\Interbank FX Trader 4\tester\history

Now open Windows Explorer and look at your GBPUSDm30_0.fxt while you are connected; it is about 50 MB. Now close the platform, reopen it without connecting, run the test with Recalculate selected, and refresh your Explorer view ... What do you see now? Your file should now show only about 1 KB, or even 0 KB.

So yes, the data file seems to be different; see the sketch below for a quick way to watch that file. I guess my question is about your historical data ... how do you use it with the tester?
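If it helps, here is a tiny sketch of the check I just described; the path and file name are only my example (use whichever symbol and period you are testing):

import os
import time

# Example path from my own install; adjust the symbol, period, and broker folder.
FXT_PATH = r"C:\Program Files\Interbank FX Trader 4\tester\history\GBPUSDm30_0.fxt"

# Print the current size and last-modified time of the tester's modeled-data file.
size = os.path.getsize(FXT_PATH)
modified = time.ctime(os.path.getmtime(FXT_PATH))
print(f"{FXT_PATH}\n  size: {size:,} bytes\n  modified: {modified}")

Run it once while connected and once after restarting without a connection; the difference in size shows the tester is not working from the same modeled file in both cases.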

By the way, I'm trying to help. I hope you don't mind that I use the forum, but we might as well get any help we can ...
