Algorithms, solution methods, comparison of their performance - page 23

 
Andrey Khatimlianskii:

You could simply use a longer test interval, at least 30 seconds per pass.

With normalization.

pass 0 returned result 100000.000000 in 0:00:35.296
pass 1 returned result 100000.000000 in 0:00:29.361
pass 2 returned result 100000.000000 in 0:00:24.549
pass 3 returned result 100000.000000 in 0:00:25.067
pass 4 returned result 100000.000000 in 0:00:24.578
pass 5 returned result 100000.000000 in 0:00:24.634
pass 6 returned result 100000.000000 in 0:00:25.079
optimization finished, total passes 7
optimization done in 3 minutes 09 seconds
shortest pass 0:00:24.549, longest pass 0:00:35.296, average pass 0:00:26.937


Without normalization.

pass 0 returned result 100000.000000 in 0:00:33.035
pass 1 returned result 100000.000000 in 0:00:26.020
pass 2 returned result 100000.000000 in 0:00:20.137
pass 3 returned result 100000.000000 in 0:00:20.859
pass 4 returned result 100000.000000 in 0:00:21.130
pass 5 returned result 100000.000000 in 0:00:20.664
pass 6 returned result 100000.000000 in 0:00:21.001
optimization finished, total passes 7
optimization done in 2 minutes 50 seconds
shortest pass 0:00:20.137, longest pass 0:00:33.035, average pass 0:00:23.263


Same 20%.
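The "same 20%" figure can be sanity-checked against the timings logged above. A quick calculation (values copied from the logs; the overhead is about 22% on the shortest passes and about 16% on the averages, so "roughly 20%" is fair):

```python
# Rough check of the "~20%" overhead claim, using the pass times
# reported in the logs above (seconds).
with_norm = {"shortest": 24.549, "average": 26.937}
without_norm = {"shortest": 20.137, "average": 23.263}

for key in ("shortest", "average"):
    overhead = with_norm[key] / without_norm[key] - 1
    print(f"{key}: normalization adds {overhead:.1%}")
# shortest: normalization adds 21.9%
# average: normalization adds 15.8%
```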

 
fxsaber:

This is how a single Agent behaves: it consistently computes the same thing. If you strip away all the randomness, the net performance is close to the shortest pass.

The net figure is not that interesting, though, since it is not achievable in practice.

Thanks for the tests.

 
fxsaber:

With normalization.

Without normalization.

The same 20%.


20% for a dummy EA that does nothing... It's not very significant. In real code the figure would be many times smaller. Is it worth wasting time on such trivialities?

And speaking of optimizing calculations, we should start with the fact that there is no need to constantly monitor the levels of all pending orders. We only need to check the nearest one. Once it is reached, check the next level, and so on.

 
Alexey Navoykov:

20% for a dummy EA that does nothing... It's not very meaningful. In real code the figure would be many times smaller. Is it worth wasting time on such trivialities?

The observation is fair. On my real robot I see far too much lag in the Tester. There are many reasons for this, and this is one of them. One pass is 100 million ticks. Take the standard genetics at 10K passes: that is at least a trillion ticks. On every tick the tester performs at least one normalization, when it could perform none at all. That is the saving such an optimization offers. Moreover, the wasteful approach is doing a normalization at every comparison, which is what happens now. It is actually simpler and more efficient to normalize only the incoming prices.
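The contrast fxsaber draws can be sketched as follows. This is an illustration only, not the tester's real code; the function names are hypothetical, and "normalizing" here stands in for rounding a price to the symbol's digits, as MQL5's NormalizeDouble does:

```python
# Illustrative sketch (hypothetical names, not the tester's real API).
# "Normalizing" = rounding a price to the symbol's digit count.

def normalize(price: float, digits: int = 5) -> float:
    return round(price, digits)

# Wasteful pattern: normalize inside every comparison,
# i.e. at least once per tick per checked level.
def level_hit_per_comparison(tick_price: float, level: float) -> bool:
    return normalize(tick_price) >= normalize(level)

# Cheaper pattern: normalize each incoming price exactly once...
def on_incoming_tick(raw_price: float) -> float:
    return normalize(raw_price)

# ...after which every comparison is a plain double comparison.
def level_hit(normalized_price: float, level: float) -> bool:
    return normalized_price >= level
```

With the second pattern the per-tick cost of a comparison drops to a bare `>=`, which is exactly why doing zero normalizations per comparison beats doing one or more.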

And speaking of optimizing calculations, we should start with the fact that we do not need to constantly monitor the levels of all pending orders. We only need to check the nearest one. If it is reached, the next level is checked, and so on.

The built-in tester slows down dramatically as the number of orders grows. Grid trading systems are its "killers". I have suggested this kind of algorithmic optimization; I doubt they will undertake it.
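The "check only the nearest level" idea can be sketched with a sorted structure over the pending trigger prices. This is a minimal illustration under assumed simplifications (buy-stop levels above the market only; class and method names are invented, not the built-in tester's internals):

```python
import heapq

# Minimal sketch of "monitor only the nearest pending level"
# (illustrative, not the built-in tester's implementation).
# Buy-stop trigger prices sit in a min-heap, so each tick normally
# compares against ONE price instead of scanning every pending order.

class BuyStopBook:
    def __init__(self) -> None:
        self._levels: list[float] = []  # min-heap of trigger prices

    def add(self, level: float) -> None:
        heapq.heappush(self._levels, level)

    def on_tick(self, price: float) -> list[float]:
        """Pop and return every level this tick reaches."""
        triggered = []
        # Only the heap top is inspected; a deeper level is looked at
        # only after the nearest one has actually been hit.
        while self._levels and price >= self._levels[0]:
            triggered.append(heapq.heappop(self._levels))
        return triggered
```

On a quiet tick the cost is a single comparison regardless of how many orders the grid holds, which is where a grid system would gain the most.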

And here I am not even discussing the large amount of internal calculation that accompanies every tick of the tester.
