Your symbols and your datafeeds in Metatrader 5 - page 13

 
Renat:

That is to say, there is no corroboration of your words.

And that is to say nothing of the possibility of normal criticism of your claims on a theoretical level.

Renat, I am not criticising the MT tester for working incorrectly. I was describing what I encountered when I had to reinvent the wheel myself. When I saw how this annealing method compared with the roughly five GA variants I had tried before, I was delighted: it clearly won in both speed and accuracy. But I did all of that about 1.5-2 years ago and have forgotten much of what I did and how; I did it out of necessity and was not eager to memorise the maths and the algorithms, so now I simply find it hard to pick it all up again. There are also some fundamental differences that make comparisons difficult. Once again, I wrote about it not as criticism, but because it made me very happy at the time. Besides, judging by how your algorithm behaves in the tester, it seems to me that it is not a GA but rather a kind of Monte Carlo method or a simplified GA, which is probably justified for mass use, for an approximate estimate of results.
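
For readers who have not met the method being described here, the following is a minimal, hypothetical sketch of simulated annealing over a parameter space; the `objective` and `neighbour` callables and the geometric cooling schedule are illustrative assumptions, not ANG3110's actual code:

```python
import math
import random

def simulated_annealing(objective, initial, neighbour,
                        t_start=1.0, t_end=1e-3, steps=10_000):
    """Maximise `objective` with simulated annealing.

    objective -- maps a parameter vector to a score (e.g. net profit)
    initial   -- starting parameter vector
    neighbour -- returns a random nearby parameter vector
    The geometric cooling schedule is just one common choice.
    """
    current = best = initial
    f_cur = f_best = objective(initial)
    cooling = (t_end / t_start) ** (1.0 / steps)
    t = t_start
    for _ in range(steps):
        candidate = neighbour(current)
        f_cand = objective(candidate)
        # Always accept improvements; accept worse points with a
        # probability that shrinks as the temperature falls.
        if f_cand >= f_cur or random.random() < math.exp((f_cand - f_cur) / t):
            current, f_cur = candidate, f_cand
            if f_cur > f_best:
                best, f_best = current, f_cur
        t *= cooling
    return best, f_best
```

With a fixed budget of objective-function calls, a routine like this can be compared head-to-head with a GA on the same parameter grid, which is the kind of comparison discussed later in the thread.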

 
ANG3110:
Renat, I am not criticising the MT tester for working incorrectly.

You have clearly thrown around unsubstantiated claims and tried to give the impression that the GA in MetaTrader "does not find results, unlike other fabulous methods".

 
Renat:

You have clearly thrown around unsubstantiated claims and tried to give the impression that the GA in MetaTrader "does not find results, unlike other fabulous methods".

I did not immediately write that the statements were nonsense; I asked for proof first. As expected, there was none.

So now I took my variant and ran 663,552 combinations - it counted them in 27 seconds. I put the same task into the MT4 tester; 2 minutes have passed, the tester is still running and has calculated 610 combinations out of 10,496 (of the full 663,552) - it writes that it may take up to 1:26:25. So nothing can be said about accuracy yet, as the tester does not yet know how much it will calculate. But I know that the result will be better with the annealing method - I have checked this repeatedly before. While I was writing this, the tester reached 1,173 passes at 0:10:15 - that is, it has already been running for 10 minutes.

P.S. Brute force over the section to find the maximum gave 9304; the best result after optimising with the annealing method is also 9304. The tester has so far ground through 3,327 combinations - 23 minutes of counting. Its best result so far is 18,559 - I said that at the moment it is difficult for me to make a correct comparison, because I calculated using Asks, while the tester uses an average spread. Incidentally, look at the difference and see how much the tester lies because of the missing Asks: it counts all the night spikes on spread expansions as successful deals, but in reality they are not there.

One more P.S. The tester finished at 0:29:25 with a best result of 21,460 (I wrote above why it lies). To draw objective conclusions about accuracy, I would need to disable Asks in my own "tester" and also reconcile some values. When I wrote the algorithm, I made sure that the results of the standard tester and of my "tester" coincided fully, and only then did I compare them. The accuracy of the annealing method is much higher.

 
ANG3110:
So now I took my variant and ran 663,552 combinations - it counted them in 27 seconds. I put the same task into the MT4 tester; 2 minutes have passed, the tester is still running and has calculated 610 combinations out of 10,496 (of the full 663,552) - it writes that it may take up to 1:26:25. So nothing can be said about accuracy yet, as the tester does not yet know how much it will calculate. But I know that the result will be better with the annealing method - I have checked this repeatedly before. While I was writing this, the tester reached 1,173 passes at 0:10:15 - that is, it has already been running for 10 minutes.

Proof means providing a description of the test plus its public reproducibility.

And you keep playing the "I have something, but I won't step into the proof zone until the last moment" game.

 
Renat:

Proof means providing a description of the test plus its public reproducibility.

And you keep playing the "I have something, but I won't step into the proof zone until the last moment" game.

Renat, I am not criticising you or trying to convince you. I was just trying to show that it is a good thing. I understand that in order to commission a rework of the GA algorithm you must have good reasons for such a decision and at least some set of experimental data, the more detailed the better. And I would be happy to do such work, especially for you. But at the moment I am very busy with trading, plus some other distracting circumstances. So please don't hold it against me.
 
Objectively, here is what we have: two completely unfounded statements, "the built-in GA is the best heuristic algorithm" and "there are better heuristics than the built-in GA". Once again, both statements are completely unsubstantiated; we have not received a single piece of evidence from either side.

Apart from the human factor, there are several reasons for this. Firstly, there is not even a clear formal criterion, agreed by both sides, for comparing any two heuristics.

However, the statement "GA is the best" implies that some serious research has been done in which precisely such a criterion for comparing heuristics was worked out. If so, it would not hurt for the authors of that research to provide it here (at least the criterion itself).

Obviously, the first statement is problematic to prove. Having said that, disproving it is an easier task: just find a single better heuristic.

So let us formalise the task of refutation: state specifically what data must be provided for the rebuttal to be considered valid without a pile of caveats. Theoretical argumentation does not work, because not everyone is good at theory. So what exactly must be shown to convince you?

And now to the adherents of "GA is the best": on what basis does this assertion rest? Where is even a single comparative study?

And there is one more, mostly theoretical, question. When comparing heuristics, is there a fundamental difference between analytically defined target functions and algorithmically defined target functions of a trading system (TS)?
 
zaskok:

And now to the adherents of "GA is the best": on what basis does this assertion rest? Where is even a single comparative study?

Not "the best". It's just the one that exists, and yet gives quite acceptable results.

No one disputes that there are probably finer and better algorithms for finding optimal solutions. But where are they? If someone comes up with something more effective, then Renat is right: we need a clear description of the algorithm. If the algorithm is worthwhile, I think it will be taken quite seriously. (If the author wants to make a profit from it, he should contact MetaQuotes directly, not go through the forum.)

And again the question: how many users actually need these miracle algorithms? In how many cases will the time gain or the quality of results be that much better than with the same GA?

Renat, I apologise for prying, but how is it going with debugging on historical data? Roughly when can we expect it?

 
Laryx:

Not "the best". It's just the one that exists, and it gives quite acceptable results.

No one disputes that there are probably finer and better algorithms for finding optimal solutions. But where are they? If someone comes up with something more effective, then Renat is right: we need a clear description of the algorithm. If the algorithm is worthwhile, I think it will be taken quite seriously. (If the author wants to make a profit from it, he should contact MetaQuotes directly, not go through the forum.)

And again the question: how many users actually need these miracle algorithms? In how many cases will the time gain or the quality of results be that much better than with the same GA?

I think most users have not even heard of GA, so hardly anyone needs miracle algorithms.


There is a feeling of deja vu that never leaves this forum: proof of some assertion is demanded, yet none of it is ever accepted, because it is not at all clear what has to be provided in order to convince. This applies to many things. For example, the forum owner has been told for the thousandth time that the tester does not have Asks, which is why it lies terribly at night on spread expansions. This has been known for years, and there may even have been evidence of it. But there is no acknowledgement of this obvious fact.


How can anything be proved when even the simplest case of Asks is ignored? What can one say about the far more complex case of GAs. On top of that, the human factor constantly intervenes in the form of Renat's paranoid tendency to see an enemy in every opponent. All sorts of nonsense gets imagined. Perhaps that would be true if someone really had it in for him, but the people voicing criticism here are not affiliated in any way and simply have their own independent point of view. They sincerely want to see any platform become better. One could criticise other platforms' solutions, but this is an MT forum.

 

I'll support zaskok.

Renat, it is not up to us to prove this, it is up to you. You argue that this is the best solution, and it is your platform... Prove it.

Try to disprove this scientist.

There are many sceptics as to the usefulness of genetic algorithms. For example, Steven S. Skiena, a professor of computer science at Stony Brook University, a renowned algorithm researcher and IEEE award winner, writes [16]:

I have personally never encountered a single problem for which genetic algorithms proved to be the most suitable tool. Moreover, I have never come across any computational results obtained with genetic algorithms that impressed me favourably.
Генетический алгоритм — Википедия (ru.wikipedia.org)
 

I have spent many years studying and developing evolutionary algorithms for purely practical applications. These are not just words: I have dug through an enormous amount of literature (collected over many years and made freely available), written articles and published the source code of my developments. I have designed special test functions, started my own threads and actively participated in other people's threads in this area, published examples of training neural networks, and so on. I have offered many times, to anyone interested, to compare my search algorithms with others, but no one has taken up the challenge. My algorithm has countless modifications (people adapt it for themselves) and thousands of people use these algorithms.

Why am I saying this? Because I know what I'm saying. And I'm saying the following:

1. The built-in GA is very good and accurate enough for a wide range of tasks, both for traders and for any other fields of knowledge. It was designed to be as easy to use as possible.

2. My GA is even better. :)

Yes, many people have questions like "how good is the built-in GA?". It would therefore be very instructive and revealing to organise comparative testing of the built-in algorithm against any other algorithms whose authors want to pit themselves against the brainchild of MQ.

Criteria for comparison could be:

1. Make 100 test runs of the optimisation. The best algorithm is the one whose average value of the maximum of the target function, over the given number of runs, is higher than the others' (a minimal scoring sketch follows this list).

2. A point system in which points are awarded for: 1) the number of target-function evaluations (the fewer the better); 2) search accuracy (the average over 100 control runs); 3) existence of a special function; 4) other criteria.
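
As a rough illustration of how criterion 1 could be scored, here is a hypothetical sketch; the optimiser interface, the per-run problem factory and the function names are assumptions for the example, not part of the proposal:

```python
import statistics

def compare_optimisers(optimisers, make_problem, runs=100):
    """Rank optimisers by the average best value found over `runs` trials.

    optimisers   -- dict of name -> callable(problem) returning the best value found
    make_problem -- returns a fresh problem instance; every optimiser should get
                    the same evaluation budget inside the problem
    """
    scores = {}
    for name, optimise in optimisers.items():
        results = [optimise(make_problem()) for _ in range(runs)]
        scores[name] = statistics.mean(results)
    # Under criterion 1 the highest average maximum wins.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```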

It is worth emphasising that searching for the optimum of a smooth test function (which evolutionary and calculus-based methods such as gradient descent handle very well) is one thing, and searching for the optimum of a function that is not differentiable over its whole domain, exactly as in the optimisation of Expert Advisors (non-smooth functions), is quite another. To account for this feature, we can add noise to a (smooth) test function, run a full search once with a specified step and save the results to a file that the algorithms under study will later use; a sketch of this preparation step follows.
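
A hypothetical sketch of how such a noisy, tabulated test function could be prepared once and then shared by every algorithm under test; the base function, noise level, grid step and file format are all illustrative assumptions:

```python
import csv
import math
import random

def make_noisy_table(filename="test_function.csv", step=0.01,
                     lo=-2.0, hi=2.0, noise=0.05, seed=42):
    """Tabulate a smooth 1-D test function with additive noise on a fixed grid.

    The full grid is evaluated exactly once and written to a file, so every
    algorithm under study later optimises the very same non-smooth target.
    """
    rng = random.Random(seed)
    with open(filename, "w", newline="") as f:
        writer = csv.writer(f)
        x = lo
        while x <= hi:
            smooth = -(x * x) + 0.3 * math.cos(10 * x)   # smooth base function
            writer.writerow([round(x, 6), smooth + rng.uniform(-noise, noise)])
            x += step
```

Because the noisy values are frozen in the file, the comparison between algorithms is reproducible, which is exactly the public reproducibility asked for earlier in the thread.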

So, is there anyone who wants not merely to talk, but to provide their algorithms for testing and comparative analysis, so as to close the subject of "which algorithm is better" once and for all?

You do not need to open the source code of the algorithm; it is sufficient to provide the compiled core of the algorithm plus the include files (to rule out possible cheating), which will expose the calls to the algorithm and the prescribed fitness function itself.

A special welcome to those who criticize MT, be my guest.

As soon as more than three people, including me, want it, we can open a separate thread for testing purposes - so that in the future we can point everyone to that thread, including the professor who "never saw acceptable results".

PS. Thanks to all who directly or indirectly helped in the development of the algorithm.
