Discussing the article: "Population optimization algorithms: Simulated Annealing (SA) algorithm. Part I" - page 2

 
Aleksey Nikolayev #:
Exactly.

It is an extremely inconvenient technology both for coding and for further use in practice. This is confirmed, for example, by the fact that the in-house optimiser does not use it.

This approach is hardly applicable when multiple optimisations have to be performed (an indefinite number of times, possibly with a different set of parameters each time), for example, for ensembles of ML models.

What can be said here... OpenCL is not as terrible and inconvenient as it seems; its code is syntactically no different from MQL5 (unless you use MQL5-specific functions). You can parallelise not only individual logical sections of code but also, for example, the entire logic of an Expert Advisor in OpenCL, organising runs through history in the manner of the standard optimiser's agents. In this way, optimisation/training can be organised while the Expert Advisor is running online.
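A minimal sketch of the idea, in plain Python rather than OpenCL/MQL5 (the toy "EA" and all function names here are hypothetical, invented only for illustration): each candidate parameter set is evaluated independently over the same price history, which is exactly the embarrassingly parallel workload that maps onto OpenCL work-items, one work-item per full run through history.

```python
def run_ea_over_history(prices, ma_period):
    """Toy 'Expert Advisor': hold a long position while the previous close
    is above a simple moving average; return total PnL in price units."""
    pnl = 0.0
    for i in range(ma_period, len(prices)):
        ma = sum(prices[i - ma_period:i]) / ma_period
        if prices[i - 1] > ma:          # signal uses only already-closed bars
            pnl += prices[i] - prices[i - 1]
    return pnl

# Synthetic upward-drifting price series standing in for real history.
prices = [100.0 + 0.5 * i + (3.0 if i % 7 == 0 else 0.0) for i in range(200)]
candidates = [2, 5, 10, 20, 50]         # hypothetical MA periods to optimise

# This map is the parallelisable part: in OpenCL, each candidate would be
# one work-item executing the whole pass through history.
results = {p: run_ea_over_history(prices, p) for p in candidates}
best_period = max(results, key=results.get)
```

The key design point is that each evaluation is a pure function of (history, parameters), so runs share no state and can be dispatched to GPU work-items or CPU threads without synchronisation.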

MetaQuotes has provided parallelisation capabilities, but native language features for it would be great. I think it would be easier for the developers to implement thread support for functions (faster than waiting for users to build it themselves) than automatic parallelisation of code sections. This is a wish for the developers; I hope it will be heard.

 
Andrey Dik #:
What can be said here... OpenCL is not as terrible and inconvenient as it seems; its code is syntactically no different from MQL5 (unless you use MQL5-specific functions). You can parallelise not only individual logical sections of code but also, for example, the entire logic of an Expert Advisor in OpenCL, organising runs through history in the manner of the standard optimiser's agents. In this way, optimisation/training can be organised while the Expert Advisor is running online.
The problems are not so much in the coding itself, although there will probably be some due to the lack of manuals. As far as I know, there are problems when porting programs to GPUs other than the one they were debugged on. I am also not sure whether this will work when MT5 runs on Linux via Wine. And a solution, once found, can always break due to unexpected MT updates, etc.
Andrey Dik #:
MetaQuotes has provided parallelisation capabilities, but native language features for it would be great. I think it would be easier for the developers to implement thread support for functions (faster than waiting for users to build it themselves) than automatic parallelisation of code sections. This is a wish for the developers; I hope it will be heard.

Imho, those capabilities are rather poor.

 
Andrey Dik #:

A question has arisen about population annealing. Would it make sense for each solution in the population to choose its own annealing parameters (randomly within reasonable limits)? Could this (a) improve convergence and (b) serve as an analogue of selecting optimal metaparameters?

 
Aleksey Nikolayev #:
The problems are not so much in the coding itself, although there will probably be some due to the lack of manuals. As far as I know, there are problems when porting programs to GPUs other than the one they were debugged on. I am also not sure whether this will work when MT5 runs on Linux via Wine. And a solution, once found, can always break due to unexpected MT updates, etc.

OpenCL was designed precisely as a universal way to organise parallel computations on multi-core devices (whether GPU or CPU does not matter). The probability of problems with OpenCL programs on different devices is no higher (and may even be lower) than that of ordinary Windows applications on computers with different hardware.

I don't know how things stand with Wine; there have always been problems with it, and it depends on the specifics and quality of the virtualisation of the Windows environment.

 
Aleksey Nikolayev #:

A question has arisen about population annealing. Would it make sense for each solution in the population to choose its own annealing parameters (randomly within reasonable limits)? Could this (a) improve convergence and (b) serve as an analogue of selecting optimal metaparameters?

Good question. When testing algorithms and selecting their external parameters, I proceed from the overall aggregate performance on a set of test functions, although the best parameters may differ for each individual function (and they usually do). In addition, different external parameters may also turn out to be best for different problem dimensions. Therefore, yes:

a) it will improve convergence accuracy on different types of problems and reduce the probability of getting stuck.

b) yes.

The only caveat is that this technique is likely to reduce convergence speed a little (or significantly; one would have to check) while increasing convergence accuracy.
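The scheme being discussed can be sketched in a few lines. This is a minimal Python illustration (not the article's MQL5 code), assuming a standard Metropolis acceptance rule and uniform neighbourhood steps; every individual in the population draws its own initial temperature and cooling rate randomly within fixed limits, and the ranges used here are arbitrary assumptions.

```python
import math
import random

def population_annealing(f, lo, hi, pop_size=20, iters=200, seed=1):
    """Minimise f on [lo, hi] with a population of annealing walkers where
    EACH individual carries its own randomly drawn annealing parameters."""
    rng = random.Random(seed)
    pop = []
    for _ in range(pop_size):
        x = rng.uniform(lo, hi)
        pop.append({
            "x": x,
            "fx": f(x),
            "T": rng.uniform(0.5, 5.0),         # per-individual temperature
            "alpha": rng.uniform(0.90, 0.999),  # per-individual cooling rate
        })
    best = min(pop, key=lambda s: s["fx"])
    best_x, best_fx = best["x"], best["fx"]
    for _ in range(iters):
        for ind in pop:
            step = 0.1 * (hi - lo) * ind["T"]
            cand = min(hi, max(lo, ind["x"] + rng.uniform(-step, step)))
            fc = f(cand)
            delta = fc - ind["fx"]
            # Metropolis rule: always accept improvements, sometimes accept
            # worse moves, with probability controlled by this walker's T.
            if delta < 0 or rng.random() < math.exp(-delta / max(ind["T"], 1e-12)):
                ind["x"], ind["fx"] = cand, fc
                if fc < best_fx:
                    best_x, best_fx = cand, fc
            ind["T"] *= ind["alpha"]            # each walker cools at its own rate
    return best_x, best_fx

x_best, f_best = population_annealing(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

Because the walkers cool at different rates, slow-cooling individuals keep exploring (reducing the chance of getting stuck) while fast-cooling ones refine, which is exactly the speed-versus-accuracy trade-off mentioned above.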

 
Andrey Dik #:

Good question. When testing algorithms and selecting their external parameters, I proceed from the overall aggregate performance on a set of test functions, although the best parameters may differ for each individual function (and they usually do). In addition, different external parameters may also turn out to be best for different problem dimensions. Therefore, yes:

a) it will improve convergence accuracy on different types of problems and reduce the probability of getting stuck.

b) yes.

The only caveat is that this technique is likely to reduce convergence speed a little (or significantly; one would have to check) while increasing convergence accuracy.

Thanks for the informative reply. If it comes to practical experiments and there are interesting results, I will post them here. For now, I am just getting acquainted with your series of articles on optimisation out of curiosity.