
Hello, I won't ask any questions.
Just want to say a huge thank you for your work! Thank you!
It's very helpful.
Hello, I would also like to say a big thank you. GA came to my attention only recently, and every source I found "about GA" left me despondent; I didn't know where to turn.... And then I came across your article through some link. In a word, hurrah! ))
I have a couple of questions. I realise the article was written a long time ago, but still...
1. declared variables are not used anywhere.
Is this a plan for the future?
2. I ran UGA several times and got different results: one run was correct, the next not quite. The question is which of the UGA parameters should be set "bigger" to increase the number of correct answers.
The attachment shows what I observed. The left picture shows an incorrect result.
...The question is which of the GA parameters should be set "bigger" to increase the number of correct answers.
In general, unfortunately, there is no unambiguous answer, and there cannot be one. Everything depends on the task.
In your particular case, try adjusting these parameters:
:)
Increasing the first two parameters unambiguously improves convergence but, of course, also increases the search time.
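To make that tradeoff concrete, here is a minimal sketch with a toy elitist GA. This is not the article's UGA: the test function, ranges, and operators are invented purely for illustration, and the two parameters being varied stand in for "population size" and "number of epochs".

```python
import random

def run_ga(pop_size, generations, seed=1):
    """Toy elitist GA minimising the 3-D sphere function on [-5, 5]^3.
    A stand-in for UGA, only to show the budget-vs-quality tradeoff."""
    rng = random.Random(seed)
    dim, lo, hi = 3, -5.0, 5.0

    def error(x):  # smaller is better; the optimum is at the origin
        return sum(v * v for v in x)

    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=error)
    evaluations = pop_size
    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)  # pick two parents
            # midpoint crossover plus Gaussian mutation, clipped to bounds
            child = [(x + y) / 2 + rng.gauss(0, 0.3) for x, y in zip(a, b)]
            children.append([min(hi, max(lo, v)) for v in child])
        evaluations += pop_size
        pop = sorted(children + [best], key=error)[:pop_size]  # elitism
        best = pop[0]
    return error(best), evaluations

err_small, n_small = run_ga(pop_size=10, generations=5)
err_big, n_big = run_ga(pop_size=100, generations=100)
print(f"small budget: error={err_small:.4f} after {n_small} evaluations")
print(f"large budget: error={err_big:.6f} after {n_big} evaluations")
```

The large-budget run gets much closer to the optimum, but only by spending orders of magnitude more fitness evaluations — the same pattern as the 200 ms vs. 103203 ms observation below.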
Thanks for the reply. I played around and observed.... There is no point in sharing every observation; the two "extreme variants" are quite enough.
With the first two parameters equal to 50 and 2 respectively, the running time of the algorithm is 170-200 ms.
With them equal to 100 and 50, the running time was 103203 ms. Yes, the algorithm produced an absolute match between "as it should be" and "as it turned out".
Alas, it seems to me that the time spent does not correspond to the results achieved at all.
If the problem is solvable by Newtonian methods, it should be solved by them: you will get accurate results in a short time.
If not, then welcome to GA. The saying about cracking nuts with a microscope comes to mind here.
Unfortunately, there are no simple methods. Whatever task you take, the problem of optimisation comes to the fore, since all the incarnations of a time series that we call indicators are parametric. Even the humble zigzag, it would seem....
And since there are parameters, they have to be tuned. The simplest variant is nested loops; as practice shows, it is most often not an option. That is why we use GA, and not out of wickedness at all. )) It is an urgent necessity.
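The arithmetic behind "nested loops are not an option" is easy to check. The counts below are hypothetical (5 parameters, 100 steps each, and a GA budget of 100 individuals over 500 epochs), but the shape of the comparison holds for any realistic indicator:

```python
# Hypothetical sweep: 5 indicator parameters, 100 values each.
steps_per_param = 100
n_params = 5
grid_evaluations = steps_per_param ** n_params  # full nested-loop sweep

# A GA's budget is population_size * generations, independent of k^n.
ga_evaluations = 100 * 500

print(f"nested loops: {grid_evaluations:,} fitness evaluations")
print(f"GA budget:    {ga_evaluations:,} fitness evaluations")
print(f"ratio:        {grid_evaluations // ga_evaluations:,}x")
```

The grid grows as k^n with the number of parameters, while the GA budget is chosen up front — which is exactly why the brute-force route stops being viable so quickly.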
A year or two ago I ran a clean experiment. If you take two moving averages with long periods, their difference (in a separate window) looks like a fairly smooth graph of clearly sinusoidal form. I wanted to see
whether the sum of several sinusoids could reproduce this graph. I wrote a script that was supposed to fit 3 sinusoids. At first I wanted 5, then changed my mind to three (it's an experiment, why overdo it?). The result was 3 loops (2 of them nested).
Of course, if you remove the comment output from the loops, loop-based algorithms run faster, but not 100 times faster, unfortunately, and the ability to track the process disappears completely.
So the experiment was cancelled before it even started. But the scale and weight of a seemingly simple situation surprised and almost frightened me. ))
Andrew, there are 2 more questions:
1. In the alternative zigzag example you used the phrase "...when the genotype of a chromosome does not correspond to the phenotype". What does this mean in relation to GA?
2. As I understand it, the ranking is done in descending order, i.e. the higher the value in Colony[0][chromos], the more adapted the individual?
3. I.e. if I understood correctly, when working with curves in the FF one can (should) use the correlation coefficients r or R, since they tend towards 0->1, and one cannot use MSE, since it tends towards 0.
1. If the genes of a chromosome are the same as the arguments of the function being optimised, the genotype corresponds to the phenotype (genotype: the gene values; phenotype: the argument values). If the genes and the arguments are not equal (some kind of transformation is used), then they do not correspond. Google the concepts of genotype and phenotype in biology.
2. Yes. But it is not fundamental. You can explicitly specify the direction of ranking, or you can multiply the FF value by -1.
3. Sorry, I didn't understand the question.
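Both answers can be sketched in a few lines. The gene range [0, 1] and the argument bounds below are assumptions for illustration, not the article's actual encoding:

```python
def decode(genes, bounds):
    """Genotype -> phenotype: map each gene in [0, 1] to its argument range.
    Because this transformation exists, gene values and argument values differ."""
    return [lo + g * (hi - lo) for g, (lo, hi) in zip(genes, bounds)]

bounds = [(-10.0, 10.0), (1.0, 100.0)]  # hypothetical argument ranges
phenotype = decode([0.0, 0.5], bounds)
print(phenotype)  # -> [-10.0, 50.5]: phenotype differs from the genotype

# Answer 2: with descending ranking ("bigger = fitter"), a quantity you
# want to MINIMISE (such as MSE) is handled by flipping its sign.
mse_values = [0.9, 0.1, 0.4]
fitness = [-v for v in mse_values]
ranked = sorted(range(3), key=lambda i: fitness[i], reverse=True)
print(ranked)  # -> [1, 2, 0]: the individual with the smallest MSE ranks first
```

The sign flip is the simplest way to feed a minimisation measure into a maximising, descending-ranked GA without touching the ranking code itself.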
Thanks.
1. That's what I guessed. But guessing is a murky and dangerous business; it is much better to know for sure. Thanks for the advice about googling, but I don't think I will need it. Since school I remember recessive, dominant, allele, phenotype, genotype, homozygous, heterozygous.... My goodness, what a load of stuff is still in my head. Well, they certainly knew how to impart knowledge in the Soviet school.... ))
2. Also clear.
3. There was no third point, just a third line (a continuation of the second paragraph) with a thought spoken out loud, along the lines of "If I am right...". It is not really a question, although it does look like one. Well then, a rhetorical question. I'll explain the meaning with a purely synthetic example. We have a curved signal of sinusoidal form, and we know that it is a sum of three sinusoids. I am almost certain that the GA will cope with this task easily and produce the periods of all three sinusoids. But in the process we somehow need to measure how similar the sum of the three sinusoids is to the reference. In my arsenal there are three such measures: r (Pearson), R² (R-squared) and MSE. Especially since these same things are used in neural network paradigms as measures of "fitness".
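The three measures behave in opposite directions, which is exactly why the sign-flip trick from point 2 matters. A minimal sketch (taking R² simply as r², as in the plain-correlation case; the signals are invented):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mse(x, y):
    """Mean squared error between two equal-length sequences."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

reference = [math.sin(0.1 * t) for t in range(200)]
good_fit = [v + 0.01 for v in reference]             # nearly perfect model
bad_fit = [math.sin(0.25 * t) for t in range(200)]   # wrong period

for name, model in (("good", good_fit), ("bad", bad_fit)):
    r = pearson_r(reference, model)
    print(f"{name}: r={r:+.3f}  R2={r * r:.3f}  MSE={mse(reference, model):.3f}")
```

A better fit pushes r and R² towards 1 and MSE towards 0. So with a descending-ranked (maximising) GA, r or R² can serve as the fitness directly, while MSE must be negated first — either choice works once the direction is handled.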
Thanks again, that makes it clearer and more transparent. Again, I have only recently dived into the topic of GAs, so all the epithets like novice, dummy and simpleton are, alas, mine.... ))
P.S. I know about the biological analogues, of course. But that knowledge serves purely to nurture an understanding of the essence of the processes described in high-level programming languages, and even then not everywhere and not always. I do not tie the biological analogues to their neurocomputing counterparts in a tight knot: somehow in biology (nature) everything simply works, but in neurocomputing, for some reason, it most often does not.
Andrew, one more question.
Can we say that these values of the variables are suitable for most optimisation problems? As some sources put it, "...90% of problems can be solved using an ordinary perceptron".