Hybrid neural networks - page 4

 
joo >> :

to dentraf

MQL4

to IlyaA

Yes, I do. And also about 200-300 books by different authors. But I figured I would master NN and GA on my own faster than I could read through that library. And so it turned out - faster.

By mastering I mean practical application, not familiarity with terminology.


Then see Haykin, p. 330, section 4.19. I have a good command of this material - you'll get the hang of it in no time.
 
joo >> :

I would also like to advise you on this: create an additional population into which you place the best individuals from each epoch (I call it the "Epoch Gene Pool", or GE). When mating, take individuals both from the current population and from the GE. This drastically reduces the number of fitness-function (ff) runs. This method should not be confused with elitist selection.


Let's get to the bottom of crossover. I select individuals by a probability that reflects how much better a given perceptron performed than the others; accordingly, the best one gets, say, an 80% chance of participating in each pair and the worst one a 20% chance. That's how they live on. What exactly do you mean by your method with the supplementary population?
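A minimal MQL4-style sketch of this kind of probability-weighted parent selection (roulette-wheel style; the array name and the uniform random draw are my illustration, not IlyaA's actual code):

```mql4
// Pick a parent index with probability proportional to fitness,
// so better perceptrons enter more pairs (assumes non-negative fitness values).
int SelectParent(const double &fitness[])
  {
   int    i, n = ArraySize(fitness);
   double sum = 0.0;
   for(i = 0; i < n; i++)
      sum += fitness[i];

   double r   = (MathRand() / 32768.0) * sum;   // random point on the "wheel"
   double acc = 0.0;
   for(i = 0; i < n; i++)
     {
      acc += fitness[i];
      if(r < acc)
         return(i);                             // a fitter individual owns a wider slice
     }
   return(n - 1);                               // guard against rounding at the edge
  }
```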
 
IlyaA >> :
Thank you - very detailed. Basically yes: if you have already run the algorithm several times with different parameters, then use those results. So, 200... okay, let's keep it that way. Then the next point. We should look for the profitable "trick" (a combination of candlesticks and indicators), searching for it not with our eyes but with the perceptron. Let it build the linearly separable groups for us. Search criterion: Profit => max. Stop whenever we choose. Then analyze the weights and identify the "trick". Then a normal indicator and a trading system. Quite complicated, but that's only at first glance. Digging through the weights is very interesting (at least for me). Question :) I have to run 5 years of history on candlesticks + indicators (optional) through each individual, and there are now 200 of them in each population. This is a HUGE resource consumption, and besides, we don't know when we will stop. Let's try to reformulate the problem or otherwise preserve the most important property of this design - detection of the "trick" by the machine.

5 years on what TF? If M1, yes, a long time. If W1, then very fast.

There is no need to make a wunderkind out of the network and try to teach it every historical moment (every profitable pattern). Most of them will never happen again - well, at least 99% of them.

I consider 1000-3000 bars of history optimal, retraining whenever the error during operation rises above a specified level (a small sketch of such a trigger is below). Although opinions differ on the number of training examples (bars), and there are probably people here who will argue the point.
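A small sketch of the kind of retrain trigger described above (the windowed mean absolute error and the threshold are placeholders of my own, not joo's actual code):

```mql4
// Signal that the network should be retrained (e.g. on the last 1000-3000 bars)
// once the average absolute error over the most recent 'window' samples
// climbs above 'threshold'.
bool NeedRetraining(const double &recentErrors[], int window, double threshold)
  {
   int n = ArraySize(recentErrors);
   if(n < window)
      return(false);                    // not enough fresh data yet

   double sum = 0.0;
   for(int i = n - window; i < n; i++)
      sum += MathAbs(recentErrors[i]);

   return(sum / window > threshold);
  }
```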

 
joo >> :

5 years on what TF? If M1, yes, a long time. If W1, then very fast.

There is no need to make a wunderkind out of the network and try to teach it every historical moment (every profitable pattern). Most of them will never happen again - well, at least 99% of them.

I consider 1000-3000 bars of history optimal, retraining whenever the error during operation rises above a specified level. Although opinions differ on the number of training examples (bars), and there are probably people here who will argue the point.


Well, I guess I'll start. Why do I suggest a lot of bars? Because (asked myself, answered myself), as has been correctly noted, most tricks are one-offs and the network can simply memorize them. And it will turn out as usual: on the tested interval everything is chocolate, and on all the others the account just feeds the market. My point is that the best filter for one-off moments is to integrate them away. The network will then have a better chance of isolating the recurring trick. What's your take?
 
gumgum >> :

Yesterday I wrote a 10-15-10-1 network

>> go on...


Have you checked the perceptron for overtraining (overfitting, rote memorization of the examples)?
 
IlyaA >> :


Then see Haykin, p. 330, section 4.19. I have a good command of this material - you'll get the hang of it in no time.

Don't have time to read yet, maybe I will. Thank you.

IlyaA >> :

Let's get to the bottom of crossover. I select individuals by a probability that reflects how much better a given perceptron performed than the others; accordingly, the best one gets, say, an 80% chance of participating in each pair and the worst one a 20% chance. That's how they live on. What exactly do you mean by your method with the supplementary population?

Yes. The offspring of these parents are introduced into the new population. The remaining individuals die ignominiously without ever experiencing love. :)

Here's the catch: we don't know whether there were any worthy specimens among the dead. So what if they look like freaks - they could still have produced a new, strong generation.

Approach the GA question as if you were trying to breed a swan from Drosophila flies.

The algorithm is this (a rough code sketch follows the list):

1. Create a population of random individuals (most of them will be poor).
2. Determine the fitness of each individual.
3. Copy the population into the gene pool.
4. Cross individuals from the population and from the gene pool (selecting parents from both).
5. Place the new individuals into a new population.
6. Determine the fitness of each new individual.
7. Take the best individuals from the old and the new population and place them into the gene pool if they are better than the ones already there (replacing them).
8. Replace the old population with the individuals of the new population.
9. Repeat from step 4.

And so on, until no individual better than the best one in the gene pool appears.
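A rough MQL4 sketch of that loop, as I read it (the population sizes, the stub fitness function, the crossover, and the single-epoch stopping rule are all my own assumptions; joo has not posted code):

```mql4
// Sketch of the "Epoch Gene Pool" loop (steps 1-9 above), simplified:
// an individual is a vector of GENES weights, and Fitness() is a stub that
// should be replaced by a real run of the network over the training bars.
#define POP_SIZE  200
#define POOL_SIZE  20
#define GENES      10

double pop[POP_SIZE][GENES];      // current population
double newPop[POP_SIZE][GENES];   // offspring
double pool[POOL_SIZE][GENES];    // the "Epoch Gene Pool" (GE)
double popFit[POP_SIZE], newFit[POP_SIZE], poolFit[POOL_SIZE];

double Rand01() { return(MathRand() / 32768.0); }

// Stub fitness: a toy target, just so the sketch runs end to end.
double Fitness(const double &p[][GENES], int idx)
  {
   double s = 0.0;
   for(int g = 0; g < GENES; g++)
      s -= MathAbs(p[idx][g] - 0.5);
   return(s);
  }

// One-point crossover of pop[a] and pool[b] into newPop[c], plus light mutation.
void Crossover(int a, int b, int c)
  {
   int cut = (int)(Rand01() * GENES);
   for(int g = 0; g < GENES; g++)
      newPop[c][g] = (g < cut ? pop[a][g] : pool[b][g]);
   int m = (int)(Rand01() * GENES);
   newPop[c][m] += (Rand01() - 0.5) * 0.1;
  }

void OnStart()
  {
   int i, g, k;
   // steps 1-2: random population and its fitness
   for(i = 0; i < POP_SIZE; i++)
     {
      for(g = 0; g < GENES; g++) pop[i][g] = Rand01();
      popFit[i] = Fitness(pop, i);
     }
   // step 3 (simplified): seed the gene pool with the first POOL_SIZE individuals
   for(i = 0; i < POOL_SIZE; i++)
     {
      for(g = 0; g < GENES; g++) pool[i][g] = pop[i][g];
      poolFit[i] = popFit[i];
     }
   // steps 4-9: iterate until an epoch brings no improvement to the gene pool
   for(int epoch = 0; epoch < 1000; epoch++)
     {
      bool improved = false;
      for(i = 0; i < POP_SIZE; i++)          // steps 4-6
        {
         Crossover((int)(Rand01() * POP_SIZE), (int)(Rand01() * POOL_SIZE), i);
         newFit[i] = Fitness(newPop, i);
        }
      for(i = 0; i < POP_SIZE; i++)          // step 7: push improvements into the pool
        {
         int worst = 0;
         for(k = 1; k < POOL_SIZE; k++)
            if(poolFit[k] < poolFit[worst]) worst = k;
         if(newFit[i] > poolFit[worst])
           {
            for(g = 0; g < GENES; g++) pool[worst][g] = newPop[i][g];
            poolFit[worst] = newFit[i];
            improved = true;
           }
        }
      for(i = 0; i < POP_SIZE; i++)          // step 8: offspring become the population
        {
         for(g = 0; g < GENES; g++) pop[i][g] = newPop[i][g];
         popFit[i] = newFit[i];
        }
      if(!improved) break;                   // nothing beat the gene pool this epoch
     }
   Print("Best fitness in the gene pool: ", poolFit[ArrayMaximum(poolFit)]);
  }
```

In the described scheme the pool is seeded from the whole population and step 7 also considers the old population; the smaller pool and the one-epoch stopping rule here are simplifications just to keep the sketch short.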

 
Shit, while I'm writing one post, you write three! Settle down a bit :)
 
joo >> :

Don't have time to read yet, maybe I will. Thank you.

Yes. The offspring of these parents are introduced into the new population. The rest die ignominiously without ever experiencing love. :)

Here's the catch: we don't know whether there were any worthy specimens among the dead. So what if they look like freaks - they could still have produced a new, strong generation.

Approach the GA question as if you were trying to breed a swan from Drosophila flies.

The algorithm is this:

1. Create a population of random individuals (most of them will be poor).
2. Determine the fitness of each individual.
3. Copy the population into the gene pool.
4. Cross individuals from the population and from the gene pool (selecting parents from both).
5. Place the new individuals into a new population.
6. Determine the fitness of each new individual.
7. Take the best individuals from the old and the new population and place them into the gene pool if they are better than the ones already there (replacing them).
8. Replace the old population with the individuals of the new population.
9. Repeat from step 4.

And so on, until no individual better than the best one in the gene pool appears.




OK, I'll wait and see :) This is very similar to beam search. I compared the performance of a beam-search-style algorithm with probabilistic crossover, and the results were better (fewer generations) with probabilistic crossover. Still, there is a good chance your variant works well - we need to run an experiment. What do you say - on XOR, for example?
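A tiny sketch of what an XOR test could look like: score a candidate 2-2-1 network on the four XOR patterns and use the negative squared error as fitness. The network shape, the 9-weight layout, and all the names are my own illustration, not something either poster specified:

```mql4
// Toy XOR fitness: the GA individual is a vector of 9 weights for a 2-2-1
// sigmoid network (6 hidden-layer weights incl. biases + 3 output weights).
// Returns -SSE over the four XOR patterns, so "bigger is better" (0 is perfect).
double Sigmoid(double x) { return(1.0 / (1.0 + MathExp(-x))); }

double XorFitness(const double &w[])   // w[0..8]
  {
   double x1[4]     = {0, 0, 1, 1};
   double x2[4]     = {0, 1, 0, 1};
   double target[4] = {0, 1, 1, 0};
   double sse = 0.0;

   for(int p = 0; p < 4; p++)
     {
      double h1  = Sigmoid(w[0]*x1[p] + w[1]*x2[p] + w[2]);
      double h2  = Sigmoid(w[3]*x1[p] + w[4]*x2[p] + w[5]);
      double out = Sigmoid(w[6]*h1 + w[7]*h2 + w[8]);
      sse += (out - target[p]) * (out - target[p]);
     }
   return(-sse);
  }
```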
 
IlyaA >> :


Well, I guess I'll start. Why do I suggest a lot of bars? Because (asked myself, answered myself), as has been correctly noted, most tricks are one-offs and the network can simply memorize them. And it will turn out as usual: on the tested interval everything is chocolate, and on all the others the account just feeds the market. My point is that the best filter for one-off moments is to integrate them away. The network will then have a better chance of isolating the recurring trick. What's your take?

Think of the NN as if you were thinking about your own brain. How would you "integrate" this? Imagine that you know something about painting. Maybe you actually do.

So, you know the characteristic techniques of famous masters (the profitable tricks). Now paste the works of all the famous masters into one layer in Photoshop. Will you recognize a single technique of any artist you know? I doubt it. Neither will the NN.

 
joo >> :

Think of the NN as if you were thinking about your own brain. How would you "integrate" this? Imagine that you know something about painting. Maybe you actually do.

So, you know the characteristic techniques of famous masters (the profitable tricks). Now paste the works of all the famous masters into one layer in Photoshop. Will you recognize a single technique of any artist you know? I doubt it. Neither will the NN.


What is integrating out noise? (Again, I asked and answered :) You have a random variable m(t) uniformly distributed on the interval [-1, 1]; accordingly, its expectation is 0. That's great. We also have a signal s(t) of complex shape, and the signal and noise amplitudes are comparable. The problem is to extract s(t) from s(t) + m(t), given that the measurement s(t) + m(t) can be repeated an unlimited number of times. The noise is new each time, so the observed curve differs noticeably from one repetition to the next. The method is surprisingly simple: take the average of s(t) + m(t) over, say, 1000 repetitions. The noise, having zero expectation, is integrated out and removed; the more repetitions we average, the less noise remains. (A small sketch of this averaging is below.)
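A minimal sketch of the averaging argument above (the signal shape and the number of repetitions are arbitrary choices of mine, just to make the effect visible):

```mql4
// Averaging N noisy copies of the same signal: uniform noise in [-1, 1]
// has zero mean, so it cancels out as N grows and s(t) re-emerges.
#define SAMPLES 100
#define REPS    1000

double Noise()       { return(2.0 * MathRand() / 32767.0 - 1.0); }  // uniform [-1, 1]
double Signal(int t) { return(MathSin(t * 0.2)); }                  // some "complex" s(t)

void OnStart()
  {
   double avg[SAMPLES];
   ArrayInitialize(avg, 0.0);

   for(int r = 0; r < REPS; r++)                 // repeat the noisy measurement
      for(int t = 0; t < SAMPLES; t++)
         avg[t] += (Signal(t) + Noise()) / REPS; // accumulate the running mean

   // avg[t] is now close to Signal(t); the residual noise shrinks roughly as 1/sqrt(REPS)
   Print("s(10) = ", Signal(10), "   estimate = ", avg[10]);
  }
```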

Now, here's my idea again in more detail. To keep the network from trying to capture every feature at once (call it the abstractionism effect), we reduce the number of neurons. That gives us better generalization ability and makes it impossible for the perceptron to memorize a large amount of material. We aim the network at finding just one thing in the data - but the most profitable one. What do you think?
