Finding a set of indicators to feed into the neural network inputs. Discussion. A tool for evaluating the results.

 
rip >> :
Hmm... You forget that for maximum learning efficiency the network inputs must be statistically independent: there must be no correlation between the data fed to the different inputs. All the MAs (moving averages) are correlated with each other - you can check it yourself. There is a simple and quite handy piece of software for this, AtteStat; it is an add-in for Excel.

Everything is brilliantly simple... I could have figured it out myself... Thank you!!!

Take available indicators and look at the correlation between them... analyse, think, maybe you get some useful ideas :)
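For instance, a minimal sketch of such a correlation check (the file name and layout are assumptions: it presumes the indicator values have been exported from the terminal into a CSV with one named column per indicator):

import numpy as np

# Hypothetical export: one named column per indicator, one row per bar.
data = np.genfromtxt("indicators.csv", delimiter=",", names=True)
names = data.dtype.names
series = np.vstack([data[n] for n in names])

# Pearson correlation matrix; |r| near 1 means two inputs are largely redundant.
corr = np.corrcoef(series)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} vs {names[j]}: r = {corr[i, j]:+.2f}")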

 
IlyaA >> :


The public needs to see a graphical relationship between learning error and time (number of epochs).

we must be talking about different things... I don't do supervised learning (a learning error only exists in that kind of training)... I train towards the maximum of the objective function, and I don't know what the maximum possible value of the objective function is.

 
Urain >>

Go here where each indicator has a detailed description and calculation formulas.

In two days you'll have your own opinion.

I have been looking at it for a long time now.

I've looked at it and read it; maybe I didn't spend enough time or pay enough attention, or maybe it's something else... if nothing else works out, I'll probably go back there... :))) taking the long way round, eh?

 
iliarr >> :

No. I only pass the value of the objective function to the genetic algorithm, and the genetic algorithm returns a vector of gene values, which I convert into the matrix of neural network weights.

Right - a genetic algorithm doesn't use an error function to adjust the weights.

As far as I understand it, you could mark up the M5 history by the maximum profit obtainable on it and use that markup as a fitness function.

I'm just wondering what the function you use to evaluate an individual looks like.
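For illustration, a minimal sketch of the scheme iliarr describes - not his actual code; the network shape, the sign-based trading signal, and the profit calculation are all assumptions - making the point that the GA only ever sees "gene vector in, objective value out":

import numpy as np

def genes_to_weights(genes, n_inputs=4, n_hidden=8):
    # Reshape the flat gene vector into the weight matrices of a one-hidden-layer net.
    w1 = genes[:n_inputs * n_hidden].reshape(n_inputs, n_hidden)
    w2 = genes[n_inputs * n_hidden:].reshape(n_hidden, 1)
    return w1, w2

def fitness(genes, inputs, prices):
    # Objective value of one individual: profit of its signals on history.
    # prices has one more element than inputs has rows.
    w1, w2 = genes_to_weights(genes)
    hidden = np.tanh(inputs @ w1)
    signal = np.sign(hidden @ w2).ravel()    # +1 long, -1 short on each bar
    returns = np.diff(prices)                # price change over each bar
    return float(np.sum(signal * returns))   # the GA maximises this; no error function anywhere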

 
iliarr >> :

What error are we talking about? The larger the objective function value, the fitter the gene...

It is a question of the test-sample error. That is, you take the month that follows the training sample, mark it up according to your algorithm, feed it to the trained network, and compare the network's outputs with the markup. It's the graph of these errors that I'm interested in.


You can also plot the same error on the training sample and thus estimate how your network learns (or whether the genetic algorithm improves from generation to generation).
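A runnable sketch of how those two curves could be recorded, reusing fitness() from the sketch above (toy random data; a simple random-search step stands in for the GA's real selection/crossover/mutation):

import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for the marked-up history: the training month and the next one.
X_train, p_train = rng.normal(size=(500, 4)), np.cumsum(rng.normal(size=501))
X_test, p_test = rng.normal(size=(200, 4)), np.cumsum(rng.normal(size=201))

n_genes = 4 * 8 + 8 * 1                                     # matches genes_to_weights()
best, best_fit = rng.normal(size=n_genes), -np.inf
train_curve, test_curve = [], []

for generation in range(50):
    candidate = best + rng.normal(scale=0.1, size=n_genes)  # stand-in for the GA step
    cand_fit = fitness(candidate, X_train, p_train)
    if cand_fit > best_fit:
        best, best_fit = candidate, cand_fit
    train_curve.append(best_fit)
    test_curve.append(fitness(best, X_test, p_test))

# Plotting train_curve and test_curve generation by generation gives the graph in
# question; a widening gap between them is the out-of-sample degradation to watch for.
print(f"final objective: train {train_curve[-1]:.2f}, test {test_curve[-1]:.2f}")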

 
iliarr >> :

Everything is brilliantly simple... I could have figured it out myself... Thank you!!!

Take available indicators and look at the correlation between them... analyse, think, maybe you get some useful ideas :)

Show the result when you compare all the sets you feed to the inputs :) I think they will all turn out highly correlated: all the indicators mentioned are calculated from the same input data.

 
iliarr >> :

we must be talking about different things... I don't do supervised learning (a learning error only exists in that kind of training)... I train towards the maximum of the objective function, and I don't know what the maximum possible value of the objective function is.

How do you estimate the efficiency of the trained network? This is the graph I want to see.

 
A stupid question - I almost know the answer. You've found a set of indicators, found the weighting coefficients, and started making a profit. If the market changes, will the Expert Advisor be able to adapt to the new conditions?
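One common way to get that adaptation is walk-forward retraining; a rough sketch of the general idea (optimise() and trade() below are stubs, not anything from this thread): re-run the optimisation on a sliding window of recent history before trading each new block, so the weights follow the market.

import numpy as np

rng = np.random.default_rng(1)
history = np.cumsum(rng.normal(size=20_000))  # toy price series

def optimise(train_window):
    return rng.normal(size=40)                # stub for the GA run discussed above

def trade(new_bars, weights):
    pass                                      # stub: run the advisor on the new bars

# Re-optimise on the last `window` bars before trading each `step`-bar block.
window, step = 5000, 500
for start in range(window, len(history) - step, step):
    weights = optimise(history[start - window:start])
    trade(history[start:start + step], weights)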
 
joo wrote >>

A GA is just an optimisation tool (a screwdriver for the machine). With minimal differences you can use it or any other optimisation algorithm (screwdriver).

Yes, a screwdriver, but some screwdrivers can undo a small screw and some can't...

No, with minimal differences you can't use just any of them for a NN; it seems you just don't see those differences.

Backpropagation practically stops "training" a neuron once it becomes oversaturated, while a GA can easily oversaturate a neuron and keep increasing its weights further.
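The saturation effect is easy to see numerically. A quick check with a generic logistic sigmoid (nothing specific to either poster's code) shows why backprop's updates die out on a saturated neuron while a GA, which never looks at the gradient, keeps pushing the weights:

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

for x in (0.0, 2.0, 5.0, 10.0):
    d = sigmoid(x) * (1.0 - sigmoid(x))  # backprop scales its weight updates by this
    print(f"x = {x:4.1f}  sigmoid = {sigmoid(x):.5f}  derivative = {d:.6f}")

# At x = 0 the derivative is 0.25; at x = 10 it is about 0.000045, so the
# backprop update is effectively zero, while a GA feels no such brake.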

 
joo >> :

... at least this function: 2/(1-2^(-x)) - 1

My mistake. It should be: 2/(1+2^(-x)) - 1
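A quick numeric check of the corrected formula; note, as is easy to verify algebraically, that 2/(1+2^(-x)) - 1 = (1 - 2^(-x))/(1 + 2^(-x)) = tanh(x*ln(2)/2), i.e. a bipolar sigmoid bounded in (-1, 1):

import math

def act(x):
    return 2.0 / (1.0 + 2.0 ** (-x)) - 1.0  # the corrected formula

for x in (-10.0, -1.0, 0.0, 1.0, 10.0):
    same = math.tanh(x * math.log(2.0) / 2.0)
    print(f"x = {x:6.1f}  act = {act(x):+.6f}  tanh(x*ln2/2) = {same:+.6f}")

# act(0) = 0 and the output stays strictly inside (-1, 1).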

StatBars >> :

No, with minimal differences you can't use just any of them for a NN; it seems you just don't see those differences.

Backpropagation practically stops "training" a neuron once it becomes oversaturated, while a GA can easily oversaturate a neuron and keep increasing its weights.

Why can't they be seen? The differences are visible. There is no oversaturation of neurons if the search range is chosen correctly. "You just don't know how to cook them." (c) :)

For tasks of different complexity, different tools will be optimal, as you correctly noted (screwdrivers).
