Neural network and inputs - page 10

 
Figar0:
An NN is complex.

1) Everyone gets as far as the inputs (some keep fiddling with them until they go grey),

2) some think the output through and choose the network's task, its type and architecture,

3) only a few take a serious plunge into network training.

There is nothing unimportant in an NN, hence the lack of results.


  1. I tried different inputs - from increments of quotes and indices to FFT transforms and feeding spectral components. I also tried feeding outputs of trained Kohonen maps.
  2. The architecture is a multilayer perceptron with a hyperbolic tangent activation function (sketched after this list).
  3. Various algorithms, from simple Back Prop to the Levenberg-Marquardt algorithm with exact calculation of the Hessian.
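Not the poster's actual code, but a minimal numpy sketch (all names and layer sizes here are illustrative) of such a multilayer perceptron with tanh activation:

```python
import numpy as np

def init_mlp(sizes, rng):
    # One (weights, bias) pair per layer; sizes like [16, 8, 1] are made up here.
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    # Hyperbolic tangent activation on every layer, as described above.
    for W, b in layers:
        x = np.tanh(x @ W + b)
    return x

rng = np.random.default_rng(0)
net = init_mlp([16, 8, 1], rng)
out = forward(net, rng.standard_normal((5, 16)))  # 5 input vectors -> 5 outputs in (-1, 1)
```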
 
TimeMaster:
  Various algorithms, from simple Back Prop to the Levenberg-Marquardt algorithm with exact calculation of the Hessian.
The learning algorithm couldn't matter less, even if it's a genetic one )
 
TheXpert:
The learning algorithm couldn't matter less, even if it's a genetic one )


I agree. The only difference is learning speed. I don't see the point in "catching" 6-7 decimal places of MSE, so I most often use simple Back Prop with three samples: a training sample, a validation sample and a test sample. The validation sample is chosen in different ways: either it comes immediately after the training one as a time interval, or I "grab" random examples from the training sample, removing them from it.
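A hypothetical Python sketch of those two validation-selection schemes (array names and split sizes are mine, not the poster's):

```python
import numpy as np

def split_sequential(data, n_train, n_valid):
    # Validation taken right after the training interval, in time order.
    train = data[:n_train]
    valid = data[n_train:n_train + n_valid]
    test  = data[n_train + n_valid:]
    return train, valid, test

def split_random(data, n_train, n_valid, rng):
    # Validation "grabbed" at random from the training part and removed from it.
    idx = rng.permutation(n_train)
    return data[idx[n_valid:]], data[idx[:n_valid]], data[n_train:]

data = np.arange(100)                        # stand-in for time-ordered examples
tr, va, te = split_sequential(data, 70, 15)
tr2, va2, te2 = split_random(data, 85, 15, np.random.default_rng(1))
```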
 

About genetic algorithms...

It is hard to imagine the "usefulness" of using one on a neural network with more than 10,000 synapses. It requires a population of roughly 10000*1000 individuals, which is not "good" in terms of speed. About the number of epochs I'll stay silent...
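Not anyone's actual setup, just a minimal GA sketch in numpy illustrating why the chromosome length equals the synapse count, which is what drives the population estimate above (population size, mutation rate and the toy fitness are all assumptions of mine):

```python
import numpy as np

def ga_step(pop, fitness, rng, elite=0.1, sigma=0.05):
    # pop has shape (individuals, synapses): each row is one full weight vector,
    # so a bigger network means longer chromosomes and, in practice, more individuals.
    order = np.argsort([fitness(ind) for ind in pop])[::-1]   # best first
    n_elite = max(1, int(elite * len(pop)))
    elites = pop[order[:n_elite]]
    children = elites[rng.integers(0, n_elite, len(pop) - n_elite)]
    children = children + sigma * rng.standard_normal(children.shape)  # mutation
    return np.vstack([elites, children])

rng = np.random.default_rng(2)
pop = rng.standard_normal((50, 700))      # 50 individuals, 700 synapses (toy scale)
toy_fitness = lambda w: -np.sum(w ** 2)   # stand-in for a real trading fitness
for _ in range(20):
    pop = ga_step(pop, toy_fitness, rng)
```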

 

For example, if I want to train a network that 2*2 = 4, not 5 or 6 or 7, then I must clearly understand what I am teaching it. Not so that sometimes 2*2 = 4 and sometimes 5...

The problem statement in general is what matters: what do I want? And so on.


With the market changing, "2*2" does not always equal 4, and therein lies the problem. As the market changes, conditions change, and inconsistent data starts to appear in the training samples. The network does not learn. If you shrink the samples to "capture" only the current state, you run into the sample size itself: it is too small to train a more or less sizeable neural network, and overtraining takes place. Using simple neural networks is not an option either: it is VERY difficult to "cram" useful data into an input that small.
 
TimeMaster:

About genetic algorithms...

It is hard to imagine the "usefulness" of using one on a neural network with more than 10,000 synapses. It requires a population of roughly 10000*1000 individuals, which is not "good" in terms of speed. About the number of epochs I'll stay silent...

Last year I wrote a genetic optimization inside an Expert Advisor in MQL4. For fun, I fed in 88 input parameters in the range 0...200. Training on 15-minute data over 2 weeks took ~20 min (P-4 3 GHz, 2 GB). I got a deposit-drainer: no strategy, just an experiment. If interested, I can tell you about it.

 
icas:

Last year I wrote a genetic optimization inside an Expert Advisor in MQL4. For fun, I fed in 88 input parameters in the range 0...200. Training on 15-minute data over 2 weeks took ~20 min (P-4 3 GHz, 2 GB). I got a deposit-drainer: no strategy, just an experiment. If interested, I can tell you about it.


And if there are 10,000 input parameters, then the problem becomes, in the BEST case, 10000/88 (~100) times harder: roughly 20 min * 100 = 2000 min...

That's roughly a day and a half...
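Redoing that back-of-the-envelope estimate without the rounding (all figures are taken from the posts above):

```python
base_min = 20              # ~20 min for 88 parameters (icas's experiment)
scale = 10_000 / 88        # ~113.6x more parameters, rounded to ~100 above
total_min = base_min * scale
print(f"{total_min:.0f} min = {total_min / 60 / 24:.1f} days")  # ~2273 min, ~1.6 days
```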

Genetics rules when you have billions of years to spare; for the result, just look in the mirror. :-)

 
solar:
It is data collection, data preparation, noise whitening, normalization and so on that need doing. That is how non-stationarity in the market is fought. (In theory )) )


Data collection is a well-researched area: there are known ways and places to download a history of relatively "good" quotes.

Data preparation is also a field of its own. I can tell you that even applying a component-wise transformation to each dimension of the input vector does not eliminate the problem of inconsistent data.

Noise is trickier: the timeframe MATTERS here. Minutes against weeks are naturally "noisy", while 15 minutes against hours is a matter for digging...

Normalization is likewise a trivial issue.
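For what "trivial" looks like in practice, a per-dimension z-score sketch (function names are mine); note that it rescales each input component but, as said above, cannot remove contradictory examples:

```python
import numpy as np

def zscore_fit(X):
    # Per-dimension statistics, computed on the training sample only.
    mu = X.mean(axis=0)
    sd = X.std(axis=0) + 1e-12          # guard against constant columns
    return mu, sd

def zscore_apply(X, mu, sd):
    # Component-wise normalization of each input dimension.
    return (X - mu) / sd

X_train = np.random.default_rng(3).standard_normal((200, 8)) * 5 + 2
mu, sd = zscore_fit(X_train)
X_norm = zscore_apply(X_train, mu, sd)   # each column now ~zero mean, unit variance
```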

 
TimeMaster:

About genetic algorithms... It is hard to imagine the "usefulness" of using one on a neural network with more than 10,000 synapses. It requires a population of roughly 10000*1000 individuals, which is not "good" in terms of speed. About the number of epochs I'll stay silent...

10,000 synapses? Imho, that is serious overkill; I maxed out at about 500-700 and didn't need more. By the way, it is precisely a GA that I train the NN with. Yes, it is slow, but it is more convenient for me and has its own advantages.
TimeMaster:

With the market changing, "2*2" does not always equal 4, and therein lies the problem. As the market changes, conditions change, and inconsistent data starts to appear in the training samples. The network does not learn. If you shrink the samples to "capture" only the current state, you run into the sample size itself: it is too small to train a more or less sizeable neural network.

2*2 isn't always 4? It's enough for me that 2*2 = 4 in, say, 70% of cases, and I don't even filter out the remaining 30% of examples where 2*2 is not 4. The net handles that fine by itself... If 2*2 equals 4 in only 50% of cases, you should try changing something, the inputs for example... I think I've made my point)
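A toy demonstration of that 70/30 point (fully synthetic data and a single tanh neuron, my own setup, not the poster's): with 30% of labels flipped, plain gradient descent still recovers the underlying rule:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((1000, 2))
y = np.sign(X[:, 0])                        # the "2*2=4" rule
flip = rng.random(1000) < 0.3               # 30% contradictory examples
y_noisy = np.where(flip, -y, y)

w = np.zeros(2)                             # one tanh neuron, MSE, gradient descent
for _ in range(200):
    pred = np.tanh(X @ w)
    grad = ((pred - y_noisy) * (1 - pred ** 2)) @ X / len(X)
    w -= 0.5 * grad

acc = np.mean(np.sign(np.tanh(X @ w)) == y)     # measured against the CLEAN rule
print(f"accuracy vs the true rule: {acc:.2f}")  # typically well above 0.70
```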

icas:

Last year I wrote a genetic optimization inside an Expert Advisor in MQL4. For fun, I fed in 88 input parameters in the range 0...200. Training on 15-minute data over 2 weeks took ~20 min (P-4 3 GHz, 2 GB). I got a deposit-drainer: no strategy, just an experiment. If interested, I can tell you about it.

Tell me about it, of course it's interesting.
 
Figar0:
10,000 synapses? Imho, that is serious overkill; I maxed out at about 500-700 and didn't need more. By the way, it is precisely a GA that I train the NN with. Yes, it is slow, but it is more convenient for me and has its own advantages.

2*2 isn't always 4? It's enough for me that 2*2 = 4 in, say, 70% of cases, and I don't even filter out the remaining 30% of examples where 2*2 is not 4. The net handles that fine by itself... If 2*2 equals 4 in only 50% of cases, you should try changing something, the inputs for example... I think I've made my point)

Tell me about it, of course it's interesting.

Can you give me an example where 2*2 is not equal to 4?

I've often read about this in the literature, but unfortunately without examples.
