Neural network - page 4

 
joo wrote >> the only right solution would be 'learning without a teacher' (unsupervised learning).

Absolutely. Because there is no perfect teacher for the network. Or rather, we can't know what the perfect teacher for the network might be.....))))

 
LeoV >> :

Absolutely. Because there is no perfect teacher for the network. Or rather, we can't know what the perfect teacher for the network might be.....))))

Exactly! You can only train the net with a teacher on a function we already know, such as a sine wave. There we can, with a clear conscience, feed the net the value that follows the trained point as the teacher. That trick will not work with the market.
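To make that concrete, here is a minimal sketch (my own illustration, not code from the thread) of "training with a teacher" on a known function: for every input window of the sine wave, the teacher is simply the next point, which we can always compute because the function itself is known in advance. Window size, layer size and learning rate here are arbitrary assumptions.

```python
# Minimal sketch: supervised training on a known function (a sine wave),
# where the "teacher" for each window is just the next point of the series.
import numpy as np

rng = np.random.default_rng(0)

t = np.linspace(0, 8 * np.pi, 1000)
series = np.sin(t)

window = 10
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]                      # the next point, used as the teacher

# One hidden layer, trained by plain gradient descent on squared error.
W1 = rng.normal(0, 0.1, (window, 16))
b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1))
b2 = np.zeros(1)

lr = 0.01
for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)             # hidden activations
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    # Backpropagate the mean squared error.
    g_pred = (2 * err / len(y))[:, None]
    gW2 = h.T @ g_pred
    gb2 = g_pred.sum(0)
    g_h = g_pred @ W2.T * (1 - h ** 2)
    gW1 = X.T @ g_h
    gb1 = g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("final training MSE:", np.mean(err ** 2))
```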

 
joo wrote >> Exactly! You can only train the net with a teacher on a function we already know, such as a sine wave. There we can, with a clear conscience, feed the net the value that follows the trained point as the teacher. That trick will not work with the market.

Yes, and in this case, the smaller the error on the training sample, the better for the network. That's not going to work with the market.

 

It makes no sense to build a network strictly around training with a teacher; in that case it is easier to describe all the patterns manually in code, and there will be fewer errors.

The true purpose of a neural network is to learn without a teacher, to learn what the teacher doesn't know:

to find patterns that you personally do not see (and no one else does, which is the advantage).

Blonde to her friend: do you know what "I don't know" means?

Friend: "I don't know."

Blonde: See, no one knows.

 
LeoV >> :
gpwr wrote >> It's written everywhere that the network needs to be trained until the error on the test sample stops decreasing.

It's actually much more complicated than that. When you train it right down to the minimum error on the test sample, you are most likely to get an overtrained network......

Not "most likely", it just is. That's the truth. And reducing the network's capacity for better generalisation doesn't help. Minimum error on the test sample means failure on the forward test.
 
muallch wrote >>
Not "most likely", it just is. That's the truth. And reducing the network's capacity for better generalisation doesn't help. Minimum error on the test sample means failure on the forward test.

Agreed. I used the phrase "most likely" in the sense of unambiguous overtraining .....)))) Although reducing the network's capacity does sometimes help......

 
muallch >> :
Not "most likely", it just is. That's the truth. And reducing the network's capacity for better generalisation doesn't help. Minimum error on the test sample means failure on the forward test.

You citizens are overcomplicating something. Either you've invented something new, or we don't understand each other.

All textbooks say that training a network with a teacher is done by splitting the data into a training sample and a test sample. The network is trained by minimising the error on the training sample while monitoring the error on the test sample (the out-of-sample test, or verification). Training stops when the error on the test sample stops decreasing (shown with the dotted line below), while the error on the training sample may continue to decrease, as shown in this figure


You claim that training of the network must be stopped even earlier than the dotted line in the picture. Where exactly? Then why train the network at all? By your logic, pick any values for the weights and go ahead and trade with the network. That is when you'll be guaranteed to blow the deposit.
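For reference, here is a hedged sketch of the textbook procedure gpwr describes above: split the data chronologically, minimise the error on the training sample, watch the error on the test sample, and stop once that error has not improved for a while. The data, network size, learning rate and patience value are illustrative assumptions, not anything from the thread; scikit-learn's MLPRegressor is used only because it exposes a per-epoch partial_fit.

```python
# Sketch of early stopping: train on one part of the data, monitor the error
# on the held-out test sample, stop when it has stopped decreasing.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
t = np.linspace(0, 8 * np.pi, 1200)
series = np.sin(t) + rng.normal(0.0, 0.1, t.size)    # noisy stand-in series

window = 10
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

split = int(0.7 * len(X))                            # chronological split, no shuffling
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]

net = MLPRegressor(hidden_layer_sizes=(16,), learning_rate_init=0.01, random_state=0)

best_err, patience, bad_epochs = np.inf, 20, 0
for epoch in range(2000):
    net.partial_fit(X_train, y_train)                # one pass over the training sample
    test_err = mean_squared_error(y_test, net.predict(X_test))
    if test_err < best_err:
        best_err, bad_epochs = test_err, 0
    else:
        bad_epochs += 1
    if bad_epochs >= patience:                       # test error stopped decreasing
        print(f"stopped at epoch {epoch}, best test MSE {best_err:.4f}")
        break
```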

 
LeoV >> :

Agreed. I used the phrase "most likely" in the sense of unambiguous overtraining .....)))) Although reducing the network's capacity does sometimes help......

Exactly, and that's the point: sometimes. "Grails" built on MAs sometimes give a profit on the forward test too. There is no clear systematic dependence (at least I haven't found one) of forward results on the capacity (number of layers, neurons, etc.) of a trained net. I'm not saying that the type of net doesn't matter; it's just one of the criteria. But the size of the profit outside the test sample does depend clearly (and non-linearly) on the degree of training; numerous experiments with "end-to-end" testing confirm it.

 
gpwr >> :

You claim that training of the network must be stopped even earlier than the dotted line in the picture. Where exactly? Then why train the network at all? By your logic, pick any values for the weights and go ahead and trade with the network. That is when you'll be guaranteed to blow the deposit.

No, I don't.


And it's hard to argue with that...

 
gpwr >> :

You citizens are overcomplicating something. Either you've invented something new, or we don't understand each other.

All textbooks say that training a network with a teacher is done by splitting the data into a training sample and a test sample. The network is trained by minimising the error on the training sample while monitoring the error on the test sample (the out-of-sample test, or verification). Training stops when the error on the test sample stops decreasing (shown with the dotted line below), while the error on the training sample may continue to decrease, as shown in this figure

First there is an out-of-sample section, used to tune the net. And then there is none: the real future lies ahead, and it has to be predicted. What is the criterion for stopping the training: a certain error, the number of training runs, or something else?
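As a sketch of the three periods muallch distinguishes here (the training sample, the out-of-sample section used only to decide when to stop, and the "real future" that is evaluated once at the very end), this is an illustrative chronological split; the random-walk series and the 60/20/20 proportions are my own assumptions, not anything from the thread.

```python
# Illustrative split into the three chronological segments discussed above.
import numpy as np

rng = np.random.default_rng(2)
series = np.cumsum(rng.normal(0.0, 1.0, 1500))        # placeholder price-like series

window = 10
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

n = len(X)
i_train, i_oos = int(0.6 * n), int(0.8 * n)

X_train, y_train = X[:i_train], y[:i_train]            # error is minimised here
X_oos,   y_oos   = X[i_train:i_oos], y[i_train:i_oos]  # used only to decide when to stop
X_fwd,   y_fwd   = X[i_oos:], y[i_oos:]                # the "real future": evaluated once, at the end

print(len(X_train), len(X_oos), len(X_fwd))
```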
