How to form the input values for the NS correctly.

 
sergeev wrote >> 3. The question of network overtraining

The question of overtraining is not a simple one, and there is no clear-cut answer. To avoid overtraining, cross-validation is sometimes used, but it doesn't always help if the training period is too short. In general, though, the best check against overtraining is real trading or an OOS (out-of-sample) test.
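
For reference, a minimal sketch (synthetic data, all names illustrative, not anyone's actual code) of how an OOS interval can be used to stop training before the network overfits - keep the weights that do best out-of-sample, not in-sample:

```python
# Illustrative sketch: select the stopping point by out-of-sample (OOS)
# error rather than in-sample error, to avoid overtraining.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 5))                  # 200 samples, 5 inputs
y = x @ rng.normal(size=5) + 0.5 * rng.normal(size=200)

train, oos = slice(0, 150), slice(150, 200)    # chronological split

w = np.zeros(5)
best_w, best_oos = w.copy(), np.inf
for epoch in range(500):
    grad = x[train].T @ (x[train] @ w - y[train]) / 150
    w -= 0.05 * grad                           # one gradient step on the train set
    oos_err = np.mean((x[oos] @ w - y[oos]) ** 2)
    if oos_err < best_oos:                     # keep the weights with the best OOS error
        best_oos, best_w = oos_err, w.copy()

print("best OOS MSE:", best_oos)
```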

 
TheXpert wrote >>

Yep, I'll have something to read tonight, quite possibly I'll break out the code soon :)

Ehhh. I really don't understand why people don't want to "quickly" check an idea first and only then sit down to code.

Coming back to the topic: it turns out the "right" inputs have already been found, they are normalized, and the only thing left... is to be in time for the "championship". Everything (in the sense of tools, not inputs) has already been invented - in this context, NeuroSolutions or NeuroShell 2 (and many other programs). At the very least, finding out that the inputs are in fact "wrong" and that the "normalization" distorts them even more will be quick.

Yes, there is one counter-argument - all these programs are outdated and their algorithms are covered in moss, but... maybe the inputs are wrong after all :)


Here I am, with the obsolete, "boarded-up" "network" "Polynomial Net (GMDH)" (from NeuroShell 2) - ten hours of training and the market formula is ready :)
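
For those unfamiliar with it - a rough sketch of the idea behind a one-layer GMDH polynomial net (quadratic partial models per input pair, selected by error on a separate checking set). This illustrates the principle only, not NeuroShell 2's actual algorithm; all data and names are made up:

```python
# One GMDH-style layer: fit y = a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2
# for every input pair, keep the pair with the best error on a checking set
# (the "external criterion").
import itertools
import numpy as np

def pair_features(xi, xj):
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = X[:, 0] * X[:, 2] + 0.1 * rng.normal(size=300)   # hidden pairwise structure

fit, check = slice(0, 200), slice(200, 300)
best = (np.inf, None, None)
for i, j in itertools.combinations(range(X.shape[1]), 2):
    F = pair_features(X[:, i], X[:, j])
    coef, *_ = np.linalg.lstsq(F[fit], y[fit], rcond=None)
    err = np.mean((F[check] @ coef - y[check]) ** 2)  # external criterion
    if err < best[0]:
        best = (err, (i, j), coef)

print("best pair:", best[1], "checking MSE:", best[0])
```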

 
sergeev wrote >> 9. Recurrent networks

The good thing about recurrent networks is that there is no 'teacher'. That removes one very important variable - the network's 'teacher'. Since it is easy to make a mistake with the output data (on which the network is trained), by excluding it we can focus solely on finding the inputs.

 
LeoV wrote >>
Cross-validation is when, for example, a network is trained on the 2007 interval, and the best result obtained on 2007 is then "tested" on the 2008 interval; if it beats the previous candidate (also "tested" on 2008), this network is kept. And so on. The results on 2007 may stop improving, but that doesn't matter, because the network is judged on 2008. This way we avoid overtraining (for the network) or over-optimization (for the trading system).

That's forward testing, IIRC :) - I think you should read Haykin too.

And in general, there isn't a single informative one among your recent posts - can you finally start expressing really useful thoughts?
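
For concreteness, here is the selection scheme from the quote above as a sketch. The candidate "networks" are stand-ins trained on a synthetic 2007 interval and kept or discarded purely by their error on a synthetic 2008 interval; everything is illustrative:

```python
# Hold-out selection: train candidates on "2007", keep whichever does best
# on "2008". Candidate variability is faked with noise around one fit.
import numpy as np

rng = np.random.default_rng(2)
true_w = np.array([1.0, -0.5, 0.2])
x07 = rng.normal(size=(250, 3))                 # "2007": training interval
y07 = x07 @ true_w + 0.3 * rng.normal(size=250)
x08 = rng.normal(size=(250, 3))                 # "2008": checking interval
y08 = x08 @ true_w + 0.3 * rng.normal(size=250)

base = np.linalg.lstsq(x07, y07, rcond=None)[0]
best_w, best_err08 = None, np.inf
for seed in range(20):                          # 20 imperfect "training runs"
    w = base + np.random.default_rng(seed).normal(scale=0.2, size=3)
    err08 = np.mean((x08 @ w - y08) ** 2)       # judged only on 2008
    if err08 < best_err08:                      # kept only if better on 2008
        best_err08, best_w = err08, w

print("kept model, 2008 MSE:", best_err08)
```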

 
TheXpert wrote >>

That's forward testing, IIRC :) - I think you should read Haykin too.

And in general, there isn't a single informative one among your recent posts - can you finally start expressing really useful thoughts?

Sorry, sorry, it won't happen again. I got a little carried away.....)))))

 
LeoV wrote >>

The good thing about recurrent networks is that there is no 'teacher'. That removes one very important variable - the network's 'teacher'. Since it is easy to make a mistake with the output data (on which the network is trained), by excluding it we can focus solely on finding the inputs.

What?! O_o Recurrent networks don't have a teacher? Recurrent networks differ from MLPs in having feedback connections, not in lacking a teacher. RTFM about the Elman and Jordan models.
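
A minimal Elman-style sketch makes the point concrete: the hidden state feeds back as context (the recurrence), yet the output is still scored against a target series - i.e. there is a teacher. Everything here is illustrative:

```python
# Elman-style recurrent step: the hidden state from the previous step is
# fed back in (W_rec), but the fit criterion is still a supervised target.
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hid = 1, 4
W_in  = rng.normal(scale=0.5, size=(n_hid, n_in))
W_rec = rng.normal(scale=0.5, size=(n_hid, n_hid))  # feedback: Elman context
W_out = rng.normal(scale=0.5, size=n_hid)

def forward(series):
    h = np.zeros(n_hid)
    outputs = []
    for x in series:
        h = np.tanh(W_in @ np.atleast_1d(x) + W_rec @ h)  # context from last step
        outputs.append(W_out @ h)
    return np.array(outputs)

series = np.sin(np.linspace(0, 6, 50))
pred = forward(series[:-1])
teacher_error = np.mean((pred - series[1:]) ** 2)  # target = next value: supervised
print("supervised (teacher) MSE:", teacher_error)
```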

 
TheXpert wrote >>

That's forward testing, IIRC :)

One last point, sorry. Forward testing is something different. Maybe I just didn't explain it well? But I reread it - it seems to make sense. You just didn't get it.....
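
For contrast, "forward testing" is usually understood as a rolling walk-forward - refit on a sliding window and evaluate only on the interval immediately after it, window by window - rather than a single 2007/2008 split. A purely illustrative sketch:

```python
# Walk-forward evaluation: slide a training window through the data and
# score each refit model only on the interval just after its window.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(600, 3))
y = X @ np.array([0.8, -0.3, 0.5]) + 0.3 * rng.normal(size=600)

window, step, errs = 200, 50, []
for start in range(0, len(y) - window - step, step):
    tr = slice(start, start + window)                  # training window
    te = slice(start + window, start + window + step)  # interval right after it
    w = np.linalg.lstsq(X[tr], y[tr], rcond=None)[0]
    errs.append(np.mean((X[te] @ w - y[te]) ** 2))

print("walk-forward MSE per step:", np.round(errs, 3))
```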

 
TheXpert wrote >>

What?! O_o Recurrent networks don't have a teacher? Recurrent networks differ from MLPs in having feedback connections, not in lacking a teacher. RTFM about the Elman and Jordan models.

Well, if there is one, then there is! I don't mind ))))

 
SergNF wrote >>

Ehhh. I really don't understand why people don't want to "quickly" check an idea first and only then sit down to code.

Coming back to the topic: it turns out the "right" inputs have already been found, they are normalized, and the only thing left... is to be in time for the "championship". Everything (in the sense of tools, not inputs) has already been invented - in this context, NeuroSolutions or NeuroShell 2 (and many other programs). At the very least, finding out that the inputs are in fact "wrong" and that the "normalization" distorts them even more will be quick.

Yes, there is one counter-argument - all these programs are outdated and their algorithms are covered in moss, but... maybe the inputs are wrong after all :)


Here I am, with the obsolete, "boarded-up" "network" "Polynomial Net (GMDH)" (from NeuroShell 2) - ten hours of training and the market formula is ready :)

That's exactly what I do - only since I have my own software, I use that.

And about the code - will NeuroSolutions or NeuroShell 2 port the code to MQL4 for me? I'll write a couple of functions that, I think, will be useful to the people here, and maybe to me too. Especially since a hundred lines of code take about an hour to write.
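
As a sketch of the kind of "couple of functions" meant here - a feed-forward pass written with plain loops and flat arrays only, so it translates to MQL4 almost line for line. The weights are dummies; in practice they would be exported from the training software:

```python
# Port-friendly feed-forward pass: no libraries, plain loops, tanh units.
import math

def neuron(inputs, weights, bias):
    s = bias
    for i in range(len(inputs)):
        s += inputs[i] * weights[i]
    return math.tanh(s)

def net_output(inputs, hidden_w, hidden_b, out_w, out_b):
    hidden = [neuron(inputs, hidden_w[j], hidden_b[j]) for j in range(len(hidden_b))]
    return neuron(hidden, out_w, out_b)

# dummy 3-2-1 network
hw = [[0.1, -0.2, 0.3], [0.4, 0.0, -0.1]]
hb = [0.05, -0.05]
ow, ob = [0.7, -0.6], 0.1
print(net_output([1.0, 0.5, -0.3], hw, hb, ow, ob))
```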

 
LeoV wrote >>

One last point, sorry. Forward testing is something different. Maybe I just didn't explain it well? But I reread it - it seems to make sense. You just didn't get it.....

Never mind - and sorry if I'm wrong.
