Discussion of article "Third Generation Neural Networks: Deep Networks" - page 14

 
Rashid Umarov:
Insert the code correctly, please. I have fixed it.
Thank you. I didn't realise at first which button it was.
 
Vladimir Perervenko:

Good afternoon.

What script are we talking about?

Could you describe in a little more detail what is in the script?

Do I understand correctly that you managed to run the script with the R process in the tester?

If so, that's interesting.

Please take your time and describe it in as much detail as possible. Is the R process executed in a client-server bundle or in a single Rterm?

Yes. It runs in a client-server bundle.

How can I explain it as simply as possible?

I moved the code from the OnTimer() function into a common function called from both OnTick() and OnTimer(). The only things I added were a custom mode switch and a tick counter.

All other startup procedures remain the same. A little later I will implement the function in the script attached to the forum and post it.

PS: The MQL4 documentation says that the OnTimer() function simply does not work in the tester.
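A minimal sketch of this pattern; all names here are illustrative, and the actual attached code may differ:

input bool EnableTimer = true;   // true: timer mode (live chart)
input int  CountTicks  = 120;    // tick interval when EnableTimer = false

int tickCount = 0;

// Common routine that previously lived in OnTimer()
void MainProcess()
  {
   // ... original OnTimer() body: exchange data with the R process,
   // form the signal, manage positions ...
  }

int OnInit()
  {
   if(EnableTimer)
      EventSetTimer(60);         // timer events work on a live chart
   return(INIT_SUCCEEDED);
  }

void OnDeinit(const int reason)
  {
   if(EnableTimer)
      EventKillTimer();
  }

void OnTimer()
  {
   MainProcess();
  }

void OnTick()
  {
   if(EnableTimer)
      return;                    // the tester generates no timer events,
   if(++tickCount < CountTicks)  // so run the common routine every
      return;                    // CountTicks ticks instead
   tickCount = 0;
   MainProcess();
  }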

 
kimkarus:

Yes. It's client-server.

How can I explain it as simply as possible?

I moved the code from the OnTimer() function into a common function called from both OnTick() and OnTimer(). The only things I added were a custom mode switch and a tick counter.

All other startup procedures remain the same. A little later I will implement the function in the script attached to the forum and post it.

PS: The MQL4 documentation says that the OnTimer() function simply does not work in the tester.

Understood about OnTimer().

Did you have to take any additional steps for the client-server connection?

I still haven't been able to get it working. And not only me, judging by the posts in the English-language thread.

Good luck

 

As promised, I have attached the local SAE for MQL4, adapted to work in the strategy tester.

i_SAE

e_SAE

Replace the originals and recompile the *.ex4 files.

Start the tester, select e_SAE, set Enable timer = false and Count ticks = 120 (that was optimal for me). Click Start.

Increase the speed, wait for the magic message "OPP = CLOSE...." on the left side, then reduce the speed. After that, add i_SAE to the chart with Send to server = true. Increase the speed a little and wait for the results to be finalised.

My R was version 3.2.2. Be sure to check the R version specified in both files against yours!

Good luck with your experiments!


 
Hello, have you found a way to fix the problem with the server socket?
 

Hi,

An updated expert is attached to the article. Take it from there.

Vladimir

 

Good afternoon.

That's a good one. Thank you.

Now let's check how it works in the tester, and in future examples with R I will include this feature.

Attached to the new DNRBM article is a redesigned version of this DNSAE EA with self-learning, but without a server.

Please test it.

Good luck

 
Hi, I see you used 11 oscillator indicators as inputs. I have some indicators in MT4 that are not oscillators. How can I add or replace these indicators, as in your article?

Stacked RBM (DN_SRBM) https://www.mql5.com/en/articles/1628

Deep neural network with Stacked RBM. Self-training, self-control
  • 2016.04.26
  • Vladimir Perervenko
  • www.mql5.com
This article is a continuation of previous articles on deep neural network and predictor selection. Here we will cover features of a neural network initiated by Stacked RBM, and its implementation in the "darch" package.
 
Fascinating.

It's interesting to note that if a human is immersed in a task, the human will improve, while if a machine does the same, it may get stuck on a local optimum.

Maybe the algorithmic immersion could evolve from a "Study" paradigm to an "Execute" paradigm.

Great article. Props.
 
Vladimir Perervenko:


Again we have a profitable phase of about 5 weeks until the model deteriorates.

This is normal. The model can and should be retrained periodically.

I believe the splitting into test and training data is unnecessary: we can use all data for training.

You can. But it is important to remember a few points (a minimal R sketch follows the list):
1. The training and test sets must not overlap.
2. The training set should be shuffled (mixed).
3. If the classes are imbalanced, adjust for it.
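A minimal R sketch of these three points, assuming the caret package used in the article series; the data frame dt and its class column y are hypothetical names:

library(caret)

# 1. Stratified split: the training and test sets do not overlap
idx   <- createDataPartition(dt$y, p = 0.75, list = FALSE)
train <- dt[idx, ]
test  <- dt[-idx, ]

# 2. Shuffle (mix) the rows of the training set
train <- train[sample(nrow(train)), ]

# 3. If the classes are imbalanced, adjust, e.g. by upsampling
#    the minority class (y must be a factor)
train <- upSample(x = train[, names(train) != "y"],
                  y = train$y, yname = "y")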

I am glad that there are colleagues here using R.

Best Regards

Vladimir

Hi,

please help me clarify some of my negative preconceptions about neural networks (NN).

  1. Is it correct that you should first optimize the indicators to be fed into the NN?
  2. Then you optimize the parameters of the NN?
  3. Or do you optimize the parameters of the NN and the indicators at the same time?
  4. Isn't it true that the more variables you have to optimize, the greater the danger of overfitting?
  5. If the data sets for 1. and 2. are the same, wouldn't that lead me to a kind of overfitting to the data set?
  6. Isn't that exactly what is indicated by "Again we have a profitable phase of about 5 weeks until the model deteriorates"?
  7. a) Let's assume we have a bunch of indicators, all optimized together by the tester, and now
    b) we run a second optimization in the tester only to check which of the optimized indicators we need(*),
    c) so that we end up with a smaller set of our optimized indicators;
    d) then what do I need the NN for?
  8. Do you know of an estimate of how big the data set has to be for an NN, given the number of inputs, layers and perceptrons?


(*) Unfortunately, if you run MT4's optimizer in genetic mode and want to bypass certain parameter sets (e.g. don't test if "indicator-A" is 'on') by returning INIT_PARAMETERS_INCORRECT from OnInit(), the genetic algorithm still counts this as a valid pass. That reduces the number of actually executed passes before the algorithm stops, since the number of passes is one of its termination criteria.
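For illustration, a minimal MQL4 sketch of the bypass described in (*); the input names are made up:

input bool UseIndicatorA     = true;
input int  IndicatorA_Period = 14;

int OnInit()
  {
   // Skip meaningless combinations, e.g. varying the period of an
   // indicator that is switched off. Note: in genetic mode the
   // optimizer still counts such a skipped pass as executed.
   if(!UseIndicatorA && IndicatorA_Period != 14)
      return(INIT_PARAMETERS_INCORRECT);
   return(INIT_SUCCEEDED);
  }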