The inputs are unity per second.
When the network is read from a file, the random weight generation function should not be called at all: the weights come from the file. But in your function the generation is not random and produces the same weights at each restart, which is why the result converges. Check your code. It seems that after reading the file you overwrite the trained network with random weights.
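The load-or-init guard described above can be sketched in C++-style code (MQL5 syntax is very close; the function name and the `loaded_from_file` flag are hypothetical illustrations, not names from NeuroNet.mqh). The key point is the early return: once trained weights have been read from file, nothing may call the random generator again.

```cpp
#include <random>
#include <vector>

// Hypothetical illustration: randomize weights ONLY when no trained
// network was loaded from file; otherwise keep the loaded values intact.
void InitWeights(std::vector<double> &weights, bool loaded_from_file)
{
   if(loaded_from_file)
      return;                       // keep the trained weights untouched
   std::random_device rd;           // fresh entropy at every restart,
   std::mt19937 gen(rd());          // so weights differ from run to run
   std::uniform_real_distribution<double> dist(-1.0, 1.0);
   for(double &w : weights)
      w = dist(gen);
}
```

If the "random" initializer is seeded with a constant instead of fresh entropy, every restart produces identical weights, which matches the converging-result symptom described above.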
Hi Dmitriy
Do you have a code example of an LSTM using the version of the NeuroNet.mqh file from article 13?
I tried to use the fractal_lstm.mq5 file from article 4, but without success... an error occurs during training.
cheers
New article Neural Networks Made Easy (Part 13): Batch Normalization has been published:
Author: Dmitry Gizlyk
For me, as a beginner with NNs, it was very enlightening. I want to use your proposals to code an EA. It should be a construction set for DNNs, to try out different functions and topologies and learn which work better.
So I modified your last example (MLMH + Convolutional).
I added many different activation functions (32 functions: Gaussian, SELU, SiLU, Softsign, symmetric sigmoid, ...) and their derivatives.
I also changed the error/success calculation (Buy, Sell, DontBuySell), because I think "don't trade" shouldn't be undefined. If the NN recognizes neither buy nor sell, and that is correct in reality, it should be rewarded in the feedback loop.
Maybe someone has already solutions or can help with following questions:
I'm not able to create activation functions that need values from the complete layer: Softmax, Maxout, PReLU with learned alpha.
I'm also not able to implement different optimizers (AdaBound, AMSBound, Momentum).
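On the Softmax point: unlike a per-neuron activation, Softmax needs the outputs of the whole layer, because each output is normalized by the sum over all of them. A minimal, numerically stable sketch in C++ (MQL5 syntax is very close; the function name is mine, not from the library):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Softmax over a whole layer: every output depends on ALL inputs,
// so it cannot be implemented as an independent per-neuron activation.
std::vector<double> SoftmaxLayer(const std::vector<double> &z)
{
   // subtract the layer maximum first to avoid exp() overflow
   double zmax = *std::max_element(z.begin(), z.end());
   std::vector<double> out(z.size());
   double sum = 0.0;
   for(size_t i = 0; i < z.size(); i++)
   {
      out[i] = std::exp(z[i] - zmax);
      sum   += out[i];
   }
   for(double &v : out)
      v /= sum;                     // outputs are positive and sum to 1
   return out;
}
```

The gradient of Softmax likewise couples every pair of neurons in the layer, which is why it has to be wired at the layer level rather than inside a single-neuron activation switch.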
I'm thinking of a DNN-Builder-EA for testing to find the best net-topology.
1. How can I find the in-/out-count of neurons and weights per layer?
2. What topology do you suggest? I tried many variations:
A) A few neuron layers with count=19000, then descending counts in the following layers (×0.3 each)
B) 1 convolutional layer + 12 MLMH layers with 300 neurons each
C) 29 layers with 300 neurons each
D) 29 layers with 300 neurons each and normalization between each layer.
I get forecasts of up to 57%, but I think it can and has to be better.
Should there be layers with rising neuron counts and then descending again?
3. How can I run a backtest? There is a condition that returns false in test mode - I tried to comment it out, but no success.
There are many very detailed explanations, but I'm missing some of the overview.
4. Which layer after which? Where should be BatchNorm layers?
5. How many output neurons does the convolutional layer, or the multi-head layers like MLMH, have when layers=x, step=y, window_out=z? I have to calculate the size of the next neuron layer; I want to avoid overly big layers or bottlenecks.
6. What about LSTM_OCL? Is it too weak compared to attention/MH, MLMH?
7. I want to implement eta for each layer, but had no success (lack of know-how about classes - I'm a good 3rd-generation-language coder).
8. What should be modified to get an error rate < 0.1? I'm stuck at a constant 0.6+.
9. What about bias neurons in these existing layer layouts?
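On question 5, the standard convolution arithmetic applies (a sketch under the assumption of no padding; verify against the article's actual convolutional-layer implementation, since padding conventions differ): with n inputs, window w and step s there are (n - w) / s + 1 window positions, and each of the window_out filters produces one value per position.

```cpp
// Output count of a 1-D convolutional layer (no padding assumed):
// positions = (inputs - window) / step + 1, one value per filter per position.
int ConvOutputCount(int inputs, int window, int step, int window_out)
{
   int positions = (inputs - window) / step + 1;
   return positions * window_out;
}
```

For example, 100 inputs with window 5, step 1 and window_out 8 gives 96 positions and 768 outputs; the next fully connected layer has to match that count.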
I have already studied many websites for weeks, but didn't find answers to these questions.
But I'm looking forward to solving this, because of the positive feedback from others who have already had success.
Maybe there is a part 14 coming up with solutions for all these issues?
Best regards
and many thanks in advance
Hi. I am getting this error:
CANDIDATE FUNCTION NOT VIABLE: NO KNOW CONVERSION FROM 'DOUBLE __ATTRIBUTE__((EXT_VECTOR_TYPE92000' TO 'HALF4' FOR 1ST ARGUMENT
2022.11.30 08:52:28.185 Fractal_OCL_AttentionMLMH_b (EURJPY,D1) OpenCL program create failed. Error code=5105
when using the EAs from the article part 10 examples onwards.
Any guess, please?
Thank you
Hi, can you send full log?
Hi. Thanks for help
Rogerio
Hello Rogerio.
1. You didn't create the model.
2. Your GPU doesn't support double. Please load the latest version from the article https://www.mql5.com/ru/articles/11804
Hi Dmitriy
You wrote: "You didn't create the model."
But how do I create a model? I compiled all the program sources and ran the EA.
The EA creates a file in the 'Files' folder with the extension .nnw - isn't this file the model?
Thanks
Hi Teacher Dmitriy
Now none of the .mqh files compiles.
For example, when I try to compile VAE.mqh I get this error:
'MathRandomNormal' - undeclared identifier VAE.mqh 92 8
I will try to start from the beginning again.
One more question: when you publish a new version of NeuroNet.mqh, is it fully compatible with the older EAs?
Thanks
rogerio
PS: Even after deleting all files and directories and starting with a fresh copy from Parts 1 and 2, I can no longer compile any code.
For example, when I try to compile the code in fractal.mq5 I get this error:
cannot convert type 'CArrayObj *' to reference of type 'const CArrayObj *' NeuroNet.mqh 437 29
Sorry, I really wanted to understand your articles and code.
PS2: OK, I removed the keyword 'const' from 'feedForward', 'calcHiddenGradients' and 'sumDOW', and now I can compile Fractal.mqh and Fractal2.mqh.