Discussion of article "Neural networks made easy (Part 5): Multithreaded calculations in OpenCL" - page 5

Hello, you don't need to install the .cl file. You just load it into the main program as a string.
Edit (SOLVED): Although it seems AMD has quit supporting OpenCL, there is a free Microsoft OpenCL and OpenGL Compatibility Pack, and now it compiles.
Good day Dmitry, thank you very much for your publications, which have helped me a lot to improve my knowledge in the field of development. I would be very grateful for your advice. I am currently studying the article about multithreaded computing (#5). I tried to run the compiled file from the attachments; it does not run and crashes the whole terminal. I tried rebuilding from the sources with the same result, which makes me think the sources are not the cause. I ran it in the tester: it crashes when the CNet::backPropOCL method is called on line 1486 (neuron.calcHiddenGradients(nextLayer.At(0));), and from there in the predefined OpenCL function CLExecute(kernel_handle,work_dim,work_offset,work_size). The strangest part is that there is no error in the logs, yet the tester crashes as well. Can you tell me what could be causing this? Thanks.
If I understand correctly, it crashes while running the OpenCL program. Try running in debug mode and check the sizes of the buffers being transferred. The error may be an array overrun in the OpenCL program.
Hi Tobias,
Just copy the .cl file (from the downloads) and paste it into the Include folder in MetaEditor. The compiler will find it.
Cheers
In the method "bool CNeuron::feedForward(CLayer *prevLayer)" in the file "NeuroNet.mqh", there is a line:
"outputVal=activationFunction(MathMin(MathMax(sum,-18),18));"
I don't understand why the result of the activation function is from -18 to 18, shouldn't it be from -1 to 1 or from 0 to 1?
I don't understand why the result of the activation function is from -18 to 18, shouldn't it be from -1 to 1 or from 0 to 1?
This constrains the argument of the activation function, not its value. It was added so that the gradient of the activation function is not driven to values close to 0.
Can you please tell me: is recentAverageSmoothingFactor = 10000 based on 2 years?
365 days * 2 years * 24 hours = 17,520 hourly candles (sample length).
If I use a sample of 1 year, should I reduce it to 8,760 (365 * 24 = 8,760)?
In my tests dForecast jumps from 0 to 23, and the error is 0.32 and remains constant. Is that normal, or is it because recentAverageSmoothingFactor is wrong? 😀
Hi Dimitri,
I love your articles and I'm starting to work on them.
Where is the CBuffer class? I can't find it.
Best regards,
Benjamin
Never mind, I found the solution: change it to CBufferFloat, as described in your other articles :)