Potential issues:
- Limited training: the MaxEpochs parameter is set to 1, which limits the number of network training iterations on each tick. It may be worth increasing this value for better optimization.
- Risks with high spread: the function that opens trades blocks them if the spread is too high, but there is no logic to retry once the spread normalizes.
- Minimum-volume normalization in the input normalization function: when normalizing candle volumes, the inputs are divided by their own values plus a small constant (EPSILON), which can make the normalization ineffective for low volumes.
- Penalty model: if the daily profit is below the target value, a penalty is activated that reduces the learning rate. However, there is no detailed logic explaining how this affects the EA's performance in the long term.

Recommendations:
- Consider improving the neural network training process by increasing the number of epochs.
- Add retries to open a trade once the spread has normalized.
- Reconsider the penalty mechanism to prevent an excessive reduction in the learning rate.
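To illustrate the normalization concern, here is a small sketch (the function names and the min-max alternative are my own assumptions, written in C++ since MQL5 is C++-like, not the EA's actual code): dividing each volume by itself plus EPSILON maps nearly every value to ~1.0, so the network sees almost no variation; min-max scaling over the lookback window preserves relative differences even for low-volume candles.

```cpp
#include <algorithm>
#include <vector>

const double EPSILON = 1e-8;

// The pattern described in the post: value / (value + EPSILON).
// For any volume well above EPSILON this returns ~1.0, so the
// feature carries almost no information.
double naive_normalize(double volume) {
    return volume / (volume + EPSILON);
}

// A more robust alternative (an assumption, not the EA's code):
// min-max scaling over the window keeps relative differences.
std::vector<double> minmax_normalize(const std::vector<double>& volumes) {
    double lo = *std::min_element(volumes.begin(), volumes.end());
    double hi = *std::max_element(volumes.begin(), volumes.end());
    double range = hi - lo;
    std::vector<double> out;
    out.reserve(volumes.size());
    for (double v : volumes)
        out.push_back(range > 0.0 ? (v - lo) / range : 0.5); // flat window -> neutral
    return out;
}
```

Note how both a high-volume and a very low-volume candle collapse to essentially the same naive value, while the min-max version spreads them across [0, 1].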
IGOR IAREMA # :
Potential issues:
- Limited training: the MaxEpochs parameter is set to 1, which limits the number of network training iterations on each tick. It may be worth increasing this value for better optimization.
- Risks with high spread: the function that opens trades blocks them if the spread is too high, but there is no logic to retry once the spread normalizes.
- Minimum-volume normalization in the input normalization function: when normalizing candle volumes, the inputs are divided by their own values plus a small constant (EPSILON), which can make the normalization ineffective for low volumes.
- Penalty model: if the daily profit is below the target value, a penalty is activated that reduces the learning rate. However, there is no detailed logic explaining how this affects the EA's performance in the long term.

Recommendations:
- Consider improving the neural network training process by increasing the number of epochs.
- Add retries to open a trade once the spread has normalized.
- Reconsider the penalty mechanism to prevent an excessive reduction in the learning rate.
Hello IGOR IAREMA ,
Thank you for your detailed feedback and the insights into the potential issues. We have carefully reviewed your points:
- Limited Training: We plan to increase the MaxEpochs parameter to allow for better optimization.
- Risks with High Spreads: We will implement logic to retry trades when the spread normalizes.
- Normalization of Minimum Volume: We are optimizing the normalization function for low volumes to achieve more effective results.
- Penalty Models: The logic controlling the learning rate will be refined to ensure long-term performance improvements.
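As a rough sketch of what the planned spread-retry logic could look like (written in C++ since MQL5 is C++-like; all names and the retry-budget design are assumptions, not the actual implementation): instead of silently dropping a blocked signal, the signal is parked and re-checked on subsequent ticks until the spread normalizes or the signal expires.

```cpp
// Hypothetical sketch of a spread gate with deferred retry.
struct PendingSignal {
    bool active = false;
    int  direction = 0;   // +1 buy, -1 sell
    int  ticks_left = 0;  // remaining retry budget
};

class SpreadGate {
public:
    SpreadGate(double max_spread, int retry_ticks)
        : max_spread_(max_spread), retry_ticks_(retry_ticks) {}

    // Called when the network emits a trade signal.
    void submit(int direction) {
        pending_ = {true, direction, retry_ticks_};
    }

    // Called once per tick; returns the direction to trade, or 0 to wait.
    int poll(double current_spread) {
        if (!pending_.active) return 0;
        if (current_spread <= max_spread_) {
            pending_.active = false;
            return pending_.direction;   // spread normalized: trade now
        }
        if (--pending_.ticks_left <= 0)
            pending_.active = false;     // budget exhausted: signal is stale
        return 0;
    }

private:
    double max_spread_;
    int retry_ticks_;
    PendingSignal pending_;
};
```

The retry budget matters: a signal produced by the network on one tick may no longer be valid many ticks later, so letting it expire is safer than retrying indefinitely.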
A comprehensive update addressing these improvements is already in progress. It is taking a bit longer because the changes are quite complex, but we are confident that the wait will be worthwhile. Thank you for your patience and understanding!
Best regards,
SM.S
After downloading the neurobook and the sources, I would like to know whether a version written entirely in Python exists. The version provided is problematic, especially, I think, when the OpenCL kernels cannot be executed on the machine. I am currently attempting a conversion myself, but it is a rather titanic task!
Best regards to anyone who has already undertaken such work, or who knows where to find the sources for a Python version.
Every time, I get this in the log file: "No saved neural network parameters found. Starting fresh." What can be the reason for this?
Encho Enev #:
Every time, I get this in the log file: "No saved neural network parameters found. Starting fresh." What can be the reason for this?
I've not run it yet, but according to the code you'd have to train the neural network in the strategy tester before putting it on a live chart. Have you done that?
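That log message suggests a common load-or-initialize pattern: on startup the EA looks for a saved weight file, and if it is missing (for example, the network was never trained in the strategy tester, or the file went to a different terminal's data folder) it logs the message and starts with random weights. A hedged sketch of that pattern in C++ (file name, format, and function names are all assumptions, not the EA's actual code):

```cpp
#include <fstream>
#include <random>
#include <string>
#include <vector>

// Try to load `count` weights from `path`; on failure, fall back to
// small random weights and report it via `log`.
std::vector<double> load_or_init_weights(const std::string& path,
                                         std::size_t count,
                                         std::string& log) {
    std::ifstream in(path);
    std::vector<double> w;
    double v;
    while (in >> v) w.push_back(v);       // read whitespace-separated doubles
    if (w.size() == count) {
        log = "Loaded saved neural network parameters.";
        return w;
    }
    // Missing or malformed file: start fresh with small random weights.
    log = "No saved neural network parameters found. Starting fresh.";
    std::mt19937 rng(42);                 // fixed seed for reproducibility
    std::uniform_real_distribution<double> dist(-0.1, 0.1);
    w.assign(count, 0.0);
    for (double& x : w) x = dist(rng);
    return w;
}
```

If this is how the EA behaves, the fix is to run it in the strategy tester first so the weight file gets written, and to make sure the live terminal reads from the same data folder.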
Neurotest:
is a test for the neural network; I would like to know your opinion.
Author: Mustafa Seyyid Sahin