Anddy Cabrera / Profile
8+ years' experience | 1 product | 8275 demo downloads | 0 jobs | 0 signals | 0 subscribers
What I need:
• Real account funded with $3k
• Share the account tracking link so we can see live results together
What you get:
• The EA for FREE 🎁
• My full support throughout the test
⚠️ Martingale involves risk – only reach out if you're comfortable with that.
DM me or comment here if you're interested! Let's make it happen. 🚀
Контролируемый Мартингейл (Controlled Martingale)
The Controlled Martingale EA is a fully automated expert advisor for MetaTrader 5. It uses a Martingale grid approach based on a pure price signal, with no indicators used for entries. The entry signal is determined by the midpoint of the previous bar's high-low range. The grid step is calculated dynamically with the ATR indicator, so the system automatically adapts to current market volatility. How it works: the EA splits new entries into …
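The midpoint entry signal and ATR-based grid step described above can be sketched roughly as follows. This is an illustrative Python sketch, not the EA's actual MQL5 code; the function names and the `atr_multiplier` parameter are my own assumptions.

```python
def true_range(high, low, prev_close):
    """True range of one bar (the basis of ATR)."""
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

def atr(bars, period=14):
    """Simple-average ATR over the last `period` bars.
    Each bar is a (high, low, close) tuple, oldest first."""
    trs = [true_range(h, l, bars[i - 1][2])
           for i, (h, l, c) in enumerate(bars) if i > 0]
    return sum(trs[-period:]) / min(period, len(trs))

def entry_signal(prev_bar, current_price):
    """Buy above the previous bar's high/low midpoint, sell below it."""
    high, low, _ = prev_bar
    midpoint = (high + low) / 2
    return "buy" if current_price > midpoint else "sell"

def grid_step(bars, atr_multiplier=1.0):
    """Grid spacing scales with ATR, so it widens in volatile markets
    and tightens in quiet ones. atr_multiplier is an assumed knob."""
    return atr(bars) * atr_multiplier
```

In MQL5 the ATR value would come from `iATR` instead of being computed by hand; the sketch just shows the adaptive-spacing idea.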
Here's a high-level overview of how to implement this approach:
Define grid levels: Set grid levels at 5-pip intervals. This distance will be used to create the state space and action space for the Q-learning model.
Define the state space: The state space consists of the grid levels and the number of open positions. Each state in the Q-table will be represented as a tuple (grid level, number of open positions).
Define the action space: The action space represents the possible actions the agent can take at each state. In this case, the actions include:
• Open trade at grid level i
• Hold
where i is the index of the grid level.
Initialize the Q-table: Create a Q-table that maps each state (grid level, number of open positions) to the possible actions (open trade at grid level i, hold). Initialize the Q-table values to zero.
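The state space, action space, and zero-initialized Q-table described so far can be sketched like this. The sizes (`NUM_GRID_LEVELS`, `MAX_OPEN_POSITIONS`) are illustrative assumptions, since the text only fixes the 5-pip spacing, not the number of levels.

```python
from itertools import product

NUM_GRID_LEVELS = 10      # assumed; grid level i sits i * 5 pips from the reference price
MAX_OPEN_POSITIONS = 5    # assumed cap on simultaneous positions

# State space: every (grid level, open-position count) pair.
states = list(product(range(NUM_GRID_LEVELS), range(MAX_OPEN_POSITIONS + 1)))

# Action space: "open trade at grid level i" for each level, plus "hold".
actions = [("open", i) for i in range((NUM_GRID_LEVELS))] + [("hold", None)]

# Q-table mapping each state to a value per action, initialized to zero.
q_table = {state: {action: 0.0 for action in actions} for state in states}
```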
Define the reward function: The reward should be the profit in pips minus the maximum drawdown in pips. This encourages the Q-learning model to find actions that maximize profit while minimizing drawdown.
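As a one-line sketch, that reward could look like the following; the `drawdown_weight` knob is my own addition (the text implies an equal weighting of 1.0).

```python
def reward(profit_pips, max_drawdown_pips, drawdown_weight=1.0):
    """Profit minus weighted drawdown: high-profit, low-drawdown
    actions score highest. drawdown_weight is an assumed parameter."""
    return profit_pips - drawdown_weight * max_drawdown_pips
```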
Determine the initial trade direction: Based on your market analysis or the Q-learning model's suggestion, determine the initial trade direction (buy or sell).
Train the Q-learning model: Train the model using historical data and the defined reward function. When updating the Q-table, consider the Martingale component by doubling the trade size after a loss and reverting to the initial trade size after a win. Ensure that the model only opens trades in the same direction as the initial trade during the training process.
Implement an exploration-exploitation strategy: Use an epsilon-greedy approach to balance exploration (trying new actions) and exploitation (using the best-known action based on the Q-table) during the training process.
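The training pieces above — the one-step Q-update, epsilon-greedy selection, and the Martingale sizing rule (double after a loss, reset after a win) — can be sketched as follows. Hyperparameter values are illustrative, not from the text.

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # assumed learning rate, discount, exploration rate

def choose_action(q_table, state, actions, epsilon=EPSILON):
    """Epsilon-greedy: explore a random action with probability epsilon,
    otherwise exploit the best-known action from the Q-table."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table[state][a])

def q_update(q_table, state, action, r, next_state):
    """Standard one-step Q-learning update toward r + gamma * max Q(s')."""
    best_next = max(q_table[next_state].values())
    q_table[state][action] += ALPHA * (r + GAMMA * best_next - q_table[state][action])

def next_lot_size(base_lot, current_lot, last_trade_won):
    """Martingale component: double the size after a loss,
    revert to the base size after a win."""
    return base_lot if last_trade_won else current_lot * 2
```

During training, every simulated trade in the episode would be forced to the same direction as the initial trade, as the text requires.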
Test and optimize: Evaluate the trained Q-learning model on out-of-sample data, using the same (grid level, number of open positions) state representation. Make any necessary adjustments to improve performance.
Implement the strategy: Deploy your strategy to a trading platform and monitor its performance in real-time. Ensure that the system only opens trades in the same direction as the initial trade (either all buys or all sells). Be cautious with the Martingale component, as it can lead to significant losses if a losing streak occurs. Consider using a stop-loss or other risk management measures to protect your trading account.
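To make the Martingale risk warning concrete: with doubling, the total volume committed after n consecutive losses is base_lot * (2^n − 1), which grows geometrically. A simple account-level stop is one way to cap that; the 20% threshold below is an assumed example, not a recommendation.

```python
def total_lots_after_losses(base_lot, n_losses):
    """Cumulative volume after n consecutive doubled losses:
    base * (1 + 2 + 4 + ... + 2^(n-1)) = base * (2^n - 1)."""
    return base_lot * (2 ** n_losses - 1)

def equity_stop_hit(equity, balance, max_drawdown_pct=20.0):
    """Illustrative hard stop: halt trading once floating drawdown
    exceeds max_drawdown_pct of the account balance."""
    return (balance - equity) / balance * 100.0 >= max_drawdown_pct
```

For example, five straight losses starting from 0.01 lots already commit 0.31 lots in total, which is why a stop-loss or equity stop matters here.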
This article introduces a deep neural network written in MQL, along with the network's activation functions, such as the hyperbolic tangent function for the hidden layers and Softmax for the output layer. We will study the neural network step by step, from the first step to the last, and build a deep neural network together.
Programming a Deep Neural Network from Scratch using MQL Language
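The forward pass that article builds — tanh hidden layers feeding a softmax output — can be sketched in a few lines. This is a plain-Python illustration of the math, not the article's MQL code; the layer shapes are arbitrary.

```python
import math

def tanh_layer(inputs, weights, biases):
    """Hidden layer: weighted sum per neuron, squashed by tanh to (-1, 1)."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def softmax(logits):
    """Output layer: exponentiate and normalize so the outputs sum to 1."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def forward(inputs, hidden_layers, out_weights, out_biases):
    """Full forward pass: tanh hidden layers, then a softmax output layer.
    hidden_layers is a list of (weights, biases) pairs."""
    a = inputs
    for w, b in hidden_layers:
        a = tanh_layer(a, w, b)
    logits = [sum(w * x for w, x in zip(row, a)) + b
              for row, b in zip(out_weights, out_biases)]
    return softmax(logits)
```

The softmax output is a probability distribution over classes, which is what makes it a natural fit for the final layer.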
