Anddy Cabrera / Profile
- Info
8+ years of experience | 1 product | 8269 demo versions | 0 jobs | 0 trading signals | 0 subscribers
Anddy Cabrera
Released products
Controlled Martingale
The Controlled Martingale EA is a fully automated Expert Advisor for MetaTrader 5. It uses a grid-martingale approach based on pure price-action signals, with no indicators for entries. Entry signals are derived from the midpoint of the previous bar's high-low range. Grid spacing is calculated dynamically using the ATR indicator, so the system automatically adapts to current market volatility.
How it works: the EA separates new basket entries and grid continuation into two independent code paths. A new basket opens only when the price signal matches the allowed direction. Once a basket is open, additional levels are added based solely on the price distance from the last entry, with no signal condition. This separation prevents the grid from stalling when the market direction changes. When a basket reaches the maximum grid level, it is recorded as a hard basket. After a configured number of hard baskets, the EA reverses the trading direction and the lot size for the next cycle
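The product page gives no code, but the two calculations it describes (midpoint entry reference and ATR-scaled grid spacing) are simple enough to sketch. The following is an illustrative Python sketch, not the EA's actual implementation; all names (`atr`, `grid_spacing`, `entry_midpoint`, the bar tuples) are assumptions for demonstration.

```python
def atr(bars, period=14):
    """Average True Range over the last `period` bars.
    Each bar is a (high, low, close) tuple, oldest first."""
    true_ranges = []
    for i in range(1, len(bars)):
        high, low, _ = bars[i]
        prev_close = bars[i - 1][2]
        tr = max(high - low, abs(high - prev_close), abs(low - prev_close))
        true_ranges.append(tr)
    recent = true_ranges[-period:]
    return sum(recent) / len(recent)

def grid_spacing(bars, period=14, multiplier=1.0):
    """Distance between grid levels, scaled by current volatility."""
    return atr(bars, period) * multiplier

def entry_midpoint(prev_high, prev_low):
    """Entry reference: midpoint of the previous bar's high-low range."""
    return (prev_high + prev_low) / 2.0
```

Because spacing is derived from ATR rather than a fixed pip count, quiet markets produce a tighter grid and volatile markets a wider one, which is the adaptation the description claims.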
Anddy Cabrera
Hi Guys, I'm planning to build the following Expert Advisor using Q-learning, a reinforcement-learning technique from machine learning. The description of the EA is below. I want to check how many of you are interested so I can start the project:
Here's a high-level overview of how to implement this approach:
Define grid levels: Set grid levels at 5-pip intervals. This distance will be used to create the state space and action space for the Q-learning model.
Define the state space: The state space consists of the grid levels and the number of open positions. Each state in the Q-table will be represented as a tuple (grid level, number of open positions).
Define the action space: The action space represents the possible actions the agent can take at each state. In this case, the actions include:
- Open trade at grid level i
- Hold
where i represents the index of the grid level.
Initialize the Q-table: Create a Q-table that maps each state (grid level, number of open positions) to the possible actions (open trade at grid level i, hold). Initialize the Q-table values to zero.
Define the reward function: The reward function should be based on the difference between the maximum drawdown in pips and profit in pips. This reward function encourages the Q-learning model to find actions that minimize drawdown while maximizing profit.
Determine the initial trade direction: Based on your market analysis or the Q-learning model's suggestion, determine the initial trade direction (buy or sell).
Train the Q-learning model: Train the model using historical data and the defined reward function. When updating the Q-table, consider the Martingale component by doubling the trade size after a loss and reverting to the initial trade size after a win. Ensure that the model only opens trades in the same direction as the initial trade during the training process.
Implement an exploration-exploitation strategy: Use an epsilon-greedy approach to balance exploration (trying new actions) and exploitation (using the best-known action based on the Q-table) during the training process.
Test and optimize: Test your Q-learning model with the state representation including grid level and number of open positions on out-of-sample data. Make any necessary adjustments to improve performance.
Implement the strategy: Deploy your strategy to a trading platform and monitor its performance in real-time. Ensure that the system only opens trades in the same direction as the initial trade (either all buys or all sells). Be cautious with the Martingale component, as it can lead to significant losses if a losing streak occurs. Consider using a stop-loss or other risk management measures to protect your trading account.
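The steps above can be condensed into a short sketch. Since the post contains no code, everything here is a hedged illustration: the state/action encoding, the table sizes (`N_LEVELS`, `MAX_POSITIONS`), and the learning constants are assumptions, and the reward and lot-sizing rules follow the post's wording (profit minus drawdown; double after a loss, reset after a win).

```python
import random

N_LEVELS = 10        # grid levels spaced 5 pips apart (assumed count)
MAX_POSITIONS = 5    # assumed cap on simultaneously open positions
ACTIONS = list(range(N_LEVELS)) + ["hold"]  # open at level i, or hold

# Q-table: maps state (grid_level, open_positions) -> action -> value,
# initialized to zero as step 4 describes.
Q = {(g, p): {a: 0.0 for a in ACTIONS}
     for g in range(N_LEVELS) for p in range(MAX_POSITIONS + 1)}

def choose_action(state, epsilon):
    """Epsilon-greedy: explore with probability epsilon, else exploit."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(Q[state], key=Q[state].get)

def reward(profit_pips, max_drawdown_pips):
    """Reward as the post proposes: profit minus maximum drawdown, in pips."""
    return profit_pips - max_drawdown_pips

def update(state, action, r, next_state, alpha=0.1, gamma=0.95):
    """Standard one-step Q-learning update of the table entry."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (r + gamma * best_next - Q[state][action])

def next_lot(last_lot, last_trade_won, base_lot=0.01):
    """Martingale sizing: double the size after a loss, reset after a win."""
    return base_lot if last_trade_won else last_lot * 2
```

In a training loop, each historical bar would yield a `(state, action, reward, next_state)` transition fed to `update`, with `epsilon` decayed over episodes; the direction constraint from steps 7 and 11 would be enforced by filtering `ACTIONS` to the initial trade direction.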
Arnaud Bernard Abadi
2023.07.03
Looking forward to reading your code! Many thanks in advance. Will it be shared through an article?
Anddy Cabrera
Published articles
Programming a Deep Neural Network from Scratch Using the MQL Language
This article aims to teach you how to create a deep neural network using the MQL4/5 language.
Anddy Cabrera
Introduction Since machine learning has recently gained popularity, many have heard about Deep Learning and desire to know how to apply it in the MQL language...
Anddy Cabrera
3D Cartesian plane. The derivative and the tangent line at a point on the given function curve. The gradient points in the direction of the greatest rate of increase of the function, and its magnitude is the slope of the graph in that direction. This graphic has been developed by me from scratch, using only mathematical formulas for its creation.
Anddy Cabrera
2D Cartesian plane. The derivative and the tangent line at a point on the given function curve. This graphic has been developed by me from scratch, using only mathematical formulas for its creation.