# Multi-Layer Probabilistic EA — Design Plan (MQL5)
Author: Custom-Expert-Advisors
Status: Draft v3
Scope: MQL5-only (no Python)
## 1. Purpose & Scope
Design a robust, real-time, probabilistic Expert Advisor for MT5 that adapts to market regimes, filters noise using information theory, and sizes risk via Bayesian beliefs and fractional Kelly. The design prioritizes stability, latency control, and survivability in live trading environments.
## 2. High-Level Architecture
Layers and main modules:
```
┌─────────────────────────────────────┐
│ Data Processing Layer │
├─────────────────────────────────────┤
│ Information Theory Module │
│ • Mutual Information Calculator │
│ • Shannon Entropy Estimator │
│ • Signal Quality Assessor │
├─────────────────────────────────────┤
│ Bayesian Belief System │
│ • Prior/Posterior Management │
│ • Market Regime Probabilities │
│ • Dynamic Belief Updates │
├─────────────────────────────────────┤
│ Markov State Engine │
│ • HMM Regime Detection │
│ • State Transition Matrix │
│ • Regime-Adaptive Strategies │
├─────────────────────────────────────┤
│ Decision & Risk Management │
│ • Kelly Criterion Position Sizing │
│ • Entropy-Based Signal Filtering │
│ • Multi-Regime Trade Logic │
└─────────────────────────────────────┘
```
## 3. Requirements & Non-Goals
- MQL5 only; no external runtimes.
- Operate on new-bar events and periodic timers; avoid per-tick heavy compute.
- Multi-symbol capable; independent state per symbol/timeframe.
- Deterministic, debounced regime transitions; online updates only.
- Non-goals: ML model training on live terminal, heavy matrix libs, cloud dependencies.
## 4. Event Model & Data Flow
- OnInit: load configuration; warm-up historical buffers; restore persisted states.
- OnTimer (e.g., 1s): refresh rolling metrics; pre-compute next-bar signals.
- OnTick: gate by new-bar for target TF(s); evaluate entry/exit; manage orders.
- OnDeinit: persist states (beliefs, HMM α, rolling stats) to files.
Data refresh policy:
- Batch CopyRates/CopyBuffer; pre-allocate arrays; update ring buffers incrementally.
- Warm-up: require N bars (≥500 per TF) before trading.
- If NN indicator is enabled, pull predictions via iCustom on new bars (price/regime probs, quantiles, uncertainty) and cache per symbol/TF.
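A minimal sketch of the new-bar gate referenced in the OnTick bullet above, assuming one cached bar-open time per (symbol, TF); heavy work runs only when the target timeframe opens a new bar:
```cpp
// Illustrative per-(symbol,TF) new-bar gate used to defer heavy work in OnTick.
// Compares the open time of the current bar with the last one processed.
bool IsNewBar(const string symbol, const ENUM_TIMEFRAMES tf, datetime &last_bar_time)
{
   datetime t[1];
   if(CopyTime(symbol, tf, 0, 1, t) != 1)   // open time of the current bar
      return(false);                         // data not ready: treat as "no new bar"
   if(t[0] == last_bar_time)
      return(false);
   last_bar_time = t[0];
   return(true);
}

// Usage inside OnTick (g_last_bar_m15 is a hypothetical per-symbol/TF cache):
// if(IsNewBar(_Symbol, PERIOD_M15, g_last_bar_m15)) { /* evaluate signals, manage orders */ }
```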
## 5. Information Theory Module
Objectives:
- Rank features by Mutual Information (MI) with target (future returns/sign).
- Assess market entropy to gate trading and classify noise vs trend.
- Provide signal quality score to suppress unstable features.
Key techniques:
- Quantile binning (8–16 bins), Laplace smoothing, Miller–Madow bias correction.
- Rolling window MI with exponential decay re-ranking; cap MI change per update (≤20%).
- Shannon entropy on log-returns; use rolling baseline to compute z-score; require K-bar debounce for regime flips.
- Tie-break via Spearman |ρ| or distance correlation; drop features with high MI variance.
Contract (pseudocode):
```cpp
class CInformationTheory {
private:
   int    m_bins;
   double m_eps;
public:
   void Init(int bins=12, double eps=1e-12) { m_bins=bins; m_eps=eps; }
   // Mutual information with bias correction (Miller–Madow)
   double MutualInformation(const double &x[], const double &y[], int n);
   // Shannon entropy (natural log)
   double Entropy(const double &x[], int n);
   // Rolling z-entropy vs baseline (mean, std)
   double EntropyZScore(const double &entropy_hist[], int n, double &mean_out, double &std_out);
   // Quantile bin utility in [0..m_bins-1]
   int QuantileBin(const double &sorted[], int n, double v);
};
```
Operational rules:
- Minimum samples per bin ≥ 5 before trusting MI.
- Debounce regime classification: require K consecutive confirmations.
- Maintain ring buffers to avoid re-sorting per bar.
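A self-contained sketch of the binned MI estimate with Miller–Madow correction, assuming equal-frequency (quantile) bins and the minimum-samples rule above; function names are illustrative, not the final `CInformationTheory` methods:
```cpp
// Illustrative MI estimator: quantile binning + Miller–Madow bias correction. Returns MI in nats.
int QuantBinOf(const double &sorted[], const int n, const double v, const int bins)
{
   int lo=0, hi=n;                             // count of samples <= v via binary search
   while(lo < hi){ int mid=(lo+hi)/2; if(sorted[mid] <= v) lo=mid+1; else hi=mid; }
   int b = (int)((long)lo*bins/n);
   return(MathMin(bins-1, MathMax(0, b)));
}

double MI_MillerMadow(const double &x[], const double &y[], const int n, const int bins)
{
   if(n < bins*5) return(0.0);                 // respect the >=5 samples-per-bin rule
   double sx[], sy[];
   ArrayCopy(sx, x, 0, 0, n); ArraySort(sx);   // sorted copies define the quantile edges
   ArrayCopy(sy, y, 0, 0, n); ArraySort(sy);

   int joint[], px[], py[];
   ArrayResize(joint, bins*bins); ArrayInitialize(joint, 0);
   ArrayResize(px, bins);         ArrayInitialize(px, 0);
   ArrayResize(py, bins);         ArrayInitialize(py, 0);

   for(int k=0; k<n; k++)
   {
      int bx = QuantBinOf(sx, n, x[k], bins);
      int by = QuantBinOf(sy, n, y[k], bins);
      joint[bx*bins+by]++; px[bx]++; py[by]++;
   }

   double mi=0.0;
   int mx=0, my=0, mxy=0;                      // occupied bin counts for the bias term
   for(int i=0; i<bins; i++){ if(px[i]>0) mx++; if(py[i]>0) my++; }
   for(int i=0; i<bins*bins; i++)
   {
      if(joint[i]==0) continue;
      mxy++;
      double pxy = (double)joint[i]/n;
      double pxi = (double)px[i/bins]/n;
      double pyj = (double)py[i%bins]/n;
      mi += pxy*MathLog(pxy/(pxi*pyj));
   }
   // Miller–Madow: MI = H(X)+H(Y)-H(X,Y); each plug-in entropy gets +(m-1)/(2n)
   double correction = ((mx-1) + (my-1) - (mxy-1)) / (2.0*n);
   return(MathMax(0.0, mi + correction));
}
```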
## 6. Bayesian Belief System
Regimes: Strong Bull, Weak Bull, Neutral, Weak Bear, Strong Bear.
Approach:
- Dirichlet priors α0 from history; exponential forgetting λ (0.98–0.995).
- Likelihoods derived from calibrated indicator maps (logistic curves) frozen during live.
- Posterior smoothing (stickiness) to reduce whipsaws.
- Fusion with HMM (optional): compute a unified regime score as a weighted average of Bayesian posteriors and HMM filtered probabilities (e.g., 0.6·posterior + 0.4·hmm_prob). Weights are configurable; the regime count (3–7) can be set dynamically via data-driven clustering (e.g., k-means on historical states), calibrated per asset during WFO.
Contract (pseudocode):
```cpp
enum Regime { REG_STRONG_BULL=0, REG_WEAK_BULL, REG_NEUTRAL, REG_WEAK_BEAR, REG_STRONG_BEAR, REG_COUNT };

class CBayesianEngine {
private:
   double m_alpha0[REG_COUNT];   // priors
   double m_post[REG_COUNT];     // posteriors
   double m_lambda;              // forgetting factor
   double m_stick;               // posterior smoothing (stickiness)
   double m_eps;
public:
   void Init(const double &priors[], double lambda=0.99, double stick=0.85, double eps=1e-9);
   // likelihoods in [0,1], one per regime
   void Update(const double &likelihoods[]) {
      double temp[REG_COUNT];
      for(int i=0; i<REG_COUNT; i++) {
         // blend the forgetting-weighted posterior with the base prior
         double prior = m_lambda*m_post[i] + (1.0-m_lambda)*m_alpha0[i];
         temp[i] = MathMax(m_eps, prior * MathMax(m_eps, likelihoods[i]));
      }
      double sum=0; for(int i=0; i<REG_COUNT; i++) sum += temp[i];
      for(int i=0; i<REG_COUNT; i++) {
         double p = temp[i]/sum;
         m_post[i] = m_stick*m_post[i] + (1.0-m_stick)*p;   // sticky posterior update
      }
   }
   int Dominant() const { int k=0; double m=m_post[0]; for(int i=1; i<REG_COUNT; i++) if(m_post[i]>m){ m=m_post[i]; k=i; } return k; }
   // 1 - normalized posterior entropy
   double Confidence() const { double H=0, Hmax=MathLog(REG_COUNT); for(int i=0; i<REG_COUNT; i++) H -= (m_post[i]>0 ? m_post[i]*MathLog(m_post[i]) : 0.0); return 1.0 - H/Hmax; }
   double Posterior(int i) const { return m_post[i]; }
};
```
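A usage sketch of the contract above on a new bar; `BuildLikelihoods` is a hypothetical helper standing in for the calibrated evidence maps of Section 12:
```cpp
// Illustrative wiring of CBayesianEngine on a new bar.
CBayesianEngine g_bayes;

void OnNewBarExample()
{
   double like[REG_COUNT];
   BuildLikelihoods(like);                  // hypothetical: calibrated logistic maps (Section 12)
   g_bayes.Update(like);

   int    regime = g_bayes.Dominant();      // index of the most probable regime
   double conf   = g_bayes.Confidence();    // 1 - normalized posterior entropy
   if(conf >= 0.60 && (regime==REG_STRONG_BULL || regime==REG_WEAK_BULL))
   {
      // consider long setups, subject to the entry gates in Section 8
   }
}
```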
## 7. Markov State Engine (HMM)
Approach:
- Online forward-filtering α (not Viterbi) for real-time probabilities.
- Sticky transitions: self-transition bias κ (boost the diagonal of A) plus a minimum-hold bar count; see the construction sketch after the contract below.
- Degeneracy guard: floor probs with ε and renormalize.
- Integrate with Bayesian: Optional fused score for final regime decision; support dynamic state counts via clustering to adapt to market data, reducing rigidity.
Contract (pseudocode):
```cpp
class CHmm {
private:
   int    m_N;            // number of states (<= 5 in this sketch)
   double m_A[5][5];      // transition matrix (row-stochastic)
   double m_alpha[5];     // filtered state probabilities
   double m_eps;
public:
   bool Init(int states, const double &A[][5], const double &init_alpha[]);
   // b[i] = emission likelihood per state for the current observation
   void Step(const double &b[]) {
      double next[5] = {0};
      for(int i=0; i<m_N; i++) {
         double s=0; for(int j=0; j<m_N; j++) s += m_alpha[j]*m_A[j][i];   // predict
         next[i] = MathMax(m_eps, s) * MathMax(m_eps, b[i]);               // update with emission
      }
      double sum=0; for(int i=0; i<m_N; i++) sum += next[i];
      for(int i=0; i<m_N; i++) m_alpha[i] = next[i]/sum;                   // renormalize
   }
   int State() const { int k=0; double m=m_alpha[0]; for(int i=1; i<m_N; i++) if(m_alpha[i]>m){ m=m_alpha[i]; k=i; } return k; }
   double Prob(int i) const { return m_alpha[i]; }
};
```
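Relating to the sticky-transition bullet above, a minimal sketch that builds a row-stochastic matrix A with diagonal bias κ before handing it to `CHmm::Init` (values are illustrative):
```cpp
// Illustrative sticky, row-stochastic transition matrix.
// kappa in [0,1) is the extra self-transition mass added to the diagonal.
void BuildStickyTransitions(double &A[][5], const int N, const double kappa)
{
   for(int i=0; i<N; i++)
   {
      for(int j=0; j<N; j++)
         A[i][j] = (1.0-kappa)/N;          // uniform off-diagonal baseline
      A[i][i] += kappa;                     // boost the diagonal (stickiness)
   }
   // Each row sums to 1: N*(1-kappa)/N + kappa = 1.
}

// Usage (hypothetical): double A[5][5]; BuildStickyTransitions(A, 5, 0.80);
// hmm.Init(5, A, init_alpha);
```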
## 8. Decision & Risk Management
Entry gate (all must pass):
1) Market tradable: spread < X, liquidity ok, session filter, no red news within N minutes.
2) Entropy low and stable (z-score below threshold, debounced K bars).
3) HMM regime posterior > p_min and Bayesian confidence ≥ c_min.
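The gate can be evaluated as a single boolean, for example in the sketch below; `InpMaxSpreadPoints`, `InpEntropyZMax`, `InpDebounceBars`, `InpPMin`, and `InpCMin` are illustrative names mirroring the Configuration inputs in Section 14, and `IsNewsBlackout` is a hypothetical helper:
```cpp
// Illustrative all-must-pass entry gate.
bool EntryGatePassed(const double entropy_z, const int debounce_ok_bars,
                     const double hmm_prob, const double bayes_conf)
{
   // 1) Market tradable
   if((int)SymbolInfoInteger(_Symbol, SYMBOL_SPREAD) > InpMaxSpreadPoints) return(false);
   if(IsNewsBlackout(_Symbol))                                             return(false);
   // 2) Entropy low and stable (debounced)
   if(entropy_z > InpEntropyZMax)                                          return(false);
   if(debounce_ok_bars < InpDebounceBars)                                  return(false);
   // 3) Regime conviction
   if(hmm_prob   < InpPMin)                                                return(false);
   if(bayes_conf < InpCMin)                                                return(false);
   return(true);
}
```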
Position sizing (fractional Kelly with uncertainty throttle and drawdown guard):
```cpp
double CalcLot(double equity, double risk_per_trade, double win_p, double rr,
               double conf, double dd_factor, double min_lot, double max_lot)
{
   // Fractional Kelly: f = p - (1-p)/RR, floored at 0 and capped at 0.5
   double kelly    = win_p - (1.0 - win_p)/MathMax(1e-6, rr);
   kelly           = MathMax(0.0, MathMin(0.5, kelly));
   double throttle = conf * dd_factor;                       // 0..1 (confidence x drawdown guard)
   double risk_amt = equity * risk_per_trade * kelly * throttle;

   double tick_val = SymbolInfoDouble(_Symbol, SYMBOL_TRADE_TICK_VALUE);   // per lot, per tick
   double tick_sz  = SymbolInfoDouble(_Symbol, SYMBOL_TRADE_TICK_SIZE);
   double point    = SymbolInfoDouble(_Symbol, SYMBOL_POINT);
   // CurrentSLPoints() is assumed to return the stop distance in points
   double sl_price = MathMax(10.0, CurrentSLPoints()) * point;
   // Loss per 1.0 lot if the SL is hit (tick value already reflects contract size)
   double loss_per_lot = sl_price / MathMax(1e-12, tick_sz) * tick_val;

   double lots = risk_amt / MathMax(1e-6, loss_per_lot);
   double step = SymbolInfoDouble(_Symbol, SYMBOL_VOLUME_STEP);
   lots = MathFloor(lots/MathMax(1e-6, step)) * step;        // snap down to the broker volume step
   return MathMax(min_lot, MathMin(max_lot, lots));
}
```
Global controls:
- Max risk per trade: 0.25–0.5% of equity.
- Daily loss stop and pause; weekly soft stop.
- One-trade-per-bar per symbol per strategy; cooldown after clusters (e.g., 2 losses → wait M bars).
- Immediate SL/TP on order open; pre-validate against StopLevel/FreezeLevel.
- Correlation-aware: For multi-symbol, adjust via portfolio Kelly using ALGLIB covariance matrix; cap total exposure.
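A sketch of the daily loss stop and the loss-cluster cooldown from the list above, assuming an equity snapshot at the start of each server day; names and thresholds are illustrative:
```cpp
// Illustrative daily loss stop + cooldown after consecutive losses.
double   g_day_start_equity = 0.0;
datetime g_day_start        = 0;
int      g_consec_losses    = 0;
datetime g_cooldown_until   = 0;

bool TradingAllowedToday(const double daily_loss_limit_pct, const ENUM_TIMEFRAMES tf)
{
   datetime today = (datetime)(TimeCurrent() - TimeCurrent()%86400);   // start of server day
   if(today != g_day_start)
   {
      g_day_start        = today;
      g_day_start_equity = AccountInfoDouble(ACCOUNT_EQUITY);          // snapshot on first call of the day
   }
   double eq = AccountInfoDouble(ACCOUNT_EQUITY);
   if(g_day_start_equity > 0.0 &&
      (g_day_start_equity - eq)/g_day_start_equity >= daily_loss_limit_pct)
      return(false);                                                   // daily loss stop hit: pause
   if(TimeCurrent() < g_cooldown_until)
      return(false);                                                   // still in cooldown
   return(true);
}

// Call when a losing trade closes (win closes should reset g_consec_losses to 0):
void OnLosingTradeClosed(const int cooldown_bars, const ENUM_TIMEFRAMES tf)
{
   if(++g_consec_losses >= 2)
      g_cooldown_until = TimeCurrent() + cooldown_bars*PeriodSeconds(tf);
}
```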
## 9. Safeguards & Circuit Breakers
- Kill switches: posterior confidence below floor for M bars; entropy z-score persistently high; daily loss/slippage cap exceeded.
- Black-swan guards: detect extreme volatility spikes (e.g., entropy z > 5) and pause trading; fall back to a minimal-risk mode.
- State persistence: Save/load with redundancy (e.g., dual files, checksums); if file I/O fails, reset to safe defaults and log/alert.
- Execution: CTrade with normalized prices; retry with reduced volume on insufficient margin; handle requotes and partial fills.
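A persistence sketch with a simple additive checksum, matching the redundancy idea above and the path layout of Section 15; the checksum guards against truncation, not tampering, and a second backup file would be written the same way:
```cpp
// Illustrative save/load of posteriors under MQL5\Files\ProbEA\state\.
bool SaveState(const string symbol, const string tf, const double &post[], const int n)
{
   string path = "ProbEA\\state\\" + symbol + "_" + tf + "_state.csv";
   int h = FileOpen(path, FILE_WRITE|FILE_CSV|FILE_ANSI, ';');
   if(h == INVALID_HANDLE) return(false);
   double checksum = 0.0;
   for(int i=0; i<n; i++){ FileWrite(h, post[i]); checksum += post[i]; }
   FileWrite(h, "CHK", checksum);              // trailer record: tag + checksum
   FileClose(h);
   return(true);
}

bool LoadState(const string symbol, const string tf, double &post[], const int n)
{
   string path = "ProbEA\\state\\" + symbol + "_" + tf + "_state.csv";
   int h = FileOpen(path, FILE_READ|FILE_CSV|FILE_ANSI, ';');
   if(h == INVALID_HANDLE) return(false);      // caller resets to safe defaults and alerts
   double checksum = 0.0;
   for(int i=0; i<n && !FileIsEnding(h); i++)
   { post[i] = StringToDouble(FileReadString(h)); checksum += post[i]; }
   string tag    = FileReadString(h);
   double stored = StringToDouble(FileReadString(h));
   FileClose(h);
   return(tag=="CHK" && MathAbs(stored-checksum) < 1e-9);   // mismatch => reset per policy
}
```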
## 10. Performance & Stability
- Compute on new bars and OnTimer, never per tick for heavy tasks.
- Static arrays and ring buffers; avoid ArrayResize in hot paths.
- Batch CopyRates/CopyBuffer; align data timestamps.
- Quantile binning with rolling rank approximations; cap per-update MI change.
- Log telemetry only on state changes in release; full logs in tester.
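A minimal fixed-capacity ring buffer, as used for the rolling entropy/MI windows above, allocating once at init so no `ArrayResize` occurs in hot paths (illustrative, not the final class):
```cpp
// Illustrative fixed-capacity ring buffer: O(1) push, no reallocation after Init().
class CRingBuffer
{
private:
   double m_data[];
   int    m_cap;
   int    m_head;      // next write position
   int    m_count;     // number of valid samples (<= m_cap)
public:
   void Init(const int capacity)
   {
      m_cap = capacity; m_head = 0; m_count = 0;
      ArrayResize(m_data, m_cap);            // single allocation up front
      ArrayInitialize(m_data, 0.0);
   }
   void Push(const double v)
   {
      m_data[m_head] = v;
      m_head = (m_head + 1) % m_cap;
      if(m_count < m_cap) m_count++;
   }
   int    Count() const { return(m_count); }
   // i = 0 is the most recent sample, i = Count()-1 the oldest; caller checks i < Count()
   double At(const int i) const { return(m_data[(m_head - 1 - i + 2*m_cap) % m_cap]); }
};
```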
## 11. Strategy Wiring by Regime
- Bull: breakout/momentum, ATR trailing, pyramiding only when posterior>thr and entropy low.
- Bear: mean reversion toward VWAP/EMA bands; wider stops; smaller size.
- Neutral: range trades with fades; or stand aside if costs dominate.
## 12. Feature Selection & Evidence Mapping
- MI-based feature ranking; select top 3–5 informative indicators.
- Exponential decay on historical MI to prevent flapping selections.
- Likelihood mapping: indicator → probability via calibrated logistic functions, frozen during live; refreshed only on scheduled walk-forward.
- Optional: include NN embeddings (hidden activations) as additional features; re-rank them alongside the base indicators with the same MI procedure.
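A sketch of the calibrated logistic mapping from the third bullet; the coefficients a (offset) and b (slope) are fitted offline per indicator and regime during WFO and frozen live (example values are hypothetical):
```cpp
// Illustrative evidence map: indicator value -> likelihood in (0,1) via a logistic curve.
double LogisticLikelihood(const double x, const double a, const double b)
{
   return(1.0/(1.0 + MathExp(-(a + b*x))));
}

// Example: RSI z-score as bull evidence with hypothetical coefficients a=-0.2, b=1.3:
// double like_bull = LogisticLikelihood(rsi_z, -0.2, 1.3);
```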
## 13. Testing & Validation
- Backtests with walk-forward (anchored OOS segments); lock evidence maps per segment.
- Monte Carlo: shuffle trade sequences; randomize spread/slippage to stress execution.
- OOS paper/live-sim: broker costs and realistic rejection rates.
- Numeric unit checks: MI stability, entropy z baseline, posterior normalization.
- Robustness: Adversarial inputs (data gaps, noise injection); sensitivity analysis for key params (λ, stickiness).
- Calibration: KL divergence for drift; min 20% OOS data per WFO segment.
- Edge cases: Simulate black-swan events (e.g., flash crashes), correlated symbol failures, broker quirks (e.g., variable StopLevels); test file I/O reliability with mock failures.
## 14. Configuration (Inputs)
- Symbols/TFs; warm-up bars; timer interval.
- MI bins (8–16), window length, K debounce bars, MI change cap.
- Dirichlet α0, λ (forgetting), stickiness, ε floor.
- HMM transitions A (diagonal bias κ), min-hold bars.
- Risk: base risk %, Kelly fraction cap, min/max lot, drawdown throttle.
- Gates: spread limit, p_min, c_min, entropy z-threshold.
- Session and news blackout windows.
- NN (optional): enable_nn, nn_mode (indicator/include), ensemble_size, temp_scale_T, uncertainty_max, model_path (Files/ProbEA/models/), scaler_path, outputs (probs, mu_sigma, quantiles), min_confidence.
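A partial `input` block illustrating how the parameters above could be exposed; names and defaults are placeholders, not tuned values:
```cpp
// Illustrative EA inputs (subset).
input int    InpWarmupBars      = 500;     // warm-up bars per TF before trading
input int    InpMIBins          = 12;      // quantile bins for MI/entropy (8–16)
input int    InpDebounceBars    = 3;       // K consecutive confirmations for regime flips
input double InpLambdaForget    = 0.99;    // Bayesian exponential forgetting
input double InpStickiness      = 0.85;    // posterior smoothing
input double InpKappaSticky     = 0.80;    // HMM self-transition bias
input double InpBaseRiskPct     = 0.0035;  // 0.35% base risk per trade
input double InpKellyCap        = 0.5;     // fractional Kelly cap
input double InpPMin            = 0.55;    // min HMM posterior to trade
input double InpCMin            = 0.60;    // min Bayesian confidence to trade
input double InpEntropyZMax     = 1.0;     // entropy z-score gate
input int    InpMaxSpreadPoints = 20;      // spread limit in points
input bool   InpEnableNN        = false;   // optional NN module
```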
## 15. Directory Layout (proposed)
- `Include/Custom Include/ProbabilisticEA/CInformationTheory.mqh` — MI/Entropy utils
- `Include/Custom Include/ProbabilisticEA/CBayesianEngine.mqh` — Bayesian updater
- `Include/Custom Include/ProbabilisticEA/CHmm.mqh` — HMM online filter
- `Include/Custom Include/ProbabilisticEA/CRisk.mqh` — sizing, guards, kill switches
- `Include/Custom Include/ProbabilisticEA/CPriceNN.mqh` — tiny MLP interface (optional)
- `Experts/Custom Expert Advisors/ProbabilisticEA.mq5` — EA wiring and orchestration
- `Files/ProbEA/state/` — persisted JSON/CSV states as `[symbol]_[tf]_state.csv` with versioned backups.
- `Files/ProbEA/models/` — NN weights, scalers, config per symbol/TF
- `Indicators/PriceNN.mq5` — NN indicator exposing probs/quantiles/uncertainty via buffers (optional)
## 16. Development Plan (Phases)
Phase 1 — Core Infrastructure:
- Build information theory library; MI/entropy ring buffers.
- Implement Bayesian updating; HMM forward filter.
- State persistence; unit tests in tester.
- Prioritize minimal viable modules; make advanced features (e.g., NN, fusion) optional add-ons.
Phase 2 — Strategy Logic:
- Regime-specific rules and gates; entropy filtering.
- Fractional Kelly sizing with confidence and drawdown throttles.
- Visual panel for beliefs and regime status.
- Simplify: Use configurable flags to toggle complexity (e.g., disable fusion for initial testing).
Phase 3 — Optimization & Testing:
- Walk-forward analysis; parameter sweeps; genetic optimization (safeguarded).
- Stress tests with slippage/spread; Monte Carlo trade order shuffles.
- Live-sim dry run with telemetry review.
## 17. KPIs & Monitoring
- Trade expectancy by regime; hit rate vs predicted win_p calibration.
- Posterior confidence distribution; regime dwell times vs design.
- Slippage and spread drift monitors; rejection rates.
- Drawdown containment: daily/weekly adherence to limits.
- NN calibration (if enabled): Brier score, Expected Calibration Error (ECE), predictive entropy distribution, ensemble disagreement.
## 17.5 Monitoring and Logging
- Real-time dashboards: Use ChartObjects for regime visualization, belief charts.
- Structured logging: CSV exports for beliefs/entropy on changes; anomaly alerts (e.g., high slippage variance).
## 18. Risks & Mitigations
- Computational load: incremental updates, batching, static buffers.
- Belief instability: stickiness, debounce, floors, min-hold.
- Parameter drift: scheduled WFO; ensemble/bootstrapped validation.
- Execution variance: pre-flight checks, retries, smaller fallback size.
- NN overconfidence: temperature scaling, entropy gate, ensemble averaging; fall back to rule-based likelihoods if the NN is unavailable.
- Overcomplexity: mitigate via modular design with toggles, a phased rollout starting with the core (no NN/fusion), and regular code audits for simplicity.
- Rigid assumptions: use dynamic regimes and WFO to adapt; fall back to a 3-state model if clustering fails.
- Implementation risks: Expanded testing for edges; robust I/O with backups; broker-agnostic checks (e.g., dynamic StopLevel queries).
## 19. Notes
- Favor clarity and debouncing over hyper-reactivity; prioritize survival.
- Keep live config conservative; only widen risk after sustained calibration stability.
---
## Appendix A — Original Class Sketches (for reference)
```cpp
class CInformationTheory {
double CalculateMutualInformation(double &feature[], double &target[], int period);
double CalculateEntropy(double &returns[], int period);
bool IsLowEntropyRegime(double entropy_threshold);
};
class CBayesianEngine {
double priors[5];
double posteriors[5];
double likelihoods[5];
void UpdateBeliefs(double macd_signal, double rsi_value, double volatility);
int GetDominantRegime();
double GetRegimeConfidence();
};
```
## 20. Neural Network Integration (Optional)
### 20.1 Roles in the Architecture
- Evidence mapping: feature vector → regime probabilities via softmax; used as likelihoods for `CBayesianEngine.Update()` and emissions for `CHmm.Step()`.
- Distributional forecast: next-bar return mean/variance or quantiles (q10/q50/q90) for win-prob and SL/TP shaping.
- Meta-labeling: probability of success for a gated trade; feeds Kelly p.
### 20.2 MQL5 Design Patterns
- Indicator engine (recommended): `Indicators/PriceNN.mq5` exposes buffers for probs, μ/σ or quantiles, and uncertainty. EA reads via `iCustom` on new bars.
- Embedded include: `CPriceNN.mqh`, a tiny MLP evaluated on `OnTimer` and cached per symbol/TF.
- Ensemble: average 3–5 small NNs; use the standard deviation of their outputs as the uncertainty measure.
### 20.3 Model Spec (MQL5 Friendly)
- Small MLP: inputs 16–32, hidden 16–32, outputs 3–6; sigmoid/tanh activations.
- Live: prediction only; training occurs in Tester/WFO and weights serialized to `Files/ProbEA/models/`.
- Temperature scaling T for probability calibration.
- ALGLIB backend: the ALGLIB port bundled with MQL5 (`<Math\Alglib\alglib.mqh>`) covers MLP creation, Levenberg–Marquardt training (Tester/WFO only), and prediction, e.g., `CAlglib::MLPCreate1`, `CAlglib::MLPTrainLM`, and `CAlglib::MLPProcess` on a `CMultilayerPerceptronShell`. Pros: native MQL5, efficient matrix/solver routines, no DLLs. A usage sketch follows below.
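A training/inference sketch under the assumption that the terminal's bundled ALGLIB port is used; exact signatures should be verified against the installed `Math\Alglib` headers:
```cpp
// Illustrative ALGLIB MLP usage (train in Tester/WFO, predict only in live).
#include <Math\Alglib\alglib.mqh>

void ExampleTrainAndPredict(CMatrixDouble &xy, const int npoints,
                            const int nin, const int nhid, const int nout)
{
   CMultilayerPerceptronShell net;
   CAlglib::MLPCreate1(nin, nhid, nout, net);                // one-hidden-layer MLP

   int info = 0;
   CMLPReportShell rep;
   // xy rows: nin feature columns followed by nout target columns (built offline)
   CAlglib::MLPTrainLM(net, xy, npoints, 0.001, 3, info, rep);
   if(info <= 0)
      return;                                                // training failed: keep NN disabled

   double x[]; ArrayResize(x, nin);                          // current (normalized) feature vector
   double y[]; ArrayResize(y, nout);                         // raw outputs; apply T-scaled softmax afterwards
   CAlglib::MLPProcess(net, x, y);
   // Weight export to Files/ProbEA/models/ is handled separately (see 20.6/20.8).
}
```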
### 20.4 Features & Normalization
- Inputs: returns (r1,r2,r3,r5,r10), volatility (ATR_z, realized_vol_z), momentum (EMA slopes, MACD_z, RSI_z), microstructure (spread_z, tick_vol_z), regime cues (entropy_z, Hurst proxy).
- Normalize with rolling mean/std or median/IQR; store scalers with the model.
### 20.5 Calibration & Uncertainty
- Temperature-scaled softmax; validate T per WFO segment.
- Metrics: Brier score, ECE. Uncertainty via predictive entropy and ensemble std.
- Use uncertainty to throttle or skip trades.
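A sketch of temperature scaling plus predictive entropy as the uncertainty measure (T > 1 softens the distribution; T is fitted per WFO segment as stated above):
```cpp
// Illustrative temperature-scaled softmax and normalized predictive entropy.
void SoftmaxT(const double &logits[], const int k, const double T, double &probs[])
{
   ArrayResize(probs, k);
   double mx = logits[0];
   for(int i=1; i<k; i++) if(logits[i] > mx) mx = logits[i];   // subtract max for stability
   double sum = 0.0;
   for(int i=0; i<k; i++){ probs[i] = MathExp((logits[i]-mx)/T); sum += probs[i]; }
   for(int i=0; i<k; i++) probs[i] /= sum;
}

// Uncertainty in [0,1]: predictive entropy divided by its maximum log(k)
double NormalizedEntropy(const double &probs[], const int k)
{
   double H = 0.0;
   for(int i=0; i<k; i++) if(probs[i] > 0.0) H -= probs[i]*MathLog(probs[i]);
   return(H/MathLog((double)k));
}
```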
### 20.6 Persistence & Scheduling
- Files: `weights.[symbol].[tf].csv`, `scaler.[symbol].[tf].csv`, `config.json` (T, thresholds) under `Files/ProbEA/models/`.
- OnInit: load; if missing, disable NN gracefully and fall back to rule-based likelihoods.
- OnDeinit: persist last states; robust to restarts.
### 20.7 EA Flow Using NN Outputs
1) On new bar, build features and run Predict.
2) Gate by entropy z (debounced) and NN uncertainty.
3) Bayesian.Update() with NN-derived likelihoods; HMM.Step() with emissions.
4) If posterior > p_min and Confidence ≥ c_min, compute p_win from μ/σ or meta-prob.
5) CalcLot with confidence/drawdown throttles; place order with immediate SL/TP.
### 20.8 Training vs Inference Separation
- In Strategy Tester/WFO, train models and export weights/scalers/config to `Files/ProbEA/models/`.
- In live/sim, load and Predict only; no backprop.
- Prefer outputs as classification probabilities and optional return quantiles.
- Apply temperature scaling and (optional) small ensemble; expose uncertainty via buffers or accessor functions.
### 20.9 Guardrails
- NN outputs must pass existing gates (entropy, spread, session, HMM posterior).
- Never scale up risk under high entropy or high ensemble disagreement; only reduce or skip.
### 20.10 Interfaces (API Contracts)
This section defines the minimal, testable contracts between the EA and the NN component so data flows are predictable and failures are safe.
#### A) Include-based API (compiled MLP)
Inputs/Outputs and behavior:
- Inputs: `features[]` of length N (finite, normalized), symbol/timeframe context set at init.
- Outputs: regime probabilities `probs[K]` (sum=1, bounded by ε), distribution metrics (`mu`, `sigma`), optional quantiles (`q10`, `q50`, `q90`), and `uncertainty` in [0..1].
- Timing: call on new bar or via OnTimer; cache per (symbol, TF, bar time).
- Failure: return false and set last error; EA uses safe defaults and skips or reduces risk.
Sketch (MQL5-style pseudocode):
```cpp
enum NnError {
   NN_OK=0,
   NN_MODEL_NOT_LOADED=1,
   NN_FEATURE_SIZE_MISMATCH=2,
   NN_INVALID_INPUT=3,
   NN_STALE_CONTEXT=4,
   NN_NUMERIC_FAIL=5
};

struct NnOutputs {
   double probs[];        // size K, normalized
   double mu;             // next-bar return (define: log-return or pips)
   double sigma;          // >= 0
   double q10, q50, q90;  // optional; q10 <= q50 <= q90
   double uncertainty;    // 0..1 (entropy or ensemble std, normalized)
};

class CPriceNN {
public:
   bool   Init(const string symbol, ENUM_TIMEFRAMES tf,
               const string weightsPath, const string scalerPath,
               const double tempScaleT);
   bool   IsReady() const;
   int    FeatureCount() const;   // expected N
   bool   Predict(const double &features[], const int n, NnOutputs &out);
   int    LastError() const;
   string LastErrorMsg() const;
};
```
Safe defaults on failure:
- `probs` = uniform (1/K), `uncertainty` = 1.0, `mu` = 0, `sigma` = large (clamped), quantiles unset or mirrored around 0.
- EA logs once per bar and falls back to rule-based signals; no trade or reduced size per policy.
#### B) Indicator-based API (iCustom)
Buffer mapping (example):
- 0: p_bull, 1: p_neutral, 2: p_bear
- 3: mu, 4: sigma
- 5: q10, 6: q50, 7: q90
- 8: uncertainty (0..1)
- 9: status code (>=0 OK; negative = error)
Usage rules:
- Read buffers only on new bar; verify all are not `EMPTY_VALUE`.
- Treat `status<0` or any invalid/NaN as failure; apply same safe defaults and skip/throttle.
- Cache outputs with their bar time; avoid duplicate reads within the same bar.
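A read-side sketch for the indicator-based API, reusing the `NnOutputs` struct from part A and assuming the buffer mapping above, with `g_nn_handle` created once in `OnInit` via `iCustom(_Symbol, tf, "PriceNN", ...)`:
```cpp
// Illustrative read of PriceNN buffers on a new bar (shift 1 = last completed bar).
bool ReadNnOutputs(const int g_nn_handle, NnOutputs &out)
{
   double v[1];
   double buf[10];
   for(int b=0; b<10; b++)
   {
      if(CopyBuffer(g_nn_handle, b, 1, 1, v) != 1) return(false);   // data not ready
      if(v[0] == EMPTY_VALUE)                      return(false);   // empty buffer value
      buf[b] = v[0];
   }
   if(buf[9] < 0.0) return(false);                 // status buffer signals an error
   ArrayResize(out.probs, 3);
   out.probs[0]=buf[0]; out.probs[1]=buf[1]; out.probs[2]=buf[2];
   out.mu=buf[3];  out.sigma=buf[4];
   out.q10=buf[5]; out.q50=buf[6]; out.q90=buf[7];
   out.uncertainty=buf[8];
   return(true);                                    // cache the result keyed by the bar time
}
```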
#### C) Multi-symbol/TF context & caching
- Maintain a per-(symbol, TF) cache of `NnOutputs` keyed by the last completed bar time.
- Invalidate cache on `OnInit`/`OnDeinit`/symbol change; re-load models.
- Persist model/scaler versions; reject mismatches with `NN_STALE_CONTEXT`.
#### D) Validation checklist (implementation-time)
- Feature length matches `FeatureCount()`.
- Inputs finite (no NaN/Inf) and within expected ranges after normalization.
- `probs` sum≈1 within tolerance; clamp and renormalize if needed.
- Monotonic quantiles; fix ordering if violated and flag `NN_NUMERIC_FAIL`.
- Temperature scaling applied consistently across live/test.
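A sketch covering the probability clamp/renormalize and quantile-ordering items from the checklist (tolerance values are illustrative; flagging `NN_NUMERIC_FAIL` is left to the caller):
```cpp
// Illustrative output sanitation: clamp/renormalize probs, enforce monotonic quantiles.
bool SanitizeNnOutputs(NnOutputs &out, const int k, const double eps=1e-6)
{
   double sum = 0.0;
   for(int i=0; i<k; i++)
   {
      if(!MathIsValidNumber(out.probs[i])) return(false);   // NaN/Inf: hard failure
      out.probs[i] = MathMax(eps, out.probs[i]);            // clamp away from 0
      sum += out.probs[i];
   }
   if(sum <= 0.0) return(false);
   for(int i=0; i<k; i++) out.probs[i] /= sum;              // renormalize to sum = 1
   if(out.q10 > out.q50 || out.q50 > out.q90)               // non-monotonic quantiles
   {
      double q[3]; q[0]=out.q10; q[1]=out.q50; q[2]=out.q90;
      ArraySort(q);                                         // restore ordering, flag upstream
      out.q10=q[0]; out.q50=q[1]; out.q90=q[2];
   }
   return(true);
}
```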
## 21. Glossary
- Debounce: Require consecutive confirmations to filter noise.
- WFO: Walk-Forward Optimization – rolling train/test segments.
- KL Divergence: Measure of distribution difference for calibration checks.
- Etc. (expand as needed).