# Multi-Layer Probabilistic EA — Design Plan (MQL5)
Author: Custom-Expert-Advisors
Status: Draft v3
Scope: MQL5-only (no Python)
## 1. Purpose & Scope
Design a robust, real-time, probabilistic Expert Advisor for MT5 that adapts to market regimes, filters noise using information theory, and sizes risk via Bayesian beliefs and fractional Kelly. The design prioritizes stability, latency control, and survivability in live trading environments.
## 2. High-Level Architecture
Layers and main modules:
```
┌─────────────────────────────────────┐
│ Data Processing Layer │
├─────────────────────────────────────┤
│ Information Theory Module │
│ • Mutual Information Calculator │
│ • Shannon Entropy Estimator │
│ • Signal Quality Assessor │
├─────────────────────────────────────┤
│ Bayesian Belief System │
│ • Prior/Posterior Management │
│ • Market Regime Probabilities │
│ • Dynamic Belief Updates │
├─────────────────────────────────────┤
│ Markov State Engine │
│ • HMM Regime Detection │
│ • State Transition Matrix │
│ • Regime-Adaptive Strategies │
├─────────────────────────────────────┤
│ Decision & Risk Management │
│ • Kelly Criterion Position Sizing │
│ • Entropy-Based Signal Filtering │
│ • Multi-Regime Trade Logic │
└─────────────────────────────────────┘
```
## 3. Requirements & Non-Goals
- MQL5 only; no external runtimes.
- Operate on new-bar events and periodic timers; avoid per-tick heavy compute.
- Multi-symbol capable; independent state per symbol/timeframe.
- Deterministic, debounced regime transitions; online updates only.
- Non-goals: ML model training on live terminal, heavy matrix libs, cloud dependencies.
## 4. Event Model & Data Flow
- OnInit: load configuration; warm up historical buffers; restore persisted state.
- OnTimer (e.g., 1 s): refresh rolling metrics; pre-compute next-bar signals.
- OnTick: gate by new bar for the target TF(s) (sketched at the end of this section); evaluate entries/exits; manage orders.
- OnDeinit: persist state (beliefs, HMM α, rolling stats) to files.
Data refresh policy:
- Batch CopyRates/CopyBuffer; pre-allocate arrays; update ring buffers incrementally.
- Warm-up: require N bars (≥500 per TF) before trading.
- If NN indicator is enabled, pull predictions via iCustom on new bars (price/regime probs, quantiles, uncertainty) and cache per symbol/TF.
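A minimal sketch of the new-bar gating referenced above, assuming a single target timeframe per symbol (a multi-symbol EA would key the stored bar time by symbol/TF):
```cpp
// New-bar gate for OnTick: heavy work runs only once per completed bar.
bool IsNewBar(const string symbol, const ENUM_TIMEFRAMES tf, datetime &last_bar_time)
{
   datetime t = iTime(symbol, tf, 0);               // open time of the current bar
   if(t == 0 || t == last_bar_time) return false;   // history not ready, or same bar
   last_bar_time = t;
   return true;
}

void OnTick()
{
   static datetime last_bar = 0;
   if(!IsNewBar(_Symbol, PERIOD_CURRENT, last_bar)) return;
   // ... evaluate entries/exits, update beliefs, manage orders ...
}
```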
## 5. Information Theory Module
Objectives:
- Rank features by Mutual Information (MI) with target (future returns/sign).
- Assess market entropy to gate trading and classify noise vs trend.
- Provide signal quality score to suppress unstable features.
Key techniques:
- Quantile binning (8–16 bins), Laplace smoothing, Miller–Madow bias correction.
- Rolling window MI with exponential decay re-ranking; cap MI change per update (≤20%).
- Shannon entropy on log-returns; use rolling baseline to compute z-score; require K-bar debounce for regime flips.
- Tie-break via Spearman |ρ| or distance correlation; drop features with high MI variance.
Contract (pseudocode):
```cpp
class CInformationTheory {
private:
   int    m_bins;
   double m_eps;
public:
   void Init(int bins=12, double eps=1e-12) { m_bins=bins; m_eps=eps; }
   // Mutual information with bias correction (Miller–Madow)
   double MutualInformation(const double &x[], const double &y[], int n);
   // Shannon entropy (natural log)
   double Entropy(const double &x[], int n);
   // Rolling z-entropy vs baseline (mean, std)
   double EntropyZScore(const double &entropy_hist[], int n, double &mean_out, double &std_out);
   // Quantile bin utility in [0..m_bins-1]
   int QuantileBin(const double &sorted[], int n, double v);
};
```
Operational rules:
- Minimum samples per bin ≥ 5 before trusting MI.
- Debounce regime classification: require K consecutive confirmations.
- Maintain ring buffers to avoid re-sorting per bar.
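A minimal sketch of the binned entropy estimator behind the gate above, assuming the inputs have already been quantile-binned; the Laplace smoothing and Miller–Madow correction follow the key techniques listed in this section:
```cpp
// Shannon entropy (natural log) of pre-binned data with add-one (Laplace) smoothing
// and Miller–Madow bias correction. bin_idx[] holds bin indices in [0..bins-1].
double BinnedEntropyMM(const int &bin_idx[], const int n, const int bins, const double eps=1e-12)
{
   if(n <= 0 || bins <= 1) return 0.0;
   double cnt[];
   ArrayResize(cnt, bins);
   ArrayInitialize(cnt, 1.0);                       // Laplace smoothing (add-one)
   for(int i=0; i<n; i++)
   {
      int b = bin_idx[i];
      if(b >= 0 && b < bins) cnt[b] += 1.0;
   }
   double total = n + bins;                         // smoothed sample count
   double H = 0.0;
   int used = 0;
   for(int b=0; b<bins; b++)
   {
      double p = cnt[b]/total;
      if(p > eps) H -= p*MathLog(p);
      if(cnt[b] > 1.0) used++;                      // bins with at least one real observation
   }
   H += (used - 1)/(2.0*n);                         // Miller–Madow: H + (m-1)/(2N)
   return H;
}
```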
## 6. Bayesian Belief System
Regimes: Strong Bull, Weak Bull, Neutral, Weak Bear, Strong Bear.
Approach:
- Dirichlet priors α0 from history; exponential forgetting λ (0.98–0.995).
- Likelihoods derived from calibrated indicator maps (logistic curves) that are frozen during live trading.
- Posterior smoothing (stickiness) to reduce whipsaws.
- Fusion with HMM (optional): compute a unified regime score as a weighted average of Bayesian posteriors and HMM filtered probabilities (e.g., 0.6·posterior + 0.4·hmm_prob); weights are configurable, and the regime count (3–7) can be chosen dynamically via data-driven clustering (e.g., k-means on historical states) calibrated per asset in WFO. A fusion sketch follows the HMM contract in Section 7.
Contract (pseudocode):
```cpp
enum Regime { REG_STRONG_BULL=0, REG_WEAK_BULL, REG_NEUTRAL, REG_WEAK_BEAR, REG_STRONG_BEAR, REG_COUNT };

class CBayesianEngine {
private:
   double m_alpha0[REG_COUNT];   // priors
   double m_post[REG_COUNT];     // posteriors
   double m_lambda;              // forgetting factor
   double m_stick;               // posterior smoothing
   double m_eps;
public:
   void Init(const double &priors[], double lambda=0.99, double stick=0.85, double eps=1e-9);
   // likelihoods in [0,1], one per regime
   void Update(const double &likelihoods[]) {
      double temp[REG_COUNT];
      for(int i=0; i<REG_COUNT; i++) {
         double prior = m_lambda*m_post[i] + (1.0-m_lambda)*m_alpha0[i];
         temp[i] = MathMax(m_eps, prior * MathMax(m_eps, likelihoods[i]));
      }
      double sum=0; for(int i=0; i<REG_COUNT; i++) sum += temp[i];
      for(int i=0; i<REG_COUNT; i++) {
         double p = temp[i]/sum;
         m_post[i] = m_stick*m_post[i] + (1.0-m_stick)*p;   // posterior smoothing (stickiness)
      }
   }
   int Dominant() const {
      int k=0; double m=m_post[0];
      for(int i=1; i<REG_COUNT; i++) if(m_post[i]>m) { m=m_post[i]; k=i; }
      return k;
   }
   // 1 - normalized posterior entropy: 0 = uniform, 1 = fully concentrated
   double Confidence() const {
      double H=0, Hmax=MathLog(REG_COUNT);
      for(int i=0; i<REG_COUNT; i++) H -= (m_post[i]>0 ? m_post[i]*MathLog(m_post[i]) : 0.0);
      return 1.0 - H/Hmax;
   }
   double Posterior(int i) const { return m_post[i]; }   // return by value (MQL5 has no reference returns)
};
```
## 7. Markov State Engine (HMM)
Approach:
- Online forward-filtering α (not Viterbi) for real-time probabilities.
- Sticky transitions: self-transition bias κ (boost diagonal of A) and min-hold bars.
- Degeneracy guard: floor probs with ε and renormalize.
- Integration with the Bayesian layer (optional): use the fused score for the final regime decision (see the sketch after the contract below); support dynamic state counts via clustering so the engine adapts to the data instead of assuming a fixed five states.
Contract (pseudocode):
```cpp
class CHmm {
private:
   int    m_N;           // number of states (<= 5 with these static buffers)
   double m_A[5][5];     // transition matrix (rows sum to 1)
   double m_alpha[5];    // filtered state probabilities
   double m_eps;
public:
   bool Init(int states, const double &A[][5], const double &init_alpha[]);
   // b[i] = emission likelihood per state for the latest observation
   void Step(const double &b[]) {
      double next[5] = {0};
      for(int i=0; i<m_N; i++) {
         double s=0; for(int j=0; j<m_N; j++) s += m_alpha[j]*m_A[j][i];
         next[i] = MathMax(m_eps, s)*MathMax(m_eps, b[i]);
      }
      double sum=0; for(int i=0; i<m_N; i++) sum += next[i];
      for(int i=0; i<m_N; i++) m_alpha[i] = next[i]/sum;   // renormalize; ε floor guards degeneracy
   }
   int State() const {
      int k=0; double m=m_alpha[0];
      for(int i=1; i<m_N; i++) if(m_alpha[i]>m) { m=m_alpha[i]; k=i; }
      return k;
   }
   double Prob(int i) const { return m_alpha[i]; }
};
```
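A minimal sketch of the optional Bayesian/HMM fusion described in Sections 6 and 7, assuming both engines use the same regime count; the weighting is illustrative and would be calibrated per asset in WFO:
```cpp
// Fused regime score: weighted blend of Bayesian posteriors and HMM filtered probabilities.
void FuseRegimeScores(const CBayesianEngine &bayes, const CHmm &hmm,
                      const double w_post, double &fused[])
{
   ArrayResize(fused, REG_COUNT);
   double sum = 0.0;
   for(int i=0; i<REG_COUNT; i++)
   {
      fused[i] = w_post*bayes.Posterior(i) + (1.0 - w_post)*hmm.Prob(i);
      sum += fused[i];
   }
   for(int i=0; i<REG_COUNT; i++) fused[i] /= MathMax(1e-12, sum);   // renormalize
}
```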
## 8. Decision & Risk Management
Entry gate (all must pass):
1) Market tradable: spread < X, liquidity ok, session filter, no red news within N minutes.
2) Entropy low and stable (z-score below threshold, debounced K bars).
3) HMM regime posterior > p_min and Bayesian confidence ≥ c_min.
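A minimal sketch of this gate; `SpreadOk()` and `NewsBlackoutActive()` are hypothetical helpers, and the thresholds map to the inputs in Section 14:
```cpp
// Entry gate: all checks must pass before sizing is even considered.
// SpreadOk() and NewsBlackoutActive() are placeholder helpers, not library calls.
bool EntryGatePassed(const double entropy_z, const double entropy_z_max, const bool entropy_debounced,
                     const double hmm_post, const double p_min,
                     const double bayes_conf, const double c_min)
{
   if(!SpreadOk(_Symbol) || NewsBlackoutActive(_Symbol))  return false;  // 1) market tradable
   if(entropy_z > entropy_z_max || !entropy_debounced)    return false;  // 2) entropy low and stable
   if(hmm_post < p_min || bayes_conf < c_min)             return false;  // 3) regime conviction
   return true;
}
```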
Position sizing (fractional Kelly with uncertainty throttle and drawdown guard):
```cpp
double CalcLot(double equity, double risk_per_trade, double win_p, double rr,
               double conf, double dd_factor, double min_lot, double max_lot)
{
   // Kelly fraction for a binary bet with payoff ratio rr, capped at 0.5 (fractional Kelly)
   double kelly = win_p - (1.0 - win_p)/MathMax(1e-6, rr);
   kelly        = MathMax(0.0, MathMin(0.5, kelly));
   double throttle = conf * dd_factor;                        // 0..1
   double risk_amt = equity * risk_per_trade * kelly * throttle;

   double tick_val  = SymbolInfoDouble(_Symbol, SYMBOL_TRADE_TICK_VALUE);
   double tick_size = SymbolInfoDouble(_Symbol, SYMBOL_TRADE_TICK_SIZE);
   double point     = SymbolInfoDouble(_Symbol, SYMBOL_POINT);
   double sl_dist   = MathMax(10*point, CurrentSLPoints());   // planned SL distance in price units (placeholder helper)
   // Monetary loss per 1.0 lot if the SL is hit (tick value already reflects contract size)
   double loss_per_lot = (sl_dist/MathMax(1e-10, tick_size))*tick_val;
   double lots = risk_amt/MathMax(1e-6, loss_per_lot);

   // Clamp to configured limits and snap to the broker's volume step
   double step = MathMax(1e-8, SymbolInfoDouble(_Symbol, SYMBOL_VOLUME_STEP));
   lots = MathMax(min_lot, MathMin(max_lot, lots));
   lots = MathFloor(lots/step)*step;
   return MathMax(min_lot, lots);
}
```
Global controls:
- Max risk per trade: 0.25–0.5% of equity.
- Daily loss stop and pause; weekly soft stop.
- One trade per bar per symbol per strategy; cooldown after loss clusters (e.g., 2 losses → wait M bars).
- Immediate SL/TP on order open; pre-validate against StopLevel/FreezeLevel.
- Correlation-aware: For multi-symbol, adjust via portfolio Kelly using ALGLIB covariance matrix; cap total exposure.
## 9. Safeguards & Circuit Breakers
- Kill switches: posterior confidence below floor for M bars; entropy z-score persistently high; daily loss/slippage cap exceeded.
- Black-swan guards: Detect extreme volatility spikes (e.g., entropy z > 5) and pause trading; fall back to a minimal-risk mode.
- State persistence: Save/load with redundancy (e.g., dual files, checksums); if file I/O fails, reset to safe defaults and log/alert.
- Execution: CTrade with normalized prices; retry with reduced volume on insufficient margin; handle requotes and partial fills.
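A minimal sketch of the checksum-guarded state save described above (the dual-file redundancy would simply write a second copy); the path and CSV layout follow Section 15 and are illustrative:
```cpp
// Save posteriors and HMM alpha to Files\ProbEA\state\ with a crude checksum row.
bool SaveState(const string symbol, const ENUM_TIMEFRAMES tf,
               const double &post[], const double &alpha[])
{
   FolderCreate("ProbEA");
   FolderCreate("ProbEA\\state");
   string fname = "ProbEA\\state\\" + symbol + "_" + EnumToString(tf) + "_state.csv";
   int h = FileOpen(fname, FILE_WRITE|FILE_CSV|FILE_ANSI, ',');
   if(h == INVALID_HANDLE) return false;              // caller falls back to safe defaults
   double sum = 0.0;
   for(int i=0; i<ArraySize(post); i++)  { FileWrite(h, "post",  i, DoubleToString(post[i], 8));  sum += post[i];  }
   for(int i=0; i<ArraySize(alpha); i++) { FileWrite(h, "alpha", i, DoubleToString(alpha[i], 8)); sum += alpha[i]; }
   FileWrite(h, "checksum", DoubleToString(sum, 8));  // verified on load; mismatch => reset to defaults
   FileClose(h);
   return true;
}
```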
## 10. Performance & Stability
- Compute on new bars and OnTimer, never per tick for heavy tasks.
- Static arrays and ring buffers; avoid ArrayResize in hot paths.
- Batch CopyRates/CopyBuffer; align data timestamps.
- Quantile binning with rolling rank approximations; cap per-update MI change.
- Log telemetry only on state changes in release; full logs in tester.
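A sketch of the ring-buffer idea for rolling metrics (sized once at init, no resizing in hot paths):
```cpp
// Fixed-size ring buffer with a running sum for O(1) rolling means.
class CRingBuffer
{
private:
   double m_buf[];
   int    m_size, m_head, m_count;
   double m_sum;
public:
   void Init(const int size)
   {
      m_size=size; m_head=0; m_count=0; m_sum=0.0;
      ArrayResize(m_buf, size);                       // done once, outside hot paths
      ArrayInitialize(m_buf, 0.0);
   }
   void Push(const double v)
   {
      if(m_count == m_size) m_sum -= m_buf[m_head];   // overwrite the oldest sample
      else                  m_count++;
      m_buf[m_head] = v;
      m_sum += v;
      m_head = (m_head + 1) % m_size;
   }
   double Mean()  const { return (m_count > 0 ? m_sum/m_count : 0.0); }
   int    Count() const { return m_count; }
};
```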
## 11. Strategy Wiring by Regime
- Bull: breakout/momentum, ATR trailing, pyramiding only when posterior>thr and entropy low.
- Bear: mean reversion toward VWAP/EMA bands; wider stops; smaller size.
- Neutral: range trades with fades; or stand aside if costs dominate.
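A small routing sketch tying the dominant regime to one of the modes above; the mode labels are illustrative and rely on the `Regime` enum from Section 6:
```cpp
// Map the dominant regime to a strategy mode; stand aside on weak conviction.
enum StrategyMode { MODE_MOMENTUM, MODE_MEAN_REVERT, MODE_RANGE, MODE_FLAT };

StrategyMode SelectMode(const int regime, const double confidence, const double c_min)
{
   if(confidence < c_min) return MODE_FLAT;
   switch(regime)
   {
      case REG_STRONG_BULL:
      case REG_WEAK_BULL:   return MODE_MOMENTUM;     // breakout/momentum, ATR trailing
      case REG_STRONG_BEAR:
      case REG_WEAK_BEAR:   return MODE_MEAN_REVERT;  // fades toward VWAP/EMA bands, smaller size
      default:              return MODE_RANGE;        // neutral: range trades or stand aside
   }
}
```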
## 12. Feature Selection & Evidence Mapping
- MI-based feature ranking; select top 3–5 informative indicators.
- Exponential decay on historical MI to prevent flapping selections.
- Likelihood mapping: indicator → probability via calibrated logistic functions, frozen during live; refreshed only on scheduled walk-forward.
- Optional: include NN embeddings (hidden activations) as additional features and re-rank them with MI alongside the classic indicators.
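A minimal sketch of the frozen logistic evidence map described above; the curve parameters come from WFO calibration, with one curve per (indicator, regime) pair feeding the likelihood vector consumed by `CBayesianEngine::Update()`:
```cpp
// Map a normalized indicator reading (e.g., a z-score) to a likelihood in (0,1)
// via a calibrated logistic curve; a and b are frozen during live trading.
double LogisticLikelihood(const double z, const double a, const double b)
{
   return 1.0/(1.0 + MathExp(-(a*z + b)));
}
```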
## 13. Testing & Validation
- Backtests with walk-forward (anchored OOS segments); lock evidence maps per segment.
- Monte Carlo: shuffle trade sequences; randomize spread/slippage to stress execution.
- OOS paper/live-sim: broker costs and realistic rejection rates.
- Numeric unit checks: MI stability, entropy z baseline, posterior normalization.
- Robustness: Adversarial inputs (data gaps, noise injection); sensitivity analysis for key params (λ, stickiness).
- Calibration: KL divergence for drift; min 20% OOS data per WFO segment.
- Edge cases: Simulate black-swan events (e.g., flash crashes), correlated symbol failures, broker quirks (e.g., variable StopLevels); test file I/O reliability with mock failures.
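A sketch of the trade-sequence Monte Carlo: shuffle the realized trade P/L and re-measure max drawdown across many permutations to estimate its distribution:
```cpp
// Max drawdown of a P/L sequence (in account currency).
double MaxDrawdown(const double &pnl[])
{
   double eq=0.0, peak=0.0, dd=0.0;
   for(int i=0; i<ArraySize(pnl); i++)
   {
      eq += pnl[i];
      if(eq > peak)      peak = eq;
      if(peak - eq > dd) dd = peak - eq;
   }
   return dd;
}

// In-place Fisher–Yates shuffle (MathRand is sufficient for stress testing).
void ShuffleInPlace(double &a[])
{
   for(int i=ArraySize(a)-1; i>0; i--)
   {
      int j = MathRand() % (i + 1);
      double t=a[i]; a[i]=a[j]; a[j]=t;
   }
}
```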
## 14. Configuration (Inputs)
- Symbols/TFs; warm-up bars; timer interval.
- MI bins (8–16), window length, K debounce bars, MI change cap.
- Dirichlet α0, λ (forgetting), stickiness, ε floor.
- HMM transitions A (diagonal bias κ), min-hold bars.
- Risk: base risk %, Kelly fraction cap, min/max lot, drawdown throttle.
- Gates: spread limit, p_min, c_min, entropy z-threshold.
- Session and news blackout windows.
- NN (optional): enable_nn, nn_mode (indicator/include), ensemble_size, temp_scale_T, uncertainty_max, model_path (Files/ProbEA/models/), scaler_path, outputs (probs, mu_sigma, quantiles), min_confidence.
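A sketch of how a subset of these inputs might be declared (names and defaults are illustrative):
```cpp
// EA inputs (illustrative subset of Section 14).
input int    InpMiBins        = 12;     // MI quantile bins (8–16)
input int    InpDebounceBars  = 3;      // K-bar debounce for regime flips
input double InpLambdaForget  = 0.99;   // Bayesian exponential forgetting λ
input double InpStickiness    = 0.85;   // posterior smoothing
input double InpBaseRiskPct   = 0.25;   // base risk per trade, % of equity
input double InpKellyCap      = 0.5;    // fractional Kelly cap
input double InpEntropyZMax   = 1.0;    // entropy z-score gate
input double InpPMin          = 0.60;   // minimum HMM regime posterior
input double InpCMin          = 0.50;   // minimum Bayesian confidence
input bool   InpEnableNN      = false;  // optional NN evidence layer
```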
## 15. Directory Layout (proposed)
- `Include/Custom Include/ProbabilisticEA/CInformationTheory.mqh` — MI/Entropy utils
- `Include/Custom Include/ProbabilisticEA/CBayesianEngine.mqh` — Bayesian updater
- `Include/Custom Include/ProbabilisticEA/CHmm.mqh` — HMM online filter
- `Include/Custom Include/ProbabilisticEA/CRisk.mqh` — sizing, guards, kill switches
- `Include/Custom Include/ProbabilisticEA/CPriceNN.mqh` — tiny MLP interface (optional)
- `Experts/Custom Expert Advisors/ProbabilisticEA.mq5` — EA wiring and orchestration
- `Files/ProbEA/state/` — persisted JSON/CSV states as `[symbol]_[tf]_state.csv` with versioned backups.
- `Files/ProbEA/models/` — NN weights, scalers, config per symbol/TF
- `Indicators/PriceNN.mq5` — NN indicator exposing probs/quantiles/uncertainty via buffers (optional)
## 16. Development Plan (Phases)
Phase 1 — Core Infrastructure:
- Build information theory library; MI/entropy ring buffers.
- Implement Bayesian updating; HMM forward filter.
- State persistence; unit tests in tester.
- Prioritize minimal viable modules; make advanced features (e.g., NN, fusion) optional add-ons.
Phase 2 — Strategy Logic:
- Regime-specific rules and gates; entropy filtering.
- Fractional Kelly sizing with confidence and drawdown throttles.
- Visual panel for beliefs and regime status.
- Simplify: Use configurable flags to toggle complexity (e.g., disable fusion for initial testing).
Phase 3 — Optimization & Testing:
- Walk-forward analysis; parameter sweeps; genetic optimization (safeguarded).
- Stress tests with slippage/spread; Monte Carlo trade order shuffles.
- Live-sim dry run with telemetry review.
## 17. KPIs & Monitoring
- Trade expectancy by regime; hit rate vs predicted win_p calibration.
- Posterior confidence distribution; regime dwell times vs design.
- Slippage and spread drift monitors; rejection rates.
- Drawdown containment: daily/weekly adherence to limits.
- NN calibration (if enabled): Brier score, Expected Calibration Error (ECE), predictive entropy distribution, ensemble disagreement.
## 17.5 Monitoring and Logging
- Real-time dashboards: Use ChartObjects for regime visualization, belief charts.
- Structured logging: CSV exports for beliefs/entropy on changes; anomaly alerts (e.g., high slippage variance).
## 18. Risks & Mitigations
- Computational load: incremental updates, batching, static buffers.
- Belief instability: stickiness, debounce, floors, min-hold.
- Parameter drift: scheduled WFO; ensemble/bootstrapped validation.
- Execution variance: pre-flight checks, retries, smaller fallback size.
- NN overconfidence: temperature scaling, entropy gate, ensemble averaging, fallback to rule-based likelihoods if NN unavailable.
- Overcomplexity: Mitigate by modular design with toggles; phased rollout starting with core (no NN/fusion); regular code audits for simplicity.
- Rigid assumptions: Use dynamic regimes and WFO to adapt; fall back to a 3-state model if clustering fails.
- Implementation risks: Expanded testing for edges; robust I/O with backups; broker-agnostic checks (e.g., dynamic StopLevel queries).
## 19. Notes
- Favor clarity and debouncing over hyper-reactivity; prioritize survival.
- Keep live config conservative; only widen risk after sustained calibration stability.
---
## Appendix A — Original Class Sketches (for reference)
```cpp
class CInformationTheory {
   double CalculateMutualInformation(double &feature[], double &target[], int period);
   double CalculateEntropy(double &returns[], int period);
   bool   IsLowEntropyRegime(double entropy_threshold);
};

class CBayesianEngine {
   double priors[5];
   double posteriors[5];
   double likelihoods[5];
   void   UpdateBeliefs(double macd_signal, double rsi_value, double volatility);
   int    GetDominantRegime();
   double GetRegimeConfidence();
};
```
## 20. Neural Network Integration (Optional)
### 20.1 Roles in the Architecture
- Evidence mapping: feature vector → regime probabilities via softmax; used as likelihoods for `CBayesianEngine.Update()` and emissions for `CHmm.Step()`.
- Distributional forecast: next-bar return mean/variance or quantiles (q10/q50/q90) for win-prob and SL/TP shaping.
- Meta-labeling: probability of success for a gated trade; feeds Kelly p.
### 20.2 MQL5 Design Patterns
- Indicator engine (recommended): `Indicators/PriceNN.mq5` exposes buffers for probs, μ/σ or quantiles, and uncertainty. EA reads via `iCustom` on new bars.
- Embedded include: `CPriceNN.mqh` tiny MLP run on `OnTimer`, cached per symbol/TF.
- Ensemble: 3–5 small NNs averaged; use std as uncertainty.
### 20.3 Model Spec (MQL5 Friendly)
- Small MLP: inputs 16–32, hidden 16–32, outputs 3–6; sigmoid/tanh activations.
- Live: prediction only; training occurs in Tester/WFO and weights serialized to `Files/ProbEA/models/`.
- Temperature scaling T for probability calibration.
- ALGLIB backend: use the ALGLIB port bundled with the MQL5 Standard Library (`<Math\Alglib\alglib.mqh>`): MLP creation via `CAlglib::MLPCreate1`, Tester/WFO training via `CAlglib::MLPTrainLM`, prediction via `CAlglib::MLPProcess`. Pros: native MQL5, efficient matrix and solver routines, no DLLs. Trained weights are serialized to `Files/ProbEA/models/` for live loading.
### 20.4 Features & Normalization
- Inputs: returns (r1,r2,r3,r5,r10), volatility (ATR_z, realized_vol_z), momentum (EMA slopes, MACD_z, RSI_z), microstructure (spread_z, tick_vol_z), regime cues (entropy_z, Hurst proxy).
- Normalize with rolling mean/std or median/IQR; store scalers with the model.
### 20.5 Calibration & Uncertainty
- Temperature-scaled softmax; validate T per WFO segment.
- Metrics: Brier score, ECE. Uncertainty via predictive entropy and ensemble std.
- Use uncertainty to throttle or skip trades.
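A minimal sketch of the temperature-scaled softmax (T is fitted per WFO segment; larger T flattens the distribution):
```cpp
// Temperature-scaled softmax over raw NN logits.
void SoftmaxT(const double &logits[], const double T, double &probs[])
{
   int n = ArraySize(logits);
   ArrayResize(probs, n);
   double mx = logits[0];
   for(int i=1; i<n; i++) if(logits[i] > mx) mx = logits[i];   // subtract max for numerical stability
   double sum = 0.0;
   for(int i=0; i<n; i++)
   {
      probs[i] = MathExp((logits[i] - mx)/MathMax(1e-6, T));
      sum += probs[i];
   }
   for(int i=0; i<n; i++) probs[i] /= sum;
}
```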
### 20.6 Persistence & Scheduling
- Files: `weights.[symbol].[tf].csv`, `scaler.[symbol].[tf].csv`, `config.json` (T, thresholds) under `Files/ProbEA/models/`.
- OnInit: load; if missing, disable NN gracefully and fall back to rule-based likelihoods.
- OnDeinit: persist last states; robust to restarts.
### 20.7 EA Flow Using NN Outputs
1) On new bar, build features and run Predict.
2) Gate by entropy z (debounced) and NN uncertainty.
3) Bayesian.Update() with NN-derived likelihoods; HMM.Step() with emissions.
4) If posterior > p_min and Confidence ≥ c_min, compute p_win from μ/σ or meta-prob.
5) CalcLot with confidence/drawdown throttles; place order with immediate SL/TP.
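A compact sketch of steps 1–5; `BuildFeatures()`, `MetaWinProbability()`, `CurrentDDFactor()` and the `Inp*` thresholds are illustrative placeholders, and the NN is assumed to emit one probability per regime:
```cpp
// Per-bar orchestration of the NN-assisted flow (steps 1–5 above).
void ProcessNewBar(CPriceNN &nn, CBayesianEngine &bayes, CHmm &hmm)
{
   double feats[];
   BuildFeatures(feats);                                    // 1) feature vector (placeholder)
   NnOutputs out;
   if(!nn.Predict(feats, ArraySize(feats), out)) return;    //    safe default: skip the bar
   if(out.uncertainty > InpUncertaintyMax) return;          // 2) uncertainty gate (entropy-z gate omitted for brevity)

   bayes.Update(out.probs);                                 // 3) NN probs as likelihoods...
   hmm.Step(out.probs);                                     //    ...and as HMM emissions

   int reg = bayes.Dominant();
   if(hmm.Prob(reg) < InpPMin || bayes.Confidence() < InpCMin) return;   // 4) conviction gates

   double p_win = MetaWinProbability(out);                  //    from μ/σ or meta-label prob (placeholder)
   double lots  = CalcLot(AccountInfoDouble(ACCOUNT_EQUITY), InpBaseRiskPct/100.0,
                          p_win, InpRewardRisk, bayes.Confidence(), CurrentDDFactor(),
                          InpMinLot, InpMaxLot);            // 5) sizing
   // ... place the order with immediate SL/TP via CTrade ...
}
```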
### 20.8 Training vs Inference Separation
- In Strategy Tester/WFO, train models and export weights/scalers/config to `Files/ProbEA/models/`.
- In live/sim, load and Predict only; no backprop.
- Prefer outputs as classification probabilities and optional return quantiles.
- Apply temperature scaling and (optional) small ensemble; expose uncertainty via buffers or accessor functions.
### 20.9 Guardrails
- NN outputs must pass existing gates (entropy, spread, session, HMM posterior).
- Never scale up risk under high entropy or high ensemble disagreement; only reduce or skip.
### 20.10 Interfaces (API Contracts)
This section defines the minimal, testable contracts between the EA and the NN component so data flows are predictable and failures are safe.
#### A) Include-based API (compiled MLP)
Inputs/Outputs and behavior:
- Inputs: `features[]` of length N (finite, normalized), symbol/timeframe context set at init.
- Outputs: regime probabilities `probs[K]` (sum=1, bounded by ε), distribution metrics (`mu`, `sigma`), optional quantiles (`q10`, `q50`, `q90`), and `uncertainty` in [0..1].
- Timing: call on new bar or via OnTimer; cache per (symbol, TF, bar time).
- Failure: return false and set last error; EA uses safe defaults and skips or reduces risk.
Sketch (MQL5-style pseudocode):
```cpp
enum NnError {
   NN_OK=0,
   NN_MODEL_NOT_LOADED=1,
   NN_FEATURE_SIZE_MISMATCH=2,
   NN_INVALID_INPUT=3,
   NN_STALE_CONTEXT=4,
   NN_NUMERIC_FAIL=5
};

struct NnOutputs {
   double probs[];           // size K, normalized
   double mu;                // next-bar return (define: log-return or pips)
   double sigma;             // >= 0
   double q10, q50, q90;     // optional; q10 <= q50 <= q90
   double uncertainty;       // 0..1 (entropy or ensemble std, normalized)
};

class CPriceNN {
public:
   bool   Init(const string symbol, ENUM_TIMEFRAMES tf,
               const string weightsPath, const string scalerPath,
               const double tempScaleT);
   bool   IsReady() const;
   int    FeatureCount() const;   // expected N
   bool   Predict(const double &features[], const int n, NnOutputs &out);
   int    LastError() const;
   string LastErrorMsg() const;
};
```
Safe defaults on failure:
- `probs` = uniform (1/K), `uncertainty` = 1.0, `mu` = 0, `sigma` = large (clamped), quantiles unset or mirrored around 0.
- EA logs once per bar and falls back to rule-based signals; no trade or reduced size per policy.
#### B) Indicator-based API (iCustom)
Buffer mapping (example):
- 0: p_bull, 1: p_neutral, 2: p_bear
- 3: mu, 4: sigma
- 5: q10, 6: q50, 7: q90
- 8: uncertainty (0..1)
- 9: status code (>=0 OK; negative = error)
Usage rules:
- Read buffers only on new bar; verify all are not `EMPTY_VALUE`.
- Treat `status<0` or any invalid/NaN as failure; apply same safe defaults and skip/throttle.
- Cache outputs with their bar time; avoid duplicate reads within the same bar.
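A minimal read-side sketch for the indicator API above, assuming the example buffer mapping and an indicator named `PriceNN` (single symbol/TF handle caching for brevity):
```cpp
// Read the PriceNN buffers for the last completed bar and fill NnOutputs.
bool ReadNnOutputs(const string symbol, const ENUM_TIMEFRAMES tf, NnOutputs &out)
{
   static int handle = INVALID_HANDLE;
   if(handle == INVALID_HANDLE)
      handle = iCustom(symbol, tf, "PriceNN");
   if(handle == INVALID_HANDLE) return false;

   double vals[10], buf[1];
   for(int b=0; b<10; b++)
   {
      if(CopyBuffer(handle, b, 1, 1, buf) != 1) return false;   // shift 1 = last completed bar
      if(buf[0] == EMPTY_VALUE) return false;
      vals[b] = buf[0];
   }
   if(vals[9] < 0) return false;                                // status buffer signals an error

   ArrayResize(out.probs, 3);
   out.probs[0]=vals[0]; out.probs[1]=vals[1]; out.probs[2]=vals[2];
   out.mu=vals[3];  out.sigma=vals[4];
   out.q10=vals[5]; out.q50=vals[6]; out.q90=vals[7];
   out.uncertainty=vals[8];
   return true;
}
```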
#### C) Multi-symbol/TF context & caching
- Maintain a per-(symbol, TF) cache of `NnOutputs` keyed by the last completed bar time.
- Invalidate cache on `OnInit`/`OnDeinit`/symbol change; re-load models.
- Persist model/scaler versions; reject mismatches with `NN_STALE_CONTEXT`.
#### D) Validation checklist (implementation-time)
- Feature length matches `FeatureCount()`.
- Inputs finite (no NaN/Inf) and within expected ranges after normalization.
- `probs` sum≈1 within tolerance; clamp and renormalize if needed.
- Monotonic quantiles; fix ordering if violated and flag `NN_NUMERIC_FAIL`.
- Temperature scaling applied consistently across live/test.
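A sketch of the checklist as code, reusing the `NnError` codes and `NnOutputs` struct from the include-based API:
```cpp
// Clamp/renormalize probabilities, enforce monotone quantiles, flag numeric failures.
int ValidateOutputs(NnOutputs &out, const double eps=1e-9)
{
   int n = ArraySize(out.probs);
   if(n <= 0) return NN_NUMERIC_FAIL;
   double sum = 0.0;
   for(int i=0; i<n; i++)
   {
      if(!MathIsValidNumber(out.probs[i])) return NN_NUMERIC_FAIL;   // NaN/Inf check
      out.probs[i] = MathMax(eps, MathMin(1.0, out.probs[i]));
      sum += out.probs[i];
   }
   for(int i=0; i<n; i++) out.probs[i] /= sum;                       // renormalize to sum = 1

   if(out.q10 > out.q50 || out.q50 > out.q90)                        // monotone quantiles
   {
      double lo = MathMin(out.q10, MathMin(out.q50, out.q90));
      double hi = MathMax(out.q10, MathMax(out.q50, out.q90));
      out.q10 = lo; out.q90 = hi;
      out.q50 = MathMax(lo, MathMin(hi, out.q50));
      return NN_NUMERIC_FAIL;                                        // ordering fixed, but flagged
   }
   return NN_OK;
}
```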
## 21. Glossary
- Debounce: Require consecutive confirmations to filter noise.
- WFO: Walk-Forward Optimization – rolling train/test segments.
- KL Divergence: Measure of distribution difference for calibration checks.
- Etc. (expand as needed).