LLM Council Expert Trader
- Experts
- Cedric Olivier Kusiele Some
- Version: 10.0
- Updated: February 22, 2026
- Activations: 5
LLM Council Expert Trader™ – AI-Powered Multi-Agent Trading Expert Advisor for MT5
Delegate your trading decisions to a next-generation AI Council of Expert Traders designed for disciplined, semi-passive trading based on Smart Money Concepts (SMC) and Inner Circle Trader (ICT) methodology.
LLM Council Expert Trader is an advanced Expert Advisor (EA) that replaces static indicators with real-time reasoning powered by Large Language Models such as Claude, GPT, Qwen — and now custom-adapted LLMs running locally on your own machine. It operates like an institutional trading desk, where multiple AI agents collaborate to validate bias, structure, risk, and execution before any trade is placed.
🆕 NEW – LM Studio Local Inference Support (Run AI Locally — No API Costs!)
The latest version introduces full LM Studio integration, allowing you to run the entire AI council on your own hardware using any GGUF-compatible open-source model — completely offline and at zero API cost. Merge your LoRA adapters with the base model using GGUF Fusion Pro™ to run your fine-tuned model in LM Studio.
- ✔ Run Llama 3.1, Qwen 2.5, Mistral, DeepSeek-R1 distills and more — locally
- ✔ Zero API costs for local inference — own your AI, own your data
- ✔ Full per-agent model routing — assign different local models per agent (HTF, Structure, Strategy, Execution)
- ✔ Seamless OpenAI-compatible API — drop-in replacement, no prompt changes needed
- ✔ Works alongside OpenRouter and Together.ai in hybrid routing mode
- ✔ Supports optional API key authentication when LM Studio auth is enabled
- ✔ 3-retry exponential backoff for resilient local server connections
- ✔ Cost tracking logs local calls as $0.00 to keep optimizer metrics accurate
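To make "OpenAI-compatible" concrete, the sketch below shows what a client request to LM Studio's local server looks like: a standard Chat Completions payload, a 3-attempt exponential backoff, and a $0.00 cost log for local calls. The function names and the 1s/2s/4s schedule are illustrative assumptions, not the EA's actual MQL5 code.

```python
import json
import time
import urllib.request

LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"  # LM Studio default endpoint

def backoff_delays(retries=3, base=1.0):
    """Exponential backoff schedule between failed attempts: 1s, 2s, 4s (assumed)."""
    return [base * (2 ** i) for i in range(retries)]

def build_chat_request(model, system_prompt, user_prompt):
    """OpenAI-style chat payload; LM Studio accepts the same schema unchanged."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.2,
    }

def call_lmstudio(payload, url=LMSTUDIO_URL, api_key=None, retries=3):
    """POST the payload, retrying with exponential backoff on connection errors."""
    headers = {"Content-Type": "application/json"}
    if api_key:  # only needed when LM Studio authentication is enabled
        headers["Authorization"] = "Bearer " + api_key
    for attempt, delay in enumerate(backoff_delays(retries)):
        try:
            req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                                         headers=headers)
            with urllib.request.urlopen(req, timeout=120) as resp:
                reply = json.loads(resp.read())
            print("cost: $0.00 (local inference)")  # local calls logged as zero cost
            return reply["choices"][0]["message"]["content"]
        except OSError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)
```

Because the endpoint is a drop-in replacement, the same payload can be sent to Together.ai or OpenRouter by changing only the URL and API key.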
Recommended Local Models for Trading Analysis:
- 🥇 Qwen2.5-14B-Instruct-GGUF – Best balance of reasoning quality and VRAM usage
- ⚡ Meta-Llama-3.1-8B-Instruct-GGUF – Fastest inference, low VRAM (~8GB)
- 🏆 Qwen2.5-72B-Instruct-GGUF – Highest quality, requires ~48GB VRAM
- 🧠 DeepSeek-R1-Distill-Qwen-14B – Excellent structured output for execution agent
🆕 NEW – Hybrid LLM Routing (LM Studio + Together.ai + OpenRouter)
The latest version introduces intelligent three-way hybrid routing, allowing the EA to dynamically route AI requests between LM Studio, Together.ai, and OpenRouter — simultaneously, per agent — depending on availability, pricing, performance, or trader configuration.
- ✔ Automatically route AI requests between LM Studio, Together.ai and OpenRouter
- ✔ Priority order: LM Studio → Together.ai → OpenRouter (fully configurable)
- ✔ Optimize cost efficiency, latency, and reliability
- ✔ Access a broader ecosystem of AI models — local and cloud
- ✔ Manual provider priority or fallback configuration
- ✔ Increased execution continuity during API outages
- ✔ Run heavy models locally for HTF/Execution and fast cloud models for Structure/Strategy
This upgrade provides greater flexibility, redundancy, and cost optimization for AI-driven trading operations.
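A minimal sketch of the priority-with-fallback routing described above. Provider names, endpoints, and the reachability check are assumptions for illustration; the EA's actual input names differ.

```python
# Hypothetical provider table for the three-way hybrid routing sketch.
PROVIDERS = {
    "lmstudio":   "http://localhost:1234/v1/chat/completions",
    "together":   "https://api.together.xyz/v1/chat/completions",
    "openrouter": "https://openrouter.ai/api/v1/chat/completions",
}

# Default priority order from the listing: LM Studio -> Together.ai -> OpenRouter.
DEFAULT_PRIORITY = ["lmstudio", "together", "openrouter"]

def route_request(reachable, priority=None):
    """Return (provider, url) for the first reachable provider in priority order,
    falling through to the next provider during an outage."""
    for name in (priority or DEFAULT_PRIORITY):
        if name in reachable:
            return name, PROVIDERS[name]
    raise RuntimeError("no LLM provider reachable")
```

With a manual priority override, a trader could prefer the cloud for one agent while keeping local inference as the fallback.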
🆕 NEW – Custom LLM Agents (LoRA / Adapter / Fine-Tune Support)
The latest version introduces advanced model customization, allowing traders and developers to deploy their own LoRA adapters, fine-tuned models, or custom structure agents.
- ✔ Use custom LoRA adapters for SMC/ICT structure recognition
- ✔ Plug in fine-tuned LLMs trained on your own market data
- ✔ Assign different models per agent (HTF, Structure, Strategy, Execution)
- ✔ Ideal for proprietary research, prop firms, and quant traders
This makes LLM Council Expert Trader not just an EA, but a research-grade AI trading framework.
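Per-agent model assignment can be pictured as a simple routing table: each council agent gets its own (provider, model) pair, mixing local LM Studio models with cloud models. The model names below (including the custom LoRA entry) are hypothetical examples, not shipped defaults.

```python
# Hypothetical per-agent routing table mixing local and cloud models.
AGENT_MODELS = {
    "HTF":       ("lmstudio",   "qwen2.5-72b-instruct"),         # heavy reasoning, local
    "Structure": ("together",   "my-smc-lora-finetune"),         # hypothetical custom LoRA
    "Strategy":  ("openrouter", "anthropic/claude-3.5-sonnet"),  # cloud model
    "Execution": ("lmstudio",   "deepseek-r1-distill-qwen-14b"), # structured output, local
}

def model_for(agent):
    """Look up which provider and model a given council agent should call."""
    provider, model = AGENT_MODELS[agent]
    return provider, model
```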
🆕 NEW – Get everything you need to train your ideal trading LLM agent on Together.ai or elsewhere with the AI Trading Agent Fine-Tuning JSONL Dataset Generator.
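For orientation, one training example in the OpenAI-style chat JSONL format that Together.ai fine-tuning accepts might look like the sketch below. The prompts and the assistant label are invented for illustration; they are not output of the dataset generator.

```python
import json

# One illustrative fine-tuning example: each .jsonl line is a single JSON
# object with a "messages" array (system / user / assistant turns).
example = {
    "messages": [
        {"role": "system", "content": "You are an SMC/ICT structure analyst."},
        {"role": "user", "content": "Given these M5 candles, identify the last structure event."},
        {"role": "assistant", "content": '{"event": "BOS", "direction": "bullish", "level": 1.0873}'},
    ]
}

jsonl_line = json.dumps(example)  # one JSON object per line of the .jsonl file
```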
Why Traders Use LLM Council Expert Trader
- ✔ Reduces screen time and emotional decision-making
- ✔ Applies ICT & SMC logic consistently and objectively
- ✔ Enforces strict risk and drawdown protection
- ✔ Ideal for prop firm challenges, live accounts and demo accounts
- ✔ Run AI at zero cost with local inference via LM Studio
Multi-Agent AI Architecture
- HTF Agent: Detects higher-timeframe bias (D1/H4/H1) using SMC, liquidity and AMD phases.
- Structure Agent: Identifies MSS, BOS, FVGs and Order Blocks on M5 — now fully compatible with custom LoRA, fine-tuned models, or local LM Studio models.
- Strategy Agent: Selects or adapts strategies based on volatility and recent performance.
- Execution Agent: Final risk validator (minimum R:R 1:2, spread & portfolio checks).
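The Execution Agent's minimum R:R floor can be sketched as a single check; this is a simplified stand-in, since the EA's full validation also covers spread and portfolio exposure.

```python
def passes_min_rr(entry, stop, target, min_rr=2.0):
    """True when reward/risk is at least min_rr (the 1:2 floor listed above)."""
    risk = abs(entry - stop)
    reward = abs(target - entry)
    return risk > 0 and reward / risk >= min_rr
```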
Advanced Risk & Position Management
- Dynamic position sizing (0.25x – 2x base risk)
- Automatic drawdown-based risk reduction (>5%)
- Breakeven, trailing stops behind structures
- Partial profit-taking (30% / 60%)
- Auto-close on adverse market structure or inactivity
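The sizing rules above can be sketched as follows. The 0.25x–2x band and the 5% drawdown trigger come from the list; the 50% reduction factor and the reading of "30% / 60%" as fractions of the original volume are assumptions, since the EA does not publish its exact curves.

```python
def scaled_risk(base_risk_pct, scale, drawdown_pct):
    """Dynamic position sizing sketch: clamp the multiplier to the 0.25x-2x
    band, then cut risk once drawdown exceeds 5% (50% cut is assumed)."""
    scale = min(max(scale, 0.25), 2.0)
    risk = base_risk_pct * scale
    if drawdown_pct > 5.0:
        risk *= 0.5
    return risk

def partial_close_volumes(volume):
    """Partial profit-taking at 30% then 60% of the original volume,
    leaving a 10% runner (interpretation assumed)."""
    return [round(volume * 0.30, 2), round(volume * 0.60, 2)]
```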
LLM & AI Provider Integration
- 🆕 LM Studio local inference — run open-source models on your own hardware at zero cost
- Hybrid real-time LLM analysis via OpenRouter API and Together.ai
- Supports Claude-3.5-Sonnet, o1-mini, Qwen-2.5-72B and more via cloud
- Supports Llama 3.1, Qwen 2.5, Mistral, DeepSeek-R1 distills and more via local LM Studio
- Custom LoRA adapters & fine-tuned LLMs via Together.ai
- Model override per agent (HTF / Structure / Strategy / Execution) — cloud or local
- Tracks API costs for efficiency (local inference logs as $0.00)
🧠 How to Choose the Right LLM for Trading (Using Livebench)
Not all Large Language Models perform the same in financial reasoning, market structure analysis, or cost efficiency. To objectively compare models, we recommend using Livebench.ai, an independent benchmark platform for real-world LLM performance.
- 1️⃣ Focus on Reasoning & Analysis Scores: essential for SMC/ICT structure detection and execution logic.
- 2️⃣ Balance Quality vs Cost: identify the best quality-to-price ratio for intraday analysis. With LM Studio, local models are always $0.00.
- 3️⃣ Match the Model to the Task:
• HTF bias → strong reasoning models (Claude-class or Qwen2.5-72B locally)
• Structure & execution → fast models, custom LoRA adapters, or Llama-3.1-8B locally
- 4️⃣ Evolve With the Market: rotate or fine-tune models as volatility and structure behavior change.
💡 Tip: Always validate new LLMs, LoRA adapters, fine-tuned models, or local LM Studio models on demo accounts before live or funded deployment.
Performance Expectations*
- Win Rate: 45–65%
- Profit Factor: 1.5–2.5+
- Max Drawdown: <15% with protections enabled
- Average LLM Cost: ~$0.01–0.05 per analysis (cloud) / $0.00 with LM Studio local inference
*Results depend on market conditions and settings. Past performance does not guarantee future results.
🔥 LIMITED RENTAL PROMOTION 🔥
✔ 1-Month EA Rental
❌ Regular Price: $100 USD
✅ Promo Price: $44 USD
⏳ Offer valid until February 28, 2026
🚀 LIMITED EA PURCHASE PROMOTION 🚀
✔ Lifetime EA License
❌ Regular Price: $5,000 USD
✅ Promo Price: $444 USD
🎯 You save $4,556 USD (–91%)
⏳ Offer valid until March 31, 2026
No recurring EA fees. Unlimited usage.
(Only API usage costs apply — or run fully free with LM Studio local inference)
Usage Instructions:
- Set your OpenRouter API key (get one at openrouter.ai). Optional if using LM Studio local inference.
- Download the required indicators (.ex5) and save them to \MQL5\Indicators:
• AVPT_indicator
• CRT_Candle_Range_Theory_Indicator
• HistoricaPriceProjection
• ICT_Concepts
• Liquidity Swings
• Liquidity Sweep
• Multi_Timeframe_Harmony_Index_indicator
• Support and Resistence Levels with Breaks
• TradingSessions_IB_Signals
- Attach indicators to M5–H1 charts.
- Attach the EA to M5–H1 charts. Optional configs: Forex config; Crypto config
- Monitor logs for agent decisions and trades.
🖥️ Optional: LM Studio Local Inference Setup
- Download and install LM Studio (free, Windows/Mac/Linux).
- Download a GGUF model (e.g. Qwen2.5-14B-Instruct or Llama-3.1-8B-Instruct) from the LM Studio model browser.
- In LM Studio → Local Server tab: enable CORS, then click Start Server (default port 1234).
- In MT5 → Tools → Options → Expert Advisors → enable WebRequest and add http://localhost:1234.
- In EA inputs: set InpUseLMStudio = true and InpLMStudioURL = http://localhost:1234/v1/chat/completions.
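Before attaching the EA, it can help to confirm the local server responds. A small sketch, assuming the default port: `chat_endpoint()` builds the value expected by InpLMStudioURL, and `list_local_models()` queries LM Studio's OpenAI-compatible /v1/models endpoint (only works while the server is running).

```python
import json
import urllib.request

def chat_endpoint(base="http://localhost:1234"):
    """Build the InpLMStudioURL value from the server's base URL."""
    return base.rstrip("/") + "/v1/chat/completions"

def list_local_models(base="http://localhost:1234"):
    """List model ids served by LM Studio, as a quick reachability check."""
    with urllib.request.urlopen(base + "/v1/models", timeout=5) as resp:
        return [m["id"] for m in json.loads(resp.read())["data"]]
```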
Mandatory Step: Add URLs in the Terminal
According to the documentation: To use the WebRequest() function, add the following URLs to the allowed list:
- https://openrouter.ai
- https://api.together.xyz
- http://localhost:1234 (add this if using LM Studio local inference)
