Within this OnTimer, you can "tell" the server to "listen" to what the clients want, and then react or respond. I've tried this with NamedPipes and it mostly worked, but sometimes it simply didn't, since the internal handling of messages, backpressure etc. triggered exceptions in the NamedPipe core. So I dropped it and decided on my own solution.
Ring buffers etc. are again not the thing you should do, since the MQL side can lag and you risk an overflow. Better to block with clear timeouts, debugging flags etc. Furthermore, you are not restricted to this 25 ms, since you can also balance the handling based on the payload coming from the clients, and you can respond to up to 1000 messages per cycle - when you have the performance in your pipe to do so. Another point for the need for performance when copying data: it might happen that C# wants data for 1000 bars, and you have to deliver it on the MQL side as efficiently as possible.
Depends on what you want to achieve.
In our case, and maybe also in the interest of many professional coders, it's necessary to move most of the code to an IDE like VS with C#. The problem is, there is no such API in MT, so you need to create your own.
Mainly this means you have to constantly update data that lives in MQL but is needed on the other side: account data, PnL, quotes, tick data, etc. The list gets longer the more you want to do outside MQL. That's, for example, the reason why this ring-buffer approach is not the way to go. Why? Let's exaggerate a little.
Imagine you have 100 open positions across 20 symbols. To manage this from outside, you need to know all the details of every position, all the symbols, the ticks, maybe the last bars, depending on what you want to do on the other side. News comes out. 20 symbols fire ticks at the same time - more than you can handle, but also more than you actually need. Do you really need every tick to update some drawings in C#? No. Do you need every tick to manage the positions? Also no, since the execution time when closing a position is always above 50 ms anyway. It doesn't matter if you miss a tick - most of the time.
If you used ring buffers, every tick would go into that ring, get stuck in the queue and not help you. At the same time, there is an overflow risk for such buffers. We are only talking about the server side in C# here, not yet about the MQL server side.
So what you rather need is a mutual (shared) buffer that still works like a pipe. The way I do it is: push the data to the pipe, and when the next tick comes, check: was it even read? If not, overwrite it. With ring buffers that's not possible. And here we also come to the "this =" thing. Such data, in structs, can be copied to the IPC buffer in one go: the struct is taken from MQL via an IntPtr and copied using Marshal.Copy into the same struct in C#, after it went through the IPC pipe / its buffer.
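Just to make the "overwrite if not read" idea concrete, here is a minimal sketch of such a single-slot update buffer. It assumes one writer and one reader; the class and member names are illustrative only, not the actual bridge API:

```
// Minimal sketch of the "overwrite if not yet read" update slot described above.
// Assumption: one producer (ticks coming from MQL) and one consumer (C# side).
public sealed class LatestValueSlot<T> where T : struct
{
    private T _value;
    private bool _hasUnread;                  // false = already read, true = fresh value waiting
    private readonly object _gate = new object();

    // Called for every incoming tick/update: simply replaces whatever is in the slot.
    // If the previous value was never read, it is silently dropped - unlike a ring
    // buffer, nothing queues up and nothing can overflow.
    public void Publish(T value)
    {
        lock (_gate)
        {
            _value = value;
            _hasUnread = true;
        }
    }

    // Called by the consumer: returns true only if a fresh value arrived since the
    // last read, so stale data is never processed twice.
    public bool TryRead(out T value)
    {
        lock (_gate)
        {
            value = _value;
            if (!_hasUnread) return false;
            _hasUnread = false;
            return true;
        }
    }
}
```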
In general, you have 3 kinds of messages:
1. Push - Client sends, Server processes, one after the other
2. Request - Client sends, waits, server processes, returns results, client takes it
3. Update - as described above
NamedPipes and everything that deals with ring buffers can only handle 1 and 2, but the 3rd is crucial for the use case with data like ticks, and also any kind of data that is only accurate for the very moment. With the classical methods, you just create your own bottleneck for nothing.
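As a rough sketch of how the three kinds can be told apart on the C# side - the enum, types and handler names here are assumptions, not the real implementation:

```
using System;
using System.Collections.Generic;

public enum MessageKind { Push, Request, Update }

public sealed class BridgeMessage
{
    public MessageKind Kind;
    public string Payload = "";
}

public static class Dispatcher
{
    public static void Handle(BridgeMessage msg,
                              Queue<BridgeMessage> asyncQueue,   // kind 1: processed in order
                              Func<string, string> respond,      // kind 2: blocking request/response
                              Action<string> overwriteLatest)    // kind 3: only the newest value matters
    {
        switch (msg.Kind)
        {
            case MessageKind.Push:
                asyncQueue.Enqueue(msg);            // fire and forget, handled one after the other
                break;
            case MessageKind.Request:
                string reply = respond(msg.Payload); // client blocks until the reply is stored
                // ... hand 'reply' back through the IPC buffer
                break;
            case MessageKind.Update:
                overwriteLatest(msg.Payload);        // stale values are simply replaced
                break;
        }
    }
}
```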
The next problem with the classical methods is: you are restricted in the number of clients, since every client has its own instance. What if disposing fails several times in a row, what if clients need temporary access here and there? You will end up in a deadlock sooner or later.
And it's not that I don't have this solution - I developed it and it's working reliably. There are also several worker threads maintaining the server index, checking for dead candidates, disposing data, etc. All the MQL servers share one worker (at the moment), since they all have this 25 ms timer problem anyway, so the worker is no bottleneck. Furthermore, Listen and Return are split. Whenever MQL sends a Listen, it blocks for a fraction of a microsecond, never longer. After that, the MQL code takes over and the worker continues with the next MQL server. Return does the same, always blocking only for the time really needed to store the data into the buffer.
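To illustrate the shared-worker idea, here is a hedged sketch: one background thread cycles over all registered MQL-side endpoints, picks up pending Listen requests, stores the Return data and removes dead candidates. The interface and member names are assumptions standing in for the real code:

```
using System;
using System.Collections.Generic;
using System.Threading;

public interface IMqlEndpoint
{
    bool IsAlive { get; }
    bool TryTakeListen(out string request);   // returns immediately; MQL is blocked only briefly
    void StoreReturn(string response);        // blocks only while copying into the IPC buffer
}

public sealed class SharedWorker
{
    private readonly List<IMqlEndpoint> _endpoints = new List<IMqlEndpoint>();
    private readonly object _gate = new object();
    private volatile bool _running = true;

    public void Register(IMqlEndpoint ep) { lock (_gate) _endpoints.Add(ep); }

    public void Run()   // started once on a dedicated thread shared by all MQL servers
    {
        while (_running)
        {
            IMqlEndpoint[] snapshot;
            lock (_gate)
            {
                _endpoints.RemoveAll(e => !e.IsAlive);   // drop dead candidates
                snapshot = _endpoints.ToArray();
            }

            bool didWork = false;
            foreach (var ep in snapshot)
            {
                if (ep.TryTakeListen(out var request))
                {
                    // ... process the request, then hand the result back
                    ep.StoreReturn("ACK:" + request);
                    didWork = true;
                }
            }
            if (!didWork) Thread.Sleep(1);   // idle back-off between cycles
        }
    }

    public void Stop() => _running = false;
}
```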
All these threads are managed within a DLL - no problem so far.
The servers on C# side have their own thread each.
Each server has its own queue, whereby the queue is only used for async messages. Sync messages are handled separately.
Most of the messages are string-based, especially the sync messages, which need a response.
All the updating stuff, ticks, account data, time etc. is done via structs.
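For the struct-based updates, a minimal sketch of the transfer on the C# side could look like the following. The actual field list is not shown in this thread, so the TickUpdate layout here is purely an assumption; the point is only that the MQL struct and the C# struct must have identical, sequential, blittable layouts so the raw bytes behind the IntPtr can be copied across in one go:

```
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct TickUpdate            // hypothetical layout, must mirror the MQL struct exactly
{
    public long TimeMsc;
    public double Bid;
    public double Ask;
    public double Volume;
}

public static class StructTransfer
{
    // Copy the raw bytes behind the pointer coming from MQL into a managed buffer
    // (standing in for the IPC slot), then rebuild the struct on the C# side.
    public static TickUpdate ReadFrom(IntPtr source)
    {
        int size = Marshal.SizeOf<TickUpdate>();
        byte[] buffer = new byte[size];
        Marshal.Copy(source, buffer, 0, size);          // IntPtr -> managed bytes

        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            return Marshal.PtrToStructure<TickUpdate>(handle.AddrOfPinnedObject());
        }
        finally
        {
            handle.Free();
        }
    }
}
```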
The risk with all this is just one: what if MQ decides to reorganize how data is handled in structs? If they do so without any announcement, it will all crash.
Long story short: if someone has a solution for how to do this with the same performance we currently have with this = and Marshal.Copy on the DLL side, I am happy to learn how.
10 clients in 10 threads sending 1000 requests each. The server responds with the original message plus a verification, and the client approves it. 10,000 times without any error. Of course this can be done with a million as well.
For me, what mainly matters is how much I can pack into one OnTimer cycle. 10,000 requests from 10 clients handled in one OnTimer() cycle is very performant - no need to improve that, and since it's already 3 times faster than NamedPipes, I doubt it's possible, and anyway it's not necessary.
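The stress test described above could look roughly like this. The IBridgeClient interface and SendRequest method are placeholders for the real bridge API, which is not shown in the thread:

```
using System;
using System.Threading;
using System.Threading.Tasks;

public interface IBridgeClient
{
    string SendRequest(string payload);   // blocking request/response (message kind 2)
}

public static class StressTest
{
    // 10 clients in parallel, 1000 round-trips each, every reply checked against the
    // original payload; returns the number of failed round-trips (expected: 0).
    public static int Run(Func<IBridgeClient> connect, int clients = 10, int requestsPerClient = 1000)
    {
        int errors = 0;
        Parallel.For(0, clients, c =>
        {
            var client = connect();
            for (int i = 0; i < requestsPerClient; i++)
            {
                string payload = $"client{c}:msg{i}";
                string reply = client.SendRequest(payload);
                // the server is expected to echo the payload plus a verification suffix
                if (reply == null || !reply.StartsWith(payload))
                    Interlocked.Increment(ref errors);
            }
        });
        return errors;
    }
}
```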
With async messages it comes to 20,000-30,000 messages within the same time.
If I remember correctly, during our talks before I had to excuse myself due to personal reasons, we had discussed using a continuously running MQL5 Service instead of OnXXX events in EAs. Did you give up on that?
Given that an MQL5 Service has a dedicated thread and can aggregate and send data out without interruption, what happened to that line of thought?
Yep. Of course I remember.
At that time I also mentioned that we already have some kind of bridge, and this is the improved version in the meantime. A dedicated thread is not strictly necessary, since we can monitor from the "central service" which instance of the EA is not very busy and tell that available EA to execute some job like collecting bars, updating account data etc.
EDIT: But still, the thought is interesting. If a service could take over all the serving stuff within a separate thread, it could still be very helpful. But honestly: I have never worked with these MQL services; you have more experience with that than I have. I don't know what's possible and what isn't.
So would it be possible to overcome that 25ms OnTimer() bottleneck with an MQL service app?