Machine learning in trading: theory, models, practice and algo-trading - page 3695

This is a logical hypothesis from the graph provided.
I don't know what the spread or the dollar has to do with it. There are two XAUUSD_bid series; the higher this price, the more favourable it is.
If the close bars had been built on the ask, you would get a negative spread, not a positive one, because the real symbol is almost always more profitable to trade/exchange than its synthetic counterpart.
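The comparison being described, taking the bid series of the real symbol and of its synthetic counterpart and looking at the sign of their difference, can be sketched as follows. The series names and values here are illustrative assumptions, not data from the thread:

```python
import numpy as np

# Hypothetical bid quotes for the real symbol and its synthetic
# counterpart (e.g. built from the legs); values are made up.
real_bid      = np.array([2315.10, 2315.40, 2315.25])
synthetic_bid = np.array([2314.90, 2315.15, 2315.05])

# Positive spread means the real symbol is quoted higher than the
# synthetic one, consistent with the observation above.
spread = real_bid - synthetic_bid
mean_spread = spread.mean()
```

If the closes were instead built on the ask, the same subtraction would flip sign for the reason stated in the post.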
But for other periods there is no such difference; everything hovers around zero. Then I will take quotes from another DC and compare; I think it will be the same.
In order not to get bogged down in all sorts of conspiracy theories, I will finish writing the TS (trading system) next week and check it :)
Another live server. Everything here is in relative order.
You are right, the quotes are broken on the demo. Everything is completely different. Now it's even better :)
If you open the legs in different DCs, it works out very well. I need to think through the most convenient scheme for doing it.
I might dust off my old contraption and rewrite it for this case :) Or write separate software with connectors. I don't feel like writing code.
An article about hidden Markov models. I like the hmmlearn library; well done.
https://www.mql5.com/ru/articles/17917
I have been using pieces.app for over a year now; it is probably the most useful and (so far) free coding app that doesn't require a VPN.
It is limited to coding tasks only; you can't generate pictures.
Today they added new models (in addition to Claude 3.7).
Later I will make a comparison of the new models on a rather complex task.
The test subject will be this rather complex task (and code from one of the articles on causal inference). Perhaps there will be some clarifying questions. Then I will test each piece of code and compare it with the original. The goal is to improve the quality of the output models (on new data) in the TS, which depends directly on this function.
The goal is quite specific, because if you just ask them to "make it better", they start offering many different non-working variants, including delusional ones.
GPT-4.1 offered an option, but it cannot write working code, nor can it fix the errors that appear.
However, it suggested an interesting library for conformal predictions, which it tried to "put into the code". No credit so far, but thanks for the library, of course.
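The thread doesn't name the conformal-prediction library, so rather than guess at its API, here is a library-free sketch of the underlying idea (split conformal prediction for regression): calibration residuals give a distribution-free interval around any point forecast. Function and variable names are my own:

```python
import numpy as np

def split_conformal_interval(cal_pred, cal_true, test_pred, alpha=0.1):
    """Build (1 - alpha) prediction intervals around test_pred using
    absolute residuals from a held-out calibration set."""
    residuals = np.abs(cal_true - cal_pred)
    n = len(residuals)
    # Finite-sample corrected quantile of the nonconformity scores
    q = np.quantile(residuals, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    return test_pred - q, test_pred + q
```

The appeal for a trading system is that the interval width is a model-agnostic confidence signal: predictions whose interval is too wide can simply be filtered out before trading.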
Maxim, how do you communicate with this intelligence? Please allow me to send you a message.