You are wrong: 100 * $200 == $20,000, a great profit! And you are all nerds, whining about forex, trading, strategies... All that's left is to put this strategy on MQL5 Signals and bet on whether at least one person signs up.
As for the suggestion, I would go further: the author should launch a signal and show the result for at least a month. Lately the site has been flooded with a huge number of empty articles. ((:
Yes, on the one hand. On the other hand, try writing at least one article yourself to understand the cost of the labour and the level of complexity... As the classic saying goes, talk is cheap.
And what good is this complexity?
Who will return the time spent on these articles?
It's about forex, not robotics.
Depends on what you count as useful. If you expect the author to share the Grail in the article, then probably yes, there is no use... But if you regard the article as a source and development of some market idea, then it may have a right to exist...
Attitude towards users
Clearly
Only to those who think the author owes them something...
The scammer doesn't owe anyone anything either.
But people fall for him for some reason.
If the articles didn't contain triggers and blatant motivational claims like "...the model is capable of generating profit", then fine, it would be our own problem.
But when untested information is used to manipulate, that's not really just our problem.
Considering that the first user was banned for criticism, I'll stop here for good as well. You can parry with counterarguments; I'd rather leave it without a reply.
Under one of Dmitry's articles I asked him in the comments to write an article specifically about training his Expert Advisors. He could take any model from any of his articles and fully explain how he trains it: from zero to the result, in detail, with all the nuances. What to look at, in what sequence he trains, how many passes, on what hardware, what he does if the model fails to learn, which errors he examines. As much detail as possible about training, in "for dummies" style. But for some reason Dmitry ignored or didn't notice this request and still hasn't written such an article. I think a lot of people would be grateful to him for it.
Dmitry, please write such an article.

Check out the new article: Neural Networks in Trading: Practical Results of the TEMPO Method.
We continue our acquaintance with the TEMPO method. In this article we will evaluate the actual effectiveness of the proposed approaches on real historical data.
The TEMPO method is built on the use of a pre-trained language model. In particular, the authors of the method use pre-trained GPT-2 in their experiments. The main idea of the approach lies in using the knowledge the model obtained during pre-training to forecast time series. Here, of course, it is worth drawing non-obvious parallels between speech and time series. Essentially, our speech is a time series of sounds that are recorded using letters, while different intonations are conveyed by punctuation marks.
A Large Language Model (LLM) such as GPT-2 is pre-trained on a large dataset (often in multiple languages) and learns a large number of dependencies in the temporal sequence of words, which we would like to exploit in time series forecasting. But sequences of letters and words differ greatly from the time series data being analyzed. We have always said that for the correct operation of any model, it is very important to maintain the same data distribution in the training and test datasets. The same applies to the data analyzed while the model is in operation. A language model does not work with text in the raw form we are accustomed to. First, the text goes through the embedding (encoding) stage, during which it is transformed into a numerical code (hidden state). The model then operates on this encoded data, and at the output stage it generates probabilities for the subsequent tokens. The most probable tokens are then used to construct human-readable text.
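The text → embedding → next-token-probabilities pipeline described above can be sketched in a few lines. This is a toy illustration only: the vocabulary, embedding vectors, and output weights below are made up for the example, whereas a real LLM such as GPT-2 learns them during pre-training and mixes the whole context through attention.

```python
import math

# Toy vocabulary and made-up parameters (illustrative, not from GPT-2).
vocab = ["the", "market", "rises", "falls", "."]

# Embedding stage: each token maps to a numeric vector (a "hidden state").
embedding = {
    "the":    [0.1, 0.3],
    "market": [0.7, 0.2],
    "rises":  [0.4, 0.9],
    "falls":  [0.5, 0.8],
    ".":      [0.0, 0.1],
}

# Hypothetical output weights: one score vector per vocabulary token.
output_weights = {
    "the":    [0.2, 0.1],
    "market": [0.9, 0.4],
    "rises":  [0.3, 0.8],
    "falls":  [0.2, 0.7],
    ".":      [0.1, 0.0],
}

def softmax(scores):
    # Turn raw scores into a probability distribution.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def next_token_probs(context):
    # A real model attends over the entire context; this sketch simply
    # uses the last token's embedding as the hidden state.
    hidden = embedding[context[-1]]
    logits = [sum(h * w for h, w in zip(hidden, output_weights[t]))
              for t in vocab]
    return dict(zip(vocab, softmax(logits)))

probs = next_token_probs(["the", "market"])
best = max(probs, key=probs.get)  # the most probable next token
```

The point of the sketch is only the shape of the pipeline: text becomes numbers, the model operates on the numbers, and the output is a probability distribution over the vocabulary from which the next token is picked.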
The TEMPO method takes advantage of this property. During the training of a time series forecasting model, the parameters of the language model are "frozen," while the parameters that transform the original data into embeddings compatible with the model are optimized. The authors of the TEMPO method propose a comprehensive approach to maximize the model's access to useful information. First, the analyzed time series is decomposed into its fundamental components, such as trend, seasonality, and others. Each component is then segmented and converted into embeddings that the language model can interpret. To further guide the model in the desired direction (e.g., trend or seasonality analysis), the authors introduce a system of "soft prompts".
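The preprocessing step described above (decompose, then segment) can be sketched as follows. This is a minimal illustration, not the article's actual implementation: the moving-average trend estimate, the period, the window, and the patch length are all illustrative choices, and the patches would subsequently be projected into LLM-compatible embeddings by the trainable part of the model.

```python
# Sketch of TEMPO-style preprocessing: split a series into
# trend / seasonal / residual components, then cut each component
# into segments (patches) that would later be embedded for the
# frozen language model.

def moving_average(series, window):
    # Simple trailing moving average as a trend estimate.
    return [sum(series[max(0, i - window + 1):i + 1]) /
            len(series[max(0, i - window + 1):i + 1])
            for i in range(len(series))]

def decompose(series, period, window):
    trend = moving_average(series, window)
    detrended = [x - t for x, t in zip(series, trend)]
    # Seasonal component: average detrended value at each phase of the period.
    seasonal_means = []
    for phase in range(period):
        vals = detrended[phase::period]
        seasonal_means.append(sum(vals) / len(vals))
    seasonal = [seasonal_means[i % period] for i in range(len(series))]
    # Whatever is left after removing trend and seasonality.
    residual = [d - s for d, s in zip(detrended, seasonal)]
    return trend, seasonal, residual

def to_patches(component, patch_len):
    # Non-overlapping segments; each would be mapped to one embedding.
    return [component[i:i + patch_len]
            for i in range(0, len(component) - patch_len + 1, patch_len)]

series = [1.0, 2.0, 1.5, 2.5, 2.0, 3.0, 2.5, 3.5]
trend, seasonal, residual = decompose(series, period=2, window=3)
patches = to_patches(trend, patch_len=4)
```

By construction, trend + seasonal + residual reconstructs the original series exactly, which is the property that lets each component be fed to the model separately without losing information.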
Author: Dmitriy Gizlyk