Machine learning in trading: theory, models, practice and algo-trading - page 3713
Where are we supposed to get data about, say, the content of Trump's post tomorrow (and again in a week, in a month) on his social network? A single post like that can turn more than one market.
The problem is not just that the market is unstable - the problem is that it becomes unstable all of a sudden)
Well, according to the classics, at the very least a system for monitoring deviations: signals like volatility spikes, changes in spreads, or a sharp drop in prediction quality measured through metrics like zetcor, war, etc., or even simple rules like "if the strategy goes negative within a few minutes - switch it off". The general idea is that each model in a portfolio of models has a "market profile" - a pattern of statistical characteristics, or a small ML model trained on it. When the market changes profile, the system closes positions, switches that model off, and switches on another one that was profitable in this regime in the past, if the regime is recognised. News or social media filters, etc., help at least partially understand the source of the move. The most basic and reliable protection is loss limits, stops, and automatic shutdown when key parameters are violated.
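A minimal sketch of such a kill-switch monitor, assuming nothing beyond the post itself - the window sizes, thresholds, and function names here are illustrative choices, not anyone's actual system:

```python
import numpy as np

def volatility_spike(returns, window=50, z_threshold=3.0):
    """Flag a possible regime break when recent volatility jumps far above
    its historical level. `window` and `z_threshold` are illustrative."""
    recent = np.std(returns[-window:])      # volatility of the last `window` bars
    history = np.std(returns[:-window])     # volatility of everything before that
    return bool(recent > z_threshold * history)

def should_disable(pnl_last_minutes, max_drawdown=-100.0):
    """The simple rule from the post: if the strategy goes deep negative
    within a few minutes, switch it off."""
    return sum(pnl_last_minutes) < max_drawdown
```

In a live system these checks would run on every tick or bar, and a triggered flag would close positions and hand control to whichever model's "market profile" matches the new regime.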
In real life the market will always behave in a more ornate and insidious way than 90% of the traders modelling it, but that is the nature of this business - all or nothing, and they chose it themselves. And, hand on heart, few people have even partially implemented the classics and brought them to combat readiness. When someone works alone, it most often comes down to banal stops, which, as we know, is a so-so solution.
An interesting article on the topic of trade labeling. To avoid look-ahead bias, they use past-price labels that do not look into the future.
For example, I have more often labelled trades based on future price changes. The article suggests that labels based on past prices tend to be more stable.
https://www.mdpi.com/1099-4300/22/10/1162
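The two labeling styles being contrasted can be sketched roughly like this - the horizon, the sign-of-return rule, and the function names are my own illustrative choices, not taken from the article:

```python
import numpy as np

def future_return_labels(prices, horizon=5):
    """Look-ahead labeling: sign of the price change over the NEXT `horizon`
    bars. Convenient for training, but the label leaks future information."""
    prices = np.asarray(prices, dtype=float)
    fut = np.roll(prices, -horizon) - prices
    fut[-horizon:] = 0.0                # no future data for the last bars
    return np.sign(fut)

def past_return_labels(prices, horizon=5):
    """Past-price labeling in the spirit of the linked article: sign of the
    change over the PREVIOUS `horizon` bars; uses no future data."""
    prices = np.asarray(prices, dtype=float)
    past = prices - np.roll(prices, horizon)
    past[:horizon] = 0.0                # no history for the first bars
    return np.sign(past)
```

The past-price variant labels the state the market is already in, which is why such labels can be computed in real time without any look-ahead.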
I didn't see a word about labelling on past prices.
Their traditional labelling seems to be determined by the direction of each bar.
And their original labelling:
A normal zigzag: when the price reverses by e.g. 100 pts, a new leg is formed and the direction changes. If the threshold is not exceeded, the direction of the ZZ is maintained.
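The zigzag rule described above can be sketched as follows - a rough illustration of the idea, not the paper's code; the 100-pt threshold and the assumption that the first leg is up are arbitrary:

```python
def zigzag_labels(prices, threshold=100.0):
    """Label each bar with the direction of the current zigzag leg
    (+1 up, -1 down). A new leg starts only when price reverses from the
    last extreme by more than `threshold`; smaller pullbacks keep the
    current direction."""
    labels = []
    direction = 1            # assume the first leg is up
    extreme = prices[0]      # running extreme of the current leg
    for p in prices:
        if direction == 1:
            if p > extreme:
                extreme = p                    # extend the up leg
            elif extreme - p > threshold:      # reversal exceeds threshold
                direction, extreme = -1, p
        else:
            if p < extreme:
                extreme = p                    # extend the down leg
            elif p - extreme > threshold:
                direction, extreme = 1, p
        labels.append(direction)
    return labels
```

Note the label of a bar depends only on prices up to and including that bar, which is exactly what makes this a past-price labeling with no look-ahead.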
Prof. Savelyev calls such scientists grant-eaters....
It is written under the picture and can be seen in the picture. The current state is labelled relative to the current feature values, not the future change n bars ahead.
In other words: current market regimes are predicted, not future changes. Another article with a similar labeler. It is quite different from the Prado-style labelers that a lot of people are hooked on.
https://www.researchgate.net/publication/372976059_Optimal_Trend_Labeling_in_Financial_Time_Series
Strong stuff - dynamic programming is used there to select the optimal labelling strategy.
Unlike CTL, Optimal Trend Labelling, proposed by Kovacevic et al., is not a specific trend labelling algorithm per se. Instead, it focuses on identifying and selecting the optimal trend labelling algorithm in terms of its robustness and its impact on the performance of machine learning classifiers.
Key features of Optimal Trend Labelling:
Definition of robustness: The authors introduce a robustness metric for a trend labelling algorithm. It evaluates how well the cumulative return produced by the labels tolerates the generalisation error of a classifier trained on them.
Noise model: A noise model is proposed to simulate a given classifier accuracy. This allows the robustness of a trend labelling algorithm to be evaluated without training an actual classifier.
Optimal algorithm selection: The goal is to select the most robust labelling algorithm - the one that will be most effective for training machine learning models for financial time series forecasting.
Meta-approach: This is a meta-approach, or framework, for evaluating and selecting existing labelling methods rather than a new labelling method in itself. It can be used to evaluate methods such as CTL.
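The noise-model idea can be sketched roughly: flip a fraction of labels to imitate a classifier's generalisation error and measure how much of the labels' cumulative return survives. This is a loose illustration of the concept, not the paper's exact metric; all names and parameters here are assumptions.

```python
import numpy as np

def cumulative_return(labels, returns):
    """Return earned by trading the labels directly (+1 long, -1 short)."""
    return float(np.sum(np.asarray(labels) * np.asarray(returns)))

def label_robustness(labels, returns, error_rate=0.1, n_sim=200, seed=0):
    """Flip a random fraction `error_rate` of labels to mimic classifier
    errors; return the average fraction of the ideal cumulative return
    that survives the noise."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels, dtype=float)
    base = cumulative_return(labels, returns)
    sims = []
    for _ in range(n_sim):
        noisy = labels.copy()
        flip = rng.random(len(labels)) < error_rate   # which labels to flip
        noisy[flip] = -noisy[flip]
        sims.append(cumulative_return(noisy, returns))
    return float(np.mean(sims)) / base if base != 0 else 0.0
```

A labelling scheme whose return degrades gracefully as the flip rate grows would count as robust under this kind of simulation.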
Hi Maxim, there is essentially a baserga, but I think it is forbidden to discuss it here - in private then........ How are you?
Hi, you try it - if you don't get banned, then it's allowed :) but we can also do it in private.
Let's do it in private.
Everything ingenious is simple! I haven't investigated the TS yet - I've been resting for a month and a half already :)
An addition to the variety of trading systems based on regression models.
https://openreview.net/pdf?id=cGDAkQo1C0p