Machine learning in trading: theory, models, practice and algo-trading - page 1743

It's a complicated thing; you can't know exactly how to approach it... it's all very vague.
I don't know whether it's too much to do it by features, or what.
Well, what about it?
Found it here
I looked through the sources, but nowhere did I see an unambiguous conclusion that this method performs badly...
What's up?
See the very interesting TSrepr (Time Series Representations) package in R.
"Time series representation methods can be divided into four groups (types) (Ratanamahatana et al. (2005)):
In nondata adaptive representations, the parameters of transformation remain the same for all time series, irrespective of their nature. In data adaptive representations, the parameters of transformation vary depending on the available data. The model-based approach relies on the assumption that the observed time series was created from a basic model; the aim is to find the parameters of such a model as a representation. Two time series are then considered similar if they were created by the same set of parameters of a basic model. In data dictated approaches, the compression ratio is defined automatically based on the raw time series, such as in clipping (Aghabozorgi, Seyed Shirkhorshidi, and Ying Wah (2015)).
The most famous (well known) methods of the nondata adaptive type are PAA (Piecewise Aggregate Approximation), DWT (Discrete Wavelet Transform), DFT (Discrete Fourier Transform), DCT (Discrete Cosine Transform) and PIP (Perceptually Important Points). For the data adaptive type, they are SAX (Symbolic Aggregate approXimation), PLA (Piecewise Linear Approximation) and SVD (Singular Value Decomposition). For model-based representations they are ARMA, mean profiles, or estimated regression coefficients from a statistical model (e.g. a linear model). Data dictated is the least known type of representation, and the most famous method of this type is clipping (bit-level representation) (Bagnall et al. (2006)).
In the TSrepr package, time series representation methods from all four groups (nondata adaptive, data adaptive, model-based, and data dictated) are implemented; the function names (e.g. repr_pip for PIP) are listed in the package documentation.
Very interesting transformations, including clustering.
Good luck
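To make the nondata adaptive idea concrete, here is a minimal Python sketch of PAA (Piecewise Aggregate Approximation): split the series into equal-length segments and keep only each segment's mean. This is just the general technique, not the TSrepr `repr_paa` implementation.

```python
# Piecewise Aggregate Approximation (PAA) sketch:
# reduce a series to `segments` values, one mean per chunk.
def paa(series, segments):
    """Average the series over `segments` (nearly) equal-length chunks."""
    n = len(series)
    out = []
    for i in range(segments):
        start = i * n // segments       # integer chunk boundaries
        end = (i + 1) * n // segments
        chunk = series[start:end]
        out.append(sum(chunk) / len(chunk))
    return out

print(paa([1, 2, 3, 4, 5, 6, 7, 8], 4))  # → [1.5, 3.5, 5.5, 7.5]
```

The "parameters of transformation" here are just the segment boundaries, which do not depend on the data — which is exactly why PAA is classed as nondata adaptive.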
Look at the very interesting TSrepr (Time Series Representations) package in R.
Remember when I asked you to make a script for MT4? There was a neural network trained with the nnfor package, and the target was PIP (Perceptually Important Points, repr_pip) from TSrepr :)
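For reference, the PIP idea mentioned above can be sketched in a few lines: start from the two endpoints and repeatedly add the point with the largest vertical distance from the chord between its adjacent already-chosen points. This is a hedged sketch of the general algorithm, not TSrepr's `repr_pip` implementation.

```python
# Perceptually Important Points (PIP) sketch: greedily pick the k
# indices that deviate most from the piecewise-linear reconstruction.
def pip(series, k):
    """Return k indices of perceptually important points of `series`."""
    n = len(series)
    chosen = [0, n - 1]                       # always keep the endpoints
    while len(chosen) < k:
        best_idx, best_d = None, -1.0
        for a, b in zip(chosen, chosen[1:]):  # each gap between chosen points
            for i in range(a + 1, b):
                # vertical distance from point i to the chord (a, b)
                t = (i - a) / (b - a)
                interp = series[a] + t * (series[b] - series[a])
                d = abs(series[i] - interp)
                if d > best_d:
                    best_d, best_idx = d, i
        chosen.append(best_idx)
        chosen.sort()
    return chosen

print(pip([0, 1, 0, 5, 0, 1, 0], 3))  # → [0, 3, 6]: the spike is kept
```

Using the selected values (or their index gaps) as a target, as described above, compresses a price curve into its visually salient turning points.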
Vladimir! I have a few questions, if I may...
1) Tell me, what error did you manage to achieve on classifying the zigzag direction on EURUSD? And did you use noise filtering while doing it?
2) Does "discretization" of predictors, that you described in your articles, worsen the quality of learning?
3) I want to try to do some kind of meta-learning, at the lowest level, the gist of the idea is as follows:
1. Train, say, a random forest on the data.
2. Pull out all the rules the forest has generated and use them as new predictors; each rule is a predictor, and there will be 500-1000 of them. The predictors turn out to be "sparse", but what can you do?
3. Train a new model on the rule-predictors...
The idea is:
1) to increase the number of predictors;
2) to obtain more complex and deeper, i.e. more hierarchically complex, rules;
3) a forest reports its prediction as the sum of the predictions of all its rules (trees); it seems to me that if we consider the rules separately rather than their sum, we can separate the class labels better, maybe find some unique combinations of rules, etc.
The question is: isn't what I just wrote the usual gradient boosting?
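The scheme above can be sketched with scikit-learn by treating tree leaf membership as the "rules": each (tree, leaf) pair becomes one binary predictor, and a second model is trained on that sparse matrix. A minimal sketch of the idea, not the poster's exact setup; the dataset and model choices here are illustrative.

```python
# Sketch: forest "rules" (leaf memberships) as new sparse predictors,
# with a second-stage model trained on top of them.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# Stage 1: fit the forest whose trees generate the rules.
forest = RandomForestClassifier(n_estimators=50, max_depth=4, random_state=0)
forest.fit(X, y)

# Each sample's leaf index in every tree; one-hot it so every
# (tree, leaf) pair is a binary rule-predictor (sparse, as expected).
leaves = forest.apply(X)                 # shape (n_samples, n_trees)
encoder = OneHotEncoder()
rules = encoder.fit_transform(leaves)    # sparse matrix of rule indicators

# Stage 2: a new model on the rule-predictors instead of the raw features.
meta = LogisticRegression(max_iter=1000)
meta.fit(rules, y)
print(rules.shape, meta.score(rules, y))
```

This differs from gradient boosting: boosting fits trees sequentially on residuals, while here all rules come from one forest and a second model reweights them jointly (closer to RuleFit or scikit-learn's RandomTreesEmbedding).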
4) Also, where can I get those spectral indicators you use, SATL, FATL, etc.?
Found it here
I read the topic and came to the same conclusion. And the prediction in CSSA is cleverly done, predicting one step ahead at a time; is it really that effective?
Are there any speed comparisons between the FFT and SSA? Or take complex wavelets and you get the same Lissajous figures. Only it's not clear how to feed them into the optimizer; they are more suitable for visual tuning.
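"Predicting one step ahead at a time" is ordinary recursive forecasting: fit a model that maps the last few values to the next one, then feed each prediction back into the window. A generic sketch under that assumption only, not the CSSA method from the thread; the function name and lag settings are hypothetical.

```python
# Recursive one-step-ahead forecasting sketch: predict x[t] from the
# previous `lags` values, then roll each prediction into the window.
import numpy as np
from sklearn.linear_model import LinearRegression

def recursive_forecast(series, lags, horizon):
    """Fit a one-step lag model, then iterate it `horizon` steps ahead."""
    X = [series[i:i + lags] for i in range(len(series) - lags)]
    y = series[lags:]
    model = LinearRegression().fit(X, y)

    window = list(series[-lags:])        # last observed values
    preds = []
    for _ in range(horizon):
        nxt = model.predict([window])[0] # one step ahead
        preds.append(nxt)
        window = window[1:] + [nxt]      # feed the prediction back in
    return preds

series = np.arange(20.0)                 # a linear trend, learned exactly
print(recursive_forecast(series, 3, 5))  # ≈ [20.0, 21.0, 22.0, 23.0, 24.0]
```

The known drawback of the scheme is error accumulation: each fed-back prediction compounds the previous step's error, which is why its multi-step effectiveness is a fair thing to question.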
CSSA stands for Causal SSA. This method is in the 2013 book.
Oleg and Miklouha were banned?)
Oleg was unbanned, but Miklouha for some reason ............