Machine learning in trading: theory, models, practice and algo-trading - page 2249

How can we measure the predictability or stationarity of a series without removing the trend?
What would a measure of series stability look like?
Stationarity means the series is described by one mathematical model, or, if we decompose it, that the frequencies and amplitudes are constant over the whole segment. I.e. a short section should be described the same way as a long one, and different short sections across the long one should get the same description.
This comes from filters and carriers. If the noise frequency is commensurate with the carrier, it is not a problem; if it overlaps the informative band, it is worse. And of course the noise amplitude must be smaller than the signal's.
The stability of a series within a single mathematical model is easy to understand. But when the models change periodically, you also have to account for how quickly a stable region establishes itself, the length of the transition regions, the length of the stable regions, and the constancy of the frequency and amplitude characteristics. It is a complex concept.
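One simple way to probe "predictability without removing the trend" is a Hurst-exponent estimate: it distinguishes a persistent (trending) series from a mean-reverting or purely random one straight from the raw prices. This is my own illustrative sketch, not something proposed in the thread, and the lag range is an arbitrary choice:

```python
import numpy as np

def hurst_exponent(series, max_lag=20):
    """Estimate the Hurst exponent from how lagged differences scale.

    H near 0.5 -> random-walk-like, H > 0.5 -> persistent (trending),
    H < 0.5 -> mean-reverting.  A rough diagnostic, not a formal test.
    """
    lags = range(2, max_lag)
    # standard deviation of the differences at each lag
    tau = [np.std(series[lag:] - series[:-lag]) for lag in lags]
    # slope of the log-log regression is the Hurst estimate
    slope = np.polyfit(np.log(list(lags)), np.log(tau), 1)[0]
    return slope

rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=2000))   # random walk: H near 0.5
noise = rng.normal(size=2000)             # white noise: H near 0
print(hurst_exponent(walk), hurst_exponent(noise))
```

Because the estimate is computed on a sliding history window, it suffers from the same lag as any moving-average-style statistic, which is exactly the objection raised later in the thread.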
I don't know how to implement it... maybe there is a simpler approach.
I want to build a network that takes market quotes as input and outputs a more "predictable" series.
But I need a measure of "predictability".
So far, science cannot pinpoint where stationarity begins, because it is only determined on history, like a moving average.
There is no measure of predictability on a non-stationary series; it only exists on stationary sections. If the network can at least identify those sections on history to begin with, that would already be something.
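Identifying stationary sections on history, as suggested here, could be sketched as a rolling comparison of window statistics: flag adjacent windows whose first two moments roughly agree. A crude illustration of the idea only; the window size and `tol` threshold are arbitrary assumptions of mine:

```python
import numpy as np

def stable_regions(x, window=100, tol=0.5):
    """Flag window pairs whose mean/std are close to each other.

    A crude proxy for 'locally stationary': adjacent windows are
    described by roughly the same first two moments.
    """
    n = len(x) // window
    scale = np.std(x) + 1e-12          # normalise thresholds to series scale
    flags = []
    for i in range(n - 1):
        a = x[i * window:(i + 1) * window]
        b = x[(i + 1) * window:(i + 2) * window]
        close = (abs(a.mean() - b.mean()) < tol * scale
                 and abs(a.std() - b.std()) < tol * scale)
        flags.append(close)
    return flags

rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(0, 1, 500),    # one regime
                         rng.normal(5, 3, 500)])   # regime change at t=500
flags = stable_regions(series, window=100)
print(flags)   # the pair straddling the regime change is flagged False
```

As the posts above note, this only labels the past: the boundary of a regime is visible only after enough new history has accumulated.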
That is one option. A measure of predictability cannot be reduced to a single value. )
You don't get it... I won't be predicting anything; I will force the network to generate a new, stationary series...
I'm fine with it being defined on history, as long as it works.
That's my thinking.
Entropy can also be used. It is a complicated concept for me, like the stability of an equity curve: it cannot be described by a single parameter either.
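Entropy as a predictability score can be illustrated with permutation entropy, which counts how evenly the ordering patterns of short windows are distributed. The choice of this particular entropy variant is mine, not necessarily what the poster had in mind:

```python
import numpy as np
from math import log, factorial

def permutation_entropy(x, order=3):
    """Normalised permutation entropy of a series.

    Near 0 -> ordering patterns are highly regular (more predictable),
    near 1 -> patterns are as disordered as white noise.
    """
    counts = {}
    for i in range(len(x) - order + 1):
        # rank pattern of each length-`order` window
        pattern = tuple(np.argsort(x[i:i + order]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * log(c / total) for c in counts.values())
    return h / log(factorial(order))   # normalise by max possible entropy

rng = np.random.default_rng(2)
sine = np.sin(np.linspace(0, 20 * np.pi, 1000))   # regular -> low entropy
noise = rng.normal(size=1000)                      # irregular -> high entropy
print(permutation_entropy(sine), permutation_entropy(noise))
```

Consistent with the point above, a single number like this still hides the structure of where the series is orderly and where it is not; it would have to be computed in a rolling window to say anything local.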
Yes, you are right: if the feature vector is simply converted to a matrix and fed to a convolutional network, little changes (I have already checked :)). In my case the idea is to make maximum use of the convolutional network's ability to find and reuse local templates. These patterns are invariant to translation, i.e. a multilayer convolution can find the same pattern in different places in the image. The same architecture, with aggressive intermediate reduction of the feature map, lets a hierarchy form between the templates on different convolutional layers. So I am trying to find a graphical interpretation of the quotes that will let the convolution find those templates.
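One common graphical interpretation of a 1-D series that gives a convolution translation-invariant local texture to work with is a recurrence plot. This is my own assumption about what such an encoding could look like, not the poster's actual method:

```python
import numpy as np

def recurrence_plot(x, eps=0.1):
    """Binary recurrence plot: pixel (i, j) is 1 when samples i and j
    are within eps of each other after min-max scaling.  Repeated
    structure in the series shows up as repeated local texture that a
    convolutional net can scan for.
    """
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)  # scale to [0, 1]
    d = np.abs(x[:, None] - x[None, :])              # pairwise distances
    return (d < eps).astype(np.uint8)

series = np.sin(np.linspace(0, 4 * np.pi, 64))
img = recurrence_plot(series)
print(img.shape)   # a 64x64 binary image, ready for a conv layer
```

The appeal for the stated goal is that a pattern recurring at different times produces the same local texture at different positions in the image, which is exactly the translation invariance the post is after.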
For our tasks convLSTM is more suitable, i.e. convolution that takes both spatial and temporal structure into account. Examples can be found here and here. I'll test this week how it works in torch. There is an implementation in PyTorch.
Good luck
On digital signal processing, neural networks, and errors of the first and second kind.
I read up to about halfway and stopped with a chuckle at this passage:
"...random numbers. A neural network using these weights may produce the correct input-output relationship, but why these particular weights work remains a mystery. This mystical property of neural networks is the reason why many scientists and engineers avoid them. Think of all the science fiction spread by computer renegades."
I think the author is very far from neural networks, even further than I am ))