Machine learning in trading: theory, models, practice and algo-trading - page 1909
I have a dirty file of 7,700 columns where I take 24 lags, so don't argue, just look here. Here's your file.
And here is mine
What's the difference???? I'm not going to stall. In principal component analysis, where each column is its own coordinate axis, what matters is whether the points from the different columns form clusters when plotted in one coordinate system common to all of them. The interpretation is simple: the more vertical and horizontal the vectors, the better. What you have is just a uniform blob.
If you want to compress the information, first check for autocorrelation — you could then safely leave only one input, but the network still won't work, because it has no memory.
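The autocorrelation check mentioned above can be sketched like this (illustrative only, not the poster's code; the series and threshold are assumptions):

```python
# Sketch: check lag-1 autocorrelation of an input column before deciding
# whether highly redundant columns can be compressed down to one input.
import numpy as np

def lag1_autocorr(x):
    """Pearson correlation between a series and itself shifted by one step."""
    x = np.asarray(x, dtype=float)
    x0, x1 = x[:-1], x[1:]
    x0 = x0 - x0.mean()
    x1 = x1 - x1.mean()
    return float((x0 * x1).sum() / np.sqrt((x0 ** 2).sum() * (x1 ** 2).sum()))

# A trending series is strongly autocorrelated -> its lags are redundant inputs
trend = np.cumsum(np.ones(500))
print(lag1_autocorr(trend))   # very close to 1.0

# White noise has no memory between steps
rng = np.random.default_rng(0)
noise = rng.standard_normal(500)
print(lag1_autocorr(noise))   # close to 0.0
```

A value near 1 means neighbouring columns carry almost the same information; a value near 0 means each column is genuinely new information.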
Well, in addition to the model estimates.
The predictors correspond to the number of columns in the file.
258 is the total number of vectors. I removed class 0 and kept class 2, renaming it to zero, because it was balanced with class 1 in number. 19.60 is the quadratic error — more precisely, the difference between the plain linear and the quadratic error, which should tend to zero. 79.141 is the overall generalizability; the difference between the errors decreases as it approaches 100. 69.767 is specificity. The control-set total is 75 with an overall generalizability of 70. We got the answer DON'T KNOW on 77 vectors of the total sample, of which 17 were in the control set.
In fact, even though I got worse results on training, I got much better results on the control set. Moreover, this is not a test set, like yours, but a control one — the one the network did not see at all. The test set is what you train against so that the test results look good, i.e. the network potentially sees the test data during training. The control set it never sees. Questions????
AdaBoost gave me 79 on 79.
Maybe you are using an AI configuration such that training on the whole set can yield a high training score.
That's exactly what it is. Let me try to explain it another way. Suppose there is a classical system on MA(100) and price: crossing up is a buy, crossing down is a sell. Usually you feed the network MA and price as inputs and the system signals as outputs. In this case we save on inputs, since the MA has been calculated in advance and is sent to the net in ready form. Alternatively, you can feed the network not the MA but 100 lags of price on the input (so the network computes the MA itself) and the system signals on the output. In this form the network cannot be fed fewer than 100 lags of price.
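The two input layouts described above can be sketched as follows (a minimal illustration on synthetic data, not the poster's code; the series length and signal encoding are assumptions):

```python
# Sketch: (a) precomputed MA(100) + price as 2 inputs, versus
#         (b) 100 raw price lags so the network can "compute the MA itself".
import numpy as np

PERIOD = 100
rng = np.random.default_rng(1)
price = np.cumsum(rng.standard_normal(2000))  # synthetic price series

# Moving average computed in advance ("ready form")
ma = np.convolve(price, np.ones(PERIOD) / PERIOD, mode="valid")
aligned_price = price[PERIOD - 1:]

# System signal: +1 = cross up (buy), -1 = cross down (sell), 0 = no cross
above = aligned_price > ma
signal = np.diff(above.astype(int))

# Layout (a): 2 inputs per example -> [price, ma]
X_compact = np.column_stack([aligned_price[1:], ma[1:]])

# Layout (b): 100 inputs per example -> the last 100 raw prices
X_lags = np.lib.stride_tricks.sliding_window_view(price, PERIOD)[1:len(signal) + 1]

print(X_compact.shape, X_lags.shape)  # same number of rows, 2 vs 100 columns
```

Note that the row-mean of each 100-lag window reproduces the precomputed MA exactly, which is the sense in which layout (b) lets the network rediscover the indicator on its own.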
Primitive thoughts, and the sooner you get rid of them, the sooner you will climb out of the pit of your delusions. You are breaking one of the main rules of model preparation. Remember how Alexei Vyazemsky and I were active here around the Olympic Games; as a sign of gratitude I filled him in about lags, and the effect surprised even me. I am a practitioner first and a theorist second. Let me quote part of the correspondence, which reveals the essence of lags.
Quote:
As you noticed, I save data for 15 instruments, and on the basis of this data I build several indicators: I take the stochastic, I take the cumulative standard deviation, and that's about it. As a result, I have 307 unique inputs that are primary for the current signal. I then take these 307 values for the current signal and append to it (the signal) another 307 values from each of the 24 previous signals. The point of this decision is that when a signal appears, the NS is immediately presented with the data for the last 25 signals. This is the transformation that allows the classification to look deep into the history of the current signal. In essence it is a lag of depth 24.
End of quote.
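The lag-stacking described in the quote can be sketched like this (assumed shapes, not the author's code; note that 307 × 25 = 7,675, close to the "7,700 columns" mentioned earlier in the thread):

```python
# Sketch: each signal has 307 features; the current signal's row is
# concatenated with the rows of the 24 previous signals -> 7,675 columns.
import numpy as np

N_FEATURES = 307
DEPTH = 24  # how many previous signals to append

rng = np.random.default_rng(2)
signals = rng.standard_normal((200, N_FEATURES))  # one 307-feature row per signal

def stack_lags(rows, depth):
    """For each signal i >= depth, concatenate rows i, i-1, ..., i-depth."""
    out = []
    for i in range(depth, len(rows)):
        out.append(np.concatenate([rows[i - k] for k in range(depth + 1)]))
    return np.array(out)

X = stack_lags(signals, DEPTH)
print(X.shape)  # (200 - 24, 307 * 25) = (176, 7675)
```

The first `depth` signals are dropped because they do not yet have a full history behind them.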
The network can't do anything by itself. What you give it as input is what you get as a result. Practice shows that not all lags are good, and in your set there is not a single column that would somehow help the NS build an adequate model..... Good luck!!!!!
And when done correctly, the model works like this.....
Yes, wrong) Apparently it's all glitches in the parsing, or in the data it is reading from.
Normalization is unlikely to be convenient. What is really needed is archival news data in the terminal, the ability to download it, and a service to work with it. I don't think there are no archives) But judging by the developers' position, until users say what they want, it won't get started, and if it does get started, it will be in a paid version first).
Well, yes, a free and good calendar is hardly possible at all)
Thank you, it's interesting — it would not have occurred to me to feed in the previous signals. This begs the question: maybe the network is an unnecessary link?
That is exactly what makes the network interesting to me — not having to hand-select the features. Otherwise it would be easier to make a classic system.
I will have to tinker more with experiments: I will take a very large number of examples from the MA system, so that in training there is not a single repeat and it can fit into one epoch.