Machine learning in trading: theory, models, practice and algo-trading - page 1353
The market time series (BP) was fed to the low-pass filter (LPF) input; the same series went to the NS input. Yes, exactly so: the network learned the LPF's behavior and predicted its output over the given horizon.
No, I haven't tried it with boosting.
If you don't mind posting a sample, I'll give it a try.
I don't understand, what sample? The BP itself? That can be done. But any sample drawn from the BP is random.
Can't you write it to a file? You're probably working in Python, where you can dump the prepared, split training samples to a file, right?
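For what it's worth, dumping split samples to a file in Python can be done with the standard library alone. This is a minimal sketch under assumptions: the file names (`train.csv`, `test.csv`), the 80/20 split, and the row layout are all illustrative, not from the thread.

```python
import csv
import random

def dump_splits(series, train_frac=0.8, seed=0):
    """Shuffle indices of a series, split them train/test, and write each
    part to its own CSV file as (index, value) rows. All names here are
    hypothetical; adapt to the actual sample layout."""
    rng = random.Random(seed)
    idx = list(range(len(series)))
    rng.shuffle(idx)
    cut = int(len(idx) * train_frac)
    for name, part in (("train.csv", idx[:cut]), ("test.csv", idx[cut:])):
        with open(name, "w", newline="") as f:
            writer = csv.writer(f)
            for i in sorted(part):
                writer.writerow([i, series[i]])
    return cut

cut = dump_splits([1.0, 2.0, 3.0, 4.0, 5.0])  # writes train.csv and test.csv
```

Keeping the original index in the first column makes it possible to tie each row back to its place in the history, which matters later when the rows must be matched against targets.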
Let's go over it again.
1. The entire BP is 55k OHLCV bars. 5-6 thousand rows of length 20 (Close only) are extracted from it at random positions. These go to the NS inputs for training.
2. Each sample from step 1 is passed through the filter. We now get a sequence of length 20+Tp, where Tp is the prediction time. The last LPF output value is the target.
3. Feed the NS with 1 and 2 and train.
Or maybe I'm misunderstanding something.
PS I'm trying to save the data. Will .mat or .spydata file formats work? I haven't come across a CSV export yet; I'll have to look for one.
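The three numbered steps above can be sketched roughly as follows. This is only a sketch under assumptions: the thread never specifies the LPF, so a simple EMA stands in for it, and the function names, the period, and the scaling-free windows are all illustrative.

```python
import random

def ema(x, period):
    """Exponential moving average, used here as a stand-in low-pass filter."""
    a = 2.0 / (period + 1)
    out = [x[0]]
    for v in x[1:]:
        out.append(a * v + (1 - a) * out[-1])
    return out

def make_dataset(close, n_rows, win=20, tp=5, period=16, seed=0):
    """Step 1: draw n_rows random windows of length win from the Close series.
    Step 2: filter each window extended by tp future bars; the last filter
    output is the target. Returns (inputs, targets)."""
    rng = random.Random(seed)
    X, y = [], []
    for _ in range(n_rows):
        i = rng.randrange(win - 1, len(close) - tp)
        window = close[i - win + 1 : i + 1]          # 20 Close values -> NS input
        extended = close[i - win + 1 : i + 1 + tp]   # length win + tp
        y.append(ema(extended, period)[-1])          # target: last LPF output
        X.append(window)
    return X, y

close = [100.0 + 0.01 * k for k in range(1000)]  # toy stand-in for the real BP
X, y = make_dataset(close, n_rows=100)
```

Because the target is computed from the window extended by `tp` future bars, each training row implicitly "sees" the future through the filter output, which is exactly what makes it a forecast target rather than a smoothing target.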
Okay, don't bother.
I don't know what to read those formats with.
But I don't quite understand what the predictors are...
There are no predictors there; a scaled series of Close values is fed directly to the NS input: [Close[i-0], Close[i-1], Close[i-2], ..., Close[i-19]].
The target is a single LPF output value at [i+Tp], where Tp is the prediction time in minutes. There are 5-6 thousand such rows in total.
In general, it would be interesting to see the results with a forest. If you're going to try it, I'll put together a CSV shortly.
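The input layout described above (no engineered predictors, just lagged Close values) can be sketched like this. The min-max scaling per window is an assumption; the thread only says the series is "scaled" without specifying how.

```python
def make_row(close, i, win=20):
    """Build one NS input row [Close[i-0], Close[i-1], ..., Close[i-19]]
    and min-max scale it to [0, 1] within the window (assumed scaling)."""
    row = [close[i - k] for k in range(win)]
    lo, hi = min(row), max(row)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in row]

close = [float(k) for k in range(100)]  # toy monotone series
row = make_row(close, 50)
```

Scaling each window independently discards the absolute price level, so the model only ever sees the shape of the last 20 bars; whether that is the scaling the author actually used is not stated in the thread.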
Well, one more forecast plot to finish: the LPF forecast (approximately equal to EMA(8)) 5 minutes ahead. I show it because this forecast is quite workable.
Well, that citizen said a lot, and vaguely... The gist of his message was that nothing will help me: neither Erlang nor Bachelier, nothing at all except series like the ones he gave.
Nothing works with my models; that's why I came here, maybe a neural network will see something.
Search Wikipedia for more names, there are plenty of them. The more surnames you know, the smarter you'll seem. He forgot to mention Kolmogorov again.
If you make a sample, I'll give it a spin. True, it's no longer classification, but it's interesting all the same.
OK. But it's not urgent.
Here are the archives, see the attachment.
Learn.csv - the inputs. The first number in each line is a history index and should be removed.
Cell.csv - the target.
After training on this data you should get roughly the chart below.
The filter is approximately equal to EMA(16) and the forecast is 5 minutes ahead.
I'll run the test later if necessary.
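Reading the attached files as described (dropping the leading history-index column from the inputs) could look like this. The comma delimiter, the absence of a header row, and one target value per line are assumptions about the file layout.

```python
import csv

def load_inputs(path):
    """Read Learn.csv-style rows, dropping the first (history index) column."""
    with open(path, newline="") as f:
        return [[float(v) for v in row[1:]] for row in csv.reader(f)]

def load_targets(path):
    """Read one target value per line (Cell.csv-style)."""
    with open(path, newline="") as f:
        return [float(row[0]) for row in csv.reader(f)]
```

A sanity check worth doing after loading: the number of input rows must equal the number of targets, or the row-to-target pairing described earlier in the thread is broken.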