Machine learning in trading: theory, models, practice and algo-trading - page 1908

 
Rorschach:

My results are described in detail here. For validation I multiply the original rows by -1.
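A rough sketch of that sign-flip validation idea (the data, weights, and score here are invented for illustration; the thread doesn't show the actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))        # original feature rows
w = np.array([0.5, -0.2, 0.1, 0.3])  # toy linear model weights

# Validation copy: every row multiplied by -1.
X_val = -1.0 * X

# For a purely linear score, flipping the inputs flips the score,
# so predictions on the mirrored set should mirror the originals.
score = X @ w
score_val = X_val @ w
```

A model whose predictions do not mirror under this flip has learned something beyond a sign-symmetric relationship, which is what the check probes.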

Guys, let's do this: since such a topic has started, let's have some respect for each other. You sent me one file with the inputs and another with the target; excuse me, but how should they be glued together, straight or reversed? And I take it the first line labels the input data, the last column is the target, and the first column may or may not be the row number. I spent more time just getting the file into the right form.
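For what it's worth, gluing two such files into the layout described (header row, target as the last column, no row-number column) is a one-liner in pandas; the column names and values below are made up:

```python
import pandas as pd

# Toy stand-ins for the two files: one with inputs, one with the target.
inputs = pd.DataFrame({"in1": [0.1, 0.2, 0.3], "in2": [1.0, 2.0, 3.0]})
target = pd.DataFrame({"target": [0, 1, 0]})

assert len(inputs) == len(target)      # rows of both files must line up

# Glue the target on as the LAST column; the first row carries the names.
data = pd.concat([inputs, target], axis=1)
data.to_csv("data.csv", index=False)   # index=False drops the row numbers
```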

Honestly, this is the first time I've seen such a result. VTRIT finds 0 useful inputs. I think it has to do with the macro itself, because your target is not shuffled: the classes go one after another. But the fact remains that there is no data useful for the target.

I ran it in the optimizer from the bottom up. It managed to go from 4 inputs to 8, but the quality of the training leaves much to be desired.

This is the maximum it was able to reach, and getting a usable model at this quality is not possible. I only start saving a model above 85% generalizing ability; you can see yours is at 63.51, and I don't think it will go higher. Try assembling the file as I described above, but be sure to shuffle the lines so that the target alternates; maybe then VTRIT can process it.
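The shuffling step itself is trivial; a sketch in pandas, assuming the target currently comes in contiguous class blocks (toy data):

```python
import pandas as pd

df = pd.DataFrame({"x": range(8),
                   "target": [0, 0, 0, 0, 1, 1, 1, 1]})  # classes in blocks

# Reorder the rows randomly so the target no longer comes in long runs;
# reset_index(drop=True) discards the old row numbers.
shuffled = df.sample(frac=1, random_state=42).reset_index(drop=True)
```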

 
Mihail Marchukajtes:

Guys, let's do this: since such a topic has started, let's have some respect for each other. You sent me one file with the inputs and another with the target; excuse me, but how should they be glued together, straight or reversed? And I take it the first line labels the input data, the last column is the target, and the first column may or may not be the row number. I spent more time just getting the file into the right form.

Honestly, this is the first time I've seen such a result. VTRIT finds 0 useful inputs. I think it has to do with the macro itself, because your target is not shuffled: the classes go one after another. But the fact remains that there is no data useful for the target.

I ran it in the optimizer from the bottom up. It managed to go from 4 inputs to 8, but the quality of the training leaves much to be desired.

This is the maximum it was able to reach, and getting a usable model at this quality is not possible. I only start saving a model above 85% generalizing ability; you can see yours is at 63.51, and I don't think it will go higher. Try assembling the file as I described above, but be sure to shuffle the lines so that the target alternates; maybe then VTRIT can process it.

OK, good to know.

In general, all inputs should be useful; in my case, when I reduce the number of inputs to 80, the result gets much worse. This is a real system; it probably uses moving averages and zigzags, and I don't know their periods either. The network recovers all of that by itself. If you have Google Drive, I can upload my version for you to play with.

Are you still interested? I'll have to tinker with it to shuffle the examples.

 
Rorschach:

OK, good to know.

In general, all inputs should be useful; in my case, when I reduce the number of inputs to 80, the result gets much worse. This is a real system; it probably uses moving averages and zigzags, and I don't know their periods either. The network recovers all of that by itself. If you have Google Drive, I can upload my version for you to play with.

Are you still interested? I'll have to tinker with it to shuffle the examples.

Well, first of all, there are a lot of lags that can be seen with the naked eye. Prepare the file and I'll run it...
 
Aleksey Nikolayev:

It seems to be the same forexprostools data that you have, taken from the investing.com widget.

Yes, it's wrong.) Apparently it's all glitches in the parsing, or in the data the parser reads from.

Normalization is unlikely to be convenient. What we really need is archived news data in the terminal, the ability to download it, and a service to work with it. I don't think there are no archives, but judging by the position of their creators, nothing will happen until users say what they want, and if it does happen, it will appear in a paid version first.)

 
Mihail Marchukajtes:
Well, first of all, there are a lot of lags that can be seen with the naked eye. Prepare the file and I'll run it...

Nets have no memory, so you need to feed them lags.
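Feeding lags means turning the price series into explicit past-value columns, one per lag. A minimal sketch (toy prices, arbitrary lag count):

```python
import pandas as pd

close = pd.Series([1.0, 1.1, 1.2, 1.15, 1.3], name="close")

# Spread the past across columns: lag_k holds the close k bars back.
lags = pd.concat({f"lag_{k}": close.shift(k) for k in range(1, 4)}, axis=1)

# Rows without a full history get NaN lags and are dropped.
frame = pd.concat([close, lags], axis=1).dropna()
```

Each remaining row is now a self-contained snapshot, which is exactly what a memoryless feed-forward net needs.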

Files:
data.csv  262 kb
 
Rorschach:

Nets have no memory, so you need to feed them lags.

According to VTRIT, none of the inputs are relevant to the target, alas. The question is closed.
 
Mihail Marchukajtes:
According to VTRIT, none of the inputs are relevant to the target, alas. The question is closed.

And it can use all inputs; fewer than 100 inputs is not possible.

At 50 inputs the best error is 0.5.


 
Rorschach:

And it can use all inputs; fewer than 100 inputs is not possible.

At 50 inputs the best error is 0.5.


As far as I understood, during its operation it looks at each bar separately, forming levels relative to the target, and each bar is assigned a coefficient. In effect, it compares the bars against each other with respect to the target. In your case none of the bars could overcome the threshold above unity; all the bars showed false. I'm currently writing an article on preprocessing, and there's a lot of interesting stuff in it. One of the rules says that a column must correlate with the target and NOT correlate with the other columns, and in your case the columns seem to have a very high correlation among themselves, which makes them simply useless. Hold on, let me check something else...
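That rule (correlate with the target, not with each other) can be sketched as a simple greedy filter; the data, thresholds, and column names below are invented for illustration:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 500
target = rng.normal(size=n)
df = pd.DataFrame({
    "a": target + rng.normal(scale=0.5, size=n),   # tracks the target
    "b": rng.normal(size=n),                       # pure noise
})
df["c"] = df["a"] + rng.normal(scale=0.05, size=n)  # near-copy of "a"

with_target = df.corrwith(pd.Series(target)).abs()  # relevance
between = df.corr().abs()                           # redundancy

# Keep columns that track the target but are not near-copies of an
# already kept column (0.3 / 0.9 thresholds are arbitrary here).
kept = []
for col in with_target.sort_values(ascending=False).index:
    if with_target[col] > 0.3 and all(between.loc[col, k] < 0.9 for k in kept):
        kept.append(col)
```

On this toy data the filter keeps exactly one of the two near-duplicates and drops the noise column, which is the behavior the rule asks for.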
 
Mihail Marchukajtes:
As far as I understood, during its operation it looks at each bar separately, forming levels relative to the target, and each bar is assigned a coefficient. In effect, it compares the bars against each other with respect to the target. In your case none of the bars could overcome the threshold above unity; all the bars showed false. I'm currently writing an article on preprocessing, and there's a lot of interesting stuff in it. One of the rules says that a column must correlate with the target and NOT correlate with the other columns, and in your case the columns seem to have a very high correlation among themselves, which makes them simply useless. Let me check something else...

This is not exactly a standard situation. Usually you take some kind of oscillator and feed it into a network. These indicators use a number of past bars in their calculation, depending on their periods. In my case, the network first computes these indicators and then uses them to give the answer, so it needs to know the past price values.

 
Rorschach:

This is not exactly a standard situation. Usually you take some kind of oscillator and feed it into a network. These indicators use a number of past bars in their calculation, depending on their periods. In my case, the network first computes these indicators and then uses them to give the answer, so it needs to know the past price values.

I have a messy file of 7700 bars where I take 24 lags, so don't bother with that; better look here. Here's your file

And here is mine

What's the difference? I won't keep you in suspense. In principal component analysis, each column is its own coordinate axis, and what matters is whether the points from the different columns can be clustered when plotted in one common coordinate system. The interpretation is simple: the more the vectors spread toward the vertical and horizontal, the better. What you have is just a uniform blob.
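The blob-versus-spread contrast can be made concrete with explained variance: highly correlated columns collapse onto a single principal axis, independent columns do not. A synthetic sketch (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
base = rng.normal(size=n)

# "Blob": four columns that are all the same signal plus small noise.
blob = np.column_stack([base + rng.normal(scale=0.1, size=n)
                        for _ in range(4)])
# "Spread": four independent columns, each carrying its own information.
spread = rng.normal(size=(n, 4))

def first_pc_share(X):
    """Fraction of total variance captured by the first principal component."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)  # singular values
    var = s ** 2
    return var[0] / var.sum()
```

For the blob nearly all variance sits on one axis; for the spread it is split roughly evenly across the four, which is the "vertical and horizontal vectors" picture.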
