Machine learning in trading: theory, models, practice and algo-trading - page 1612

 
Evgeny Dyuka:
I dealt with strategies like that for about half a year; the best backtest result was x5 over a year, but roughly once a year it loses everything, and I never managed to solve that problem.

Yeah, well... I feel sorry for the people.

So what's happening with broadcasting your system's signals?

 
mytarmailS:

Yeah, well... I feel sorry for the people.

So what's happening with broadcasting your system's signals?

If you mean signals in MetaTrader, it's simply unrealistic. The spread on bitcoin there is insane, plus the quotes are bogus. On normal crypto exchanges, if you enter with limit orders, the commission is negative, i.e. they actually pay you a rebate.
 

I ran the system in the tester over the past week and was once again convinced that "helping" it by adding filters and the like only produces a negative result: all the effort spent on this "help" during training ends up negative even on the known data it was trained on, so what can you expect when the data comes in real time (raw)?

 
Farkhat Guzairov:

I ran the system in the tester over the past week and was once again convinced that "helping" it by adding filters and the like only produces a negative result: all the effort spent on this "help" during training ends up negative even on the known data it was trained on, so what can you expect when the data comes in real time (raw)?

You can't keep propping it up with crutches; it has to learn by itself. Once you start helping it, you won't be able to stop))
 
mytarmailS:

We have two vectors of variables, the current candle and the previous one ("-1")

a = "open", "high", "low", "close", "center"

b = "open-1", "high-1", "low-1", "close-1", "center-1"

the variable "center" is the middle of the candlestick (high+low)/2, without this variable it is impossible to describe a pattern like "eskimo" etc. I think the meaning of other variables is not necessary to explain, they are obvious.

So, we create all sorts of logical combinations (you can create non-logical ones too).

Just two candles, a measly two candles.....
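For illustration only, a minimal sketch (Python, hypothetical names and made-up prices, not mytarmailS's actual code) of generating such pairwise logical comparisons from the ten base variables:

# Build boolean predictors from two candles' base variables.
# Names and data are made up purely for illustration.
import itertools

def candle_features(o, h, l, c):
    # The five base variables for one candle; "center" = (high + low) / 2
    return {"open": o, "high": h, "low": l, "close": c, "center": (h + l) / 2}

def logical_combinations(curr, prev):
    # Merge current-candle variables with previous-candle ones (suffix "-1"),
    # then compare every unordered pair, e.g. "close > high-1".
    allvars = {**curr, **{f"{k}-1": v for k, v in prev.items()}}
    return {f"{x} > {y}": allvars[x] > allvars[y]
            for x, y in itertools.combinations(allvars, 2)}

curr = candle_features(1.10, 1.15, 1.08, 1.12)   # current candle (made-up OHLC)
prev = candle_features(1.09, 1.11, 1.07, 1.10)   # previous candle
feats = logical_combinations(curr, prev)
print(len(feats))   # C(10, 2) = 45 boolean predictors from just two candles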

Geez, I don't even know what to say. Have neural network libraries become so accessible that people forget about common sense, or what?

It is obviously nonsense to multiply entities, when every introductory manual on any kind of analysis says you need to get rid of redundant entities to achieve a result.

If you start creating variables like "higher by two", "lower by two", "higher by a third", and so on, then two candles can easily turn into a thousand predictors)))

If you think the ratios between predictors are so important for your model that they must be fed to the input, then build something like a convolutional layer, but multiplying predictors like this, yada yada...

 
Aleksey Mavrin:

Geez, I don't even know what to say. Have neural network libraries become so accessible that people forget about common sense, or what?

It is obviously nonsense to multiply entities, when every introductory manual on any kind of analysis says you need to get rid of redundant entities to achieve a result.

If you start creating variables like "higher by two", "lower by two", "higher by a third", and so on, then two candles can easily turn into a thousand predictors)))

If you think the ratios between predictors are so important for your model that they must be fed to the input, then build something like a convolutional layer, but multiplying predictors like this, yada yada...

Oops.... kids....

To get rid of superfluous entities, you first need to understand what is superfluous! And to understand that, you need to go through the options! Or do you know another way? Can you pick the 5 significant ones out of 1000 candidates by eye?

 
mytarmailS:

Oops.... kids....

To get rid of superfluous entities, you first need to understand what is superfluous! And to understand that, you need to go through the options! Or do you know another way? Can you pick the 5 significant ones out of 1000 candidates by eye?

You have done a lot of work, studied a lot of material, spent a lot of time. Why should I try to convince you of anything?

 
Aleksey Mavrin:

You have done a lot of work, studied a lot of material, spent a lot of time. Why should I try to convince you of anything?

If you have arguments, I would be happy to hear them.

If the arguments are objective, I will gladly change my mind and become wiser.

If you realized you said too much and decided to quietly back out, it didn't work))

 
mytarmailS:

If you have arguments, I would be happy to hear them.

If the arguments are objective, I will gladly change my mind and become wiser.

If you realized you said too much and decided to quietly back out, it didn't work))

What arguments? If you do it this way, then either you don't understand something, or I don't.

The essence of my surprise is this: a trainable model, and that is what we are discussing here, should be trained on raw data.

If the input data are correlated, they should be reduced to uncorrelated ones. You are doing the opposite: multiplying the original data into features that are strongly correlated with each other.
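As a side note, a minimal sketch (Python/numpy, not from this thread and not anyone's actual code here) of one common way to reduce correlated inputs to uncorrelated ones, PCA whitening:

# Decorrelate the columns of X by rotating onto the principal axes and
# rescaling each axis to unit variance (PCA whitening). Illustration only.
import numpy as np

def pca_whiten(X, eps=1e-8):
    Xc = X - X.mean(axis=0)                       # center each column
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    return Xc @ eigvecs / np.sqrt(eigvals + eps)  # project and rescale

rng = np.random.default_rng(0)
x = rng.normal(size=500)
X = np.column_stack([x, x + 0.1 * rng.normal(size=500)])  # two strongly correlated columns
Z = pca_whiten(X)
print(np.corrcoef(Z, rowvar=False).round(3))      # off-diagonal entries ~ 0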

Here is an example: we teach a model to classify a color shade from 3 numbers, RGB. Three numbers, that is pure raw data!!! With your approach you would have to make predictors like:

1 - R, 2 - G, 3 - B, 4 - more red, 5 - more green, 6 - more red than green and blue together, ..., 100500 - not as red as it would be if green were as red as blue. ))

Shouldn't the model learn that on its own? It has the raw data, and that is what it is for!
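To make the RGB point concrete, a minimal sketch (Python, assuming scikit-learn is available; the data and labels are synthetic): the model receives only the three raw channel values and learns the "which channel dominates" rule by itself, with no hand-made comparison predictors:

# Classify which channel dominates using only the raw R, G, B values.
# Synthetic data for illustration; no derived "more red than green" features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.integers(0, 256, size=(5000, 3)) / 255.0   # raw R, G, B scaled to [0, 1]
y = X.argmax(axis=1)                                # 0: red, 1: green, 2: blue dominates

clf = LogisticRegression(max_iter=1000).fit(X[:4000], y[:4000])
print(clf.score(X[4000:], y[4000:]))                # should be close to 1.0 on raw data alone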

 
Aleksey Mavrin:

What arguments? If you do it this way, then either you don't understand something, or I don't.

The essence of my surprise is this: a trainable model, and that is what we are discussing here, should be trained on raw data.

If the input data are correlated, they should be reduced to uncorrelated ones. You are doing the opposite: multiplying the original data into features that are strongly correlated with each other.

Here is an example: we teach a model to classify a color shade from 3 numbers, RGB. Three numbers, that is pure raw data!!! With your approach you would have to make predictors like:

1 - R, 2 - G, 3 - B, 4 - more red, 5 - more green, 6 - more red than green and blue together, ..., 100500 - not as red as it would be if green were as red as blue. ))

Shouldn't the model learn that on its own? It has the raw data, and that is what it is for!

I absolutely agree: the correct selection of input data determines whether the model will learn or not; the rest is a matter of technique. If there is no understanding at this stage, there is no point in moving on.