Machine learning in trading: theory, models, practice and algo-trading - page 1928

 
Aleksey Vyazmikin:

That's how the features will be generated - we need to prepare a constructor in the form of basic rules.

For example, describe once how the price behaves in the channel and then just change the channels, and so on.

I guess that's part of the rules, if so, then yes, it can be implemented.

 
Aleksey Vyazmikin:

During clustering, many rows were distributed across different areas and a map was formed, which, as I assume, can be accessed through:

And then each row is weighted to assign it to one cluster center or another. I just don't understand how the weighting of a single row is done...


This map is called either prototypes or cluster centers. New data is compared to each center for closeness and gets the label of the closest center.

There is built-in help for every function: just type "?" and the function name in the console, like "?kmeans".

There are always examples at the bottom

How to predict: https://stackoverflow.com/questions/53352409/creation-prediction-function-for-kmean-in-r
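
To make the "label of the closest center" idea concrete, here is a minimal base-R sketch in the spirit of that StackOverflow answer (the data and the predict_kmeans helper are made up for illustration):

# Fit kmeans on training rows, then assign each new row to the nearest center.
# All names (train_x, new_x, predict_kmeans) are illustrative, not from the thread.
set.seed(42)
train_x <- matrix(rnorm(200 * 4), ncol = 4)
new_x   <- matrix(rnorm(10 * 4),  ncol = 4)

km <- kmeans(train_x, centers = 5)

# for every new row: squared Euclidean distance to each center,
# return the index (label) of the closest one
predict_kmeans <- function(centers, newdata) {
  apply(newdata, 1, function(row) which.min(colSums((t(centers) - row)^2)))
}

predict_kmeans(km$centers, new_x)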


Decided to look at meaningful market reversals, with significant reversals as the target. I thought it would be chaos, but no.

What is the rule for classifying a reversal as a significant reversal?

zigzag knee.
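
For reference, a rough R sketch of what labelling by a zigzag knee could look like (the retracement threshold and the exact rule are assumptions for illustration, not necessarily the setup used here):

# Assumption: a "significant" reversal is a retracement of at least `thr`
# (a fraction of price) from the last extreme.
# Returns +1 for swing highs, -1 for swing lows, 0 otherwise.
zigzag_pivots <- function(price, thr = 0.005) {
  n <- length(price)
  pivot <- integer(n)
  last_ext   <- price[1]   # extreme of the current swing
  last_ext_i <- 1
  dir <- 0                 # +1 swing up, -1 swing down, 0 not yet defined
  for (i in 2:n) {
    if (dir >= 0 && price[i] > last_ext) {
      last_ext <- price[i]; last_ext_i <- i; dir <- 1
    } else if (dir <= 0 && price[i] < last_ext) {
      last_ext <- price[i]; last_ext_i <- i; dir <- -1
    }
    if (dir == 1 && price[i] <= last_ext * (1 - thr)) {
      pivot[last_ext_i] <- 1                      # swing high confirmed
      last_ext <- price[i]; last_ext_i <- i; dir <- -1
    } else if (dir == -1 && price[i] >= last_ext * (1 + thr)) {
      pivot[last_ext_i] <- -1                     # swing low confirmed
      last_ext <- price[i]; last_ext_i <- i; dir <- 1
    }
  }
  pivot
}

price  <- cumsum(rnorm(1000, sd = 0.001)) + 1.10  # synthetic positive price series
target <- zigzag_pivots(price, thr = 0.003)
table(target)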


Well, that's very interesting. Thanks for the tip.

Can you share the code for dummies? Maybe I will take up R too.

Vladimir posted the code. You may have to learn the basics, otherwise you will have too many questions and the answers will be of little help.


Thank you, I managed to export the clustering results.


 
Aleksey Vyazmikin:

So how is your method better than mine? Collecting leaves essentially gives new predictors derived from existing data. You just need to build trees using not only comparisons but also transformations and combined levels of the target; in general, this can be implemented on the basis of a regular tree, pulling the leaves out of it.

If your method can generate rules like the one I wrote for Maxim, then in no way.

 
mytarmailS:

If your method can generate rules like the one I wrote for Maxim, then in no way

My method will allow generating not at random but meaningfully - with more payoff, so to speak - yet based on a regular tree.

In general, I mean that we can add a number of transformation procedures to the tree algorithm during training, such as comparing one predictor with another, multiplication, division, addition, subtraction and other operations. The point is that during genetic construction of the tree the variant will be selected not at random but as one that gives some description of the sample, which will shorten the search for a solution. By dropping random predictors from the sample we will be able to build different trees with these transformations in mind.
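
A rough sketch of that direction (only an illustration, not the actual implementation discussed here): precompute pairwise transformations of the predictors as extra columns and let an ordinary tree split on them.

# Illustration only: pairwise difference/sum/product/ratio/comparison columns,
# then a regular rpart tree that can split on these transformed relations.
library(rpart)

make_pair_features <- function(df) {
  nms <- names(df)
  out <- df
  for (i in seq_along(nms)) for (j in seq_along(nms)) {
    if (i >= j) next
    a <- df[[i]]; b <- df[[j]]
    out[[paste(nms[i], nms[j], "diff", sep = "_")]] <- a - b
    out[[paste(nms[i], nms[j], "sum",  sep = "_")]] <- a + b
    out[[paste(nms[i], nms[j], "mul",  sep = "_")]] <- a * b
    out[[paste(nms[i], nms[j], "div",  sep = "_")]] <- ifelse(b != 0, a / b, NA)
    out[[paste(nms[i], nms[j], "gt",   sep = "_")]] <- as.integer(a > b)
  }
  out
}

set.seed(1)
X  <- data.frame(x1 = rnorm(500), x2 = rnorm(500), x3 = rnorm(500))
y  <- factor(ifelse(X$x1 / (abs(X$x2) + 1) > 0.5, "buy", "hold"))  # toy target
Xt <- make_pair_features(X)

fit <- rpart(y ~ ., data = cbind(Xt, y), method = "class")
print(fit)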

 
Rorschach:
I have never come across a study on the best way to normalize the inputs: increments, subtracting a MA, a sliding window?

What do you mean by "normalize"? Bringing the distribution of the variable as close to normal as possible?

 
Aleksey Vyazmikin:

My method will allow generating not at random but meaningfully - with more payoff, so to speak - yet based on a regular tree.

In general, I mean that we can add a number of transformation procedures to the tree algorithm during training, such as comparing one predictor with another, multiplication, division, addition, subtraction and other operations. The point is that during genetic construction of the tree the variant will be selected not at random but as one that gives some description of the sample, which will shorten the search for a solution. By dropping random predictors from the sample we will be able to build different trees with these transformations in mind.

Read the rule I gave as an example and try to build a rule generator of that kind into the tree.

 
Vladimir Perervenko:

What do you mean by "normalize"? Bringing the distribution of a variable as close to normal as possible?

Bringing the range of a variable to ±1.
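
For example, a minimal sketch of scaling to [-1, 1], assuming the min/max are taken from a training window and then reused on new data to avoid look-ahead (the helper name is made up):

# Scale to [-1, 1] with limits from the training window.
scale_pm1 <- function(x, lo = min(x, na.rm = TRUE), hi = max(x, na.rm = TRUE)) {
  2 * (x - lo) / (hi - lo) - 1
}

train <- rnorm(1000)
test  <- rnorm(200)

train_s <- scale_pm1(train)
test_s  <- scale_pm1(test, lo = min(train), hi = max(train))  # reuse training limits
range(train_s)  # -1 1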

 
Vladimir Perervenko:

Continuing our personal conversation.

Your version:

umap_transform(X = X[tr,], model = origin.sumap, n_threads = 4L, 
               verbose = TRUE) -> train.sumap
head(train.sumap)
[1,] 22.196741
[2,] 14.934501
[3,] 17.350166
[4,]  1.620347
[5,] 17.603270
[6,] 16.646723

Plain variant:

train.sumap <- umap_transform(X = X[tr,], model = origin.sumap, n_threads = 4L, 
               verbose = TRUE)
head(train.sumap)
[1,] 22.742882
[2,]  7.147971
[3,]  6.992639
[4,]  1.598861
[5,]  7.197366
[6,] 17.863510

As you can see, the values are quite different; you can check for yourself.


In my model

n_components = 1

because I have only one column, but it doesn't really matter.

===================UPD

Man, the results are different every time you run umap_transform; it shouldn't be that way.
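
If this is the uwot package, one thing worth trying (a suggestion, not a confirmed fix): the transform runs stochastic optimization, so fix the RNG seed before each call and keep the default single-threaded SGD.

# Assumption: origin.sumap was fitted with ret_model = TRUE, which
# umap_transform() requires; X and tr are the objects from the code above.
library(uwot)

set.seed(1234)                     # same seed before every transform call
train.sumap <- umap_transform(X = X[tr, ], model = origin.sumap,
                              n_threads = 4L, verbose = TRUE)
head(train.sumap)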

 
mytarmailS:

Read the rule I gave as an example and try to build a rule generator of that kind into the tree

What's the problem? Create the components at the start, from which the rules will then be assembled.

 
Aleksey Vyazmikin:

What's the problem? Create the components at the start, from which the rules will then be assembled.

I don't know what the hell it is, I can't get my head around it.
