Machine learning in trading: theory, models, practice and algo-trading - page 3162

 
Aleksey Nikolayev #:

Those seem to be non-linear throughout, while these two are both linear, like PCA, if I'm not mistaken.

Yeah, they're non-linear. But is that a bad thing?


I tried this contrastive PCA and got pretty much the same result as plain PCA.

This vignette shows a comparison of all the different types.

It shows how PCA looks "pale" next to the other methods, but if the data are not only normalised but also centred (which I did separately), then plain PCA gives the same result on the data presented there.

 
mytarmailS #:

Yeah, non-linear. But is that a bad thing?


I tried this contrastive PCA and got pretty much the same result as plain PCA.

This vignette shows a comparison of all the different types.

It shows how PCA looks "pale" next to the other methods, but if the data are not only normalised but also centred (which I did separately), then plain PCA gives the same result on the data presented there.

Probably it only looks "pale" on that test data, while on quotes the same result comes out?
 
Forester #:
Probably it only looks "pale" on that test data, while on quotes the same result comes out?

No, the same on my own data.


=============

library(scPCA)

# toy_df and background_df come with the scPCA vignette;
# column 31 of toy_df holds the class labels
label <- toy_df$label
data <- toy_df[, -31]

# ordinary PCA on centred data
pca <- prcomp(data, center = TRUE)
plot(pca$x, col = label, lwd = 2, main = "ordinary PCA")

# contrastive PCA against the background dataset
cpca <- scPCA(target = data, background = background_df, penalties = 0, n_centers = 4)
plot(cpca$x, col = label, lwd = 2, main = "contrastive PCA")

That's what I did.

# ordinary PCA with normalisation (centring + scaling)
npca <- prcomp(data, center = TRUE, scale. = TRUE)
plot(npca$x, col = label, lwd = 2, main = "ordinary PCA + normalisation")

library(umap)

# UMAP on the raw data
um <- umap(data)
plot(um$layout, col = label, lwd = 2, main = "umap without data normalisation")

# UMAP on normalised data
num <- umap(scale(data, center = TRUE, scale = TRUE))
plot(num$layout, col = label, lwd = 2, main = "umap + normalisation")



It's like this.... We draw our own conclusions.
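The effect of normalisation on PCA can also be reproduced outside R. A minimal Python sketch with scikit-learn, using synthetic data (not the scPCA `toy_df` - an illustrative assumption):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# two features on very different scales: without scaling, PCA is dominated by the large one
X = np.column_stack([rng.normal(0, 1, 500), rng.normal(0, 100, 500)])

pca_raw = PCA(n_components=2).fit(X)           # PCA centres internally, but does not scale
Xs = (X - X.mean(axis=0)) / X.std(axis=0)      # centre + scale ("normalisation")
pca_std = PCA(n_components=2).fit(Xs)

# explained-variance ratios show the effect of scaling
print(pca_raw.explained_variance_ratio_)  # first PC captures almost everything
print(pca_std.explained_variance_ratio_)  # variance spread roughly evenly
```

With the raw data the first component simply follows the large-scale feature; after normalisation the two components share the variance, which is the same effect the vignette's comparison plot shows.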

 

The goal is not to centre something in one particular dataset, but to find a good representation of the data in a flexible way, which is what that method allows. I suppose UMAP can do that too, but a linear method is almost always better than a non-linear one in terms of stability.

It's a nice way to do it, but there's a prettier way.

I'm ready to discuss cPCA if someone is practising with it.

 

I found another problem.
I had found a good variant with training once a week on 5000 rows of M5 (3.5 weeks). Then I decided to shift all the data by 300 rows - like training not on Saturdays but on Tuesdays. As a result, the model went from profitable to unprofitable on OOS.
These new 300 rows (about 8% of the total) brought up other features and other splits, which turned out better for the slightly changed data.
I repeated the 300-row shift with 50000 rows. That would seem to be only 0.8% new rows, but the changes on OOS are significant too, though not as strong as with 5000 rows.

In general, there is overfitting not only to the window size but also to the window start. Small offsets change the result considerably. There are no strong features; everything sits at the edge of 50/50 ± 1-2%.
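Window-start sensitivity of this kind can be measured directly. A minimal sketch, with synthetic data and a decision tree standing in for the real features and model (both are assumptions):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n, window, oos = 60_000, 5_000, 1_000
X = rng.normal(size=(n, 10))
# weak signal drowned in noise, so accuracy sits near 50/50
y = (X[:, 0] + rng.normal(scale=3.0, size=n) > 0).astype(int)

# retrain with the window start shifted by 300 rows each time and compare OOS accuracy
scores = []
for start in range(0, 1_500, 300):
    tr = slice(start, start + window)
    te = slice(start + window, start + window + oos)
    model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[tr], y[tr])
    scores.append(model.score(X[te], y[te]))

print([round(s, 3) for s in scores])
# if the spread of scores is large relative to the edge over 50%,
# the model is fitted to the window position, not to a stable pattern
print("spread:", round(max(scores) - min(scores), 3))
```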
 
I'd forget about interval retraining. RL has already shown its failure in this matter. The Wizard also wrote about it back when I was into this sort of thing. There are 50+ articles there - lots of approaches and no way to concentrate on any one of them :)
 

Gentlemen, experts and academicians of this thread, please express your opinion on the following:

What if we look at forex as a game. Like chess, or Go, or whatever.
We divide play into games of 500 steps each. Each step is an hourly closing price; 500 steps is roughly a month of trading.
As input we feed anything + the balance state.
We set two rules: if the balance drops by 30%, or if the game ends with a negative balance, the game starts over.
In total N games (say 120 - like 10 years).
The goal is to win every game with any positive result at all. In short, to close every month in the plus.

Agent's actions:
1) Buy 0.01
2) Buy 0.02
3) Buy 0.03
4) Buy 0.04
5) Buy 0.05
6) Buy 0.06
7) Buy 0.07
8) Buy 0.08
9) Buy 0.09
10) Buy 0.10

11) Sell 0.01
12) Sell 0.02
13) Sell 0.03
14) Sell 0.04
15) Sell 0.05
16) Sell 0.06
17) Sell 0.07
18) Sell 0.08
19) Sell 0.09
20) Sell 0.10

21) Close 0.01
22) Close 0.02
23) Close 0.03
24) Close 0.04
25) Close 0.05
26) Close 0.06
27) Close 0.07
28) Close 0.08
29) Close 0.09
30) Close 0.10

31) Close All

32) Don't open a position
33) Skip a move

Total 33 actions.

The reward is deferred: it equals the difference between the position's opening price and its closing price, for both partial and full closes.
Feeding the balance as input gives a part of the state that the agent knows. After all, by the rules, the state of the environment must change from the agent's actions. The agent cannot change the price chart, but he can influence his balance, which enters the state. It's the analogue of pieces on a board: the agent does not know how many millions of moves can be made with them, but he knows how many pieces he has left on the board.

Thus, we do not need to memorise every next candle (whether it gives a minus or a plus); instead, we learn to sacrifice small drawdowns (pieces on the board) in order to get a profit at the end.
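The game described above can be sketched as a gym-style environment. Everything here (class name, price source, the naive single-entry-price bookkeeping) is an illustrative assumption, not a working strategy:

```python
import numpy as np

# 33 discrete actions: 0-9 buy 0.01..0.10, 10-19 sell 0.01..0.10,
# 20-29 close 0.01..0.10, 30 close all, 31 don't open, 32 skip a move
N_ACTIONS = 33
LOTS = [round(0.01 * (i + 1), 2) for i in range(10)]

class ForexGame:
    """One 'game': up to 500 hourly closes; restart on a 30% balance drawdown."""

    def __init__(self, prices, start_balance=1000.0, max_steps=500):
        self.prices = prices
        self.start_balance = start_balance
        self.max_steps = max_steps

    def reset(self):
        self.t = 0
        self.balance = self.start_balance
        self.position = 0.0          # signed lots: + long, - short
        self.entry = self.prices[0]  # simplification: one entry price per net position
        return self._state()

    def _state(self):
        # the agent sees the price plus its own balance ("pieces left on the board")
        return np.array([self.prices[self.t], self.balance / self.start_balance])

    def step(self, action):
        price = self.prices[self.t]
        reward = 0.0
        if action < 10:                               # buy
            self.position += LOTS[action]; self.entry = price
        elif action < 20:                             # sell
            self.position -= LOTS[action - 10]; self.entry = price
        elif action < 30 and self.position != 0:      # partial close -> deferred reward
            lots = min(LOTS[action - 20], abs(self.position))
            reward = np.sign(self.position) * lots * (price - self.entry)
            self.position -= np.sign(self.position) * lots
        elif action == 30 and self.position != 0:     # close all
            reward = self.position * (price - self.entry)
            self.position = 0.0
        # actions 31/32: do nothing this step
        self.balance += reward
        self.t += 1
        lost = self.balance < 0.7 * self.start_balance  # 30% drawdown -> game over
        done = lost or self.t >= self.max_steps or self.t >= len(self.prices) - 1
        return self._state(), reward, done

# smoke run on a synthetic rising price series
prices = np.linspace(100.0, 110.0, 600)
env = ForexGame(prices)
state = env.reset()
env.step(9)                          # buy 0.10
state, reward, done = env.step(30)   # close all one hour later
print(reward, done)
```

A real version would need spread/commission and per-trade entry prices, but this shape is enough to plug into any standard RL loop.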

I read online about how to train a neural network with more than one output; they write about DQN. The idea is that tabular q-learning is plain memorisation of states, and on a new state the result is dismal, whereas DQN projects the memorised states onto new ones, and the optimal action is then chosen out of the huge set of actions.
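The contrast between "memorising states" and "projecting onto new ones" can be shown on a toy example: a tabular Q-table knows nothing about an unseen state, while a linear approximator (a deliberately simplified stand-in for the DQN's network; the whole task here is an assumption for illustration) extrapolates to it:

```python
import numpy as np

# toy task: the state is a number x, and the "true" action value for
# 2 actions is q(x, a) = (a + 1) * x; train on x in {0..9}, query unseen x = 9.5
train_states = np.arange(10, dtype=float)
def true_q(x, a): return (a + 1) * x

# tabular Q: a dict keyed by the exact state -> fails on anything unseen
table = {x: np.array([true_q(x, 0), true_q(x, 1)]) for x in train_states}
print(9.5 in table)  # False: the table has simply memorised its states

# linear approximation: q(x, a) = w[a] * x, fitted on the same data by SGD
w = np.zeros(2)
for _ in range(200):
    for x in train_states:
        for a in (0, 1):
            w[a] += 0.01 * (true_q(x, a) - w[a] * x) * x  # squared-error gradient step
print(w * 9.5)  # sensible values for the unseen state, ~ (a + 1) * 9.5
```

A DQN replaces the linear map with a neural network and adds replay buffers and target networks, but the generalisation mechanism is the same.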


After all, chess has an effectively unknown number of states, and under those conditions the neural network beats a human. So why not try a similar method in a game called "freaking forex, for fuck's sake".

 
Ivan Butko #:

Gentlemen,

So what's the question?
 
mytarmailS #:
So what's the question?

Your opinion on the idea of learning forex by teaching agents to play games.

Any leads - maybe someone has tried something like this, any experience?
 
Ivan Butko #:

Your opinion on the idea of learning forex by teaching agents to play games.

Any leads - maybe someone has tried something like this, any experience?
Well, this is a typical RL / deep RL / optimisation problem.
It's essentially all the same thing, only different )