Machine learning in trading: theory, models, practice and algo-trading - page 2632

 
elibrarius #:

Trees look for splits by sorting each column.
Apparently you should take as many columns as possible and fill NaN into the rows where they are not used. Provided the model can handle NaN.

Or fill with something else (-Inf, 0, +Inf...) so that all unused rows end up on the same side when sorting.
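A minimal sketch of the padding idea above (the chunk values and sizes are made up; it assumes a downstream model that tolerates NA, e.g. gradient boosting):

```r
# Hypothetical variable-length price chunks (toy data)
chunks <- list(c(1.10, 1.12), c(2.00, 2.05, 2.03, 2.07), c(0.95))

# Pad every chunk with NA up to the longest one; one row per observation
maxlen <- max(lengths(chunks))
X <- t(sapply(chunks, function(v) c(v, rep(NA_real_, maxlen - length(v)))))

dim(X)  # 3 rows, 4 columns; short rows padded with NA
```

Swapping `NA_real_` for a sentinel like `-Inf` gives the "all unused cells on one side of the sort" variant mentioned above.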

That part is more or less clear. I'd like some kind of more creative approach, though. There are plenty of newer tasks of this sort, like working with video scenes of varying length, etc.

 
mytarmailS #:
What do you mean? Describe the problem

For example, I want to feed the classifier price chunks that are not of a fixed length in bars (or in links of a zigzag), but start from some significant moment.

 
Aleksey Nikolayev #:

For example, I want to feed the classifier price chunks that are not of a fixed length in bars (or in links of a zigzag), but start from some significant moment.

Recurrent nets are suitable, e.g. the many-to-many type

 
Aleksey Nikolayev #:

For example, I want to feed the classifier price chunks that are not of a fixed length in bars (or in links of a zigzag), but start from some significant moment.

Association rules should work; I'll show an example.

set.seed(123)
li <- list()
for (i in 1:100) {
  # each "transaction": 5-10 random letters from a..j plus a buy/sell label
  li <- append(li,
               list(c(letters[sample(1:10, sample(5:10, 1))],
                      sample(c("buy", "sell"), 1))))
}

head(li)

The data is a list; each element is a vector of observations of arbitrary length.

head(li)
[[1]]
[1] "c"    "b"    "f"    "j"    "e"    "d"    "i"    "sell"

[[2]]
[1] "j"    "e"    "c"    "h"    "a"    "sell"

[[3]]
[1] "i"   "c"   "h"   "b"   "g"   "buy"

[[4]]
 [1] "c"   "d"   "f"   "a"   "j"   "e"   "i"   "h"   "b"   "g"   "buy"

[[5]]
[1] "i"   "g"   "c"   "d"   "e"   "buy"

[[6]]
 [1] "f"   "i"   "b"   "e"   "g"   "d"   "c"   "a"   "h"   "buy"

The code to search for patterns in the form of association rules:

library(arules)

# mine rules whose right-hand side is restricted to the class labels
model <- apriori(li, parameter = list(support = 0.2,
                                      confidence = 0.6,
                                      minlen = 4,
                                      maxlen = 5),
                 appearance = list(rhs = c("buy", "sell"), default = "lhs"))
inspect(model)

The resulting rules:

inspect(model)
      lhs          rhs   support confidence coverage lift     count
[1]   {e,f,j}   => {buy} 0.23    0.6764706  0.34     1.166329 23   
[2]   {e,i,j}   => {buy} 0.21    0.6176471  0.34     1.064909 21   
[3]   {b,e,j}   => {buy} 0.23    0.6216216  0.37     1.071761 23   
[4]   {a,e,j}   => {buy} 0.24    0.6857143  0.35     1.182266 24   
[5]   {e,h,j}   => {buy} 0.22    0.6111111  0.36     1.053640 22   
[6]   {c,e,j}   => {buy} 0.26    0.6666667  0.39     1.149425 26   
[7]   {e,g,j}   => {buy} 0.23    0.6571429  0.35     1.133005 23   
[8]   {e,f,i}   => {buy} 0.24    0.6153846  0.39     1.061008 24   
[9]   {b,e,f}   => {buy} 0.22    0.6666667  0.33     1.149425 22   
[10]  {a,e,f}   => {buy} 0.25    0.6756757  0.37     1.164958 25   
[11]  {c,e,f}   => {buy} 0.24    0.6486486  0.37     1.118360 24  
...

The algorithm looks for associations between elements regardless of their order...

There are order-aware algorithms, but they're greedy.


Or, if you want more, there's the recommenderlab package for recommender systems, but I didn't dig into it.

 
Maxim Dmitrievsky #:

Recurrent networks are suitable, the many-to-many type

Thanks, I'll have a look.

Of course, I would like to have some kind of review text on the subject, with a description and comparison of approaches (there's no harm in wishing). In theory, there should be such a text somewhere, but so far I haven't found it.

 
Aleksey Nikolayev #:

Thanks, I'll have a look.

Of course, I would like to have a review text on the topic, with a description and comparison of approaches (there is no harm in wishing). In theory, there should be such a text somewhere, but so far I haven't found it.

I've only seen variable-length input/output for such networks covered at a survey level, without much depth, mostly for text processing and translation.
 
mytarmailS #:

association rules should work, I'll give you an example

data as a list; each element is a vector of observations of arbitrary length.

The code to search for patterns in the form of association rules

the rules

The algorithm looks for associations between elements regardless of their order...

There are order-aware algorithms, but they're greedy.


Or if you want more, there's a recommender system called recommenderlab, but I haven't looked into it.

Thanks, I'll have a look.

Still, in our case order does matter. For example, you can always get a random walk (SB) by shuffling the increments randomly.

I also remembered that you wrote here some time ago about sequential pattern mining and the sequence alignment problem that arises there. That also seems to be one of the ways to approach the task. Although sequences belonging to one class does not necessarily mean they are similar.

 
Aleksey Nikolayev #:

Thanks, I'll have a look.

Still, in our case order does matter. For example, you can always get a random walk (SB) by shuffling the increments randomly.

I also remembered that you once wrote here about sequential pattern mining and the sequence alignment problem that arises there. That also seems to be one of the ways to approach the task. Although sequences belonging to one class does not necessarily mean they are similar.

Well, then the arulesSequences package
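A minimal order-aware sketch with arulesSequences, using the `zaki` example data that ships with the package (not this thread's data; shown only to illustrate the cSPADE call):

```r
library(arulesSequences)

# zaki: tiny example transaction set bundled with the package,
# already carrying the sequenceID/eventID info cSPADE needs
data(zaki)

# mine frequent sequential patterns (element order matters here)
seqs <- cspade(zaki, parameter = list(support = 0.4))
inspect(seqs)

# induce rules from the patterns with a confidence threshold
rules <- ruleInduction(seqs, confidence = 0.5)
inspect(rules)
```

To apply it to labeled price chunks, the chunks would first have to be converted to transactions with `sequenceID`/`eventID` columns in `transactionInfo`.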

 

Parsed one gold strategy from the marketplace ))

The equity curve in my tester.

Threw it into TSLab for a closer look.

It looks like it's a good match.


I looked at the trades.


It looks as if it were a manual trader with extremely long holding times and a vague trading algorithm...

A random forest certainly couldn't identify anything there, but it was interesting and informative )))

 
Maxim Dmitrievsky #:

Recurrent networks are suitable, many-to-many type

Might be useful... I have a many-to-many model without recurrence. And no convolutional layers. And I chose this model after analyzing the neural network mechanism. We're looking for a common denominator here, aren't we? Argue if you disagree.
