Machine learning in trading: theory, models, practice and algo-trading - page 38

 
Yury Reshetov:

Judging by the fact that Dr.Trader has already failed in trying to port an old version of libVMR to R, with not enough memory for the large kernel machine and not enough performance for the small one (the number of iterations was cut 100-fold), it is unlikely that anyone else willing to step on the same rake will be found.


So it is better not to even mention porting such tasks to R: this clunker cannot handle them.

Only a very superficial acquaintance with R allows one to call it a "clunker".

Of course, if you install R, you see an interpreter of character strings. Look a little deeper and you find the bytecode compiler, but that does not solve any of the interpreter's efficiency problems. Nothing to even discuss, it would seem: a clunker.

But if you look a bit deeper into R packages, you quickly find out that what you see in R code is often just a wrapper around other code. Dig into it and it turns out that for computationally intensive algorithms R routinely relies on third-party libraries chosen for maximum efficiency, usually written in C or Fortran.

Take matrix operations, for example. Since R has no notion of a scalar, everything starts with vectors, and matrix arithmetic is completely natural for R, using an appropriate library written NOT in R is a matter of principle. The Intel Math Kernel Library, for instance, is used.
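
For illustration, a minimal R sketch of such vectorized matrix arithmetic; the %*% operator is executed by whatever compiled BLAS the R build is linked against (MKL, OpenBLAS, or the reference BLAS), not by interpreted R code:

A <- matrix(rnorm(1e6), nrow = 1000)   # 1000 x 1000 random matrices
B <- matrix(rnorm(1e6), nrow = 1000)
system.time(C <- A %*% B)              # the product runs in the compiled BLAS (dgemm)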

On top of that, parallelizing calculations, not only across all the cores of your own computer but also across neighboring machines, is a routine operation in R.
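
A minimal sketch of that kind of parallelization with the base parallel package; the cluster here is local, but makeCluster() also accepts a vector of hostnames to spread the work over neighboring machines:

library(parallel)

cl <- makeCluster(detectCores() - 1)                     # workers on this machine
res <- parLapply(cl, 1:8, function(i) sum(rnorm(1e6)))   # run the tasks in parallel
stopCluster(cl)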

So what is a "clunker" and what is not is a big question.

PS.

You don't have to port anything to R, you just have to learn the basics. R has everything you need and a lot more than that.

 
Do they pay for posts here? :)
 
mytarmailS:

Question: how do I give the new columns names like "a_minus_b", "a_minus_c"?

a <- 1:5
b <- 6:10
c <- 11:15
d <- 16:20
dt <- data.frame(a,b,c,d)

# empty result frame with the same number of rows as dt
res.dt <- data.frame(matrix(nrow=nrow(dt), ncol=0))

# loop over every pair of columns (i < j), naming each new column on the fly
for(i in 1:(ncol(dt)-1)){
        for(j in (i+1):ncol(dt)){
                colname <- paste0(colnames(dt)[i], "_minus_", colnames(dt)[j])
                res.dt[, colname] <- dt[, i] - dt[, j]
        }
}
res.dt
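
For what it is worth, a more compact variant of the same idea, using combn() to enumerate the column pairs of the same dt as above:

pairs <- combn(colnames(dt), 2, simplify = FALSE)              # all column pairs
diffs <- lapply(pairs, function(p) dt[[p[1]]] - dt[[p[2]]])    # pairwise differences
names(diffs) <- sapply(pairs, paste, collapse = "_minus_")     # "a_minus_b", "a_minus_c", ...
res2 <- as.data.frame(diffs)
res2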

Forex itself will pay us for the posts :) If you read all 38 pages, try it in practice, and combine all the knowledge, then I think you can make a working EA.

 
SanSanych Fomenko:
Could you please PROPERLY refute the content of the article I linked to? At this point Dr.Trader has attempted to use this material, and to use it quite specifically. The result is negative. Maybe you can give an opinion on the subject as well?

I apologize for being off-topic.
SanSanych, what language are you thinking in?
Your post looks like the output of Google Translate. Respect the Russian language, please.

PS if you want to be understood...

 
Event:

I apologize for being off-topic.
SanSanych, what language are you thinking in?
Your post looks like the output of Google Translate. Respect the Russian language, please.

PS if you want to be understood...

I've been saying it all my life... You're the first...

If you don't understand something, I'm ready to explain.

 
SanSanych Fomenko:

I've been saying that all my life... You're the first...

If you don't understand something, I'm ready to explain.

You don't have to explain. Somebody has to be the first))

 
Dr.Trader:

Forex itself will pay us for our posts :) Everyone knows and can do something different, and if you read all 38 pages, try it in practice, and combine all the knowledge, I think you can make a working EA.

Thank you so much, man, thank you!

This neat idea with a double loop needs more study!)

 

I have written a description of the jPrediction binary classifier and posted the source code.

Table of contents:

  1. Basic Features
  2. Running jPrediction
  3. How to create a mathematical model of a binary classifier in jPrediction
  4. Saving the model to a file
  5. Reduction - removing uninformative features from the model
  6. Loading and using the model to classify objects
  7. Appendix
    1. Additional samples for binary classification
    2. CSV file format for jPrediction

Full text in the attached archive (PDF format)

jPrediction - a binary classifier for machine learning | Reshetov & Co - yury-reshetov.com
Files:
Reshetov_150.zip  2217 kb
 
Yury Reshetov:

I have written a description of the jPrediction binary classifier and posted the source code.


Hello Yuri, thank you for your work!

1) Can you explain in more detail what these mean:

  • Sensitivity - the sensitivity of the model, in percent
  • Specificity - the specificity of the model, in percent

2) If I have a weak computer, how long will it take to train a model, say, on a sample of 300 predictors and 100,000 observations?

(It would be nice to replace the "please wait" message with a training-progress indicator in percent, or something like that, so you don't have to wait 100 years for it to finish.)

3) And what about R?

 
mytarmailS:

Hello Yuri, thank you for your hard work!

1) Can you explain in more detail what these mean:

  • Sensitivity - the sensitivity of the model, in percent
  • Specificity - the specificity of the model, in percent

Sensitivity of generalization ability - correctly predicted positive outcomes on the test sample: 100% * TP / (TP + FP)

Specificity of generalization ability - correctly predicted negative outcomes on the test sample: 100% * TN / (TN + FN)

Where:

TP - number of true positive outcomes

TN - number of true negatives

FP - number of false positive outcomes

FN - number of false negatives
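
A tiny R sketch that computes these two figures exactly as defined above (the confusion-matrix counts are made-up illustrative numbers):

generalization <- function(TP, TN, FP, FN) {
  c(sensitivity = 100 * TP / (TP + FP),    # as defined in this post
    specificity = 100 * TN / (TN + FN))
}

generalization(TP = 40, TN = 35, FP = 10, FN = 15)   # illustrative counts only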

mytarmailS:

2) If my computer is weak, how long will it take to train the model on a sample of 300 predictors and 100,000 observations?

3) And what about R? Won't it handle it?

It will not train at all; it will just give an error message if the number of predictors in the sample exceeds 10.

mytarmailS:

3) And what about R?


If you're so eager, install the rJava package and call the Java code from R.

Calling Java code from R - darrenjw.wordpress.com (2011.01.01)
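
A minimal sketch of calling Java from an R session via rJava, assuming rJava and a JDK are installed (the linked post also shows how to run standalone Java code):

library(rJava)

.jinit()                                        # start the embedded JVM
s <- .jnew("java/lang/String", "jPrediction")   # create a Java String object
.jcall(s, "I", "length")                        # call its length() method, returns 11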