Machine learning in trading: theory, models, practice and algo-trading - page 531

 
elibrarius:

Question for R experts.

How to convert a matrix with absolute values into a matrix of softmax classes?

# One-hot encode each row: 1 at the position of the row maximum, 0 elsewhere
apply(matr, 1, function(rowvec) {
    result <- rep(0, length(rowvec))
    result[which.max(rowvec)] <- 1
    result
})

And transpose as needed depending on the dimensionality of the matrix
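To illustrate the transposition point (a minimal sketch with made-up numbers): apply() with MARGIN = 1 collects each row's result as a column, so the output comes back flipped, and t() restores the original orientation.

matr <- matrix(c(0.1, 0.7, 0.2,
                 0.5, 0.3, 0.2,
                 0.2, 0.2, 0.6), nrow = 3, byrow = TRUE)

onehot <- apply(matr, 1, function(rowvec) {
    result <- rep(0, length(rowvec))
    result[which.max(rowvec)] <- 1
    result
})

onehot       # 3x3, but observations are now in the columns
t(onehot)    # transposed back: each row is one observation again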

 
Maxim Dmitrievsky:

Ah, well, there are no libraries yet... I can't write them myself )


I think I'll write to them and ask them to lend me a neural network; if it works, I'll pay them back with a percentage :)

By the way, theirs is a bit closer to the original: at least they added neurotransmitters. But it's still not AI, since the synapses don't form and rewire by themselves, as far as I understood.

 
Maxim Dmitrievsky:

I think I'll write to them and ask them to lend me a neural network; if it works, I'll pay them back with a percentage :)

I think that if there is no breakthrough in solutions with simple MLPs, there won't be one with complex NNs either. Moreover, even if you replace the MLP with a more complex NN, it is far from certain that the result will improve, because a different tool requires different handling and a different formulation of the problem.

For now I will stick to a simple MLP and the ancient backpropagation (BP) learning algorithms, and I'll see whether a real need arises.

 
anonymous:

and transpose as needed depending on the dimensionality of the matrix

Wow, that's much shorter! Thank you!

I've changed it a bit: I create the new matrix by copying the old one in order to keep the column and row names, which makes it more universal in case they are needed later. And I transpose the matrix back right away. As I understand it, apply(x, 1, f) always flips the matrix, because it processes the data row by row and binds the results together as columns.

get_softmax <- function(m) {
    # For each row: copy it (to keep the dimnames), zero it out,
    # then set 1 at the position of the row maximum; t() undoes the flip
    t(apply(m, 1, function(row) {
        r <- row
        r[1:length(row)] <- 0
        r[which.max(row)] <- 1
        r
    }))
}
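A quick check with a made-up named matrix (a sketch; the names are purely illustrative) shows that the row and column names do survive:

m <- matrix(c(0.1, 0.7, 0.2,
              0.6, 0.3, 0.1),
            nrow = 2, byrow = TRUE,
            dimnames = list(c("obs1", "obs2"), c("buy", "sell", "hold")))

get_softmax(m)
#      buy sell hold
# obs1   0    1    0
# obs2   1    0    0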

 
Yuriy Asaulenko:

I think that if there is no breakthrough in solutions with simple MLPs, there won't be one with complex NNs either. Moreover, even if you replace the MLP with a more complex NN, it is far from certain that the result will improve, because a different tool requires different handling and a different formulation of the problem.

For now I will stick to a simple MLP and the ancient backpropagation (BP) learning algorithms, and I'll see whether a real need arises.


I just like it when everything computes quickly - you can try out lots of strategies.

If I had a faster alternative I would use it... But for now it's 90% data mining and 10% model selection.

 
Maxim Dmitrievsky:

I just like it when everything computes quickly - you can try out lots of strategies.

If I had a faster alternative I would use it... But for now it's 90% data mining and 10% model selection.

Regarding performance: the reaction time of a trained 6-layer MLP (~60 neurons) is 0.005 s, which is quite enough for almost everything.

As for the training time being long, that doesn't bother me at all, since just thinking through each next experiment takes much longer - a week, or even more.

And spending, say, a couple of days on retraining every few months is not a problem, I think; redoing the system's logic takes much longer. Besides, plain standard training doesn't cut it - you need a long dance with a tambourine (i.e. manual tweaking) between epochs. Standard training without the dancing gives excellent results, but only on the training sample )
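That between-epoch tweaking could look roughly like this (a minimal sketch; model, x_train, x_valid, train_epochs() and validation_error() are hypothetical placeholders for whatever framework is actually used):

lr <- 0.01
best_err <- Inf

for (round in 1:50) {
    # train_epochs() and validation_error() are hypothetical placeholders
    model <- train_epochs(model, x_train, y_train, epochs = 10, lr = lr)
    err <- validation_error(model, x_valid, y_valid)

    if (err < best_err) {
        best_err <- err
        best_model <- model      # keep the best snapshot so far
    } else {
        lr <- lr / 2             # the "tambourine": damp the learning rate
        if (lr < 1e-5) break     # stop when adjustments no longer help
    }
}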

 

Another problem with R.

On one computer everything is fine; on the other, the requirements for code correctness seem to be stricter.

For example,

darch.unitFunction = linearUnit

caused Rterm.exe to crash. I changed it to

darch.unitFunction = "linearUnit"

and this line started to pass, up to the next error.

I also had to change library(darch) to require(darch).

Now it's the training itself that fails.

R_NN <- darch(
    darch = NULL,
    x = MatrixLearnX,
    y = MatrixLearnY,
    paramsList = params
)

I tried many variants; Rterm.exe always crashes.

Does R have some kind of error-level control? Maybe on the second PC the settings ended up such that execution stops at every warning?

I installed R on both computers with default settings and installed all the packages.
How can I fix it?
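For what it's worth (a hedged note, base R only): the closest thing to an "error level" is options(warn), where 0 defers warnings, 1 prints them immediately, and 2 promotes them to errors. Comparing that setting on both machines, and wrapping the call in tryCatch(), can help narrow things down. Note that tryCatch() only catches R-level errors; a native crash of Rterm.exe itself happens below that level.

getOption("warn")   # compare on both PCs; 2 means every warning becomes an error
options(warn = 0)   # default behaviour: warnings are collected and shown afterwards

# Report an R-level error instead of silently dying (won't help with a native crash)
R_NN <- tryCatch(
    darch(darch = NULL, x = MatrixLearnX, y = MatrixLearnY, paramsList = params),
    error = function(e) { message("darch failed: ", conditionMessage(e)); NULL }
)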

 

If the darch library is not installed, then library(darch) will raise an error and stop execution, while require(darch) will only issue a warning and the code will keep running; but since the library is not installed, its functions still cannot be called.

The next step is to run
install.packages("darch", dependencies = TRUE) to install the library automatically.

 
Yuriy Asaulenko:

Regarding performance: the reaction time of a trained 6-layer MLP (~60 neurons) is 0.005 s, which is quite enough for almost everything.

As for the training time being long, that doesn't bother me at all, since just thinking through each next experiment takes much longer - a week, or even more.

And spending, say, a couple of days on retraining every few months is not a problem, I think; redoing the system's logic takes much longer. Besides, plain standard training doesn't cut it - you need a long dance with a tambourine (i.e. manual tweaking) between epochs. Standard training without the dancing gives excellent results, but only on the training sample )


I do it a bit differently: I actively use the optimizer to search for strategies, i.e. I run through all the features, combinations of TS blocks, and so on, because sometimes it turns out I overlooked something and it was missing from the system. And going through even a hundred options (and 100 options is very few) requires considerable training speed... Well, everyone has their own approach; I'm not saying mine is better. Of course, you could say that an NN is an optimizer in itself, but there is always a set of hyperparameters that can be chosen at the initial stage.
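Such a brute-force pass over feature and block combinations could be sketched like this (hypothetical: backtest_score() and the grid columns are placeholders for the real strategy pieces):

# Hypothetical search grid over features and TS-block variants
grid <- expand.grid(
    use_rsi    = c(TRUE, FALSE),
    use_ma     = c(TRUE, FALSE),
    stop_block = c("fixed", "trailing"),
    stringsAsFactors = FALSE
)

# backtest_score() is a placeholder for training + evaluating one configuration
scores <- vapply(seq_len(nrow(grid)),
                 function(i) backtest_score(grid[i, ]),
                 numeric(1))

grid[which.max(scores), ]   # the best combination found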

 
Maxim Dmitrievsky:

I actively use the optimizer to search for strategies, i.e. I run through all the features, combinations of TS blocks, and so on, because sometimes it turns out I overlooked something and it was missing from the system. And going through even a hundred options (and 100 options is very few) requires considerable training speed... Well, everyone has their own approach; I'm not saying mine is better. Of course, you could say that an NN is an optimizer in itself, but there is always a set of hyperparameters that can be chosen at the initial stage.

So soon you will need a mining farm just for mining strategies.
