Machine learning in trading: theory, models, practice and algo-trading - page 1238

 
SanSanych Fomenko:

Everything gives about the same result: rf, xgboost, SVM, GLM, nnet.

On some datasets one model is a little better, on others a little worse - the differences are all within a few percent.

My impression is that the model error is really the error of the predictor-target pair. There is a certain limit that no tricks can jump over, yet it is easy to make things worse, and easy to pass over a promising pair.

Well, then there is no need to bother... My current goal is to test the alglib library itself against a third-party implementation, to make sure that its random forest works correctly - or, on the contrary, that its implementation is rather strange.

Maybe I am worrying about the implementation for nothing, and results can really only be improved significantly by taking a normal model, or by other nuances that are not clear right now.

 
Maxim Dmitrievsky:

I don't know the number of columns beforehand... and structures with dynamic arrays inside can't be written to files, can they? ) This is kind of a mess...

I just need to save a 2-D array whose number of columns is not known in advance.

Dynamic arrays cannot be written using FileWriteArray().

So either write the array to the file element by element, or, alternatively, open the file, assemble each row of the array into a string with a "," delimiter, and write each string into the .csv file.

 
Igor Makanu:

Dynamic arrays cannot be written using FileWriteArray().

So either write the array to the file element by element, or, alternatively, open the file, assemble each row of the array into a string with a "," delimiter, and write each string into the .csv file.

Yes, that looks right, I'll try it.
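The row-joining approach Igor describes can be sketched language-neutrally. The thread's actual context is MQL5, where FileWriteArray() cannot handle dynamic arrays; the Python below is only an illustration of the "join each row with a delimiter, write one line per row" idea, and the function name is made up for the example:

```python
import io

def write_matrix_csv(stream, matrix):
    """Write a 2-D list whose column count is not known in advance:
    join each row with "," and emit one line per row."""
    for row in matrix:
        stream.write(",".join(str(x) for x in row) + "\n")

# Usage: the number of columns is discovered only from the data itself.
data = [[1.5, 2.0, 3.25], [4.0, 5.5, 6.75]]
buf = io.StringIO()
write_matrix_csv(buf, data)
print(buf.getvalue())
```

The same loop translates directly to MQL5 with FileOpen/FileWriteString in place of the stream.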

 
The univariate symmetric models ARCH, GARCH, GARCH-M and IGARCH, the univariate asymmetric models GJR and EGARCH, and the multivariate models VECH, diagonal VECH and BEKK are all volatility models.
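For a concrete sense of the simplest member of that family, GARCH(1,1) defines the conditional variance recursively as sigma2_t = omega + alpha * eps_{t-1}^2 + beta * sigma2_{t-1}. A minimal simulation sketch in plain Python; the parameter values are illustrative, not fitted to any data, and the asymmetric (GJR/EGARCH) and mean-in-variance (GARCH-M) extensions are deliberately omitted:

```python
import random

def simulate_garch11(n, omega=0.1, alpha=0.05, beta=0.9, seed=42):
    """Simulate n returns under GARCH(1,1):
    sigma2_t = omega + alpha * eps_{t-1}**2 + beta * sigma2_{t-1}.
    Requires alpha + beta < 1 for a finite unconditional variance."""
    rng = random.Random(seed)
    sigma2 = omega / (1.0 - alpha - beta)  # start at unconditional variance
    eps = 0.0
    returns, variances = [], []
    for _ in range(n):
        sigma2 = omega + alpha * eps**2 + beta * sigma2
        eps = rng.gauss(0.0, sigma2**0.5)   # zero-mean shock
        returns.append(eps)
        variances.append(sigma2)
    return returns, variances

rets, vars_ = simulate_garch11(1000)
```

The recursion makes volatility clustering explicit: a large shock eps inflates the next period's variance through the alpha term, and the beta term makes that inflation persist.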
 
Dmitry:

Two years ago I wrote here to Maxim that a neural network is a toy like a nuclear bomb. If ANY other model gives at least satisfactory results, using an NN is not recommended - it finds patterns that do not exist, and nothing can be done about that.

Trees are a good thing, but random forests are better.

Actually, I got interested in this stuff about two years ago ), then dropped it for a year, and now I'm back again.

Now I know a lot of new words.

 
Maxim Dmitrievsky:

Actually, I got interested in this stuff about two years ago ), then dropped it for a year, and now I'm back again.

But now I know a lot of new words.

That's right too.

 
Alexander_K2:

I beg to differ.

Among programmers/coders right now, the Indians are the strongest. No kidding - I have had the honor of working with them on various projects.

Therefore, my last hope for finding the NeuroGrail is Max and his Indian mentee. The two of them will crack the market, and the Indian, being by nature a deeply religious man full of compassion for those in need, will simply hand it out to everyone.

With a neural network?

Forex could not make use of neural networks at the time of its creation, because computers were too weak.

And as you know, you fight fire with fire.

I checked it on 46 years of history.

The majors still behave exactly as they did at their birth. That is, everything is simple to this day.

But I don't advise trading the crosses - they are a complete mess... And on top of that, five-digit quotes have appeared...

The grail is in everyone's own brain, with its many neurons.

Once again: by eye, by eye, and only then mathematical conclusions and formulas - and only your own, since there are no correct ones to be found anywhere...

Everything you can read is, by and large, an attempt to use the same function: a line, or a line plus a growing power of the x- or y-axis with a coefficient, plus averaging. All of this is utopia.

 

Those who have already... oops, sorry, learned to work with R and Python - can anyone teach me something on this dataset, while I deal with the boosting? :)

Split it into train/test yourself, in any proportion. It would be interesting to see the test errors of different models and compare them with mine.

The dataset is small, 600 rows.

 
Maxim Dmitrievsky:

Those who have already... oops, sorry, learned to work with R and Python - can anyone teach me something on this dataset, while I deal with the boosting? :)

Split it into train/test yourself, in any proportion. It would be interesting to see the test errors of different models and compare them with mine.

The dataset is small, 600 rows.

The dataset is very small; accuracy is about 55% (±3%).
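The "split into train/test in any proportion, then compare test errors" workflow can be sketched without any ML library. The helper names below are made up for the example, and the 600-row data is a synthetic stand-in for the attached dataset:

```python
import random

def train_test_split(rows, test_frac=0.3, seed=0):
    """Shuffle the dataset and split it into train/test in any proportion."""
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    n_test = int(len(rows) * test_frac)
    # Train = the rows not drawn for the test set.
    return [rows[i] for i in idx[n_test:]], [rows[i] for i in idx[:n_test]]

def accuracy(y_true, y_pred):
    """Fraction of matching labels (the ~55% figure above is this metric)."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Synthetic stand-in for the 600-row dataset mentioned in the thread.
rows = [(i, i % 2) for i in range(600)]
train, test = train_test_split(rows)
```

With only 600 rows, the choice of seed alone can move test accuracy by a few percent, which is consistent with the ±3% spread quoted above.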

 
Grail:

The dataset is very small; accuracy is about 55% (±3%).

Mine is smaller per class: the errors are 0.14/0.4.
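Figures like "0.14/0.4" read as per-class error rates, i.e. the fraction of each true class that gets misclassified. Assuming that interpretation, a minimal sketch (labels here are invented for illustration):

```python
def per_class_error(y_true, y_pred):
    """For each class, the fraction of its true examples misclassified."""
    errors = {}
    for c in sorted(set(y_true)):
        idx = [i for i, t in enumerate(y_true) if t == c]
        wrong = sum(1 for i in idx if y_pred[i] != c)
        errors[c] = wrong / len(idx)
    return errors

# Hypothetical predictions for a two-class problem.
y_true = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
y_pred = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0]
print(per_class_error(y_true, y_pred))  # → {0: 0.2, 1: 0.4}
```

Reporting errors per class rather than one overall accuracy matters on imbalanced data, where a single number can hide a class that is almost never predicted correctly.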
