Machine learning in trading: theory, models, practice and algo-trading - page 2040

There's something there - I don't know what it is.
Day of week, day of month, hour, minute (the same for the exit), deal duration in minutes, SL, TP, result ±1.
I have 8GB of memory.
As I understood from your results, the entry information is not taken into account at all. It's strange, because a whole class of systems is based on entry time.
So 50% is taken from the day the trade was closed?
In general, the result is not strange: there are probably days when the trend changes more often or the market goes flat; accordingly, a non-stop move is not infinite and on average ends after a certain number of points, so holding time and TP/SL come into play. Entry by time turned out to be unimportant because it does not guarantee a move of a given size in points - that would be forecasting the future. If we had searched specifically for the entry times of profitable trades, we would have found the highest probability there. In general, with more predictors, entry time might have worked in combination with something else.
The percentage most likely only reflects how high up the tree the split on that predictor sits. I haven't looked into it. Here is the description, run through a translator:
"
Individual importance values for each of the input features (the default method of calculating feature importance for non-ranking metrics).
For each feature, the value shows how much, on average, the prediction changes when the feature value changes. The greater the importance value, the greater, on average, the change in the prediction when that feature is changed.
"
You can't prepare features like that. The ranges of the column values should be commensurable, and categorical ones should be one-hot encoded.
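A minimal sketch of that kind of preparation, assuming a pandas/scikit-learn workflow; the column names (day_of_week, hour, duration, sl, tp) are made up for illustration, not taken from the poster's data:

import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    'day_of_week': [1, 3, 5, 2],
    'hour':        [9, 14, 22, 11],
    'duration':    [35, 120, 15, 480],    # minutes
    'sl':          [200, 150, 300, 100],  # points
    'tp':          [400, 300, 600, 200],  # points
})

# One-hot encode the categorical time features...
df = pd.get_dummies(df, columns=['day_of_week', 'hour'])

# ...and bring the remaining numeric columns to comparable ranges.
num_cols = ['duration', 'sl', 'tp']
df[num_cols] = StandardScaler().fit_transform(df[num_cols])
print(df.head())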
Why do you consider time as categorical? Or what features are we talking about?
By the way, have you seen a generator that returns numbers from an array randomly and without repeats? That's exactly the kind of generator I need.
This is what I do:
1) I create an array of row indexes with length equal to the number of rows and fill it with values from 0 to N,
2) I shuffle this array
where RandomInteger() is any random-integer generator,
3) then, in a loop, I go through these indexes one by one and use them to take the corresponding row from the main array; after shuffling the indexes the order comes out pseudo-random.
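A minimal sketch of this index-shuffling approach in Python (the poster's RandomInteger() is presumably an MQL function; random.randint stands in for it here):

import random

def shuffled_indices(n_rows):
    # 1) index array 0..n_rows-1, one entry per row
    idx = list(range(n_rows))
    # 2) Fisher-Yates shuffle: swap each position with a randomly chosen one
    for i in range(n_rows - 1, 0, -1):
        j = random.randint(0, i)   # any random-integer source works here
        idx[i], idx[j] = idx[j], idx[i]
    return idx

# 3) walk the shuffled indexes and pull rows from the main array without repeats
data = ['row%d' % k for k in range(10)]
for i in shuffled_indices(len(data)):
    print(data[i])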
Has anyone tried classification with a very large number of classes, say 10k?
Does it work at all?
Trees/forests/boosting can. But I haven't tried more than 3 classes; I never had such a task.
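A quick sketch of that claim: tree ensembles accept a multi-class target directly; whether thousands of classes fit in memory is a separate question. The class count and data here are purely illustrative:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

n_classes = 100                     # raise this toward 10_000 to see how it scales
X = np.random.rand(20_000, 10)
y = np.random.randint(0, n_classes, 20_000)

clf = RandomForestClassifier(n_estimators=50, n_jobs=-1)
clf.fit(X, y)
print(clf.predict(X[:5]))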
Forests hang, not enough RAM