Discussion of article "Random Decision Forest in Reinforcement learning" - page 9

 
Maxim Dmitrievsky:

When you're older, you'll understand.

At least, for starters, read up on which solvers are used in neural networks, and why nobody uses genetic algorithms to train them.
I knew it was all about solvers.
[Deleted]  
Evgeniy Scherbina:
I knew it was all about solvers.

Even with 50 inputs and a 2-layer NN with 15 neurons per layer, calculate the approximate number of weights. It would take you roughly forever to optimise even such a primitive through an optimiser. And this is still a very simple construction.

Put these 50 inputs into my example and it will train instantly. That's roughly the difference.

 
Maxim Dmitrievsky:

Even with 50 inputs and a 2-layer NN with 15 neurons per layer, calculate the approximate number of weights. It would take you roughly forever to optimise even such a primitive through an optimiser. And this is still a very simple construction.

Put these 50 inputs into my example and it will train instantly. That's roughly the difference.

That's what I'm talking about. You overcomplicate it and think that's the right way. It's the wrong way. More complicated doesn't mean better.

I have four inputs, three neurons, one layer for each signal. There are only two signals, but they are individualised for each symbol, and there are a lot of symbols. Trading for a month after a year of training goes better with the neural network than without it. That's the result! I opened recently; I'm sitting here shivering with happiness and watching what else it comes up with. Waiting for confirmation in real trading.

[Deleted]  
Evgeniy Scherbina:

That's what I'm talking about. You overcomplicate it and think that's the right way. It's the wrong way. More complicated doesn't mean better.

I have four inputs, three neurons, one layer for each signal. There are only two signals, but they are individualised for each symbol, and there are a lot of symbols. Trading for a month after a year of training goes better with the neural network than without it. That's the result! I opened recently; I'm sitting here shivering with happiness and watching what else it comes up with. Waiting for confirmation in real trading.

Nothing complicated about it; it's a classic ML model.

Training through the optimiser is even more of a fitting exercise, because you pick the variants you like, the ones that work on the forward period, out of hundreds and thousands that don't. This is a mental trap: you think a learning process is happening, when in fact it is a fitting process. Divide the number of working models by the number of models that don't work in the forward-optimisation list, and you get roughly the probability that one of the selected models will work in real life in the future. Usually it's a 1-5% probability.

So there's no need to shiver: with 99% probability it's an overfit.

Either way: good luck.
 
Maxim Dmitrievsky:

Nothing complicated about it; it's a classic ML model.

Training through the optimiser is even more of a fitting exercise, because you pick the variants you like, the ones that work on the forward period, out of hundreds and thousands that don't. This is a mental trap: you think a learning process is happening, when in fact it is a fitting process. Divide the number of working models by the number of models that don't work in the forward-optimisation list, and you get roughly the probability that one of the selected models will work in real life in the future. Usually it's a 1-5% probability.

So there's no need to shiver: with 99% probability it's an overfit.

Either way: good luck.

No, no, no, no. There was no fitting. The training was independent of the subsequent control set. I chose different periods, and the next month is just the next month. I tried taking more input parameters; it doesn't improve anything. I ended up splitting the nets. To be fair, I have three of them in one system (2 open + 1 close), totalling 11 inputs. So I'm shivering.

In other words, the current month is the same kind of control set I tested on in the past. There was no fitting to the control month.

[Deleted]  
Evgeniy Scherbina:
No, no, no, no. There was no fitting. The training was independent of the subsequent control set. I chose different periods, and the next month is just the next month. I tried taking more input parameters; it doesn't improve anything. I ended up splitting the nets. To be fair, I have three of them in one system (2 open + 1 close), totalling 11 inputs. So I'm shivering.

Maybe a miracle will happen; it does happen.

Or the predictors themselves are meaningful.
 
Can you tell me how to build an n-ary tree? I tried it with nested arrays of structures, mass[0].mass[1].mass[n]...mass[n], but that doesn't work: the depth needs to be dynamic, it's not clear how to add a new node that way, and the addressing would get very long anyway.
[Deleted]  
VANDER:
Can you tell me how to build an n-ary tree? I tried it with nested arrays of structures, mass[0].mass[1].mass[n]...mass[n], but that doesn't work: the depth needs to be dynamic, it's not clear how to add a new node that way, and the addressing would get very long anyway.

N-ary in the sense of multiclass? You can use the same library and set the number of trees = 1.

 
Maxim Dmitrievsky:

N-ary in the sense of multiclass? You can use the same library and set the number of trees = 1.

So that it's not binary, but with an arbitrary number of descendants at each node, like this:

[Deleted]  
VANDER:

So that it's not binary, but with an arbitrary number of descendants at each node, like this:

Well, if you set several classes, it will come out like that. You can feed in an arbitrary number of random features, whose order will change if you set a different seed. It's not clear what the task is: classification or regression. Besides, a forest is better than a single tree.