Machine learning in trading: theory, models, practice and algo-trading - page 1885
Yes, you have to go through everything, otherwise there is no way.
Try playing this, it's good for understanding how the parameters affect the result.
Yeah, I saw that. Too bad you can't plug your own in.
Try reducing learning_rate for starters.
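A hypothetical illustration (plain Python, not the playground's code) of why reducing learning_rate helps: gradient descent on f(w) = w² converges with a small step, while a step that is too large overshoots the minimum and diverges.

```python
def sgd_steps(lr, steps=50, w0=5.0):
    """Minimize f(w) = w**2 by gradient descent; the gradient is 2*w."""
    w = w0
    for _ in range(steps):
        w -= lr * 2 * w          # each update is scaled by the learning rate
    return w

w_small = sgd_steps(lr=0.01)     # creeps steadily toward the minimum at 0
w_large = sgd_steps(lr=1.1)      # overshoots on every step and diverges
```

The same intuition carries over to network training: a smaller learning rate makes the loss curve smoother at the cost of more epochs.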
Nice picture, you can clearly see where the overfitting started.
Trading your clusters, just like I told you
And this despite the fact that the error on the new data is 0.82%.
So you should have listened to me when I told you to build a terminal and see it all at a glance, instead of the optimizers, parsers and other crap you made.
The regular network seems to be better than lstm. And tanh is better than relu.
The network parameters are the same everywhere, and the data are normalized to the range ±1.
On the left: train and validation errors by epoch. In the center: network output on the train set vs. the benchmark. On the right: network output on the validation set vs. the benchmark.
[Attached charts: tanh, relu, lstm]
I had to fiddle with the lstm for a long time to get it to move away from 0.5. Even then the result was not great and the window of working parameters was very narrow, and training took me about 10 minutes, while the regular network trained in a little over a minute. They write that lstm takes longer to train; on this example the nets seemed to train in the same time (upd: lstm does take longer to train after all).
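A minimal NumPy sketch of the comparison described above: the same weights and ±1-normalized inputs, with only the hidden activation swapped between tanh and relu. The layer sizes here are assumptions for illustration, not the poster's actual configuration.

```python
import numpy as np

def tanh(z):
    return np.tanh(z)

def relu(z):
    return np.maximum(0.0, z)

rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, size=(100, 4))       # inputs normalized to ±1
w1 = rng.standard_normal((4, 8)) * 0.5      # shared hidden-layer weights
w2 = rng.standard_normal((8, 1)) * 0.5      # shared output-layer weights

def forward(act):
    hidden = act(x @ w1)                    # only the hidden activation differs
    return np.tanh(hidden @ w2)             # tanh output keeps predictions in ±1

out_tanh = forward(tanh)
out_relu = forward(relu)
```

With everything else fixed, any difference between `out_tanh` and `out_relu` comes from the hidden activation alone, which is the fair way to compare the two.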
I have a primitive NN without training. I don't see the point of training it. I'm not discouraging anyone by any means.
The bar graph in percent shows the probable further direction of movement.
It is important to feed the necessary information to the network.
If you feed it garbage, the result will be chaotic with any package.
I hope it is clear that blue is up and the carrot color is down.
By "regular", do you mean Sequential Dense?
Yes
Some activations don't work with negative values; did you look into that?
Hadn't thought of that. tanh outputs in ±1, relu in 0 to infinity.
Upd: looked at examples; with relu it still leads to a 0 average.
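The range point above can be checked directly: tanh maps into (-1, 1), so it covers targets normalized to ±1, while relu clips every negative input to 0 and can never reach a target in [-1, 0).

```python
import numpy as np

z = np.linspace(-3.0, 3.0, 7)
tanh_out = np.tanh(z)              # values in (-1, 1): covers ±1-normalized targets
relu_out = np.maximum(0.0, z)      # values in [0, inf): negative targets unreachable

print(tanh_out.min() < 0)          # True: tanh does produce negative outputs
print(relu_out.min())              # 0.0: relu never goes below zero
```

This is why an output layer (or data normalized to ±1 fed through a final relu) behaves poorly here, while tanh matches the normalization.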
m5 short, d1 buy, I agree with others.
An hour later, things have changed.
not crap but candy
everything there should work like clockwork, you're doing something wrong
But let's close the public discussion of this: I've already pictured the cash flowing into my pockets after such an ingenious discovery, and I don't want to share it with anyone.