Machine learning in trading: theory, models, practice and algo-trading - page 2268
I don't know how to mkl ((
5k train
40k test
Try applying my criterion with your GMM; it should find working models better.
I already use R^2 to select
I get the same bumps, but better. :)
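One common reading of an R^2 selection criterion in this context (an assumption, not the poster's stated method) is the R^2 of the backtest equity curve against a straight-line fit: a value near 1.0 means steady, linear growth. A minimal NumPy sketch, with `r2_linearity` as a hypothetical helper name:

```python
import numpy as np

def r2_linearity(equity):
    """R^2 of an equity curve against its straight-line fit.

    Close to 1.0 = steady linear growth; used here as a
    model-selection score (hypothetical criterion).
    """
    x = np.arange(len(equity))
    slope, intercept = np.polyfit(x, equity, 1)
    fit = slope * x + intercept
    ss_res = np.sum((equity - fit) ** 2)
    ss_tot = np.sum((equity - equity.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# a steady curve scores higher than a choppy random walk
steady = np.cumsum(np.full(100, 1.0))
choppy = np.cumsum(np.random.default_rng(0).normal(0.1, 1.0, 100))
assert r2_linearity(steady) > r2_linearity(choppy)
```

Ranking candidate models by this score favors those whose out-of-sample equity grows smoothly rather than in lucky bursts.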
Everything as usual, only the labels take the averaging into account. The markup will be completely different, which is interesting.
I recently posted a book with an interesting idea: set the network weights using a classical method, then fine-tune them with training. I wonder if there are ways to combine supervised learning with reinforcement learning.
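The "classical init, then fine-tune" idea can be sketched for the simplest possible case: initialize a linear model's weights with the closed-form least-squares solution on old data, then fine-tune them with a few gradient steps on newer data. This is an illustrative toy, not the book's method; all data and coefficients below are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# "old" data: a classical closed-form fit provides the initial weights
X_old = rng.normal(size=(200, 3))
y_old = X_old @ np.array([0.5, -1.0, 2.0]) + rng.normal(0, 0.1, 200)
w, *_ = np.linalg.lstsq(X_old, y_old, rcond=None)

# "new" data with slightly shifted coefficients (regime change)
X_new = rng.normal(size=(200, 3))
y_new = X_new @ np.array([0.7, -1.0, 1.8]) + rng.normal(0, 0.1, 200)

mse_before = np.mean((X_new @ w - y_new) ** 2)

# fine-tuning: gradient descent on MSE over the new data
lr = 0.05
for _ in range(200):
    w -= lr * 2.0 / len(y_new) * X_new.T @ (X_new @ w - y_new)

mse_after = np.mean((X_new @ w - y_new) ** 2)
assert mse_after < mse_before
```

The same pattern scales up: a net starts from weights that already encode a sensible filter and training only has to correct the residual error, which typically converges much faster than training from random init.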
These are all advanced analogues of the MA.
Nets have the advantage of non-linearity and the way their parameters are selected, but they are still the same filters.
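The "nets are just filters" point can be made concrete: a single linear neuron sliding over a price window with equal weights is exactly a simple moving average, and adding an activation only makes the filter nonlinear. A minimal NumPy sketch (toy prices, illustrative only):

```python
import numpy as np

prices = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
window = 3

# a "neuron" with equal weights and no activation IS an SMA
w = np.full(window, 1.0 / window)
neuron_out = np.array([w @ prices[i:i + window]
                       for i in range(len(prices) - window + 1)])

sma = np.convolve(prices, np.ones(window) / window, mode="valid")
assert np.allclose(neuron_out, sma)  # identical outputs

# a nonlinearity turns the same weighted window into a "net" layer
nonlinear_out = np.tanh(neuron_out)
```

Training a net then amounts to learning the window weights (and stacking such filters) instead of fixing them to 1/n by hand.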
There is just a lot of negativity about reinforcement learning. Supervised nets show better results in self-driving cars, and the same goes for games. They even came up with the idea of training the net from the end of the level, moving the spawn point closer and closer to the beginning. It's also interesting that the experience of the data scientist is what decides. Unity made a game specifically for ML and set up a championship. A human on average reaches level 20. Taking the two newest net-based methods got an average of level 4, while the experts in the championship managed human-level results.
There was an RL hype; it's gone now... transformers and GANs are trending.
The trend is brains: people who know all the algorithms and know how to apply a specific algorithm to a specific task, rather than chasing trends...
If you need to win at Go, what do you need GANs for? And if you need to classify irises, what do you need RL for?
Everything has its place!
You have a narrow mind and cannot see where and why.
GANs are interesting to try for generating artificial data
It would be a good idea to master this framework, then everything would go much faster.
I wrote my own GAN; there's nothing complicated there. It isn't recurrent though, so I'll have to redo it.
Example on Torch
here's another example
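For the flavor of "there's nothing complicated there": a toy 1D GAN in plain NumPy (an illustrative sketch, not the poster's code or either linked Torch example). The generator is an affine map of noise, the discriminator a logistic classifier, and both are trained with manual gradients; the target distribution N(3, 1) is made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

wg, bg = 1.0, 0.0   # generator params: fake = wg * z + bg
wd, bd = 0.1, 0.0   # discriminator params: D(x) = sigmoid(wd * x + bd)
lr = 0.05

for step in range(3000):
    real = rng.normal(3.0, 1.0, 64)     # target distribution N(3, 1)
    z = rng.normal(0.0, 1.0, 64)
    fake = wg * z + bg

    # --- discriminator update: push D(real) -> 1, D(fake) -> 0 ---
    s_real = wd * real + bd
    s_fake = wd * fake + bd
    g_real = sigmoid(s_real) - 1.0      # d(-log D(real)) / ds
    g_fake = sigmoid(s_fake)            # d(-log(1 - D(fake))) / ds
    wd -= lr * np.mean(g_real * real + g_fake * fake)
    bd -= lr * np.mean(g_real + g_fake)

    # --- generator update: push D(fake) -> 1 ---
    z = rng.normal(0.0, 1.0, 64)
    fake = wg * z + bg
    s_fake = wd * fake + bd
    g_gen = (sigmoid(s_fake) - 1.0) * wd   # d(-log D(fake)) / d fake
    wg -= lr * np.mean(g_gen * z)
    bg -= lr * np.mean(g_gen)

# generated "artificial data" should drift toward the real mean of 3
samples = wg * rng.normal(0.0, 1.0, 10000) + bg
```

A real time-series GAN would replace both affine maps with (recurrent) nets and feed price windows instead of scalars, but the adversarial training loop keeps this same shape.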