Machine learning in trading: theory, models, practice and algo-trading - page 2809

No way.
What about reinforcement learning?
The thread starter wrote an article on Habr about DQN in R.
It should be understood that reinforcement learning is just a cleverly designed optimisation.
It may work in some cases and not in others.
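To make the "just a cleverly designed optimisation" point concrete, here is a minimal, purely illustrative sketch (not the Habr article's code): a one-state Q-learning loop where the update is nothing more than stochastic gradient descent on the TD error. The toy payouts for the two actions are assumptions for the demo.

```python
import random

# Toy "market": action 0 (buy) pays +1 on average, action 1 (sell) pays -1.
# The Q-update below is plain incremental optimisation of the value estimate.
random.seed(0)

q = [0.0, 0.0]   # value estimate per action
alpha = 0.1      # learning rate
epsilon = 0.2    # exploration probability

for step in range(2000):
    # epsilon-greedy action selection
    if random.random() < epsilon:
        a = random.randrange(2)
    else:
        a = 0 if q[0] >= q[1] else 1
    # noisy reward; buying is better in this made-up environment
    reward = (1.0 if a == 0 else -1.0) + random.gauss(0.0, 0.5)
    # TD(0) update, bandit case (no next state): a small gradient step
    q[a] += alpha * (reward - q[a])

print(q)  # q[0] drifts toward ~1, q[1] toward ~-1
```

Whether it "works" here just means whether this optimisation converges to the better action; nothing about market structure is learned, which is exactly the caveat above.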
I can't find a NumPy analogue for R...
I wanted to ease Alexey's suffering ) It's certainly easier from the get-go... but still.
Well, in the context of the memory question:
it can adjust its states to new data, but only at the level of something like a moving average, i.e. with a lag.
It's more important to pick the reward, which is essentially the target. The agent will throw trades in different directions on its own, and at each iteration it will get better and better.
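A hedged sketch of the "reward is essentially the target" idea: the simplest trading reward is the signed price change times the chosen direction, so the agent gets graded per trade without any hand-made labels. The toy price series and the +1/-1 action encoding are assumptions for illustration.

```python
# Reward as target: direction (+1 long, -1 short, 0 flat) times price change.
prices = [100.0, 101.0, 100.5, 102.0, 101.0]

def reward(action, p_now, p_next):
    # signed PnL of holding `action` over one step
    return action * (p_next - p_now)

# an agent "throwing trades in different directions":
actions = [+1, -1, +1, -1]
rewards = [reward(a, prices[i], prices[i + 1]) for i, a in enumerate(actions)]
print(rewards)  # [1.0, 0.5, 1.5, 1.0]
```

Each iteration of training then just shifts the policy toward the direction choices that accumulated more of this reward.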
The memory is a neural network with trained weights; you retrain it at each step, nudging the weights a little... not much, which is why there is a lag.
And you can't really transfer that to the terminal.
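Why "nudging the weights a little" lags can be shown with a one-weight toy (an assumption-only sketch, not the actual network): small incremental updates are equivalent to an exponential moving average of the target, so after a regime change the estimate catches up only gradually.

```python
# One "weight" retrained a little at each step with a small learning rate.
w = 0.0
lr = 0.1  # small step, like nudging network weights slightly per bar

targets = [1.0] * 30  # target jumps from 0 to 1 and then stays there
history = []
for t in targets:
    w += lr * (t - w)  # one tiny gradient step on squared error
    history.append(w)

# after the jump the estimate approaches 1 only as 1 - 0.9**n
print(round(history[0], 3), round(history[-1], 3))  # 0.1 ... 0.958
```

The smaller the step, the smoother the memory but the longer the lag: the same trade-off as a long-period moving average.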
There's a whole zoo of approaches; you can find implementations on GitHub. I saw one for Python.