Machine learning in trading: theory, models, practice and algo-trading - page 1961

 
Fast235:

I have some of those top players, the ones without cheats, on my friends list and play with them. They pull schemes like that; Mail.ru should just remove the whole thing. I also got matched against them once as a noob, it's nearly impossible.

I didn't understand any of that.

 

I played Warface, reached the top, and we sold players places in the top.

You know, you'd go to the RK and ask whether you could take a top spot in a given discipline that day; that's how the game was, then they removed the leaderboards.

Well, I played with those top players myself, all of them ) on the same team, top 1 every day.

 
Fast235:

I played Warface, reached the top, and we sold players places in the top.

You know, you'd go to the RK and ask whether you could take a top spot in a given discipline that day; that's how the game was, then they removed the leaderboards.

What does that have to do with me? ) I don't play games.

 
Maxim Dmitrievsky:

What does that have to do with me? ) I don't play games.

You're a noob; I'm telling you how we played against everyone else )

High ranks, never banned, no cheats, and 1st place taken every single day!!!

 
Fast235:

You're a noob; I'm telling you how we played against everyone else )

High ranks, never banned, no cheats, and 1st place taken every single day!!!

I can only play StarCraft, as Zerg.

 
Maxim Dmitrievsky:

Cool library in Python

https://docs.google.com/document/d/1hoU2HbnEyBYXbxA7dya8zrsJzzSvd-Plv8osaB63YvQ/edit#

I think it's a promising thing

Reinforcement learning neural network (RLNN) based real-time adaptive control (docs.google.com)
 
Maxim Dmitrievsky:

I can only play StarCraft, as Zerg.

Heh-heh ))

 
mytarmailS:

Cool library in Python

https://docs.google.com/document/d/1hoU2HbnEyBYXbxA7dya8zrsJzzSvd-Plv8osaB63YvQ/edit#

I think it's promising.

Online training is fine, I'll read it. I can't find any example for forex on the git. I don't understand how he trained it. I'd guess he made one pass over the chart, the memory module memorized it, and on the second pass it just shows you the balance. Otherwise it's too good to be true. I'll have to figure it out. I already used a similar network, called recurrent reinforcement learning; the results were so-so. Well, it's cool that it works without a teacher, that is, it's basically the same thing but without a clustering module. The funniest thing is that I hacked together almost the same thing myself: clustering plus boosting. It overfits, though.
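
For reference, a minimal sketch of what a clustering-plus-boosting pipeline of the kind mentioned above could look like (my own illustrative example on synthetic data with scikit-learn; the feature and label names are made up, this is not the actual code being discussed):

# Minimal sketch: an unsupervised clustering step whose labels are fed, together
# with the original features, into a gradient boosting classifier.
# Synthetic data and hypothetical feature/label names, for illustration only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 5))                                # e.g. returns, indicators
target = (features[:, 0] + rng.normal(size=1000) > 0).astype(int)    # toy up/down label

# No shuffling, to mimic a time-ordered train/test split
X_train, X_test, y_train, y_test = train_test_split(features, target, shuffle=False)

# Unsupervised step: cluster the feature space (the "clustering module")
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X_train)

# Supervised step: boosting on the original features plus the cluster id
X_train_aug = np.column_stack([X_train, kmeans.predict(X_train)])
X_test_aug = np.column_stack([X_test, kmeans.predict(X_test)])

gbm = GradientBoostingClassifier().fit(X_train_aug, y_train)
print("out-of-sample accuracy:", gbm.score(X_test_aug, y_test))      # watch for overfitting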
 
Maxim Dmitrievsky:
I can't find any example for forex on the git.

Maybe write to him and ask him to send the code? )

Maxim Dmitrievsky:
I don't understand how he trained it. Most likely he made one pass over the chart and the memory module memorized it.

I read it twice and didn't understand how it works: how the memory is organized and what he used in place of a convolutional layer... In short, the article isn't great, but the thing itself is good; the author could have put more work into the explanations... So if you figure it out, at least write up how it works...

 
Maxim Dmitrievsky:
Online training is fine, I'll read it. I can't find any example for forex on the git. I don't understand how he trained it. I'd guess he made one pass over the chart, the memory module memorized it, and on the second pass it just shows you the balance. Otherwise it's too good to be true. I'll have to figure it out. I already used a similar network, called recurrent reinforcement learning; the results were so-so. Well, it's cool that it works without a teacher, that is, it's basically the same thing but without a clustering module. The funniest thing is that I hacked together almost the same thing myself: clustering plus boosting. It overfits, though.
Have you tried the demo?
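
Since recurrent reinforcement learning comes up a couple of times above, here is a rough sketch of what that term usually refers to (a Moody/Saffell-style trader: the position feeds back into the next input, and the weights are trained by gradient ascent on profit net of transaction costs). Toy data and made-up parameters; this is a generic illustration, not the network from the Google Doc and not anyone's actual code:

# Recurrent reinforcement learning sketch (Moody/Saffell style), toy example.
# Position F_t = tanh(theta . [1, last m returns, F_{t-1}]); the recursion makes
# the gradient of F_t depend on the gradient of F_{t-1}.
import numpy as np

rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.01, size=2000)     # toy per-bar returns
m, delta, lr, epochs = 8, 0.0002, 0.1, 50
theta = np.zeros(m + 2)                  # [bias, m lagged returns, previous position]

for _ in range(epochs):
    F_prev, dF_prev = 0.0, np.zeros_like(theta)
    grad, profit = np.zeros_like(theta), 0.0
    for t in range(m, len(r)):
        x = np.concatenate(([1.0], r[t - m:t], [F_prev]))
        F = np.tanh(theta @ x)
        dF = (1.0 - F**2) * (x + theta[-1] * dF_prev)          # recursive dF/dtheta
        R = F_prev * r[t] - delta * abs(F - F_prev)            # profit minus trade cost
        grad += r[t] * dF_prev - delta * np.sign(F - F_prev) * (dF - dF_prev)
        profit += R
        F_prev, dF_prev = F, dF
    theta += lr * grad / len(r)          # gradient ascent on cumulative profit
print("in-sample cumulative return:", profit)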