Machine learning in trading: theory, models, practice and algo-trading - page 1051

 
Igor Makanu:

Is there a continuation of the story?

According to my observations, if a trading system gives only positive results, then a steady loss will follow; I'm talking about a TS with a fixed lot and stop losses.

If I'm not mistaken, the story begins in 2006; you just don't suspect at what high level it's all done, and by what kind of person.

http://www.kamynin.ru/

Select the rubric "trading robots", go to the beginning, make tea and crumpets, read, look at the pictures, read the comments from people and the author's answers, and get smarter; this is no childish ah... ee... )))

There's the whole evolution from beginning to end
Николай Камынин
  • 2018.09.02
  • www.kamynin.ru
I'll die laughing at these acrobats! I'll continue my reflections on the topic of the pension reform. Naturally, I do not claim to hold the ultimate truth, but based on my experience and knowledge of cybernetics, economics, finance and civil law, I see what many experts and those in power either do not see, or pretend not to see. Let us return to the fateful...
 

Unfortunately not :-( Although I have an idea for a committee of six polynomials, I haven't fully cracked Reshetov's code yet. The last change did start saving me time when preparing a model in MKUL, but that's not the point. The essence of the ML problem, as it seems to me, is the following, and if you have weighty arguments against it, I'm ready to listen to them.

Among the other steps (preprocessing, training, etc.), the very final one is choosing a model. In the process of learning, the algorithm builds a model, estimates its parameters and tries to improve them while building other models, and the model with the best learning metric is the one that gets saved as the result of optimization. In my opinion, the best metric for classifiers is the one used in Reshetov's optimizer: sensitivity and specificity, plus overall generalizability, where the learning result is evaluated via four counts: True Positive, True Negative, False Positive, False Negative. I'm sure you have heard of it. It's a rather popular metric, but as practice has shown, it bears only a partial relation to generalization. In other words, its result is overestimated: when the model overfits the training set, the indicators are just as high as when there is no overfitting. Let's fantasize a little:
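As a side note, the four counts above give sensitivity and specificity directly. A minimal sketch of that standard calculation (the exact formula inside Reshetov's optimizer is not given in the thread, and the function names here are my own):

```python
def confusion_counts(y_true, y_pred):
    """Count TP, TN, FP, FN for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return sens, spec
```

Note that both numbers can be computed on the training set alone, which is exactly why, as the post argues, they say nothing about overfitting by themselves.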

Imagine that we had a method for estimating how well a polynomial generalizes to a data set, and that our metric actually measured this level of generalization. In other words, while other metrics show a good result on the training period, our metric would show a bad result when the polynomial is overtrained and a good result when it generalizes. Such a metric could then be used during learning, forcing the algorithm to look for a model that looks understated on our data but is still generalized. It may be weak, but it works and is 100% not overtrained. This is where the effect of undertraining appears, and it is very important that the undertraining be minimal. Consider this text a precursor to my theory, because we are close to... what? Here's a question for you. Think.....
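One crude way to sketch such a metric (purely my illustration, not the "theory" hinted at in the post) is to score a model by its validation result minus a penalty on the train/validation gap, so that an overfitted model loses to a merely generalized one:

```python
def gap_penalized_score(train_score, valid_score, penalty=1.0):
    """Hypothetical generalization-aware metric: reward validation
    performance, penalize the train/validation gap (overfitting)."""
    return valid_score - penalty * max(0.0, train_score - valid_score)

# Invented candidate models for illustration only
candidates = [
    {"name": "overfit",     "train": 0.99, "valid": 0.60},
    {"name": "underfit",    "train": 0.62, "valid": 0.61},
    {"name": "generalized", "train": 0.75, "valid": 0.72},
]
best = max(candidates,
           key=lambda m: gap_penalized_score(m["train"], m["valid"]))
# best["name"] == "generalized"
```

The penalty weight is arbitrary here; the point is only that a gap-aware score prefers the modest-but-consistent model over the one with stellar training numbers.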

 
mytarmailS:

The story begins in 2006 if I'm not mistaken; you just don't suspect at what high level it's all done, and by what kind of person.

http://www.kamynin.ru/

Select the rubric "trading robots", go to the beginning, make tea and crumpets, read, look at the pictures, read the comments from people and the author's answers, and get smarter; this is no childish ah... ee... )))

There's the whole evolution from beginning to end

I really hadn't heard of this man, I'll look into it, thank you

 
Vizard_:

This thread... is just a hoot.))

http://www.kamynin.ru/2015/08/26/lua-quik-robot-uchitel/


Read it from the beginning and then post; I know what I'm talking about.

MGUA (GMDH, the Group Method of Data Handling) is where it all began; the author recommends starting with it. He no longer uses it himself, though: now he uses something purely his own, something that grew out of GMDH, and that part he does not disclose.
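For reference, the classic GMDH builds layers of pairwise quadratic "partial descriptions" and selects survivors by an external (validation) criterion. A generic single-layer sketch of the textbook method, not Kamynin's undisclosed variant (all names are mine):

```python
import itertools
import numpy as np

def gmdh_layer(X_train, y_train, X_valid, y_valid, keep=3):
    """One GMDH layer: for every pair of inputs fit the quadratic
    partial description
        y ~ a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2,
    score each candidate on *external* (validation) data, and keep
    the best `keep` of them as inputs for the next layer."""
    def design(xi, xj):
        return np.column_stack(
            [np.ones_like(xi), xi, xj, xi * xj, xi ** 2, xj ** 2])

    results = []
    for i, j in itertools.combinations(range(X_train.shape[1]), 2):
        A = design(X_train[:, i], X_train[:, j])
        coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
        pred_valid = design(X_valid[:, i], X_valid[:, j]) @ coef
        mse = float(np.mean((pred_valid - y_valid) ** 2))
        results.append((mse, (i, j), coef))
    # External criterion: rank by validation MSE, keep the best pairs
    results.sort(key=lambda r: r[0])
    return results[:keep]
```

The key design point is that candidates are ranked on data they were not fitted to, which is GMDH's built-in guard against the overfitting discussed above.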
 

Briefly about the information in the pictures

 
Maxim Dmitrievsky:

Briefly about the information in the pictures


Yes, that is what it is, the author does not like minimalism)

 
mytarmailS:

Yes, it is what it is, the author does not like minimalism)

there are levels and a couple of moving averages, from which he somehow extracts information for his supposedly neural network

just another windbag, it seems
 
mytarmailS:

Yes, that is what it is, the author does not like minimalism)

I looked for 5 seconds and understood: there is no fish there, 100%, and I didn't even look further......

Have you seen my sites or posts or pictures????

 
Maxim Dmitrievsky:

there are levels and a couple of moving averages, from which he somehow extracts information for his supposedly neural network

just another windbag, it seems

Oh shit, here we go) "allegedly", "neural networks", bebe))

Read it, get into it, or forget it.

Here's the answer to your post.

It's all there, you just have to read it.

 
mytarmailS:

Oh shit, here we go) "allegedly", "neural networks", bebe))

Read it, get into it, or forget it.

Here's the answer to your post.

It's all there, you just have to read it.

That's kind of what I was talking about.

I'll do the same tomorrow.
