Machine learning in trading: theory, models, practice and algo-trading - page 1045

 
Maxim Dmitrievsky:

As long as no one pushes me, it's very difficult to move in this direction. I'm not trading now; I work on the system when I'm in the mood... because it requires thinking, and I don't like to think.

I agree, I suffer from the same thing. I just thought maybe there is something to look at (I mean an account monitor) or to hear about what you've run into.
 
mytarmailS:

Yes, better to spend a year on theory and on implementing various transformations than to spend 5 minutes in Python or R to find out that this crap doesn't work. The logic is ironclad. Listen, maybe you should create your own programming language? Why do you need MQL, C++, or whatever you're sitting on...

It's not the theory that's the problem, but how you understand it and how correctly you apply what you've learned during analysis.

There are times when a person does a Herculean job of collecting and systematizing data, but can't apply it correctly.

 
Farkhat Guzairov:

There are times when a person does a Herculean job of collecting and organizing data, but can't apply it correctly.

I agree, it happens.

 
Vizard_:

You damn well promised to show equity curves going to the sky and a video tutorial in the fall)))

Well, who knew that I would be recruited by a trans-Atlantic intercontinental GIANT!!!! The entire probationary period went into getting up to speed at work and travelling to all sorts of training; I even went to Kazan during the championship. But I didn't abandon trading, although I mostly worked on it on Saturdays, and just recently I made an error by accident and got yet another improvement in the data. The improvement implies even greater separability; see Figure 2 above.

This led to Reshetov's optimizer overfitting, which should not happen to it in principle, because it is deliberately geared toward underfitting, yet here it overfits. Analysis of the data showed the presence of duplicate points in the clustering regions, which means the vectors are identical, i.e. copies of each other differing only in the last decimal places. Such datasets lead to overfitting when one of the duplicate vectors lands in the training sample and the other in the test sample. In that case one of the duplicate vectors should be removed! But if you look at the meaning of that input, you start to wonder at the sheer illogicality of ML. Because I see no sense in such an input, but damn it, it works!


I've made up my mind: if you happen to come across an input that works, you don't need to understand it, just use it and that's it..... Ask me "Why does it work?" and I'll answer "I don't know" and keep taking the money regardless :-)
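The duplicate-vector problem described above (rows that are identical except in the last decimal places, with one copy leaking into training and its twin into the test sample) can be screened for before splitting. A minimal sketch in Python, assuming the dataset fits in a NumPy array; this is my own illustration, not the poster's actual pipeline:

```python
import numpy as np

def drop_near_duplicates(X, decimals=6):
    """Drop rows that become identical after rounding away the last
    decimal places, keeping the first occurrence of each row."""
    rounded = np.round(X, decimals=decimals)
    _, keep_idx = np.unique(rounded, axis=0, return_index=True)
    return X[np.sort(keep_idx)]

# Toy data: rows 0 and 2 differ only in the 7th decimal place,
# so after rounding they are duplicates of each other.
X = np.array([[1.0000001, 2.0],
              [3.0,       4.0],
              [1.0000002, 2.0]])

X_clean = drop_near_duplicates(X)
print(X_clean.shape[0])  # -> 2
```

Removing such near-duplicates before the random train/test split prevents one copy from landing in training and its twin in test, which would otherwise inflate the test score.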

 
Alexander_K:

Work loves fools :))))

First the job will put you in order, and then the trading will settle down...

 
Maxim Dmitrievsky:

Trying different models (predictors), e.g. building multiple models on differently transformed input data and choosing the best one. It's like brute-forcing passwords to accounts. Done when there is no a priori knowledge of the subject or of the patterns.

Hand-fitting, plain and simple.

That Vapnik video in English was about this.

NOOOOOOOO!!! But let's keep it simple: say we built 10 models on the same dataset, BUT with a random split into training and test. The question is: how do we choose the model that best GENERALIZES the original set??? This is a fundamental question, and I'm currently building a theory around it. So far everything in my theory adds up logically, but it isn't complete yet.

In order to solve this question, I need a metric of the generalization ability of the resulting models. I've read some resources here, and it turns out such metrics already exist, but they all give inflated values. As I understood it, there is no unified, effective methodology for determining the level of generalization; this is a fundamental problem in the field of ML. Reshetov's approach, which computes the model's specificity and sensitivity, is also such a metric, and at the moment it is the best solution, but even it is not quite it. As for what the right metric is..... that's the big QUESTION!!!!! :-)
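For reference, the sensitivity and specificity mentioned above are computed directly from the four confusion-matrix counts. A minimal self-contained Python sketch (just the standard definitions, not Reshetov's actual metric):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative
    rate) for binary labels coded as 0/1."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec

# Toy example: one positive missed, one negative misclassified.
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # both 2/3 here
```

Computing both on a held-out sample, rather than on the training data, is what makes them usable as a (rough) generalization check.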

 

I didn't think I would suggest such a thing, but still...

I created a system (an indicator) based on a neural network that builds certain levels for an instrument; it works quite well.

The philosophy of the indicator is a search for some kind of real overbought/oversold, or sentiment.

It gives about 1-2 signals per week; when the signal is identified correctly, it works with a probability close to 100%.

The problem is that I don't know MQL and the indicator is written in R (using many libraries), and I don't have the energy to learn MQL.

If there is a developer here who is ready to port my code to MQL and visualize it in MT4, I'm ready to discuss it and collaborate with him in the future.

 
Mihail Marchukajtes:

NOOOOOOOO!!! But let's keep it simple: say we built 10 models on the same dataset, BUT with a random split into training and test. The question is: how do we choose the model that best GENERALIZES the original set??? This is a fundamental question, and I'm currently building a theory around it. So far everything in my theory adds up logically, but it isn't complete yet.

In order to solve this question, I need a metric of the generalization ability of the resulting models. I've read some resources here, and it turns out such metrics already exist, but they all give inflated values. As I understood it, there is no unified, effective methodology for determining the level of generalization; this is a fundamental problem in the field of ML. Reshetov's approach, which computes the model's specificity and sensitivity, is also such a metric, and at the moment it is the best solution, but even it is not quite it. As for what the right metric is..... that's the big QUESTION!!!!! :-)

10 is nothing; think 2,000 models. Random partitioning is there as it is, but the datasets change as well. An ultrabook computes it on one core in 15-20 minutes.
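The brute-force selection being discussed (fit many models on random train/test splits, keep the one with the best out-of-sample score) can be sketched as a toy scheme in Python. This is my own illustration with a made-up threshold classifier, not the poster's actual tool:

```python
import random

def best_of_random_splits(X, y, fit, score, n_models=10, test_frac=0.3, seed=0):
    """Fit one model per random train/test split and return the model
    with the best out-of-sample score (a naive selection scheme)."""
    rng = random.Random(seed)
    best_model, best_score = None, float("-inf")
    n = len(X)
    for _ in range(n_models):
        idx = list(range(n))
        rng.shuffle(idx)
        cut = int(n * (1 - test_frac))
        tr, te = idx[:cut], idx[cut:]
        model = fit([X[i] for i in tr], [y[i] for i in tr])
        s = score(model, [X[i] for i in te], [y[i] for i in te])
        if s > best_score:
            best_model, best_score = model, s
    return best_model, best_score

def fit(X, y):
    """Toy classifier: threshold halfway between the class means."""
    m1 = sum(x for x, t in zip(X, y) if t == 1) / max(1, sum(y))
    m0 = sum(x for x, t in zip(X, y) if t == 0) / max(1, len(y) - sum(y))
    return (m0 + m1) / 2

def score(thr, X, y):
    """Out-of-sample accuracy of the threshold classifier."""
    preds = [int(x > thr) for x in X]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

# Perfectly separable toy data, so the best split scores 1.0.
X = [0.1, 0.2, 0.3, 0.9, 1.0, 1.1, 0.15, 0.95]
y = [0, 0, 0, 1, 1, 1, 0, 1]
model, s = best_of_random_splits(X, y, fit, score)
print(s)  # -> 1.0
```

Note that picking the best of thousands of models by test score is itself a form of fitting to the test sets, which is exactly why a separate generalization metric is being asked for above.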

By the way, about jPrediction: I was poking around in the version you gave me and simply could NOT find a kernel machine there... I wanted to pull it out and see how it functions.

How come, comrades?

I don't know what else to use other than classification error or logloss.
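The two metrics mentioned (classification error and logloss) are easy to state precisely. A small self-contained Python sketch of both, on made-up numbers:

```python
import math

def classification_error(y_true, y_pred):
    """Fraction of misclassified examples (0/1 labels)."""
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

def log_loss(y_true, p_pred, eps=1e-15):
    """Binary cross-entropy; probabilities are clipped to avoid log(0)."""
    total = 0.0
    for t, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

y_true = [1, 0, 1, 1]
p_pred = [0.9, 0.2, 0.6, 0.4]  # predicted probabilities of class 1
print(classification_error(y_true, [int(p >= 0.5) for p in p_pred]))  # -> 0.25
print(round(log_loss(y_true, p_pred), 4))  # -> 0.4389
```

Logloss is usually the more informative of the two for comparing models, since it penalizes confident wrong probabilities rather than only counting hard misclassifications.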

 
mytarmailS:

I didn't think I would suggest such a thing, but still...

I created a system (an indicator) based on a neural network that builds certain levels for an instrument; it works quite well.

The philosophy of the indicator is a search for some kind of real overbought/oversold, or sentiment.

It gives about 1-2 signals per week; when the signal is identified correctly, it works with a probability close to 100%.

The problem is that I don't know MQL and the indicator is written in R (using many libraries), and I don't have the energy to learn MQL.

If there is a developer here who is ready to port my code to MQL and visualize it in MT4, I'm ready to discuss it and collaborate with him in the future.

No... I don't know of anyone like that here :-(

 
Maxim Dmitrievsky:

10 is nothing; think 2,000 models. Random partitioning is there as it is, but the datasets change as well. An ultrabook computes it on one core in 15-20 minutes.

By the way, about jPrediction: I was poking around in the version you gave me and simply could NOT find a kernel machine there... I wanted to pull it out and see how it functions.

How come, comrades?

I don't know what else to use other than classification error or logloss.

It's there, 100%. I've already slowly started reworking it for myself. Now plugging a model into MQL is a 5-second affair...
