Machine learning in trading: theory, models, practice and algo-trading - page 1183

 
Igor Makanu:

I do understand, because I've been in this topic for a long time ;)

I do not need the grail, but a workable ATS would be useful

For it to be stable with ML, you need to do some statistical analysis and look for patterns yourself, then fit to them, and if you are very lucky it will work for a relatively long time.

As for the rest (my own practice, I can't speak for others), it comes down to constant retraining on fresh data and keeping a handful of good trades, all in profit, monitoring everything daily. That is, if we proceed from the principle that the artificial intelligence should find everything itself, because constant statistical analysis is also laborious. It's easier to run the optimizer a couple dozen times and let it find something by itself.

In the end it all comes down to controlling the quality of the model on the test sample, that's all. On the training sample it almost always looks good as it is. You don't even need to visualize anything for that. How to control it, and how to sub-sample, is an art.
 
Yuriy Asaulenko:

Comodo Internet Security, and nothing has gotten through for years.

Thanks, but it's not in the browser, it's in Windows: the firewall now blocks everything, any browser works for about 10 minutes, then bang! and nothing opens any more... I can be back up in 15 minutes by restoring a backup with Acronis from a flash drive; I always back up after every new installation.
Maxim Dmitrievsky:

For it to be stable with ML, you need to do some statistical analysis and look for patterns yourself, then fit to them, and if you are very lucky it will work for a relatively long time.

As for the rest (my own practice, I can't speak for others), it comes down to constant retraining on fresh data and keeping a handful of good trades, all in profit, monitoring everything daily. That is, if we proceed from the principle that the artificial intelligence should find everything itself, because constant statistical analysis is also laborious. It's easier to run the optimizer a couple dozen times and let it find something by itself.

I know, I do it myself when I have time, but it's not interesting (((. I want it to be like in the cool movies... press a button... letters scroll across the laptop... and then ACCESS CONFIRMED!!! ... the main thing is that the ALARM-ALARM doesn't go off afterwards )))
 
Igor Makanu:
Thank you, but it's not in the browser, it's in Windows: the firewall blocks everything, any browser works for about 10 minutes, then bang! and nothing opens any more... Windows crashed; I can be back up in 15 minutes by restoring a backup with Acronis from a flash drive. Fortunately, I always back up after every new installation.
I know, I do it myself when I have time, but it's not interesting (((. I want it to be like in the cool movies... press a button... letters scroll across the laptop... and then ACCESS CONFIRMED!!! ... the main thing is that the ALARM-ALARM doesn't go off afterwards )))

Murphy's Law: If shit can happen, it will happen

 
Igor Makanu:
I do it myself when I have time, but it's not interesting (((. I want it to be like in the cool movies... press a button... letters scroll across the laptop... and then ACCESS CONFIRMED!!! ... the main thing is that the ALARM-ALARM doesn't go off afterwards )))

I don't think so.

I have simplified the task for the NS even more. A preliminary strategy is developed; it defines the intervals of possible entries. The NS is trained on those intervals and finds the optimal entry points. If there is no entry in an interval, it finds none.

Outside those intervals the NS analyzes nothing.

 
Yuriy Asaulenko:

I don't think so.

I have simplified the task for the NS even more. A preliminary strategy is developed; it defines the intervals of possible entries. The NS is trained on those intervals and finds the optimal entry points. If there is no entry in an interval, it finds none.

Outside those intervals the NS analyzes nothing.

The NS will not find the optimal entry points; they have to be brute-forced.

 
Maxim Dmitrievsky:

The NS will not find the optimal entry points; they have to be brute-forced.

Maxim, this was already done a year ago. And I wrote how, in this thread.

But I can't think of anything new. I'm still messing around with Python; maybe some ideas will come up.

 
Maxim Dmitrievsky:

The NS will not find the optimal entry points; they have to be brute-forced.

Well, it's as if the NS is not to blame: what the ML finds optimal is not necessarily what we are looking for in the input data. So I want software with visualization, but I probably don't want NeuroSolutions. I found a free network, and I'll read about NS in Matlab, where there is also a lot of ready-made stuff.
 
Ivan Negreshniy:

Also, try to disable CatBoost's compulsive creation of its temporary directories at every startup; it makes it crash in a protected environment.

In general, these glitches look somewhat unprofessional, so if you can't beat them, then personally, in my opinion, it's cheaper than free to abandon this product right away :)

Are the directories created when using Python? From the console it is logical that directories are created, since they hold both the model and the markup, as well as other statistical data; where else would you put them if not in a directory? In my opinion, directories are a very good solution, because I can loop through many settings combinations and put the results of each run into its own directory.

So far, I haven't seen any glitches that cause malfunctions.
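For reference, CatBoost exposes parameters for controlling those on-disk artifacts; a configuration sketch, assuming the documented `allow_writing_files` and `train_dir` training parameters (the directory names here are made up for illustration):

```python
from catboost import CatBoostClassifier

# Suppress on-disk artifacts entirely (no catboost_info/ directory),
# which is useful in a protected/sandboxed environment:
model = CatBoostClassifier(iterations=100, allow_writing_files=False)

# Or keep the per-run directories, but point each settings combination
# at its own folder, as described above:
for depth in [4, 6, 8]:
    m = CatBoostClassifier(iterations=100, depth=depth,
                           train_dir=f"runs/depth_{depth}")
```

So both workflows in this exchange are covered: disabling the directories for a locked-down environment, or naming one per settings loop.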

 
Maxim Dmitrievsky:

By the way, I don't know who to ask, maybe you can tell me:

when using PCA, alglib returns its eigenvectors

how do I work with them further, that is, how do I apply them to the original features?

http://alglib.sources.ru/dataanalysis/principalcomponentsanalysis.php
I can hardly say more than the Wiki does: https://ru.wikipedia.org/wiki/Метод_главных_компонент#Сингулярное_разложение_тензоров_и_тензорный_метод_главных_компонент

Limits of applicability and limitations of effectiveness of the method


as far as I understand the method: by selecting the principal components, we get a feature map of the useful signal, and can then look for a similar map (matrix) in order to search for the useful signal in noisy data
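As a sketch of how the returned eigenvectors are applied to the original features (a minimal NumPy example on made-up data, not alglib's API; alglib's `pcabuildbasis` returns the same kind of basis matrix, computed here from the covariance matrix directly):

```python
import numpy as np

# Toy feature matrix: 200 samples x 4 features (hypothetical data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))

# PCA: center the data, then take eigenvectors of the covariance matrix.
mean = X.mean(axis=0)
Xc = X - mean
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalue order
order = np.argsort(eigvals)[::-1]          # largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# "Applying the eigenvectors to the original features" = projecting the
# centered data onto the principal axes; each column of eigvecs is one axis.
k = 2                                      # keep the top 2 components
scores = Xc @ eigvecs[:, :k]               # (200, 2) transformed features

# New (out-of-sample) observations are transformed the same way, using the
# mean and eigenvectors estimated on the training data.
x_new = rng.normal(size=(1, 4))
z_new = (x_new - mean) @ eigvecs[:, :k]
```

The `scores` columns are the new, decorrelated features you would feed to a model; keeping all components and multiplying back by the transpose recovers the centered data exactly, since the basis is orthonormal.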

 
Maxim Dmitrievsky:

In the end it all comes down to controlling the quality of the model on the test sample, that's all. On the training sample it almost always looks good as it is. You don't even need to visualize anything for that. How to control it, and how to sub-sample, is an art.

Here I'm just thinking that the control sample may be very different from the one you trained on. For example, there is a tree with 10 trained leaves, 9 of which yield profit and one of which drains everything to zero; they work well on a particular sample, but then on the test (the exam, in my case) they do not work. What happened? Or it could be that the conditions occurred for only 3 of the 9 leaves, and the rest drained everything to zero. This would not be a sign of overtraining (which implies spurious relationships that are not real regularities); it would simply be either a completely different sample, or a sample where there really are many events for 3 leaves and critically few for the remaining 6. For example: trained on trends, tested on a flat market.

I think the sample then needs to be shuffled, creating artificial conditions where both training and testing contain proportionally similar parts of the market. If necessary, these areas should be identified and labeled; then we can see what the response is on those areas during training versus on the test sample. Or we need to look for regularities that are characteristic of all markets and describe them in a universal way, in order to increase the number of different situations in the training sample.
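The "proportionally similar parts of the market" idea can be sketched as a stratified split: label each bar with a regime (here a hypothetical pre-computed trend/flat label, standing in for whatever regime-detection rule one would actually use) and split each regime separately, so train and test keep the same proportions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dataset: 1000 bars, each already labeled with a market
# regime (0 = flat, 1 = trend). In practice the labels would come from
# your own regime-identification rule.
regime = rng.integers(0, 2, size=1000)
idx = np.arange(1000)

train_idx, test_idx = [], []
for r in np.unique(regime):
    members = rng.permutation(idx[regime == r])  # shuffle within regime
    cut = int(0.7 * len(members))                # 70/30 split per regime
    train_idx.extend(members[:cut])
    test_idx.extend(members[cut:])

train_idx = np.array(train_idx)
test_idx = np.array(test_idx)

# Both samples now contain each regime in (almost) the same proportion,
# so the test set cannot be all flat while training was all trend.
p_train = regime[train_idx].mean()
p_test = regime[test_idx].mean()
```

This deliberately ignores the time ordering of the bars, which is exactly the "artificial conditions" trade-off described above: the split is representative of all regimes, but no longer a pure walk-forward test.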
