Machine learning in trading: theory, models, practice and algo-trading - page 2622

 
Replikant_mih #:

It's a good idea; I just think a few things are important here:

- To build up a lot of statistics.

- For a person to trade one thing (one system).

- That the person remains objective and trades systematically.


In that case, I think, good markup will result, and therefore a real benefit can be obtained from it.

Better still, request the trading history from the exchange (however one manages to get it))) and analyse that )

 
BillionerClub #:
What if a person traded and showed the ML what is good and what is bad?
Better to simulate this on any data and realise that it only sounds tempting
 
mytarmailS #:
It's better to simulate it on any data and understand that it only sounds tempting

We are converging on the same point. The NN simply learns by running through the history, and the learning rate is quite high. The drawback is that the base is not large enough to accumulate a year's worth of patterns, so earlier results get blurred. This could be avoided with larger timeframes and a low trade frequency during training, but a larger timeframe implies a larger drawdown - no TS guarantees a 100% hit rate. One of the tasks is to exploit market movements as fully as possible. The way out: on the chart, the Expert Advisor runs in Work mode with periodic loading of the base, while at the same time in the Strategy Tester the Expert Advisor in training mode keeps improving the base. Such a mess we have here...

 
Dmytryi Voitukhov #:

One of the tasks is to exploit market movements as fully as possible. The way out: on the chart, the Expert Advisor runs in Work mode with periodic loading of the base, while at the same time in the tester the Expert Advisor in training mode keeps improving the base. Such a mess we have here...

I re-read it more thoughtfully. Basically everything is correct, but neural nets are a dead end, for several reasons.
 
mytarmailS #:
I re-read it more thoughtfully. Basically everything is correct, but neural nets are a dead end, for several reasons

Exactly. I'm stuck on one of those. The idea is to filter prediction accuracy by a probability threshold on the output layer, but then the trade frequency drops sharply and responsiveness to the situation deteriorates. Filtering on the hidden layers has little effect on the results. For objectivity during training I use a fixed, equal stop and take. In Work mode the stop is trailed after break-even, starting from some distance, and the convergence threshold is reset to 0 so that all patterns are processed. The stop value is the average of the movements over 0-10, ..., 50-61 bars; it comes out roughly the same as the optimized value. Maybe something else should be applied here? The zigzag only made the picture worse. What dead ends have you run into, and what solutions would you suggest?
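The trade-off described above (raising the output-layer probability threshold improves selectivity but cuts the trade frequency) can be sketched on synthetic numbers; the probability distribution and the threshold values below are purely illustrative assumptions, not anyone's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for output-layer probabilities of a trained NN:
# 10_000 candidate entries, each with a predicted probability of a winning trade.
proba = rng.beta(2, 2, size=10_000)   # values clustered around 0.5, as real outputs often are
outcome = rng.random(10_000) < proba  # wins drawn consistently with the stated probability

for threshold in (0.5, 0.6, 0.7, 0.8):
    taken = proba >= threshold        # trade only when the net is confident enough
    n = int(taken.sum())
    win_rate = outcome[taken].mean()
    print(f"threshold {threshold:.1f}: {n:5d} trades, win rate {win_rate:.2f}")
```

By construction, the win rate climbs with the threshold while the number of trades collapses - exactly the responsiveness problem described in the post.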

 
Dmytryi Voitukhov #:

Exactly. I'm stuck on one of those. The idea is to filter prediction accuracy by a probability threshold on the output layer, but then the trade frequency drops sharply and responsiveness to the situation deteriorates. Filtering on the hidden layers has little effect on the results. For objectivity during training I use a fixed, equal stop and take. In Work mode the stop is trailed after break-even, starting from some distance, and the convergence threshold is reset to 0 so that all patterns are processed. The stop value is the average of the movements over 0-10, ..., 50-61 bars; it comes out roughly the same as the optimized value. Maybe something else should be applied here? The zigzag only made the picture worse. What dead ends have you run into, and what solutions would you suggest?

A fixed stop and take, a sliding window, tabular data on input - none of this works on strongly non-stationary data, for obvious reasons.

Conceptually, "association rules" are a good fit for the market, but the implementation has to be different.
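For readers unfamiliar with the term, association rules are "if A then B" patterns ranked by support (how often the antecedent occurs) and confidence (how often the consequent follows it). A toy sketch on discretized bar directions; the encoding, the planted pattern, and the numbers are my assumptions, not mytarmailS's implementation:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

# Toy "market": discretize each bar into U (up) or D (down), and plant one weak
# pattern - two downs are slightly more often followed by an up.
bars = []
for _ in range(5000):
    if len(bars) >= 2 and bars[-1] == "D" and bars[-2] == "D":
        bars.append("U" if rng.random() < 0.65 else "D")
    else:
        bars.append("U" if rng.random() < 0.5 else "D")

# Mine rules of the form (b1, b2) -> U by counting triples.
for a, b in product("UD", repeat=2):
    idx = [i for i in range(len(bars) - 2) if bars[i] == a and bars[i + 1] == b]
    support = len(idx) / (len(bars) - 2)              # how often the antecedent occurs
    confidence = np.mean([bars[i + 2] == "U" for i in idx])  # how often U follows it
    print(f"({a},{b}) -> U: support {support:.2f}, confidence {confidence:.2f}")
```

On this data the (D,D) -> U rule surfaces with visibly higher confidence than the others; real implementations (Apriori, FP-Growth) do the same counting over much larger itemsets.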
 
Maxim Dmitrievsky #:

It's not multilabel, the meaning is different. Exclude bad signals iteratively, keep in the common pile those that the main model predicts well, and the second model learns to separate the bad from the good - to forbid or allow the first model's trades

Maybe the 2nd model isn't needed here at all? - Cross Validation and Grid Search for Model Selection ... (in Keras)

but maybe the confusion matrix alone will answer your 2nd question (the purpose of the 2nd model in your idea)...

... or

... I just doubt you need the 2nd model ... imho

Cross Validation and Grid Search for Model Selection in Python
  • stackabuse.com
A typical machine learning process involves training different models on the dataset and selecting the one with best performance. However, evaluating the perfo...
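The two-model scheme quoted above (a second model that allows or forbids the first model's trades) is close to what the literature calls meta-labelling. A minimal sketch on synthetic data, assuming sklearn; the dataset, model choices, and fold count are all illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict, train_test_split

# Synthetic features standing in for market inputs; y is the trade-direction label.
X, y = make_classification(n_samples=3000, n_features=10, flip_y=0.2, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)

# 1st model: predicts direction.
primary = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)

# Meta-labels: 1 where out-of-fold predictions of the primary model were correct.
oof = cross_val_predict(RandomForestClassifier(n_estimators=100, random_state=1),
                        X_tr, y_tr, cv=5)
meta_y = (oof == y_tr).astype(int)

# 2nd model: learns to separate good signals from bad, allowing or forbidding trades.
meta = RandomForestClassifier(n_estimators=100, random_state=2).fit(X_tr, meta_y)

allowed = meta.predict(X_te) == 1
base_acc = (primary.predict(X_te) == y_te).mean()
filt_acc = (primary.predict(X_te)[allowed] == y_te[allowed]).mean()
print(f"all signals: acc {base_acc:.2f}; allowed only: acc {filt_acc:.2f} "
      f"({allowed.sum()} of {len(y_te)} trades)")
```

The key detail is that the meta-labels come from out-of-fold predictions: a forest scores near 100% on its own training set, so in-sample correctness labels would be uninformative.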
 
mytarmailS #:
A fixed stop and take, a sliding window, tabular data on input - all that doesn't work on strongly non-stationary data

at the end of the day, the trader wants to make money from the noise... possible cyclical fluctuations can only interest the investor over the long term - and not without an understanding of financial interrelationships, not just simple statistics... imho, modelling the noise is more interesting (for the trader) but riskier (for his trading)... - the usual risk/return trade-off

p.s.

except that filtering the working noise out from the non-working noise is a real challenge (i.e. separating the noise pollution from the noise)... I saw an article somewhere saying one should look for a Signal/Noise ratio > 2 (for the working noise) - it looks like an ordinary oscillator wound around the trend component of the TS model... everything is trivial (as beginners are taught: 1 trend indicator, 1 oscillator) - and within such a common frame of reference you can plug in whatever information and calculations a trader is more inclined to trust; this is exactly where subjectivity enters the TS ... imho ... And this triviality only needs to be digitized in the TS model so that a robot does the trading, instead of the trader standing in front of the terminal for days

Time Series - An Introduction
  • www.machinelearningmastery.ru
Articles, questions and answers on machine learning, neural networks, and artificial intelligence
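Since the cited article isn't identified, here is one concrete (and admittedly simplified) reading of the Signal/Noise > 2 rule of thumb: split the series into a trend component and an oscillator-like residual and compare their spreads. The decomposition and the ratio definition below are my assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic price series: a slow drift (the "signal") plus bar-to-bar noise.
n = 1000
trend = np.cumsum(rng.normal(0.05, 0.02, n))  # persistent drift
noise = rng.normal(0.0, 0.5, n)               # non-working noise
price = trend + noise

# Trend component via a simple moving average; the residual plays the oscillator role.
window = 50
kernel = np.ones(window) / window
smooth = np.convolve(price, kernel, mode="valid")
residual = price[window - 1:] - smooth

# One simplistic reading of Signal/Noise: spread of the trend vs spread of the residual.
snr = np.std(smooth - smooth.mean()) / np.std(residual)
print(f"S/N = {snr:.2f}, tradeable under the >2 rule of thumb: {snr > 2}")
```

With a weaker drift or louder noise the same ratio drops below 2, which is the situation the rule of thumb is meant to screen out.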
 
JeeyCi #:

Maybe the 2nd model isn't needed here at all? - Cross Validation and Grid Search for Model Selection ...

but maybe the confusion matrix alone will answer your 2nd question (the purpose of the 2nd model in your idea)...

... or

... I just doubt you need the 2nd model ... imho

So the lady thinks we don't know what cross-validation is? )) A thousand facepalms...

And the "article" is simply a masterpiece ))))

1) For random forest you don't need to do cross-validation - the rule construction itself does this, because it is random...

2) For random forest you don't need to normalize the features - the forest works with raw features.

This is below rock bottom.
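For what it's worth, both claims can be checked directly with sklearn (assuming that is the toolchain in question): the "built-in" validation of a forest is the out-of-bag estimate - whether it fully replaces cross-validation is debatable - and tree splits depend only on the ordering of feature values, so linear rescaling changes nothing. A sketch on synthetic data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# 1) Out-of-bag estimate: each tree is validated on the rows its bootstrap sample
#    missed, giving a "free" generalization estimate without a separate CV loop.
rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0).fit(X, y)
print(f"OOB score: {rf.oob_score_:.2f}")

# 2) Scale invariance: splits depend only on the ordering of feature values,
#    so a linear rescaling of the inputs leaves the predictions unchanged.
rf_scaled = RandomForestClassifier(n_estimators=200, random_state=0).fit(X * 100.0 + 7.0, y)
same = bool((rf.predict(X) == rf_scaled.predict(X * 100.0 + 7.0)).all())
print(f"identical predictions on raw vs rescaled features: {same}")
```

Note the OOB estimate validates a model trained on one sample of the data, while k-fold cross-validation also probes stability across resamplings, so the two are related but not interchangeable.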
 
mytarmailS #:

1) you don't need cross-validation for random forest

I wasn't trying to answer your question - you still can't read... (( - I have long doubted your ability to analyse what you read (or rather noted its absence), as well as your analysis of your own trading and its automation (it's not just words you mix up - you garble the whole context)

p.s.

trend analysis is nothing without a prior dependency analysis... time-series analysis is the last thing done in statistics, after the other analyses... -- you can't just settle for your time series being non-stationary without looking for dependencies... - only snap back and snigger (probably thinking you're having fun?) -- don't bother answering a rhetorical question
