Discussing the article: "Self Optimizing Expert Advisor with MQL5 And Python (Part III): Cracking The Boom 1000 Algorithm"

 

Check out the new article: Self Optimizing Expert Advisor with MQL5 And Python (Part III): Cracking The Boom 1000 Algorithm.

In this series of articles, we discuss how we can build Expert Advisors capable of autonomously adjusting themselves to dynamic market conditions. In today's article, we will attempt to tune a deep neural network to Deriv's synthetic markets.

We will analyze all of Deriv’s synthetic markets individually, starting with their best-known synthetic market, the Boom 1000. The Boom 1000 is notorious for its volatile and unpredictable behavior. The market is characterized by slow, short, equally sized bear candles that are followed at random by violent, skyscraper-sized bull candles. The bull candles are especially challenging to manage because the ticks that make up the candle are normally not sent to the client terminal, meaning that every stop loss in their path is breached with guaranteed slippage.
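To make that shape concrete, here is a rough pandas sketch (an illustration for this discussion, not code from the article; it assumes a DataFrame with standard "open" and "close" columns) that flags the skyscraper bull candles against the typical bear-candle body:

import pandas as pd

def flag_spikes(ohlc: pd.DataFrame, multiple: float = 10.0) -> pd.Series:
    """Mark bull candles whose body dwarfs the median bear-candle body."""
    body = ohlc["close"] - ohlc["open"]
    median_bear_body = body[body < 0].abs().median()  # typical slow bear step
    return body > multiple * median_bear_body         # skyscraper bull spikes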



Therefore, most successful traders have built strategies loosely based on taking only buy opportunities when trading the Boom 1000. Recall that the Boom 1000 can fall for 20 minutes on the M1 chart and then retrace that entire movement in a single candle! Given its overpowered bullish nature, successful traders use this to their advantage by attributing more weight to buy setups on the Boom 1000 than they would to sell setups.
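For illustration only (again, my own sketch, not code from the article): one simple way to encode that bullish bias is to require sell setups to clear a higher bar than buy setups, for example by discounting the sell score before acting on it:

# Hypothetical illustration of a bullish bias on the Boom 1000.
def biased_signal(buy_score: float, sell_score: float, sell_discount: float = 0.5) -> int:
    """Return 1 (buy), -1 (sell) or 0 (wait), down-weighting sell setups."""
    adjusted_sell = sell_score * sell_discount  # sells must be much stronger to fire
    if buy_score > adjusted_sell and buy_score > 0.5:
        return 1
    if adjusted_sell > buy_score and adjusted_sell > 0.5:
        return -1
    return 0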

Author: Gamuchirai Zororo Ndawana

 
MetaQuotes:
We will analyse all Deriv synthetic markets individually, starting with the most famous one - Boom 1000.

Thank you very much for the article! I have been looking at these indices for a long time, but I didn't know how to approach them.

Please continue!

 

#Instantiate the model
model = MLPClassifier(hidden_layer_sizes=(30,10),max_iter=200)

#Cross validate the model
for i,(train,test) in enumerate(tscv.split(train_X)):
    model.fit(train_X.loc[train[0]:train[-1],:],ohlc_train_y.loc[train[0]:train[-1]])
    validation_accuracy.iloc[i,0] = accuracy_score(ohlc_train_y.loc[test[0]:test[-1]],model.predict(train_X.loc[test[0]:test[-1],:]))


---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[25], line 5
      3 #Cross validate the model
      4 for i,(train,test) in enumerate(tscv.split(train_X)):
----> 5     model.fit(train_X.loc[train[0]:train[-1],:],ohlc_train_y.loc[train[0]:train[-1]])
      6     validation_accuracy.iloc[i,0] = accuracy_score(ohlc_train_y.loc[test[0]:test[-1]],model.predict(train_X.loc[test[0]:test[-1],:]))

File c:\Python\ocrujenie\.ordi\Lib\site-packages\sklearn\base.py:1389, in _fit_context.<locals>.decorator.<locals>.wrapper(estimator, *args, **kwargs)
   1382 estimator._validate_params()
   1384 with config_context(
   1385     skip_parameter_validation=(
   1386         prefer_skip_nested_validation or global_skip_validation
   1387     )
   1388 ):
-> 1389     return fit_method(estimator, *args, **kwargs)

File c:\Python\ocrujenie\.ordi\Lib\site-packages\sklearn\neural_network\_multilayer_perceptron.py:754, in BaseMultilayerPerceptron.fit(self, X, y)
    736 @_fit_context(prefer_skip_nested_validation=True)
    737 def fit(self, X, y):
    738     """Fit the model to data matrix X and target(s) y.
    739
    740     Parameters
    (...)
    752         Returns a trained MLP model.
    753     """

...

    476     "Found input variables with inconsistent numbers of samples: %r"
    477     % [int(l) for l in lengths]
    478 )

ValueError: Found input variables with inconsistent numbers of samples: [4139, 4133]



I have tried several tools, and on each of them your model gives an error because of inconsistent sizes of the input data (X) and the target variable (y).
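If it helps, the error message itself shows the mismatch: train_X has 4139 rows while ohlc_train_y has 4133, so X and y disagree before the split ever runs, most likely because the target was shifted forward by the forecast horizon. Below is a sketch of one possible workaround; this is my own guess, not the author's code, and it assumes the surplus rows sit at the end of train_X and that tscv and validation_accuracy are set up as shown. It trims both objects to a common length and indexes with .iloc, since TimeSeriesSplit returns positional indices, and .loc label slicing can grab different row counts when the index has gaps:

# Sketch of a possible fix (my assumptions, not the article's exact code).
import pandas as pd
from sklearn.model_selection import TimeSeriesSplit
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# train_X (DataFrame) and ohlc_train_y (Series) are assumed to be loaded already.
n = min(len(train_X), len(ohlc_train_y))            # trim both to a common length
train_X = train_X.iloc[:n].reset_index(drop=True)
ohlc_train_y = ohlc_train_y.iloc[:n].reset_index(drop=True)

splits = 5                                           # assumed fold count
tscv = TimeSeriesSplit(n_splits=splits)
validation_accuracy = pd.DataFrame(index=range(splits), columns=["MLP Accuracy"])
model = MLPClassifier(hidden_layer_sizes=(30, 10), max_iter=200)

for i, (train, test) in enumerate(tscv.split(train_X)):
    # .iloc consumes the positional indices that TimeSeriesSplit returns
    model.fit(train_X.iloc[train], ohlc_train_y.iloc[train])
    validation_accuracy.iloc[i, 0] = accuracy_score(
        ohlc_train_y.iloc[test], model.predict(train_X.iloc[test])
    )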