Discussing the article: "Self Optimizing Expert Advisor with MQL5 And Python (Part III): Cracking The Boom 1000 Algorithm"

 

Check out the new article: Self Optimizing Expert Advisor with MQL5 And Python (Part III): Cracking The Boom 1000 Algorithm.

In this series of articles, we discuss how we can build Expert Advisors capable of autonomously adjusting themselves to dynamic market conditions. In today's article, we will attempt to tune a deep neural network to Deriv's synthetic markets.

We will analyze all of Deriv’s synthetic markets individually, starting with their best known synthetic market, the Boom 1000. The Boom 1000 is notorious for its volatile and unpredictable behavior. The market is characterized by slow, short, equally sized bear candles that are randomly followed by violent, skyscraper-sized bull candles. The bull candles are especially challenging to manage because the ticks that form the candle normally aren’t sent to the client terminal, meaning that stop losses are breached with guaranteed slippage every time.
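To make that candle anatomy concrete, here is a minimal sketch of how such spike candles could be flagged in OHLC data. It is illustrative only: the ohlc DataFrame, its column names, and the 10x multiplier are assumptions, not values from the article.

import pandas as pd

# Hypothetical sketch: flag Boom 1000 spike candles in an OHLC DataFrame.
# The DataFrame, its column names and the 10x multiplier are illustrative assumptions.
def flag_spikes(ohlc: pd.DataFrame, multiplier: float = 10.0) -> pd.Series:
    body = ohlc["close"] - ohlc["open"]
    typical_bear_body = body[body < 0].abs().median()
    # A spike is a bull candle whose body dwarfs the typical bear-candle body
    return (body > 0) & (body > multiplier * typical_bear_body)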



Most successful traders have therefore built strategies loosely based on taking only buy opportunities when trading the Boom 1000. Recall that the Boom 1000 can fall for 20 minutes on the M1 chart and retrace that entire move in a single candle! Given this overpowering bullish nature, successful traders use it to their advantage by attributing more weight to buy setups on the Boom 1000 than they would to sell setups, as the sketch below illustrates.
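One way to express that asymmetry in code is to demand less confidence from a model before buying than before selling. This is a minimal sketch under stated assumptions: the threshold values and the probability input are hypothetical, not taken from the article.

# Hypothetical illustration of a buy-biased decision rule for the Boom 1000.
# The threshold values are placeholders, not values from the article.
BUY_THRESHOLD = 0.55   # accept buy setups at modest confidence
SELL_THRESHOLD = 0.80  # demand much stronger evidence before selling

def biased_signal(prob_up: float) -> int:
    """Return 1 (buy), -1 (sell) or 0 (no trade) given P(price rises)."""
    if prob_up >= BUY_THRESHOLD:
        return 1
    if (1.0 - prob_up) >= SELL_THRESHOLD:
        return -1
    return 0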

Author: Gamuchirai Zororo Ndawana

 
MetaQuotes:
We will analyze all of Deriv's synthetic markets individually, starting with their best known synthetic market, the Boom 1000.

Thank you very much for the article! I have been looking at these indices for a long time, but I didn't know how to approach them.

Please continue!

 
#Instantiate the model
model = MLPClassifier(hidden_layer_sizes=(30, 10), max_iter=200)

#Cross validate the model
for i, (train, test) in enumerate(tscv.split(train_X)):
    model.fit(
        train_X.loc[train[0]:train[-1], :],
        ohlc_train_y.loc[train[0]:train[-1]]
    )
    validation_accuracy.iloc[i, 0] = accuracy_score(
        ohlc_train_y.loc[test[0]:test[-1]],
        model.predict(train_X.loc[test[0]:test[-1], :])
    )
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[25], line 5
3 #Cross validate the model
4 for i,(train,test) in enumerate(tscv.split(train_X)):
----> 5 model.fit(train_X.loc[train[0]:train[-1],:],ohlc_train_y.loc[train[0]:train[-1]])
6 validation_accuracy.iloc[i,0] = accuracy_score(ohlc_train_y.loc[test[0]:test[-1]],model.predict(train_X.loc[test[0]:test[-1],:]))

File c:\Python\ocrujenie\.ordi\Lib\site-packages\sklearn\base.py:1389, in _fit_context.<locals>.decorator.<locals>.wrapper(estimator, *args, **kwargs)
1382 estimator._validate_params()
1384 with config_context(
1385 skip_parameter_validation=(
1386 prefer_skip_nested_validation or global_skip_validation
1387 )
1388 ):
-> 1389 return fit_method(estimator, *args, **kwargs)

File c:\Python\ocrujenie\.ordi\Lib\site-packages\sklearn\neural_network\_multilayer_perceptron.py:754, in BaseMultilayerPerceptron.fit(self, X, y)
736 @_fit_context(prefer_skip_nested_validation=True)
737 def fit(self, X, y):
738 """Fit the model to data matrix X and target(s) y.
739
740 Parameters
(...)
752 Returns a trained MLP model.
753 """
...

476 "Found input variables with inconsistent numbers of samples: %r"
477 % [int(l) for l in lengths]
478 )
ValueError: Found input variables with inconsistent numbers of samples: [4139, 4133]


I have tried several tools, and in each of them your model gives an error because of inconsistent sizes of the input data (X) and the target variable (y).
 
Janis Ozols #:

Thank you very much for the article! I have been looking at these indices for a long time, but I didn't know how to approach them.

Please continue!

You're welcome, Janis.

I will definitely continue. There's a lot to cover, but I will make time.

 
Aliaksandr Kazunka #:

I have tried several tools, and in each of them your model gives an error because of inconsistent sizes of the input data (X) and the target variable (y).

Hello Aliaksandr, you can instead use the code as a template guide and make the necessary adjustments on your side; one such adjustment is sketched below. I'd also recommend trying different indicators and different variations of the general idea in the article. That will help us get to the truth faster.
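For readers hitting the same ValueError: one likely cause is that tscv.split() returns positional indices, while the article's loop slices with .loc, which is label-based. If either frame's index is no longer a contiguous 0..n-1 range (for example, after rows were filtered out), the label slices of X and y can come back with different lengths. A minimal sketch of the adjustment, assuming train_X and ohlc_train_y from the article are equally long and row-aligned, is to slice positionally with .iloc:

import pandas as pd
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import accuracy_score

# Sketch only: n_splits=5 is an assumption, and train_X / ohlc_train_y are
# assumed to be built from the same rows, as in the article.
tscv = TimeSeriesSplit(n_splits=5)
model = MLPClassifier(hidden_layer_sizes=(30, 10), max_iter=200)
validation_accuracy = pd.DataFrame(index=range(5), columns=["Accuracy"])

for i, (train, test) in enumerate(tscv.split(train_X)):
    # .iloc slices by position, matching the positional indices from tscv.split()
    model.fit(train_X.iloc[train], ohlc_train_y.iloc[train])
    validation_accuracy.iloc[i, 0] = accuracy_score(
        ohlc_train_y.iloc[test],
        model.predict(train_X.iloc[test])
    )

Note that .iloc only removes the slicing mismatch; if X and y were built from different row sets in the first place, they still need to be aligned first, which is what the snippet further down does for X.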

 
Aliaksandr Kazunka #:

I have tried several tools, and in each of them your model gives an error because of inconsistent sizes of the input data (X) and the target variable (y).

# Keep the index consistent; otherwise pandas rebuilds the index when rows have been filtered out
X = pd.DataFrame(RobustScaler().fit_transform(boom_1000.loc[:, predictors]),
                 columns=predictors, index=boom_1000.index)
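For completeness, the same index-preserving treatment can be applied to the target so that X and y keep identical row sets. This is only a sketch: the "close" column name and the 20-step look-ahead horizon are hypothetical placeholders, not values from the article.

# Sketch: build y on the same index as X, then trim the unlabeled tail from
# BOTH frames so they stay row-aligned. The "close" column and the 20-step
# look-ahead are placeholder assumptions.
look_ahead = 20
y = (boom_1000["close"].shift(-look_ahead) > boom_1000["close"]).astype(int)

# The last `look_ahead` rows have no future close to label against
X = X.iloc[:-look_ahead]
y = y.iloc[:-look_ahead]

assert (X.index == y.index).all()  # X and y now have consistent sample counts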