Machine learning in trading: theory, models, practice and algo-trading - page 3695

 
fxsaber #:

This is a logical hypothesis given the chart provided.

I'm not sure what spread and dollar you mean. There are two XAUUSD_bid series; the higher this price is, the more favourable.

If the close bars were built on ask, you would get a negative spread, not a positive one, because the real symbol is almost always more profitable to trade/exchange than its synthetic counterpart.

But for other periods there is no such difference; everything hovers around zero. Next I will take quotes from another broker and compare; I expect the result will be the same.

To avoid getting bogged down in all sorts of conspiracy theories, I will finish writing the trading system next week and check it :).
 

Another live server. Everything here is in relative order.

You are right, the quotes are broken on the demo. Everything is completely different there. Now it's even better :)


 
Maxim Dmitrievsky #:

Another live server. Everything here is in relative order.

You are right, the quotes are broken on the demo. Everything is completely different. Now it's even better :)

If you open the legs at different brokers, it works out very well. I need to think through the most convenient way to set it up.

I might dig up my old contraption and rewrite it for this case :) Or write separate software with connectors. I don't feel like writing code.



 
And if you trade with limit orders, how do you control the limit orders that didn't fill on the other legs?
 

An article about hidden Markov models. I like the hmmlearn library; well done.

https://www.mql5.com/ru/articles/17917
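Since the article centers on hidden Markov models, here is a minimal numpy sketch of the forward algorithm, the likelihood computation that libraries such as hmmlearn run internally when fitting a GaussianHMM. The two-state transition matrix and Gaussian emission parameters below are made-up illustration values (a "calm" and a "volatile" return regime), not taken from the article.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, means, stds):
    """Log-likelihood of observations under a Gaussian-emission HMM
    via the forward algorithm with log-sum-exp scaling."""
    # Emission log-densities: log N(obs_t | mean_k, std_k) for each state k
    log_b = (-0.5 * ((obs[:, None] - means) / stds) ** 2
             - np.log(stds * np.sqrt(2 * np.pi)))
    log_alpha = np.log(pi) + log_b[0]          # initialisation, t = 0
    for t in range(1, len(obs)):
        # recursion: sum over previous states, done stably in log space
        m = log_alpha.max()
        log_alpha = np.log(np.exp(log_alpha - m) @ A) + m + log_b[t]
    m = log_alpha.max()
    return m + np.log(np.exp(log_alpha - m).sum())

# Hypothetical two-regime setup: calm vs volatile returns
pi = np.array([0.5, 0.5])                  # initial state probabilities
A = np.array([[0.95, 0.05],                # sticky regime transitions
              [0.10, 0.90]])
means = np.array([0.0, 0.0])
stds = np.array([0.5, 2.0])

rng = np.random.default_rng(0)
obs = rng.normal(0.0, 0.5, size=200)       # data resembling the calm regime
print(forward_log_likelihood(obs, pi, A, means, stds))
```

In hmmlearn this whole computation is hidden behind `GaussianHMM.fit` / `score`; the sketch only shows where the likelihood comes from.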

 
There's a reason to look into spiking neurons (just curious).

snnTorch Documentation (snntorch.readthedocs.io)

The brain is the perfect place to look for inspiration to develop more efficient neural networks. One of the main differences from modern deep learning is that the brain encodes information in spikes rather than continuous activations. snnTorch is a Python package for performing gradient-based learning with spiking neural networks.
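To make the "spikes rather than continuous activations" point concrete, here is a minimal numpy sketch of a leaky integrate-and-fire (LIF) neuron, the basic unit that snnTorch builds its layers from. The decay factor and threshold are arbitrary illustration values, not snnTorch defaults.

```python
import numpy as np

def lif_run(inputs, beta=0.9, threshold=1.0):
    """Simulate a single leaky integrate-and-fire neuron.

    The membrane potential decays by `beta` each step, accumulates the
    input current, and emits a binary spike (then resets by the
    threshold) whenever it crosses `threshold`."""
    mem = 0.0
    spikes, potentials = [], []
    for current in inputs:
        mem = beta * mem + current      # leaky integration
        spike = 1 if mem >= threshold else 0
        if spike:
            mem -= threshold            # soft reset after spiking
        spikes.append(spike)
        potentials.append(mem)
    return np.array(spikes), np.array(potentials)

# Constant input current: the neuron charges up and fires periodically
spikes, potentials = lif_run(np.full(20, 0.3))
print(spikes)
```

The output is a binary spike train rather than a real-valued activation, which is exactly the difference the snnTorch docs describe; gradient-based training then needs surrogate gradients because the spike function is non-differentiable.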
 

I have been using pieces.app for over a year now; it is probably the most useful and (so far) free coding app that doesn't require a VPN.

It is limited to coding tasks; you can't generate pictures.

Today they added new models (in addition to Claude 3.7).

Later I will compare the new models on a rather complex task.



 

The test subject will be this rather complex task (with code from one of the articles on causal inference). Perhaps there will be some clarifying questions. Then I will test each model's code and compare it with the original. The goal is to improve the quality of the output models (on new data) in the trading system, which depends directly on this function.

The goal is quite specific, because if you just ask the models to "make it better", they start offering many different non-working variants, including delusional ones.

Can you modify this code so that it uses conformal predictions to identify bad labels?

# Imports required by this snippet (pandas, scikit-learn, CatBoost);
# get_prices / get_features / get_labels_one_direction and hyper_params
# are helpers defined elsewhere in the article's code.
import pandas as pd
from sklearn.model_selection import train_test_split
from catboost import CatBoostClassifier

def meta_learners(models_number: int, iterations: int, depth: int, bad_samples_fraction: float):
    dataset = get_labels_one_direction(get_features(get_prices()),
                                       markup=hyper_params['markup'],
                                       min=1,
                                       max=15,
                                       direction=hyper_params['direction'])
    data = dataset[(dataset.index < hyper_params['forward']) & (dataset.index > hyper_params['backward'])].copy()

    BAD_WAIT = pd.DatetimeIndex([])
    BAD_TRADE = pd.DatetimeIndex([])

    for i in range(models_number):
        # each iteration trains a model on a fresh 30% subsample
        sampled = data.sample(frac=0.3)
        X = sampled[data.columns[1:-2]]
        y = sampled['labels']
        X_train, X_val, y_train, y_val = train_test_split(
            X, y, train_size=0.5, test_size=0.5, shuffle=True)

        # learn a debias model with the train and validation subsets
        meta_m = CatBoostClassifier(iterations=iterations,
                                    depth=depth,
                                    custom_loss=['Accuracy'],
                                    eval_metric='Accuracy',
                                    verbose=False,
                                    use_best_model=True)
        meta_m.fit(X_train, y_train, eval_set=(X_val, y_val), plot=False)

        coreset = X.copy()
        coreset['labels'] = y
        # hard 0/1 prediction from the class-1 probability
        coreset['labels_pred'] = (meta_m.predict_proba(X)[:, 1] >= 0.5).astype(int)

        # record indices misclassified in this iteration (candidate bad labels)
        coreset_w = coreset[coreset['labels'] == 0]
        coreset_t = coreset[coreset['labels'] == 1]
        diff_negatives_w = coreset_w['labels'] != coreset_w['labels_pred']
        diff_negatives_t = coreset_t['labels'] != coreset_t['labels_pred']
        BAD_WAIT = BAD_WAIT.append(diff_negatives_w[diff_negatives_w].index)
        BAD_TRADE = BAD_TRADE.append(diff_negatives_t[diff_negatives_t].index)

    # flag samples misclassified more often than the scaled average count
    to_mark_w = BAD_WAIT.value_counts()
    to_mark_t = BAD_TRADE.value_counts()
    marked_idx_w = to_mark_w[to_mark_w > to_mark_w.mean() * bad_samples_fraction].index
    marked_idx_t = to_mark_t[to_mark_t > to_mark_t.mean() * bad_samples_fraction].index

    data['meta_labels'] = 1.0
    data.loc[data.index.isin(marked_idx_w), 'meta_labels'] = 0.0
    data.loc[data.index.isin(marked_idx_t), 'meta_labels'] = 0.0
    # data.loc[data.index.isin(marked_idx_t), 'labels'] = 0.0

    return data
 

GPT 4.1 offered an option, but cannot write working code. It can't fix the errors that appear either.

But it suggested an interesting library for conformal predictions, which it tried to "put into the code". No credit so far, but thanks for the library, of course.

MAPIE - Model Agnostic Prediction Interval Estimator (mapie.readthedocs.io)

MAPIE is an open-source Python library for quantifying uncertainties and controlling the risks of machine learning models. It is a scikit-learn-contrib project that lets you easily compute conformal prediction intervals (or prediction sets) with controlled (or guaranteed) marginal coverage.
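For reference, the idea behind conformal label filtering can be sketched without MAPIE itself. Split-conformal classification computes a nonconformity score on a held-out calibration set (here, one minus the probability assigned to the true class), takes its quantile, and flags samples whose observed label falls outside the resulting prediction set. The classifier probabilities, alpha level, and data below are all made up for illustration; this is a sketch of the technique, not the code GPT produced.

```python
import numpy as np

def conformal_bad_labels(proba_cal, y_cal, proba_new, y_new, alpha=0.1):
    """Split-conformal check: flag samples whose observed label is NOT
    in the (1 - alpha) conformal prediction set, i.e. likely bad labels.

    proba_*: arrays of shape (n, n_classes) with predicted probabilities.
    """
    n = len(y_cal)
    # Nonconformity score: 1 - probability assigned to the true class
    scores = 1.0 - proba_cal[np.arange(n), y_cal]
    # Conformal quantile with the standard finite-sample correction
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    # A label is "in the set" if its nonconformity score is <= q
    new_scores = 1.0 - proba_new[np.arange(len(y_new)), y_new]
    return new_scores > q   # True -> observed label looks bad

# Toy example: a confident, well-calibrated 2-class model
rng = np.random.default_rng(1)
y_cal = rng.integers(0, 2, 200)
p_true = rng.uniform(0.85, 0.99, 200)     # high probability on the true class
proba_cal = np.where(y_cal[:, None] == np.arange(2),
                     p_true[:, None], (1 - p_true)[:, None])

y_new = np.array([0, 1, 1])
proba_new = np.array([[0.95, 0.05],   # label 0, model agrees
                      [0.10, 0.90],   # label 1, model agrees
                      [0.97, 0.03]])  # label 1, model strongly disagrees
flags = conformal_bad_labels(proba_cal, y_cal, proba_new, y_new)
print(flags)
```

In the `meta_learners` function above, such flags could replace the hard 0.5 misclassification threshold when populating BAD_WAIT / BAD_TRADE, which appears to be what the request to the model was asking for.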
 
Maxim Dmitrievsky #:

GPT 4.1 offered an option, but cannot write working code. It can't fix the errors that occur either.

But it suggested an interesting library for conformal predictions, which it tried to "put into the code". No credit so far, but thanks for the library, of course.

Maxim, how do you communicate with this AI? Please allow me to send you a private message.