Machine learning in trading: theory, models, practice and algo-trading - page 2477

 

If you are interested, here is a script for balancing classes when solving a classification problem.

The balancing is based on resampling the original sample with a Gaussian mixture model. I recommend using it, because in local datasets the class labels are rarely balanced.

Saves a lot of time and nerves.

import pandas as pd
from sklearn import mixture
import numpy as np

class GMM(object):
    def __init__(self, n_components):
        self.gmm = mixture.GaussianMixture(
                                            n_components=n_components,
                                            covariance_type='full', verbose=0,
                                            max_iter=1000,
                                            tol=1e-4,
                                            init_params='kmeans')
        self.name_col = []
        # print("GMM_model -- create")


    def fit_data(self, data):
        self.name_col = data.columns
        self.gmm.fit(data)
        # print("GMM_model -- fit complete")

    def get_samples(self, n_samp):
        generated = self.gmm.sample(n_samp)
        gen = pd.DataFrame(generated[0], columns=self.name_col)

        return gen

def get_balance_dataset(X, Y, gmm=30, num_samples=200, frac=1):
    '''
    X -             features
    Y -             targets [0,1]
    gmm -           number of mixture components
    num_samples -   number of samples to generate for each class
    frac -          fraction of rows randomly drawn from the original sample
    '''
    name_targ = Y.columns
    X_out = pd.DataFrame()
    Y_out = pd.DataFrame()
    for index,name in enumerate(name_targ):
        prt_data = pd.concat([X, Y[name]], axis=1)
        if frac!=1:
            prt_data = prt_data[prt_data[name] == 1].drop(columns=[name]).sample(frac=frac)
        else:
            prt_data = prt_data[prt_data[name] == 1].drop(columns=[name])

        gmm_1 = GMM(n_components=gmm)
        gmm_1.fit_data(prt_data)
        sig_X = gmm_1.get_samples(num_samples)
        sig_Y = np.zeros((num_samples, len(name_targ)))
        sig_Y[:, index] = 1
        sig_Y = pd.DataFrame(sig_Y, columns=name_targ)
        X_out = pd.concat([X_out, sig_X], axis=0)
        Y_out = pd.concat([Y_out, sig_Y], axis=0)

    return X_out.to_numpy(), Y_out.to_numpy()
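For reference, here is a minimal self-contained sketch of the same idea on synthetic data (the column names, sample counts, and `n_components=3` are illustrative choices, not part of the original script): fit one `GaussianMixture` per class and draw an equal number of samples from each.

```python
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic imbalanced dataset: 900 rows of class_0 vs. 100 rows of class_1.
X = pd.DataFrame(rng.normal(size=(1000, 2)), columns=["f1", "f2"])
Y = pd.DataFrame({"class_0": 0, "class_1": 0}, index=X.index)
Y.iloc[:900, 0] = 1   # majority class
Y.iloc[900:, 1] = 1   # minority class

balanced_X, balanced_Y = [], []
for idx, name in enumerate(Y.columns):
    # Fit a mixture model on the rows of this class only.
    gm = GaussianMixture(n_components=3, max_iter=1000, tol=1e-4, random_state=0)
    gm.fit(X[Y[name] == 1])
    samples, _ = gm.sample(200)            # equal sample count per class
    balanced_X.append(samples)
    onehot = np.zeros((200, len(Y.columns)))
    onehot[:, idx] = 1                     # one-hot target for this class
    balanced_Y.append(onehot)

X_bal = np.vstack(balanced_X)
Y_bal = np.vstack(balanced_Y)
```

After this, `X_bal` holds 200 generated rows per class, so both classes are represented equally.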
 
iwelimorn #:

If you are interested, here is a script for balancing classes when solving a classification problem.

The balancing is based on resampling the original sample with a Gaussian mixture model. I recommend using it, because in local datasets the class labels are rarely balanced.

Saves a lot of time and nerves.

Aren't there standard libraries in Python for this?

 
mytarmailS #:

Aren't there any standard libraries in Python for this?

Probably, there are such libraries, but I haven't come across them.
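For what it's worth, such libraries do exist: the third-party imbalanced-learn package provides SMOTE and related samplers, and plain scikit-learn ships `sklearn.utils.resample` for simple over- or under-sampling. A minimal sketch with `resample` (the data here is synthetic):

```python
import numpy as np
from sklearn.utils import resample

rng = np.random.default_rng(1)
X = rng.normal(size=(110, 3))
y = np.array([0] * 100 + [1] * 10)   # imbalanced labels: 100 vs. 10

# Over-sample the minority class (with replacement) up to the majority size.
X_min, y_min = X[y == 1], y[y == 1]
X_up, y_up = resample(X_min, y_min, replace=True, n_samples=100, random_state=0)

X_bal = np.vstack([X[y == 0], X_up])
y_bal = np.concatenate([y[y == 0], y_up])
```

Unlike the GMM approach above, this duplicates existing minority rows rather than generating new ones.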

 
iwelimorn #:

If you are interested, here is a script for balancing classes when solving a classification problem.

The balancing is based on resampling the original sample using Gaussian mixture model. I advise to use it, because in local datasets the class labels are rarely balanced.

Saves a lot of time and nerves.

I think it should be specified that this applies when solving classification problems with neural networks.
Forests and boosting don't require balancing.
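As an aside, many scikit-learn classifiers can also compensate for imbalance without any resampling at all via `class_weight='balanced'`, which reweights classes inversely to their frequencies. A quick sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic dataset with roughly a 90/10 class split.
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)

# 'balanced' weights each class inversely proportional to its frequency,
# so no explicit resampling step is needed before fitting.
clf = RandomForestClassifier(class_weight="balanced", random_state=0).fit(X, y)
```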

 
iwelimorn #:

Probably there are such libraries, but I haven't come across any.

I see... It's just that R has a lot of tools for ML, and Python is positioned as a language for ML, yet it has nothing but two or three half-finished libraries.

It's kind of confusing!

 
elibrarius #:

I think it should be specified that this applies when solving classification problems with neural networks.
Forests and boosting don't require balancing.

Perhaps.

 
mytarmailS #:

I see... It's just that R has a lot of tools for ML, and Python is positioned as a language for ML, yet it has nothing but two or three half-finished libraries.

It's kind of confusing!

I'm not familiar with R; in ML I'm an amateur, just at the start of the Dunning-Kruger valley of despair.

 
mytarmailS #:

Aren't there standard libraries in Python for this?

It uses a library under the hood, it's just wrapped.
 
iwelimorn #:

If you are interested, here is a script for balancing classes when solving a classification problem.

The balancing is based on resampling the original sample with a Gaussian mixture model. I recommend using it, because in local datasets the class labels are rarely balanced.

Saves a lot of time and nerves.

In my opinion, the effect here is more standardization than balancing. Plus, sampling from distributions helps against overfitting.
 
iwelimorn #:

just at the start of the Dunning-Kruger valley of despair

))) It's going to be okay!