Machine learning in trading: theory, models, practice and algo-trading - page 2268

 
mytarmailS:

I don't know how to mkl ((

5k train

40k test

Try applying my criterion together with your GMM; it should do a better job of finding working models.

I already use R^2 to select

I get the same bumps, but better :)
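For illustration, selecting candidate models by out-of-sample R^2 on a 5k train / 40k test split might look like this (a minimal sketch: the sklearn models, the random data and the threshold are assumed, not taken from the post):

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

#  random data just to make the sketch runnable
X = np.random.randn(45000, 10)
y = 0.5 * X[:, 0] + 0.1 * np.random.randn(45000)

X_train, y_train = X[:5000], y[:5000]          # 5k train
X_test,  y_test  = X[5000:], y[5000:]          # 40k test

kept = []
for seed in range(10):                         # 10 candidate models, differing only by seed
    model = RandomForestRegressor(n_estimators=50, random_state=seed)
    model.fit(X_train, y_train)
    r2 = r2_score(y_test, model.predict(X_test))
    if r2 > 0.5:                               # arbitrary selection threshold
        kept.append((seed, r2))

print(kept)                                    # keep only the models that generalise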

 
Maxim Dmitrievsky:

Everything is as usual, only the labels are made with the averaging taken into account. The markup will be completely different, which is interesting.

I recently posted a book with an interesting idea: set the network weights by a classical method first, and then fine-tune them with training. I wonder if there are ways to combine supervised learning with reinforcement learning.
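A minimal sketch of that idea in PyTorch (the "classical" initialisation here is an equally weighted moving-average filter, chosen purely as an assumed example, and the data is random):

import torch
from torch import nn

window = 10
layer = nn.Linear(window, 1, bias=False)

#  classical initialisation: start from an equally weighted moving average
with torch.no_grad():
    layer.weight.fill_(1.0 / window)

#  fine-tuning with ordinary supervised training on toy data
x = torch.randn(256, window)
y = x.mean(dim=1, keepdim=True) + 0.1 * torch.randn(256, 1)

opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(layer(x), y)
    loss.backward()
    opt.step()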

 
Rorschach:

I recently posted a book with an interesting idea: set the network weights by a classical method first, and then fine-tune them with training. I wonder if there are ways to combine supervised learning with reinforcement learning.

These are all advanced analogues of the MA (moving average).

 
Maxim Dmitrievsky:

These are all advanced analogues of the MA (moving average).

Nets have an advantage in non-linearity and in the way their parameters are selected, but they are still the same filters.
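For illustration, a single neuron with fixed, equal weights is literally a simple moving average; training only tunes the filter coefficients and stacks non-linearities on top. A minimal PyTorch sketch (the toy price series is assumed):

import torch
from torch import nn

prices = torch.randn(100).cumsum(0)            # toy price series, just for the check

#  a 1-D convolution with uniform weights is exactly a simple moving average
window = 5
sma_filter = nn.Conv1d(1, 1, kernel_size=window, bias=False)
with torch.no_grad():
    sma_filter.weight.fill_(1.0 / window)

out = sma_filter(prices.view(1, 1, -1)).squeeze()
sma = torch.stack([prices[i:i + window].mean() for i in range(len(prices) - window + 1)])
print(torch.allclose(out, sma, atol=1e-5))     # True: the neuron reproduces the MA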

There is just a lot of negativity around reinforcement learning. Supervised nets show better results on self-driving cars, and the same goes for games. They even came up with the idea of training the net from the end of a level and moving the spawn point closer to the beginning. It's also interesting that the data scientist's experience is what decides. Unity made a game specifically for ML and set up a championship. A human on average gets to level 20. Taking the 2 newest net-based methods only got to level 4 on average, while the experts in the championship were able to show human-level results.

 
Rorschach:

Nets have an advantage in non-linearity and in the way their parameters are selected, but they are still the same filters.

There is just a lot of negativity around reinforcement learning. Supervised nets show better results on self-driving cars, and the same goes for games. They even came up with the idea of training the net from the end of a level and moving the spawn point closer to the beginning. It's also interesting that the data scientist's experience is what decides. Unity made a game specifically for ML and set up a championship. A human on average gets to level 20. Taking the 2 newest net-based methods only got to level 4 on average, while the experts in the championship were able to show human-level results.

There was an RL hype, but it's gone now... transformers and GANs are trending now.

 
Maxim Dmitrievsky:

There was an RL hype, but it's gone now... transformers and GANs are trending now.

The trend now is brains: people who know all the algorithms and know how to apply a specific algorithm to a specific task, rather than chasing trends...

If you need to win at Go, what the heck do you need those GANs for? And if you need to classify irises, what the heck do you need RL for?

Everything has its place!

 
mytarmailS:

The trend now is brains: people who know all the algorithms and know how to apply a specific algorithm to a specific task, rather than chasing trends...

If you need to win at Go, what the heck do you need those GANs for? And if you need to classify irises, what the heck do you need RL for?

Everything has its place!

You have a small mind and cannot see where and why.

 
Maxim Dmitrievsky:

There was an RL hype, but it's gone now... transformers and GANs are trending now.

GANs are interesting to try for generating artificial data.

It's a good idea to master this framework, then everything will go much faster.
 
Rorschach:

GANs are interesting to try for generating artificial data.

It's a good idea to master this framework, then everything will go much faster.

I wrote my own GAN, there's nothing complicated in it. It's not recurrent though, so I'll have to redo it.

An example in PyTorch:

declaration

import torch
from torch import nn

#  creating cGAN
class Discriminator(nn.Module):
    def __init__(self, input_vector):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(input_vector, 500),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(500, 250),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(250, 1),
            nn.Sigmoid()
        )

    def forward(self, x):
        return self.model(x)


class Generator(nn.Module):
    def __init__(self, input_vector):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(input_vector, 250),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(250, 500),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(500, input_vector)
        )

    def forward(self, x):
        return self.model(x)

training

#  the original post does not define these; example values are assumed here
BATCH_SIZE = 32
NUM_EPOCHS = 300
INPUT_VECTOR = 15                # INPUT_VECTOR + 1 must equal the number of feature columns used
lr = 0.001
loss_function = nn.BCELoss()     # standard binary cross-entropy for the discriminator (assumed)

#  pr is assumed to be a pandas DataFrame of features; its first column is skipped
tens = torch.FloatTensor(pr[pr.columns[1:]].values)
train_iterator = torch.utils.data.DataLoader(
    tens, batch_size=BATCH_SIZE, shuffle=True,)

discriminator = Discriminator(INPUT_VECTOR+1)
generator = Generator(INPUT_VECTOR+1)
optimizer_discriminator = torch.optim.Adam(discriminator.parameters(), lr=lr)
optimizer_generator = torch.optim.Adam(generator.parameters(), lr=lr)

for epoch in range(NUM_EPOCHS):
    for n, real_samples in enumerate(train_iterator):
        if real_samples.shape[0] != BATCH_SIZE:
            continue
        #  Data for training the discriminator
        real_samples_labels = torch.ones((BATCH_SIZE, 1))
        latent_space_samples = torch.randn((BATCH_SIZE, INPUT_VECTOR+1))
        generated_samples = generator(latent_space_samples)
        generated_samples_labels = torch.zeros((BATCH_SIZE, 1))
        all_samples = torch.cat((real_samples, generated_samples))
        all_samples_labels = torch.cat(
            (real_samples_labels, generated_samples_labels)
        )

        #  Training the discriminator
        discriminator.zero_grad()
        output_discriminator = discriminator(all_samples)
        loss_discriminator = loss_function(
            output_discriminator, all_samples_labels)
        loss_discriminator.backward()
        optimizer_discriminator.step()

        #  Data for training the generator
        latent_space_samples = torch.randn((BATCH_SIZE, INPUT_VECTOR+1))

        #  Training the generator
        generator.zero_grad()
        generated_samples = generator(latent_space_samples)
        output_discriminator_generated = discriminator(generated_samples)
        loss_generator = loss_function(
            output_discriminator_generated, real_samples_labels
        )
        loss_generator.backward()
        optimizer_generator.step()

        #  Show loss every 10 epochs (on the last full batch)
        if epoch % 10 == 0 and n == len(tens) // BATCH_SIZE - 1:
            print(f"Epoch: {epoch} Loss D.: {loss_discriminator}")
            print(f"Epoch: {epoch} Loss G.: {loss_generator}")
Here's another example: if you don't want to write anything yourself, there is a ready-made tool.

The Synthetic Data Vault | Open Source tools for Synthetic Data Generation
sdv.dev
The Synthetic Data Vault (SDV) enables end users to easily generate synthetic data for different data modalities, including single table, relational and time series data. With this ecosystem, we are releasing several years of our work building, testing and evaluating algorithms and models geared towards synthetic data generation.
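A minimal usage sketch of SDV's tabular API (the import path, class name and file name are assumptions; they differ between SDV releases, so check the docs for your version):

import pandas as pd
from sdv.tabular import GaussianCopula        # import path in older SDV releases (assumed)

real = pd.read_csv("features.csv")            # hypothetical table of real feature rows

model = GaussianCopula()
model.fit(real)                               # learn the joint distribution of the columns
synthetic = model.sample(5000)                # draw 5000 synthetic rows with the same schema
print(synthetic.head())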