Machine learning in trading: theory, models, practice and algo-trading - page 2251

 
Maxim Dmitrievsky:

map the series into another space (distribution?)

Well, yes, into another space; I'm not sure about a distribution...

Why class labels? Why encoders?

 
mytarmailS:

Yes, into another space; I'm not sure about the distribution.

Why class labels? Why encoders?

You need good sell examples and buy examples, hence the labels.

And the distribution is the distribution of points in that space; usually a multivariate normal is chosen.

It probably won't work, but it's fun.

What will you do if the pattern changes next year? Nothing; you can't get that out of the current year.

You have to take the whole history, break it into clusters, equalize the number of samples in each cluster, then generate examples from them and train on those. That should be more or less stable, in theory.
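
A rough sketch of that last idea (cluster the whole history, equalize the clusters, generate examples from them). Using scikit-learn's KMeans and a per-cluster Gaussian fit here is an assumption for illustration, not something specified above:

import numpy as np
from sklearn.cluster import KMeans

def balanced_examples(X, n_clusters=10, per_cluster=500, seed=0):
    # 1) break the whole history of feature vectors X into clusters
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    rng = np.random.default_rng(seed)

    generated = []
    for k in range(n_clusters):
        Xk = X[labels == k]
        # 2) fit a simple Gaussian to the cluster and draw the same number
        #    of synthetic examples from every cluster (this equalizes them)
        mu = Xk.mean(axis=0)
        cov = np.cov(Xk, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        generated.append(rng.multivariate_normal(mu, cov, size=per_cluster))

    # 3) the balanced synthetic set is what the model would be trained on
    return np.vstack(generated)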

 
Maxim Dmitrievsky:

You need good sell examples and buy examples, hence the labels.

And the distribution is the distribution of points in that space; usually ...

Well, yes, I agree, that particular idea is so-so...

========================

How is your robot on distributions doing?

 
mytarmailS:

Well, yes, I agree, that particular idea is so-so...

========================

How is your robot on distributions doing?

The idea is cool; on normal data, not on random.

It only really works on the euro; not so much on the other symbols.

I've put an encoder in instead of the GMM; haven't finished it yet.

 

How do I attach the BTC/USD instrument?

 

Encoder instead of GMM. Trained on 2 months, tested on 5 years.

It's a bit harder to pick the architecture. A single hidden layer didn't work well at all; I added a 2nd layer and it got better.

Regular feedforward layers.
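
For the record, "added a 2nd layer" with regular feedforward layers presumably means something like the difference below; the layer sizes are placeholders, not the ones actually used:

import torch.nn as nn

# single hidden layer: reportedly not enough here
one_hidden = nn.Sequential(
    nn.Linear(20, 64), nn.Sigmoid(),
    nn.Linear(64, 4))

# two hidden layers: reportedly worked noticeably better
two_hidden = nn.Sequential(
    nn.Linear(20, 64), nn.Sigmoid(),
    nn.Linear(64, 64), nn.Sigmoid(),
    nn.Linear(64, 4))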

 
Maxim Dmitrievsky:

Encoder instead of GMM. Trained on 2 months, tested on 5 years.

It's a bit harder to pick the architecture. A single hidden layer didn't work well at all; I added a 2nd layer and it got better.

I'm just using regular feedforward layers.

I have a feeling the graph is smoother with the GMM...

Why do you need a neural network there at all? Can you explain with a block diagram?


A single layer can only solve linearly separable problems.
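
The textbook illustration of that claim is XOR: no single linear layer can separate it, while one hidden layer with a nonlinearity can. The example below is standard, not something from this thread:

import torch
import torch.nn as nn

# XOR: the four points are not linearly separable
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

single_layer = nn.Linear(2, 1)      # linear decision boundary only; cannot fit XOR
with_hidden = nn.Sequential(
    nn.Linear(2, 4), nn.Tanh(),
    nn.Linear(4, 1))                # nonlinear boundary; can fit XOR after training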

 
mytarmailS:

I have a feeling the graph is smoother with the GMM...

Why do you need a neural network there at all? Can you explain with a block diagram?


A single layer can only solve linearly separable problems.

I expected more from them.

The encoder is a neural network.

You won't understand a damn thing anyway, but here's its structure:

import torch
import torch.nn as nn


class Encoder(nn.Module):
    def __init__(self, input_dim, hidden_dim, latent_dim, n_classes):
        super().__init__()

        self.linear = nn.Linear(input_dim + n_classes, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.var = nn.Linear(hidden_dim, latent_dim)

    def forward(self, x):
        #  x is of shape [batch_size, input_dim + n_classes]

        hidden = torch.sigmoid(self.linear(x))
        #  hidden is of shape [batch_size, hidden_dim]

        #  latent parameters
        mean = self.mu(hidden)
        #  mean is of shape [batch_size, latent_dim]
        log_var = self.var(hidden)
        #  log_var is of shape [batch_size, latent_dim]

        return mean, log_var


class Decoder(nn.Module):
    def __init__(self, latent_dim, hidden_dim, output_dim, n_classes):
        super().__init__()

        self.latent_to_hidden = nn.Linear(latent_dim + n_classes, hidden_dim)
        self.hidden_to_out = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        #  x is of shape [batch_size, latent_dim + num_classes]
        x = torch.sigmoid(self.latent_to_hidden(x))
        #  x is of shape [batch_size, hidden_dim]
        generated_x = torch.sigmoid(self.hidden_to_out(x))
        #  generated_x is of shape [batch_size, output_dim]

        return generated_x


class CVAE(nn.Module):
    def __init__(self, input_dim, hidden_dim, latent_dim, n_classes):
        super().__init__()

        #  n_classes label columns are appended to the encoder input and to the latent code
        self.encoder = Encoder(input_dim, hidden_dim, latent_dim, n_classes)
        self.decoder = Decoder(latent_dim, hidden_dim, input_dim, n_classes)

    def forward(self, x, y):

        x = torch.cat((x, y), dim=1)

        #  encode
        z_mu, z_var = self.encoder(x)

        #  sample from the distribution having latent parameters z_mu, z_var
        #  reparameterize
        std = torch.exp(z_var / 2)
        eps = torch.randn_like(std)
        x_sample = eps.mul(std).add_(z_mu)

        z = torch.cat((x_sample, y), dim=1)

        #  decode
        generated_x = self.decoder(z)

        return generated_x, z_mu, z_var
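
The structure above is a conditional variational autoencoder, so the missing pieces are the training loss and the generation step. A minimal sketch of how it would typically be driven; the shapes, optimizer and random data here are placeholders rather than anything from the post:

import torch
import torch.nn.functional as F

input_dim, hidden_dim, latent_dim = 20, 64, 4
model = CVAE(input_dim, hidden_dim, latent_dim, n_classes=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def cvae_loss(x, x_hat, mu, log_var):
    #  reconstruction term plus KL divergence to the standard normal prior
    rec = F.binary_cross_entropy(x_hat, x, reduction='sum')
    kld = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return rec + kld

#  one training step on a batch of features in [0, 1] and buy/sell labels
x = torch.rand(32, input_dim)
y = torch.randint(0, 2, (32, 1)).float()
x_hat, mu, log_var = model(x, y)
loss = cvae_loss(x, x_hat, mu, log_var)
opt.zero_grad()
loss.backward()
opt.step()

#  generation: sample the latent from N(0, I) and decode, conditioned on a label
with torch.no_grad():
    z = torch.randn(5, latent_dim)
    label = torch.ones(5, 1)  # e.g. generate "buy" examples
    new_x = model.decoder(torch.cat((z, label), dim=1))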
 
Maxim Dmitrievsky:

The encoder is a neural network.

seriously? ))))) are you kidding me or what? ))

Maxim Dmitrievsky:

You won't understand a damn thing anyway.

That's why I'm asking you to tell it in simple terms: what do you expect, why should it work in your opinion? A block diagram would be ideal...

And code using an unfamiliar library, in an unfamiliar language, is of course hard to understand.

 
mytarmailS:

seriously? ))))) are you kidding me or what? ))

That's why I'm asking you to tell it in simple terms: what do you expect, why should it work in your opinion? A block diagram would be ideal...

It's hard to understand code in a language you're not familiar with.

What do you mean, am I kidding?
