Machine learning in trading: theory, models, practice and algo-trading - page 2269
I wrote my own GAN; there's nothing complicated there. It's not recurrent, though, so I'll have to redo it.
Here is an example in Torch.
Here is another example.
I'll try to figure it out when I have time.
I compared different generative models from the library above through my lib. It turns out that GMM works best for tabular data (a dataframe of increments). Copulas come next, the second most efficient. Neural-network models like TabularGAN and the others did worse. But maybe I did something wrong. There's also this option.
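A minimal sketch of the GMM approach for generating synthetic tabular increments, using scikit-learn's GaussianMixture (the library choice, component count, and the random stand-in data are my assumptions, not the poster's actual setup):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# stand-in for a dataframe of price increments (returns), 4 columns
real_increments = rng.normal(0.0, 0.01, size=(1000, 4))

# fit a mixture of Gaussians to the joint distribution of the rows
gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(real_increments)

# draw synthetic rows with the same column structure
synthetic, _ = gmm.sample(500)
print(synthetic.shape)  # (500, 4)
```

The same fit/sample pattern applies to any dataframe of increments; comparing the marginal and joint statistics of `synthetic` against the real data is one way to score the generators against each other.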
The networks seem to have poor noise tolerance; maybe that's why the results are worse.
I wanted to add noise to the data at every epoch, but I never got around to it.
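The "noise at every epoch" idea can be sketched as perturbing the inputs with fresh Gaussian noise before each forward pass (PyTorch; the model, noise scale, and data here are placeholders, not the poster's code):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(256, 8)   # stand-in features
y = torch.randn(256, 1)   # stand-in targets

for epoch in range(5):
    noisy_x = x + 0.05 * torch.randn_like(x)  # fresh noise each epoch
    opt.zero_grad()
    loss = loss_fn(model(noisy_x), y)
    loss.backward()
    opt.step()
```

Because the noise is resampled every epoch, the network never sees exactly the same inputs twice, which acts as a regularizer and a direct test of noise tolerance.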
It's as if they average very hard. The output is similar samples with a weak scatter. No matter how I change the latent vector, I get values that are too close together.
Maybe the depth of the history can be reduced?
I tried different ones; the output of both the autoencoder and the GMM gives strongly averaged values. The autoencoder compresses by definition, but it's unclear why the GANs do. Dropout doesn't seem to help either.
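Why the autoencoder "compresses by definition" can be seen with a toy linear autoencoder: a narrow bottleneck forces every reconstruction through a low-dimensional subspace, which squeezes out variance (toy model and dimensions assumed, not the poster's architecture):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(512, 8)  # stand-in data, 8 features

autoencoder = nn.Sequential(
    nn.Linear(8, 2),  # bottleneck: 8 features squeezed into 2
    nn.Linear(2, 8),
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(autoencoder(x), x)
    loss.backward()
    opt.step()

with torch.no_grad():
    recon = autoencoder(x)
print(x.std().item(), recon.std().item())  # reconstructions have less spread
```

Since the reconstructions all lie in a 2-dimensional subspace, their overall spread is necessarily smaller than that of the 8-dimensional inputs, i.e. the output is averaged.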
Averaging and blurring are roughly the same thing, right? I found this article.
Yes, information compression.
I understand it with digits, but it works worse with tabular data.
That's why there's TabularGAN, in the package above.
I skimmed the article; it seems to be about a different noise distribution and unsuitable metrics.
It is better to check with test data under greenhouse conditions. An interesting topic is network inversion.
Feed noise to the inputs and get the spectrum at the output.
https://arxiv.org/pdf/1806.08734.pdf
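The "noise in, spectrum out" probe can be sketched by pushing a noisy 1-D sweep through a small network and taking the Fourier transform of its response (toy network and noise scale are my assumptions; the paper above studies spectral properties of networks more formally):

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
# placeholder network mapping a scalar input to a scalar output
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))

# evaluate the net on a dense 1-D grid perturbed by white noise
t = torch.linspace(-1, 1, 1024).unsqueeze(1)
noise = 0.1 * torch.randn_like(t)
with torch.no_grad():
    out = net(t + noise).squeeze(1).numpy()

# frequency content of the network's response
spectrum = np.abs(np.fft.rfft(out - out.mean()))
print(spectrum.shape)  # (513,)
```

Comparing how much of the input noise energy survives at each frequency in `spectrum` gives a rough picture of which frequencies the network passes through or suppresses.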