Machine learning in trading: theory, models, practice and algo-trading - page 2313
In theory, yes.
But still, what's the point of the exercise? If there's no gain in speed, but rather a slowdown from the extra operation.
You can't argue with that, there's no point...
I need to get 10 out of 100. Is there a solution?
I don't know how it works in alglib; try to pull info from the PCA function and see how many components you need to describe the 100 features.
Or just take the first n columns of the PCA output and roll with it...
But all this is useless... if you have 50k features, then you do need PCA, but if you just play around without understanding what you're doing, you don't need PCA at all: with 99.999...% probability the result will be worse than without it, got it...
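A minimal sketch of the "see how many components you need" suggestion, using sklearn's PCA and its explained variance ratio (the random toy data and the 95% threshold are my assumptions, not from the thread):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 100))  # toy data: 1000 samples, 100 features

# Fit PCA with all components and look at cumulative explained variance
pca = PCA().fit(X)
cum = np.cumsum(pca.explained_variance_ratio_)

# Smallest number of components covering, say, 95% of the variance
n_needed = int(np.searchsorted(cum, 0.95) + 1)
print(n_needed)
```

On real, correlated features the curve saturates much earlier than on this independent noise, which is the whole point of looking at it.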
Take the first 10 components (eigenvectors) of the cov. matrix
The cov. matrix alone won't do - it's 100x100.
And we need to get 10x1000 from it, not 10x10 or 10x100; some calculation is needed. That is, apply the 10 principal components to each of the 1000 rows.
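A sketch of the projection being described, assuming NumPy and toy random data: the covariance matrix is 100x100, but the result we want is the 1000-row data projected onto 10 principal components, i.e. a 1000x10 score matrix (the "10x1000" above, transposed):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 100))  # toy data: 1000 rows, 100 features

Xc = X - X.mean(axis=0)               # center the features
C = np.cov(Xc, rowvar=False)          # 100x100 covariance matrix
eigval, eigvec = np.linalg.eigh(C)    # eigh returns eigenvalues in ascending order
top10 = eigvec[:, ::-1][:, :10]       # 10 eigenvectors with the largest eigenvalues

scores = Xc @ top10                   # each of the 1000 rows dotted with the 10 PCs
print(scores.shape)                   # (1000, 10)
```

Note the reversal: `eigh` sorts eigenvalues ascending, so the leading components sit at the end, which matches the "then from the end" remark below.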
Each component score is the sum of the products of the feature values by the corresponding coefficients of the eigenvector from the cov. matrix, as far as I remember.
Do the math and compare it to sklearn.
The matrix may come out in reverse order, then take components from the end. You should check that additionally.
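A sketch of the suggested check, computing component scores by hand and comparing with sklearn (toy data is mine). One caveat I'm adding: eigenvector signs are arbitrary, so a manually computed column may come out flipped relative to sklearn's, and the comparison has to allow for that:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 20))  # toy data: 200 rows, 20 features

# Reference scores from sklearn
pca = PCA(n_components=5)
sk_scores = pca.fit_transform(X)

# Manual scores: eigenvectors of the covariance matrix, taken from the end
# because eigh orders eigenvalues ascending
Xc = X - X.mean(axis=0)
_, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
my_scores = Xc @ eigvec[:, ::-1][:, :5]

# Compare component by component, up to a sign flip
for k in range(5):
    a, b = sk_scores[:, k], my_scores[:, k]
    assert np.allclose(a, b) or np.allclose(a, -b)
print("match")
```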
In general, you need cycles like the ones above.
There, s (bestfeatures) is just the number of components selected for training. Roughly,
the value of each component is multiplied by the value of the feature and they are summed. And so for each row of the original series.
I'd have to recall the details, no time right now. Need to read the documentation.
Now everything is fine.
I decided to see what's inside the network, on each layer... I reduced the dimensionality to two components at each layer with umap.
A network with three hidden layers, barely trained, only 400 examples... but still fun to look at...
How did you manage that?
What's the output dimension?
Ludwig offers deep learning models without having to write code; no programming skills are required to train a model: https://ludwig-ai.github.io/ludwig-docs/
Installed it recently. Haven't gotten around to checking it out yet. They promise miracles.