Machine learning in trading: theory, models, practice and algo-trading - page 850

How many of them are there? There are ways to do it faster... genetic algorithms... or feed them into the NN...
heuristic search
Which one do you need? I still pass it through an exponential with p = 0.5 to get the simplest flow.
If we have identified that our tick flow is, for example, Erlang with k=4 (setting Cauchy aside), then why do we need to pass over it with the exponential again, when we can go straight to Erlang k=5 and so on? Keep aligning the gaps between ticks further, rather than first confusing them and then aligning?
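As a hedged sketch of the Erlang-k idea above: an Erlang-k inter-tick gap is just the sum of k independent exponential gaps, so its mean is k/rate. The k and rate values below are illustrative, not fitted to any real tick data.

```python
import random

def erlang_gap(k, rate, rng=random.Random(42)):
    """One Erlang-k inter-arrival gap: the sum of k exponential gaps."""
    return sum(rng.expovariate(rate) for _ in range(k))

# Mean of Erlang(k, rate) is k / rate; check empirically.
k, rate = 4, 2.0
gaps = [erlang_gap(k, rate) for _ in range(100_000)]
mean_gap = sum(gaps) / len(gaps)
print(round(mean_gap, 2))  # close to k / rate = 2.0
```

Going "straight to Erlang k=5" then just means adding one more exponential term to each gap instead of re-passing the whole flow through the exponential.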
Probably the most reliable way is to loop through combinations of predictors. But it takes very long(
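Why looping through combinations is "very long": the number of non-empty predictor subsets grows as 2^n - 1. A small stdlib-only illustration (the n values are hypothetical):

```python
import math

def subset_count(n):
    """Number of non-empty predictor subsets of n predictors."""
    return sum(math.comb(n, k) for k in range(1, n + 1))  # equals 2**n - 1

for n in (10, 20, 30):
    print(n, subset_count(n))
# 10 -> 1023, 20 -> 1048575, 30 -> 1073741823
```

Even at 30 predictors an exhaustive search already means over a billion candidate subsets, which is why genetic or heuristic search comes up above.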
The increasing complexity of jPrediction models implies a gradual increase in the number of predictors, because in jPrediction the number of neurons in the hidden layer is 2^(2*n+1), where n is the number of predictors. Accordingly, as the number of predictors grows, so does the model's complexity (the number of neurons in the hidden layer). Thus, by gradually increasing model complexity, jPrediction will sooner or later reach the value M beyond which further complication of the models leads to a decrease in generalizability (an increase in generalization error).
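The formula quoted above can be tabulated directly; the code below only restates 2^(2*n+1) from the post, it is not taken from jPrediction's source:

```python
# Hidden-layer size implied by the formula quoted in the post:
# 2^(2*n + 1) neurons for n predictors.
def hidden_neurons(n_predictors):
    return 2 ** (2 * n_predictors + 1)

for n in (3, 5, 10):
    print(n, hidden_neurons(n))
# 3 -> 128, 5 -> 2048, 10 -> 2097152
```

This makes the exponential blow-up discussed in the next posts concrete: each extra predictor multiplies the hidden layer by 4.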
I came across Reshetov's post about the number of neurons.
If we have 10 predictors, we get 2^21 = 2097152 neurons.
Isn't it too much?
Even for 3 predictors there are 128 neurons...
N = 2^i - 1
1023 neurons for 10 inputs is already better.
But judging by the articles, far fewer are used in practice, for example n = sqrt(#inputs * #outputs).
Apparently N = 2^i - 1 is for exact memorization, while the formulas with fewer neurons are for generalization.
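Comparing the two sizing rules from the posts above side by side (both formulas are quoted from the thread; the code just evaluates them):

```python
import math

# N = 2^i - 1: enough capacity to memorize every input pattern.
def memorize_size(inputs):
    return 2 ** inputs - 1

# Much smaller heuristic mentioned above, aimed at generalization.
def heuristic_size(inputs, outputs):
    return round(math.sqrt(inputs * outputs))

print(memorize_size(10))      # 1023
print(heuristic_size(10, 1))  # 3
```

For 10 inputs and 1 output the two rules differ by more than two orders of magnitude, which matches the memorization-versus-generalization reading above.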
The fanciest predictor selection is in caret: gafs, genetic algorithm feature selection; rfe, recursive feature elimination via backward selection (the fastest); safs, simulated annealing feature selection (the most efficient).
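caret's rfe is R; as a language-neutral illustration of the backward-elimination idea behind it, here is a toy stdlib-Python sketch. The "importance" score is just absolute correlation with the target (real rfe refits a model at each step), and all data below is synthetic:

```python
import random

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def backward_eliminate(X, y, keep):
    """Repeatedly drop the least target-correlated predictor.

    X: dict of predictor name -> list of values; keep: how many to retain.
    """
    features = dict(X)
    while len(features) > keep:
        worst = min(features, key=lambda f: abs(correlation(features[f], y)))
        del features[worst]
    return sorted(features)

rng = random.Random(0)
y = [rng.gauss(0, 1) for _ in range(200)]
X = {
    "signal": [v + rng.gauss(0, 0.1) for v in y],   # strongly related to y
    "noise1": [rng.gauss(0, 1) for _ in range(200)],
    "noise2": [rng.gauss(0, 1) for _ in range(200)],
}
print(backward_eliminate(X, y, keep=1))  # ['signal']
```

The same loop structure is what makes rfe expensive on wide matrices: each elimination step re-scores every remaining predictor.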
Tried rfe on a 12*6400 matrix: about 10 hours of computation with default parameters (sizes = 2^(2:4)); I didn't wait and shut it down. Thought it was a glitch, restarted with sizes = ncol(x), and it had already been counting for an hour.
Update: the second run with sizes = ncol(x) finished in 2.5 to 3 hours, and the results are close to those of packages that process the same data in 3-5 minutes. If rfe is the fastest, how long do the others take?
The previous packages I tried took no longer than 5 minutes on the same data.
Did it take that long for you too?
Setting rfeControl = rfeControl(number = 1, repeats = 1) reduced the time to 10-15 minutes; in the results, 2 pairs of predictors swapped places, but overall they are similar to the defaults.
I don't remember exactly, it was a long time ago, but nothing as dramatic as your timings stuck in my memory.
My matrix is an ordinary one,
BUT
2 classes
Only 1 CPU core loaded