Machine learning in trading: theory, models, practice and algo-trading - page 3181

It just occurred to me that:
1) The first step in understanding a process is to decompose it into primitives (break it into pieces), e.g. compression algorithms, various decompositions...
2) The second step is to look for combinations of interactions between these primitives (the broken pieces).
3) The superfluous is discarded and only the essence remains. Filtering...
In medicine, I've always been surprised by its supposedly evidence-based approach: take 100 patients, give half the drug and half a placebo, and present this as STATISTICS to justify supposedly statistically significant conclusions.
Seems to me like cheating on a universal scale.
For a stationary process, 100 is not even a sample; and here we have people, and all 100 are ALWAYS different, with a bunch of other diseases of varying severity, living different lives, all with unknown correlation to the tested drug. And this is called evidence-based medicine.
In a word, a universal medical scam.
Always in three phases.) And that is only the controlled study, which comes after about 5 years of internal in vitro research (that is what the pharmacopoeial article is written from); then, depending on circumstances and on how well the drug's mechanism is understood (and if it isn't understood, up to 5 years of monitoring its use).
The political risks of breaking the proper rules of medical research are great.))
But the money is too big...)))))
Or maybe it has NOTHING to do with evidence and is at best just advertising, and unfair advertising at that, aimed at the statistically illiterate majority of the population. A banal thirst for money at any cost.
Now that we've touched on Covid.
If you take the Ministry of Health instructions from 20 years ago, written in strict accordance with the requirements of evidence-based medicine, then for us, as for the whole world, (1) there was no epidemic and (2) there was no vaccine. And then, following your principle of "better some evidence than none at all", temporary regulations are issued and billions are quickly made, ignoring one's own instructions. By ignoring statistics, medicine has become dangerous.
It's about honesty in statistics. Either we observe the requirements of statistics to the letter, or it is not statistics at all.
You are only partially right about the requirement for large samples. Asymptotic tests do not work well on small samples, while exact tests work quite well.
There are plenty of other problems in drug testing besides mathematical ones, but it is better to try to solve them somehow than to cancel everything and go back to treatment with plantain.
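On the exact-test point above: Fisher's exact test is the classic exact alternative to asymptotic tests for small 2x2 trials. A minimal C++ sketch (the function names and the 8-vs-2 trial numbers are mine, invented purely for illustration):

```cpp
#include <algorithm>
#include <cmath>

// log(n!) via the log-gamma function
static double logFact(int n) { return std::lgamma(n + 1.0); }

// Hypergeometric probability of the 2x2 table [[a,b],[c,d]] with fixed margins
static double tableProb(int a, int b, int c, int d) {
    return std::exp(logFact(a + b) + logFact(c + d) + logFact(a + c) + logFact(b + d)
                    - logFact(a) - logFact(b) - logFact(c) - logFact(d)
                    - logFact(a + b + c + d));
}

// Two-sided Fisher exact test: sum the probabilities of all tables with the
// same margins that are at least as extreme as the observed one.
double fisherExact(int a, int b, int c, int d) {
    const double pObs = tableProb(a, b, c, d);
    const int r1 = a + b, c1 = a + c, n = a + b + c + d;
    const int lo = std::max(0, r1 + c1 - n), hi = std::min(r1, c1);
    double p = 0.0;
    for (int x = lo; x <= hi; ++x) {
        double px = tableProb(x, r1 - x, c1 - x, n - r1 - c1 + x);
        if (px <= pObs + 1e-12) p += px;  // "at least as extreme" tables
    }
    return p;
}

// Usage: fisherExact(8, 2, 2, 8) for a toy trial where 8 of 10 treated
// patients improved versus 2 of 10 on placebo.
```

No normal approximation is involved, so the p-value is valid even for a handful of patients, which is exactly the small-sample regime where asymptotic tests break down.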
Interesting approach to working with data; I've never seen this type of algorithm before.
https://www.google.com/search?q=blockcluster+r+package&oq=blockclustering+r+&aqs=chrome.2.69i57j0i22i30l3.8329j0j15&sourceid=chrome&ie=UTF-8
Forester #
I did it this way, if I understood correctly: count the number of units in the column/array with the target, then randomly fill a new array with that many units, then copy the new array over the old one.
I shuffle columns like this. In this example the index array is shuffled, but you can also shuffle the data itself by replacing the data type with
.
You have something complicated there, especially the i--
.
That is for when the row has already been drawn: it takes another try, and a row counts as drawn if its value equals one.
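Since the original code attachments did not survive in this page, here is a hedged C++ sketch of the two approaches being compared: shuffling through an index array (a Fisher-Yates shuffle) versus the i-- retry pattern for drawing rows without replacement. Function names and seeds are mine, not the posters':

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Approach 1: shuffle an index array (Fisher-Yates); the data itself is
// untouched and is read through the shuffled indices.
std::vector<int> shuffledIndex(int n, unsigned seed = 7) {
    std::vector<int> idx(n);
    std::iota(idx.begin(), idx.end(), 0);
    std::mt19937 gen(seed);
    for (int i = n - 1; i > 0; --i) {
        std::uniform_int_distribution<int> d(0, i);
        std::swap(idx[i], idx[d(gen)]);  // swap with a random earlier slot
    }
    return idx;
}

// Approach 2: the i-- variant discussed above. Draw k random rows without
// replacement, retrying (i--) whenever the drawn row is already taken.
// Precondition: k <= n, otherwise the loop cannot finish.
std::vector<int> drawWithRetry(int n, int k, unsigned seed = 7) {
    std::vector<char> taken(n, 0);
    std::vector<int> out;
    std::mt19937 gen(seed);
    std::uniform_int_distribution<int> d(0, n - 1);
    for (int i = 0; i < k; ++i) {
        int r = d(gen);
        if (taken[r]) { --i; continue; }  // already drawn: take another try
        taken[r] = 1;
        out.push_back(r);
    }
    return out;
}
```

The index-array version does a fixed amount of work per element; the retry version is simpler but slows down as the fraction of taken rows grows, which is why it can look "complicated" with the i--.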
Interesting, and seems more versatile.
Anyway, two passes are done: 18% units in the sample; the first pass found one quantum segment, and the second found two.
It's even suspiciously few.
Added: in the third pass, one again.
So can my method be recognised as working?
Looking for a bug in the code after the modification.
No, there is no error.
Thanks, I'll try MathRand increments.
The most universal one is probably Monte Carlo.
Looks like my random generation turned out interesting.
On top is a real symbol, on the bottom is random.
RandomPrice can be applied iteratively. Spreads and time are preserved.
It would be correct to do it via logarithms, but I didn't bother with that. If refined, it may be the best Monte Carlo option for generating a random symbol with the required statistical characteristics.
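A minimal sketch of the logarithm-based variant suggested above: draw normally distributed log-returns and exponentiate them back into prices. The function name, seed, and parameters are assumptions of mine; the spread and time handling of the original RandomPrice are not reproduced here:

```cpp
#include <cmath>
#include <random>
#include <vector>

// Generate a random price series whose log-returns are drawn from a normal
// distribution with the given mean and standard deviation (e.g. measured
// from a real symbol). Working in logarithms keeps prices strictly positive
// and makes up and down moves symmetric in relative terms.
std::vector<double> randomPrice(double start, int n, double mu, double sigma,
                                unsigned seed = 1) {
    std::mt19937 gen(seed);
    std::normal_distribution<double> ret(mu, sigma);
    std::vector<double> price(n);
    double logP = std::log(start);
    for (int i = 0; i < n; ++i) {
        logP += ret(gen);           // accumulate a random log-return
        price[i] = std::exp(logP);  // back to price space
    }
    return price;
}

// Usage: randomPrice(100.0, 1000, 0.0, 0.01) gives 1000 bars starting
// near 100 with roughly 1% per-bar volatility.
```

Because the process is applied to log-prices, it can be run iteratively (each run starting from the last price of the previous one) without ever producing a negative quote, which a plain additive MathRand increment cannot guarantee.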