Discussion of article "Statistical Distributions in MQL5 - taking the best of R" - page 7

I'm talking about R, but my skills are very weak)) Can someone check whether the code is correct?
And if the code is correct, can you check the benchmark?
The code is wrong.
You measured the compilation time of the function, not its execution:
The function cmpfun compiles the body of a closure and returns a new closure with the same formals and the body replaced by the compiled body expression.
Proof of the error:
If the qqq function had been run during the benchmark, object a would have received the computed data. But instead it turned out that the object was not even created.
As a result, the benchmark counted the compilation time instead of the execution time. In my code, everything is correct - the benchmark counts the actual execution time and object a is created with correct data.
And yes, compilation is quite a costly process: it shows up in milliseconds rather than microseconds.
And as a separate joke: in your example you overrode the system function q() (quit).
There was no way to quit R :)
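A minimal sketch of the mistake being described. The function name qqq and its body are reconstructed from the discussion, not taken from the original post, and base R's system.time is used instead of microbenchmark so the snippet has no package dependencies:

```r
library(compiler)  # base R package that provides cmpfun()

# Hypothetical stand-in for the qqq() function discussed above.
qqq <- function() {
  k <- 0:50
  dbinom(k, 50, pi / 10, log = TRUE)
}

# Wrong benchmark: this only byte-compiles qqq over and over.
# The body never runs, so no result object is ever produced.
t_compile <- system.time(for (i in 1:1000) cmpfun(qqq))

# Right benchmark: actually executes the function body,
# and object 'a' receives the computed data.
t_run <- system.time(for (i in 1:1000) a <- qqq())
```

After the second loop, `a` exists and holds the 51 log-probabilities; after the first loop alone it would not.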
I mean that the MQL compiler already knows all the input parameters at compile time. It could simply evaluate everything during compilation, and when the script is run, just return the pre-calculated result. I saw some articles on Habr comparing C++ compilers, and judging by the analysis of the assembler code, that is exactly what happens there.
Yes, the compiler may well be using that actively. Here are some examples: https://www.mql5.com/ru/forum/58241.
But in this case that won't work: the calculation has to be done in full, because of the complexity, the loop, and the array filling.
If the code is correct, can you check the benchmark?
You need to replace res <- microbenchmark(cmpfun(q)) with res <- microbenchmark(q()). But previously compiled libraries will not be recompiled into bytecode; I got the same results.
"a" in this case will be a local variable, inaccessible outside the function itself anyway. But you can do it this way -
a <<- dbinom(k, n, pi/10, log = TRUE)
then it will be a global variable.
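A small illustration of the difference between the two assignment operators (the function names here are made up for the example):

```r
f_local <- function() {
  a <- dbinom(0:10, 10, pi / 10, log = TRUE)   # '<-': local, discarded on return
  invisible(NULL)
}

f_global <- function() {
  a <<- dbinom(0:10, 10, pi / 10, log = TRUE)  # '<<-': assigns in the enclosing
  invisible(NULL)                              # (here: global) environment
}

f_local()
exists("a")   # FALSE in a fresh session: the local 'a' is gone
f_global()
exists("a")   # TRUE: 'a' now lives in the global environment
```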
But in this case that won't work: the calculation has to be done in full, because of the complexity, the loop, and the array filling.
I see, the speed of execution is excellent then
By the way, interpreting the primitive call a <- dbinom(k, n, pi/10, log = TRUE) costs practically nothing: it drops straight into the R kernel for native execution (dbinom lives in R.dll).
So trying to compile this call is obviously pointless.
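This is easy to check: byte-compiling a wrapper around a single native call changes neither the result nor, in practice, the timing, because almost all the time is spent inside R's C code. A sketch, with made-up function names:

```r
library(compiler)  # base R byte-code compiler

call_plain <- function() dbinom(0:50, 50, pi / 10, log = TRUE)
call_cmp   <- cmpfun(call_plain)   # byte-compiled version of the same closure

# Both versions dispatch straight into the native dbinom implementation,
# so byte-compilation has essentially nothing left to speed up.
identical(call_plain(), call_cmp())  # TRUE
```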
Since I've written many times about the speed of R, let me put in my two cents.
Dear Renat!
Your example proves nothing at all!
You took two similar functions and drew a conclusion about R's performance in general.
The functions you cited do not represent the power and diversity of R at all.
You should compare computationally heavy operations.
For example, matrix multiplication...
Let's measure the following expression in R:
c <- a %*% b,
where a and b are matrices of at least 100x100 in size. In your code, make sure that R uses Intel's MKL; this is achieved simply by installing the corresponding build of R.
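A sketch of such a measurement (sizes and repeat count are arbitrary; note that in R, %*% is true matrix multiplication, while * is element-wise):

```r
set.seed(1)
n <- 100
a <- matrix(rnorm(n * n), n, n)
b <- matrix(rnorm(n * n), n, n)

# %*% calls the dgemm routine of whatever BLAS library the R build is
# linked against; with an MKL-linked build, this line alone exercises MKL.
t_mult <- system.time(for (i in 1:100) cc <- a %*% b)
print(t_mult)
```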
If we look at R, there are mountains of code containing computationally intensive operations, and they are executed by the most efficient libraries available at the moment.
And the usefulness of R in trading lies not in the functions you rewrote (although they are needed too), but in the models. In one of my replies to you I mentioned the caret package. Have a look at what it is... Implementing any practically useful trading model both within this package and in MQL will give you the answer.
Besides, you should not forget that loading all the cores of a machine is routine for R. On top of that, you can load neighbouring machines on the local network.
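For example, with the base parallel package (the cluster size and toy workload are invented for this sketch; makeCluster can also be given hostnames of neighbouring machines for a LAN cluster):

```r
library(parallel)  # ships with base R

cl <- makeCluster(2)   # two local worker processes
# Each worker sums a full binomial distribution (each sum is ~1).
res <- parLapply(cl, 1:4, function(i) sum(dbinom(0:50, 50, i / 10)))
stopCluster(cl)

unlist(res)  # four results, computed on the worker processes
```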
PS.
For me the idea of comparing the performance of MQL and R is questionable: the two systems have completely different subject areas.
SanSanych, we will test everything and release a benchmark. But first we will complete the functionality.
The test was justified and it immediately revealed the problem. I have presented the theoretical justification, and I am sure that R's system overhead will persist across the overwhelming majority of its functionality.
We can multiply matrices in such a way that Intel will lose. This has long since ceased to be rocket science, and Intel (or rather, the third-party programmers affiliated with the company) has no mythical monopoly on knowledge of its own processors.
To San-Sanych and the other guys.
San-Sanych, you know how much I respect you... ((C) Kataev and Fainzilberg, known as "Ilf and Petrov"), despite some of your post-Soviet jokes here.
Let me clarify something important for you:
1). The main job of a programmer is not writing programs but READING them, in particular his own. Any programmer spends 95...99% of his time sitting and staring at the monitor. Is he writing a program? No, he is mostly reading it. Therefore, the closer what he reads on the screen is to natural language, i.e. to what his mother, father, grandmother, and school teacher taught him, the more efficiently he will decipher the gibberish on the screen and find the correspondence between the algorithm and his program.
2). For the purposes of point (1) there is, on average, nothing better than the C language. That is why, for example, I personally (together with 2-3 more or less responsible people) managed to write a project with 700+ subroutines in C, MQL4, CUDA... And everything works.
3). From the point of view of point (1), the object-oriented variant of C, i.e. C++, is much worse. (But more about that another time.)
4). Full compatibility of classical C and MQL4 is simply invaluable. Transferring a procedure back and forth takes half a minute.
5). The main advantage of C+MQL4 is CLARITY. That is, the comprehensibility and transparency of everything that is on the screen of the programmer.
If we compare C/MQL4 with your R, we should look not at the speed or the volume of the written code, but at the CLARITY of the text, i.e. its comprehensibility. Otherwise the programmer will stare at the screen for 24 hours in vain attempts to understand what the program does, what parameters it has, why the author named them that way, and why he did it this way and not another. What matters here is not the speed of the program but the correctness of its work and how quickly the final programmer can APPLY it.
From this point of view, what MetaQuotes has done is of course a great help for those who want to add statistics to their EAs. There is nothing comparable in terms of simplicity and comprehensibility of the functions. And this is important, especially if you have delicate calculations (and Forex, and trading in general, require delicate calculations).
Let's compare.
Here is how the integration function looks in C/MQL4:
I'll write it in parts; it's easier that way.
Inside there is a trapezoidal integration function:
Everything is absolutely clear and understandable. And, importantly, it always works and works well, i.e. with low error even in MT4/MQL4, which saves a lot of time.
But if you want to find out why you get incomprehensible errors when working in R, or simply want to understand what parameters the integration procedure has or which integration method they programmed there, you will see the following (God forgive me for showing this to immature programming kids):
http://www.netlib.org/quadpack/
And this is only the header of the function, originally written in Fortran; the main text comes later. This is the original program used by R for integration.
What is there to understand here, tell me?
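To be fair, the QUADPACK Fortran code linked above is what R's built-in integrate() wraps (its help page states it is based on the QUADPACK routines dqags and dqagi), so from the user's side it is a one-line call. The example integral is mine, not from the thread:

```r
# integrate() drives the QUADPACK adaptive quadrature routines natively.
r <- integrate(dnorm, -1.96, 1.96)

r$value      # about 0.95, the familiar standard-normal coverage
r$abs.error  # estimated absolute error of the quadrature
```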