For coders: what would you like to see as the value of 'x = 1.123 * 0.67' (double) in the MetaEditor Watchlist? (see the first post for more explanation)
It seems you don't understand the problem. In an actual expression coming from a trade system, you can't guarantee that all values have the same decimal precision (due to their underlying nature). And the meaning of the result can imply yet another precision, so you can't eliminate rounding at one or more stages.
I worked in a firm developing financial accounting systems - I know the problem.
The same applies to binary floating point. The rounding error considerations are equivalent.
There is a mantissa and an exponent, be it binary or decimal, and there is a limit to the precision of what is stored, and how it is manipulated during calculations.
Just as a "double" has a fixed number of available bits for representing the number, so does the decimal counterpart. I see no issue or difference: one is binary and the other is decimal, that is all.
Binary is faster and more compact, while decimal may be slower and less compact. But that is where the differences end.
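For illustration, a minimal MQL5 snippet (my own sketch, with arbitrary values): neither base escapes rounding, since 0.1 and 0.3 are inexact in binary just as 1/3 is inexact in decimal.

// Rounding is inherent to any fixed-size base: 0.1, 0.2 and 0.3 are all
// inexact in binary64, just as 1/3 is inexact in any decimal format.
void OnStart()
  {
   Print(0.1 + 0.2 == 0.3);                  // false
   Print(DoubleToString(0.1 + 0.2, 16));     // 0.3000000000000000
   Print(StringFormat("%.17f", 0.1 + 0.2));  // 0.30000000000000004
  }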
And yes, I have also worked for a financial institution before. Between 2000 and 2002, I worked for a major, well-known Spanish bank, in the software department of its trading division, and even visited the trading floor often.
Instead of arguments from authority ("I worked there", "I know the problem"), it would be more interesting to get technical details from concrete examples.
I think there are problems in both approaches (double or decimal variants), though they seem a bit different to me.
The same applies to binary floating point. The rounding error considerations are equivalent.
There is a mantissa and an exponent, be it binary or decimal, and there is a limit to the precision of what is stored, and how it is manipulated during calculations.
Just as a "double" has a fixed number of available bits for representing the number, so does the decimal counterpart. I see no issue or difference: one is binary and the other is decimal, that is all.
The rounding error considerations are not equivalent. double takes all the complexity upon itself thanks to the floating-point approach, which means the decimal precision adapts on the fly. With numbers of fixed decimal precision, especially mixed precisions, you need to invent something. How would you calculate the average price of a position made up of several deals, where lots and prices are of different types, DECIMAL(2) and DECIMAL(5) respectively?
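To make the question concrete, here is a minimal MQL5 sketch (hypothetical lot and price values) of that average-price calculation: with doubles the intermediate products and the final quotient simply carry whatever precision they need, while a fixed-scale DECIMAL implementation has to pick a result scale and a rounding rule.

// Hypothetical position: lots carry 2 decimals, prices carry 5, but the
// average price has no natural fixed scale.
void OnStart()
  {
   double lots[]   = {0.10, 0.25, 1.37};           // DECIMAL(2)-style inputs
   double prices[] = {0.75241, 0.75198, 0.75305};  // DECIMAL(5)-style inputs

   double volume = 0.0, turnover = 0.0;
   for(int i = 0; i < ArraySize(lots); i++)
     {
      volume   += lots[i];
      turnover += lots[i] * prices[i];
     }

   double average = turnover / volume;  // what scale should a DECIMAL keep here?
   Print("raw average : ", DoubleToString(average, 16));
   Print("at 5 digits : ", DoubleToString(average, 5));
  }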
You are thinking in terms of fixed-point decimals. I did not state "fixed point". I meant "decimal floating point" and even mentioned the terms "mantissa" and "exponent" in my post #10. These are floating-point concepts, which are also used for a "double" (which is double-precision binary floating point).
EDIT: To be fair, it seems that for "decimal floating point" the concept of "mantissa" is instead called "coefficient" (at least according to Wikipedia). But, be it called "mantissa", "coefficient", or "fraction", the concept is similar (it is the part with the significant digits).
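For instance, modelled as a hypothetical MQL5 struct (the language has no built-in decimal type), a decimal floating-point value is just a coefficient and a power-of-ten exponent:

// Hypothetical model of a decimal floating-point value: coefficient * 10^exponent.
struct DecimalFP
  {
   long coefficient;  // the significant digits ("mantissa"/"coefficient")
   int  exponent;     // power of ten
  };
// 0.75241 is represented exactly as {75241, -5} here, while binary
// floating point can only store the nearest fraction x / 2^n.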
IEEE 754 specifies three standard decimal floating-point formats of different precision: decimal32 (7 significant digits), decimal64 (16 digits), and decimal128 (34 digits).
This survey answers one part of the problem and drops the other (leading question bias).
What was asked:
For a double value with a given hexadecimal representation (e.g., 0x3FE813BE22E5DE16), what would you like to see as the value:
1) 0.75241 (the real value, as in mathematics or on a calculator)
2) 0.752410000000000023 (the IEEE-754 value converted to decimal, 18 digits)
Of course, most answers will be (1).
There should be a second question:
What about doubles that differ by 1 bit in their hexadecimal representations: would you like to see the same string representation (ignoring the bit difference)?
I think most answers will be NO.
A script to demonstrate it practically:
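(A minimal sketch along those lines, reusing the survey value 0x3FE813BE22E5DE16; a union is used to set the bit pattern directly.)

// Two doubles whose bit patterns differ by exactly 1 ULP: at 5 digits
// they print identically, yet they are distinct values.
union DoubleBits
  {
   double d;
   ulong  bits;
  };

void OnStart()
  {
   DoubleBits a, b;
   a.bits = 0x3FE813BE22E5DE16;  // ~0.75241, the value from the survey
   b.bits = a.bits + 1;          // the neighbouring representable double

   Print(DoubleToString(a.d, 5));   // 0.75241
   Print(DoubleToString(b.d, 5));   // 0.75241 -- same string
   Print(DoubleToString(a.d, 16));  // 0.7524100000000000
   Print(DoubleToString(b.d, 16));  // 0.7524100000000001 -- now they differ
   Print(a.d == b.d);               // false
  }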
This is what's meant by round-trip accuracy.
Ryū / Print() implements the shortest string that maintains round-trip accuracy. I think it is a well-balanced trade-off between accuracy and verbosity.
Note:
I am just trying to find a practical solution for dbl->string conversion, given the limitations of the binary (base-2) floating-point system (float, double).
Remember that our human decimal number (0.75241) has to be converted to an approximate binary fraction x/2^n with a power-of-two denominator (a binary floating-point number), with some inherent loss of precision smaller than the machine epsilon. Therefore, when we convert back from machine (binary) to human (decimal string), we have to deal with that tiny imprecision (dbl->string conversion).
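As a worked example (my own sketch), the exact stored fraction for the survey value can be extracted from the bits:

// Extract the exact stored fraction m / 2^n of 0.75241 (binary64).
union DoubleBits
  {
   double d;
   ulong  bits;
  };

void OnStart()
  {
   DoubleBits u;
   u.d = 0.75241;
   ulong m = (u.bits & 0x000FFFFFFFFFFFFF) | 0x0010000000000000; // 52 stored bits + implicit leading 1
   int   n = (int)((u.bits >> 52) & 0x7FF) - 1023 - 52;          // unbiased exponent, scaled by the 52 fraction bits
   Print("0.75241 is stored as ", m, " * 2^", n);
   // prints: 0.75241 is stored as 6777106791259670 * 2^-53
   // and 6777106791259670 / 2^53 = 0.752410000000000023...
  }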
I will not consider other FP formats such as decimal (base-10), binary-coded decimal, fixed-point, or arbitrary-precision (bignum) number systems, because these solutions are not practical for the MetaTrader ecosystem.
This is a screenshot of the Visual Studio 2022 debugger's watch window using the shortest round-trip algorithm.
I would like to clarify my original vote, which was for it to display ... "0.75241"
However, depending on what is actually being debugged, it may be necessary to delve deeper, so I would suggest that the normal display be "0.75241", but that the tooltip show the full value as well as the hexadecimal representation.
Another option is to allow the user to configure the defaults or have a button that switches between "nice value", full value, and hex representation.
This survey answers one part of the problem and drops the other (leading question bias).
It's just a survey within the limits of what is possible here (one line to present the survey), and the result is pretty clear. Additionally, I allowed multiple answers.
Feel free to create your own survey if you like.