For coders : what would you like to see as value for 'x = 1.123 * 0.67' (double) in the MetaEditor Watchlist ? (see first post for more explanation). - page 2

 
Stanislav Korotky #:

It seems you don't understand the problem. In an actual expression coming from a trade system you can't guarantee that all values have the same decimal precision (due to their underlying nature). And the meaning of the result value can imply a different precision, so you can't eliminate rounding at some stage (one or more).

I worked in a firm developing financial accounting systems - I know the problem.

Fernando Carreiro #:

The same applies to binary floating point. The rounding error considerations are equivalent.

There is a mantissa and an exponent, be it binary or decimal, and there is a limit to the precision of what is stored, and how it is manipulated during calculations.

Just as a "double" has a fixed number of available bits for representing the number, so does the decimal counterpart. I see no issue nor difference, one is binary and the other is decimal, that is all.

Binary is faster and more compact, while decimal may be slower and less compact. But that is where the differences end.

And yes, I have also worked for a financial institution before. Between 2000-2002, I worked for a major well known Spanish bank, in the software department of their trading division, and even visited the trading floor often. 

Instead of arguments from authority ("I worked there", "I know the problem"), it would be more interesting to get technical details from concrete examples.

I think there are problems in both approaches (double or decimal variants), though they seem to me a bit different. 

 
Using "cent-values" or the smallest relevant fraction of a value to allow for integer-like representation, like Postgres does it, brings along the problem of storing the relevant digits, as they might be understood globally.

So when changing from 4 digits to 5 digits, all previous stored values would now be wrong, requiring a one-time update/conversion.

So in this case, the data type would also need to encompass the digits as well, which brings us back to start.

The debugger should display what's in memory and show the interpreted values as well. There is no point in reducing the available information.
 
Fernando Carreiro #:

The same applies to binary floating point. The rounding error considerations are equivalent.

There is a mantissa and an exponent, be it binary or decimal, and there is a limit to the precision of what is stored, and how it is manipulated during calculations.

Just as a "double" has a fixed number of available bits for representing the number, so does the decimal counterpart. I see no issue nor difference, one is binary and the other is decimal, that is all.

The rounding error considerations are not equivalent. double takes all the complexity upon itself thanks to the floating-point approach, which gives it a variable decimal precision that adapts on the fly. With numbers of fixed decimal precision, especially with different precisions, you need to invent something. How would you calculate the average price of a position made up of several deals, where the lots and prices are of different types: DECIMAL(2) and DECIMAL(5) respectively?

 
To represent the real value (as we humans write it), the only way is for MT5 to support a decimal (base-10) floating-point system.

Given that MT5 supports only base-2 FP (float and double), I am voting for the shortest string value (not one of your choices, Alain) that maintains round-trip accuracy (via the Ryū, Grisu3, or DragonBox algorithms). A hex value is also fine, as you're supposed to be a programmer after all. We should not reinvent the wheel: if the major programming languages have found a better way to handle the binary FP system, then it's welcome.

The only alternative is DECIMAL.

(In reality, for an MT5 developer, it is not about what you like; it only depends on how you display the numbers, either with sprintf("%.15g") or using Ryū or others.)
 
Stanislav Korotky #:

The rounding error considerations are not equivalent. double takes all the complexity upon itself thanks to the floating-point approach, which gives it a variable decimal precision that adapts on the fly. With numbers of fixed decimal precision, especially with different precisions, you need to invent something. How would you calculate the average price of a position made up of several deals, where the lots and prices are of different types: DECIMAL(2) and DECIMAL(5) respectively?

You are thinking in terms of fixed-point decimals. I did not state "fixed point". I implied "decimal floating point" and even mentioned the terms "mantissa" and "exponent" in my post #10. These are floating-point concepts, which are also used for a "double" (which is double-precision binary floating point).

EDIT: To be fair, it seems that for "decimal floating point" the concept of "mantissa" is instead called "coefficient" (at least according to Wikipedia). But, be it called "mantissa", "coefficient", or "fraction", the concept is similar (it is the part with the significant digits).

Standard formats

IEEE 754 specifies three standard decimal floating-point formats of different precision: decimal32, decimal64, and decimal128.

Language support

  • C# has a built-in data type decimal consisting of 128 bits, resulting in 28–29 significant digits. It has an approximate range of ±1.0 × 10^−28 to ±7.9228 × 10^28.[1]
  • Starting with Python 2.4, Python's standard library includes a Decimal class in the module decimal.[2]
  • Ruby's standard library includes a BigDecimal class in the module bigdecimal.
  • Java's standard library includes a java.math.BigDecimal class.
  • In Objective-C, the Cocoa and GNUstep APIs provide an NSDecimalNumber class and an NSDecimal C data type for representing decimals whose mantissa is up to 38 digits long, and exponent is from −128 to 127.
  • Some IBM systems and SQL systems support DECFLOAT format with at least the two larger formats.[3]
  • ABAP's new DECFLOAT data type includes decimal64 (as DECFLOAT16) and decimal128 (as DECFLOAT34) formats.[4]
  • PL/I natively supports both fixed-point and floating-point decimal data.
  • GNU Compiler Collection (gcc) provides support for decimal floats as an extension to C and C++.[5]
 

This survey answers one part of the problem and drops the other (leading question bias).

What was asked:

For a double value with a given hexadecimal representation (e.g., 0x3FE813BE22E5DE16), what would you like to see as value:

1) 0.75241 (the real value, as in mathematics or on a calculator)

2) 0.752410000000000023 (IEEE-754 value converted to decimal, 18 digits)

Of course, most answers will be (1).

There should be a second question:

What about doubles that differ by 1 bit in their hexadecimal representations: would you like to see the same string representation (ignoring the bit difference)?

I think most answers will be NO.

A script to demonstrate it practically:

//+------------------------------------------------------------------+
//| Convert a double value into the hexadecimal representation.      |
//+------------------------------------------------------------------+
string DoubleToHexadecimal(const double value)
  {
   union _d {double value; long bits;} dbl;
   dbl.value = value;
   return StringFormat("0x%.16I64X", dbl.bits);
  }
  
void RoundTrippable()
  {
   // In this case, the numbers 0.75241 and 0.752410000000000023 (and many other very close numbers, within half a ULP above and below)
   // are encoded to the same binary number in memory (i.e., a many-to-one encoding), unlike the one-to-one encoding of integers.

   // So, we can choose either the string "0.752410000000000023" or the string "0.75241" to convert double->string. 
   // Both are valid and correct results. 
   // The Ryū algorithm (or MQL5 Print()) produces the shortest round-trip string and displays "0.75241" for both.

   Print( DoubleToHexadecimal(0.75241) );                 // 0x3FE813BE22E5DE16
   Print( DoubleToHexadecimal(0.752410000000000023) );    // 0x3FE813BE22E5DE16
  
   Print(0.75241);                                        // 0.75241
   Print(0.752410000000000023);                           // 0.75241
  }

void Non_RoundTrippable()
{
   // These very close numbers 0.76364 and 0.7636400000000001 are encoded to different binary numbers in memory !!
   // To maintain round-trip accuracy: RYU algorithm (or MQL5 Print()) has to display a different string for each.
  
   Print( DoubleToHexadecimal(0.76364) );                 // 0x3FE86FBD273D5BAB
   Print( DoubleToHexadecimal(0.7636400000000001) );      // 0x3FE86FBD273D5BAC

   Print(0.76364);                                        // 0.76364
   Print(0.7636400000000001);                             // 0.7636400000000001
}

void OnStart()
  {
   RoundTrippable();
   
   Non_RoundTrippable();
  }

This is what's meant by round-trip accuracy.

Ryū / Print() implements the shortest string that maintains round-trip accuracy. I think it is a well-balanced trade-off between accuracy and verbosity.

Note:

I am just trying to find a practical solution for double→string conversion, given the limitations of the binary (base-2) floating-point system (float, double).

Remember that our human decimal number (0.75241) has to be converted to an approximate binary fraction (0.752410000000000023) with a power-of-two denominator, x/2^n (a binary floating-point number), with some inherent loss of precision smaller than the machine epsilon. Therefore, when we convert back from machine (binary) to human (decimal string), we have to deal with that tiny imprecision (double→string conversion).

I will not consider other FP formats such as decimal (base-10), binary-coded decimal, fixed-point, or arbitrary-precision (bignum) number systems, because these solutions are not practical for the MetaTrader ecosystem.

 

This is a screenshot of the Visual Studio 2022 debugger's Watch window using the shortest round-trip algorithm.

 
amrali #:

This survey answers one part of the problem and drops the other (leading question bias).

I would like to clarify my original vote, which was for it to display ...  "0.75241"

However, depending on what is actually being debugged, it may be necessary to delve deeper, so I would suggest that the normal display be "0.75241", but that the tooltip show the full value as well as the hexadecimal representation.

Another option is to allow the user to configure the defaults or have a button that switches between "nice value", full value, and hex representation.

 
amrali #:

This survey answers one part of the problem and drops the other (leading question bias).

It's just a survey within the limits of what is possible here (one line to present the survey), and the result is pretty clear. Additionally, I allowed multiple answers. 

Feel free to create your own survey if you like.

 
Fernando Carreiro #:

I would like to clarify my original vote, which was for it to display ...  "0.75241"

However, depending on what is actually being debugged, it may be necessary to delve deeper, so I would suggest that the normal display be "0.75241", but that the tooltip show the full value as well as the hexadecimal representation.

Another option is to allow the user to configure the defaults or have a button that switches between "nice value", full value, and hex representation.

It's exactly what I proposed (in the original discussion thread).