- ArrayBsearch
- ArrayCopy
- ArrayCompare
- ArrayFree
- ArrayGetAsSeries
- ArrayInitialize
- ArrayFill
- ArrayIsDynamic
- ArrayIsSeries
- ArrayMaximum
- ArrayMinimum
- ArrayPrint
- ArrayRange
- ArrayResize
- ArrayInsert
- ArrayRemove
- ArrayReverse
- ArraySetAsSeries
- ArraySize
- ArraySort
- ArraySwap
- ArrayToFP16
- ArrayToFP8
- ArrayFromFP16
- ArrayFromFP8
ArrayToFP16
Copies an array of type float or double into an array of type ushort with the given format.
bool  ArrayToFP16(
   ushort&              dst_array[],   // copy to
   const float&         src_array[],   // copy from
   ENUM_FLOAT16_FORMAT  fmt            // format
   );
Overloading for the double type
bool  ArrayToFP16(
   ushort&              dst_array[],   // copy to
   const double&        src_array[],   // copy from
   ENUM_FLOAT16_FORMAT  fmt            // format
   );
Parameters
dst_array[]
[out] Receiver array of type ushort.
src_array[]
[in] Source array of type float or double.
fmt
[in] Copying format from the ENUM_FLOAT16_FORMAT enumeration.
Return Value
Returns true if successful or false otherwise.
Note
Formats FLOAT16 and BFLOAT16 are defined in the ENUM_FLOAT16_FORMAT enumeration and are used in MQL5 only for operations with ONNX models.
The function converts input data of type float or double to the FLOAT16 or BFLOAT16 type. The converted data can then be passed to the OnnxRun function.
FLOAT16, also known as half-precision float, uses 16 bits to represent floating-point numbers. This format provides a balance between accuracy and computational efficiency. FLOAT16 is widely used in deep learning algorithms and neural networks, which require high-performance processing of large datasets. This format accelerates computations by reducing the size of numbers, which is especially important when training deep neural networks on GPUs.
BFLOAT16 (or Brain Floating Point 16) also uses 16 bits but allocates them differently from FLOAT16: 8 bits for the exponent and 7 bits for the mantissa (plus 1 sign bit). This format was developed for use in deep learning and artificial intelligence, especially in Google's Tensor Processing Unit (TPU). BFLOAT16 demonstrates excellent performance in neural network training and can effectively accelerate computations.
Example: function from the article Working with ONNX models in float16 and float8 formats
See also