
decimal32 floating-point format


In computing, decimal32 is a decimal floating-point computer numbering format that occupies 4 bytes (32 bits) in computer memory.

Purpose and use


Like the binary16 and binary32 formats, decimal32 uses less space than binary64, the most commonly used floating-point format.

In contrast to the binaryxxx data formats, the decimalxxx formats provide exact representation of decimal fractions, exact calculations with them, and the human-common 'ties away from zero' rounding (within some range, to some precision, to some degree), in a trade-off for reduced performance. They are intended for applications that are expected to come close to schoolhouse math, such as financial and tax computations. In short, they avoid problems like 0.4 + 0.3 yielding 0.70000005, which happens with binary32 datatypes.
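
As an illustrative sketch, assuming a GCC toolchain with decimal floating-point support (_Decimal32 and the DF literal suffix are GCC extensions following ISO/IEC TS 18661-2 and are not available on every target), the difference can be shown like this:

#include <stdio.h>

int main(void)
{
    float      bf = 0.4f  + 0.3f;    /* binary32: neither addend is exactly representable */
    _Decimal32 df = 0.4DF + 0.3DF;   /* decimal32: both addends are exactly representable */

    /* The binary32 sum is 0.70000005..., so the comparison fails. */
    printf("binary32 : 0.4f + 0.3f   == 0.7f   -> %s\n", (bf == 0.7f)  ? "true" : "false");

    /* The decimal32 sum is exactly 0.7, so the comparison holds. */
    printf("decimal32: 0.4DF + 0.3DF == 0.7DF  -> %s\n", (df == 0.7DF) ? "true" : "false");
    return 0;
}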

Range and precision


decimal32 supports 'normal' values, which can have 7-digit precision, from ±1.000000×10^−95 up to ±9.999999×10^+96, plus 'denormal' values with ramp-down relative precision down to ±1×10^−101, signed zeros, ±infinities and NaN (Not a Number). The encoding is somewhat complex; see below.

The binary format with the same bit size, binary32, has an approximate range from a denormal minimum of ±1×10^−45 and a normal minimum with full 24-bit precision of ±1.1754944×10^−38 up to a maximum of ±3.4028235×10^38.

Performance


Comparing performance across modern systems is difficult on several levels, but as a rough indication, in a current 64-bit Intel / Linux / GCC / libdfp / BID implementation, decimal32 calculations are between 2 and 15 times slower than binary32 datatypes for basic arithmetic, while 'higher' functions like power (roughly a factor of 400) and trigonometric functions like tangent (roughly a factor of 30 000) suffer a bigger penalty. The GNU GCC project and the 'libdfp' project on GitHub would likely welcome help to improve this.

Representation / encoding of decimal32 values


decimal32 values are represented in a 'not normalized' format close to scientific notation, combining some bits of the exponent with the leading bits of the significand in a 'combination field'.

Generic encoding
Sign Combination Trailing significand bits
1 bit 11 bits 20 bits
s mmmmmmmmmmm tttttttttttttttttttt

Besides the special cases of infinities and NaNs, there are four points relevant to understanding the encoding of decimal32.

- BID vs. DPD encoding: Binary Integer Decimal uses a positive binary integer for the significand; it is software-centric and was designed by Intel. Densely Packed Decimal encodes all except the first digit of the significand in 'declets'; it is hardware-centric and promoted by IBM. The differences are described below. Both alternatives provide exactly the same range of representable numbers: up to 7 digits of significand and 3 × 2^6 = 192 possible exponent values. IEEE 754 allows these two different encodings, without a concept to denote which one is used, for instance in a situation where decimal32 values are communicated between systems. Caution: transferring binary data between systems using different encodings will mostly produce valid decimal32 numbers, but with a different value. Prefer data exchange as integral or ASCII 'triplets' of sign, exponent and significand.

- In contrast to the binaryxxx formats, the significands of the decimal datatypes are not 'normalized' (the leading digit(s) are allowed to be '0'), and thus most values with fewer than 7 significant digits have multiple possible representations: 1000000 × 10^−2 = 100000 × 10^−1 = 10000 × 10^0 = 1000 × 10^1 all have the value 10000. Such a set of representations of the same value is called a cohort; its different members can be used to denote how many digits of the value are known precisely (a minimal code sketch after this list illustrates this).

- The encodings combine two bits of the exponent with the leading 3 to 4 bits of the significand in a 'combination field', differently for 'big' vs. 'small' significands. That enables bigger precision and range, in a trade-off that some simple operations like sort and compare, very frequently used in code, do not work on the bit pattern but require computations to extract exponent and significand and then obtain an exponent-aligned representation. This effort is partly balanced by saving the effort of normalization, but it contributes to the slower performance of the decimal datatypes. Beware: BID and DPD use different bits of the combination field for this, see below.

- Different understandings of the significand, as an integer or as a fraction, and accordingly different biases to apply to the exponent: for decimal32, what is stored in bits can be decoded as base 10 raised to the power of 'stored exponent value minus a bias of 95' times the significand understood as d0 . d−1 d−2 d−3 d−4 d−5 d−6 (note: radix dot after the first digit, significand fractional), or as base 10 raised to the power of 'stored exponent value minus a bias of 101' times the significand understood as d6 d5 d4 d3 d2 d1 d0 (note: no radix dot, significand integral); both produce the same result (2019 version[1] of IEEE 754, clause 3.3, page 18). This applies to the BID as well as the DPD encoding. For the decimalxxx datatypes the second (integral) view is more common, while for the binaryxxx datatypes the first is; the biases are different for each datatype.
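
The cohort concept mentioned in the list above can be made visible in code. This is a minimal sketch, again assuming a GCC toolchain with _Decimal32 support (a GCC extension): the two literals are members of the same cohort, so they compare equal, although their stored exponent/significand pairs, and therefore their bit patterns, are expected to differ.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    _Decimal32 a = 0.7DF;         /* intended as significand 7,       exponent -1 */
    _Decimal32 b = 0.7000000DF;   /* intended as significand 7000000, exponent -7 */
    uint32_t ra, rb;

    memcpy(&ra, &a, sizeof ra);   /* inspect the raw encodings */
    memcpy(&rb, &b, sizeof rb);

    printf("equal values : %s\n", (a == b) ? "yes" : "no");    /* yes          */
    printf("equal bits   : %s\n", (ra == rb) ? "yes" : "no");  /* expected: no */
    printf("raw a = 0x%08X, raw b = 0x%08X\n", (unsigned)ra, (unsigned)rb);
    return 0;
}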

In all cases for decimal32, the value represented is

(−1)^sign × 10^(exponent − 101) × significand, with the significand understood as a positive integer.

Alternatively it can be understood as (−1)^sign × 10^(exponent − 95) × significand, with the significand digits understood as d0 . d−1 d−2 d−3 d−4 d−5 d−6; note the radix dot making it a fraction.
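
For example, a decimal32 value with sign 0, a stored exponent of 98 and the significand digits 1234567 represents, in the first view, 1234567 × 10^(98 − 101) = 1234567 × 10^−3 = 1234.567 and, in the second view, 1.234567 × 10^(98 − 95) = 1.234567 × 10^3 = 1234.567, i.e. the same value.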

For ±Infinity, besides the sign bit, all the remaining bits are ignored (i.e., both the exponent and significand fields have no effect). For NaNs the sign bit has no meaning in the standard and is ignored; NaNs with different sign bits are therefore equivalent, even though some programs will show NaNs as signed. The bit m5 determines whether the NaN is quiet (0) or signaling (1). The bits of the trailing significand are the NaN's payload and can hold user-defined data (e.g., to distinguish how NaNs were generated). Like normal significands, the payload of NaNs can be in either BID or DPD encoding.

Be aware that the bit numbering used in the tables below, e.g. m10 … m0, is in the opposite direction to that used in the IEEE 754 standard document, G0 … G10.

BID encoding
Combination field (m10 … m0)     Exponent    Significand                   Description
Combination field not starting with '11' (bits ab = 00, 01 or 10):
a b c d m m m m e f g            abcdmmmm    (0)efg tttttttttttttttttttt   Finite number with significand < 8388608, fits into 23 bits
Combination field starting with '11' but not '1111' (bits ab = 11, bits cd = 00, 01 or 10):
1 1 c d m m m m e f g            cdmmmmef    (100)g tttttttttttttttttttt   Finite number with significand > 8388607, needs 24 bits
Combination field starting with '1111' (bits abcd = 1111):
1 1 1 1 0                                                                  ±Infinity
1 1 1 1 1 0                                                                quiet NaN
1 1 1 1 1 1                                                                signaling NaN (with payload in significand)

The resulting 'raw' exponent is an 8-bit binary integer whose leading bits are not '11', thus taking the values 0 ... 10111111b = 0 ... 191d, from which the appropriate bias (101 for the integral-significand view) is subtracted. The resulting significand could be a positive binary integer of 24 bits, up to 1001 1111111111 1111111111b = 10485759d, but values above 10^7 − 1 = 9999999d = 98967Fh = 1001 1000 1001 0110 0111 1111b are 'illegal' and have to be treated as zeroes. To obtain the individual decimal digits, the significand has to be divided by 10 repeatedly.
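
The following is a minimal decoding sketch in C for the BID variant, covering finite numbers only (infinities and NaNs are not handled); the field layout and bias follow the description above, and the function name bid_decode is only illustrative.

#include <stdint.h>
#include <stdio.h>

void bid_decode(uint32_t w)
{
    int      sign     = (int)(w >> 31);
    uint32_t comb     = (w >> 20) & 0x7FF;  /* 11-bit combination field     */
    uint32_t trailing = w & 0xFFFFF;        /* 20 trailing significand bits */
    int      raw_exp;
    uint32_t significand;

    if ((comb >> 9) != 0x3) {               /* combination does not start with '11' */
        raw_exp     = (int)(comb >> 3);                 /* leading 8 exponent bits  */
        significand = ((comb & 0x7) << 20) | trailing;  /* 3 + 20 = 23 bits         */
    } else {                                /* starts with '11', finite case        */
        raw_exp     = (int)((comb >> 1) & 0xFF);        /* next 8 exponent bits     */
        significand = 0x800000u | ((comb & 0x1) << 20) | trailing; /* '100' prefix  */
    }
    if (significand > 9999999u)             /* non-canonical: treated as zero       */
        significand = 0;

    /* value = (-1)^sign * significand * 10^(raw_exp - 101) */
    printf("sign=%d  exponent=%d  significand=%u\n",
           sign, raw_exp - 101, (unsigned)significand);
}

int main(void)
{
    bid_decode(0x32800001u);  /* should print: sign=0  exponent=0  significand=1, i.e. the value 1 */
    return 0;
}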

DPD encoding
Combination field (m10 … m0)     Exponent    Significand                     Description
Combination field not starting with '11' (bits ab = 00, 01 or 10):
a b c d e m m m m m m            abmmmmmm    (0)cde tttttttttt tttttttttt    Finite number with small first digit of significand (0 … 7)
Combination field starting with '11' but not '1111' (bits ab = 11, bits cd = 00, 01 or 10):
1 1 c d e m m m m m m            cdmmmmmm    (100)e tttttttttt tttttttttt    Finite number with big first digit of significand (8 or 9)
Combination field starting with '1111' (bits abcd = 1111):
1 1 1 1 0                                                                    ±Infinity
1 1 1 1 1 0                                                                  quiet NaN
1 1 1 1 1 1                                                                  signaling NaN (with payload in significand)

The resulting 'raw' exponent is an 8-bit binary integer whose leading bits are not '11', thus taking the values 0 ... 10111111b = 0 ... 191d, from which the appropriate bias is subtracted. The significand's leading decimal digit is formed from the (0)cde or (100)e bits as a binary integer. The subsequent digits are encoded in the 10-bit 'declet' fields 'tttttttttt' according to the DPD rules (see below). The full decimal significand is then obtained by concatenating the leading and the trailing decimal digits.
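
As a companion to the table above, here is a minimal C sketch that extracts the DPD fields of a finite decimal32 value: the sign, the raw (biased) exponent, the leading decimal digit and the two declets. The struct and function names are only illustrative; decoding the declets themselves is shown further below.

#include <stdint.h>
#include <stdio.h>

typedef struct {
    int      sign;
    int      raw_exp;      /* biased exponent, 0 ... 191; subtract 101 */
    unsigned lead_digit;   /* most significant decimal digit, 0 ... 9  */
    unsigned declet_hi;    /* encodes digits 2-4 of the significand    */
    unsigned declet_lo;    /* encodes digits 5-7 of the significand    */
} dpd_fields;

dpd_fields dpd_split(uint32_t w)
{
    dpd_fields f;
    uint32_t comb = (w >> 20) & 0x7FF;          /* 11-bit combination field */

    f.sign      = (int)(w >> 31);
    f.declet_hi = (w >> 10) & 0x3FF;
    f.declet_lo = w & 0x3FF;

    if ((comb >> 9) != 0x3) {                   /* leading digit 0 ... 7    */
        f.raw_exp    = (int)(((comb >> 3) & 0xC0) | (comb & 0x3F));
        f.lead_digit = (comb >> 6) & 0x7;
    } else {                                    /* leading digit 8 or 9     */
        f.raw_exp    = (int)(((comb >> 1) & 0xC0) | (comb & 0x3F));
        f.lead_digit = 8 + ((comb >> 6) & 0x1);
    }
    return f;
}

int main(void)
{
    /* 0x22500001 should encode the value 1 (significand 0000001, biased exponent 101) */
    dpd_fields f = dpd_split(0x22500001u);
    printf("sign=%d raw_exp=%d lead=%u declets=0x%03X 0x%03X\n",
           f.sign, f.raw_exp, f.lead_digit, f.declet_hi, f.declet_lo);
    /* expected: sign=0 raw_exp=101 lead=0 declets=0x000 0x001 */
    return 0;
}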

The 10-bit DPD to 3-digit BCD transcoding for the declets is given by the following table. b9 … b0 are the bits of the DPD declet, and d2 … d0 are the three BCD digits. Be aware that the bit numbering used here for b9 … b0 is in the opposite direction to that used in the IEEE 754 standard document (b0 … b9); in addition, the decimal digits are numbered 0-based here, while they are numbered in the opposite direction and 1-based in the IEEE 754 document. Some bits serve only as indicators: they do not count towards the value, but signal how to interpret and shift the other bits. The concept is to denote which digits are small (0 … 7) and encoded in three bits, and which are not and are then built from a prefix of '100' plus one bit specifying whether the digit is 8 or 9.

Densely packed decimal encoding rules[2]
DPD encoded value                Decimal digits    Values encoded       Description                           Occurrences         Code space
b9 b8 b7 b6 b5 b4 b3 b2 b1 b0    d2   d1   d0                                                                 (of 1000 states)    (of 1024 states)
a  b  c  d  e  f  0  g  h  i     0abc 0def 0ghi    (0–7) (0–7) (0–7)    3 small digits                        51.2% (512 states)  50.0% (512 states)
a  b  c  d  e  f  1  0  0  i     0abc 0def 100i    (0–7) (0–7) (8–9)    2 small digits, 1 large digit         38.4% (384 states)  37.5% (384 states)
a  b  c  g  h  f  1  0  1  i     0abc 100f 0ghi    (0–7) (8–9) (0–7)    2 small digits, 1 large digit
g  h  c  d  e  f  1  1  0  i     100c 0def 0ghi    (8–9) (0–7) (0–7)    2 small digits, 1 large digit
g  h  c  0  0  f  1  1  1  i     100c 100f 0ghi    (8–9) (8–9) (0–7)    1 small digit, 2 large digits         9.6% (96 states)    9.375% (96 states)
d  e  c  0  1  f  1  1  1  i     100c 0def 100i    (8–9) (0–7) (8–9)    1 small digit, 2 large digits
a  b  c  1  0  f  1  1  1  i     0abc 100f 100i    (0–7) (8–9) (8–9)    1 small digit, 2 large digits
x  x  c  1  1  f  1  1  1  i     100c 100f 100i    (8–9) (8–9) (8–9)    3 large digits (b9, b8: don't care)   0.8% (8 states)     3.125% (32 states, 8 used)
The 'Occurrences' and 'Code space' figures apply to each group of rows with the same description.

The 8 decimal values whose digits are all 8s or 9s have four codings each. The bits marked x in the table above are ignored on input, but will always be 0 in computed results. (The 8 × 3 = 24 non-standard encodings fill the gap between 10^3 = 1000 and 2^10 − 1 = 1023.)
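
The table can be implemented directly. The following C sketch (function name illustrative) decodes one 10-bit declet into its three decimal digits, returned as an integer 0 … 999, by following the rows of the table above; the four codings of the all-large-digit declets all map to the same digits.

#include <stdio.h>

/* Decode one 10-bit DPD declet (bits b9 ... b0, b9 = MSB) into three
   decimal digits d2 d1 d0, returned as d2*100 + d1*10 + d0. */
int dpd_to_int(unsigned declet)
{
    unsigned b[10];
    for (int i = 0; i < 10; i++)
        b[i] = (declet >> i) & 1;          /* b[i] is bit bi of the declet */

    unsigned d2, d1, d0;
    if (b[3] == 0) {                       /* 3 small digits               */
        d2 = b[9]*4 + b[8]*2 + b[7];
        d1 = b[6]*4 + b[5]*2 + b[4];
        d0 = b[2]*4 + b[1]*2 + b[0];
    } else if (b[2] == 0 && b[1] == 0) {   /* d0 is large                  */
        d2 = b[9]*4 + b[8]*2 + b[7];
        d1 = b[6]*4 + b[5]*2 + b[4];
        d0 = 8 + b[0];
    } else if (b[2] == 0 && b[1] == 1) {   /* d1 is large                  */
        d2 = b[9]*4 + b[8]*2 + b[7];
        d1 = 8 + b[4];
        d0 = b[6]*4 + b[5]*2 + b[0];
    } else if (b[2] == 1 && b[1] == 0) {   /* d2 is large                  */
        d2 = 8 + b[7];
        d1 = b[6]*4 + b[5]*2 + b[4];
        d0 = b[9]*4 + b[8]*2 + b[0];
    } else {                               /* b3 b2 b1 = 111               */
        if (b[6] == 0 && b[5] == 0) {      /* d2 and d1 large              */
            d2 = 8 + b[7];
            d1 = 8 + b[4];
            d0 = b[9]*4 + b[8]*2 + b[0];
        } else if (b[6] == 0 && b[5] == 1) { /* d2 and d0 large            */
            d2 = 8 + b[7];
            d1 = b[9]*4 + b[8]*2 + b[4];
            d0 = 8 + b[0];
        } else if (b[6] == 1 && b[5] == 0) { /* d1 and d0 large            */
            d2 = b[9]*4 + b[8]*2 + b[7];
            d1 = 8 + b[4];
            d0 = 8 + b[0];
        } else {                           /* all three digits 8 or 9      */
            d2 = 8 + b[7];
            d1 = 8 + b[4];
            d0 = 8 + b[0];
        }
    }
    return (int)(d2*100 + d1*10 + d0);
}

int main(void)
{
    /* 0x0A3 = 0010100011b encodes three small digits: 1, 2, 3 */
    printf("%03d\n", dpd_to_int(0x0A3));   /* should print 123 */
    return 0;
}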

The benefit of this encoding is access to the individual digits by de- or encoding only 10 bits at a time; the disadvantage is that some simple operations like sort and compare, very frequently used in code, do not work on the bit pattern but require decoding to decimal digits (and possibly re-encoding to binary integers) first.

An alternative encoding using short BID sections, with 10-bit declets encoding the binary values 0d ... 1023d and simply using only the range from 0 to 999, would provide the same functionality, namely direct access to the digits by de- or encoding 10 bits, with a near-zero performance penalty on modern systems, and would preserve the option of bit-pattern oriented sort and compare. However, the 'Sudoku-like' encoding shown above was chosen historically, may provide better performance in hardware implementations, and now 'is as it is'.

History


decimal32 is a relatively new decimal floating-point format, formally introduced in the 2008 version[3] of IEEE 754 as well as with ISO/IEC/IEEE 60559:2011.[4]

Side effects, more info


Zero has 192 possible representations (384 when both signed zeros are included), and even many more if the 'illegal' significands, which have to be treated as zeroes, are counted.

The gain in range and precision from the 'combination encoding' arises because the two bits taken from the exponent only use three states, and the 4 MSBs of the significand stay within 0000 … 1001 (10 states). In total that is 3 × 10 = 30 possible states when combined in one encoding, which is representable in 5 bits (2^5 = 32 ≥ 30).

The decimalxxx formats include denormal values for a graceful degradation of precision near zero, but in contrast to the binaryxxx formats they are not marked by / do not need a special exponent value; they are simply values too small to have the full 7-digit precision even with the smallest exponent.


References

  1. ^ 754-2019 - IEEE Standard for Floating-Point Arithmetic (paywall). 2019. doi:10.1109/IEEESTD.2019.8766229. ISBN 978-1-5044-5924-2. Archived from the original on 2019-11-01. Retrieved 2019-10-24.
  2. ^ Cowlishaw, Michael Frederic (2007-02-13) [2000-10-03]. "A Summary of Densely Packed Decimal encoding". IBM. Archived from the original on 2015-09-24. Retrieved 2016-02-07.
  3. ^ IEEE Computer Society (2008-08-29). IEEE Standard for Floating-Point Arithmetic. IEEE. doi:10.1109/IEEESTD.2008.4610935. ISBN 978-0-7381-5753-5. IEEE Std 754-2008. Archived from the original on 2016-09-11. Retrieved 2016-02-08.
  4. ^ "ISO/IEC/IEEE 60559:2011". 2011. Archived from the original on 2016-03-04. Retrieved 2016-02-08.