{x86,m68k}/float.h: document LDBL_MIN behavior

It turns out that even though both these platforms have 12-byte
floats with essentially the same representation, both allegedly
IEEE-compliant, they materialize the top (integer-part) bit of the
mantissa and then differ slightly in the behavior of the extra
encodings this permits.

Thanks to riastradh@ for helping sort this out.
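
As a quick illustration (not part of the commit), here is a minimal
C sketch that computes the two candidate minima with ldexpl(). On
x86 the value 2^-16383 is only representable as a denormal, while on
m68k it is the smallest normal number and hence its LDBL_MIN:

#include <float.h>
#include <math.h>
#include <stdio.h>

int
main(void)
{
	/* Smallest normal the x86 encoding permits: exponent field >= 1. */
	long double xmin = ldexpl(1.0L, -16382);

	/*
	 * Half of that. On m68k, exponent field 0 with the integer bit
	 * set is still a normal number, so this is the m68k LDBL_MIN;
	 * on x86 it is only representable as a denormal.
	 */
	long double mmin = ldexpl(1.0L, -16383);

	printf("LDBL_MIN = %Lg\n", LDBL_MIN);
	printf("2^-16382 = %Lg\n", xmin);
	printf("2^-16383 = %Lg\n", mmin);
	return 0;
}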
dholland 2023-12-31 04:20:40 +00:00
parent ea03157bde
commit ba8892948d
2 changed files with 48 additions and 2 deletions

m68k/float.h

@@ -1,8 +1,31 @@
/* $NetBSD: float.h,v 1.21 2014/03/18 18:20:41 riastradh Exp $ */
/* $NetBSD: float.h,v 1.22 2023/12/31 04:20:40 dholland Exp $ */
#ifndef _M68K_FLOAT_H_
#define _M68K_FLOAT_H_
/*
* LDBL_MIN is half the x86 LDBL_MIN, even though both are 12-byte
* floats with the same base properties and both allegedly
* IEEE-compliant, because both these representations materialize the
* top (integer-part) bit of the mantissa. But on m68k if the exponent
* is 0 and the integer bit is set, it's a regular number, whereas on
* x86 it's called a pseudo-denormal and apparently treated as a
* denormal, so it doesn't count as a valid value for LDBL_MIN.
*
* x86 citation: Intel 64 and IA-32 Architectures Software Developer's
* Manual, vol. 1 (Order Number: 253665-077US, April 2022), Sec. 8.2.2
* `Unsupported Double Extended-Precision Floating-Point Encodings
* and Pseudo-Denormals', p. 8-14.
*
* m68k citation: MC68881/MC68882 Floating-Point Coprocessor User's
* Manual, Second Edition (Prentice-Hall, 1989, apparently issued by
* Freescale), Section 3.2 `Binary Real Data formats', p. 3-3 bottom
* in particular and pp. 3-2 to 3-5 in general.
*
* If anyone needs to update this comment, please make sure the copy
* in x86/float.h also gets updated.
*/
#if defined(__LDBL_MANT_DIG__)
#define LDBL_MANT_DIG __LDBL_MANT_DIG__
#define LDBL_EPSILON __LDBL_EPSILON__
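
For reference (not part of the diff), a small sketch that dumps the
byte encoding of LDBL_MIN so the materialized integer bit is visible;
byte order and trailing padding are machine-dependent, so the exact
output differs between the two platforms:

#include <float.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	long double ld = LDBL_MIN;
	unsigned char buf[sizeof(long double)];
	size_t i;

	/*
	 * On little-endian x86 this prints the mantissa first; the
	 * 0x80 byte at offset 7 is the explicitly stored integer bit.
	 */
	memcpy(buf, &ld, sizeof(buf));
	for (i = 0; i < sizeof(buf); i++)
		printf("%02x ", buf[i]);
	printf("\n");
	return 0;
}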

x86/float.h

@@ -1,10 +1,33 @@
/* $NetBSD: float.h,v 1.6 2013/04/27 21:35:25 joerg Exp $ */
/* $NetBSD: float.h,v 1.7 2023/12/31 04:20:40 dholland Exp $ */
#ifndef _X86_FLOAT_H_
#define _X86_FLOAT_H_
#include <sys/featuretest.h>
/*
* LDBL_MIN is twice the m68k LDBL_MIN, even though both are 12-byte
* floats with the same base properties and both allegedly
* IEEE-compliant, because both these representations materialize the
* top (integer-part) bit of the mantissa. But on m68k if the exponent
* is 0 and the integer bit is set, it's a regular number, whereas on
* x86 it's called a pseudo-denormal and apparently treated as a
* denormal, so it doesn't count as a valid value for LDBL_MIN.
*
* x86 citation: Intel 64 and IA-32 Architectures Software Developer's
* Manual, vol. 1 (Order Number: 253665-077US, April 2022), Sec. 8.2.2
* `Unsupported Double Extended-Precision Floating-Point Encodings
* and Pseudo-Denormals', p. 8-14.
*
* m68k citation: MC68881/MC68882 Floating-Point Coprocessor User's
* Manual, Second Edition (Prentice-Hall, 1989, apparently issued by
* Freescale), Section 3.2 `Binary Real Data formats', p. 3-3 bottom
* in particular and pp. 3-2 to 3-5 in general.
*
* If anyone needs to update this comment, please make sure the copy
* in m68k/float.h also gets updated.
*/
#define LDBL_MANT_DIG 64
#define LDBL_EPSILON 1.0842021724855044340E-19L
#define LDBL_DIG 18
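
To poke at the pseudo-denormal case directly, a hypothetical x86-only
sketch that hand-builds the encoding with exponent field 0 and the
integer bit set (assuming the little-endian 80-bit layout: mantissa
in bytes 0-7, sign and exponent in bytes 8-9). Going by the Intel
manual cited above, the FPU should accept this operand and treat it
as a denormal-class 1.0 * 2^-16382, i.e. the same value as LDBL_MIN
in a non-canonical encoding, whereas on m68k the same bit pattern
would denote the normal number 2^-16383 instead:

#include <float.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	unsigned char buf[sizeof(long double)] = { 0 };
	long double pd;

	/*
	 * Exponent field 0, fraction 0, explicit integer bit set:
	 * a "pseudo-denormal" on x86, the normal 2^-16383 on m68k.
	 */
	buf[7] = 0x80;
	memcpy(&pd, buf, sizeof(pd));

	printf("pseudo-denormal = %Lg\n", pd);
	printf("LDBL_MIN        = %Lg\n", LDBL_MIN);
	printf("same value? %s\n", pd == LDBL_MIN ? "yes" : "no");
	return 0;
}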