If any member is LINK_STATE_UP, the combined link state is LINK_STATE_UP.
Otherwise, if any member is LINK_STATE_UNKNOWN, it is LINK_STATE_UNKNOWN.
Otherwise it is LINK_STATE_DOWN.
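A minimal sketch of this rule (the constants are defined locally here; the helper itself is illustrative, not the actual kernel code):

    #define LINK_STATE_UNKNOWN 0
    #define LINK_STATE_DOWN    1
    #define LINK_STATE_UP      2

    /* Combine the link states of all members into one state. */
    static int
    combined_link_state(const int *member_states, int n)
    {
        int state = LINK_STATE_DOWN;
        for (int i = 0; i < n; i++) {
            if (member_states[i] == LINK_STATE_UP)
                return LINK_STATE_UP;       /* any UP member wins */
            if (member_states[i] == LINK_STATE_UNKNOWN)
                state = LINK_STATE_UNKNOWN; /* UNKNOWN beats DOWN */
        }
        return state;
    }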
- Bug fixes to gmp_snprintf, conversion to double, mpz_powm,
and mpf_set_str.
- New functions for factorials, primorials and Fibonacci numbers, including
mpz_2fac_ui and mpz_mfac_uiui (usage example after this list).
- MIPS r6 cores are now supported.
- Various speed-ups.
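The double- and multi-factorial functions are small additions to the mpz API; a quick usage example (GMP >= 5.1):

    #include <stdio.h>
    #include <gmp.h>

    int
    main(void)
    {
        mpz_t r;
        mpz_init(r);

        mpz_2fac_ui(r, 9);          /* double factorial: 9!! = 9*7*5*3*1 */
        gmp_printf("9!! = %Zd\n", r);

        mpz_mfac_uiui(r, 10, 3);    /* multi-factorial: 10!^(3) = 10*7*4*1 */
        gmp_printf("10!^(3) = %Zd\n", r);

        mpz_clear(r);
        return 0;
    }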
does not use mknative.common yet, so it always updates files and does not
mark them with the NetBSD rcsid (not a regression from the manual version,
at least).
used to before.
While it uses fewer total lines of code and looks less ugly, the merged
crash+ddb code here is less correct and harder to follow for the kernel
path.
There is a crucial difference between these functions, in that
Lst_ForEachUntil can cope with a few concurrent modifications while
iterating over the list. This is something that Lst_ForEach doesn't do.
This difference led to a crash very early in NetBSD's build.sh.
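A simplified sketch (not make's actual implementation) of the difference: reading the next pointer before invoking the callback is what lets the -Until variant survive the callback removing the current node.

    #include <stddef.h>

    typedef struct ListNode ListNode;
    struct ListNode {
        ListNode *prev, *next;
        void *datum;
    };

    typedef struct List {
        ListNode *first, *last;
    } List;

    /* Survives removal of the current node during the callback; stops
     * early ("until") when the callback returns nonzero. */
    static void
    for_each_until(List *lst, int (*proc)(void *, void *), void *arg)
    {
        for (ListNode *ln = lst->first; ln != NULL;) {
            ListNode *next = ln->next;  /* saved before the callback may free ln */
            if (proc(ln->datum, arg) != 0)
                break;
            ln = next;
        }
    }

    /* Assumes the list is not modified during the iteration at all. */
    static void
    for_each(List *lst, void (*proc)(void *, void *), void *arg)
    {
        for (ListNode *ln = lst->first; ln != NULL; ln = ln->next)
            proc(ln->datum, arg);       /* ln->next is read after the callback */
    }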
These functions made the code larger than necessary. The prev and next
fields are published intentionally since navigating in a doubly-linked
list is simple to do and there is no need to wrap this in a layer of
function calls, not even syntactically. (On the execution level, the
function calls had been inlined anyway.)
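Continuing the sketch above (same List/ListNode types): with public prev and next fields, even unlinking a node is just a handful of assignments, with no need for wrapper functions.

    static void
    unlink_node(List *lst, ListNode *ln)
    {
        if (ln->prev != NULL)
            ln->prev->next = ln->next;
        else
            lst->first = ln->next;
        if (ln->next != NULL)
            ln->next->prev = ln->prev;
        else
            lst->last = ln->prev;
    }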
The 3 callers of this function passed different flags, and these flags
led to code paths that almost did not overlap.
It's a bit strange that GCC 5 didn't get that, and even marking the
function as inline did not produce much smaller code, even though the
conditions inside that function were obviously constant. Clang 9 did a
better job here.
But even for human readers, inlining the function and then throwing away
the dead code results in code that is much easier to follow.
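A generic illustration of the pattern (not the actual make code): when every caller passes a constant flag value, each branch of the shared function is live for only one call site, so splitting it, or letting the compiler inline it and prune the constant conditions, yields the specialized versions directly.

    #include <stdio.h>

    #define EMIT_PREFIX  0x01   /* print a "> " prefix */
    #define EMIT_NEWLINE 0x02   /* append a newline */

    static void
    emit(const char *s, int flags)
    {
        if (flags & EMIT_PREFIX)
            fputs("> ", stdout);
        fputs(s, stdout);
        if (flags & EMIT_NEWLINE)
            putchar('\n');
    }

    int
    main(void)
    {
        /* Three callers, three constant flag values: after inlining,
         * each call site keeps only its own branches. */
        emit("a", EMIT_NEWLINE);
        emit("b", EMIT_PREFIX | EMIT_NEWLINE);
        emit("c", 0);
        return 0;
    }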
This pattern of squeezing completely different code into a single
function has already occurred in a different part of make, though I
don't remember where exactly.
The previous API had complicated rules for when the single function
returned NULL and for what it did otherwise. The flags for that function
were confusing since passing TARG_NOHASH would create a new node even
though TARG_CREATE was not included in that bit mask.
Splitting the function into 3 separate functions avoids this confusion.
It also reveals several places where the complicated API led to
unreachable code. Such code has been removed.
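The resulting API shape looks roughly like this (simplified declarations; see make's targ.c for the authoritative ones):

    typedef struct GNode GNode;

    /* Look the target up; returns NULL if it does not exist. */
    GNode *Targ_FindNode(const char *name);

    /* Look the target up, creating it if necessary; never returns NULL. */
    GNode *Targ_GetNode(const char *name);

    /* Always create a new node, without entering it in the hash table. */
    GNode *Targ_NewInternalNode(const char *name);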
Link state changes are not dependent on the interface being up, but we also
need to guard against more link state changes being scheduled when the
interface is being detached.
We do this by clearing the link queue but keeping if_link_scheduled = true.
We can check for this in both if_link_state_change() and
if_link_state_change_work() and abort early: there is no point in doing
anything if the interface is being detached, because if_down() is called
in if_detach() after the workqueue has been drained, to the same overall
effect.
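A rough sketch of that guard; only if_link_scheduled is named after the real struct ifnet member, everything else here is an illustrative stand-in:

    #include <stdbool.h>

    struct ifnet_sketch {
        bool if_link_scheduled;  /* link state work pending (or detaching) */
        int  if_link_nqueued;    /* queued link state changes */
    };

    static bool
    link_state_detaching(const struct ifnet_sketch *ifp)
    {
        /* flag still set but queue already cleared => detach has started */
        return ifp->if_link_scheduled && ifp->if_link_nqueued == 0;
    }

    static void
    link_state_change(struct ifnet_sketch *ifp, int new_state)
    {
        if (link_state_detaching(ifp))
            return;              /* if_down() in if_detach() covers this */
        ifp->if_link_nqueued++;
        ifp->if_link_scheduled = true;
        /* ... schedule the work to process new_state ... */
        (void)new_state;
    }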
Changed __float128 to the type _Float128 specified in ISO/IEC TS 18661.
__float128 is used as a fallback if _Float128 is not supported.
New function mpfr_get_str_ndigits about conversion to a string of digits.
New function mpfr_dot for the dot product (incomplete, experimental).
New functions mpfr_get_decimal128 and mpfr_set_decimal128 (available
only when MPFR has been built with decimal float support).
New function mpfr_cmpabs_ui.
New function mpfr_total_order_p for the IEEE 754 totalOrder predicate.
The mpfr_out_str function now accepts bases from -2 to -36, in order to
follow mpfr_get_str and GMP's mpf_out_str functions (previously these cases
gave an assertion failure, as with other invalid bases).
Shared caches: cleanup; really detect lock failures (abort in this case).
Improved mpfr_add and mpfr_sub when all operands have a precision
equal to twice the number of bits per word, e.g., 128 bits on a 64-bit
platform.
Optimized the tuning parameters for various architectures.
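Two of the new MPFR 4.1 entry points in a short, self-contained example:

    #include <stdio.h>
    #include <mpfr.h>

    int
    main(void)
    {
        mpfr_t x, y;
        mpfr_init2(x, 113);
        mpfr_init2(y, 113);
        mpfr_set_d(x, -0.0, MPFR_RNDN);
        mpfr_set_d(y, 0.0, MPFR_RNDN);

        /* IEEE 754 totalOrder predicate: -0 orders before +0 (nonzero result) */
        printf("totalOrder(-0, +0) = %d\n", mpfr_total_order_p(x, y));

        /* digits needed to convert a 113-bit number to base 10 and back */
        printf("ndigits = %zu\n", mpfr_get_str_ndigits(10, 113));

        mpfr_clear(x);
        mpfr_clear(y);
        return 0;
    }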
- Keep a bitmap of eligible interrupt-handling CPUs in the pci_chipset_tag_t.
If this bitmap is 0, then we assume that all PCI interrupts should be
routed to the primary CPU.
- Add an optional PCI chipset callback for setting the CPU affinity of
an interrupt.
- When establishing an interrupt handler, select the CPU that will
handle this irq using the following algorithm (see the sketch after this
list):
==> If the irq already has a CPU assignment, keep it.
==> Otherwise, find the CPU with the fewest registered handlers that
is eligible from both a hardware (based on the pci_chipset_tag_t)
and a software (based on cpu_info::ci_schedstate.spc_flags) perspective.
==> Failing all else, fall back to the primary CPU.
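An illustrative sketch of that selection (the structures and helper are stand-ins, not the actual NetBSD code; CPU index 0 plays the role of the primary CPU):

    #include <stdbool.h>
    #include <stdint.h>

    struct cpu_sketch {
        bool     eligible;   /* software eligibility (spc_flags stand-in) */
        unsigned nhandlers;  /* interrupt handlers already registered */
    };

    static int
    pick_irq_cpu(uint32_t hw_cpus,              /* bitmap from the chipset tag */
        const struct cpu_sketch *cpus, int ncpu,
        int assigned)                           /* existing assignment, or -1 */
    {
        if (assigned >= 0)
            return assigned;         /* keep an existing assignment */
        if (hw_cpus == 0)
            return 0;                /* route everything to the primary CPU */

        int best = -1;
        for (int i = 0; i < ncpu && i < 32; i++) {
            if ((hw_cpus & ((uint32_t)1 << i)) == 0 || !cpus[i].eligible)
                continue;            /* hardware- or software-ineligible */
            if (best < 0 || cpus[i].nhandlers < cpus[best].nhandlers)
                best = i;            /* fewest registered handlers wins */
        }
        return best >= 0 ? best : 0; /* failing all else, the primary CPU */
    }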