better job now at keeping all physical CPUs busy, while using the extra
threads to help out. In particular, during preempt() if we're using SMT,
try to find a better CPU to run on and teleport curlwp there.
- Change the CPU topology stuff so it can work on asymmetric systems. This
mainly entails rearranging one of the CPU lists so it makes sense in all
configurations.
- Add a parameter to cpu_topology_set() to note that a CPU is "slow", for
systems where there are fast CPUs and slow CPUs, like the Rockchip RK3399.
Extend the SMT awareness to try and handle that situation too (keep fast
CPUs busy, use slow CPUs as helpers).
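A minimal sketch of how MD attach code might use the new flag, assuming the
"slow" indication is simply a boolean appended to cpu_topology_set()'s
existing arguments; the exact signature and the cpu_is_little_core() helper
are assumptions for illustration, not the real code:

    /*
     * Mark the little cores of a big.LITTLE SoC (e.g. RK3399) as "slow"
     * so the scheduler keeps the fast CPUs busy and uses the slow ones
     * as helpers.
     */
    static void
    soc_set_topology(struct cpu_info *ci, u_int pkg, u_int core, u_int smt)
    {
            bool slow = cpu_is_little_core(ci);     /* hypothetical helper */

            cpu_topology_set(ci, pkg, core, smt, 0, slow);
    }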
cleared because it is also cleared inside the loop.
Not clearing it could trigger DNAs on VMEXITs, because STTS/CLTS are still
present as a debugging aid from my FPU overhaul.
"model") property to set the cpu model (in userland aka sysctl hw.model).
When attaching the first cpu, do not overwrite a cpu model if it already
had been set.
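A minimal sketch of the intended behaviour, assuming the kernel's
cpu_getmodel()/cpu_setmodel() interfaces and treating an empty string from
cpu_getmodel() as "not set yet"; the property lookup below is illustrative:

    const char *model;

    /* Honour a "model" device property, but never clobber an existing model. */
    if (prop_dictionary_get_string(device_properties(self), "model", &model) &&
        cpu_getmodel()[0] == '\0')
            cpu_setmodel("%s", model);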
for debugging:
- nomsix (boolean)
  - disable MSI-X support
- stats_interval (signed integer)
  - change the interval for collecting statistics counters
- nqps_limit (signed integer)
  - limit on the number of queue pairs
- {tx,rx}_ndescs (unsigned integer)
  - the number of descriptors in a txqueue or rxqueue
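A sketch of how a driver's attach routine might consume such properties via
proplib; the defaults, variable names, and the use of `self` as the device_t
are illustrative, not taken from the actual driver:

    prop_dictionary_t dict = device_properties(self);
    bool nomsix = false;
    int64_t stats_interval, nqps_limit;
    uint64_t tx_ndescs, rx_ndescs;

    prop_dictionary_get_bool(dict, "nomsix", &nomsix);
    if (!prop_dictionary_get_int64(dict, "stats_interval", &stats_interval))
            stats_interval = 10000;         /* illustrative default */
    if (!prop_dictionary_get_int64(dict, "nqps_limit", &nqps_limit))
            nqps_limit = 0;                 /* 0 = no limit (illustrative) */
    if (!prop_dictionary_get_uint64(dict, "tx_ndescs", &tx_ndescs))
            tx_ndescs = 512;                /* illustrative default */
    if (!prop_dictionary_get_uint64(dict, "rx_ndescs", &rx_ndescs))
            rx_ndescs = 512;                /* illustrative default */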
When mmap()/mprotect() is called with only PROT_EXEC, the syscall succeeds,
but the page is not actually mapped.
It should be mapped with PROT_READ|PROT_EXEC implicitly (r-x).
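A small userland demonstration of the case above, assuming 4 KiB pages and
that the fix makes PROT_EXEC imply PROT_READ: map a page with PROT_EXEC only
and then read from it, which previously faulted because the page was never
actually mapped.

    #include <sys/mman.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
            volatile char *p = mmap(NULL, 4096, PROT_EXEC,
                MAP_ANON | MAP_PRIVATE, -1, 0);

            if (p == MAP_FAILED) {
                    perror("mmap");
                    return EXIT_FAILURE;
            }
            /* Reading works once the page is implicitly mapped r-x. */
            printf("first byte: %d\n", p[0]);
            return EXIT_SUCCESS;
    }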
is not changed so that instead of scanning for the single-character delimiter
':', it scans for "?:". This is because !empty(COMPILE.c:M*-pg*) contains a ':'.
where curcpu() is defined as curlwp->l_cpu:
- mi_switch(): undo the ~2007ish optimisation to unlock curlwp before
calling cpu_switchto(). It's not safe to let other actors mess with the
LWP (in particular l->l_cpu) while it's still context switching. This
removes l->l_ctxswtch.
- Move the LP_RUNNING flag into l->l_flag and rename to LW_RUNNING since
it's now covered by the LWP's lock.
- Ditch lwp_exit_switchaway() and just call mi_switch() instead. Everything
is in cache anyway so it wasn't buying much by trying to avoid saving old
state. This means cpu_switchto() will never be called with prevlwp ==
NULL.
- Remove some KERNEL_LOCK handling which hasn't been needed for years.
Both the desc and the note header need to be aligned. Therefore, we need
to realign after skipping past the desc as well.
While here, convert the other alignment fix to use the roundup() macro.
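For reference, a sketch of stepping over one note with both paddings applied;
the variable names and the 4-byte note alignment are illustrative:

    /* Skip the note header, then the name and the desc, realigning after each. */
    off += sizeof(Elf_Nhdr);
    off += roundup(nhdr.n_namesz, 4);
    off += roundup(nhdr.n_descsz, 4);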
Introduce simple COREDUMP_MACHDEP_LWP_NOTES logic to provide a machdep
API for injecting per-LWP notes into coredumps, and use it to append the
PT_GETXSTATE note.
Since the XSTATE block uses the same format on i386 and amd64, the code
does not have to conditionalize between the 32-bit and 64-bit ELF formats
for that. However, it does need to distinguish between 32-bit and 64-bit
PT_* values. In order to do that, it reuses the PT32_* constants already
present for ptrace(), and adds a matching PT64_GETXSTATE to satisfy
the cpp logic.
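A sketch of the kind of cpp selection this enables; the PT_GETXSTATE_NOTE
name is made up for this illustration and is not the identifier used in the
source:

    #if ELFSIZE == 32
    #define PT_GETXSTATE_NOTE       PT32_GETXSTATE
    #else
    #define PT_GETXSTATE_NOTE       PT64_GETXSTATE
    #endif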
audioclose() freed the audio_file_t structure, but audiobellclose() was the
only close path that did not go through it. Change this so that freeing the
audio_file_t is done by each *_close().
- Partially sort the list of per-vnode namecache entries by using a TAILQ.
Put the real name at the head, and put dot and dotdot at the tail so that
cache_lookup_reverse() doesn't have to consider them.
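A sketch of that insertion policy using the <sys/queue.h> TAILQ macros; the
structure and member names are illustrative, not the actual namecache
definitions:

    if ((namelen == 1 && name[0] == '.') ||
        (namelen == 2 && name[0] == '.' && name[1] == '.')) {
            /* dot and dotdot go to the tail, out of the way of reverse lookups */
            TAILQ_INSERT_TAIL(&vp->v_nclist, ncp, nc_vlist);
    } else {
            /* real names go to the head so cache_lookup_reverse() sees them first */
            TAILQ_INSERT_HEAD(&vp->v_nclist, ncp, nc_vlist);
    }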
>panic: kernel diagnostic assertion "!pmap_extract(pmap_kernel(), loopva, NULL)" failed: file "../../../../uvm/uvm_km.c", line 674 loopva=0xffffffc001000000'
The space allocated by bootpage_alloc() is only used as a physical page
for pagetable pages, so there is no need to map it into KVA.
Also, kernend_extra should not have consumed any KVA space.
(because the compiler complains), to use a match against the compile flags
and *pg*, instead of a match against a target suffix (which is NetBSD
build-specific). Pointed out by phone@.
The preprocessor conditions contradicted each other: __hpux__ or __hpux
would have to be defined, and at the same time neither of them could be
defined.
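An illustration (hypothetical code, not the actual source) of the kind of
contradiction described: the inner test can never hold while the outer one
does.

    #if defined(__hpux__) || defined(__hpux)
    #if !defined(__hpux__) && !defined(__hpux)
            /* dead code: unreachable under the enclosing condition */
    #endif
    #endif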