is empty besides calling switch_exit(). So, rename switch_exit() to
cpu_exit() and modify the routine to call lwp_exit2() directly.
This saves a couple of cycles on the exit path.
virtual memory reservation and a private pool of memory pages -- by a scheme
based on memory pools.
This allows better utilization of memory because buffers can now be allocated
with a granularity finer than the system's native page size (useful for
filesystems with e.g. 1k or 2k fragment sizes). It also avoids fragmentation
of virtual to physical memory mappings (due to the former fixed virtual
address reservation) resulting in better utilization of MMU resources on some
platforms. Finally, the scheme is more flexible by allowing run-time decisions
on the amount of memory to be used for buffers.
On the other hand, the effectiveness of the LRU queue for buffer recycling
may be somewhat reduced compared to the traditional method since, due to the
nature of the pool based memory allocation, the actual least recently used
buffer may release its memory to a pool different from the one needed by a
newly allocated buffer. However, this effect will kick in only if the
system is under memory pressure.
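As a rough illustration of why a pool-based scheme packs buffer memory more
tightly than a fixed page-per-buffer reservation, here is a minimal sub-page
chunk allocator in plain C. It is only a sketch under stated assumptions: the
bufmem_* names are invented for the example, malloc() stands in for a real
page allocation, and the fixed 1k chunk size mirrors the fragment-size case
mentioned above; it is not the actual bufcache or pool(9) code.

    #include <stdlib.h>

    #define CHUNK_SIZE      1024    /* e.g. a 1k filesystem fragment */
    #define PAGE_SZ         4096    /* stand-in for the system page size */
    #define CHUNKS_PER_PAGE (PAGE_SZ / CHUNK_SIZE)

    /* Free chunks are threaded onto a simple singly linked list. */
    struct chunk {
        struct chunk *next;
    };

    static struct chunk *freelist;

    /* Grow the pool by one page and carve it into CHUNK_SIZE pieces. */
    static int
    bufmem_grow(void)
    {
        char *page = malloc(PAGE_SZ);   /* stands in for a page allocation */
        int i;

        if (page == NULL)
            return -1;
        for (i = 0; i < CHUNKS_PER_PAGE; i++) {
            struct chunk *c = (struct chunk *)(page + i * CHUNK_SIZE);

            c->next = freelist;
            freelist = c;
        }
        return 0;
    }

    /* Hand out one chunk; four 1k buffers end up sharing a single 4k page. */
    static void *
    bufmem_get(void)
    {
        struct chunk *c;

        if (freelist == NULL && bufmem_grow() != 0)
            return NULL;
        c = freelist;
        freelist = c->next;
        return c;
    }

    /* Return a chunk; releasing whole pages back (under memory pressure)
     * is not shown here. */
    static void
    bufmem_put(void *p)
    {
        struct chunk *c = p;

        c->next = freelist;
        freelist = c;
    }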
the standard flags. See also PR#19163.
Before:
cpu0: AMD Athlon XP 1800+ (686-class), 1532.11 MHz
cpu0: features 383f9ff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,SEP,MTRR>
cpu0: features 383f9ff<PGE,MCA,CMOV,FGPAT,PSE36,MMX>
cpu0: features 383f9ff<FXSR,SSE>
After:
cpu0: AMD Athlon XP 1800+ (686-class), 1532.11 MHz
cpu0: features c3cbf9ff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,SEP,MTRR>
cpu0: features c3cbf9ff<PGE,MCA,CMOV,PAT,PSE36,MPC,MMXX,MMX>
cpu0: features c3cbf9ff<FXSR,SSE,3DNOW2,3DNOW>
While I'm here, note that amd_cpuid_cpu_cacheinfo() is an info function
rather than a probe function.
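The flag lines above are produced by matching set bits in the feature word
against a table of names. A minimal, self-contained sketch of that idea
follows; it is not the kernel's actual printing code (which has its own,
fuller feature tables and format strings), and the table below lists only a
handful of well-known CPUID bits.

    #include <stdio.h>
    #include <stdint.h>

    /* A few well-known CPUID "features" (EDX) bits; the kernel keeps
     * much fuller tables. */
    static const struct { int bit; const char *name; } cpu_feature_names[] = {
        { 0,  "FPU"  }, { 1,  "VME"  }, { 2,  "DE"   }, { 3,  "PSE"   },
        { 4,  "TSC"  }, { 5,  "MSR"  }, { 6,  "PAE"  }, { 7,  "MCE"   },
        { 8,  "CX8"  }, { 11, "SEP"  }, { 12, "MTRR" }, { 13, "PGE"   },
        { 14, "MCA"  }, { 15, "CMOV" }, { 16, "PAT"  }, { 17, "PSE36" },
        { 23, "MMX"  }, { 24, "FXSR" }, { 25, "SSE"  }, { 26, "SSE2"  },
    };

    /* Print the hex word followed by the names of the bits that are set. */
    static void
    print_cpu_features(uint32_t features)
    {
        const char *sep = "";
        size_t i;

        printf("cpu0: features %x<", (unsigned)features);
        for (i = 0; i < sizeof(cpu_feature_names) / sizeof(cpu_feature_names[0]); i++) {
            if (features & (1u << cpu_feature_names[i].bit)) {
                printf("%s%s", sep, cpu_feature_names[i].name);
                sep = ",";
            }
        }
        printf(">\n");
    }

    int
    main(void)
    {
        /* The "After" feature word from the output above; only bits
         * present in the table are named. */
        print_cpu_features(0xc3cbf9ff);
        return 0;
    }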
in interrupt controllers in struct pic, and try to keep as much
common code as possible. At the lowest (asm) level, this is done
with CPP macros.
The main structure is now struct intrsource, describing an established
interrupt line, of any kind (soft/hard local apic/legacy apic/IO apic).
For quick masking, there is a maximum of 32 sources per CPU.
Sources can be assigned to any CPU in the MP case, though currently they
all go to the boot CPU.
caveats, but works quite well in a lot of MP cases, and all
UP cases that I have tested. Parts of this will hopefully be
reworked in the not-too-distant future.
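To make the 32-source limit and the quick masking concrete, here is a
schematic sketch: each established source gets one bit in per-CPU 32-bit
mask words, so masking, unmasking and replaying a deferred interrupt are
single bit operations. All names below are invented for the sketch and do
not match the real struct pic / struct intrsource layout.

    #include <stdint.h>

    #define MAX_INTR_SOURCES 32     /* one bit per source in a 32-bit word */

    /* Schematic per-source state; the real struct intrsource holds more. */
    struct isrc {
        int      slot;              /* index 0..31 into the per-CPU masks */
        void   (*handler)(void *);
        void    *arg;
    };

    /* Schematic per-CPU interrupt state. */
    struct cpu_intr_state {
        volatile uint32_t ipending; /* sources that fired while masked */
        volatile uint32_t imasked;  /* sources currently masked */
        struct isrc *sources[MAX_INTR_SOURCES];
    };

    /* Masking a source is a single bit operation, which is why the
     * per-CPU limit of 32 sources exists in the first place. */
    static inline void
    intr_mask(struct cpu_intr_state *ci, const struct isrc *is)
    {
        ci->imasked |= 1u << is->slot;
    }

    static inline void
    intr_unmask_and_replay(struct cpu_intr_state *ci, struct isrc *is)
    {
        uint32_t bit = 1u << is->slot;

        ci->imasked &= ~bit;
        if (ci->ipending & bit) {       /* deferred while masked? */
            ci->ipending &= ~bit;
            (*is->handler)(is->arg);    /* run it now */
        }
    }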
- If we detect SSE/SSE2 support in the CPU, enable SSE exceptions
and set i386_has_{sse,sse2} as appropriate.
- Expose i386_use_fxsave and i386_has_{sse,sse2} through sysctl
  as machdep.{osfxsr,sse,sse2} (see the userland sketch after this list).
- Add "associativity" to the cache_info structure.
- Add a (*cpu_cacheinfo)() function pointer, like we have a
(*cpu_setup)() function pointer. Cache info in the `cpuid'
is vendor-specific.
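For reference, the new sysctl nodes can be read from userland roughly as in
the sketch below. It assumes the nodes are exported as plain integers
readable with sysctlbyname(3); treat the exact node types as an assumption
of the example.

    #include <sys/param.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    /* Query one machdep node; returns 0 if it is absent or unreadable. */
    static int
    machdep_flag(const char *name)
    {
        int val = 0;
        size_t len = sizeof(val);

        if (sysctlbyname(name, &val, &len, NULL, 0) == -1)
            return 0;
        return val;
    }

    int
    main(void)
    {
        printf("fxsave in use: %d\n", machdep_flag("machdep.osfxsr"));
        printf("SSE:  %d\n", machdep_flag("machdep.sse"));
        printf("SSE2: %d\n", machdep_flag("machdep.sse2"));
        return 0;
    }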
Remove duplicate prototype for i386_{set,get}_ldt() from sys_machdep.c.
Change i386_iopl() and i386_{set,get}_{ldt,ioperm}() to take a second
argument of "void *" instead of "char *", for consistency with other syscalls.
defined, call addupc_intr() directly from statclock() in the system time case,
using the same P_OWEUPC path if the copyin/copyout fails.
Use this in i386 to remove profiling code from the normal userret() path.
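The mechanism, reduced to a sketch: charge the profiling tick from the
statistics clock interrupt when the user buffer can be written, otherwise
note that a tick is owed and settle the debt on the way back to userland,
keeping the common return path free of profiling work. The names below are
schematic stand-ins, not the real addupc_intr()/P_OWEUPC/userret() code.

    #include <stdbool.h>
    #include <stdint.h>

    /* Schematic process state; the real code uses struct proc and P_OWEUPC. */
    struct sproc {
        unsigned int  flags;
    #define SP_OWEUPC 0x01          /* a profiling tick is still owed */
        uint16_t      prof_buf[1];  /* stands in for the user profile buffer */
        bool          buf_resident; /* pretend-pageable: false => "fault" */
        unsigned int  owed;
    };

    /* Stand-in for the interrupt-safe copyout: fails if the page is
     * "not resident". */
    static bool
    prof_store(struct sproc *p, unsigned int ticks)
    {
        if (!p->buf_resident)
            return false;
        p->prof_buf[0] += ticks;
        return true;
    }

    /* Statistics-clock path: charge the tick now if we can, else note
     * the debt. */
    static void
    prof_tick_intr(struct sproc *p)
    {
        if (!prof_store(p, 1)) {
            p->flags |= SP_OWEUPC;
            p->owed++;
        }
    }

    /* Return-to-user path: runs only when a tick is owed, keeping the
     * common userret() fast path free of profiling code. */
    static void
    prof_settle_userret(struct sproc *p)
    {
        if (p->flags & SP_OWEUPC) {
            p->flags &= ~SP_OWEUPC;
            prof_store(p, p->owed); /* can safely fault/copyout here */
            p->owed = 0;
        }
    }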