Commit Graph

4025 Commits

Author SHA1 Message Date
riastradh
2b7335b2eb alpha: Use load-acquire/store-release.
Omit needless membar in pmap_kenter_pa while here -- caller must
ensure pmap_kenter_pa on one CPU happens before use of the VA on
another CPU anyway, so there is no benefit to a membar here.

ok thorpej@
2020-09-08 21:41:37 +00:00
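
A minimal hedged sketch of the load-acquire/store-release pairing this commit
moves to, using the atomic_load_acquire()/atomic_store_release() helpers from
NetBSD's <sys/atomic.h> (the flag and payload here are illustrative, not from
the commit):

    #include <sys/atomic.h>

    static int shared_data;
    static volatile unsigned int ready;

    static void
    producer(void)
    {
        /* Store the payload, then release-store the flag: the payload
         * store cannot be reordered after the flag becomes visible. */
        shared_data = 42;
        atomic_store_release(&ready, 1);
    }

    static int
    consumer(void)
    {
        /* Acquire-load the flag: later loads cannot be reordered
         * before it, so a set flag guarantees the payload is visible. */
        if (atomic_load_acquire(&ready))
            return shared_data;
        return -1;
    }
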
thorpej
e446c2af54 Track the SSIR per-cpu, rather than globally. 2020-09-05 18:01:42 +00:00
thorpej
2871f8c08b - Document all of the various interrupt levels in the Processor Status
  register, and provide symbolic names for them as well.
- Use ALPHA_PSL_IPL_* values directly for IPL_*.
2020-09-05 16:29:07 +00:00
maya
517bc6b8ea fix typo 2020-09-05 04:11:10 +00:00
thorpej
d685f9904b Update a comment. 2020-09-05 03:47:16 +00:00
thorpej
39676bd9eb Add siisata. 2020-09-05 01:28:18 +00:00
thorpej
5172d67b23 Build GENERIC with debug symbols, not just GENERIC.MP. 2020-09-05 01:02:02 +00:00
thorpej
27064dd14d Remove the RAWHIDE kernel; there is no need to keep it around. 2020-09-05 00:58:59 +00:00
thorpej
59bd389dae Put the MI cpu_data at the beginning of cpu_info so that it is
cache line aligned.
2020-09-04 15:50:09 +00:00
thorpej
f8870e8295 Save a few instructions every time we manipulate pcb::pcb_onfault. 2020-09-04 04:09:52 +00:00
thorpej
a82352701e Use SysValue to store curlwp rather than curcpu.  curlwp is accessed
much more frequently, and this makes curlwp preemption-safe.
2020-09-04 03:53:12 +00:00
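
A hedged sketch of what curlwp-in-SysValue looks like from C; alpha_pal_rdval()
and alpha_pal_wrval() are NetBSD's wrappers for the OSF/1 PALcode calls, while
the macro and helper shapes here are illustrative rather than the committed
code:

    #include <machine/alpha_cpu.h>

    /* SysValue is a per-CPU register maintained by PALcode, so reading
     * curlwp is a single PAL call: unlike a multi-instruction
     * curcpu()->ci_curlwp sequence, it cannot be torn by preemption. */
    #define curlwp ((struct lwp *)alpha_pal_rdval())

    static inline void
    set_curlwp(struct lwp *l)    /* hypothetical helper */
    {
        alpha_pal_wrval((unsigned long)l);
    }
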
thorpej
65c9e4ee5f Include <sys/lwp.h> 2020-09-04 03:41:49 +00:00
thorpej
fd31bbf714 Fix a typo. 2020-09-04 03:36:44 +00:00
thorpej
bc1ecb4578 Missed one in last change. 2020-09-04 02:59:44 +00:00
thorpej
22b67e22b2 Garbage-collect GET_CPUINFO; it's no longer used. 2020-09-04 02:58:18 +00:00
thorpej
8c5e0feccc - Make the GET_CURLWP actually return curlwp, not &curlwp.
- exception_return(): Use GET_CURLWP directly, rather than a dance
  around GET_CPUINFO.
- Introduce SET_CURLWP(), to set the curlwp value.
- Garbage-collect GET_FPCURLWP.
2020-09-04 02:54:56 +00:00
thorpej
f4c65a306a Shuffle fields in cpu_info for better cache behavior.
XXX More changes to come after curlwp is overhauled.
2020-09-04 01:57:29 +00:00
thorpej
68cc89a0e2 Decorate some symbols with the appropriate attributes for better cache
behavior.  Assert that cpu_infos are cache line aligned.
2020-09-04 01:56:29 +00:00
thorpej
5f3d40cbf2 Define COHERENCY_UNIT and CACHE_LINE_SIZE as 64, which is the primary cache
line size on EV6 / EV7.  This is also the default MI fallback definition,
but now we're not relying on that value.
2020-09-03 22:56:11 +00:00
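
The definitions and a typical use, sketched (the struct is illustrative;
__cacheline_aligned is NetBSD's alignment attribute):

    /* EV6/EV7 primary cache line size, in bytes. */
    #define COHERENCY_UNIT  64
    #define CACHE_LINE_SIZE 64

    /* Keep hot per-CPU counters on their own 64-byte line so two CPUs
     * never false-share it. */
    struct example_counters {
        unsigned long ec_hits;
        unsigned long ec_misses;
    } __cacheline_aligned;
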
thorpej
c93f173033 Garbage-collect the SWITCH_CONTEXT macro, since it now expands to
just "call_pal PAL_OSF1_swpctx".
2020-09-03 15:38:17 +00:00
thorpej
8c51ba45e8 Garbage-collect fpcurlwp -- it has been obsolete since FPU tracking
was converted over to PCU.
2020-09-03 14:27:47 +00:00
thorpej
7794783b37 Garbage-collect curpcb / cpu_info::ci_curpcb. 2020-09-03 04:20:54 +00:00
thorpej
41e9b45267 The only remaining consumer of curpcb was the PROM mapping code, for the
case where PROM console routines are being used (only on KN8AE).  We have
access to the same information via curlwp, so use that, and eliminate the
need to set cpu_info::ci_curpcb when context switching, which saves an
extra call into PALcode.
2020-09-03 04:18:30 +00:00
thorpej
139cbc3f89 Clean up all of the _PMAP_MAY_USE_PROM_CONSOLE crapola, centralizing the
logic in prom.c, and rename it to _PROM_MAY_USE_PROM_CONSOLE in the few places
it's still needed.
2020-09-03 02:09:09 +00:00
thorpej
3112101988 - Remove redundant memory barriers.  For the ones that remain,
  use the membar_ops(3) names to make it clear how they pair up (even
  though most of them expand to the MB instruction anyway).
2020-09-03 02:05:03 +00:00
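
A hedged sketch of the kind of pairing the membar_ops(3) names make explicit
(the pointer-publication pattern is illustrative; on alpha both barriers do
expand to MB, as the commit notes):

    #include <sys/atomic.h>

    struct item { int i_datum; };
    static struct item *global_item;

    static void
    publish(struct item *it)
    {
        it->i_datum = 1;
        membar_producer();    /* payload visible before the pointer */
        global_item = it;
    }

    static int
    observe(void)
    {
        struct item *it = global_item;
        /* Alpha needs this even for dependent loads. */
        membar_consumer();
        return (it != NULL) ? it->i_datum : -1;
    }
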
thorpej
45ff3d1f21 - alpha_ipi_process(): Continue processing IPIs until the ipimask
  reads 0.  Issue a memory barrier between the atomic swap and performing
  the work.
- alpha_send_ipi(): Issue a memory barrier before setting the ipimask
  to ensure all memory accesses prior to signalling the IPI have
  completed.  Also issue a memory barrier between setting the ipimask
  and calling PALcode to write the IPIR.
2020-09-03 02:03:14 +00:00
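
A hedged sketch of the receive loop and send path just described;
atomic_swap_ulong(), atomic_or_ulong(), the membar_*() calls, and
alpha_pal_wripir() are real NetBSD primitives, but the field and helper
names around them are illustrative:

    #include <sys/atomic.h>
    #include <machine/alpha_cpu.h>

    static void
    recv_ipis(struct cpu_info *ci)
    {
        unsigned long pending;

        /* Keep draining until the pending mask swaps to zero. */
        while ((pending = atomic_swap_ulong(&ci->ci_ipimask, 0)) != 0) {
            membar_sync();            /* order the swap before the work */
            ipi_do_work(ci, pending); /* hypothetical dispatcher */
        }
    }

    static void
    send_ipi(struct cpu_info *ci, unsigned long mask)
    {
        membar_producer();            /* prior stores visible first */
        atomic_or_ulong(&ci->ci_ipimask, mask);
        membar_sync();                /* mask visible before the doorbell */
        alpha_pal_wripir(ci->ci_cpuid);
    }
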
riastradh
4037d9e4d9 Nix trailing whitespace. 2020-09-02 17:40:23 +00:00
thorpej
9f195b2aba - compare_{le,lt}(): Use float64_{le,lt}_quiet() to avoid raising
  exceptions on QNaNs.
- alpha_fp_interpret(): Instructions are 32 bits wide, so don't use a
  uint64_t to contain them.
- alpha_fp_complete(): Operations on NaNs trap on Alpha, but the exception
  summary reports INV (invalid operation) rather than SWC (software
  completion) in this case.  So also interpret the instruction if INV
  is set in the exception summary.  This will emulate operations on
  NaN and correctly suppress FP traps for QNaNs.

This fixes bin/55633, which was caused by:

  -> Input string "nanotime" is passed to awk's internal is_number().
  -> strtod() interprets as "nan" and returns QNaN as the result.
  -> Result compared against HUGE_VAL, blows up because cmptle is called
     with a NaN operand, and the hardware doesn't care that it's quiet.
2020-09-01 08:22:36 +00:00
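
The quiet-comparison half of the fix, as a hedged sketch; float64_le_quiet()
and float64_lt_quiet() are standard softfloat predicates, and the wrapper
shape is illustrative:

    #include "softfloat.h"    /* in-kernel softfloat; path illustrative */

    /* The signaling predicates float64_le()/float64_lt() raise the
     * invalid-operation exception on any NaN; the _quiet forms raise
     * it only for SNaNs, so a QNaN operand simply compares false. */
    static int
    compare_le(float64 a, float64 b)
    {
        return float64_le_quiet(a, b);
    }

    static int
    compare_lt(float64 a, float64 b)
    {
        return float64_lt_quiet(a, b);
    }
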
thorpej
f73bd5175e When initializing the PROM interface, check to see if we're running
inside Qemu by consulting the system serial number, and quickly abort
calls into the PROM if we are.

This is a temporary measure until I can figure out why calling into
the Qemu PROM interface blows up.
2020-08-30 16:26:56 +00:00
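
A hedged sketch of the detection this describes; hwrpb and the rpb_ssn field
are assumptions about the HWRPB layout, not confirmed details of the commit:

    #include <lib/libkern/libkern.h>    /* strncmp */

    static int prom_is_qemu;            /* hypothetical flag */

    /* Qemu's SRM emulation reports a recognizable system serial number;
     * if we see it, short-circuit calls into the PROM. */
    static void
    qemu_check(void)
    {
        if (strncmp(hwrpb->rpb_ssn, "QEMU", 4) == 0)
            prom_is_qemu = 1;
    }
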
thorpej
a63d213ab6 G/C GET_IDLE_PCB -- it hasn't been used for some time. 2020-08-29 22:50:27 +00:00
thorpej
b0669daa38 Bump UBC_WINSHIFT to 16 (64KB), and UBC_NWINS to 4096 (256MB total).
Alpha has plenty of KVA to use for this.
2020-08-29 20:08:08 +00:00
thorpej
13c5b655f2 - Centralize per-CPU pmap initialization into a new pmap_init_cpu()
  function.  Call it from pmap_bootstrap() for the boot CPU, and
  from cpu_hatch() for secondary CPUs.
- Eliminate the dedicated I-stream memory barrier IPI; handle it all from
  the TLB shootdown IPI.  Const poison, and add some additional memory
  barriers and a TBIA to the PAUSE IPI.
- Completely rewrite TLB management in the alpha pmap module, borrowing
  some ideas from the x86 pmap and adapting them to the alpha environment.
  See the comments for theory of operation.  Add a bunch of stats that
  can be reported (disabled by default).
- Add some additional symbol decorations to improve cache behavior on
  MP systems.  Ensure coherency unit alignment for several structures
  in the pmap module.  Use hashed locks for pmap structures.
- Start out all new processes on the kernel page tables until their
  first trip through pmap_activate() to avoid the potential of polluting
  the current ASN in TLB with cross-process mappings.
2020-08-29 20:06:59 +00:00
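
Of the list above, the hashed-locks idea in a hedged sketch (pool size and
hash function are illustrative, not the committed code):

    #include <sys/mutex.h>

    #define PMAP_NLOCKS 64    /* illustrative; power of two */
    static kmutex_t pmap_locks[PMAP_NLOCKS] __cacheline_aligned;

    /* Hash a pmap pointer onto a fixed pool of locks: bounded lock
     * memory on MP systems, at the cost of unrelated pmaps sometimes
     * contending on the same lock. */
    static inline kmutex_t *
    pmap_lock_hash(struct pmap *pm)
    {
        return &pmap_locks[((uintptr_t)pm >> 6) & (PMAP_NLOCKS - 1)];
    }
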
thorpej
7e4c09e726 - cpu_need_resched(): Explicitly cover each RESCHED_* case, and add a
  comment explaining why we don't need to act on IDLE+REMOTE.
- cpu_signotify(): Move to machdep.c, and if we're asked to notify
  an LWP running on another CPU, send an AST IPI to that CPU.  Add some
  assertions.
- cpu_need_proftick(): Move to machdep.c, add some assertions.
2020-08-29 19:06:32 +00:00
thorpej
78fcb07de0 Enable DIAGNOSTIC by default in -current. Should be commented out
in release branches.  Add commented-out LOCKDEBUG option.
2020-08-29 16:00:36 +00:00
thorpej
4ddf659796 In ipl2psl_table[], use IPL_SCHED instead of IPL_CLOCK (the legacy name),
and add a comment noting that this is the level IPIs come in with on
alpha.
2020-08-29 15:29:30 +00:00
thorpej
b8e5e342e7 Make sure init_prom_interface() only runs once, otherwise we initialize
a mutex twice, and that upsets LOCKDEBUG.  But instead of seeing a
proper message about it, because the output happens before the PROM console
interface is initialized, we would end up with a translation fault
back into SRM.
2020-08-29 15:16:12 +00:00
thorpej
44d0e3f040 - Track the currently-activated pmap in struct cpu_info.
- Reserve some space in struct cpu_info for future pmap changes.
2020-08-17 00:57:37 +00:00
thorpej
13ce49a3d0 - Undo part of rev 1.264; go back to not acquiring the pmap lock in
  pmap_activate().  As of rev 1.211, the pmap::pm_lev1map field is
  stable across the life of the pmap, and so the condition that
  the change in 1.264 was intended to avoid would not have happened
  anyway.
- Explicitly use __cacheline_aligned / COHERENCY_UNIT rather than 64
  in a couple of places.
- Update comments around the lev1map lifecycle, and add some assertions
  to enforce the assumptions being described.
- Remove some dubious DEBUG tests that are not MP-safe.
- Change some long-form #ifdef DIAGNOSTIC checks / panics to KASSERTs.
- Remove the PMAP_ACTIVATE() macro because it's no longer used anywhere
  except for pmap_activate().  Just open-code the equivalent there.
- In pmap_activate(), only perform the SWPCTX if either the PTBR or the
  ASN are different than what the PCB already has.  Also assert that
  preemption is disabled and that the specified lwp is curlwp.
- In pmap_deactivate(), add similar assertions, and add a comment explaining
  why a SWPCTX to get off of the deactivated lev1map is not necessary.
- Refactor some duplicated code in pmap_growkernel() into a new
  pmap_kptpage_alloc() function.
- In pmap_growkernel(), assert that any user pmap published on the all-pmaps
  list does not reference the kernel_lev1map.
- In pmap_asn_alloc(), get out early if we're called with the kernel pmap,
  since all kernel mappings are ASM.  Remove bogus assertions around the
  value of pmap::pm_lev1map and the current ASN, and simply assert that
  pmap::pm_lev1map is never kernel_lev1map.  Also assert that preemption
  is disabled, since we're manipulating per-cpu data structures.
- Convert the "too much uptime" panic to a simple KASSERT, and update the
  comment to reflect that we're only subject to the longer 75 billion year
  ASN generation overflow (because CPUs that don't implement ASNs never go
  through this code path).
2020-08-16 20:04:36 +00:00
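
The SWPCTX-avoidance described in the pmap_activate() item above, as a hedged
sketch (the PCB field names approximate NetBSD's alpha layout; new_ptbr and
new_asn stand in for values computed from the pmap):

    struct pcb *pcb = lwp_getpcb(l);

    KASSERT(kpreempt_disabled());
    KASSERT(l == curlwp);

    /* Pay for the PALcode context switch only if the page-table base
     * or the ASN actually changed. */
    if (pcb->pcb_hw.apcb_ptbr != new_ptbr ||
        pcb->pcb_hw.apcb_asn != new_asn) {
        pcb->pcb_hw.apcb_ptbr = new_ptbr;
        pcb->pcb_hw.apcb_asn = new_asn;
        (void)alpha_pal_swpctx((unsigned long)l->l_md.md_pcbpaddr);
    }
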
thorpej
1c09ed02a1 In cpu_lwp_fork(), make sure that the PTBR field in l2's HWPCB references
the lev1map associated with l2's pmap.  Otherwise, the first time we
SWPCTX to l2, we'll be on l1's page tables until the first pmap_activate()
call for l2.
2020-08-16 18:05:52 +00:00
jdolecek
dd45d45423 Make the COMPAT_LINUX option disabled by default.
Leave the option enabled only in the amd64/i386 ALL kernels, to make
sure it continues to be compilable when included in a kernel.
2020-08-16 10:27:47 +00:00
thorpej
f61f342c10 Convert some #ifdef DIAGNOSTIC checks to KASSERTs. NFCI. 2020-08-15 16:09:07 +00:00
fcambus
a7806f4de1 Use CPU_IS_PRIMARY macro on alpha and sparc64. 2020-08-07 14:20:08 +00:00
maxv
b84521f2f3 Remove references to BRIDGE_IPF, it is now compiled in by default. 2020-08-01 08:20:47 +00:00
skrll
c18d55d431 unifdef -U_LKM 2020-07-23 19:22:13 +00:00
thorpej
13dd0fa419 Sort op_mskqh, op_insqh, and op_extqh. 2020-07-21 13:37:18 +00:00
maxv
ca08b3e761 Make copystr() a MI C function, part of libkern and shared on all
architectures.

Notes:

 - On alpha and ia64 the function is kept but gets renamed locally to avoid
   symbol collision. This is because on these two arches, I am not sure
   whether the ASM callers do not rely on fixed registers, so I prefer to
   keep the ASM body for now.
 - On Vax, only the symbol is removed, because the body is used from other
   functions.
 - On RISC-V, this change fixes a bug: copystr() was just a wrapper around
   strlcpy(), but strlcpy() makes the operation less safe (strlen on the
   source beyond its size).
 - The kASan, kCSan and kMSan wrappers are removed, because now that
   copystr() is in C, the compiler transformations are applied to it,
   without the need for manual wrappers.

Could test on amd64 only, but should be fine.
2020-06-30 16:20:00 +00:00
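
Given the notes above, a plain C copystr() looks roughly like this (the
signature matches the documented kernel interface; the body is a
straightforward rendition, not necessarily the committed one):

    #include <sys/types.h>
    #include <sys/errno.h>

    /* Copy a NUL-terminated string, at most len bytes, between kernel
     * addresses, reporting the bytes copied including the NUL. */
    int
    copystr(const void *kfaddr, void *kdaddr, size_t len, size_t *done)
    {
        const char *src = kfaddr;
        char *dst = kdaddr;
        size_t i;

        for (i = 0; i < len; i++) {
            if ((*dst++ = *src++) == '\0') {
                if (done != NULL)
                    *done = i + 1;
                return 0;
            }
        }
        if (done != NULL)
            *done = i;
        return ENAMETOOLONG;
    }
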
thorpej
4dd7c241c5 Typo: vmem_free -> vmem_xfree 2020-06-17 05:52:13 +00:00
thorpej
1fd968ef01 Switch from an extent map to a vmem arena to manage SGMAP address space. 2020-06-17 04:12:39 +00:00
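
What the vmem-based SGMAP allocation looks like, sketched with hedging;
vmem_create(), vmem_xalloc(), and vmem_xfree() are the vmem(9) entry points,
while the arena name and range variables are illustrative:

    #include <sys/vmem.h>

    /* Arena over the SGMAP DVMA range; page-sized quantum. */
    vmem_t *sgvm = vmem_create("sgmap", dvma_base, dvma_size, PAGE_SIZE,
        NULL, NULL, NULL, 0, VM_SLEEP, IPL_VM);

    vmem_addr_t dva;
    int error = vmem_xalloc(sgvm, size, align, 0, 0,
        VMEM_ADDR_MIN, VMEM_ADDR_MAX, VM_BESTFIT | VM_SLEEP, &dva);
    if (error == 0) {
        /* ... load the SGMAP PTEs, run the DMA ... */
        vmem_xfree(sgvm, dva, size);
    }
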
thorpej
1be67ebe62 #include <sys/extent.h> explicitly. 2020-06-17 03:50:04 +00:00
ad
4b8a875ae2 uvm_availmem(): give it a boolean argument to specify whether a recent
cached value will do, or if the very latest total must be fetched.  It can
be called thousands of times a second and fetching the totals impacts not
only the calling LWP but other CPUs doing unrelated activity in the VM
system.
2020-06-11 19:20:42 +00:00
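
A short usage sketch for the new flag; the sense shown (true meaning a cached
value will do) is how the commit message reads, so treat it as an assumption:

    #include <uvm/uvm_extern.h>

    /* Hot path, called often: a slightly stale free-page count is fine. */
    int freepg_cheap = uvm_availmem(true);

    /* Accuracy matters here: pay for the cross-CPU totals. */
    int freepg_exact = uvm_availmem(false);
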