Commit Graph

70 Commits

Author SHA1 Message Date
maxv
dd1f161320 Don't decrement the number of offline cpus if we fail to shut down one.
ok christos@, via tech-kern@
2015-08-29 12:24:00 +00:00
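
The shape of the fix: check whether the shutdown actually succeeded before touching the bookkeeping. A minimal sketch with hypothetical names (cpu_shutdown_one() and ncpu_offline are illustrative, not the real identifiers):

    int
    cpu_take_offline(struct cpu_info *ci)
    {
            int error;

            error = cpu_shutdown_one(ci);   /* hypothetical helper */
            if (error != 0)
                    return error;           /* failed: leave the count alone */
            ncpu_offline--;                 /* adjust only on success */
            return 0;
    }
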
christos
9e1f6b1ae6 include ioconf.h instead of locally declaring the prototype of the attach
function
2015-08-20 09:45:45 +00:00
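
With config(1)-generated pseudo-devices, the generated ioconf.h already declares the attach function, so the hand-written prototype can go. A sketch for a hypothetical pseudo-device "foo":

    #include "ioconf.h"     /* generated header; declares fooattach() */

    /* No local "void fooattach(void);" prototype needed anymore. */
    void
    fooattach(void)
    {
            /* attach work here */
    }
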
uebayasi
bf97b90378 Mark pseudo attach unused arg with __unused. 2015-08-20 08:27:09 +00:00
uebayasi
fa089ff763 Convert pseudo attach functions to take no arguments, as some functions
(pppattach(), putterattach(), etc.) already do.  This means that pseudo
attach functions will be able to become constructors.
2015-08-18 13:46:20 +00:00
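
Taken together with the __unused change above, the conversion for a hypothetical pseudo-device "foo" looks roughly like this; a function taking no arguments can later be invoked as a plain constructor:

    /* Before: the count argument was ignored and had to be annotated. */
    void
    fooattach(int num __unused)
    {
            /* ... */
    }

    /* After: no argument at all. */
    void
    fooattach(void)
    {
            /* ... */
    }
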
ozaki-r
3cde4cbc35 Pass a correct firmware size (instead of 0) to firmware_free
firmware_free now uses kmem_free(9) instead of free(9),
so we need to pass a correct size to it.
2015-01-07 07:05:48 +00:00
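
The allocate/free pair comes from firmload(9); since firmware_free() now goes through kmem_free(9), the size passed back must match the size originally allocated. A sketch with illustrative driver and image names, error handling mostly elided:

    #include <dev/firmload.h>

    firmware_handle_t fh;
    size_t size;
    void *data;

    if (firmware_open("mydrv", "mydrv.bin", &fh) != 0)
            return EIO;
    size = (size_t)firmware_get_size(fh);
    data = firmware_malloc(size);
    firmware_read(fh, 0, data, size);
    firmware_close(fh);

    /* ... use the image ... */

    firmware_free(data, size);      /* the real size, not 0 */
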
dholland
f9228f4225 Add d_discard to all struct cdevsw instances I could find.
All have been set to "nodiscard"; some should get a real implementation.
2014-07-25 08:10:31 +00:00
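
With designated initializers this is a one-member addition per driver; a sketch for a hypothetical character device "foo", using the standard no-op stubs:

    const struct cdevsw foo_cdevsw = {
            .d_open = foo_open,
            .d_close = foo_close,
            .d_read = foo_read,
            .d_write = foo_write,
            .d_ioctl = foo_ioctl,
            .d_stop = nostop,
            .d_tty = notty,
            .d_poll = nopoll,
            .d_mmap = nommap,
            .d_kqfilter = nokqfilter,
            .d_discard = nodiscard,         /* stub until a real implementation */
            .d_flag = D_OTHER
    };
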
macallan
00c16ffd7f snprintf -> vsnprintf in cpu_setmodel()
now this can actually work
hi christos
2014-03-25 12:50:53 +00:00
christos
2788907516 - create cpu_{g,s}etmodel() and hide cpu_model from direct access. 2014-03-24 20:07:40 +00:00
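
cpu_setmodel() takes printf-style arguments, which is why the earlier snprintf() call could never work: a va_list has to be formatted with vsnprintf(). Something along these lines (a sketch of the pair, not the verbatim source; the buffer size is an assumption):

    static char cpu_model[128];     /* no longer directly accessible */

    void
    cpu_setmodel(const char *fmt, ...)
    {
            va_list ap;

            va_start(ap, fmt);
            vsnprintf(cpu_model, sizeof(cpu_model), fmt, ap);
            va_end(ap);
    }

    const char *
    cpu_getmodel(void)
    {
            return cpu_model;
    }
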
dholland
a68f9396b6 Change (mostly mechanically) every cdevsw/bdevsw I can find to use
designated initializers.

I have not built every extant kernel so I have probably broken at
least one build; however I've also found and fixed some wrong
cdevsw/bdevsw entries, so even if so I think we come out ahead.
2014-03-16 05:20:22 +00:00
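
The fragility being removed: positional initializers silently mis-assign members when struct cdevsw grows (as it did when d_discard was added later). The old form, for the same hypothetical "foo" driver as in the designated-initializer sketch further up:

    /* Before: order-dependent; a new struct member shifts everything. */
    const struct cdevsw foo_cdevsw = {
            foo_open, foo_close, foo_read, foo_write, foo_ioctl,
            nostop, notty, nopoll, nommap, nokqfilter, D_OTHER
    };
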
mlelstv
757ba59472 cpu_infos is a NULL terminated array, not an array followed by a 0 byte. 2013-12-19 23:36:07 +00:00
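
The distinction matters for the allocation size: a NULL-terminated pointer array needs room for one extra pointer, not one extra byte. Roughly (the allocation site and "maxcpus" are illustrative):

    /* maxcpus entries plus the terminating NULL pointer */
    cpu_infos = kmem_zalloc(sizeof(cpu_infos[0]) * (maxcpus + 1), KM_SLEEP);
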
rmind
df64447ca6 Remove cpu_queue (and thus eliminate another use of CIRCLEQ) by replacing
its uses with cpu_infos array.  Extra testing by christos@.
2013-11-24 21:58:38 +00:00
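
With the CIRCLEQ gone, walking the CPUs becomes a scan of the NULL-terminated cpu_infos array; roughly:

    struct cpu_info *ci;
    int i;

    for (i = 0; (ci = cpu_infos[i]) != NULL; i++) {
            /* replaces CIRCLEQ_FOREACH over the old cpu_queue */
            /* per-CPU work here */
    }
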
drochner
69aeb16c07 - extend the pcu(9) API by a function which saves all context on the
  current CPU, and use it if a CPU is taken offline
- add a bool argument to pcu_discard which tells whether the internal
  "LWP has used the coprocessor" flag should be set or reset. The flag
  is reported by pcu_used_p(). If set, future accesses should use the
  state stored in the PCB; if reset, the state reverts to the default.
  The former case is useful for setmcontext().
  With that, it should no longer be necessary to manage the "FPU used"
  state in an additional MD variable.

approved by matt
2013-08-22 19:50:54 +00:00
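
A hedged sketch of how the two additions fit together; the exact signatures of this era are assumptions, and "fpu_ops" stands in for a real MD pcu_ops_t:

    /* Taking a CPU offline: save all coprocessor state held on this CPU. */
    pcu_save_all_on_cpu();

    /*
     * setmcontext(): the PCB now holds valid FPU state, so set the
     * "used" flag; future accesses restore from the PCB instead of
     * starting from the default state.
     */
    pcu_discard(fpu_ops, true);
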
drochner
035939be53 put binary compatibility support for the old AMD-only CPU microcode
update API inside COMPAT_60
2012-10-17 20:19:55 +00:00
matt
584846fa01 Add a kcpuset_t which just includes ourself.
Add a ci_cpuname for convenience
2012-09-01 00:24:43 +00:00
drochner
312c339026 Extend the CPU microcode update framework to support Intel x86 CPUs.
Unlike the AMD implementation, it doesn't use xcalls to distribute
the update to all CPUs but relies on cpuctl(8) to bind itself to the
right CPU -- to keep it simple and avoid possible problems with
hyperthreading.
Also, it doesn't parse the vendor supplied file to pick the right
part for the present CPU model but relies on userland to prepare
files with specific filenames. I'll commit a pkg for this in a minute
(pkgsrc/sysutils/intel-microcode).
The ioctl interface changed; compatibility is provided (should be
limited to COMPAT_NETBSD6 as soon as this is available).
2012-08-29 17:13:21 +00:00
joerg
110cea35a1 Kill conditionals that are always true. Drop a dead assignment. 2012-06-13 23:00:05 +00:00
rmind
f76667381c - Add mi_cpu_init() and initialise cpu_lock and kcpuset_attached/running there.
- Add kcpuset_running which gets set in idle_loop().
- Use kcpuset_running in pserialize_perform().
2012-01-29 22:55:40 +00:00
cegger
a02b2c29fa fix secmodel implementation of CPU_UCODE.
ok wiz@ for the manpages
ok elad@
2012-01-17 10:47:26 +00:00
cegger
a3f6c06746 Support CPU microcode loading via cpuctl(8).
Implemented and enabled via CPU_UCODE kernel config option
for x86 and Xen Dom0.
Tested on different AMD machines with different
CPU families.

ok wiz@ for the manpages
ok releng@
ok core@ via releng@
2012-01-13 16:05:14 +00:00
jym
f34c1ce282 Fix comment. 2011-10-29 11:41:32 +00:00
jdc
f8dbae1d18 Add a cs_hwid field to cpustate and use this to store the ci_cpuid (hardware
ID).  Report this as the HwID in cpuctl.
OK jruoho@.
2011-09-11 14:54:49 +00:00
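
On the kernel side this is a one-line copy into the ioctl payload; sketched from the IOC_CPU_GETSTATE path:

    cs->cs_hwid = ci->ci_cpuid;     /* hardware ID, printed by cpuctl as HwID */
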
rmind
e71c0035e7 - Add an argument to kcpuset_create() for zeroing.
- Add kcpuset_atomic_set(), kcpuset_atomic_clear() and kcpuset_merge().
2011-08-07 21:38:32 +00:00
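
Typical usage of the extended interface (a sketch; "set" and "dst" are illustrative):

    kcpuset_t *set;

    kcpuset_create(&set, true);             /* true: return the set zeroed */
    kcpuset_atomic_set(set, cpu_index(ci)); /* mark this CPU atomically */
    kcpuset_merge(dst, set);                /* OR the bits of set into dst */
    kcpuset_destroy(set);
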
rmind
501dd321fb Remove the LW_AFFINITY flag and fix some bugs in affinity mask handling. 2011-08-07 21:13:05 +00:00
rmind
52b220e91d Add kcpuset(9) - a reworked dynamic CPU set implementation for the kernel.
Suitable for use during early boot.  MD and other implementations
should be replaced with this interface.

Discussed on: tech-kern@
2011-08-07 13:33:01 +00:00
matt
2699f92cdb Add the new ci to cpu_infos *before* calling routines which may want
to call cpu_lookup().
2011-06-29 06:22:21 +00:00
rmind
f132c365c0 Sprinkle __cacheline_aligned and __read_mostly. 2011-05-13 22:16:43 +00:00
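
The two annotations place data by access pattern; illustrative declarations, assuming the usual <sys/cdefs.h> definitions:

    /* written from many CPUs: give it its own cache line, avoiding
       false sharing with neighbouring data */
    static kmutex_t hot_lock __cacheline_aligned;

    /* written once at boot, read constantly afterwards: grouped with
       other read-mostly data so it stays clean in every CPU's cache */
    static u_int nitems __read_mostly;
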
matt
86c8e7039d Add a CTASSERT to verify __HAVE_CPU_DATA_FIRST is correctly defined or undefined. 2010-12-22 02:43:23 +00:00
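
The check costs nothing at runtime; it would read along these lines, assuming ci_data is the member the option describes:

    #ifdef __HAVE_CPU_DATA_FIRST
    CTASSERT(offsetof(struct cpu_info, ci_data) == 0);
    #else
    CTASSERT(offsetof(struct cpu_info, ci_data) != 0);
    #endif
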
ad
a0f75dc2db Allocate the cpu_infos array dynamically. 2010-04-25 15:57:59 +00:00
mrg
efc854cf68 introduce a new function that returns a unique string for each cpu:
char *cpu_name(struct cpu_info *);

and use it when setting up the runq event counters, avoiding an 8 byte
kmem(9) allocation for each cpu.  there are more places the cpuname is
used that can be converted to using this new interface, but that can
and will be done as future work.

as discussed with rmind.
2010-01-13 01:57:17 +00:00
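
With a stable name string per CPU, each event counter can reference it instead of allocating its own copy; roughly (the counter field is illustrative):

    /* group name comes from cpu_name(); no per-counter kmem allocation */
    evcnt_attach_dynamic(&spc->spc_ev_pull, EVCNT_TYPE_MISC, NULL,
        cpu_name(ci), "runqueue pull");
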
ad
4d8f47ae2f cpuctl:
- Add interrupt shielding (direct hardware interrupts away from the
  specified CPUs). Not documented just yet but will be soon.

- Redo /dev/cpu time_t compat so no kernel changes are needed.

x86:

- Make intr_establish, intr_disestablish safe to use when !cold.

- Distribute hardware interrupts among the CPUs, instead of directing
  everything to the boot CPU.

- Add MD code for interrupt shielding. This works in most cases but there is
  a bug where delivery is not accepted by an LAPIC after redistribution. It
  also needs re-balancing to make things fair after interrupts are turned
  back on for a CPU.
2009-04-19 14:11:36 +00:00
njoly
97781244ee Clear error value on exit for IOC_CPU_OGETSTATE ioctl command. 2009-01-19 23:04:26 +00:00
christos
d610baec20 provide compat_50 2009-01-19 17:39:02 +00:00
ad
7ab182873b Add cpu_softintr_p() for assertions 2008-12-07 11:40:53 +00:00
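
Intended for context assertions, e.g.:

    /* this path must never be entered from a soft interrupt handler */
    KASSERT(!cpu_softintr_p());
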
rmind
9d3a4ed2de cpuctl_ioctl: use cpu_index() instead of cpuid.
Fixes cpuctl(8) on some processors.
2008-11-06 16:48:51 +00:00
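
cpu_index() yields the dense MI index (0 .. ncpu-1), which is also what cpu_lookup() takes; ci_cpuid is an MD hardware ID and may be sparse. A sketch:

    u_int idx = cpu_index(ci);      /* MI index, dense from 0 */

    KASSERT(cpu_lookup(idx) == ci); /* round-trips, unlike ci_cpuid */
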
rmind
8f1873ea3b - Avoid the race with CPU online/offline state changes when setting the
  affinity (cpu_lock protects these operations now).
- Disallow setting the state of a CPU to offline if there are bound LWPs
  which have no CPU to migrate to.
- Disallow setting affinity for an LWP if all CPUs in the dynamic
  CPU-set are offline.
- sched_setaffinity: fix an invalid check of kcpuset_isset().
- Rename cpu_setonline() to cpu_setstate().

Should fix PR/39349.
2008-10-31 00:36:22 +00:00
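
After the rename, a state change looks roughly like this, with cpu_lock taken to serialise against concurrent affinity and state operations (a sketch):

    int error;

    mutex_enter(&cpu_lock);
    error = cpu_setstate(ci, false);        /* false = offline, true = online */
    mutex_exit(&cpu_lock);
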
ad
1ec58d56ef - Rename cpu_lookup_byindex() to cpu_lookup(). The hardware ID isn't of
interest to MI code. No functional change.
- Change /dev/cpu to operate on cpu index, not hardware ID. Now cpuctl
  shouldn't print confused output.
2008-10-15 08:13:17 +00:00
yamt
52bfe81965 cpu_xc_offline: fix races with eg. sleepq_remove. 2008-08-28 06:18:26 +00:00
rmind
7c330ba82f Fix locking against oneself; migrate LWPs only from the runqueue.
Part of the fix for PR/38882.
2008-07-14 01:27:15 +00:00
ad
bce675d015 When offlining a CPU, ensure that at least one other CPU within the same
processor set remains online, otherwise the system can deadlock.
2008-06-22 13:59:06 +00:00
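
The guard amounts to scanning for another online CPU in the same processor set before permitting the offline; a sketch using the standard CPU iterators (scheduler field names assumed from struct schedstate_percpu):

    CPU_INFO_ITERATOR cii;
    struct cpu_info *ci2;
    bool found = false;

    for (CPU_INFO_FOREACH(cii, ci2)) {
            if (ci2 == ci)
                    continue;
            if ((ci2->ci_schedstate.spc_flags & SPCF_OFFLINE) != 0)
                    continue;
            if (ci2->ci_schedstate.spc_psid == ci->ci_schedstate.spc_psid) {
                    found = true;   /* another online CPU in the same pset */
                    break;
            }
    }
    if (!found)
            return EBUSY;           /* offlining would deadlock the pset */
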
ad
cbbf514e2c - vm_page: put listq, pageq into a union alongside a LIST_ENTRY, so we can
use both types of list.

- Make page coloring and idle zero state per-CPU.

- Maintain per-CPU page freelists. When freeing, put pages onto the local
  CPU's lists and the global lists. When allocating, prefer to take pages
  from the local CPU. If none are available take from the global list as
  done now. Proposed on tech-kern@.
2008-06-04 12:45:28 +00:00
rmind
29170d3854 Simplification of running-LWP migration. Removes double-locking in
mi_switch(); migration for LSONPROC is now performed via the idle loop.
Handles/fixes the on-CPU case in lwp_migrate(), plus miscellaneous cleanup.

Closes PR/38169, idea of migration via idle loop by Andrew Doran.
2008-05-29 22:33:27 +00:00
ad
a4e0004be3 LOCKDEBUG: try to speed it up a bit by not using so much global state.
This will break the build briefly but will be followed by another commit
to fix that. 2008-05-06 18:40:57 +00:00
2008-05-06 18:40:57 +00:00
martin
ce099b4099 Remove clause 3 and 4 from TNF licenses 2008-04-28 20:22:51 +00:00
ad
6d70f903e6 Network protocol interrupts can now block on locks, so merge the globals
proclist_mutex and proclist_lock into a single adaptive mutex (proc_lock).
Implications:

- Inspecting process state requires thread context, so signals can no longer
  be sent from a hardware interrupt handler. Signal activity must be
  deferred to a soft interrupt or kthread.

- As the proc state locking is simplified, it's now safe to take exit()
  and wait() out from under kernel_lock.

- The system spends less time at IPL_SCHED, and there is less lock activity.
2008-04-24 15:35:27 +00:00
ad
ecebc8b473 Implement MP callouts as discussed on tech-kern. The CPU binding code is
disabled for the moment until we figure out what we want to do with CPUs
being offlined.
2008-04-22 11:45:28 +00:00
ad
b60416c0e2 Move the LW_BOUND flag into the thread-private flag word. It can be tested
by other threads/CPUs but that is only done when the LWP is known to be in a
quiescent state (for example, on a run queue).
2008-04-12 17:16:09 +00:00
ad
06e0894e76 Take the run queue management code from the M2 scheduler, and make it
mandatory. Remove the 4BSD run queue code. Effects:

- Pluggable scheduler is only responsible for co-ordinating timeshared jobs.
- All systems run with per-CPU run queues.
- 4BSD scheduler gets processor sets / affinity.
- 4BSD scheduler gets a significant performance boost on some workloads.

Discussed on tech-kern@.
2008-04-12 17:02:08 +00:00
ad
3f5f5fa2a4 Maintain a circular queue of cpu_info's. 2008-04-11 15:31:34 +00:00
ad
1e11b07bfa Restructure the name cache code to eliminate most lock contention
resulting from forward lookups. Discussed on tech-kern@.
2008-04-11 15:25:24 +00:00
ad
40379c8716 Commit the "per-CPU" select patch. This is the result of much work and
testing by rmind@ and myself.

Which approach to use is still being discussed, but I would like to get
this out of my working tree. If we decide to use a different approach
there is no problem with revisiting this.
2008-03-22 18:04:42 +00:00