Commit Graph

265646 Commits

christos
dcaaa9c384 get rid of binutils 2.27-specific files. 2019-02-14 20:42:40 +00:00
christos
88b70b62ed add breaks for done() since it might not be __dead. 2019-02-14 20:19:51 +00:00
palle
97adc61850 sun4v: add debug printout for ALIGN trap 2019-02-14 20:09:40 +00:00
christos
ccbb6255be PR/53981: Jonathan Perkins: history_list should null-terminate 2019-02-14 20:09:12 +00:00
christos
df2505e193 done is not always done (it returns, it is not dead) 2019-02-14 17:08:54 +00:00
prlw1
eb6ad6b651 libpthread isn't used 2019-02-14 14:40:07 +00:00
maxv
27a60aeb62 Harmonize the handling of the CPL between AMD and Intel.
AMD has a separate guest CPL field, because on AMD, the SYSCALL/SYSRET
instructions do not force SS.DPL to predefined values. On Intel they do,
so the CPL on Intel is just the guest's SS.DPL value.

Even though technically possible on AMD, there is no sane reason for a
guest kernel to set a non-three SS.DPL, doing that would mess up several
common segmentation practices and wouldn't be compatible with Intel.

So, force the Intel behavior on AMD, by always setting SS.DPL<=>CPL.
Remove the now unused CPL field from nvmm_x64_state::misc[]. This actually
increases performance on AMD: to detect interrupt windows the virtualizer
has to modify some fields of misc[], and because CPL was there, we had to
flush the SEG set of the VMCB cache. Now there is no flush necessary.

While here remove the CPL check for XSETBV on Intel, contrary to AMD
Intel checks the CPL before the intercept, so if we receive an XSETBV
VMEXIT, we are certain that it was executed at CPL=0 in the guest. By the
way my check was wrong in the first place, it was reading SS.RPL instead
of SS.DPL.
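
The rule this commit enforces (the guest CPL is just SS.DPL) can be sketched in a few lines, assuming the Intel VMX access-rights layout, where DPL occupies bits 5-6. The macro and function names are invented for illustration and are not NVMM's actual identifiers:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch only: in the Intel VMX segment access-rights format, the DPL
 * field is bits 5-6. Names below are illustrative, not NVMM's.
 */
#define SEG_ATTR_DPL_SHIFT	5
#define SEG_ATTR_DPL_MASK	(3u << SEG_ATTR_DPL_SHIFT)

/* Intel behavior, now forced on AMD too: derive the CPL from SS.DPL. */
static inline uint8_t
guest_cpl(uint32_t ss_attrib)
{
	return (ss_attrib & SEG_ATTR_DPL_MASK) >> SEG_ATTR_DPL_SHIFT;
}
```

With this convention there is no separate CPL field to store, which is what lets the commit drop CPL from nvmm_x64_state::misc[]. Note the final paragraph's point: the DPL must be read from SS.DPL, not SS.RPL.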
2019-02-14 14:30:20 +00:00
kre
2ec3f71485 DEBUG mode only change. When pretty-printing a word from a parse
tree, don't display a CTLESC which is there only to protect a CTL*
char (a data char that happens to have the same value).  No actual
CTL* chars are printed as data, so no escaping is needed to protect
data which just happens to look the same.  Dropping this avoids the
possibility of confusion/ambiguity in what the word actually contains.

NFC for any normal shell build (very little of this file gets compiled there)
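
The pretty-printing rule can be sketched as follows. This is a hedged model, not sh's actual code: the CTL* byte values and function names here are invented, and real sh uses its own constants. The idea is that CTLESC only needs to survive when it protects a data byte that collides with a CTL* value, and since real CTL* bytes are never printed as data, the escape can be dropped from debug output:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative values only; sh's real CTL* constants differ. */
#define CTLESC    0x81
#define CTL_FIRST 0x80
#define CTL_LAST  0x88

static int
is_ctl_byte(unsigned char c)
{
	return c >= CTL_FIRST && c <= CTL_LAST;
}

/*
 * Copy a word into out for pretty-printing, dropping a CTLESC that is
 * there only to protect a data byte that looks like a CTL* char.
 */
static size_t
pretty_word(const unsigned char *word, size_t len, unsigned char *out)
{
	size_t i, n = 0;

	for (i = 0; i < len; i++) {
		if (word[i] == CTLESC && i + 1 < len &&
		    is_ctl_byte(word[i + 1])) {
			i++;			/* drop the escape ... */
			out[n++] = word[i];	/* ... keep the data byte */
			continue;
		}
		out[n++] = word[i];
	}
	return n;
}
```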
2019-02-14 13:27:59 +00:00
wiz
721279f428 Sort, and add a couple obsolete files for binutils=231. 2019-02-14 12:49:28 +00:00
mrg
d40c522149 remove the hack to remove .eh_frame -- gcc7 is fixed it seems. 2019-02-14 12:22:06 +00:00
kre
727a664bee Add the "specialvar" built-in command. Discussed (well, mentioned
anyway) on tech-userlevel with no adverse response.

This allows the magic of vars like HOSTNAME SECONDS, ToD (etc) to be
restored should it be lost - perhaps by having a var of the same name
imported from the environment (which needs to remove the magic in case
a set of scripts are using the env to pass data, and the var name chosen
happens to be one of our magic ones).

No change to SMALL shells (or smaller) - none of the magic vars (except
LINENO, which is exempt from all of this) exist in those, hence such a
shell has no need for this command either.
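
A transcript-style sketch of how the new built-in might be used (pseudocode, not runnable outside this sh; the exact argument syntax of `specialvar` is assumed here):

```sh
# A magic var imported from the environment loses its special behavior,
# e.g. when a set of scripts passes data through the env under that name:
HOSTNAME=data-from-some-script

# Restore the magic for the named variable(s):
specialvar HOSTNAME
```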
2019-02-14 11:15:24 +00:00
mrg
2358f4548e implement return_one for hppa, mips, ppc64, and vax. 2019-02-14 10:36:33 +00:00
mrg
5b04a93ad5 put joerg's varasm.c patch back with additional upstream fixes. now
crtbegin.o has a read-only .eh_frame, and libstdc++ builds.


2017-09-01  Joerg Sonnenberger  <joerg@bec.de>
            Jeff Law  <law@redhat.com>

        * varasm.c (bss_initializer_p): Do not put constants into .bss
        (categorize_decl_for_section): Handle bss_initializer_p returning
        false when DECL_INITIAL is NULL.

2017-11-27  Jakub Jelinek  <jakub@redhat.com>

        PR target/83100
        * varasm.c (bss_initializer_p): Return true for DECL_COMMON
        TREE_READONLY decls.

2018-02-09  Jakub Jelinek  <jakub@redhat.com>

        PR middle-end/84237
        * output.h (bss_initializer_p): Add NAMED argument, defaulted to false.
        * varasm.c (bss_initializer_p): Add NAMED argument, if true, ignore
        TREE_READONLY bit.
        (get_variable_section): For decls in named .bss* sections pass true as
        second argument to bss_initializer_p.
2019-02-14 10:29:58 +00:00
maxv
72977a18e9 On AMD, the segments have a simple "present" bit. On Intel however there
is an extra "unusable" bit, which has a twisted meaning. We can't just
ignore this bit, because when unset, the CPU performs extra checks on the
other attributes, which may cause VMENTRY to fail and the guest to be
killed.

Typically, on Qemu, some guests like Windows XP trigger two consecutive
getstate+setstate calls, and while processing them, we end up wrongfully
removing the "unusable" bits that were previously set.

Fix that by forcing "unusable = !present". Each hypervisor I could check
does something different, but this seems to be the least problematic
solution for now.

While here, the fields of vmx_guest_segs are VMX indexes, so they should
be uint64_t (no functional change).
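
The fix's invariant ("unusable = !present") is simple enough to sketch. This is a hedged model assuming the Intel VMX access-rights layout, where Present is bit 7 and Unusable is bit 16; the identifiers are invented for the example and are not NVMM's actual names:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative names; bit positions follow the VMX access-rights format. */
#define SEG_ATTR_P        (1u << 7)	/* segment Present */
#define SEG_ATTR_UNUSABLE (1u << 16)	/* Intel-only "unusable" bit */

/* Enforce the commit's invariant: unusable = !present. */
static uint32_t
seg_sync_unusable(uint32_t attrib)
{
	if (attrib & SEG_ATTR_P)
		attrib &= ~SEG_ATTR_UNUSABLE;
	else
		attrib |= SEG_ATTR_UNUSABLE;
	return attrib;
}
```

Re-deriving the bit this way on setstate means a getstate+setstate round trip (as in the Windows XP case above) can no longer strip an "unusable" bit the CPU expects to see.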
2019-02-14 09:37:31 +00:00
cherry
1e3d30b0e6 Welcome XENPVHVM mode.
It is UP only, has xbd(4) and xennet(4) as PV drivers.

The console is com0 at isa and the native portion is a very
rudimentary AT architecture, so it is probably suboptimal to
run without PV support.
2019-02-14 08:18:25 +00:00
cherry
8644947bb7 Fix NLAPIC, NISA and NIOAPIC related conditional compile errors.
This will allow us to now compile an amd64 kernel without PCI.

No functional changes.
2019-02-14 07:12:40 +00:00
cherry
f3ad8a34c4 Snag the final bits of PV only code to conditionally compile under
-DXENPV

This completes the bifurcation.

The next step is to add -DXENPVHVM code.
2019-02-14 06:59:24 +00:00
kamil
c26a89103c Replace signal2 in t_ptrace_wait* with new tests
Add new tests traceme_raisesignal_masked[1-8].

New tests to verify that masking (with SIG_BLOCK) a signal in the tracee
stops the tracer from catching this raised signal. Masked crash signals
are invisible to the tracer as well.

All tests pass.
2019-02-14 06:47:32 +00:00
kamil
0c1b4f39e5 Add new regression scenarios for crash signals in t_ptrace_wait*
Verify correct behavior of crash signals (SIGTRAP, SIGBUS, SIGILL, SIGFPE,
SIGSEGV) in existing test scenarios:
 - traceme_raise
 - traceme_sendsignal_handle
 - traceme_sendsignal_masked
 - traceme_sendsignal_ignored
 - traceme_sendsignal_simple
 - traceme_vfork_raise

These tests verify signals out of the context of CPU trap. These new tests
will help to retain expected behavior in future changes in semantics of
the trapsignals in the kernel.
2019-02-14 05:38:45 +00:00
msaitoh
05d6458099 Sort in alphabetical order a bit. 2019-02-14 04:34:37 +00:00
msaitoh
f16620c10b Add KSZ8081 support from FreeBSD. 2019-02-14 04:13:40 +00:00
nonaka
55c94440ba separate RNDIS definitions from urndis(4) for use with Hyper-V NetVSC. 2019-02-14 03:33:55 +00:00
msaitoh
c0bdba551a Regen. 2019-02-14 03:26:10 +00:00
msaitoh
a0b033f79e Add Tundra (now IDT) TSI381 and PEB383 from OpenBSD. 2019-02-14 03:25:47 +00:00
kre
ec83c7c484 Delete a no-longer-used #define that referred to a struct field that
no longer exists.   Also correct a couple of typos in comments.    NFC.
2019-02-13 21:40:50 +00:00
mrg
26711b697b while we're still figuring out the gcc7 vs .eh_frame issue, apply
the "don't remove .eh_frame" hack to mips as well.  hpcmips testbed
is also failing currently:

[   3.1238738] panic: init died (signal 6, exit 12)
2019-02-13 20:48:56 +00:00
jakllsch
661f966e99 sun50i_h6_ccu: add PCIe clocks 2019-02-13 18:31:11 +00:00
jakllsch
6376e18f06 sun50i_h6_ccu: add "pll_cpux"
Currently intended for display of existing clock rate via the sysctl
tree, and not yet for DVFS.
2019-02-13 18:18:38 +00:00
kamil
e807f4b65a Silence UB alignment issues in acpica under kUBSan
Pass -DACPI_MISALIGNMENT_NOT_SUPPORTED with kUBSan enabled. This option
is dedicated to alignment-sensitive CPUs in acpica. It was originally
designed for Itanium CPUs, but nowadays it's wanted for aarch64 as well.

Define it in acpica code under kUBSan in order to pacify Undefined Behavior
reports on all ports (in particular x86). The number of reports is now
halved with this patch applied. The remaining alignment alarms in acpica
will be addressed in the future.

Patch contributed by <Akul Pillai>
2019-02-13 18:04:35 +00:00
kamil
075cfd7e0e Fix kUBSan build with GCC7
Add missing __unreachable() and FALLTHROUGH keywords.

Reported by <Akul Pillai>
2019-02-13 17:17:02 +00:00
maxv
9c3a39c8a5 Note Intel support. 2019-02-13 16:06:28 +00:00
maxv
8567964145 Add Intel-VMX support in NVMM. This allows us to run hardware-accelerated
VMs on Intel CPUs. Overall this implementation is fast and reliable, I am
able to run NetBSD VMs with many VCPUs on a quad-core Intel i5.

NVMM-Intel applies several optimizations already present in NVMM-AMD, and
has a code structure similar to it. No change was needed in the NVMM MI
frontend, or in libnvmm.

Some differences exist against AMD:

 - On Intel the ASID space is big, so we don't fall back to a shared ASID
   when there are more VCPUs executing than available ASIDs in the host,
   contrary to AMD. There are enough ASIDs for the maximum number of VCPUs
   supported by NVMM.

 - On Intel there are two TLBs we need to take care of, one for the host
   (EPT) and one for the guest (VPID). Changes in EPT paging flush the
   host TLB, changes to the guest mode flush the guest TLB.

 - On Intel there is no easy way to set/fetch the VTPR, so we intercept
   reads/writes to CR8 and maintain a software TPR, which we give to the
   virtualizer as if it were the effective TPR in the guest.

 - On Intel, because of SVS, the host CR4 and LSTAR are not static, so
   we're forced to save them on each VMENTRY.

 - There is extra Intel weirdness we need to take care of, for example the
   reserved bits in CR0 and CR4 when accesses trap.

While this implementation is functional and can already run many OSes, we
likely have a problem on 32bit-PAE guests, because they require special
care on Intel CPUs, and currently we don't handle that correctly; such
guests may misbehave for now (without altering the host stability). I
expect to fix that soon.
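
The software-TPR point above can be sketched concisely. This is a hedged model, not NVMM's code: the struct and function names are invented, and only the CR8 semantics (the low 4 bits hold the task priority) are taken as given. On a CR8 access VMEXIT, the shadow value is updated or reported instead of touching VTPR directly:

```c
#include <assert.h>
#include <stdint.h>

/* Invented structure: a shadow of the guest's task priority. */
struct soft_tpr {
	uint8_t tpr;
};

/* Guest executed "mov %reg, %cr8": record the new priority. */
static void
cr8_write_exit(struct soft_tpr *s, uint64_t gpr)
{
	s->tpr = (uint8_t)(gpr & 0xf);	/* TPR lives in CR8 bits 3:0 */
}

/* Guest executed "mov %cr8, %reg": hand back the shadow value. */
static uint64_t
cr8_read_exit(const struct soft_tpr *s)
{
	return s->tpr;
}
```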
2019-02-13 16:03:16 +00:00
kamil
0709d444f2 Align the kASan message style with kUBSan
Print messages with an initial 'ASan', similarly to kUBSan printing 'UBSan'.
2019-02-13 14:55:29 +00:00
wiz
aa6b736126 Bump date for previous. 2019-02-13 11:40:41 +00:00
maxv
5f0aeb6deb Drop support for software interrupts. I had initially added that to cover
the three event types available on AMD, but Intel has seven of them, all
with weird and twisted meanings, and they require extra parameters.

Software interrupts should not be used anyway.
2019-02-13 10:55:13 +00:00
cherry
d0de4cfc64 Conditionally compile a conditionally used variable. 2019-02-13 09:57:46 +00:00
rin
d7e5ad524a Fix DIAGNOSTIC build; replace FreeBSD-specific function with ours. 2019-02-13 08:46:40 +00:00
msaitoh
0d3fe29069 Add ICS1893C support from FreeBSD. 2019-02-13 08:42:26 +00:00
msaitoh
c19dafd12f Add CS8204, CS8244, VSC8211 and VSC8601 support from {Free,Open}BSD. 2019-02-13 08:41:43 +00:00
msaitoh
ff8b2613f5 regen. 2019-02-13 08:40:14 +00:00
msaitoh
b9e0ae2c61 Change CS8244's OUI from xxCICADA to CICADA. I don't know whether this
change is correct or not...
2019-02-13 08:39:55 +00:00
maxv
d25b7653a7 Add the EPT pmap code, used by Intel-VMX.
The idea is that under NVMM, we don't want to implement the hypervisor page
tables manually in NVMM directly, because we want pageable guests; that is,
we want to allow UVM to unmap guest pages when the host comes under
pressure.

Contrary to AMD-SVM, Intel-VMX uses a different set of PTE bits from
native, and this has three important consequences:

 - We can't use the native PTE bits, so each time we want to modify the
   page tables, we need to know whether we're dealing with a native pmap
   or an EPT pmap. This is accomplished with callbacks, that handle
   everything PTE-related.

 - There is no recursive slot possible, so we can't use pmap_map_ptes().
   Rather, we walk down the EPT trees via the direct map, and that's
   actually a lot simpler (and probably faster too...).

 - The kernel is never mapped in an EPT pmap. An EPT pmap cannot be loaded
   on the host. This has two sub-consequences: at creation time we must
   zero out all of the top-level PTEs, and at destruction time we force
   the page out of the pool cache and into the pool, to ensure that a next
   allocation will invoke pmap_pdp_ctor() to create a native pmap and not
   recycle some stale EPT entries.

To create an EPT pmap, the caller must invoke pmap_ept_transform() on a
newly-allocated native pmap. And that's about it, from then on the EPT
callbacks will be invoked, and the pmap can be destroyed via the usual
pmap_destroy(). The TLB shootdown callback is not initialized however,
it is the responsibility of the hypervisor (NVMM) to set it.

There are some twisted cases that we need to handle. For example if
pmap_is_referenced() is called on a physical page that is entered both by
a native pmap and by an EPT pmap, we take the Accessed bits from the
two pmaps using different PTE sets in each case, and combine them into a
generic PP_ATTRS_U flag (that does not depend on the pmap type).

Given that the EPT layout is a 4-Level tree with the same address space as
native x86_64, we allow ourselves to use a few native macros in EPT, such
as pmap_pa2pte(), rather than re-defining them with "ept" in the name.

Even though this EPT code is rather complex, it is not too intrusive: just
a few callbacks in a few pmap functions, predicted-false to give priority
to native. So this comes with no messy #ifdef or performance cost.
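
The callback idea from the first bullet can be sketched as a small function-pointer table. This is a hedged model: the struct and function names are invented, and only the bit semantics are taken as given (native x86 marks a PTE valid with PG_V in bit 0, while an EPT entry is live when any of its read/write/execute bits 0-2 are set):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t pt_entry_t;

/* Per-pmap PTE operations; real pmap code has more hooks than this. */
struct pte_ops {
	bool (*pte_valid)(pt_entry_t);
};

static bool
native_pte_valid(pt_entry_t pte)
{
	return (pte & 0x1) != 0;	/* PG_V */
}

static bool
ept_pte_valid(pt_entry_t pte)
{
	return (pte & 0x7) != 0;	/* EPT R|W|X */
}

static const struct pte_ops native_ops = { native_pte_valid };
static const struct pte_ops ept_ops = { ept_pte_valid };
```

Callers never test PTE bits directly; they go through the pmap's ops table, which is how pmap_ept_transform() can switch a native pmap over to EPT behavior without touching the generic code paths.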
2019-02-13 08:38:25 +00:00
gson
7f592895ee Bump pmax install ramdisk size by another 100k, as 3500k is no longer
enough with GCC 7.
2019-02-13 07:55:33 +00:00
maxv
af1f1361ca Micro optimization: the STAR/LSTAR/CSTAR/SFMASK MSRs are static, so rather
than saving them on each VMENTRY, save them only once, at VCPU creation
time.
2019-02-13 07:04:12 +00:00
cherry
14037d51a0 Further restrict the scope of XENPV to relevant parts. 2019-02-13 06:52:43 +00:00
maxv
43f97eae48 Reorder the GPRs to match the CPU encoding, simplifies things on Intel. 2019-02-13 06:32:45 +00:00
cherry
c4e6273b58 Catchup with struct intrstub; unification.
This should fix dom0 build breakage.
2019-02-13 06:15:51 +00:00
cherry
a141ce0848 Rig the hypercall callback page such that when the kernel happens to
run without a XEN domain loader having previously overwritten the
hypercall page with its hypercall trampoline machine code, we still
get to detect its presence by calling the xen_version hypercall stub.

We use this hack to detect the presence or absence of the hypervisor,
without relying on the MSR support on HVM domains.

This works as an added sanity check that the hypercall page
registration has indeed succeeded in HVM mode.
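
The detection trick can be modeled in a few lines. This is an invented sketch, not the kernel's code: the kernel pre-fills the hypercall page with stubs that return a sentinel, so if no Xen domain builder has overwritten the page with real trampolines, calling the xen_version stub gives the sentinel back and the hypervisor's absence is detected:

```c
#include <assert.h>

#define HYPERCALL_SENTINEL	(-1)	/* invented sentinel value */

typedef int (*hypercall_fn)(void);

/* What our pre-filled page returns when nothing overwrote it. */
static int
stub_not_overwritten(void)
{
	return HYPERCALL_SENTINEL;
}

/* A stand-in for a real trampoline; the value is illustrative. */
static int
xen_version_real(void)
{
	return 0x00040011;
}

/* Hypervisor is present iff the stub no longer returns the sentinel. */
static int
xen_detected(hypercall_fn xen_version)
{
	return xen_version() != HYPERCALL_SENTINEL;
}
```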
2019-02-13 05:36:59 +00:00
cherry
471cf8eaf8 Missed the crucial header file in previous commit.
struct intrstub; is now uniform across native and XEN

This should fix the XEN builds.
2019-02-13 05:28:50 +00:00
cherry
19888fd484 In preparation for debuting PVHVM mode:
 - Make the struct intrstub uniform across native and XEN.
 - Introduce vector callback entrypoints for PVHVM mode.
2019-02-13 05:01:57 +00:00