In apparently rare cases, the (REPE) CMPS instruction can trigger a memory
assist. NVMM wouldn't recognize the instruction, so it couldn't assist, and
Qemu would abort.
Miscellaneous changes in NVMM, to address several inconsistencies and
issues in the libnvmm API:
- Add 'mach' and 'vcpu' backpointers in the nvmm_io and nvmm_mem
structures.
- Rename 'nvmm_callbacks' to 'nvmm_assist_callbacks'.
- Rename and migrate NVMM_MACH_CONF_CALLBACKS to NVMM_VCPU_CONF_CALLBACKS;
it now becomes per-VCPU.
- Rename NVMM_CAPABILITY_VERSION to NVMM_KERN_VERSION, and check it in
libnvmm. Introduce NVMM_USER_VERSION, for future use.
- In libnvmm, open "/dev/nvmm" as read-only and with O_CLOEXEC. This is to
avoid sharing the VMs with the children if the process forks. In the
NVMM driver, force O_CLOEXEC on open().
- Rename the following things for consistency:
nvmm_exit* -> nvmm_vcpu_exit*
nvmm_event* -> nvmm_vcpu_event*
NVMM_EXIT_* -> NVMM_VCPU_EXIT_*
NVMM_EVENT_INTERRUPT_HW -> NVMM_VCPU_EVENT_INTR
NVMM_EVENT_EXCEPTION -> NVMM_VCPU_EVENT_EXCP
Delete NVMM_EVENT_INTERRUPT_SW, which was already unused.
- Slightly reorganize the MI/MD definitions, for internal clarity.
- Split NVMM_VCPU_EXIT_MSR in two: NVMM_VCPU_EXIT_{RD,WR}MSR. Also provide
separate u.rdmsr and u.wrmsr fields. This is more consistent with the
other exit reasons.
- Change the types of several variables:
event.type: enum -> u_int
event.vector: uint64_t -> uint8_t
exit.u.*msr.msr: uint64_t -> uint32_t
exit.u.io.type: enum -> bool
exit.u.io.seg: int -> int8_t
cap.arch.mxcsr_mask: uint64_t -> uint32_t
cap.arch.conf_cpuid_maxops: uint64_t -> uint32_t
- Delete NVMM_VCPU_EXIT_MWAIT_COND: it is AMD-only and confusing, and we
already intercept 'monitor' so it is never armed.
- Introduce vmx_exit_insn() for NVMM-Intel, similar to svm_exit_insn().
The 'npc' field wasn't getting filled properly during certain VMEXITs.
- Introduce nvmm_vcpu_configure(). Similar to nvmm_machine_configure(),
but as its name indicates, the configuration is per-VCPU and not per-VM.
Migrate and rename NVMM_MACH_CONF_X86_CPUID to NVMM_VCPU_CONF_CPUID.
This becomes per-VCPU, which makes more sense than per-VM.
- Extend the NVMM_VCPU_CONF_CPUID conf to allow triggering VMEXITs on
specific leaves. Until now we could only mask the leaves. A uint32_t
is added in the structure:
uint32_t mask:1;
uint32_t exit:1;
uint32_t rsvd:30;
The first two bits select the desired behavior on the leaf. Specifying
zero on both resets the leaf to the default behavior. The new
NVMM_VCPU_EXIT_CPUID exit reason is added.
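For illustration, here is a minimal usage sketch. The 'leaf' field name
and the exact nvmm_vcpu_configure() prototype are assumptions; check
nvmm.h for the real layout.

	#include <string.h>
	#include <nvmm.h>

	/* Hedged sketch: ask for a VMEXIT each time the guest executes
	 * CPUID on the given leaf. */
	static int
	cpuid_exit_on_leaf(struct nvmm_machine *mach, struct nvmm_vcpu *vcpu,
	    uint32_t leaf)
	{
		struct nvmm_vcpu_conf_cpuid cpuid;

		memset(&cpuid, 0, sizeof(cpuid));
		cpuid.leaf = leaf;	/* assumed field name */
		cpuid.exit = 1;		/* mask=0,exit=0 would reset the
					 * leaf to the default behavior */

		return nvmm_vcpu_configure(mach, vcpu, NVMM_VCPU_CONF_CPUID,
		    &cpuid);
	}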
In 16-bit addressing, the address size is 16 bits, regardless of the
actual operating mode, and a special ModRM map applies: it can reference
two registers at once, and also disp16-only.
Implement this special behavior, and add associated tests. While here
simplify a few things.
With this in place, the Windows 95 installer initializes correctly.
Part of PR/54611.
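For reference, a sketch of that 16-bit map, which is standard x86
encoding (the register identifiers are illustrative):

	enum reg16 { R_NONE = -1, R_BX, R_SI, R_DI, R_BP };

	/* 16-bit ModRM r/m encodings: up to two registers, plus an
	 * optional displacement (r/m=6 with mod=00 is disp16-only). */
	static const struct {
		enum reg16 base, index;
	} modrm16[8] = {
		{ R_BX, R_SI },		/* 0: [BX+SI] */
		{ R_BX, R_DI },		/* 1: [BX+DI] */
		{ R_BP, R_SI },		/* 2: [BP+SI] */
		{ R_BP, R_DI },		/* 3: [BP+DI] */
		{ R_SI, R_NONE },	/* 4: [SI] */
		{ R_DI, R_NONE },	/* 5: [DI] */
		{ R_BP, R_NONE },	/* 6: [BP], or disp16-only if mod=00 */
		{ R_BX, R_NONE },	/* 7: [BX] */
	};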
Provide three ranges in the conf space: <libnvmm:0-100>, <MI:100-200> and
<MD:200-...>. Remove nvmm_callbacks_register(), and replace it by the conf
op NVMM_MACH_CONF_CALLBACKS, handled by libnvmm. The callbacks are now
per-machine, and the emulators should now do:
- nvmm_callbacks_register(&cbs);
+ nvmm_machine_configure(&mach, NVMM_MACH_CONF_CALLBACKS, &cbs);
This provides more granularity, for example if the process runs two VMs
and wants different callbacks for each.
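A sketch of how the three ranges might be expressed (the macro names here
are assumptions, not necessarily the real ones):

	/* Conf space partitioning (illustrative names). */
	#define NVMM_MACH_CONF_LIBNVMM_BEGIN	0	/* libnvmm: 0..99 */
	#define NVMM_MACH_CONF_MI_BEGIN		100	/* MI: 100..199 */
	#define NVMM_MACH_CONF_MD_BEGIN		200	/* MD: 200.. */

	/* The callbacks op, handled by libnvmm, sits in the first range. */
	#define NVMM_MACH_CONF_CALLBACKS  (NVMM_MACH_CONF_LIBNVMM_BEGIN + 0)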
Introduce a bidirectional "comm page", a page of memory shared between
the kernel and userland, and used to transfer data in and out in a more
efficient manner than ioctls.
The comm page contains the VCPU state, plus three flags:
- "wanted": the states the kernel must get/set when requested via ioctls
- "cached": the states that are in the comm page
- "commit": the states the kernel must set in vcpu_run
The idea is to avoid performing expensive syscalls, by using the VCPU
state cached, either explicitly or speculatively, in the comm page. For
example, if the state is cached we do a direct 1->5 with no syscall:
+---------------------------------------------+
| Qemu |
+---------------------------------------------+
| ^
| (0) nvmm_vcpu_getstate | (6) Done
| |
V |
+---------------------------------------+
| libnvmm |
+---------------------------------------+
| ^ | ^
(1) State | | (2) No | (3) Ioctl: | (5) Ok, state
cached? | | | "please cache | fetched
| | | the state" |
V | | |
+-----------+ | |
| Comm Page |------+---------------+
+-----------+ |
^ |
(4) "Alright | V
babe" | +--------+
+-----| Kernel |
+--------+
The main changes in behavior are:
- nvmm_vcpu_getstate(): won't emit a syscall if the state is already
cached in the comm page, will just fetch from the comm page directly
- nvmm_vcpu_setstate(): won't emit a syscall at all, will just cache
the wanted state in the comm page
- nvmm_vcpu_run(): will commit the to-be-set state in the comm page,
as previously requested by nvmm_vcpu_setstate()
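A minimal sketch of the getstate fast path described above. The flag,
field and ioctl names used here (comm, state_cached, state_wanted, fd,
cpuid, NVMM_IOC_VCPU_GETSTATE) are assumptions that may differ from the
real code:

	/* Hedged sketch of nvmm_vcpu_getstate()'s fast path. */
	int
	nvmm_vcpu_getstate(struct nvmm_machine *mach, struct nvmm_vcpu *vcpu,
	    uint64_t flags)
	{
		struct nvmm_comm_page *comm = vcpu->comm;

		if ((comm->state_cached & flags) == flags) {
			/* Direct 1->5: everything is already in the
			 * comm page, no syscall. */
			return 0;
		}

		/* Step 3: ask the kernel to cache the missing states
		 * in the comm page. */
		comm->state_wanted = flags & ~comm->state_cached;
		if (ioctl(mach->fd, NVMM_IOC_VCPU_GETSTATE, &vcpu->cpuid) == -1)
			return -1;

		/* Step 5: the caller now reads comm->state directly. */
		comm->state_cached |= flags;
		return 0;
	}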
In addition to this, the kernel NVMM driver is changed to speculatively
cache certain states known to be of interest, so that the future
nvmm_vcpu_getstate() calls made by libnvmm or the emulator will use
the comm page rather than expensive syscalls. For example, if an I/O
VMEXIT occurs, the I/O Assist in libnvmm will want GPRS+SEGS+CRS+MSRS,
and now the kernel caches all of that in the comm page before returning
to userland.
Overall, in a normal run of Windows 10, this saves several million
syscalls. E.g. on a 4-CPU Intel host with 4 VCPUs, booting the Win10
install ISO goes from taking 1min35 to taking 1min16.
The libnvmm API is not changed, but the ABI is. If we changed the API, it
would be possible to also save expensive memcpys on libnvmm's side; this
is left for a future version. The comm page can also be extended to
implement future services.
- Compress x86_rexpref, x86_regmodrm, x86_opcode and x86_instr.
- Cache-align the register, opcode and group tables.
- Modify the opcode tables to have 256 entries, and avoid a lookup (see
the sketch after this list).
- Reorder the segment state to match the CPU encoding. This is the
universal order, also used by Qemu. Drop the seg_to_nvmm[] tables.
- Compress the segment state. This divides its size by two.
- Rename some of its fields, to better match the x86 spec. Also, take S
out of Type; that was a NetBSD-ism that was likely confusing to other
people.
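The 256-entry change simply means indexing directly by the opcode byte
instead of scanning a sparse table; a sketch with an assumed x86_opcode
type:

	#include <stdint.h>

	struct x86_opcode;	/* as defined in the disassembler */

	/* 256 entries, one per possible opcode byte. */
	extern const struct x86_opcode primary_opcode_map[256];

	static inline const struct x86_opcode *
	lookup_primary(uint8_t opbyte)
	{
		/* Direct index: O(1), no linear search. */
		return &primary_opcode_map[opbyte];
	}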
The disassembler was not fetching the displacement, so the node would
always think there was no displacement.
This didn't alter the final GPA we would be touching - because it is
fetched from the kernel directly and not from the computation -, but it
altered the instruction length, and on some guests (like Fedora 64bit),
the VCPU would resume execution at the wrong RIP and crash.
Now these guests work.
AMD has a separate guest CPL field, because on AMD, the SYSCALL/SYSRET
instructions do not force SS.DPL to predefined values. On Intel they do,
so the CPL on Intel is just the guest's SS.DPL value.
Even though it is technically possible on AMD, there is no sane reason
for a guest kernel to set a non-three SS.DPL: doing that would mess up
several common segmentation practices and wouldn't be compatible with
Intel.
So, force the Intel behavior on AMD, by always setting SS.DPL<=>CPL.
Remove the now unused CPL field from nvmm_x64_state::misc[]. This actually
increases performance on AMD: to detect interrupt windows the virtualizer
has to modify some fields of misc[], and because CPL was there, we had to
flush the SEG set of the VMCB cache. Now there is no flush necessary.
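As a sketch of the idea on the AMD side (the VMCB and state field names
are assumptions based on my reading, not verified against the tree):

	/* Getstate: expose the Intel-style invariant CPL == SS.DPL. */
	state->segs[NVMM_X64_SEG_SS].attrib.dpl = vmcb->state.cpl;

	/* Setstate: derive the VMCB CPL from SS.DPL, rather than keeping
	 * a separate CPL field in misc[]. */
	vmcb->state.cpl = state->segs[NVMM_X64_SEG_SS].attrib.dpl;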
While here, remove the CPL check for XSETBV on Intel: contrary to AMD,
Intel checks the CPL before the intercept, so if we receive an XSETBV
VMEXIT, we are certain that it was executed at CPL=0 in the guest. By the
way, my check was wrong in the first place: it was reading SS.RPL instead
of SS.DPL.
- Emulate the instructions by executing them directly on the host CPU
(a sketch follows below). This is easier and probably faster than doing
it in software manually.
- Decode SUB from Primary, CMP from Group1, TEST from Group3, and add
associated tests.
- Handle correctly the cases where an instruction that always implicitly
reads the register operand is executed with the mem operand as source
(eg: "orq (%rbx),%rax").
- Fix the MMU handling of 32bit-PAE. Under PAE CR3 is not page-aligned,
so there are extra bits that are valid.
With these changes in place I can boot Windows XP on Qemu+NVMM.
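A hedged sketch of the host-execution technique, for a 64-bit SUB. The
real helpers are per-opcode and per-size; also, pushing to the stack in
inline asm assumes a context where this is safe, such as kernel code
built without a red zone:

	#include <stdint.h>

	/* Execute SUB on the host and capture the resulting RFLAGS, to
	 * be merged back into the guest's flags. */
	static uint64_t
	exec_sub64(uint64_t op1, uint64_t op2, uint64_t *rflags)
	{
		uint64_t res = op1, fl;

		__asm__ volatile (
			"subq	%2, %0\n\t"
			"pushfq\n\t"
			"popq	%1"
			: "+r" (res), "=r" (fl)
			: "r" (op2)
			: "cc"
		);

		*rflags = fl;
		return res;
	}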
* Uh I put the wrong masks in some GPRs, fuck.
* When the opsize of MOVZX is 4, we need to combine the zero-extend from
the instruction with the natural zero-extend of long mode.
Add two associated tests.
Take the GPA directly from the exit structure provided by the kernel.
This saves an MMU translation, and sometimes a complex address
computation (eg SIB). Drop the GVA field; it is not useful to
virtualizers.
* Decode AND/OR/XOR from Group1.
* Sign-extend the immediates and displacements in 64bit mode (a short
sketch follows this list).
* Fix the storage of {read,write}_guest_memory: now that we batch certain
IO operations we can copy more than 8 bytes, and shit hits the fan.
* Remove the CR4_PSE check in the 64bit MMU. This bit is actually ignored
in long mode, and some systems (like FreeBSD) don't set it.
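The sign-extension mentioned above is the standard C idiom; a minimal
sketch:

	#include <stdint.h>
	#include <stddef.h>

	/* Sign-extend an imm8/imm16/imm32 to 64 bits. */
	static uint64_t
	sign_extend(uint64_t val, size_t size)
	{
		switch (size) {
		case 1:
			return (uint64_t)(int64_t)(int8_t)val;
		case 2:
			return (uint64_t)(int64_t)(int16_t)val;
		case 4:
			return (uint64_t)(int64_t)(int32_t)val;
		default:
			return val;	/* already 64 bits */
		}
	}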
Kernel driver:
* Don't take an extra (unneeded) reference to the UAO.
* Provide npc for HLT. I'm not really happy with it right now, will
likely be revisited.
* Add the INT_SHADOW, INT_WINDOW_EXIT and NMI_WINDOW_EXIT states. Provide
them in the exitstate too.
* Don't take the TPR into account when processing INTs. The virtualizer
can do that itself (Qemu already does).
* Provide a hypervisor signature in CPUID, and hide SVM.
* Ignore certain MSRs. One special case is MSR_NB_CFG, for which we set
NB_CFG_INITAPICCPUIDLO. Allow reads of MSR_TSC.
* If the LWP has pending signals or softints, leave, rather than waiting
for a rescheduling to happen later. This reduces interrupt processing
time in the guest (Qemu sends a signal to the thread, and now we leave
right away). This could be improved even more by sending an actual IPI
to the CPU, but I'll see later.
Libnvmm:
* Fix the MMU translation of large pages: we need to add the lower bits
too (see the sketch after this list).
* Change the IO and Mem structures to take a pointer rather than a
static array. This provides more flexibility.
* Batch together the str+rep IO transactions. We do one big memory
read/write, and then send the IO commands to the hypervisor all at
once. This considerably increases performance.
* Decode MOVZX.
With these changes in place, Qemu+NVMM works. I can install NetBSD 8.0
in a VM with multiple VCPUs, connect to the network, etc.
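The large-page fix mentioned above amounts to OR'ing the GVA's low bits
into the GPA; a sketch for the 2MB case (masks written out, helper name
invented for illustration):

	#include <stdint.h>

	/* 2MB large page: the PDE provides bits 51..21 of the frame,
	 * and bits 20..0 of the GPA come from the GVA. */
	static uint64_t
	largepage_gpa_2m(uint64_t pde, uint64_t gva)
	{
		return (pde & 0x000FFFFFFFE00000ULL) |
		       (gva & 0x00000000001FFFFFULL);
	}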
On certain AMD CPUs (like mine), an extra instruction fetch is needed in
the I/O Assist: the segment base of OUTS can be overridden, and it is
wrong to just assume DS. We fetch the instruction and look at the
prefixes, if any, to determine the correct segment.
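The legacy segment-override prefixes are standard x86 encoding; a minimal
scan could look like this (identifiers illustrative):

	#include <stdint.h>

	enum seg { SEG_ES, SEG_CS, SEG_SS, SEG_DS, SEG_FS, SEG_GS };

	/* Map a legacy segment-override prefix byte to a segment. */
	static enum seg
	seg_override(uint8_t byte, enum seg defseg)
	{
		switch (byte) {
		case 0x26: return SEG_ES;
		case 0x2E: return SEG_CS;
		case 0x36: return SEG_SS;
		case 0x3E: return SEG_DS;
		case 0x64: return SEG_FS;
		case 0x65: return SEG_GS;
		default:   return defseg;	/* no override */
		}
	}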
* Change the Assist API. Rather than passing callbacks in each call, the
callbacks are now registered beforehand. Then change the I/O Assist to
fetch MMIO data via the Mem callback. This allows a guest to perform an
I/O string operation on memory that is itself MMIO.
* Introduce two new functions internal to libnvmm, read_guest_memory and
write_guest_memory. They can handle mapped memory, MMIO memory and
cross-page transactions (a sketch of the cross-page split follows below).
* Allow nvmm_gva_to_gpa and nvmm_gpa_to_hva to take non-page-aligned
addresses. This simplifies a lot of things.
* Support the MOVS instruction, and add a test for it. This instruction
is special, in that it takes two implicit memory operands. In
particular, it means that the two buffers can both be in MMIO memory,
and we handle this case.
* Fix gross copy-pasto in nvmm_hva_unmap. Also fix a few things here and
there.
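A hedged sketch of the cross-page split in read_guest_memory; the real
function also dispatches to the MMIO path, and the single-page helper
here is invented for illustration:

	#include <stdint.h>
	#include <stddef.h>

	#define PAGE_SIZE	4096

	/* Invented helper: read from one page (mapped or MMIO). */
	static int read_guest_page(struct nvmm_machine *,
	    struct nvmm_vcpu *, uint64_t, uint8_t *, size_t);

	/* Split a guest-memory read at page boundaries and recurse. */
	static int
	read_guest_memory(struct nvmm_machine *mach, struct nvmm_vcpu *vcpu,
	    uint64_t gva, uint8_t *data, size_t size)
	{
		size_t off = gva % PAGE_SIZE, chunk;

		if (off + size > PAGE_SIZE) {
			/* Crosses a page: do the first page, then the
			 * remainder. */
			chunk = PAGE_SIZE - off;
			if (read_guest_memory(mach, vcpu, gva, data,
			    chunk) == -1)
				return -1;
			return read_guest_memory(mach, vcpu, gva + chunk,
			    data + chunk, size - chunk);
		}

		return read_guest_page(mach, vcpu, gva, data, size);
	}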
- Fix the I/O Assist: for INS* it is RDI and not RSI, and the register
gets updated regardless of the REP prefix.
- Fill in the Mem Assist. We decode and emulate certain instructions,
and pass a mem descriptor to the callback to handle the transaction.
The disassembler could use some polishing, and there are still a
few instructions missing; but basically it works.
Add libnvmm, a library that allows software to effortlessly create and
manage virtual machines via NVMM.
It is mostly complete; only nvmm_assist_mem needs to be filled -- I have
a draft for that, but it needs some more care. This Mem Assist should
not be needed when emulating a system in x2apic mode, so theoretically
the current form of libnvmm is sufficient to emulate a whole class of
systems.
Generally speaking, there are so many modes in x86 that it is difficult
to handle each corner case without introducing a ton of checks that just
slow down the common-case execution. Currently we check a limited number
of things; we may add more checks in the future if they turn out to be
needed, but that's rather low priority.
Libnvmm is compiled and installed only on amd64. A man page (reviewed by
wiz@) is provided.