target/i386: Walk NPT in guest real mode

When translating virtual to physical address with a guest CPU that
supports nested paging (NPT), we need to perform every page table walk
access indirectly through the NPT, which we correctly do.

However, we treat real mode (no page table walk) specially: in that
case, we currently skip any walks and translate VA -> PA directly. With
NPT enabled, we then also need to perform an NPT walk to do
GVA -> GPA -> HPA, which we fail to do so far.

The net result of that is that TCG VMs with NPT enabled that execute
real mode code (like SeaBIOS) end up with GPA==HPA mappings, which means
the guest accesses host code and data. This typically shows up as a
failure to boot the guest.

This patch changes the page walk logic for NPT enabled guests so that we
always perform a GVA -> GPA translation and then skip any logic that
requires an actual PTE.

That way, all remaining logic to walk the NPT stays and we successfully
walk the NPT in real mode.

Cc: qemu-stable@nongnu.org
Fixes: fe441054bb ("target-i386: Add NPT support")
Signed-off-by: Alexander Graf <graf@amazon.com>
Reported-by: Eduard Vlad <evlad@amazon.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240921085712.28902-1-graf@amazon.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit b56617bbcb)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
commit c363ebaa19 (parent 6d36cf89d3)
Author: Alexander Graf, 2024-09-21 08:57:12 +00:00
Committed by: Michael Tokarev


@@ -147,6 +147,7 @@ static bool mmu_translate(CPUX86State *env, const TranslateParams *in,
     uint32_t pkr;
     int page_size;
     int error_code;
+    int prot;
 
  restart_all:
     rsvd_mask = ~MAKE_64BIT_MASK(0, env_archcpu(env)->phys_bits);
@@ -295,7 +296,7 @@ static bool mmu_translate(CPUX86State *env, const TranslateParams *in,
         /* combine pde and pte nx, user and rw protections */
         ptep &= pte ^ PG_NX_MASK;
         page_size = 4096;
-    } else {
+    } else if (pg_mode) {
         /*
          * Page table level 2
          */
@@ -340,6 +341,15 @@ static bool mmu_translate(CPUX86State *env, const TranslateParams *in,
         ptep &= pte | PG_NX_MASK;
         page_size = 4096;
         rsvd_mask = 0;
+    } else {
+        /*
+         * No paging (real mode), let's tentatively resolve the address as 1:1
+         * here, but conditionally still perform an NPT walk on it later.
+         */
+        page_size = 0x40000000;
+        paddr = in->addr;
+        prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
+        goto stage2;
     }
 
  do_check_protect:
@@ -355,7 +365,7 @@ do_check_protect_pse36:
         goto do_fault_protect;
     }
 
-    int prot = 0;
+    prot = 0;
     if (!is_mmu_index_smap(in->mmu_idx) || !(ptep & PG_USER_MASK)) {
         prot |= PAGE_READ;
         if ((ptep & PG_RW_MASK) || !(is_user || (pg_mode & PG_MODE_WP))) {
@@ -417,6 +427,7 @@ do_check_protect_pse36:
     /* merge offset within page */
     paddr = (pte & PG_ADDRESS_MASK & ~(page_size - 1)) | (addr & (page_size - 1));
 
+ stage2:
     /*
      * Note that NPT is walked (for both paging structures and final guest
@@ -558,7 +569,7 @@ static bool get_physical_address(CPUX86State *env, vaddr addr,
         addr = (uint32_t)addr;
     }
 
-    if (likely(env->cr[0] & CR0_PG_MASK)) {
+    if (likely(env->cr[0] & CR0_PG_MASK || use_stage2)) {
         in.cr3 = env->cr[3];
         in.mmu_idx = mmu_idx;
         in.ptw_idx = use_stage2 ? MMU_NESTED_IDX : MMU_PHYS_IDX;