Commit Graph

65 Commits

Michael S. Tsirkin
ca82180603 kvm: add API to set ioeventfd
A comment on KVM usage: rather than requiring users to write if (kvm_enabled())
and/or ifdefs, this patch adds an API that, internally, is defined as a
stub function on non-KVM builds and checks kvm_enabled() on non-KVM
runs.

While the rest of the QEMU code still uses if (kvm_enabled()), I think this
approach is cleaner, and we should convert the rest of the code to it in the
long term.
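
A minimal sketch of the pattern, assuming a hypothetical wrapper named
kvm_set_ioeventfd_example() (not the actual function added here) and QEMU's
existing CONFIG_KVM define and kvm_enabled() helper:

    #include <errno.h>
    #include <stdint.h>

    #ifdef CONFIG_KVM
    /* Real implementation, only compiled into KVM-enabled builds. */
    int kvm_set_ioeventfd_example(int fd, uint64_t addr, uint32_t val)
    {
        if (!kvm_enabled()) {
            return -ENOSYS;              /* KVM built in, but not active */
        }
        /* ... issue the KVM_IOEVENTFD ioctl here ... */
        return 0;
    }
    #else
    /* Stub for non-KVM builds: callers need no ifdefs of their own. */
    static inline int kvm_set_ioeventfd_example(int fd, uint64_t addr,
                                                uint32_t val)
    {
        return -ENOSYS;
    }
    #endif

Callers then just check the return value instead of sprinkling
if (kvm_enabled()) around.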

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-04-01 13:56:43 -05:00
Blue Swirl
d745bef890 Move KVM and Xen global flags to vl.c
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-03-29 19:23:49 +00:00
Jan Kiszka
ea375f9ab8 KVM: Rework VCPU state writeback API
This grand cleanup drops all reset and vmsave/load related
synchronization points in favor of four(!) generic hooks:

- cpu_synchronize_all_states in qemu_savevm_state_complete
  (initial sync from kernel before vmsave)
- cpu_synchronize_all_post_init in qemu_loadvm_state
  (writeback after vmload)
- cpu_synchronize_all_post_init in main after machine init
- cpu_synchronize_all_post_reset in qemu_system_reset
  (writeback after system reset)

These writeback points, plus the existing one on VCPU exec after
cpu_synchronize_state, map onto three levels of writeback:

- KVM_PUT_RUNTIME_STATE (during runtime, other VCPUs continue to run)
- KVM_PUT_RESET_STATE   (on synchronous system reset, all VCPUs stopped)
- KVM_PUT_FULL_STATE    (on init or vmload, all VCPUs stopped as well)

This level is passed to the arch-specific VCPU state writing function,
which decides which concrete substates need to be written. That way, no
writer of load, save, or reset functions that interact with in-kernel
KVM state will ever have to worry about synchronization again. That also
means that many sources of races, segfaults, and deadlocks are
eliminated.

cpu_synchronize_state remains untouched, just as Anthony suggested. We
continue to need it before reading or writing of VCPU states that are
also tracked by in-kernel KVM subsystems.

Consequently, this patch removes many cpu_synchronize_state calls that
are now redundant, along with the remaining explicit register syncs.
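
A rough sketch of how an arch-specific writer might consume the level; the
helper names, the vcpu type, and the enum values are placeholders for this
sketch, not QEMU's actual code:

    enum {
        KVM_PUT_RUNTIME_STATE = 1,
        KVM_PUT_RESET_STATE   = 2,
        KVM_PUT_FULL_STATE    = 3,
    };

    /* Higher levels are supersets: runtime state is always written, and
     * reset-only / init-only substates are added on top. */
    static int arch_put_registers_sketch(struct vcpu_sketch *vcpu, int level)
    {
        put_runtime_registers(vcpu);        /* GPRs etc., always needed */

        if (level >= KVM_PUT_RESET_STATE) {
            put_reset_state(vcpu);          /* e.g. interrupt and MP state */
        }
        if (level >= KVM_PUT_FULL_STATE) {
            put_full_state(vcpu);           /* e.g. TSC, one-time MSRs */
        }
        return 0;
    }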

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-03-04 00:29:28 -03:00
Jan Kiszka
b0b1d69079 KVM: Rework of guest debug state writing
So far we synchronized any dirty VCPU state back into the kernel before
updating the guest debug state. This was a tribute to a deficiency in x86
kernels before 2.6.33. But as this is an arch-dependent issue, it is
better handled in the x86 part of KVM, so remove the writeback point from
the generic code. This also avoids overwriting the flushed state later on
if user space decides to change some more registers before resuming the
guest.

We furthermore need to reinject guest exceptions via the appropriate
mechanism: KVM_SET_GUEST_DEBUG for older kernels and KVM_SET_VCPU_EVENTS
for recent ones. Using both mechanisms at the same time will cause state
corruption.
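
A hedged sketch of picking exactly one reinjection path; the helpers and
the has_vcpu_events flag are placeholders, and the commit's real decision
logic may differ:

    /* Use KVM_SET_VCPU_EVENTS where available, otherwise fold the pending
     * exception into the KVM_SET_GUEST_DEBUG call. Never do both. */
    static void reinject_exception_sketch(int vcpu_fd, int has_vcpu_events)
    {
        if (has_vcpu_events) {
            inject_via_vcpu_events(vcpu_fd);   /* KVM_SET_VCPU_EVENTS path */
        } else {
            inject_via_guest_debug(vcpu_fd);   /* KVM_SET_GUEST_DEBUG path */
        }
    }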

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-03-04 00:29:26 -03:00
Marcelo Tosatti
85199474d0 kvm-all.c: define smp_wmb and use it for coalesced mmio
Acked-by: "Michael S. Tsirkin" <mst@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2010-02-22 19:04:13 +02:00
Marcelo Tosatti
6312b92853 kvm: remove pre-entry exit_request check with iothread enabled
With SIG_IPI blocked, vcpu loop exit notification happens via -EAGAIN
from KVM_RUN.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2010-02-22 10:58:33 +02:00
Marcelo Tosatti
cc84de9570 kvm: consume internal signal with sigtimedwait
Change the way the internal qemu signal, used for communication between
iothread and vcpus, is handled.

Block and consume it with sigtimedwait on the outer vcpu loop, which
allows more precise timing control.

Change from a standard signal (SIGUSR1) to a real-time one, so multiple
signals are not collapsed.

Set the signal number on KVM's in-kernel allowed sigmask.
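
A self-contained sketch of the blocking-and-consuming idea with a real-time
signal and sigtimedwait(); the SIGRTMIN offset is an assumption for the
example, not necessarily the signal QEMU picks:

    #include <signal.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        sigset_t set;
        struct timespec ts = { 0, 10 * 1000 * 1000 };  /* 10 ms timeout */
        int sig_ipi = SIGRTMIN + 4;                    /* assumed offset */

        /* Block the signal so it is never delivered asynchronously and
         * can only be consumed explicitly via sigtimedwait(). */
        sigemptyset(&set);
        sigaddset(&set, sig_ipi);
        sigprocmask(SIG_BLOCK, &set, NULL);

        raise(sig_ipi);         /* pretend the iothread poked this vcpu */

        /* Outer vcpu-loop equivalent: consume a pending IPI, with a
         * bounded wait so timing stays under our control. */
        if (sigtimedwait(&set, NULL, &ts) == sig_ipi) {
            printf("IPI consumed, process pending work now\n");
        }
        return 0;
    }

QEMU's vcpu threads would use pthread_sigmask() instead of sigprocmask();
the demo is single-threaded for brevity.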

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2010-02-22 10:58:33 +02:00
Amit Shah
a2eebe88fd kvm: reduce code duplication in config_iothread
We have some duplicated code in the CONFIG_IOTHREAD #ifdef and #else
cases. Fix that.

Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-02-10 12:45:04 -06:00
Michael S. Tsirkin
7b8f3b7834 kvm: move kvm to use memory notifiers
Remove direct KVM calls from exec.c; make KVM use the memory notifiers
framework instead.
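
The rough shape of such a notifier interface; the struct and function names
below are invented for illustration and differ from the framework actually
added to QEMU:

    #include <stdint.h>

    /* Observer-style notifier: exec.c walks the registered clients instead
     * of calling KVM directly. */
    typedef struct MemNotifierSketch MemNotifierSketch;
    struct MemNotifierSketch {
        void (*set_memory)(MemNotifierSketch *n, uint64_t start,
                           uint64_t size, int log_dirty);
        MemNotifierSketch *next;
    };

    static MemNotifierSketch *notifiers;

    static void register_mem_notifier_sketch(MemNotifierSketch *n)
    {
        n->next = notifiers;
        notifiers = n;
    }

    static void notify_set_memory_sketch(uint64_t start, uint64_t size,
                                         int log_dirty)
    {
        for (MemNotifierSketch *n = notifiers; n; n = n->next) {
            n->set_memory(n, start, size, log_dirty);  /* KVM hooks in here */
        }
    }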

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-02-09 16:56:13 -06:00
Michael S. Tsirkin
46dbef6ade kvm: move kvm_set_phys_mem around
Move kvm_set_phys_mem so that it is available earlier in the file.
Needed for the next patch, which uses memory notifiers.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-02-09 16:56:13 -06:00
Jan Kiszka
9ded274466 KVM: Move and rename regs_modified
Touching the user space representation of KVM's VCPU state is -
naturally - a per-VCPU thing. So move the dirty flag into KVM_CPU_COMMON
and, while at it, rename it to reflect its true meaning.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
2010-02-03 19:47:34 -02:00
Sheng Yang
62a2744ca0 kvm: Flush coalesced MMIO buffer periodically
The default behaviour of coalesced MMIO is to cache writes in a buffer until:
1. The buffer is full, or
2. The VCPU exits to QEMU for some other reason.

But this can result in very late writes under certain conditions:
1. Each MMIO write is small.
2. The interval between writes is long.
3. There is no frequent need for input or access to other devices.

This issue was observed on an experimental embedded system. The test image
simply prints "test" every second. The output under QEMU meets expectations,
but the output under KVM is delayed by seconds.

Per Avi's suggestion, I hooked flushing of the coalesced MMIO buffer into the
VGA update handler. This way, we don't need an explicit VCPU exit to QEMU to
handle this issue.
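
A sketch of the hook, assuming a flush helper along the lines of
kvm_flush_coalesced_mmio_buffer() and QEMU's kvm_enabled() check (both
assumed to be in scope; the surrounding handler is illustrative):

    /* Display-update handler: drain buffered MMIO writes before repainting,
     * so framebuffer updates are not seconds late. */
    static void vga_update_display_sketch(void *opaque)
    {
        if (kvm_enabled()) {
            kvm_flush_coalesced_mmio_buffer();  /* replay buffered writes */
        }
        /* ... read the now up-to-date framebuffer and redraw ... */
    }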

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-02-03 19:47:33 -02:00
Jan Kiszka
a0fb002c64 kvm: x86: Add support for VCPU event states
This patch extends the qemu-kvm state sync logic with support for
KVM_GET/SET_VCPU_EVENTS, giving access to the so-far-missing exception,
interrupt, and NMI states.
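
At the ioctl level this boils down to roughly the following; a minimal sketch
against the kernel UAPI, with error handling trimmed and vcpu_fd assumed to be
an open VCPU file descriptor:

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    /* Read, tweak, and write back pending exception/interrupt/NMI state. */
    static int sync_vcpu_events_sketch(int vcpu_fd)
    {
        struct kvm_vcpu_events events;

        if (ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, &events) < 0) {
            return -1;
        }
        /* ... mirror fields such as events.nmi.pending into the CPU state,
         * or update them from it before writing back ... */
        return ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &events);
    }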

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-12-03 15:25:57 -06:00
Kevin Wolf
40ff6d7e8d Don't leak file descriptors
We're leaking file descriptors to child processes. Set FD_CLOEXEC on file
descriptors that don't need to be passed to children to stop this misbehaviour.
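
The fix pattern is plain POSIX; a minimal sketch of a close-on-exec helper
(the helper name is illustrative, not necessarily what this commit adds):

    #include <fcntl.h>

    /* Mark fd close-on-exec so it is not inherited by exec'd children. */
    static int set_cloexec_sketch(int fd)
    {
        int flags = fcntl(fd, F_GETFD);
        if (flags < 0) {
            return -1;
        }
        return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
    }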

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-12-03 11:45:50 -06:00
Jan Kiszka
caa5af0ff3 kvm: Add arch reset handler
Will be required by succeeding changes.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-11-17 08:49:37 -06:00
Hollis Blanchard
9bdbe550f0 kvm: Move KVM mp_state accessors to i386-specific code
Unbreaks PowerPC and S390 KVM builds.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-11-12 11:23:55 -06:00
Glauber Costa
d549db5a73 unlock iothread mutex before running kvm ioctl
Without this, kvm will hold the mutex while it issues its run ioctl,
and never be able to step out of it, causing a deadlock.
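
The shape of the fix, sketched with pthreads and the raw KVM_RUN ioctl; the
mutex name stands in for QEMU's global iothread lock:

    #include <linux/kvm.h>
    #include <pthread.h>
    #include <sys/ioctl.h>

    extern pthread_mutex_t iothread_lock;   /* placeholder for QEMU's lock */

    /* Drop the lock around the (potentially long-blocking) run ioctl so the
     * iothread can make progress and signal the vcpu out of KVM_RUN. */
    static int run_vcpu_sketch(int vcpu_fd)
    {
        int ret;

        pthread_mutex_unlock(&iothread_lock);
        ret = ioctl(vcpu_fd, KVM_RUN, 0);
        pthread_mutex_lock(&iothread_lock);
        return ret;
    }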

Patchworks-ID: 35359
Signed-off-by: Glauber Costa <glommer@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-10-12 09:42:31 -05:00
Glauber Costa
828566bc33 temporary fix for on_vcpu
Recent changes made on_vcpu hit the abort() path, even with the IO thread
disabled. This is because cpu_single_env is no longer set when we call this
function. Although the correct fix is a little bit more complicated than
that, the recent thread in which I proposed qemu_queue_work (which fixes
that, btw) is likely to go in quite a different direction.

So for the benefit of those using guest debugging, I'm proposing this simple
fix in the interim.

Signed-off-by: Glauber Costa <glommer@redhat.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-10-05 09:32:45 -05:00
Jan Kiszka
b3807725f6 kvm: Fix guest single-stepping
Hopefully the last regression of 4c0960c0: KVM_SET_GUEST_DEBUG requires
properly synchronized guest registers (on x86: eflags) on entry.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-10-05 09:32:45 -05:00
Anthony Liguori
c227f0995e Revert "Get rid of _t suffix"
In the very least, a change like this requires discussion on the list.

The naming convention is goofy and it causes a massive merge problem.  Something
like this _must_ be presented on the list first so people can provide input
and cope with it.

This reverts commit 99a0949b72.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-10-01 16:12:16 -05:00
malc
99a0949b72 Get rid of _t suffix
Some not-so-obvious bits, slirp and Xen, were left alone for the time
being.

Signed-off-by: malc <av1474@comtv.ru>
2009-10-01 22:45:02 +04:00
Blue Swirl
afcea8cbde ioports: remove unused env parameter and compile only once
The CPU state parameter is not used, remove it and adjust callers. Now we
can compile ioport.c once for all targets.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2009-09-20 16:05:47 +00:00
Blue Swirl
72cf2d4f0e Fix sys-queue.h conflict for good
Problem: Our file sys-queue.h is a copy of the BSD file, but there are
some additions and it's not entirely compatible. Because of that, there have
been conflicts with system headers on BSD systems. Some hacks have been
introduced in the commits 15cc923584,
f40d753718,
96555a96d7 and
3990d09adf but the fixes were fragile.

Solution: Avoid the conflict entirely by renaming the functions and the
file. Revert the previous hacks.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2009-09-12 07:36:22 +00:00
Avi Kivity
4c0960c0c4 kvm: Simplify cpu_synchronize_state()
cpu_synchronize_state() is a little unreadable since the 'modified'
argument isn't self-explanatory.  Simplify it by making it always
synchronize the kernel state into qemu, and automatically flush the
registers back to the kernel if they've been synchronized on this
exit.
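
In pseudo-QEMU terms the simplified contract looks roughly like this; the
vcpu type, flag name, and helpers are placeholders:

    /* One boolean per VCPU replaces the old 'modified' argument. */
    static void cpu_synchronize_state_sketch(struct vcpu_sketch *vcpu)
    {
        if (!vcpu->kvm_state_loaded) {
            get_registers_from_kernel(vcpu);   /* kernel -> QEMU */
            vcpu->kvm_state_loaded = 1;        /* flush back before KVM_RUN */
        }
    }

    static void pre_run_sketch(struct vcpu_sketch *vcpu)
    {
        if (vcpu->kvm_state_loaded) {
            put_registers_to_kernel(vcpu);     /* QEMU -> kernel */
            vcpu->kvm_state_loaded = 0;
        }
    }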

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-08-27 20:35:30 -05:00
Anthony Liguori
6e489f3f88 Revert "Fake dirty loggin when it's not there"
This reverts commit bd83677612.

PPC should just implement dirty logging so we can avoid all the fall-out from
this changeset.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-07-27 15:26:43 -05:00
Luiz Capitulino
fc5d642fca Fix broken build
The only caller of on_vcpu() is protected by ifdef
KVM_CAP_SET_GUEST_DEBUG, so protect on_vcpu() too; otherwise QEMU
may fail to build.

Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-07-27 14:09:15 -05:00
Alexander Graf
96c1606b33 Use Little Endian for Dirty Log
We currently use host endian long types to store information
in the dirty bitmap.

This works reasonably well on Little Endian targets, because the
u32 after the first contains the next 32 bits. On Big Endian this
breaks completely though, forcing us to be inventive here.

So Ben suggested always using Little Endian, which looks reasonable.

We only have the dirty bitmap implemented on Little Endian targets so far,
and since PowerPC would be the first Big Endian platform, we can just
as well switch to always using Little Endian, with little effort and
without breaking existing targets.

This is the userspace part of the patch. It shouldn't change anything
for existing targets, but it helps PowerPC.

It replaces my older patch called "Use 64bit pointer for dirty log".
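
A self-contained sketch of why a fixed little-endian layout is convenient:
testing bit N byte-wise gives the same answer on any host. The bitmap below
is just a stand-in for the KVM dirty log:

    #include <stdint.h>
    #include <stdio.h>

    /* Bit N of a little-endian bitmap, read byte-wise: host endianness
     * never enters the picture. */
    static int test_le_bit(const uint8_t *bitmap, unsigned long nr)
    {
        return (bitmap[nr / 8] >> (nr % 8)) & 1;
    }

    int main(void)
    {
        uint8_t dirty_log[2] = { 0x01, 0x80 };  /* pages 0 and 15 dirty */

        printf("page 0: %d, page 7: %d, page 15: %d\n",
               test_le_bit(dirty_log, 0),
               test_le_bit(dirty_log, 7),
               test_le_bit(dirty_log, 15));
        return 0;
    }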

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-07-27 14:09:14 -05:00
Alexander Graf
1c7936e377 Use 64bit pointer for dirty log
Dirty logs currently get written with native "long" size. On little endian
it doesn't matter if we use uint64_t instead though, because we'd still end
up using the right bytes.

On big endian, this does become a bigger problem, so we need to ensure that
kernel and userspace talk the same language, which means getting rid of "long"
and using a defined size instead.

So I decided to use 64 bit types at all times. This doesn't break existing
targets but, in conjunction with a patch I'll send to the KVM ML, will make
dirty logs work with 32-bit userspace on a 64-bit big endian kernel.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-07-27 14:09:14 -05:00
Glauber Costa
6f725c139a provide tests for pit in kernel and irqchip in kernel
KVM can have an in-kernel PIT or irqchip. While we don't implement them
yet, having a way to test for them (one that always returns zero for now)
will allow us to reuse code from qemu-kvm that tests for them.
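
The corresponding stubs are trivial; a sketch (the exact function names in
QEMU may differ slightly):

    /* Upstream QEMU does not use the in-kernel PIT or irqchip yet, so the
     * tests simply report "not in kernel"; qemu-kvm overrides them. */
    int kvm_irqchip_in_kernel(void)
    {
        return 0;
    }

    int kvm_pit_in_kernel(void)
    {
        return 0;
    }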

Signed-off-by: Glauber Costa <glommer@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-07-22 10:58:49 -05:00
Glauber Costa
452e475196 introduce on_vcpu
on_vcpu is a qemu-kvm function that will make sure that a specific
piece of code will run on a requested cpu. We don't need that because
we're restricted to -smp 1 right now, but those days are likely to end soon.

So for the benefit of having qemu-kvm share more code with us, I'm
introducing our own version of on_vcpu(). Right now, we either run
a function on the current cpu, or abort the execution, because it would
mean something is seriously wrong.

As an example code, I "ported" kvm_update_guest_debug to use it,
with some slight differences from qemu-kvm.

This is probably 0.12 material
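
A minimal sketch of this first on_vcpu(); types are simplified and QEMU's
real CPUState/env handling is omitted:

    #include <stdlib.h>

    /* Run func(data) if we are already on the requested vcpu; anything else
     * means the caller's assumptions are broken, so bail out loudly. */
    static void on_vcpu_sketch(void *requested_env, void *current_env,
                               void (*func)(void *data), void *data)
    {
        if (requested_env == current_env) {
            func(data);
        } else {
            abort();   /* -smp 1 only: cross-vcpu scheduling comes later */
        }
    }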

Signed-off-by: Glauber Costa <glommer@redhat.com>
CC: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-07-22 10:58:49 -05:00
Alexander Graf
bd83677612 Fake dirty logging when it's not there
Some KVM platforms don't support dirty logging yet, like IA64 and PPC,
so in order to still have screen updates on those, we need to fake it.

This patch just tells the getter function for dirty bitmaps that all
pages within a slot are dirty when the slot has dirty logging enabled.

That way we can implement real dirty logging on those platforms later,
if faking it drags down performance, while sharing the rest of the code
with dirty-logging-capable platforms.
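
Conceptually the fake getter is just this; a sketch with placeholder names:

    #include <string.h>

    /* Pretend every page in a dirty-logging-enabled slot is dirty, so the
     * display code repaints and the guest screen keeps updating. */
    static void fake_dirty_log_sketch(unsigned char *bitmap, size_t len,
                                      int dirty_logging_enabled)
    {
        if (dirty_logging_enabled) {
            memset(bitmap, 0xff, len);   /* all bits set = all pages dirty */
        }
    }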

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-07-22 10:58:46 -05:00
Alexander Graf
b80a55e67b Fix warning in kvm-all.c
This fixes a warning I stumbled across while compiling qemu on PPC64.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-07-22 10:58:46 -05:00
Jan Kiszka
a08d43677f Revert "Introduce reset notifier order"
This reverts commit 8217606e6e (and
updates later added users of qemu_register_reset); we solved the
problem it originally addressed less invasively.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-06-29 14:18:08 -05:00
Jan Kiszka
8d2ba1fb9c kvm: Rework VCPU synchronization
During startup and after reset we have to synchronize user space to the
in-kernel KVM state. Namely, we need to transfer the VCPU registers when
they change due to VCPU as well as APIC reset.

This patch refactors the required hooks so that kvm_init_vcpu registers
its own per-VCPU reset handler and adds a cpu_synchronize_state to the
APIC reset. That way we no longer depend on the new reset order (and can
drop this disliked interface again) and we can even drop a KVM hook in
main().

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-06-29 14:18:07 -05:00
Jan Kiszka
8c14c17395 kvm: Fix IRQ injection into full queue
User space may only inject interrupts during kvm_arch_pre_run if
ready_for_interrupt_injection is set in kvm_run. But that field is
updated on exit from KVM_RUN, so we must ensure that we enter the
kernel after potentially queuing an interrupt, otherwise we risk
losing one - as happens with the current code against the latest
kernel modules (since kvm-86), which started to queue only a single
interrupt.

Fix the problem by reordering kvm_cpu_exec.

Credits go to Gleb Natapov for analyzing the issue in detail.
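
At the kernel ABI level, the ordering constraint involves these kvm_run
fields; a sketch of a safe injection check with error handling trimmed:

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    /* Only queue an interrupt when the kernel said, on the last exit, that
     * it can accept one; the flag goes stale once we re-enter KVM_RUN. */
    static void maybe_inject_sketch(int vcpu_fd, struct kvm_run *run,
                                    unsigned int vector)
    {
        if (run->ready_for_interrupt_injection && run->if_flag) {
            struct kvm_interrupt intr = { .irq = vector };
            ioctl(vcpu_fd, KVM_INTERRUPT, &intr);
        }
        /* Re-entering KVM_RUN promptly afterwards refreshes the flag. */
    }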

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-06-16 15:36:47 -05:00
Jan Kiszka
168ccc11c3 kvm: Improve upgrade notes when facing unsupported kernels
Users complained that it is not obvious what to do when kvm refuses to
build or run due to an unsupported host kernel, so let's improve the
hints.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
2009-06-07 16:40:22 +03:00
Jan Kiszka
f8d926e9cd kvm: x86: Save/restore KVM-specific CPU states
Save and restore all so far neglected KVM-specific CPU states. Handling
the TSC stabilizes migration in KVM mode. The interrupt_bitmap and
mp_state are currently unused, but will become relevant for in-kernel
irqchip support. By including proper saving/restoring already, we avoid
having to increment CPU_SAVE_VERSION later on once again.

v2:
 - initialize mp_state runnable (for the boot CPU)

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-05-22 10:50:34 -05:00
Jan Kiszka
d33a1810d7 kvm: Rework VCPU reset
Use a standard callback with highest order to synchronize the VCPU on
reset after all device callbacks were executed. This allows us to remove
the special KVM hook in qemu_system_reset.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-05-22 10:50:34 -05:00
Jan Kiszka
151f7749f2 kvm: Rework dirty bitmap synchronization
Extend kvm_physical_sync_dirty_bitmap() so that it can sync across
multiple slots. Useful for updating the whole dirty log during
migration. Moreover, properly pass errors down the whole call chain.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-05-22 10:50:33 -05:00
Jan Kiszka
62518b8b1d kvm: Fix dirty log temporary buffer size
The buffer passed to KVM_GET_DIRTY_LOG requires one bit per page. Fix
the size calculation in kvm_physical_sync_dirty_bitmap accordingly,
avoiding allocation of extremely oversized buffers.
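
The arithmetic in question, as a standalone sketch: one bit per target page,
rounded up to whole host long words (page size and rounding granularity are
assumptions for the example):

    #include <stdint.h>
    #include <stdio.h>

    #define TARGET_PAGE_BITS 12                    /* 4 KiB pages */
    #define HOST_LONG_BITS   (8 * sizeof(unsigned long))

    /* Bytes needed for a dirty bitmap covering mem_size bytes of RAM. */
    static size_t dirty_bitmap_size(uint64_t mem_size)
    {
        uint64_t pages = mem_size >> TARGET_PAGE_BITS;
        uint64_t longs = (pages + HOST_LONG_BITS - 1) / HOST_LONG_BITS;
        return longs * sizeof(unsigned long);
    }

    int main(void)
    {
        /* 128 MiB -> 32768 pages -> 4096 bitmap bytes on a 64-bit host. */
        printf("%zu bytes\n", dirty_bitmap_size(128ull << 20));
        return 0;
    }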

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-05-22 10:50:33 -05:00
Jan Kiszka
4495d6a745 kvm: Introduce kvm_set_migration_log
Introduce a global dirty logging flag that enforces logging for all
slots. This can be used by the live migration code to enable/disable
global logging without destroying the per-slot setting.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-05-22 10:50:33 -05:00
Jan Kiszka
e69917e29a kvm: Conditionally apply workaround for KVM slot handling bug
Only apply the workaround for broken slot joining in KVM when the
capability that signals the existence of the corresponding fix was not
found.
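
The shape of such a conditional, sketched against the raw KVM_CHECK_EXTENSION
ioctl; the capability name and value below are placeholders, not the real
constant from linux/kvm.h:

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    #define KVM_CAP_SLOT_FIX_EXAMPLE 9999   /* placeholder capability */

    static int need_slot_workaround_sketch(int kvm_fd)
    {
        /* KVM_CHECK_EXTENSION returns > 0 when the capability is present. */
        int has_fix = ioctl(kvm_fd, KVM_CHECK_EXTENSION,
                            KVM_CAP_SLOT_FIX_EXAMPLE);
        return has_fix <= 0;    /* capability missing -> keep workaround */
    }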

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-05-22 10:50:32 -05:00
Mark McLoughlin
9f8fd69460 kvm: add error message for when SMP is requested
Right now, if you try e.g. '-smp 2' you just get 'failed to
initialize KVM'.

Signed-off-by: Mark McLoughlin <markmc@redhat.com>
2009-05-20 09:24:23 -05:00
Anthony Liguori
ad7b8b3310 Introduce kvm_check_extension to check if KVM extensions are supported
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-05-08 15:42:42 -05:00
Jan Kiszka
6f0437e8de kvm: Avoid COW if KVM MMU is asynchronous
Avi Kivity wrote:
> Suggest wrapping in a function and hiding it deep inside kvm-all.c.
>

Done in v2:

---------->

If the KVM MMU is asynchronous (kernel does not support MMU_NOTIFIER),
we have to avoid COW for the guest memory. Otherwise we risk serious
breakage when guest pages change their physical locations due to COW
after fork. Seen when forking smbd at runtime via -smb.
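
One plain-Linux way to achieve this is madvise(2) with MADV_DONTFORK;
whether the commit uses exactly this call is an assumption here:

    #include <sys/mman.h>

    /* Exclude guest RAM from fork()'s COW semantics so its pages cannot be
     * relocated underneath an asynchronous (non-MMU-notifier) KVM MMU. */
    static int protect_guest_ram_sketch(void *ram, size_t size)
    {
        return madvise(ram, size, MADV_DONTFORK);
    }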

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-05-01 09:44:11 -05:00
Jan Kiszka
e6f4afe029 kvm: Relax alignment check of kvm_set_phys_mem
There is no need to reject an unaligned memory region registration if
the region will be I/O memory and it will not split an existing KVM
slot. This fixes KVM support on PPC.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-05-01 09:38:33 -05:00
aliguori
6152e2ae43 kvm: improve handling of overlapping slots (Jan Kiszka)
This reworks the slot management to handle more patterns of
cpu_register_physical_memory*, finally allowing KVM guests to be reset (so
far, address remapping on reset broke the slot management).

We could actually handle all possible ones without failing, but a KVM
kernel bug in older versions would force us to track all previous
fragmentations and maintain them (as that bug prevents registering
larger slots that also overlap deleted ones). To remain backward
compatible but avoid overly complicated workarounds, we apply a simpler
workaround that covers all currently used patterns.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7139 c046a42c-6fe2-441c-8c8c-71466251a162
2009-04-17 14:26:33 +00:00
aliguori
d3f8d37fe2 kvm: Add sanity checks to slot management (Jan Kiszka)
Fail loudly if we run out of memory slots.

Make sure that dirty log start/stop works with consistent memory regions
by reporting invalid parameters. This reveals several inconsistencies in
the vga code; a patch to fix them follows later in this series.

And, for simplicity reasons, also catch and report unaligned memory
regions passed to kvm_set_phys_mem (KVM works on a page basis).

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7138 c046a42c-6fe2-441c-8c8c-71466251a162
2009-04-17 14:26:29 +00:00
aliguori
494ada4234 kvm: Cleanup unmap condition in kvm_set_phys_mem (Jan Kiszka)
Testing for TLB_MMIO on unmap makes no sense as A) that flag belongs to
CPUTLBEntry and not to io_memory slots or physical addresses and B) we
already use a different condition before mapping. So make this test
consistent.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7137 c046a42c-6fe2-441c-8c8c-71466251a162
2009-04-17 14:26:25 +00:00
pbrook
5579c7f37e Remove phys_ram_base uses.
Signed-off-by: Paul Brook <paul@codesourcery.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7085 c046a42c-6fe2-441c-8c8c-71466251a162
2009-04-11 14:47:08 +00:00