The current code doesn't actually work in 32-bit mode at all. Since
no one really noticed, drop the complication of v7 and v8 CPUs.
Eliminate the --sparc_cpu configure option and standardize macro
testing on TCG_TARGET_REG_BITS / HOST_LONG_BITS.
Signed-off-by: Richard Henderson <rth@twiddle.net>
The address we pick in sparc64.ld is also 0x60000000, so doing a fixed map
on top of that is guaranteed to blow up. Choosing 0x40000000 is exactly
right for the max of code_gen_buffer_size set below.
No need to ever use MAP_FIXED. While getting our desired address helps
optimize the generated code, we won't fail if we don't get it.
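A minimal sketch of the approach (the hint value and helper name are illustrative, not the exact patch): pass a preferred address but no MAP_FIXED, so a collision with an existing mapping relocates the buffer instead of clobbering it.

    #include <stddef.h>
    #include <sys/mman.h>

    /* Hint at a preferred base, but never MAP_FIXED: if the kernel cannot
     * honour the hint we simply get a different (still usable) address. */
    static void *alloc_code_gen_buffer(size_t size)
    {
        void *hint = (void *)0x40000000UL;      /* illustrative hint only */
        void *buf = mmap(hint, size, PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return buf == MAP_FAILED ? NULL : buf;  /* NULL only on real failure */
    }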
Signed-off-by: Richard Henderson <rth@twiddle.net>
cpu_physical_memory_write_rom(), despite the name, can also be used to
write images into RAM - and will often be used that way if the machine
uses load_image_targphys() into RAM addresses.
However, cpu_physical_memory_write_rom(), unlike cpu_physical_memory_rw()
doesn't invalidate any cached TBs which might be affected by the region
written.
This was breaking reset (under full emulation) on the pseries machine - we loaded
our firmware image into RAM, and while executing it the firmware rewrote the code
at its entry point (correctly causing a TB invalidate/refresh). When we
reset, the firmware image was reloaded, but the TB from the rewrite was
still active and caused us to get an illegal instruction trap.
This patch fixes the bug by duplicating the tb invalidate code from
cpu_physical_memory_rw() in cpu_physical_memory_write_rom().
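Roughly, the duplicated invalidation looks like the fragment below (helper and flag names are taken from that era's exec.c; this is a sketch, not a verbatim copy of the patch):

    /* Hedged fragment: run after copying the data into RAM at
     * guest-physical address addr1, length len. */
    static void rom_write_invalidate(ram_addr_t addr1, int len)
    {
        if (!cpu_physical_memory_is_dirty(addr1)) {
            /* drop any TBs translated from the old bytes in this range */
            tb_invalidate_phys_page_range(addr1, addr1 + len, 0);
            /* re-mark the page dirty, except for the code-dirty flag */
            cpu_physical_memory_set_dirty_flags(addr1, 0xff & ~CODE_DIRTY_FLAG);
        }
    }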
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
It allows disabling memory merge support (KSM on Linux), which is
otherwise enabled by default.
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Add a new '[,dump-guest-core=on|off]' option to the '-machine' option. When
'dump-guest-core=off' is specified, guest memory is omitted from the core dump.
The default behavior continues to be to include guest memory when a core dump is
triggered. In my testing, this brought the core dump size down from 384MB to 6MB
on a 2GB guest.
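The mechanism is an madvise(MADV_DONTDUMP) on the guest RAM mapping (QEMU wraps it as qemu_madvise()/QEMU_MADV_DONTDUMP); a minimal standalone sketch, with the helper name purely illustrative:

    #include <stddef.h>
    #include <sys/mman.h>

    /* Ask the kernel to leave this range out of any core dump.
     * MADV_DONTDUMP needs Linux 3.4+, so guard it and do nothing elsewhere. */
    static void exclude_from_core_dump(void *host_addr, size_t length)
    {
    #ifdef MADV_DONTDUMP
        madvise(host_addr, length, MADV_DONTDUMP);
    #else
        (void)host_addr;
        (void)length;
    #endif
    }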
Is anything additional required to preserve this setting for migration or
savevm? I don't believe so.
Changelog:
v3:
Eliminate globals as per Anthony's suggestion
set no dump from qemu_ram_remap() as well
v2:
move the option from -m to -machine, rename option dump -> dump-guest-core
Signed-off-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
For each newly created RAM block, the dirty bitmap is reallocated with g_realloc, which
makes no promises about the initial content of the extra data in the returned buffer. In theory,
we initialize this new data with a cpu_physical_memory_set_dirty_range() call. The
problem is, cpu_physical_memory_set_dirty_range() has a side effect of incrementing the
ram_list.dirty_pages variable, but only for pages which are not already dirty. And
page "cleanliness" is determined using the same not-yet-initialized dirty bitmap
we've just reallocated. This results in an inconsistency between the real number of
dirty pages and the value of the ram_list.dirty_pages variable, which in turn could (and will) result
in errors during VM migration.
Zero initialize new dirty bitmap bytes to fix this problem.
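A minimal sketch of the fix, as a fragment (field names follow that era's ram_list handling and are assumptions): grow the bitmap, then clear the new tail before any accounting reads it.

    /* one dirty-flag byte per page; old_num_pages/new_num_pages are the
     * bitmap lengths before and after appending the new RAM block */
    ram_list.phys_dirty = g_realloc(ram_list.phys_dirty, new_num_pages);
    memset(ram_list.phys_dirty + old_num_pages, 0,
           new_num_pages - old_num_pages);
    /* now cpu_physical_memory_set_dirty_range() sees reliably clean pages
     * and increments ram_list.dirty_pages by the correct amount */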
Signed-off-by: Igor Mitsyanko <i.mitsyanko@samsung.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Remove an out of date comment: this comment used to be attached to
cpu_register_physical_memory_log(), before commit 0f0cb164 accidentally
inserted a couple of other functions between the comment and its function.
It is in any case obsolete since (a) the function arguments it refers
to have been replaced with a single MemoryRegionSection* argument and
(b) the inability to handle regions whose offset_within_address_space
and offset_within_region aren't equally aligned was fixed as part of
the rewrite of this code.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Registering a multi-page memory region that is non-page-aligned results
in a subpage from the start to the page boundary, some number of full
pages, and possibly another subpage from the last page boundary to the
end. The full pages will have a value for offset_within_region that is
not a multiple of TARGET_PAGE_SIZE. Accesses through softmmu are unable
to handle this and will segfault.
Handling full pages through subpages is not optimal, but only
non-page-aligned mappings take the penalty.
Signed-off-by: Tyler Hall <tylerwhall@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
subpage_register() expects "end" to be the last byte in the mapping.
Registering a non-page-aligned memory region that extends up to or
beyond a page boundary causes subpage_register() to silently fail
through the (end >= PAGE_SIZE) check.
This bug does not cause noticeable problems for mappings that do not
extend to a page boundary, though they do register an extra byte.
Signed-off-by: Tyler Hall <tylerwhall@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
* qemu-kvm/uq/master:
virtio: move common irqfd handling out of virtio-pci
virtio: move common ioeventfd handling out of virtio-pci
event_notifier: add event_notifier_set_handler
memory: pass EventNotifier, not eventfd
ivshmem: wrap ivshmem_del_eventfd loops with transaction
ivshmem: use EventNotifier and memory API
event_notifier: add event_notifier_init_fd
event_notifier: remove event_notifier_test
event_notifier: add event_notifier_set
apic: Defer interrupt updates to VCPU thread
apic: Reevaluate pending interrupts on LVT_LINT0 changes
apic: Resolve potential endless loop around apic_update_irq
kvm: expose tsc deadline timer feature to guest
kvm_pv_eoi: add flag support
kvm: Don't abort on kvm_irqchip_add_msi_route()
Under Win32, EventNotifiers will not have event_notifier_get_fd, so we
cannot call it in common code such as hw/virtio-pci.c. Pass a pointer to
the notifier, and only retrieve the file descriptor in kvm-specific code.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
By default qemu will use MAP_PRIVATE for guest pages. This will write
protect pages and thus break on s390 systems that don't support this feature.
Therefore qemu has a hack to always use MAP_SHARED for s390. But MAP_SHARED
has other problems (no dirty page tracking, much more swap overhead, etc.).
Newer systems allow the distinction via KVM_CAP_S390_COW. With this feature
qemu can use the standard qemu alloc if available, otherwise it will use
the old s390 hack.
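A hedged sketch of the resulting allocation choice (helper names are assumed from that tree; the wrapper itself is hypothetical):

    static void *s390_ram_alloc(size_t size)       /* hypothetical wrapper */
    {
        if (kvm_enabled() &&
            !kvm_check_extension(kvm_state, KVM_CAP_S390_COW)) {
            return legacy_s390_alloc(size);        /* old MAP_SHARED hack */
        }
        return qemu_vmalloc(size);                 /* standard MAP_PRIVATE path */
    }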
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Acked-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Refactor the code that is only needed for tcg into a static function.
Call it only when tcg is enabled. We can't refactor to a dummy
function in the kvm case, as qemu can be compiled with both tcg and kvm
at the same time.
Signed-off-by: Juan Quintela <quintela@redhat.com>
This makes it easier to remove it from BusInfo.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[AF: Drop now unnecessary NULL initialization in scsibus_get_dev_path()]
Signed-off-by: Andreas Färber <afaerber@suse.de>
tb_invalidate_phys_addr has to be called with the exact physical address of
the breakpoint we add/remove, not just the page's base address.
Otherwise we easily fail to flush the right TB.
This breakage was introduced by the commit f3705d5329 "memory: make
phys_page_find() return an unadjusted".
This appeared to work for some guest architectures because their
cpu_get_phys_page_debug implementation returns the full translated physical
address, not just the base of the TARGET_PAGE_SIZE-sized page.
Reported-by: TeLeMan <geleman@gmail.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
They could suggest that all TBs of the page containing the range would
be invalidated.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
This API will be used in the following patch.
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
If we execute linux-user code that does the following:
* A = mmap()
* execute code in A
* munmap(A)
* B = mmap(), but mmap returns the same address as A
* execute code in B
we end up executing a stale cached tb that contains translated code
from A, while we want new code from B.
This patch adds a TB flush for mmap'ed regions, before we return them,
avoiding the whole issue. It also adds a flush for munmap, so that we
don't execute stale TBs instead of getting a segfault.
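A hedged sketch of the flush (the helper name is illustrative; the per-page invalidation routine is the one already used for breakpoints and self-modifying code):

    static void tb_flush_mapped_range(abi_ulong start, abi_ulong end)
    {
        abi_ulong addr;

        /* the per-page helper only accepts ranges within one guest page,
         * so walk the mapping page by page */
        for (addr = start & TARGET_PAGE_MASK; addr < end;
             addr += TARGET_PAGE_SIZE) {
            tb_invalidate_phys_page_range(addr, addr + TARGET_PAGE_SIZE, 0);
        }
    }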
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Riku Voipio <riku.voipio@linaro.org>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Fold is_ram_rom() and is_ram_rom_romd() into their callers.
Change is_romd() and section_addr() to take MemoryRegion
instead of MemoryRegionSection for consistency and
use memory_region_ prefix.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Make s_cputlb_empty_entry 'const'.
Rename tlb_flush_jmp_cache() to tb_flush_jmp_cache().
Refactor code to add cpu_tlb_reset_dirty_all(),
memory_region_section_get_iotlb() and
memory_region_is_unassigned().
Remove unused cpu_tlb_update_dirty().
Fix coding style in areas to be moved.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Replace all type casts to 'long' or 'unsigned long' by 'intptr_t' or 'uintptr_t'.
For type casts which are only used to extract the lower bits of an address
or to modify those bits, signedness does not matter. There I always use 'uintptr_t'.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Allow TB invalidation by its physical address, extract implementation
from the breakpoint_invalidate function.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Use uintptr_t instead of void * or unsigned long in
several op-related functions, env->mem_io_pc and
the GETPC() macro.
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
QEMU host addresses must use uintptr_t to be portable for hosts with
an unusual size of long (w64).
tb_jmp_offset is a uint16_t value, therefore the local variable offset
in function tb_set_jmp_target was changed from unsigned long to uint16_t.
The type cast to long in function tb_add_jump now also uses uintptr_t.
For the bit operation used here, the signedness of the type cast does
not matter.
Some remaining unsigned long values are either only used for ARM assembler
code or will be fixed in a later patch for PPC.
v2:
Fix signature of tb_find_pc in exec.c, too (hint from Blue Swirl, thanks).
There remain lots of other long / unsigned long in exec.c which must be
replaced by uintptr_t. This will be done in a separate patch. Here
only one of these type casts is fixed.
v3:
Also fix signature of page_unprotect.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
This allows us to generate unwind info for the dynamically generated
code in the code_gen_buffer. Only i386 is converted at this point.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
In cpu_physical_memory_rw, a change has been introduced and qemu_get_ram_ptr is
no longer called with the ram addr we want to access, but only with the
section address. This patch fixes this. (All other calls to qemu_get_ram_ptr
already pass the right address.)
This patch fixes Xen guests.
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
The code to get the ram_addr from a (tlb entry, vaddr) pair
checks that the resulting memory is not MMIO, but neglects to
check whether the region is hidden by a watchpoint page.
Add the missing check.
Signed-off-by: Avi Kivity <avi@redhat.com>
A couple of code paths check the lower bits of CPUTLBEntry::addr_write
against io_mem_ram as a way of looking for a dirty RAM page. This works
by accident since the value is zero, which matches all clear bits for
TLB_INVALID, TLB_MMIO, and TLB_NOTDIRTY (indicating dirty RAM).
Make it work by design by checking for the proper bits.
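The check by design looks roughly like this (TLB_* macro names as in cpu-all.h of that period; treat it as a sketch rather than the literal patch):

    /* dirty RAM <=> none of invalid / MMIO / not-dirty is set in the entry */
    static inline bool tlb_is_dirty_ram(CPUTLBEntry *tlbe)
    {
        return (tlbe->addr_write &
                (TLB_INVALID_MASK | TLB_MMIO | TLB_NOTDIRTY)) == 0;
    }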
Signed-off-by: Avi Kivity <avi@redhat.com>
Optionally, make memory access helpers take a parameter for CPUState
instead of relying on global env.
On most targets, perform simple moves to reorder registers. On i386,
switch from regparm(3) calling convention to standard stack-based
version.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Scripted conversion:
for file in *.[hc] hw/*.[hc] hw/kvm/*.[hc] linux-user/*.[hc] linux-user/m68k/*.[hc] bsd-user/*.[hc] darwin-user/*.[hc] tcg/*/*.[hc] target-*/cpu.h; do
sed -i "s/CPUState/CPUArchState/g" $file
done
All occurrences of CPUArchState are expected to be replaced by QOM CPUState,
once all targets are QOM'ified and common fields have been extracted.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
The return value of cpu_register_io_memory() is no longer used anywhere, so
we can remove it and all associated data and code.
Signed-off-by: Avi Kivity <avi@redhat.com>
Instead of indirecting via io_mem_region, dispatch directly
through the MemoryRegion obtained from the iotlb or phys_page_find().
Signed-off-by: Avi Kivity <avi@redhat.com>
get_page_addr_code() reads a code tlb entry, but interprets it as an
iotlb entry. This works by accident since the low bits of a RAM code
tlb entry are clear, and match a RAM iotlb entry. This accident is
about to unhappen, so fix the code to use an iotlb entry (using the
code entry with TLB_MMIO may fail if the page is a watchpoint).
Signed-off-by: Avi Kivity <avi@redhat.com>
We'd like to store the section index in the iotlb, so we can't
adjust it before returning. Return an unadjusted section and
instead introduce section_addr(), which does the adjustment later.
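A sketch of the new helper, derived from the two offsets a MemoryRegionSection carries (an approximation, not a verbatim quote of the patch):

    static inline target_phys_addr_t section_addr(MemoryRegionSection *section,
                                                  target_phys_addr_t addr)
    {
        /* convert an address-space offset into an offset within the region */
        addr -= section->offset_within_address_space;
        addr += section->offset_within_region;
        return addr;
    }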
Signed-off-by: Avi Kivity <avi@redhat.com>
Commit e58ac72b6a0 ("ioport: change portio_list not to use
memory_region_set_offset()") started using aliases of I/O memory
regions. Since the IORange used for the I/O was contained in the
target region, the alias information (specifically, the offset
into the region) was lost. This broke -vga std.
Fix by allocating an independent object to hold the IORange and
also the new offset.
Note that I/O memory regions were conceptually broken wrt aliases
in a different way: an alias can cause the same region to appear
twice in an address space, but we had just one IORange to service it.
This patch fixes that problem as well, since we can now have multiple
IORange/MemoryRegion associations.
Signed-off-by: Avi Kivity <avi@redhat.com>
When storing large contiguous ranges in phys_map, all values tend to
be the same pointers to a single MemoryRegionSection. Collapse them
by marking nodes with level > 0 as leaves. This reduces tree memory
usage dramatically.
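A hedged sketch of the node layout that makes this possible (field names and bit widths are assumptions): a node flagged as a leaf at any level maps its entire range to a single section.

    #include <stdint.h>

    typedef struct PhysPageEntry {
        uint16_t is_leaf : 1;   /* set: ptr indexes a MemoryRegionSection
                                   covering this node's entire range */
        uint16_t ptr     : 15;  /* clear: ptr indexes a child node */
    } PhysPageEntry;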
Signed-off-by: Avi Kivity <avi@redhat.com>
Instead of considering subpage on a per-page basis, split each section
into a subpage head, multipage body, and subpage tail, and register
each separately. This simplifies the registration functions.
Signed-off-by: Avi Kivity <avi@redhat.com>
We no longer describe memory in terms of individual pages; use sections
throughout instead.
PhysPageDesc no longer used - remove.
Signed-off-by: Avi Kivity <avi@redhat.com>
If the first subpage installed in a page is RAM, then we install it as
a full page, instead of a subpage. Fix by not special casing RAM.
The issue dates to commit db7b5426a4, which introduced subpages.
Signed-off-by: Avi Kivity <avi@redhat.com>
Use an expanding vector to store nodes. Allocation is baroque due to g_renew()
potentially invalidating pointers; this will be addressed later.
Signed-off-by: Avi Kivity <avi@redhat.com>
Instead of storing PhysPageDesc, store pointers to MemoryRegionSections.
The various offsets (phys_offset & ~TARGET_PAGE_MASK,
PHYS_OFFSET & TARGET_PAGE_MASK, region_offset) can all be synthesized
from the information in a MemoryRegionSection. Adjust phys_page_find()
to synthesize a PhysPageDesc.
The upshot is that phys_map now contains uniform values, so it's easier
to generate and compress.
The end result is somewhat clumsy, but this will be improved as we
propagate MemoryRegionSections throughout the code instead of transforming
them to PhysPageDesc.
The MemoryRegionSection pointers are stored as uint16_t offsets in an
array. This saves space (when we also compress node pointers) and is
more cache friendly.
Signed-off-by: Avi Kivity <avi@redhat.com>
L1 and the lower levels in l1_phys_map are equivalent, except that L1 has
a different size, and is always allocated. Simplify the code by removing
L1. This leaves us with a tree composed solely of L2 tables, but that
problem can be renamed away later.
Signed-off-by: Avi Kivity <avi@redhat.com>
Instead of incrementally building the memory map, rebuild it every time.
This allows later simplification, since the code need not consider overlaying
a previous mapping. It is also RCU friendly.
With large memory guests this can get expensive, since the operation is
O(mem size), but this will be optimized later.
As a side effect subpage and L2 leaks are fixed here.
Signed-off-by: Avi Kivity <avi@redhat.com>
Current memory listeners are incremental; that is, they are expected to
maintain their own state, and receive callbacks for changes to that state.
This patch adds support for stateless listeners; these work by receiving
a ->begin() callback (which tells them that new state is coming), a
sequence of ->region_add() and ->region_nop() callbacks, and then a
->commit() callback which signifies the end of the new state. They should
ignore ->region_del() callbacks.
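A hedged sketch of such a listener (the MemoryListener callback names match the API described above; the bodies are placeholders):

    static void rebuild_begin(MemoryListener *l)
    {
        /* a new state is coming: start a fresh map */
    }

    static void rebuild_add(MemoryListener *l, MemoryRegionSection *s)
    {
        /* record section s in the map under construction */
    }

    static void rebuild_nop(MemoryListener *l, MemoryRegionSection *s)
    {
        /* unchanged section: still record it, the map is rebuilt from scratch */
    }

    static void rebuild_commit(MemoryListener *l)
    {
        /* the new state is complete: switch over atomically */
    }

    static MemoryListener rebuild_listener = {
        .begin      = rebuild_begin,
        .region_add = rebuild_add,
        .region_nop = rebuild_nop,
        .commit     = rebuild_commit,
        /* ->region_del is left unset: a stateless listener ignores deletions */
    };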
Signed-off-by: Avi Kivity <avi@redhat.com>
This transforms memory.c into a library which can then be unit tested
easily, by feeding it inputs and listening to its outputs.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
It can be derived from the MemoryRegion itself (which is why it is not
used there).
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
In the case of a BP_STOP_BEFORE_ACCESS watchpoint, check_watchpoint intends to
signal an EXCP_DEBUG exception on exit from the cpu loop, but the exception code
is later overwritten by the cpu_resume_from_signal call.
Use cpu_loop_exit with BP_STOP_BEFORE_ACCESS watchpoints.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Clarify the comment about tlb_flush()'s flush_global parameter,
so it is clearer what it does and why it is OK that the implementation
currently ignores it.
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
The virtio config area in PIO space is a bit special. The initial
header is little endian but the rest (device specific) is guest
native endian.
The PIO accessors for PCI on machines that don't have native IO ports
assume that all PIO is little endian, which works fine for everything
except the above.
A complicated way to fix it would be to split the BAR into two memory
regions with different endianness settings, but this isn't practical
to do; besides, the PIO code doesn't honor region endianness anyway
(I have a patch for that too but it isn't necessary at this stage).
So I decided to go for the quick fix instead which consists of
reverting the swap in virtio-pci in selected places, hoping that when
we eventually do a "v2" of the virtio protocols, we sort that out once
and for all using a fixed endian setting for everything.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
[agraf: keep virtio in libhw and determine endianness through a
helper function in exec.c]
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
ARM still doesn't support 16GB buffers in 32-bit mode; replace the
16GB with 16MB in the comment.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
We no longer use any of the lower bits of a ram_addr, so we might as well
use them for the io table index. This increases the number of potential
I/O handlers by a factor of 8.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Unlike ->readonly, ->readable is not inherited from aliases, so we can simply
query the memory region.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Now that all mmio goes through MemoryRegions, we can convert
io_mem_opaque to be a MemoryRegion pointer, and remove the thunks
that convert from old-style CPU{Read,Write}MemoryFunc to MemoryRegionOps.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Convert the fixed-address IO_MEM_RAM, IO_MEM_ROM, IO_MEM_UNASSIGNED,
and IO_MEM_NOTDIRTY io handlers to MemoryRegions. These aren't real
regions, since they are never added to the memory hierarchy, but they
allow reuse of the dispatch functionality.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Its use of IO_MEM_ROM and friends will later cause #include loops, and it
is too large to merit inlining.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The code sometimes uses range comparisons on io indexes (e.g.
index <= IO_MEM_ROM). Avoid these as they make moving to objects harder.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
cpu_register_physical_memory_log() does not update region_offset
if a page was previously registered for the same address. This
could cause mmio accesses to go to the wrong place, by using the
old region_offset.
Signed-off-by: Avi Kivity <avi@redhat.com>
Acked-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Currently mmio access goes directly to the io_mem_{read,write} arrays.
In preparation for eliminating them, add indirection via a function.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Instead of returning a PhysPageDesc pointer, return a temporary.
This lets us move away from actually storing PhysPageDescs, and
instead synthesise them when needed.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Instead of doing device endianness compensation in cpu_register_io_memory(),
do it in the memory core.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The getter is no longer used, so it is completely removed.
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
As a step in moving live migration from RAMBlocks to MemoryRegions,
store the MemoryRegion in a RAMBlock.
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Currently creating a memory region automatically registers it for
live migration. This differs from other state (which is enumerated
in a VMStateDescription structure) and ties the live migration code
into the memory core.
Decouple the two by introducing a separate API, vmstate_register_ram(),
for registering a RAM block for migration. Currently the same
implementation is reused, but later it can be moved into a separate list,
and registrations can be moved to VMStateDescription blocks.
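A hedged usage sketch (the signatures are assumed, not quoted from the patch): a device registers its RAM region for migration explicitly, instead of region creation doing it implicitly.

    /* in a device's init path, after creating s->ram: */
    vmstate_register_ram(&s->ram, dev);          /* dev: the owning DeviceState */
    /* board RAM with no owning device would use
     * vmstate_register_ram_global(&ram) instead */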
Signed-off-by: Avi Kivity <avi@redhat.com>
Add an API that allows a client to observe changes in the global
memory map:
- region added (possibly with logging enabled)
- region removed (possibly with logging enabled)
- logging started on a region
- logging stopped on a region
- global logging started
- global logging removed
This API will eventually replace cpu_register_physical_memory_client().
Signed-off-by: Avi Kivity <avi@redhat.com>
Currently xen_ram_alloc() relies on ram_addr, which is going away.
Give it something else to use as a cookie.
Signed-off-by: Avi Kivity <avi@redhat.com>
This fixes a common bug with the initial region_offset value.
Usually, the pages are re-assigned afterwards, so the bug
has a very small effect on regular QEMU use flows.
Signed-off-by: Alex Rozenman <Alex_Rozenman@mentor.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Commit 95c318f5e1 (Fix segfault in mmio
subpage handling code.) prevented a segfault by making all subpage
registrations over an existing memory page perform an unassigned access.
Symptoms were writes not taking effect and reads returning zero.
Very small page sizes are not currently supported either,
so subpage memory areas cannot fully be avoided.
Therefore change the previous fix to use a new IO_MEM_SUBPAGE_RAM
instead of IO_MEM_UNASSIGNED. Suggested by Avi.
Reviewed-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Cc: Avi Kivity <avi@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
On ARM, don't map the code buffer at a fixed location, and fix up the
call/goto tcg routines to let it do long jumps.
Mapping the code buffer at a fixed address could sometimes result in it being
mapped over the top of the heap with pretty random results.
Signed-off-by: Dr. David Alan Gilbert <david.gilbert@linaro.org>
Signed-off-by: Andrzej Zaborowski <andrew.zaborowski@intel.com>
W32 does not support line buffering, but it supports unbuffered output.
Unbuffered output is better for writing to qemu.log than fully buffered
output because it also shows the latest log messages when an application
crash occurs.
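A minimal sketch of the buffering choice (the wrapper is illustrative; the relevant call is just setvbuf):

    #include <stdio.h>

    static void setup_log_buffering(FILE *logfile)
    {
    #ifdef _WIN32
        /* W32 has no line buffering; unbuffered output still reaches the
         * file if the application crashes */
        setvbuf(logfile, NULL, _IONBF, 0);
    #else
        setvbuf(logfile, NULL, _IOLBF, BUFSIZ);
    #endif
    }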
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Make cpu_single_env thread-local. This fixes a regression
in handling of multi-threaded programs in linux-user mode
(bug 823902).
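In essence (the tree wraps this in DECLARE_TLS/DEFINE_TLS portability macros rather than using __thread directly; the line below is a simplified sketch):

    /* one "currently executing CPU" pointer per host thread
     * (CPUState is the per-CPU env type at this point in the tree) */
    __thread CPUState *cpu_single_env;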
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[Peter Maydell: rename tls_cpu_single_env to cpu_single_env]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Spotted via code review: we initialize offset to 0 to avoid a
compiler warning, but in the unlikely case that offset is
never set to something else, we should abort instead of returning
a value that will almost certainly cause problems.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>