Whenever memory regions are accessed outside the BQL, they need to be
preserved against hot-unplug. MemoryRegions actually do not have their
own reference count; they piggyback on a QOM object, their "owner".
The owner is set at creation time, and there is a function to retrieve
the owner.
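As a rough sketch of the intended usage (memory_region_ref()/unref() and
memory_region_owner() as in the memory API; exact signatures may differ
between QEMU versions), code touching a region outside the BQL pins the
owner for the duration of the access:

    #include "exec/memory.h"
    #include "qom/object.h"

    /* Sketch: keep a MemoryRegion usable outside the BQL by pinning its owner. */
    static void access_outside_bql(MemoryRegion *mr)
    {
        Object *owner = memory_region_owner(mr);   /* QOM owner set at creation */
        (void)owner;

        memory_region_ref(mr);      /* takes a reference on the owner */
        /* ... access the region without holding the BQL ... */
        memory_region_unref(mr);    /* owner may be hot-unplugged again */
    }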
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This decouples memory.h from ioport.h, concentrating all portio-related
types in a single header.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Remove unused ioport_register and isa_unassign_ioport along with
everything that only those services used.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The current ioport dispatcher is a complex beast, mostly due to the
need to deal with old portio interface users. But we can overcome it
without converting all portio users by embedding the required base
address of a MemoryRegionPortio access into that data structure. That
removes the need to have the additional MemoryRegionIORange structure
in the loop on every access.
To handle old portio memory ops, we simply install dispatching handlers
for portio memory regions when registering them with the memory core.
This removes the need for the old_portio field.
We can drop the additional aliasing of ioport regions and also the
special address space listener. cpu_in and cpu_out now simply call
address_space_read/write. And we can concentrate portio handling in a
single source file.
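For illustration, the out path now boils down to roughly the following
(a sketch of the idea only; address_space_write()'s signature has changed
over time, and newer trees also pass memory transaction attributes):

    #include "exec/address-spaces.h"   /* address_space_io */
    #include "exec/memory.h"

    /* Sketch: a port write is just a one-byte write into address_space_io,
     * dispatched by the memory core like any other access. */
    static void port_outb(uint32_t addr, uint8_t val)
    {
        address_space_write(&address_space_io, addr, &val, 1);
    }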
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Open-code isa_is_ioport_assigned via a memory region lookup. As all IO
ports are now directly or indirectly registered via the memory API, this
becomes possible and will finally allow us to drop the ioport tables.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
While not normally needed for *-user, it can safely be used there, since
it is always based on uint64_t, to avoid ifdeffery.
To avoid accidental uses, move the guards from exec/hwaddr.h to its
inclusion sites. No need for them in include/hw/.
Prepares for hwaddr use in qom/cpu.h.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Use CPUState::env_ptr for now.
Prepares for changing cpu_handle_guest_debug() argument to CPUState.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Make cpustats monitor command available unconditionally.
Prepares for changing kvm_handle_internal_error() and kvm_cpu_exec()
arguments to CPUState.
Signed-off-by: Andreas Färber <afaerber@suse.de>
It no longer depends on CPUArchState, so move it to qom/cpu.c.
Prepares for changing GDBState::c_cpu to CPUState.
Signed-off-by: Andreas Färber <afaerber@suse.de>
This is used during RDMA initialization in order to
transmit a description of all the RAM blocks to the
peer for later dynamic chunk registration purposes.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The "info mtree" command in QEMU console prints only "memory" and "I/O"
address spaces while there are actually a lot more other AddressSpace
structs created by PCI and VIO devices. Those devices do not normally
have names and are therefore not present in "info mtree" output.
The patch fixes this.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch adds a NotifierList to MemoryRegions which represent IOMMUs
allowing other parts of the code to register interest in mappings or
unmappings from the IOMMU. All IOMMU implementations will need to call
memory_region_notify_iommu() to inform those waiting on the notifier list,
whenever an IOMMU mapping is made or removed.
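Both sides of that contract look roughly like this (a sketch;
memory_region_register_iommu_notifier() and the IOMMUTLBEntry fields are as
in this version of the memory API and may differ elsewhere):

    #include "exec/memory.h"
    #include "qemu/notify.h"

    /* Consumer side: register interest in IOMMU mappings/unmappings. */
    static void iommu_mapping_changed(Notifier *n, void *data)
    {
        IOMMUTLBEntry *entry = data;   /* iova, translated_addr, addr_mask, perm */
        (void)n; (void)entry;
    }

    static Notifier iommu_notifier = { .notify = iommu_mapping_changed };

    static void watch_iommu(MemoryRegion *iommu_mr)
    {
        memory_region_register_iommu_notifier(iommu_mr, &iommu_notifier);
    }

    /* Producer side: an IOMMU implementation reports a new or removed mapping. */
    static void report_mapping(MemoryRegion *iommu_mr, IOMMUTLBEntry entry)
    {
        memory_region_notify_iommu(iommu_mr, entry);
    }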
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add a new memory region type that translates addresses it is given,
then forwards them to a target address space. This is similar to
an alias, except that the mapping is more flexible than a linear
translation and truncation, and also less efficient since the
translation happens at runtime.
The implementation uses an AddressSpace mapping the target region to
avoid hierarchical dispatch all the way to the resolved region; only
iommu regions are looked up dynamically.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
[Modified to put translation in address_space_translate; assume
IOMMUs are not reachable from TCG. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
So far, the size of all regions passed to listeners could fit in 64 bits,
because artificial regions (containers and aliases) are eliminated by
the memory core, leaving only device regions which have reasonable sizes.
An IOMMU however cannot be eliminated by the memory core, and may have
an artificial size, hence we may need 65 bits to represent its size.
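The natural representation is a 128-bit integer; as a sketch using the
qemu/int128.h helpers, a region spanning the whole 64-bit space has size
2^64, one more bit than a uint64_t can hold:

    #include "qemu/int128.h"

    /* Sketch: sizes seen by listeners may now be up to 2^64. */
    static Int128 full_64bit_space_size(void)
    {
        return int128_2_64();                      /* 2^64 */
    }

    static bool size_fits_in_64_bits(Int128 size)
    {
        return int128_lt(size, int128_2_64());
    }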
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Only address_space_translate_for_iotlb needs to return the section.
Every caller of address_space_translate now uses only section->mr,
return it directly.
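The resulting call shape is roughly (a sketch of the five-argument form at
this point in the tree; later versions add transaction attributes):

    #include "exec/memory.h"

    /* Sketch: resolve an address to its terminal MemoryRegion plus offset/length. */
    static void resolve_address(AddressSpace *as, hwaddr addr, bool is_write)
    {
        hwaddr xlat, plen = 4;         /* length we would like to access */
        MemoryRegion *mr;

        mr = address_space_translate(as, addr, &xlat, &plen, is_write);
        /* mr:   terminal memory region (no section is returned anymore)
         * xlat: offset of addr within mr
         * plen: clamped to what the current mapping allows */
        (void)mr; (void)xlat;
    }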
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Except for the case of setting the IOTLB entry in TCG mode, we can avoid
the subpage dispatching handlers and do the resolution directly on
address_space_lookup_region. An IOTLB entry describes a full page, not
only the region that the first access to a sub-divided page may return.
This patch therefore introduces a special translation function,
address_space_translate_for_iotlb, that avoids the subpage resolutions.
In contrast, callers of the existing address_space_translate service
will now always receive the terminal memory region section. This will be
important for breaking the BQL and for enabling unaligned memory region accesses.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
# By Claudio Fontana (9) and others
# Via Peter Maydell
* pmaydell/tcg-aarch64.next:
MAINTAINERS: add tcg/aarch64 maintainer
configure: permit compilation on arm aarch64
tcg/aarch64: implement user mode qemu ld/st
user-exec.c: aarch64 initial implementation of cpu_signal_handler
tcg/aarch64: implement sign/zero extend operations
tcg/aarch64: implement byte swap operations
tcg/aarch64: implement AND/TEST immediate pattern
tcg/aarch64: improve arith shifted regs operations
tcg/aarch64: implement new TCG target for aarch64
include/elf.h: add aarch64 ELF machine and relocs
configure: Drop CONFIG_ATFILE test
linux-user: Drop direct use of openat etc syscalls
linux-user: Allow getdents to be provided by getdents64
Message-id: 1371052645-9006-1-git-send-email-peter.maydell@linaro.org
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
add preliminary support for TCG target aarch64.
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 51A5C596.3090108@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rather than a hand-coded version of the same thing.
Reviewed-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: liguang <lig.fnst@cn.fujitsu.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The memory API is able to split it into two 4-byte accesses.
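The splitting is driven by the access-size limits in MemoryRegionOps;
roughly (a sketch, not this device's actual ops table):

    #include "exec/memory.h"

    static uint64_t demo_read(void *opaque, hwaddr addr, unsigned size)
    {
        return 0;                      /* size is at most 4 by the time we get here */
    }

    static void demo_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
    {
    }

    static const MemoryRegionOps demo_ops = {
        .read = demo_read,
        .write = demo_write,
        .valid.max_access_size = 8,    /* the guest may issue 8-byte accesses */
        .impl.max_access_size  = 4,    /* the core splits them into 4-byte ops */
        .endianness = DEVICE_LITTLE_ENDIAN,
    };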
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The old-style IOMMU lets you check whether an access is valid in a
given DMAContext. There is no equivalent for AddressSpace in the
memory API; implement it with a lookup of the dispatch tree.
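A sketch of the resulting check (signature as introduced here; later QEMU
versions add memory transaction attributes):

    #include "exec/memory.h"

    /* Sketch: would a DMA access of 'len' bytes at 'addr' hit only valid,
     * accessible regions of this address space? */
    static bool dma_can_access(AddressSpace *as, hwaddr addr, int len, bool is_write)
    {
        return address_space_access_valid(as, addr, len, is_write);
    }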
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We'll use it to implement address_space_access_valid.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Using phys_page_find to translate an AddressSpace to a MemoryRegionSection
is unwieldy. It requires passing the page index rather than the address,
and memory_region_section_addr then has to be called. Replace
memory_region_section_addr with a function that does all of it: call
phys_page_find, compute the offset within the region, and check how
big the current mapping is. This way, a large flat region can be written
with a single lookup rather than a page at a time.
address_space_translate will also provide a single point where IOMMU
forwarding is implemented.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There is no reason to avoid a recompile before accessing unassigned
memory. In the end it will be treated as MMIO anyway.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It is never used; the IOTLB always goes through io_mem_notdirty.
In fact, in softmmu_template.h, if it were used, QEMU would crash just
below the tests, as soon as io_mem_read/write dispatches to
error_mem_read/write.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The radix tree is statically sized to fit TARGET_PHYS_ADDR_SPACE_BITS.
If a larger memory region is registered, it will overflow.
Fix by limiting any section in the radix tree to the supported size.
This problem was not observed earlier since artificial regions (containers
and aliases) are eliminated by the memory core, leaving only device regions
which have reasonable sizes. An IOMMU however cannot be eliminated by the
memory core, and may have an artificial size.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
[ Fail the build if TARGET_PHYS_ADDR_SPACE_BITS is too large - Paolo ]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Since this is a MemoryListener operation, it only makes sense
on an AddressSpace granularity.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
"Readable" is a very unfortunate name for this flag because even a
rom_device region will always be readable from the guest POV. What
differs is the mapping, just like the comments had to explain already.
Also, readable could currently be understood as being a generic region
flag, but it only applies to rom_device regions.
So rename the flag, and the function that modifies it, after the original term
"ROMD", which could also be interpreted as "ROM direct", i.e. ROM mode
with direct access. In any case, the scope of the flag is clearer now.
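Typical use after the rename looks like this (a sketch with illustrative
helper names; only memory_region_rom_device_set_romd() is the real API):

    #include "exec/memory.h"

    /* Sketch: a flash device leaves ROMD ("ROM direct") mode while it is being
     * programmed, so reads go through its MMIO callbacks, and re-enters ROMD
     * mode afterwards so reads hit the backing RAM directly again. */
    static void flash_enter_programming_mode(MemoryRegion *mr)
    {
        memory_region_rom_device_set_romd(mr, false);
    }

    static void flash_leave_programming_mode(MemoryRegion *mr)
    {
        memory_region_rom_device_set_romd(mr, true);
    }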
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
memory_region_find() is similar to registering a MemoryListener and
checking for the MemoryRegionSections that come from a particular
region. There is no reason for this to be limited to a root memory
region.
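For reference, a lookup below an arbitrary (non-root) region now looks
roughly like this (a sketch; treating a NULL .mr as "nothing mapped" is an
assumption about the returned section):

    #include "exec/memory.h"

    /* Sketch: check what, if anything, is mapped at 'addr' under 'mr'. */
    static bool something_mapped_at(MemoryRegion *mr, hwaddr addr)
    {
        MemoryRegionSection section = memory_region_find(mr, addr, 1);

        /* .offset_within_region and .size describe the overlap when found. */
        return section.mr != NULL;
    }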
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It is a private interface between exec.c and memory.c.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move the slow path out of line, as the TODO's mention.
This allows the fast path to be unconditional, which can
speed up the fast path as well, depending on the core.
Signed-off-by: Richard Henderson <rth@twiddle.net>
The alignment is a characteristic of the ABI, not the CPU.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Previously, this was done for target_long/ulong, and propagated to
abi_long/ulong via a typedef. But target_long/ulong should not
have any specific alignment; it is never used to access guest
memory.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
The alignment is a characteristic of the ABI, not the CPU.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
The alignment is a characteristic of the ABI, not the CPU.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Many of these should be cleaned up with proper qdev-/QOM-ification.
Right now there are many catch-all headers in include/hw/ARCH depending
on cpu.h, and this makes it necessary to compile these files per-target.
However, fixing this does not belong in these patches.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move it to qom/cpu.h to avoid issues with include order.
Change pc_acpi_smi_interrupt() opaque to X86CPU.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Move it to qom/cpu.c to avoid build failures depending on include order
of cpu-qom.h and exec/cpu-all.h.
Change opaques of various ..._irq_handler() functions to the
appropriate CPU type to facilitate using cpu_reset_interrupt().
Fix Coding Style issues while at it (missing braces, indentation).
Signed-off-by: Andreas Färber <afaerber@suse.de>
Both fields are used in VMState, thus need to be moved together.
Explicitly zero them on reset since they were located before
breakpoints.
Pass PowerPCCPU to kvmppc_handle_halt().
Signed-off-by: Andreas Färber <afaerber@suse.de>
The value is not actually live across basic blocks, so there's no
need for the local property. This eliminates storing the temporary
to its home location at the branch.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The gen_icount_start/end functions are now somewhat misnamed since they
are useful for generic "start/end of TB" code, used for more than just
icount. Rename them to gen_tb_start/end.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Fix some of the nasty TCG race conditions and crashes by implementing
cpu_exit() as setting a flag which is checked at the start of each TB.
This avoids crashes if a thread or signal handler calls cpu_exit()
while the execution thread is itself modifying the TB graph (which
may happen in system emulation mode as well as in linux-user mode
with a multithreaded guest binary).
This fixes the crashes seen in LP:668799; however there is another
class of crashes described in LP:1098729 which stems from the fact
that in linux-user with a multithreaded guest all threads will
use and modify the same global TCG data structures (including the
generated code buffer) without any kind of locking. This means that
multithreaded guest binaries are still in the "unsupported"
category.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Document tcg_qemu_tb_exec(). In particular, its return value is a
combination of a pointer to the next translation block and some
extra information in the low two bits. Provide some #defines for
the values passed in these bits to improve code clarity.
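Decoding on the caller side then looks roughly like the following sketch
(the TB_EXIT_* values mirror the new #defines; take the exact names and
values as belonging to this version of the code):

    #include <stdint.h>

    /* Low two bits of the tcg_qemu_tb_exec() return value. */
    #define TB_EXIT_MASK      3
    #define TB_EXIT_IDX0      0   /* left through the first exit of the last TB  */
    #define TB_EXIT_IDX1      1   /* left through the second exit of the last TB */
    #define TB_EXIT_REQUESTED 3   /* an asynchronous exit was requested          */

    static void decode_tb_exec_return(uintptr_t ret)
    {
        void *last_tb   = (void *)(ret & ~(uintptr_t)TB_EXIT_MASK); /* TB pointer */
        unsigned reason = ret & TB_EXIT_MASK;                       /* extra info */
        (void)last_tb; (void)reason;
    }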
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The setjmp() function doesn't specify whether signal masks are saved and
restored; on Linux they are not, but on BSD (including MacOSX) they are.
We want to have consistent behaviour across platforms, so we should
always use "don't save/restore signal mask" (this is also generally
going to be faster). This also works around a bug in MacOSX where the
signal-restoration on longjmp() affects the signal mask for a completely
different thread, not just the mask for the thread which did the longjmp.
The most visible effect of this was that ctrl-C was ignored on MacOSX
because the CPU thread did a longjmp which resulted in its signal mask
being applied to every thread, so that all threads had SIGINT and SIGTERM
blocked.
The POSIX-sanctioned portable way to do a jump without affecting signal
masks is to siglongjmp() to a sigjmp_buf which was created by calling
sigsetjmp() with a zero savemask parameter, so change all uses of
setjmp()/longjmp() accordingly. [Technically POSIX allows sigsetjmp(buf, 0)
to save the signal mask; however the following siglongjmp() must not
restore the signal mask, so the pair can be effectively considered as
"sigjmp/longjmp which don't touch the mask".]
For Windows we provide a trivial sigsetjmp/siglongjmp in terms of
setjmp/longjmp -- this is OK because no user will ever pass a non-zero
savemask.
The setjmp() uses in tests/tcg/test-i386.c and tests/tcg/linux-test.c
are left untouched because these are self-contained singlethreaded
test programs intended to be run under QEMU's Linux emulation, so they
have neither the portability nor the multithreading issues to deal with.
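Condensed, the pattern plus the Windows shim looks like this (a sketch, not
the exact QEMU hunks):

    #include <setjmp.h>
    #include <stdio.h>

    #ifdef _WIN32
    /* No sigsetjmp/siglongjmp on Windows; plain setjmp/longjmp is fine because
     * callers always pass a zero savemask. */
    #define sigjmp_buf               jmp_buf
    #define sigsetjmp(env, savemask) setjmp(env)
    #define siglongjmp(env, val)     longjmp(env, val)
    #endif

    static sigjmp_buf exec_jmp_buf;

    static void cpu_loop_sketch(void)
    {
        if (sigsetjmp(exec_jmp_buf, 0) == 0) {   /* 0: don't save the signal mask */
            /* ... run guest code; an exception path may jump back here ... */
        } else {
            printf("returned via siglongjmp, signal mask untouched\n");
        }
    }

    static void raise_exception_sketch(void)
    {
        siglongjmp(exec_jmp_buf, 1);             /* must not restore the mask */
    }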
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Tested-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Explicitly NULL it on CPU reset since it was located before breakpoints.
Change vapic_report_tpr_access() argument to CPUState. This also
resolves the use of void* for cpu.h independence.
Change vAPIC patch_instruction() argument to X86CPU.
Signed-off-by: Andreas Färber <afaerber@suse.de>
It's worth cleaning up the translation block variables and moving them
into one context, as suggested by Swirl.
Also, if we use this context directly inside tcg_ctx, it
speeds up code generation a bit.
Signed-off-by: Evgeny Voevodin <evgenyvoevodin@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
s390x-linux-user now also uses GETPC. Instead of adding it to the list of
targets which use GETPC, the macro is now defined unconditionally.
This avoids future build regressions like this one:
CC s390x-linux-user/target-s390x/int_helper.o
cc1: warnings being treated as errors
qemu/target-s390x/int_helper.c: In function ‘helper_divs32’:
qemu/target-s390x/int_helper.c:47: error: implicit declaration of function ‘GETPC’
qemu/target-s390x/int_helper.c:47: error: nested extern declaration of ‘GETPC’
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Commit c64ca8140e (cpu: Move
queued_work_{first,last} to CPUState) moved the qemu_work_item fields
away. Clean up the now unused prototype.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Move the declaration to qemu/cpu.h and add documentation.
The implementation still depends on CPUArchState for CPU iteration.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Note that target-alpha accesses this field from TCG, now using a
negative offset. Therefore the field is placed last in CPUState.
Pass PowerPCCPU to [kvm]ppc_fixup_cpu() to facilitate this change.
Move common parts of mips cpu_state_reset() to mips_cpu_reset().
Acked-by: Richard Henderson <rth@twiddle.net> (for alpha)
[AF: Rebased onto ppc CPU subclasses and openpic changes]
Signed-off-by: Andreas Färber <afaerber@suse.de>
To facilitate the field movements, pass MIPSCPU to malta_mips_config();
avoid that for mips_cpu_map_tc() since its callers, inside TCG helpers,
only access MIPS Thread Contexts.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Add the new mutex that protects shared state between ram_save_live
and the iothread. If the iothread mutex has to be taken together
with the ramlist mutex, the iothread mutex shall always be taken _outside_.
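As a sketch, any path that needs both locks therefore looks like this
(prototypes declared inline here; in the tree they come from the usual
QEMU headers):

    /* Prototypes normally come from QEMU headers; repeated here for the sketch. */
    void qemu_mutex_lock_iothread(void);
    void qemu_mutex_unlock_iothread(void);
    void qemu_mutex_lock_ramlist(void);
    void qemu_mutex_unlock_ramlist(void);

    /* Lock ordering: the iothread (BQL) mutex, when needed, is always taken
     * outside the ramlist mutex. */
    static void touch_ram_list_with_bql(void)
    {
        qemu_mutex_lock_iothread();
        qemu_mutex_lock_ramlist();
        /* ... walk or modify the RAM block list ... */
        qemu_mutex_unlock_ramlist();
        qemu_mutex_unlock_iothread();
    }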
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Umesh Deshpande <udeshpan@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
This will be used to detect if last_block might have become invalid
across different calls to ram_save_live.
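Roughly, the consumer caches the version and drops its cached block when the
version moves (a sketch; everything except the idea of a version counter is
illustrative):

    #include <stdint.h>
    #include <stdbool.h>

    /* Sketch: detect that the RAM block list changed between two calls,
     * so a cached last_block pointer must not be trusted anymore. */
    static uint32_t last_seen_version;

    static bool ram_list_changed(uint32_t current_version)
    {
        if (current_version != last_seen_version) {
            last_seen_version = current_version;
            return true;               /* caller should reset last_block */
        }
        return false;
    }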
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Umesh Deshpande <udeshpan@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Most of the time, only 2 items will be active (from/to for a string operation,
or code/data). But TCG guests likely won't have gigabytes of memory, so
this actually goes down to 1 item.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>