It is possible that a ramblock doesn't have memory that QEMU can
access; this is the case with the Xen hypervisor.
To avoid triggering an assert, only call ramblock_ptr() when it is
actually needed in qemu_ram_writeback(). This should fix migration of
Xen guests, which was broken by bd108a44bc ("migration: ram: Switch to
ram block writeback").
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20191219154323.325255-1-anthony.perard@citrix.com>
This series removes the ad hoc RAM allocation API (memory_region_allocate_system_memory)
and consolidates allocation around hostmem backends. This makes it possible to:
* resolve conflicts between the global -mem-prealloc and hostmem's "policy" option,
fixing premature allocation before the binding policy is applied
* simplify complicated memory allocation routines which had to deal with 2 ways
to allocate RAM.
* reuse a hostmem backend of choice for main RAM without adding extra CLI
options to duplicate hostmem features. A recent case was -mem-shared, to
enable vhost-user on targets that don't support hostmem backends [1] (e.g. s390)
* move RAM allocation from individual boards into generic machine code and
provide them with a prepared MemoryRegion.
* clean up deprecated NUMA features which were tied to the old API (see patches)
- "numa: remove deprecated -mem-path fallback to anonymous RAM"
- (POSTPONED, waiting on libvirt side) "forbid '-numa node,mem' for 5.0 and newer machine types"
- (POSTPONED) "numa: remove deprecated implicit RAM distribution between nodes"
Introduce a new machine.memory-backend property and wrapper code that aliases
the global -mem-path and -mem-prealloc options into properties of an
automatically created hostmem backend (provided a memory-backend was not
explicitly set by the user).
A bulk of trivial patches then follows to incrementally convert individual
boards to using the machine.memory-backend provided MemoryRegion.
Board conversion typically involves:
* providing MachineClass::default_ram_size and MachineClass::default_ram_id
so generic code can create the default backend if the user didn't explicitly
provide memory-backend or -m options
* dropping the memory_region_allocate_system_memory() call
* using the convenience MachineState::ram MemoryRegion, which points to the
MemoryRegion allocated by the ram-memdev
On top of that, for some boards:
* missing ram_size checks are added (typically for boards with a fixed RAM size)
* ram_size fixups are replaced by checks and hard errors, forcing the user to
provide correct "-m" values instead of silently ignoring them and continuing to run
After all boards are converted, the old API is removed and the memory
allocation routines are cleaned up; a minimal sketch of a typical board
conversion follows below.
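A hedged sketch of such a conversion for a hypothetical board "foo", using
the MachineClass/MachineState fields named above (the board name, sizes and
surrounding code are invented for illustration, not taken from any real
board):

    static void foo_machine_class_init(ObjectClass *oc, void *data)
    {
        MachineClass *mc = MACHINE_CLASS(oc);

        /* lets generic machine code create the default backend when the
         * user passes neither -m nor an explicit memory-backend */
        mc->default_ram_size = 128 * MiB;
        mc->default_ram_id = "foo.ram";
    }

    static void foo_machine_init(MachineState *machine)
    {
        /* old: memory_region_allocate_system_memory(ram, NULL, "foo.ram",
         *                                           machine->ram_size); */
        /* new: RAM was already allocated from machine.memory-backend,
         * the board only maps the prepared MemoryRegion */
        memory_region_add_subregion(get_system_memory(), 0, machine->ram);
    }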
RAM blocks were marked MADV_DONTFORK, breaking fuzzing tests which
execute each test input in a forked process.
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Message-id: 20200220041118.23264-14-alxndr@bu.edu
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The address_space_rw() function allows either reads or writes
depending on the is_write argument passed to it; this is useful
when the direction of the access is determined programmatically
(as for instance when handling the KVM_EXIT_MMIO exit reason).
Under the hood it just calls either address_space_write() or
address_space_read_full().
We also use it a lot with a constant is_write argument, though,
which has two issues:
* to a reader, "address_space_rw(..., 1)" is less immediately
recognizable as a write than "address_space_write(...)" is
* calling address_space_rw() bypasses the optimization
in address_space_read() that fast-paths reads of a
fixed length
This commit was produced with the included Coccinelle script
scripts/coccinelle/exec_rw_const.cocci.
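As an illustration of the first point, a hypothetical call site before and
after the conversion (the as/addr/buf/len variables are assumed context):

    /* before: the direction hides in a magic constant */
    address_space_rw(as, addr, MEMTXATTRS_UNSPECIFIED, buf, len, 1);

    /* after: clearly a write, and constant-direction reads additionally
     * get the fast path in address_space_read() */
    address_space_write(as, addr, MEMTXATTRS_UNSPECIFIED, buf, len);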
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20200218112457.22712-1-peter.maydell@linaro.org>
[PMD: Update macvm_set_cr0() reported by Laurent Vivier]
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Since its introduction in commit ac1970fbe8, address_space_rw()
takes a boolean 'is_write' argument. Fix the codebase by using
an explicit boolean type.
This commit was produced with the included Coccinelle script
scripts/coccinelle/exec_rw_const.
Inspired-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
As we are only dealing with a blob buffer, use a void pointer
argument. This will let us simplify other APIs.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
As we are only dealing with a blob buffer, use a void pointer
argument. This will let us simplify other APIs.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Only flatview_read_continue() and flatview_write_continue() use a byte
pointer to increment an offset; for their users, we are only dealing
with a blob buffer. Use a void pointer argument. This will let us
simplify the address_space API in the next commit.
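The practical effect for callers of the converted APIs, sketched on a
hypothetical read into a local variable (names are illustrative):

    uint64_t val;

    /* before: blob buffers had to be cast to uint8_t * */
    address_space_read(as, addr, MEMTXATTRS_UNSPECIFIED,
                       (uint8_t *)&val, sizeof(val));

    /* after: any buffer can be passed directly as void * */
    address_space_read(as, addr, MEMTXATTRS_UNSPECIFIED,
                       &val, sizeof(val));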
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
When the -mem-prealloc option is used with one or more memory-backend
objects, the created backends may not obey the configured bind policy,
or creation may fail when the kernel attempts to move pages according
to the bind policy.
The reason is in file_ram_alloc(), which pre-allocates any
descriptor-based RAM if the global mem_prealloc != 0, and that
happens well before the bind policy is applied to the memory range.
One way to fix it would be to extend the memory_region_foo() API
and add more invariants that could be broken later due to implicit
dependencies that are hard to track.
Another approach is to drop the ad hoc main RAM allocation and
consolidate it around memory-backend. That allows a single place
that allocates guest RAM (main and memdev) in the same way, and
then the global mem_prealloc can be replaced by a backend property
that affects the created memory-backend objects, but in the correct
order this time.
With main RAM now converted to hostmem backends, there is no
point in keeping the global mem_prealloc around, so alias
-mem-prealloc to the "memory-backend.prealloc=on"
machine compat[*] property and make mem_prealloc a local
variable that only steers registration of the compat property.
*) currently the user-accessible -global works only with DEVICE-based
objects, and extra work is needed to make it work
with hostmem backends. But that is a convenience option
and out of scope of this already huge refactoring.
Hence machine compat properties were used.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20200219160953.13771-78-imammedo@redhat.com>
The function reports an error that mentions the global mem_path,
which was valid only if the legacy -mem-path was used, and
only in the case of main RAM.
It doesn't work with hostmem backends
(for example:
"
qemu: -object memory-backend-file,id=ram0,size=128M,mem-path=foo:
backing store (null) size 0x200000 does not match 'size' option 0x8000000
")
and couldn't possibly work in the general FD case the function
is supposed to handle.
Taking into account that main RAM has been converted into a
memory-backend-foo object, there is no point in printing the
file name (from an inappropriate place), as the failing path is
part of the backend's error message.
Hence drop the bogus mem_path usage from qemu_ram_alloc_from_fd();
it's the job of its caller to add the file name to the error message if applicable.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200219160953.13771-75-imammedo@redhat.com>
Since all RAM is now backed by hostmem backends, drop the
global -mem-path invariant and simplify the code.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200219160953.13771-74-imammedo@redhat.com>
Add support for tcg helpers with 7 arguments.
Add support for gvec helpers with 5 arguments.
-----BEGIN PGP SIGNATURE-----
iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAl5Elo4dHHJpY2hhcmQu
aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV+QHQgAiUHK3gZZZc6t4M7C
YRGYNHSN4A3LV1NrsYXWkcZgjiWJPclmf7oSQmsLGRKlolQvbRiEPK6VZ4LppwUo
XDK66T1NrfFexu0sS93PQZd22wOHZm/6m+mYEluHEmfVpFit1+yQh6kZLrg1zKF4
9nap7rcp1xzSaqnCccyGBndIUCozJ3mMuaGStkKD3Q69JSq1u4LzVR5Wuyepcz0T
zWPiR4p0Qv2V+zB4ATzLXVDmTEFOPkSgENjstVtFKgMiOerD6DHkpL8WMzIOLfwL
TIZB/N3zQzOXjAK3w1LECTA69gYB9/c3lxlK7iyAyjGpDMjAdqV4A+XhLwe8/ZPK
NmnLZg==
=HzfK
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20200212' into staging
Fix breakpoint invalidation.
Add support for tcg helpers with 7 arguments.
Add support for gvec helpers with 5 arguments.
# gpg: Signature made Thu 13 Feb 2020 00:21:34 GMT
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* remotes/rth/tags/pull-tcg-20200212:
tcg: Add tcg_gen_gvec_5_ptr
tcg: Add support for a helper with 7 arguments
exec: flush CPU TB cache in breakpoint_invalidate
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
use_icount is also defined by stubs/cpu-get-icount.c, so we do not
need a redundant definition in exec.c.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: <20200204161036.20889-1-pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
When a breakpoint is inserted at a location for which there is currently no
virtual-to-physical translation, no action is taken on the CPU TB cache. If a
TB for that virtual address already exists but is not currently visible, the
breakpoint won't be hit the next time an instruction at that address is
executed.
Flush the entire CPU TB cache in breakpoint_invalidate() to force
re-translation of all TBs for the breakpoint address.
This change fixes the following scenario:
- a linux-user application is running
- a breakpoint is inserted from QEMU gdbstub for a user address that is
not currently present in the target CPU TLB
- an instruction at that address is executed, but the external debugger
doesn't get control.
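Based on that description, the fixed helper presumably reduces to a
whole-cache flush; a sketch (the actual body in the tree may differ):

    static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
    {
        /* Flushing the whole TB cache forces re-translation, so a TB that
         * was generated before the breakpoint existed cannot be reused. */
        tb_flush(cpu);
    }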
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Message-Id: <20191127220602.10827-2-jcmvbkbc@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
We currently search both the root and the tcg/ directories for tcg
files:
$ git grep '#include "tcg/' | wc -l
28
$ git grep '#include "tcg[^/]' | wc -l
94
To simplify the preprocessor search path, unify the includes by making
the tcg/ directory explicit.
Patch created mechanically by running:
$ for x in \
tcg.h tcg-mo.h tcg-op.h tcg-opc.h \
tcg-op-gvec.h tcg-gvec-desc.h; do \
sed -i "s,#include \"$x\",#include \"tcg/$x\"," \
$(git grep -l "#include \"$x\""); \
done
Acked-by: David Gibson <david@gibson.dropbear.id.au> (ppc parts)
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200101112303.20724-2-philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
- test tci with Travis
- enable multiarch testing in Travis
- default to out-of-tree builds
- make changing logfile safe via RCU
- remove redundant tests
- remove gtester test from docker
- convert DEBUG_MMAP to tracepoints
- remove hand rolled glob function
- trigger tcg re-configure when needed
-----BEGIN PGP SIGNATURE-----
iQEzBAABCgAdFiEEZoWumedRZ7yvyN81+9DbCVqeKkQFAl37M6gACgkQ+9DbCVqe
KkQ0Rwf/d0lGDPQN1Uf0zqZRQQmDCqWVuqHhZJ5xWDjbyVT2eUwR07TvNZeUKEWX
iO+u6S7Tv91oLjZN5WjhaiuSjtJaEzCdcpkIJAWXLP/lzse37HEwvLBsdg71y+46
LNvBrJRPpQotdb7fjr8RgCwc1qg2Bz15ekSn7XIA175zTMmUsshLJBVhLbGNqrVm
F2UmjB9oFJ0+nzrcEnpFmWw7xvVrX1dImZXv5C2pvuHF7efSjGwiFviTRZgDjOGs
V7HiWRV1QcgTigncncxTMbhMTKTVKK+e7O+y0DZWt/NSrT/yLDy5rcwySpmvu6C+
cRmh/0tMo1KAhiz8Xy8LookhVj6hdA==
=OAkV
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/stsquad/tags/pull-tesing-and-misc-191219-1' into staging
Various testing and logging updates
- test tci with Travis
- enable multiarch testing in Travis
- default to out-of-tree builds
- make changing logfile safe via RCU
- remove redundant tests
- remove gtester test from docker
- convert DEBUG_MMAP to tracepoints
- remove hand rolled glob function
- trigger tcg re-configure when needed
# gpg: Signature made Thu 19 Dec 2019 08:24:08 GMT
# gpg: using RSA key 6685AE99E75167BCAFC8DF35FBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>" [full]
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
* remotes/stsquad/tags/pull-tesing-and-misc-191219-1: (25 commits)
tests/tcg: ensure we re-configure if configure.sh is updated
trace: replace hand-crafted pattern_glob with g_pattern_match_simple
linux-user: convert target_munmap debug to a tracepoint
linux-user: log page table changes under -d page
linux-user: add target_mmap_complete tracepoint
linux-user: convert target_mmap debug to tracepoint
linux-user: convert target_mprotect debug to tracepoint
travis.yml: Remove the redundant clang-with-MAIN_SOFTMMU_TARGETS entry
docker: gtester is no longer used
Added tests for close and change of logfile.
Add use of RCU for qemu_logfile.
qemu_log_lock/unlock now preserves the qemu_logfile handle.
Add a mutex to guarantee single writer to qemu_logfile handle.
Cleaned up flow of code in qemu_set_log(), to simplify and clarify.
Fix double free issue in qemu_set_log_filename().
ci: build out-of-tree
travis.yml: Enable builds on arm64, ppc64le and s390x
tests/test-util-filemonitor: Skip test on non-x86 Travis containers
tests/hd-geo-test: Skip test when images can not be created
iotests: Skip test 079 if it is not possible to create large files
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
qemu_log_lock() now returns a handle and qemu_log_unlock() receives a
handle to unlock. This allows the handle to change during logging
and ensures that lock() and unlock() operate on the same file.
Also, the qemu_log_lock()/unlock() calls (and the log("\n")) in
target/tilegx/translate.c are removed, since the translator can longjmp
out of the loop if it attempts to translate an instruction in an
inaccessible page.
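A minimal usage sketch of the new pairing, assuming the handle is the FILE
pointer of the locked logfile (the logged text is illustrative):

    FILE *logfile = qemu_log_lock();

    qemu_log("some multi-line diagnostic\n");
    qemu_log("that must not be interleaved\n");

    /* unlock the same handle, even if qemu_logfile changed meanwhile */
    qemu_log_unlock(logfile);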
Signed-off-by: Robert Foley <robert.foley@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20191118211528.3221-5-robert.foley@linaro.org>
When os_mem_prealloc() fails, file_ram_alloc() calls qemu_ram_munmap()
and returns null. Except it doesn't when its @errp argument is null,
because it checks for failure with (errp && *errp). Introduced in
commit 056b68af77 "fix qemu exit on memory hotplug when allocation
fails at prealloc time".
No caller actually passes null.
Fix anyway: splice in a local Error *err, and error_propagate().
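The usual shape of such a fix in QEMU's error API, sketched for this spot
(the cleanup call and its arguments are assumptions based on the
description above):

    Error *err = NULL;

    os_mem_prealloc(fd, area, memory, smp_cpus, &err);
    if (err) {
        /* reliable even when the caller passed errp == NULL */
        error_propagate(errp, err);
        qemu_ram_munmap(fd, area, memory);
        return NULL;
    }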
Cc: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20191204093625.14836-6-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Since the start address is already checked, checking the length is
enough to make sure the range is aligned.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190712032704.7826-1-richardw.yang@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add an option to trigger memory writeback to sync a given memory region
with the corresponding backing store, in case one is available.
This extends the support for persistent memory, allowing on-demand syncing.
Signed-off-by: Beata Michalska <beata.michalska@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191121000843.24844-3-beata.michalska@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
- use --enable-plugins @ configure
- low impact introspection (-plugin empty.so to measure overhead)
- plugins cannot alter guest state
- example plugins included in source tree (tests/plugins)
- -d plugin to enable plugin output in logs
- check-tcg runs extra tests when plugins enabled
- documentation in docs/devel/plugins.rst
-----BEGIN PGP SIGNATURE-----
iQEzBAABCgAdFiEEZoWumedRZ7yvyN81+9DbCVqeKkQFAl23BZMACgkQ+9DbCVqe
KkRPegf/QHygZ4ER2jOaWEookxiOEcik+dzQKVGNqLNXeMLvo5fGjGVpFoFxSgfv
ZvCAL4xbW44zsYlVfh59tfn4Tu9qK7s7/qM3WXpHsmuvEuhoWef0Lt2jSe+D46Rs
KeG/aX+rHLUR8rr9eCgE+1/MQmxPUj3VUonkUpNkk2ebBbSNoLSOudB4DD9Vcyl7
Pya1kPvA6W9bwI20ZSWihE7flg13o62Pp+LgAFLrsfxXOxOMkPrU8Pp+B0Dvr+hL
5Oh0clZLhiRi75x+KVGZ90TVsoftdjYoOWGMOudS/+NNmqKT1NTLm0K1WJYyRMQ1
V0ne4/OcGNq7x8gcOx/xs09ADu5/VA==
=UXR/
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/stsquad/tags/pull-tcg-plugins-281019-4' into staging
TCG Plugins initial implementation
- use --enable-plugins @ configure
- low impact introspection (-plugin empty.so to measure overhead)
- plugins cannot alter guest state
- example plugins included in source tree (tests/plugins)
- -d plugin to enable plugin output in logs
- check-tcg runs extra tests when plugins enabled
- documentation in docs/devel/plugins.rst
# gpg: Signature made Mon 28 Oct 2019 15:13:23 GMT
# gpg: using RSA key 6685AE99E75167BCAFC8DF35FBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>" [full]
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
* remotes/stsquad/tags/pull-tcg-plugins-281019-4: (57 commits)
travis.yml: enable linux-gcc-debug-tcg cache
MAINTAINERS: add me for the TCG plugins code
scripts/checkpatch.pl: don't complain about (foo, /* empty */)
.travis.yml: add --enable-plugins tests
include/exec: wrap cpu_ldst.h in CONFIG_TCG
accel/stubs: reduce headers from tcg-stub
tests/plugin: add hotpages to analyse memory access patterns
tests/plugin: add instruction execution breakdown
tests/plugin: add a hotblocks plugin
tests/tcg: enable plugin testing
tests/tcg: drop test-i386-fprem from TESTS when not SLOW
tests/tcg: move "virtual" tests to EXTRA_TESTS
tests/tcg: set QEMU_OPTS for all cris runs
tests/tcg/Makefile.target: fix path to config-host.mak
tests/plugin: add sample plugins
linux-user: support -plugin option
vl: support -plugin option
plugin: add qemu_plugin_outs helper
plugin: add qemu_plugin_insn_disas helper
plugin: expand the plugin_init function to include an info block
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
The next patch will play a trick with "const" that will
confuse the compiler about the uses of target_page_bits
within exec.c. Moving everything to a new file prevents
this confusion.
No functional change so far.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
There are three page sizes in QEMU:
real host page size
host page size
target page size
All of them have a dedicated variable to represent them. For the last two, we
use the same form throughout the whole QEMU project, while for the first one
we use two forms: qemu_real_host_page_size and getpagesize().
qemu_real_host_page_size is defined to be a replacement for
getpagesize(), so let it serve that role everywhere.
[Note] Not fully tested for some archs or devices.
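The mechanical shape of the conversion on a hypothetical call site (at the
time of this series qemu_real_host_page_size was a global variable; it only
later became a function in newer QEMU):

    size_t host_page;

    host_page = getpagesize();              /* before */
    host_page = qemu_real_host_page_size;   /* after */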
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20191013021145.16011-3-richardw.yang@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20191007143642.301445-6-dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Currently, when we attempt to register a notifier whose flags are not
supported by the IOMMU MR (especially the MAP one),
we generally exit abruptly in the IOMMU code. The failure could be
handled more nicely in the caller, especially in the VFIO code.
So let's allow memory_region_register_iommu_notifier() to fail, as
well as the notify_flag_changed() callback.
All sites implementing the callback are updated. This patch does
not yet remove the exit(1) in the amd_iommu code.
In SMMUv3, the warning message is turned into an error message saying
that the assigned device would not work properly.
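A hedged sketch of what a caller may now do, assuming the registration
function gains an Error ** parameter as the message implies (the notifier
variable and surrounding error handling are illustrative):

    Error *err = NULL;

    memory_region_register_iommu_notifier(mr, &notifier, &err);
    if (err) {
        /* propagate instead of exit()ing deep inside the memory code */
        error_propagate(errp, err);
        return;
    }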
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Commit 9458a9a1df added synchronization
of vCPU and migration operations by calling the run_on_cpu operation.
However, in replay mode this synchronization is unneeded, because the
I/O and vCPU threads are already synchronized.
This patch disables such synchronization for record/replay mode.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@gmail.com>
Fixes the previous TLB_WATCHPOINT patches because we are currently
failing to set cpu->mem_io_pc with the call to cpu_check_watchpoint.
Pass down the retaddr directly because it's readily available.
Fixes: 50b107c5d6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
All callers pass false to this argument. Remove it and pass the
constant on to tb_invalidate_phys_page_range__locked.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since 9458a9a1df, all readers of the dirty bitmaps wait
for the rcu lock, which means that they wait until the end
of any executing TranslationBlock.
As a consequence, there is no need for the actual access
to happen in between the _prepare and _complete. Therefore,
we can improve things by merging the two functions into
notdirty_write and dropping the NotDirtyInfo structure.
In addition, the only users of notdirty_write are in cputlb.c,
so move the merged function there. Pass in the CPUIOTLBEntry
from which the ram_addr_t may be computed.
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
There is only one caller, tlb_set_page_with_attrs. We cannot
inline the entire function because the AddressSpaceDispatch
structure is private to exec.c, and cannot easily be moved to
include/exec/memory-internal.h.
Compute is_ram and is_romd once within tlb_set_page_with_attrs.
Fold the number of tests against these predicates. Compute
cpu_physical_memory_is_clean outside of the tlb lock region.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Pages that we want to track for NOTDIRTY are RAM. We do not
really need to go through the I/O path to handle them.
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
It does not require going through the whole I/O path
in order to discard a write.
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The memory_region_tb_read tracepoint is unreachable, since notdirty
is supposed to apply only to writes. The memory_region_tb_write
tracepoint is mis-named, because notdirty is not only used for TB
invalidation. It is also used for e.g. VGA RAM updates and migration.
Replace memory_region_tb_write with memory_notdirty_write_access,
and place it in memory_notdirty_write_prepare where it can catch
all of the instances. Add memory_notdirty_set_dirty to log when
we no longer intercept writes to a page.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Neither stat(2) nor lseek(2) report the size of Linux devdax pmem
character device nodes. Commit 314aec4a6e
("hostmem-file: reject invalid pmem file sizes") added code to
hostmem-file.c to fetch the size from sysfs and compare against the
user-provided size=NUM parameter:
if (backend->size > size) {
error_setg(errp, "size property %" PRIu64 " is larger than "
"pmem file \"%s\" size %" PRIu64, backend->size,
fb->mem_path, size);
return;
}
It turns out that exec.c:qemu_ram_alloc_from_fd() already has an
equivalent size check but it skips devdax pmem character devices because
lseek(2) returns 0:
if (file_size > 0 && file_size < size) {
error_setg(errp, "backing store %s size 0x%" PRIx64
" does not match 'size' option 0x" RAM_ADDR_FMT,
mem_path, file_size, size);
return NULL;
}
This patch moves the devdax pmem file size code into get_file_size() so
that we check the memory size in a single place:
qemu_ram_alloc_from_fd(). This simplifies the code and makes it more
general.
This also fixes the problem that hostmem-file only checks the devdax
pmem file size when the pmem=on parameter is given. An unchecked
size=NUM parameter can lead to SIGBUS in QEMU so we must always fetch
the file size for Linux devdax pmem character device nodes.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20190830093056.12572-1-stefanha@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The maximum level is defined as P_L2_LEVELS and skip is defined with 6
bits, which means that if P_L2_LEVELS < (1 << 6), skip never exceeds the
boundary.
Since this check is between two constants, the compiler can optimize
the code based on the configuration.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190321082555.21118-7-richardw.yang@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
skip is defined with 6 bits. So the maximum value should be (1 << 6).
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190321082555.21118-6-richardw.yang@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In subpage_init(), we set subpage->sub_section to
PHYS_SECTION_UNASSIGNED via subpage_register(). Since
PHYS_SECTION_UNASSIGNED is defined to be 0, and we allocate the subpage
with g_malloc0(), subpage->sub_section is already initialized to 0.
This patch removes the redundant setup for a new subpage and also fixes
the code style.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190321082555.21118-5-richardw.yang@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The purpose of the two MAX() calls here is to get the maximum of these three
variables:
A: map->nodes_nb + nodes
B: map->nodes_nb_alloc
C: alloc_hint
We can write it as MAX(A, B, C). Since the if condition says A > B,
this means MAX(A, B, C) = MAX(A, C).
This patch just simplifies the calculation a bit; see the sketch below.
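Illustrated abstractly in the message's own A/B/C terms (not the literal
QEMU code):

    /* before: two MAX() calls to compute max(A, B, C) */
    if (A > B) {
        B = MAX(A, B);
        B = MAX(B, C);
    }

    /* after: the guard already guarantees A > B, so max(A, B, C) == max(A, C) */
    if (A > B) {
        B = MAX(A, C);
    }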
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190321082555.21118-4-richardw.yang@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The argument *nb* of phys_page_set() and phys_page_set_level()
stands for the number of pages to set, not a hardware address.
It is more proper to use uint64_t instead of hwaddr for its
type.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190321082555.21118-2-richardw.yang@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reorganize watchpoints out of i/o path.
Return host address from probe_write / probe_access.
-----BEGIN PGP SIGNATURE-----
iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAl1uiyYdHHJpY2hhcmQu
aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV8AuwgAnYLQQbL8kjSqzp7q
gRlj0M2SX41ZW3fMkI794RwsljD9Z0QS7YGnpzHolig9XUYrGnip7STrMvlCr/1L
CIMWNHlgitgBMszLqg42/TB+6RxXn+DMX/ShUzTagC6xQhinCIpdEjoLaTKSgeP+
foIyJ2uoJLKOBP8cPTQp8evongtoQIljpsZZ0K8a4sreO1d6ytH+olkuoGiROft+
VoJkA+kNHd9cE+LPCva8UFGu1QE6uCySvhepzOpnvOtK+SXKUm2yLOFGu7RWP1pT
RkE0oRyRnImtg+cViHfUUFogIffFROdL5tuYMQVuqbINeROPUgJPav+R1Nz1P60a
xM2HEw==
=bLLU
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20190903' into staging
Allow page table bit to swap endianness.
Reorganize watchpoints out of i/o path.
Return host address from probe_write / probe_access.
# gpg: Signature made Tue 03 Sep 2019 16:47:50 BST
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* remotes/rth/tags/pull-tcg-20190903: (36 commits)
tcg: Factor out probe_write() logic into probe_access()
tcg: Make probe_write() return a pointer to the host page
s390x/tcg: Pass a size to probe_write() in do_csst()
hppa/tcg: Call probe_write() also for CONFIG_USER_ONLY
mips/tcg: Call probe_write() for CONFIG_USER_ONLY as well
tcg: Enforce single page access in probe_write()
tcg: Factor out CONFIG_USER_ONLY probe_write() from s390x code
s390x/tcg: Fix length calculation in probe_write_access()
s390x/tcg: Use guest_addr_valid() instead of h2g_valid() in probe_write_access()
tcg: Check for watchpoints in probe_write()
cputlb: Handle watchpoints via TLB_WATCHPOINT
cputlb: Remove double-alignment in store_helper
cputlb: Fix size operand for tlb_fill on unaligned store
exec: Factor out cpu_watchpoint_address_matches
cputlb: Fold TLB_RECHECK into TLB_INVALID_MASK
exec: Factor out core logic of check_watchpoint()
exec: Move user-only watchpoint stubs inline
target/sparc: sun4u Invert Endian TTE bit
target/sparc: Add TLB entry with attributes
cputlb: Byte swap memory transaction attribute
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The raising of exceptions from check_watchpoint, buried inside
of the I/O subsystem, is fundamentally broken. We do not have
the helper return address with which we can unwind guest state.
Replace PHYS_SECTION_WATCH and io_mem_watch with TLB_WATCHPOINT.
Move the call to cpu_check_watchpoint into the cputlb helpers
where we do have the helper return address.
This allows watchpoints on RAM to bypass the full i/o access path.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We want to move the check for watchpoints from
memory_region_section_get_iotlb to tlb_set_page_with_attrs.
Isolate the loop over watchpoints to an exported function.
Rename the existing cpu_watchpoint_address_matches to
watchpoint_address_matches, since it doesn't actually
have a cpu argument.
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>