Pad VHDFooter as specified in the "Virtual Hard Disk Image Format
Specification" version 1.0[*]. Change footer buffers from
uint8_t[HEADER_SIZE] to VHDFooter. Their size remains the same.
The VHDFooter * variables pointing to a VHDFooter variable right next
to it are now silly. Eliminate them, and shorten the remaining
variables' names.
Most variables pointing to s->footer are now also silly. Eliminate
them, too.
[*] http://download.microsoft.com/download/f/f/e/ffef50a5-07dd-4cf8-aaa3-442c0673a029/Virtual%20Hard%20Disk%20Format%20Spec_10_18_06.doc
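For reference, here is a sketch of what the padded struct looks like
(field names follow the spec and common block/vpc.c conventions; the
exact committed declaration may differ):

    typedef struct vhd_footer {
        char      creator[8];       /* "conectix" */
        uint32_t  features;
        uint32_t  version;
        uint64_t  data_offset;
        uint32_t  timestamp;
        char      creator_app[4];   /* e.g. "vpc " */
        uint16_t  major;
        uint16_t  minor;
        char      creator_os[4];    /* "Wi2k" */
        uint64_t  orig_size;
        uint64_t  current_size;
        uint16_t  cyls;
        uint8_t   heads;
        uint8_t   secs_per_cyl;
        uint32_t  type;
        uint32_t  checksum;
        uint8_t   uuid[16];
        uint8_t   in_saved_state;
        uint8_t   reserved[427];    /* pad to the spec's 512-byte footer */
    } QEMU_PACKED VHDFooter;

    QEMU_BUILD_BUG_ON(sizeof(VHDFooter) != 512);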
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20201217162003.1102738-8-armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20201217162003.1102738-7-armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Pad VHDDynDiskHeader as specified in the "Virtual Hard Disk Image
Format Specification" version 1.0[*]. Change dynamic disk header
buffers from uint8_t[1024] to VHDDynDiskHeader. Their size remains
the same.
The VHDDynDiskHeader * variables pointing to a VHDDynDiskHeader
variable right next to it are now silly. Eliminate them.
[*] http://download.microsoft.com/download/f/f/e/ffef50a5-07dd-4cf8-aaa3-442c0673a029/Virtual%20Hard%20Disk%20Format%20Spec_10_18_06.doc
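For reference, a sketch of the padded dynamic disk header (field names
per the spec; the committed declaration may differ in detail):

    typedef struct vhd_dyndisk_header {
        char      magic[8];             /* "cxsparse" */
        uint64_t  data_offset;
        uint64_t  table_offset;         /* offset of the BAT */
        uint32_t  version;
        uint32_t  max_table_entries;
        uint32_t  block_size;
        uint32_t  checksum;
        uint8_t   parent_uuid[16];
        uint32_t  parent_timestamp;
        uint32_t  reserved;
        uint8_t   parent_name[512];     /* UTF-16, big endian */
        struct {
            uint32_t  platform;
            uint32_t  data_space;
            uint32_t  data_length;
            uint32_t  reserved;
            uint64_t  data_offset;
        } QEMU_PACKED parent_locator[8];
        uint8_t   reserved2[256];       /* pad to the spec's 1024-byte header */
    } QEMU_PACKED VHDDynDiskHeader;

    QEMU_BUILD_BUG_ON(sizeof(VHDDynDiskHeader) != 1024);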
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20201217162003.1102738-6-armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Some of the next commits will checksum structs. Change vpc_checksum()
to take void * instead of uint8_t *, to save us pointless casts to
uint8_t *.
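Roughly what the changed helper looks like (a sketch; the committed
version may differ in detail):

    static uint32_t vpc_checksum(void *p, size_t size)
    {
        uint8_t *buf = p;
        uint32_t res = 0;
        size_t i;

        for (i = 0; i < size; i++) {
            res += buf[i];
        }

        return ~res;
    }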
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20201217162003.1102738-5-armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
create_dynamic_disk() takes a buffer holding the footer as first
argument. It writes out the footer (512 bytes), then reuses the
buffer to initialize and write out the dynamic header (1024 bytes).
Works, because the caller passes a buffer that is large enough for
both purposes. I hate that.
Use a separate buffer for the dynamic header, and adjust the caller's
buffer.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20201217162003.1102738-4-armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
create_dynamic_disk() takes a buffer holding the footer as first
argument. It writes out the footer (512 bytes), then reuses the
buffer to initialize and write out the dynamic header (1024 bytes),
then reuses it again to initialize and write out BAT sectors (512 bytes each).
Works, because the caller passes a buffer that is large enough for all
three purposes. I hate that.
Use a separate buffer for writing out BAT sectors. The next commit
will do the same for the dynamic header.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20201217162003.1102738-3-armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The dynamic header's size is 1024 bytes.
vpc_open() reads only the first 512 bytes of the dynamic header into buf[].
Works, because it doesn't actually access the second half. However, a
colleague told me that GCC 11 warns:
../block/vpc.c:358:51: error: array subscript 'struct VHDDynDiskHeader[0]' is partly outside array bounds of 'uint8_t[512]' [-Werror=array-bounds]
Clean up to read the full header.
Rename buf[] to dyndisk_header_buf[] while there.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20201217162003.1102738-2-armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
An NVMe drive cannot be shrunk.
Since commit c80d8b06cf we can use the @exact parameter (set
to false) to return success if the block device is larger than
the requested offset (even if it cannot be shrunk).
Use this parameter to implement the NVMe truncate() coroutine,
similarly to how it is done for the iscsi and file-posix drivers
(see commit 82325ae5f2 "Evaluate @exact in protocol drivers").
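A sketch of such a truncate handler, modelled on the iscsi/file-posix
ones mentioned above (not necessarily the exact committed code):

    static int coroutine_fn
    nvme_co_truncate(BlockDriverState *bs, int64_t offset, bool exact,
                     PreallocMode prealloc, BdrvRequestFlags flags,
                     Error **errp)
    {
        int64_t cur_length;

        if (prealloc != PREALLOC_MODE_OFF) {
            error_setg(errp, "Unsupported preallocation mode '%s'",
                       PreallocMode_str(prealloc));
            return -ENOTSUP;
        }

        cur_length = nvme_getlength(bs);
        if (offset != cur_length && exact) {
            error_setg(errp, "Cannot resize NVMe devices");
            return -ENOTSUP;
        } else if (offset > cur_length) {
            error_setg(errp, "Cannot grow NVMe devices");
            return -EINVAL;
        }

        /* Shrinking is impossible, but with exact=false a smaller or
         * equal offset is fine: the device is already at least that
         * large. */
        return 0;
    }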
Reported-by: Xueqiang Wei <xuwei@redhat.com>
Suggested-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20201210125202.858656-1-philmd@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
This simply calls bdrv_co_pwrite_zeroes() on all children.
bs->supported_zero_flags is also set to the flags that are supported
by all children.
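A sketch of that flag intersection (helper name and the starting flag
set are illustrative, not the committed code):

    static void quorum_refresh_zero_flags(BlockDriverState *bs)
    {
        BDRVQuorumState *s = bs->opaque;
        /* Start from every flag a zero write could support... */
        unsigned int flags = BDRV_REQ_FUA | BDRV_REQ_MAY_UNMAP |
                             BDRV_REQ_NO_FALLBACK;
        int i;

        /* ...and keep only what every child supports. */
        for (i = 0; i < s->num_children; i++) {
            flags &= s->children[i]->bs->supported_zero_flags;
        }

        bs->supported_zero_flags = flags;
    }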
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-Id: <2f09c842781fe336b4c2e40036bba577b7430190.1605286097.git.berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The quorum driver does not implement bdrv_co_block_status() and
because of that it always reports to contain data even if all its
children are known to be empty.
One consequence of this is that if we, for example, create a quorum
with a size of 10GB and mirror it to a new image, the operation will
write 10GB of actual zeroes to the destination image, wasting a lot of
time and disk space.
Since a quorum has an arbitrary number of children of potentially
different formats there is no way to report all possible allocation
status flags in a way that makes sense, so this implementation only
reports when a given region is known to contain zeroes
(BDRV_BLOCK_ZERO) or not (BDRV_BLOCK_DATA).
If all children agree that a region contains zeroes then we can return
BDRV_BLOCK_ZERO using the smallest size reported by the children
(because all agree that a region of at least that size contains
zeroes).
If at least one child disagrees we have to return BDRV_BLOCK_DATA.
In this case we use the largest of the sizes reported by the children
that didn't return BDRV_BLOCK_ZERO (because we know that there won't
be an agreement for at least that size).
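A sketch of that decision logic (simplified: error handling is minimal,
and the child query is shown through the synchronous
bdrv_block_status() wrapper; the real callback may use a different
block-status helper):

    static int coroutine_fn
    quorum_co_block_status(BlockDriverState *bs, bool want_zero,
                           int64_t offset, int64_t bytes, int64_t *pnum,
                           int64_t *map, BlockDriverState **file)
    {
        BDRVQuorumState *s = bs->opaque;
        int64_t pnum_zero = bytes;  /* smallest size reported as zero */
        int64_t pnum_data = 0;      /* largest size reported as data  */
        int i, ret;

        for (i = 0; i < s->num_children; i++) {
            int64_t pnum_child;

            ret = bdrv_block_status(s->children[i]->bs, offset, bytes,
                                    &pnum_child, NULL, NULL);
            if (ret < 0) {
                return ret;
            }
            if (ret & BDRV_BLOCK_ZERO) {
                pnum_zero = MIN(pnum_zero, pnum_child);
            } else {
                pnum_data = MAX(pnum_data, pnum_child);
            }
        }

        if (pnum_data) {
            *pnum = pnum_data;
            return BDRV_BLOCK_DATA;
        }
        *pnum = pnum_zero;
        return BDRV_BLOCK_ZERO;
    }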
Signed-off-by: Alberto Garcia <berto@igalia.com>
Tested-by: Tao Xu <tao3.xu@intel.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <db83149afcf0f793effc8878089d29af4c46ffe1.1605286097.git.berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Benchmark for the new preallocate filter.
Example usage:
./bench_prealloc.py ../../build/qemu-img \
ssd-ext4:/path/to/mount/point \
ssd-xfs:/path2 hdd-ext4:/path3 hdd-xfs:/path4
The benchmark shows the performance improvement (or degradation) when
using the new preallocate filter with a qcow2 image.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201021145859.11201-22-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Make results_to_text a tool to dump results saved in a JSON file.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201021145859.11201-21-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Performance improvements / degradations are usually discussed in
percentages. Let's make the script calculate them for us.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201021145859.11201-20-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
[mreitz: 'seconds' instead of 'secs']
Signed-off-by: Max Reitz <mreitz@redhat.com>
Move to a generic format for floats and to percentages for errors.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201021145859.11201-19-vsementsov@virtuozzo.com>
Acked-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Let's keep the view part separate: this makes it easier to improve in
the following commits.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201021145859.11201-18-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The next patch will use the UTF-8 plus-minus symbol; let's use a more
generic (and more readable) name.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201021145859.11201-17-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The standard deviation is more usual to see after +- than the current
maximum of deviations.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201021145859.11201-16-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Support benchmarks that return IOPS rather than seconds. We'll use
this for a new test later in the series.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201021145859.11201-15-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201021145859.11201-14-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201021145859.11201-13-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Add a parameter to skip a test if some additionally required formats
(for example, filter drivers) are not supported.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201021145859.11201-12-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201021145859.11201-11-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
This will be used in a later test.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201021145859.11201-10-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
It's intended to be inserted between format and protocol nodes to
preallocate additional space (expanding the protocol file) on writes
crossing EOF. It improves performance for file systems with slow
allocation.
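A very rough sketch of the core idea (the BDRVPreallocateState fields
used here, s->file_end and s->prealloc_size, are hypothetical names for
illustration only):

    static int coroutine_fn
    preallocate_co_pwritev_part(BlockDriverState *bs, uint64_t offset,
                                uint64_t bytes, QEMUIOVector *qiov,
                                size_t qiov_offset, int flags)
    {
        BDRVPreallocateState *s = bs->opaque;

        if (offset + bytes > s->file_end) {
            /* Grow the protocol file in big chunks so that most writes
             * crossing EOF do not have to allocate anything. */
            int64_t new_end = QEMU_ALIGN_UP(offset + bytes + s->prealloc_size,
                                            s->prealloc_size);
            int ret = bdrv_co_truncate(bs->file, new_end, false,
                                       PREALLOC_MODE_FALLOC, 0, NULL);
            if (ret < 0) {
                return ret;
            }
            s->file_end = new_end;
        }

        return bdrv_co_pwritev_part(bs->file, offset, bytes, qiov,
                                    qiov_offset, flags);
    }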
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201021145859.11201-9-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
[mreitz: Two comment fixes, and bumped the version from 5.2 to 6.0]
Signed-off-by: Max Reitz <mreitz@redhat.com>
Do generic processing even for drivers that define a .bdrv_check_perm
handler. It's needed for the upcoming preallocate filter: it will need
to take additional action in bdrv_check_perm, but doesn't want to
reimplement the generic logic.
The patch doesn't change existing behaviour: the only driver that
implements bdrv_check_perm is file-posix, but it never has any
children.
Also, make bdrv_set_perm() not stop processing if the driver has a
.bdrv_set_perm handler as well.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201021145859.11201-8-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Add a flag to make a serialising request not wait: if there are
conflicting requests, just return an error immediately. It will be
used in the upcoming preallocate filter.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201021145859.11201-7-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
We'll need a separate function which only "marks" a request
serialising with a specified alignment, but does not wait for
conflicting requests. So, it will be like the old
bdrv_mark_request_serialising(), before
bdrv_wait_serialising_requests_locked() was merged into it.
To reduce the possible mess, let's do the following:
The public function that does both marking and waiting will be called
bdrv_make_request_serialising(), and the private function that only
"marks" will be called tracked_request_set_serialising().
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201021145859.11201-6-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
bs is linked in req, so there is no need to pass it separately. Most
of the tracked-requests API doesn't have a bs argument. Actually, after
this patch only tracked_request_begin() has it, but that is on purpose.
While we are here, also add a comment about what "_locked" means.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20201021145859.11201-5-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
To be reused separately in a further patch.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20201021145859.11201-4-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The comment states that on a misaligned request we should already have
been waiting. But for bdrv_padding_rmw_read(), we called
bdrv_mark_request_serialising() with align = request_alignment, and now
we serialise with align = cluster_size. So we may have to wait again,
with the larger alignment.
Note that the only user of BDRV_REQ_SERIALISING is backup, which issues
cluster-aligned requests, so it seems the assertion should not fire for
now. But it's wrong anyway.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20201021145859.11201-3-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
1. BDRV_REQ_NO_SERIALISING no longer exists; don't mention it.
2. We are going to add one more user of BDRV_REQ_SERIALISING, so the
comment about backup becomes a bit confusing here. The use case in
backup is documented in block/backup.c, so let's just drop the
duplication here.
3. The fact that BDRV_REQ_SERIALISING is only for write requests is
omitted. Add a note.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20201021145859.11201-2-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The only users of this thing are:
1. bdrv_child_try_set_perm(), to ignore failures when loosening
restrictions
2. an assertion in bdrv_replace_child()
3. an assertion in bdrv_inactivate_recurse()
Assertions are not reason enough to overcomplicate the permission
update system. So, look at bdrv_child_try_set_perm().
We are interested in tighten_restrictions only on failure. But on
failure this field is not reliable: we may fail in the middle of the
permission update, some nodes are not touched, and we don't know
whether their permissions should be tightened or not. So, we rely on
the fact that if we loosen restrictions on some node (or BdrvChild), we
will not tighten restrictions anywhere in the subtree as part of this
update (assertions 2 and 3 rely on this fact as well). And, if we rely
on this fact anyway, we can just check it at the top and not pass an
additional pointer through the whole recursive infrastructure.
Note also that further patches will fix real bugs in the permission
update system, so now is a good time to simplify it, as a help for
further refactoring.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201106124241.16950-8-vsementsov@virtuozzo.com>
[mreitz: Fixed rebase conflict]
Signed-off-by: Max Reitz <mreitz@redhat.com>
We must set the permissions that were used for the _check_. Assert
that we have a backup, and drop the extra arguments.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201106124241.16950-7-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
We should never set permissions other than the cumulative permissions
of the parents. During bdrv_reopen_multiple() we _check_ synthetic
permissions, but when we do the _set_, the graph is already updated.
Add an assertion to bdrv_reopen_multiple(); the other cases are more
obvious.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201106124241.16950-6-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Make a separate function for a common pattern.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201106124241.16950-5-vsementsov@virtuozzo.com>
[mreitz: Squashed in
https://lists.nongnu.org/archive/html/qemu-block/2020-11/msg00299.html]
Signed-off-by: Max Reitz <mreitz@redhat.com>
Keep the logs of acceptance tests for two days on GitLab. If you want
to make them available for longer, click the 'Keep' button on the job
page in the web UI.
By default GitLab archives artifacts only if the job succeeds. Instead,
let's keep them on both success and failure, so the developer or
maintainer gets the opportunity to check the error logs as well as the
logs of CANCELed tests (which are not shown in the job log).
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20201211183827.915232-4-wainersm@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Replace the Python code in the after_script of the acceptance jobs
that is currently used to show the logs of failed tests. Instead, use
Avocado's testlogs plug-in, which works likewise.
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20201211183827.915232-3-wainersm@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
To use Avocado's testlogs plug-in in CI, version 83.0 or greater is
required.
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20201211183827.915232-2-wainersm@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Merge remote-tracking branch 'remotes/alistair/tags/pull-riscv-to-apply-20201217-1' into staging
A collection of RISC-V improvements:
- Improve the sifive_u DTB generation
- Add QSPI NOR flash to Microchip PFSoC
- Fix a bug in the Hypervisor HLVX/HLV/HSV instructions
- Fix some mstatus mask defines
- Ibex PLIC improvements
- OpenTitan memory layout update
- Initial steps towards support for 32-bit CPUs on 64-bit builds
# gpg: Signature made Fri 18 Dec 2020 05:59:42 GMT
# gpg: using RSA key F6C4AC46D4934868D3B8CE8F21E10D29DF977054
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>" [full]
# Primary key fingerprint: F6C4 AC46 D493 4868 D3B8 CE8F 21E1 0D29 DF97 7054
* remotes/alistair/tags/pull-riscv-to-apply-20201217-1: (23 commits)
riscv/opentitan: Update the OpenTitan memory layout
hw/riscv: Use the CPU to determine if 32-bit
target/riscv: cpu: Set XLEN independently from target
target/riscv: csr: Remove compile time XLEN checks
target/riscv: cpu_helper: Remove compile time XLEN checks
target/riscv: cpu: Remove compile time XLEN checks
target/riscv: Specify the XLEN for CPUs
target/riscv: Add a riscv_cpu_is_32bit() helper function
target/riscv: fpu_helper: Match function defs in HELPER macros
hw/riscv: sifive_u: Remove compile time XLEN checks
hw/riscv: spike: Remove compile time XLEN checks
hw/riscv: virt: Remove compile time XLEN checks
hw/riscv: boot: Remove compile time XLEN checks
riscv: virt: Remove target macro conditionals
riscv: spike: Remove target macro conditionals
target/riscv: Add a TYPE_RISCV_CPU_BASE CPU
hw/riscv: Expand the is 32-bit check to support more CPUs
intc/ibex_plic: Clear interrupts that occur during claim process
target/riscv: Fix definition of MSTATUS_TW and MSTATUS_TSR
target/riscv: Fix the bug of HLVX/HLV/HSV
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
On the pc-i440fx machine, the floppy drive relies on the i8257 DMA
controller. Add this device to the floppy fuzzer config, and silence the
warning about a missing format specifier for the null-co:// drive.
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Message-Id: <20201216203328.41112-1-alxndr@bu.edu>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The device[NUMBER] part of a QOM path is not stable, and tracking it
during code modifications is not fun. Let's filter it out, as is
already done in iotest 186.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201216095205.526235-3-vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
According to the original commit that added this filter (627f607e3d),
the problematic thing in the QOM path is device[NUMBER], not the whole
path. Tracking the other parts of the path in iotest output seems
fine. Let's make _filter_qom_path stricter.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201216095205.526235-2-vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The MAINTAINERS file was not updated when the storage daemon was merged.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20201209103802.350848-4-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Document the qemu-storage-daemon tool. Most of the command-line options
are identical to their QEMU counterparts. Perhaps Sphinx hxtool
integration could be extended to extract documentation for individual
command-line options so they can be shared. For now the
qemu-storage-daemon documentation simply refers to the qemu(1) man page
where the command-line options are identical.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20201209103802.350848-3-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Although individual qemu-storage-daemon QMP commands are identical to
QEMU QMP commands, qemu-storage-daemon only supports a subset of QEMU's
QMP commands. Generate a manual page of just the commands supported by
qemu-storage-daemon so that users know exactly what is available in
qemu-storage-daemon.
Add an h1 heading in storage-daemon/qapi/qapi-schema.json so that
block-core.json is at the h2 heading level.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20201209103802.350848-2-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
nfs_client_open returns the file size in sectors. This effectively
makes it impossible to open files larger than 1TB.
Fixes: c22a034545
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Lieven <pl@kamp.de>
Message-Id: <20201209121735.16437-1-pl@kamp.de>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This is the QEMU equivalent of this Linux commit (but 7 years later):
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=f7025a43a9da2
The MTD subsystem has its own small museum of ancient NANDs
in a form of the CONFIG_MTD_NAND_MUSEUM_IDS configuration option.
The museum contains stone age NANDs with 256 bytes pages, as well
as iron age NANDs with 512 bytes per page and up to 8MiB page size.
It is with great sorrow that I inform you that the museum is being
decommissioned. The MTD subsystem is out of budget for Kconfig
options and already has too many of them, and there is a general
kernel trend to simplify the configuration menu.
We remove the stone age exhibits along with closing the museum,
but some of the iron age ones are transferred to the regular NAND
depot. Namely, only those which have unique device IDs are
transferred, and the ones which have conflicting device IDs are
removed.
The machines using this device are:
- axis-dev88
- tosa (via tc6393xb_init)
- spitz based (akita, borzoi, terrier)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20201214002620.342384-1-f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Commit 8b1170012b has added a global maximum disk length for the block
layer, so the error message when creating an overly large disk has
changed.
Fixes: 8b1170012b ("block: introduce BDRV_MAX_LENGTH")
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201214175158.299919-1-mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Correctly implement save/restore of the tstate field in
sparc64_get_context() and sparc64_set_context():
* Don't use the CWP value from the guest in set_context
* Construct and save a tstate value rather than leaving
it as zero in get_context
To do this we factor out the "calculate TSTATE value from CPU state"
code from sparc_cpu_do_interrupt() into its own sparc64_tstate()
function; that in turn requires us to move some of the function
prototypes out from inside a CPU_NO_IO_DEFS ifdef guard.
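For reference, the factored-out helper essentially assembles the
architectural TSTATE fields from CPU state, roughly (masks per the
SPARC V9 TSTATE layout; treat the exact expression as an
approximation):

    uint64_t sparc64_tstate(CPUSPARCState *env)
    {
        return ((uint64_t)cpu_get_ccr(env) << 32) |  /* TSTATE.CCR    */
               ((env->asi & 0xff) << 24) |           /* TSTATE.ASI    */
               ((env->pstate & 0xf3f) << 8) |        /* TSTATE.PSTATE */
               cpu_get_cwp64(env);                   /* TSTATE.CWP    */
    }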
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201106152738.26026-5-peter.maydell@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>