qio_channel_yield() now updates ioc->read_coroutine/write_coroutine and calls
qio_channel_set_aio_fd_handlers(), so the code in the handlers has
become redundant and can be removed.
This does not make a difference in intermediate states because
aio_co_wake() really enters the coroutine immediately here: These
handlers are never run in coroutine context, and we're in the right
AioContext because qio_channel_attach_aio_context() asserts that the
handlers are inactive.
To make these conditions more obvious, assert the right AioContext.
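As a rough sketch (the real handler in io/channel.c may differ in details),
the read handler now boils down to:
    static void qio_channel_restart_read(void *opaque)
    {
        QIOChannel *ioc = opaque;
        Coroutine *co = ioc->read_coroutine;
        /* Never called from coroutine context... */
        assert(!qemu_in_coroutine());
        /* ...and always in the coroutine's own AioContext */
        assert(qemu_get_current_aio_context() ==
               qemu_coroutine_get_aio_context(co));
        aio_co_wake(co);
    }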
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Similar to how qemu_co_sleep_ns() allows preemption from an external
coroutine entry, allow reentering qio_channel_yield() early.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
nbd_client_attach_aio_context() schedules connection_co in the new
AioContext and in this way reenters it at whatever arbitrary place it has
yielded. We can restrict this a bit to the function call where the
coroutine actually sits waiting while it is idle.
This doesn't solve any bug yet, but it shows where in the code we need
to support this random reentrance and where we don't have to care.
Add FIXME comments for the existing bugs that the rest of this series
will fix.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
virtio_blk_dma_restart_bh() submits new requests, so in order to make
sure that these requests are not started inside a drained section of the
attached BlockBackend, we need to make sure that draining the
BlockBackend waits for the BH to be executed.
This BH is still questionable because it's scheduled in the main thread
instead of the configured iothread. Leave a FIXME comment for this.
But with this fix, enabling the data plane at least waits for these
requests (in bdrv_set_aio_context()) instead of changing the AioContext
under their feet and making them run in the wrong thread, causing
crashes and failures (e.g. due to missing locking).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
For some users of BlockBackends, just increasing the in_flight counter
is easier than implementing separate handlers in BlockDevOps. Make the
helper functions for this public.
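As a sketch of the intended usage (the device state and BH below are made
up for illustration, not taken from an existing user):
    typedef struct MyState {
        BlockBackend *blk;
    } MyState;
    static void my_restart_bh(void *opaque)
    {
        MyState *s = opaque;
        /* ... submit the pending requests here ... */
        /* Pairs with the blk_inc_in_flight() done before scheduling */
        blk_dec_in_flight(s->blk);
    }
    static void my_schedule_restart(MyState *s)
    {
        /* Keeps blk_drain() and drained sections waiting for the BH */
        blk_inc_in_flight(s->blk);
        aio_bh_schedule_oneshot(blk_get_aio_context(s->blk),
                                my_restart_bh, s);
    }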
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Error reporting for user_creatable_add_opts_foreach was changed so that
it no longer called 'error_report_err' in:
commit 7e1e0c1112
Author: Markus Armbruster <armbru@redhat.com>
Date: Wed Oct 17 10:26:43 2018 +0200
qom: Clean up error reporting in user_creatable_add_opts_foreach()
Some callers were updated to pass in "&error_fatal" but all the ones in
qemu-img were left passing NULL. As a result all errors went to
/dev/null instead of being reported to the user.
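The fix is essentially to change the last argument at each call site,
roughly (simplified, the exact opts list varies per caller):
    /* before: errors silently discarded */
    qemu_opts_foreach(&qemu_object_opts,
                      user_creatable_add_opts_foreach, NULL, NULL);
    /* after: errors are reported and fatal */
    qemu_opts_foreach(&qemu_object_opts,
                      user_creatable_add_opts_foreach, NULL, &error_fatal);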
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
If there's an error in commit_start() then the block job must be
deleted before replacing commit_top_bs, otherwise it will fail because
of lack of permissions. This happens since the permission system was
introduced in 8dfba27977.
Fortunately this bug doesn't seem to be possible to reproduce at the
moment without changing the code.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add a fast path to AioContext setting that skips the context setting
routine when it is unnecessary.
This also prevents issues with a cyclic walk of child BDSes that can
appear because of registered AioContext walking notifiers:
Call stack:
0 __GI_raise
1 __GI_abort
2 __assert_fail_base
3 __GI___assert_fail
4 bdrv_detach_aio_context (bs=0x55f54d65c000) <<<
5 bdrv_detach_aio_context (bs=0x55f54fc8a800)
6 bdrv_set_aio_context (bs=0x55f54fc8a800, ...)
7 block_job_attached_aio_context
8 bdrv_attach_aio_context (bs=0x55f54d65c000, ...) <<<
9 bdrv_set_aio_context (bs=0x55f54d65c000)
10 blk_set_aio_context
11 virtio_blk_data_plane_stop
12 virtio_bus_stop_ioeventfd
13 virtio_vmstate_change
14 vm_state_notify (running=0, state=RUN_STATE_SHUTDOWN)
15 do_vm_stop (state=RUN_STATE_SHUTDOWN, send_stop=true)
16 vm_stop (state=RUN_STATE_SHUTDOWN)
17 main_loop_should_exit
18 main_loop
19 main
This can happen because a "new" context is attached to the VM disk BDS.
When attaching the new context, the corresponding AioContext handler is
called for each of the aio_notifiers registered on the VM disk BDS context.
Among those handlers is block_job_attached_aio_context, which sets a new
AioContext for the block job BDS. When doing so, the old context is
detached from all of the block job BDS children, and one of them is the VM
disk BDS, serving as backing store for the block job BDS, even though the
VM disk BDS is what actually initiated the process. Since the VM disk BDS
is protected from double processing in recursive calls by the
walking_aio_notifiers flag, the assert fires.
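The fast path itself is conceptually just this (heavily simplified; the
real bdrv_set_aio_context() also handles draining and disabling external
events):
    void bdrv_set_aio_context(BlockDriverState *bs, AioContext *new_context)
    {
        if (bdrv_get_aio_context(bs) == new_context) {
            return;     /* nothing to do, don't walk the notifiers */
        }
        bdrv_detach_aio_context(bs);
        bdrv_attach_aio_context(bs, new_context);
    }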
Signed-off-by: Denis Plotnikov <dplotnikov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
In qcow2_snapshot_create there is the following code block:
    /* Generate an ID */
    find_new_snapshot_id(bs, sn_info->id_str, sizeof(sn_info->id_str));
    /* Check that the ID is unique */
    if (find_snapshot_by_id_and_name(bs, sn_info->id_str, NULL) >= 0) {
        return -EEXIST;
    }
find_new_snapshot_id cycles through all snapshots, reading each id_str
as an unsigned long int, calculating the maximum value id_max over all
the existing id_strs, and writing id_max + 1 into the id_str pointer:
    for(i = 0; i < s->nb_snapshots; i++) {
        sn = s->snapshots + i;
        id = strtoul(sn->id_str, NULL, 10);
        if (id > id_max)
            id_max = id;
    }
    snprintf(id_str, id_str_size, "%lu", id_max + 1);
Here, sn_info->id_str will have the unique value id_max + 1. Right
after that, find_snapshot_by_id_and_name is called with
id = sn_info->id_str and name = NULL. This will cause the function
to execute the following:
    } else if (id) {
        for (i = 0; i < s->nb_snapshots; i++) {
            if (!strcmp(s->snapshots[i].id_str, id)) {
                return i;
            }
        }
    }
In short, we search the existing snapshots to see whether sn_info->id_str
matches any existing id, right after the previous line set sn_info->id_str
to a value that is already unique.
The first code block goes way back to commit 585f8587ad, a 2006 commit from
Fabrice Bellard that simply says "new qcow2 disk image format". No more
info is provided about this logic in any subsequent commits that moved
this code block around.
I can't say anything about the original design, but the current logic is
redundant. bdrv_snapshot_create is called with the AioContext lock held,
preventing any concurrent call from accidentally creating a new snapshot
between the find_new_snapshot_id and find_snapshot_by_id_and_name calls.
What we end up doing is cycling through the snapshots twice for no good
reason.
This patch eliminates the redundancy by removing the 'id is unique'
check that calls find_snapshot_by_id_and_name.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
After the previous patch, the only instance of this function left
is inside qemu-img.c.
qemu-img is using it inside the 'img_snapshot' function to delete
snapshots in the SNAPSHOT_DELETE case, based on a "snapshot_name"
string that refers to the tag, not ID, of the QEMUSnapshotInfo struct.
This can be verified by checking the SNAPSHOT_CREATE case that
comes shortly before SNAPSHOT_DELETE. In that case, the same
"snapshot_name" variable is copied with pstrcpy into the 'name' field
of the QEMUSnapshotInfo struct sn:
    pstrcpy(sn.name, sizeof(sn.name), snapshot_name);
Based on that, it is unlikely that "snapshot_name" might contain
an "id" in SNAPSHOT_DELETE.
This patch changes SNAPSHOT_DELETE to use snapshot_find() and
snapshot_delete() instead of bdrv_snapshot_delete_by_id_or_name.
After that, there are no instances left of bdrv_snapshot_delete_by_id_or_name
in the code, so it is safe to remove it entirely.
Suggested-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
At this moment, QEMU attempts to create/load/delete snapshots
by using either an ID (id_str) or a name. The problem is that the code
isn't consistent about whether the entered argument is an ID or a name,
causing unexpected behaviors.
For example, when creating snapshots via savevm <arg>, what happens is that
"arg" is treated as both name and id_str. In a guest without snapshots, create
a single snapshot via savevm:
(qemu) savevm 0
(qemu) info snapshots
List of snapshots present on all disks:
ID TAG VM SIZE DATE VM CLOCK
-- 0 741M 2018-07-31 13:39:56 00:41:25.313
A snapshot with name "0" is created. The ID is hidden from the user, but
the ID is a non-zero integer that starts at "1". Thus, this snapshot has
id_str=1, TAG="0". When creating a second snapshot with arg = 1, the first
one is deleted:
(qemu) savevm 1
(qemu) info snapshots
List of snapshots present on all disks:
ID TAG VM SIZE DATE VM CLOCK
-- 1 741M 2018-07-31 13:42:14 00:41:55.252
What happened?
- when creating the second snapshot, a verification is done inside
bdrv_all_delete_snapshot to delete any existing snapshots that match a
string argument. Here, the code calls bdrv_all_delete_snapshot("1", ...);
- bdrv_all_delete_snapshot calls bdrv_snapshot_find(..., "1") for each
BlockDriverState of the guest. And this is where things go wrong:
bdrv_snapshot_find does a search by both id_str and name. It finds
out that there is a snapshot that has id_str = 1, stores a reference
to the snapshot in the sn_info pointer and then reports that a match was
found;
- since a match was found, a call to bdrv_snapshot_delete_by_id_or_name() is
made. This function ignores the pointer written by bdrv_snapshot_find. Instead,
it deletes the snapshot using bdrv_snapshot_delete(), calling it first with
id_str = 1. If that fails to delete, it calls it again with name = 1.
- after all that, QEMU creates the new snapshot, which has id_str = 1 and
name = 1. The user is left wondering what happened to the first snapshot
created. Similar bugs can be triggered when using loadvm and delvm.
Before contemplating discarding the use of ID input in these operations,
I searched the code for what the implications would be. My findings
are:
- the RBD and Sheepdog drivers don't care. Both use the 'name' field as
the key in their logic, making id_str = name when appropriate.
replay-snapshot.c does not make any special use of id_str;
- qcow2 uses id_str as a unique identifier, but it is automatically
calculated, not influenced by user input. Other than that, there are
no operations that distinguish snapshots by id_str alone;
- in blockdev.c, the delete operation uses a match of both id_str AND
name. Given that id_str is either a copy of 'name' or auto-generated,
we're fine here.
This gives motivation to not consider ID as a valid user input in HMP
commands - sticking with 'name' input only is more consistent. To
accomplish that, the following changes were made in this patch:
- bdrv_snapshot_find() does not match id_str anymore, only 'name' (see the
sketch after this list). The function is called in save_snapshot(),
load_snapshot(), bdrv_all_delete_snapshot() and bdrv_all_find_snapshot().
This change makes the search function more predictable and does not change
the behavior of any underlying code that uses these affected functions,
which are related to HMP (which is fine) and the main loop inside vl.c
(which doesn't care about it anyway);
- bdrv_all_delete_snapshot() does not call bdrv_snapshot_delete_by_id_or_name
anymore. Instead, it uses the pointer returned by bdrv_snapshot_find to
erase the snapshot with the exact match of id_str and name. This function
is called in save_snapshot and hmp_delvm, thus this change produces the
intended effect;
- documentation changes to reflect the new behavior. I consider this to
be an API fix instead of an API change - the user was already creating
snapshots using 'name', but now he/she will also enjoy a consistent
behavior.
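As a simplified sketch of the name-only lookup mentioned in the first item
above (the real bdrv_snapshot_find() lists the snapshots with
bdrv_snapshot_list() and fills in *sn_info):
    for (i = 0; i < nb_sns; i++) {
        if (!strcmp(sn_tab[i].name, name)) {
            *sn_info = sn_tab[i];
            found = true;
            break;
        }
    }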
Ideally we would get rid of the id_str field entirely, but this would have
repercussions on existing snapshots. Another day perhaps.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
I'll not be involved in day-to-day qemu development. Remove myself as
maintainer from the remainder of the network block drivers, and revert
them to the general block layer maintainership.
Move 'sheepdog' to the 'Odd Fixes' support level.
For VHDX, I added my personal email address as a maintainer, as I can
answer questions or send the occasional bug fix. I'm leaving it as
'Supported', instead of 'Odd Fixes', because I think the rest of the
block layer maintainers and developers will help maintain it as well, if
needed.
Signed-off-by: Jeff Cody <jcody@redhat.com>
Acked-by: Max Reitz <mreitz@redhat.com>
Message-Id: <63e205cb84c8f0a10c1bc6d5d6856d72ceb56e41.1537984851.git.jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
I'll not be involved with day-to-day qemu development, and John
Snow is a block jobs wizard. Have him take over block job
maintainership duties.
Signed-off-by: Jeff Cody <jcody@redhat.com>
Acked-by: John Snow <jsnow@redhat.com>
Message-Id: <d56d7c6592e7d68aa72764e9616878394bffbc14.1537984851.git.jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
cppcheck reports:
[hw/tpm/tpm_tis.c:113]: (warning) %d in format string (no. 2) requires 'int' but the argument type is 'unsigned int'
Rather than just converting the format specifier to use '%u', the
tpm_tis_show_buffer() function is converted to use trace points, and
the two debug callers are switched to the trace event infrastructure so
that the output is also available in production builds and not just when
DEBUG_TIS is enabled.
Signed-off-by: Liam Merwick <liam.merwick@oracle.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
In tpm_tis_mmio_write(), if the requesting locality is seizing
access, any seizure by a lower locality is cancelled. However, the
loop performing this cancellation had an off-by-one error, and the locality
immediately preceding the requesting locality was not being cleared.
This is fixed by adjusting the test in the for loop to check the
localities up to the requesting locality.
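In other words (sketch only; tpm_tis_clear_seize() is a hypothetical
stand-in for the cancellation done inline in tpm_tis_mmio_write()):
    uint8_t l;
    for (l = 0; l < locty; l++) {      /* was: l < locty - 1 (off by one) */
        tpm_tis_clear_seize(s, l);     /* cancel a seizure by locality l */
    }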
Signed-off-by: Liam Merwick <Liam.Merwick@oracle.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
XLF_CAN_BE_LOADED_ABOVE_4G is set on the vmlinuz shipped by Fedora-28, so
a large initrd is allowed to be loaded below the 4 GiB address.
The timeout is updated to 5 minutes as well, since we need more time to
load a large initrd into the guest.
CC: Wainer dos Santos Moschetta <wainersm@redhat.com>
CC: Caio Carrara <ccarrara@redhat.com>
CC: Cleber Rosa <crosa@redhat.com>
CC: Eduardo Habkost <ehabkost@redhat.com>
CC: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Reviewed-by: Cleber Rosa <crosa@redhat.com>
Message-Id: <1548638112-31101-2-git-send-email-lizhijian@cn.fujitsu.com>
Signed-off-by: Cleber Rosa <crosa@redhat.com>
QEMU already supports loading an initrd of up to 4 GiB if the specified
memory is large enough and XLF_CAN_BE_LOADED_ABOVE_4G is set by the guest
kernel.
The linux-3.6 kernel shipped by Fedora-18 does not support xloadflags, so
it cannot load an initrd larger than 2 GiB.
CC: Wainer dos Santos Moschetta <wainersm@redhat.com>
CC: Caio Carrara <ccarrara@redhat.com>
CC: Cleber Rosa <crosa@redhat.com>
CC: Eduardo Habkost <ehabkost@redhat.com>
CC: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Reviewed-by: Cleber Rosa <crosa@redhat.com>
Message-Id: <1548638112-31101-1-git-send-email-lizhijian@cn.fujitsu.com>
Signed-off-by: Cleber Rosa <crosa@redhat.com>
This change adds the simplest possible migration test. Beyond the test's
own purpose, it is also useful to exercise the multi virtual machine
capabilities of the base avocado qemu test class.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Signed-off-by: Caio Carrara <ccarrara@redhat.com>
Reviewed-by: Cleber Rosa <crosa@redhat.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Message-Id: <20190212193855.13223-3-ccarrara@redhat.com>
Signed-off-by: Cleber Rosa <crosa@redhat.com>
This change adds the possibility of writing acceptance tests with multi
virtual machine support. It is done by keeping the virtual machine objects
stored in a test attribute (a dictionary). This dictionary shouldn't be
accessed directly, but through the newly added method `get_vm`. This new
method accepts a list of args (that will be added as virtual machine
arguments) and an optional name argument. The name is the key that
identifies a single virtual machine among the machines available to the
test. If a name with no corresponding machine is given, a new machine will
be instantiated.
The current usage of vm in tests will not be broken by this change since
it keeps a property called vm in the base test class. This property only
calls the new method `get_vm` with default parameters (no args and
'default' as machine name).
Signed-off-by: Caio Carrara <ccarrara@redhat.com>
Reviewed-by: Cleber Rosa <crosa@redhat.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Message-Id: <20190212193855.13223-2-ccarrara@redhat.com>
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Even when the launch of QEMU succeeds, it's useful to have the command
line recorded.
Reviewed-by: Caio Carrara <ccarrara@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Message-Id: <20190202005610.24048-2-crosa@redhat.com>
Signed-off-by: Cleber Rosa <crosa@redhat.com>
This is a simple move of Python code that wraps common QEMU
functionality and is used by a number of different tests
and scripts.
By treating that code as a real Python module, we can more easily:
* reuse code
* have a proper place for the module's own unittests
* apply a more consistent style
* generate documentation
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Reviewed-by: Caio Carrara <ccarrara@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20190206162901.19082-2-crosa@redhat.com>
Signed-off-by: Cleber Rosa <crosa@redhat.com>
The Avocado test runner attempts to find its INSTRUMENTED tests (that is,
Python based tests) in a manner that is as safe as possible for the
user. Different from plain Python unittest, it won't load or
execute test code on an operation such as:
  $ avocado list tests/acceptance/
Before version 68.0, the logic implemented to identify INSTRUMENTED
tests would require either the ":avocado: enable" or ":avocado:
recursive" statement as a flag for tests that would not inherit
directly from "avocado.Test". This is not necessary anymore,
and because of that the boilerplate statements can now be removed.
Reference: https://avocado-framework.readthedocs.io/en/68.0/release_notes/68_0.html#users-test-writers
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Reviewed-by: Caio Carrara <ccarrara@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Message-Id: <20190218173723.26120-1-crosa@redhat.com>
Signed-off-by: Cleber Rosa <crosa@redhat.com>
The entire link status register for SR-IOV VFs is defined as RsvdZ;
reads simply return zero. Usually this is nothing more than lspci
reporting inconsequentially broken values:
LnkSta: Speed unknown, Width x0, ...
However, now that we're using the downstream endpoint link status to
fill in the value at the parent downstream port, invalid values become
a problem. In particular, the PCIe hotplug driver in Linux looks for
a valid negotiated link width and will fail to enumerate hot-added
downstream endpoints without a non-zero value here, ex:
pciehp 0000:00:02.0:pcie004: Slot(0): Attention button pressed
pciehp 0000:00:02.0:pcie004: Slot(0) Powering on due to button press
pciehp 0000:00:02.0:pcie004: Slot(0): Card present
pciehp 0000:00:02.0:pcie004: Slot(0): Link Up
pciehp 0000:00:02.0:pcie004: link training error: status 0x2000
pciehp 0000:00:02.0:pcie004: Failed to check link status
Resolve by using minimum width and speed values for the downstream
port link status when the endpoint fails to provide valid values.
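Conceptually the fallback looks like this (illustrative only, written with
the standard pci_regs.h masks rather than the exact code):
    if (!(lnksta & PCI_EXP_LNKSTA_NLW)) {
        lnksta |= 1 << PCI_EXP_LNKSTA_NLW_SHIFT;    /* minimum width: x1 */
    }
    if (!(lnksta & PCI_EXP_LNKSTA_CLS)) {
        lnksta |= PCI_EXP_LNKSTA_CLS_2_5GB;         /* minimum speed: 2.5GT/s */
    }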
Long term, we may want to implement emulation in the vfio-pci host
driver to supplement this field with the PF value, as the SR-IOV spec
seems to allow, but the solution here is compatible should that be
implemented later.
Fixes: 727b48661f ("pci: Sync PCIe downstream port LNKSTA on read")
Reported-by: Jens Freimann <jfreimann@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Message-Id: <155060310248.19547.14979269067689441201.stgit@gimli.home>
Tested-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The type 3 SMBIOS structure[1] ends with the fields
    ...
    0x14 - contained element count
    0x15 - contained element record length
    0x16 - sku number
The smbios_type_3 struct was missing the contained element record
length field, causing the sku number to be reported at the wrong
offset.
[1] https://www.dmtf.org/sites/default/files/standards/documents/DSP0134_3.1.1.pdf
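The tail of the structure therefore needs to look like this (illustrative
sketch only, field names may not match the source exactly):
    struct smbios_type_3_tail {
        uint8_t contained_element_count;           /* 0x14 */
        uint8_t contained_element_record_length;   /* 0x15, was missing */
        uint8_t sku_number_str;                    /* 0x16 */
    };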
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190215153600.1770727-1-berrange@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Fixes: e41fca3da7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
sPAPR code will use it too so move it from VFIO to the common code.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20190214051440.59167-1-aik@ozlabs.ru>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The virtio-balloon always works in units of 4kiB (BALLOON_PAGE_SIZE), but
we can only actually discard memory in units of the host page size.
Now, we handle this very badly: we silently ignore balloon requests that
aren't host page aligned, and for requests that are host page aligned we
discard the entire host page. The latter can corrupt guest memory if the
guest's page size is smaller than the host's.
The obvious choice would be to disable the balloon if the host page size is
not 4kiB. However, that would break the special case where host and guest
have the same page size, but that's larger than 4kiB. That case currently
works by accident[1] - and is used in practice on many production POWER
systems where 64kiB has long been the Linux default page size on both host
and guest.
To make the balloon safe, without breaking that useful special case, we
need to accumulate 4kiB balloon requests until we have a whole contiguous
host page to discard.
We could in principle do that across all guest memory, but it would require
a large bitmap to track. This patch represents a compromise: we track
ballooned subpages for a single contiguous host page at a time. This means
that if the guest discards all 4kiB chunks of a host page in succession,
we will discard it. This is the expected behaviour in the (host page) ==
(guest page) != 4kiB case we want to support.
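A sketch of the bookkeeping this implies (the names and the
discard_host_page() helper are illustrative, not the actual
implementation):
    typedef struct PartialPage {
        ram_addr_t base;        /* host-page-aligned address being tracked */
        unsigned long *bitmap;  /* one bit per BALLOON_PAGE_SIZE subpage */
    } PartialPage;
    static void track_subpage(PartialPage *pp, ram_addr_t addr,
                              size_t host_page_size)
    {
        size_t subpages = host_page_size / BALLOON_PAGE_SIZE;
        ram_addr_t base = QEMU_ALIGN_DOWN(addr, host_page_size);
        if (pp->base != base) {
            /* Different host page: forget the old partial state */
            bitmap_zero(pp->bitmap, subpages);
            pp->base = base;
        }
        set_bit((addr - base) / BALLOON_PAGE_SIZE, pp->bitmap);
        if (bitmap_full(pp->bitmap, subpages)) {
            /* Every 4kiB chunk of this host page is ballooned: discard it */
            discard_host_page(base, host_page_size);
        }
    }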
If the guest scatters 4kiB requests across different host pages, we don't
discard anything, and issue a warning. Not ideal, but at least we don't
corrupt guest memory as the previous version could.
Warning reporting is kind of a compromise here. Determining whether we're
in a problematic state at realize() time is tricky, because we'd have to
look at the host pagesizes of all memory backends, but we can't really know
if some of those backends could be for special purpose memory that's not
subject to ballooning.
Reporting only when the guest tries to balloon a partial page also isn't
great because if the guest page size happens to line up it won't indicate
that we're in a non ideal situation. It could also cause alarming repeated
warnings whenever a migration is attempted.
So, what we do is warn the first time the guest attempts to balloon a
partial host page, whether or not it will end up ballooning the rest of
the page immediately afterwards.
[1] Because when the guest attempts to balloon a page, it will submit
requests for each 4kiB subpage. Most will be ignored, but the one
which happens to be host page aligned will discard the whole lot.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190214043916.22128-6-david@gibson.dropbear.id.au>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Merge remote-tracking branch 'remotes/amarkovic/tags/mips-queue-feb-21-2019-v2' into staging
MIPS queue for February 21st, 2019, v2
# gpg: Signature made Thu 21 Feb 2019 18:37:04 GMT
# gpg: using RSA key D4972A8967F75A65
# gpg: Good signature from "Aleksandar Markovic <amarkovic@wavecomp.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 8526 FBF1 5DA3 811F 4A01 DD75 D497 2A89 67F7 5A65
* remotes/amarkovic/tags/mips-queue-feb-21-2019-v2:
target/mips: fulong2e: Dynamically generate SPD EEPROM data
target/mips: fulong2e: Fix bios flash size
hw/pci-host/bonito.c: Add PCI mem region mapped at the correct address
target/mips: implement QMP query-cpu-definitions command
tests/tcg: target/mips: Add wrappers for MSA integer compare instructions
tests/tcg: target/mips: Change directory name 'bit-counting' to 'bit-count'
tests/tcg: target/mips: Correct path to headers in some test source files
hw/misc: mips_itu: Fix 32/64 bit issue in a line involving shift operator
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If the DISCARD feature is enabled, we try this command in the
test_basic(), checking only the status returned by the request.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Message-id: 20190221103314.58500-11-sgarzare@redhat.com
Message-Id: <20190221103314.58500-11-sgarzare@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
If the WRITE_ZEROES feature is enabled, we check this command
in the test_basic().
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Message-id: 20190221103314.58500-10-sgarzare@redhat.com
Message-Id: <20190221103314.58500-10-sgarzare@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This function is useful to fix the endianness of struct
virtio_blk_discard_write_zeroes headers.
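Something along these lines (the helper name is illustrative):
    static void dwz_hdr_le_to_cpu(VirtIODevice *vdev,
                                  struct virtio_blk_discard_write_zeroes *hdr)
    {
        hdr->sector = virtio_ldq_p(vdev, &hdr->sector);
        hdr->num_sectors = virtio_ldl_p(vdev, &hdr->num_sectors);
        hdr->flags = virtio_ldl_p(vdev, &hdr->flags);
    }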
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Message-id: 20190221103314.58500-9-sgarzare@redhat.com
Message-Id: <20190221103314.58500-9-sgarzare@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The size of data in the virtio_blk_request must be a multiple
of 512 bytes for IN and OUT requests, or a multiple of the size
of struct virtio_blk_discard_write_zeroes for DISCARD and
WRITE_ZEROES requests.
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Message-id: 20190221103314.58500-8-sgarzare@redhat.com
Message-Id: <20190221103314.58500-8-sgarzare@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This patch adds support for the DISCARD and WRITE_ZEROES commands,
which have been introduced in the virtio-blk protocol to improve
performance when using an SSD backend.
We support only one segment per request since multiple segments
are not widely used and there are no userspace APIs that allow
applications to submit multiple segments in a single call.
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Message-id: 20190221103314.58500-7-sgarzare@redhat.com
Message-Id: <20190221103314.58500-7-sgarzare@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Starting from the DISCARD and WRITE_ZEROES features, we use an array of
VirtIOFeature (as virtio-net does) to properly set the config size
depending on the features enabled.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Message-id: 20190221103314.58500-6-sgarzare@redhat.com
Message-Id: <20190221103314.58500-6-sgarzare@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
In order to use VirtIOFeature also in other virtio devices, we move
its declaration and the endof() macro (renamed to virtio_endof())
into virtio.h.
We add the virtio_feature_get_config_size() function to iterate over the
array of VirtIOFeature and return the config size depending on the
features enabled (as virtio_net_set_config_size() did).
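For example, virtio-blk can then describe its optional config fields like
this (a sketch based on this series; the exact entries may differ):
    static const VirtIOFeature feature_sizes[] = {
        {.flags = 1ULL << VIRTIO_BLK_F_DISCARD,
         .end = virtio_endof(struct virtio_blk_config,
                             discard_sector_alignment)},
        {.flags = 1ULL << VIRTIO_BLK_F_WRITE_ZEROES,
         .end = virtio_endof(struct virtio_blk_config,
                             write_zeroes_may_unmap)},
        {}
    };
    s->config_size = virtio_feature_get_config_size(feature_sizes,
                                                    s->host_features);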
Suggested-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Message-id: 20190221103314.58500-5-sgarzare@redhat.com
Message-Id: <20190221103314.58500-5-sgarzare@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
In order to avoid migration issues, we enable the DISCARD and
WRITE_ZEROES features only for machine types >= 4.0.
As discussed with Michael S. Tsirkin and Stefan Hajnoczi on the
list [1], the DISCARD operation should not have security implications
(e.g. page cache attacks), so we can enable it by default.
[1] https://lists.gnu.org/archive/html/qemu-devel/2019-02/msg00504.html
Suggested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Message-id: 20190221103314.58500-4-sgarzare@redhat.com
Message-Id: <20190221103314.58500-4-sgarzare@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Since the configurable features for virtio-blk are growing, this patch
adds a host_features field to the VirtIOBlock struct (as in virtio-net).
In this way, we can avoid adding new fields for new properties and
can directly set VIRTIO_BLK_F* flags in host_features.
We update the "config-wce" and "scsi" property definitions to use the new
host_features field without changing the behaviour.
Suggested-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Message-id: 20190221103314.58500-3-sgarzare@redhat.com
Message-Id: <20190221103314.58500-3-sgarzare@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
We add an acct_failed param in order to use virtio_blk_handle_rw_error()
also when it is not required to call block_acct_failed() (e.g. when a
discard operation fails).
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Message-id: 20190221103314.58500-2-sgarzare@redhat.com
Message-Id: <20190221103314.58500-2-sgarzare@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
@iov is used only to initialize @qiov. Let's use new
qemu_iovec_init_buf() instead, which simplifies the code.
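That is, the repeated pattern (schematically) changes from:
    struct iovec iov = { .iov_base = buf, .iov_len = bytes };
    QEMUIOVector qiov;
    qemu_iovec_init_external(&qiov, &iov, 1);
to:
    QEMUIOVector qiov;
    qemu_iovec_init_buf(&qiov, buf, bytes);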
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190218140926.333779-18-vsementsov@virtuozzo.com
Message-Id: <20190218140926.333779-18-vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
@iov is used only to initialize @qiov. Let's use new
qemu_iovec_init_buf() instead, which simplifies the code.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190218140926.333779-17-vsementsov@virtuozzo.com
Message-Id: <20190218140926.333779-17-vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
@iov is used only to initialize @qiov. Let's use new
qemu_iovec_init_buf() instead, which simplifies the code.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190218140926.333779-16-vsementsov@virtuozzo.com
Message-Id: <20190218140926.333779-16-vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Use new QEMU_IOVEC_INIT_BUF() instead of
qemu_iovec_init_external( ... , 1), which simplifies the code.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190218140926.333779-15-vsementsov@virtuozzo.com
Message-Id: <20190218140926.333779-15-vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Use new qemu_iovec_init_buf() instead of
qemu_iovec_init_external( ... , 1), which simplifies the code.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190218140926.333779-14-vsementsov@virtuozzo.com
Message-Id: <20190218140926.333779-14-vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Use new qemu_iovec_init_buf() instead of
qemu_iovec_init_external( ... , 1), which simplifies the code.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190218140926.333779-13-vsementsov@virtuozzo.com
Message-Id: <20190218140926.333779-13-vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Use new qemu_iovec_init_buf() instead of
qemu_iovec_init_external( ... , 1), which simplifies the code.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190218140926.333779-12-vsementsov@virtuozzo.com
Message-Id: <20190218140926.333779-12-vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Use new qemu_iovec_init_buf() instead of
qemu_iovec_init_external( ... , 1), which simplifies the code.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190218140926.333779-11-vsementsov@virtuozzo.com
Message-Id: <20190218140926.333779-11-vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>