Merge remote-tracking branch 'remotes/kwolf-gitlab/tags/for-upstream' into staging
Block layer patches
- qemu-storage-daemon: Add --daemonize
- Fix x-blockdev-amend and block node activation code, which incorrectly
executed, in the iothread, code that must run in the main thread.
- Add macros for coroutine-safe TLS variables (required for correctness
with LTO)
- Fix crashes with concurrent I/O and bdrv_refresh_limits()
- Split block APIs into global state and I/O
- iotests: Don't refuse to run at all without GNU sed, just skip tests
that need it
# gpg: Signature made Fri 04 Mar 2022 17:18:31 GMT
# gpg: using RSA key DC3DEB159A9AF95D3D7456FE7F09B272C88F2FD6
# gpg: issuer "kwolf@redhat.com"
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>" [full]
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kwolf-gitlab/tags/for-upstream: (50 commits)
block/amend: Keep strong reference to BDS
block/amend: Always call .bdrv_amend_clean()
tests/qemu-iotests: Rework the checks and spots using GNU sed
iotests/graph-changes-while-io: New test
iotests: Allow using QMP with the QSD
block: Make bdrv_refresh_limits() non-recursive
job.h: assertions in the callers of JobDriver function pointers
job.h: split function pointers in JobDriver
block-backend-common.h: split function pointers in BlockDevOps
block_int-common.h: assertions in the callers of BdrvChildClass function pointers
block_int-common.h: split function pointers in BdrvChildClass
block_int-common.h: assertions in the callers of BlockDriver function pointers
block_int-common.h: split function pointers in BlockDriver
block/coroutines: I/O and "I/O or GS" API
block/copy-before-write.h: global state API + assertions
include/block/snapshot: global state API + assertions
assertions for blockdev.h global state API
include/sysemu/blockdev.h: global state API
assertions for blockjob.h global state API
include/block/blockjob.h: global state API
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Otherwise, the BDS might be freed while the job is running, which would
cause a use-after-free.
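A minimal, hypothetical sketch of the reference-counting pattern this
implies (illustrative helper names, not the literal amend code):

    static void blockdev_amend_start(BlockdevAmendJob *s,
                                     BlockDriverState *bs)
    {
        /* Pin the BDS for the lifetime of the job; the matching
         * bdrv_unref() happens when the job itself is freed. */
        s->bs = bs;
        bdrv_ref(bs);
    }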
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220304153729.711387-5-hreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The documentation of .bdrv_amend_clean() says block drivers can use it
to clean up what was done in .bdrv_amend_pre_run(). Therefore, it
should always be called after .bdrv_amend_pre_run(), which means we
need to call it in the JobDriver.free() callback, not in
JobDriver.clean().
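In code terms, a hedged sketch of the resulting free callback
(illustrative names, assuming the job keeps the BDS in s->bs):

    static void blockdev_amend_free(Job *job)
    {
        BlockdevAmendJob *s = container_of(job, BlockdevAmendJob, common);

        if (s->bs->drv->bdrv_amend_clean) {
            /* Undo whatever .bdrv_amend_pre_run() set up, unconditionally */
            s->bs->drv->bdrv_amend_clean(s->bs);
        }
    }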
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220304153729.711387-3-hreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Instead of failing the iotests if GNU sed is not available (or skipping
them completely in the check-block.sh script), it would be better to
simply skip the bash-based tests that rely on GNU sed, so that the other
tests could still be run. Thus we now explicitly use "gsed" (either as a
direct program or as a wrapper around "sed" if it is the GNU version)
in the spots that rely on the GNU sed behavior. Statements that use the
"-r" parameter of sed have been switched to use "-E" instead, since this
switch is supported by all sed versions on our supported build hosts
(most also support "-r", but macOS' sed only supports "-E"). With all
these changes in place, we can then also remove the sed checks from the
check-block.sh script, so that "make check-block" can now be run on
systems without GNU sed, too.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20220216125454.465041-1-thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Test the following scenario:
1. Some block node (null-co) attached to a user (here: NBD server) that
performs I/O and keeps the node in an I/O thread
2. Repeatedly run blockdev-add/blockdev-del to add/remove an overlay
to/from that node
Each blockdev-add triggers bdrv_refresh_limits(), and because
blockdev-add runs in the main thread, it does not stop the I/O requests.
I/O can thus happen while the limits are refreshed, and when such a
request sees a temporarily invalid block limit (e.g. alignment is 0),
this may easily crash qemu (or the storage daemon in this case).
The block layer needs to ensure that I/O requests to a node are paused
while that node's BlockLimits are refreshed.
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20220216105355.30729-4-hreitz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add a parameter to optionally open a QMP connection when creating a
QemuStorageDaemon instance.
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220216105355.30729-3-hreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
bdrv_refresh_limits() recurses down to the node's children. That does
not seem necessary: if we refresh the limits on some node, recurse
down, and thereby change one of its children's BlockLimits, that would
mean we noticed the changed limits only by pure chance. The fact that
we refresh the parent's limits has nothing to do with it, so the
reason for the change probably happened before this point in time, and
we should have refreshed the limits then.
Consequently, we should actually propagate block limits changes upwards,
not downwards. That is a separate and pre-existing issue, though, and
so will not be addressed in this patch.
The problem with recursing is that bdrv_refresh_limits() is not atomic.
It begins with zeroing BDS.bl, and only then sets proper, valid limits.
If we do not drain all nodes whose limits are refreshed, then concurrent
I/O requests can encounter invalid request_alignment values and crash
qemu. Therefore, a recursing bdrv_refresh_limits() requires the whole
subtree to be drained, which is currently not ensured by most callers.
A non-recursive bdrv_refresh_limits() only requires the node in question
to not receive I/O requests, and this is done by most callers in some
way or another:
- bdrv_open_driver() deals with a new node with no parents yet
- bdrv_set_file_or_backing_noperm() acts on a drained node
- bdrv_reopen_commit() acts only on drained nodes
- bdrv_append() should in theory require the node to be drained; in
practice most callers just lock the AioContext, which should at least
be enough to prevent concurrent I/O requests from accessing invalid
limits
So we can resolve the bug by making bdrv_refresh_limits() non-recursive.
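Conceptually, a caller then only has to quiesce the one node whose
limits it refreshes; a hedged sketch of the pattern (signature and
error handling assumed, not a literal hunk from this series):

    bdrv_drained_begin(bs);                       /* pause I/O to this node */
    bdrv_refresh_limits(bs, NULL, &error_abort);  /* no recursion into children */
    bdrv_drained_end(bs);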
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1879437
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20220216105355.30729-2-hreitz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-32-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The job API will be handled separately in another series.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-31-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Assertions in the callers of the function pointers are already
added by previous patches.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20220303151616.325444-30-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-29-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-28-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-27-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Similar to the header split, the function pointers in BlockDriver
can also be split into I/O and global state.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-26-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Block coroutine functions run in varying AioContexts and are not
protected by the BQL; therefore they are I/O.
On the other hand, generated_co_wrapper functions use BDRV_POLL_WHILE,
meaning the caller can be either the main loop or a specific iothread.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-25-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
copy-before-write functions always run under the BQL.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-24-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Snapshots also run under the BQL, so they are all part of the
global state API. The AioContext lock that they hold is currently
overkill and could be removed in the future.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-23-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-22-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
blockdev functions always run under the BQL.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-21-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-20-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
blockjob functions always run under the BQL.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-19-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Following the assertions derived from the API split,
propagate the assertions also to the static functions.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-18-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-17-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Since there are not many I/O functions, keep them in a single file.
Also split the function pointers in BlockJobDriver.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220303151616.325444-16-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We want to be sure that the functions that write the child and
parent lists of a BDS are called with the BQL held and the node
drained. The BQL prevents concurrent writes from the GS API, while
drains protect from I/O.
TODO: drains are missing in some functions using this assertion,
so a proper assertion would fail. Because adding drains requires
additional discussion, they will be added in a future series.
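A hedged sketch of what such an assertion helper can look like today,
given the missing drains (names are illustrative):

    static inline void assert_bdrv_graph_writable(BlockDriverState *bs)
    {
        /*
         * TODO: once all callers also drain the node, additionally
         * assert that bs->quiesce_counter > 0.
         */
        assert(qemu_in_main_thread());   /* BQL-protected (GS) code only */
    }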
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-15-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Mark all I/O functions with IO_CODE, and all "I/O OR GS" with
IO_OR_GS_CODE.
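For instance, a hypothetical I/O helper would now carry the marker
right at the top (made-up function, shown only to illustrate the
annotation):

    static int coroutine_fn example_co_preadv(BlockDriverState *bs,
                                              int64_t offset, int64_t bytes,
                                              QEMUIOVector *qiov)
    {
        IO_CODE();   /* may run in any AioContext; no BQL required */
        return bdrv_co_preadv(bs->file, offset, bytes, qiov, 0);
    }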
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-14-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-13-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Similarly to the previous patch, split block_int.h into
block_int-io.h and block_int-global-state.h.
block_int-common.h contains the structures shared between
the two headers, and the functions that can't be categorized as
I/O or global state.
Assertions are added in the next patch.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-12-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Now that we "covered" the three main cases where the
permission API was being used under BQL (fuse,
amend and invalidate_cache), we can safely assert for
the permission functions implemented in block.c
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-11-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Mark all I/O functions with IO_CODE, and all "I/O OR GS" with
IO_OR_GS_CODE.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-10-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
All the global state (GS) API functions will check that
qemu_in_main_thread() returns true. If not, it means
that the safety of the BQL cannot be guaranteed, and
they need to be moved to I/O.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-9-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Similarly to the previous patches, split block-backend.h into
block-backend-io.h and block-backend-global-state.h.
In addition, remove the "block/block.h" include, as it seems it is
not necessary anymore, together with "qemu/iov.h".
block-backend-common.h contains the structures shared between
the two headers, and the functions that can't be categorized as
I/O or global state.
Assertions are added in the next patch.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-8-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Allow writable exports to get the BLK_PERM_RESIZE permission
at creation time, in fuse_export_create().
This way, there is no need to grant the permission in
fuse_do_truncate(), which might be run in an iothread.
Permissions should be set only in the main thread, so in any
case an iothread trying to set RESIZE would be blocked.
Also assert in fuse_do_truncate() that if we grant the RESIZE
permission, we can then restore the original ones.
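A hedged sketch of the creation-time permission request (simplified
and partly guessed; the helper name is illustrative, not the literal
fuse_export_create() hunk):

    static int fuse_export_setup_perms(BlockExport *exp, bool writable,
                                       Error **errp)
    {
        uint64_t perm = BLK_PERM_CONSISTENT_READ;

        if (writable) {
            /* Request RESIZE up front, in the main thread, so that
             * fuse_do_truncate() never needs to add it from an iothread */
            perm |= BLK_PERM_WRITE | BLK_PERM_RESIZE;
        }

        return blk_set_perm(exp->blk, perm, BLK_PERM_ALL, errp);
    }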
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220303151616.325444-7-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Mark all I/O functions with IO_CODE, and all "I/O OR GS" with
IO_OR_GS_CODE.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-6-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
All the global state (GS) API functions will check that
qemu_in_main_thread() returns true. If not, it means
that the safety of the BQL cannot be guaranteed, and
they need to be moved to I/O.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-5-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
block.h currently contains a mix of functions:
some of them run under the BQL and modify the block layer graph,
others are instead thread-safe and perform I/O in iothreads.
Some others can only be called by either the main loop or the
iothread running the AioContext (and not other iothreads); using
them in another thread would cause deadlocks, and it is therefore
not ideal to define them as I/O.
It is not easy to understand which function is part of which
group (I/O vs GS vs "I/O or GS"), and this patch aims to clarify it.
The "GS" functions need the BQL, and often use
aio_context_acquire/release and/or drain to be sure they
can modify the graph safely.
The I/O functions are instead thread-safe, and can run in
any AioContext.
"I/O or GS" functions run instead in the main loop or in
a single iothread, and use BDRV_POLL_WHILE().
By splitting the header into two files, block-io.h and
block-global-state.h, we get a clearer view of what needs which
kind of protection. block-common.h contains common structures
shared by both headers.
block.h is left there for legacy reasons and to avoid changing all
includes in all C files that use the block APIs.
Assertions are added in the next patch.
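After the split, the legacy header can be little more than an
umbrella; a sketch of the assumed layout:

    /* include/block/block.h -- legacy umbrella, kept so existing
     * #include "block/block.h" lines keep working */
    #ifndef BLOCK_H
    #define BLOCK_H

    #include "block/block-global-state.h"
    #include "block/block-io.h"
    /* block-common.h is pulled in by both headers above */

    #endif /* BLOCK_H */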
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-4-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Right now, IO_CODE and IO_OR_GS_CODE are no-ops, as there isn't
really a way to check that a function is only called in I/O.
On the other hand, we can use qemu_in_main_thread() to check if
we are in the main loop.
Using macros makes it easy to extend them in the future without
changing all callers. They also visually help with understanding
which category each function is in, without looking at the header.
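A sketch of what the three markers plausibly expand to (the GS macro
name and the exact definitions are assumptions based on the
description above):

    /* Executed under the BQL, in the main thread only */
    #define GLOBAL_STATE_CODE()                 \
        do {                                    \
            assert(qemu_in_main_thread());      \
        } while (0)

    /* Thread-safe I/O code: nothing to check yet */
    #define IO_CODE()                           \
        do {                                    \
            /* nop */                           \
        } while (0)

    /* Main loop, or the home iothread of the AioContext */
    #define IO_OR_GS_CODE()                     \
        do {                                    \
            /* nop */                           \
        } while (0)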
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-3-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
When invoked from the main loop, this function is the same as
qemu_mutex_iothread_locked, and returns true if the BQL is held.
When invoked from iothreads or tests, it returns true only if the
current AioContext is the main loop's.
This essentially just extends qemu_mutex_iothread_locked to also
work in unit tests or other users like the storage daemon, which
run in the main loop but end up using the implementation in
stubs/iothread-lock.c.
qemu_mutex_iothread_locked in unit tests defaults to false because
they use the implementation in stubs/iothread-lock, which would make
all assertions added in the next patches fail even though the
AioContext is still the main loop.
See the comment in the function header for more information.
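A hedged sketch of the two variants described above (simplified,
not the exact code):

    /* regular (softmmu) build: the BQL tells us whether we are the
     * main thread */
    bool qemu_in_main_thread(void)
    {
        return qemu_mutex_iothread_locked();
    }

    /* stubs/iothread-lock.c (tests, storage daemon): no BQL, so check
     * whether the current AioContext is the main loop's */
    bool qemu_in_main_thread(void)
    {
        return qemu_get_current_aio_context() == qemu_get_aio_context();
    }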
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-2-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Iotest 185 tests quitting qemu while a block job is active. It does not
specifically test quitting qemu while a mirror or active commit job is
in its READY phase.
Add two test cases for this, where we respectively mirror or commit to
an external QSD instance, which provides a throttled block device. qemu
is supposed to cancel the job so that it can quit as soon as possible
instead of waiting for the job to complete (which it did before 6.2).
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220303164814.284974-5-hreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
To implement this, we reuse the existing daemonizing functions from the
system emulator, which mainly do the following:
- Fork off a child process, and set up a pipe between parent and child
- The parent process waits until the child sends a status byte over the
pipe (0 means that the child was set up successfully; anything else
(including errors or EOF) means that the child was not set up
successfully), and then exits with an appropriate exit status
- The child process enters a new session (forking off again), changes
the umask, and will ignore terminal signals from then on
- Once set-up is complete, the child will chdir to /, redirect all
standard I/O streams to /dev/null, and tell the parent that set-up has
been completed successfully
In contrast to qemu-nbd's --fork implementation, error messages are
not piped through the parent process during the set-up phase.
qemu-nbd mainly does this to detect errors, though (while
os_daemonize() has the child explicitly signal success after set-up);
because we do not redirect stderr after forking, error messages
continue to appear on whatever the parent's stderr was (until set-up
is complete).
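The scheme described above is essentially classic double-fork
daemonization with a status pipe. A self-contained, simplified sketch
of the idea (illustrative names, trimmed error handling; not the
actual os-posix.c code):

    #include <fcntl.h>
    #include <signal.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static int daemon_pipe_fd = -1;  /* write end; child -> parent status */

    static void daemonize_start(void)
    {
        int fds[2];
        char status;

        if (pipe(fds) < 0) {
            exit(1);
        }

        if (fork() > 0) {
            /* Parent: wait for the child's status byte, exit accordingly */
            close(fds[1]);
            if (read(fds[0], &status, 1) != 1) {
                status = 1;
            }
            exit(status);
        }

        /* Child: new session, second fork, adjusted umask, ignore
         * terminal signals from now on */
        close(fds[0]);
        daemon_pipe_fd = fds[1];
        setsid();
        if (fork() > 0) {
            exit(0);
        }
        umask(027);
        signal(SIGTSTP, SIG_IGN);
        signal(SIGTTOU, SIG_IGN);
        signal(SIGTTIN, SIG_IGN);
    }

    static void daemonize_finish(void)
    {
        char status = 0;
        int fd = open("/dev/null", O_RDWR);

        /* chdir to /, detach std streams, report success to the parent */
        if (chdir("/") < 0 || fd < 0) {
            exit(1);
        }
        dup2(fd, 0);
        dup2(fd, 1);
        dup2(fd, 2);
        if (write(daemon_pipe_fd, &status, 1) != 1) {
            exit(1);
        }
        close(daemon_pipe_fd);
    }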
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220303164814.284974-4-hreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
In contrast to qemu-nbd (where it is called --fork) and the system
emulator, QSD does not have a --daemonize switch yet. Just like them,
QSD allows setting up block devices and exports on the command line.
When doing so, it is often necessary for whoever invoked the QSD to wait
until these exports are fully set up. A --daemonize switch allows
precisely this, by virtue of the parent process exiting once everything
is set up.
Note that there are alternative ways of waiting for all exports to be
set up, for example:
- Passing the --pidfile option and waiting until the respective file
exists (but I do not know if there is a way of implementing this
without a busy wait loop)
- Set up some network server (e.g. on a Unix socket) and have the QSD
connect to it after all arguments have been processed by appending
corresponding --chardev and --monitor options to the command line,
and then wait until the QSD connects
Having a --daemonize option would make this simpler, though, without
having to rely on additional tools (to set up a network server) or busy
waiting.
Implementing a --daemonize switch means having to fork the QSD process.
Ideally, we should do this as early as possible: All the parent process
has to do is to wait for the child process to signal completion of its
set-up phase, and therefore there is basically no initialization that
needs to be done before the fork. On the other hand, forking after
initialization steps means having to consider how those steps (like
setting up the block layer or QMP) interact with a later fork, which is
often not trivial.
In order to fork this early, we must scan the command line for
--daemonize long before our current process_options() call. Instead of
adding custom new code to do so, just reuse process_options() and give
it a @pre_init_pass argument to distinguish the two passes. I believe
there are some other switches but --daemonize that deserve parsing in
the first pass:
- --help and --version are supposed to only print some text and then
immediately exit (so any initialization we do would be for naught).
This changes behavior, because now "--blockdev inv-drv --help" will
print a help text instead of complaining about the --blockdev
argument.
Note that this is similar in behavior to other tools, though: "--help"
is generally immediately acted upon when finding it in the argument
list, potentially before other arguments (even ones before it) are
acted on. For example, "ls /does-not-exist --help" prints a help text
and does not complain about ENOENT.
- --pidfile does not need initialization, and is already exempted from
the sequential order that process_options() claims to strictly follow
(the PID file is only created after all arguments are processed, not
at the time the --pidfile argument appears), so it makes sense to
include it in the same category as --daemonize.
- Invalid arguments should always be reported as soon as possible. (The
same caveat with --help applies: That means that "--blockdev inv-drv
--inv-arg" will now complain about --inv-arg, not inv-drv.)
This patch does make some references to --daemonize without having
implemented it yet, but that will happen in the next patch.
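A hypothetical, heavily simplified sketch of the two-pass idea
(option set and structure are assumptions, not the actual
qemu-storage-daemon code):

    #include <getopt.h>
    #include <stdbool.h>
    #include <stdlib.h>

    static bool daemonize_requested;

    static void process_options(int argc, char *argv[], bool pre_init_pass)
    {
        static const struct option long_opts[] = {
            {"daemonize", no_argument, NULL, 'D'},
            {"blockdev", required_argument, NULL, 'b'},
            {0, 0, NULL, 0},
        };
        int c;

        optind = 1;  /* rewind so the argument list can be scanned twice */

        while ((c = getopt_long(argc, argv, "", long_opts, NULL)) != -1) {
            /* Pre-init switches act in the first pass only; everything
             * else is processed, in order, in the second pass. */
            bool pre_init_opt = (c == 'D' || c == '?');

            if (pre_init_opt != pre_init_pass) {
                continue;
            }

            switch (c) {
            case 'D':
                daemonize_requested = true;
                break;
            case 'b':
                /* set up a block device from optarg ... */
                break;
            case '?':
                exit(EXIT_FAILURE);  /* report invalid arguments early */
            }
        }
    }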
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20220303164814.284974-3-hreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The daemonizing functions in os-posix (os_daemonize() and
os_setup_post()) only daemonize the process if the static `daemonize`
variable is set. Right now, it can only be set by os_parse_cmd_args().
In order to use os_daemonize() and os_setup_post() from the storage
daemon to have it be daemonized, we need some other way to set this
`daemonize` variable, because I would rather not tap into the system
emulator's arg-parsing code. Therefore, this patch adds an
os_set_daemonize() function, which will return an error on os-win32
(because daemonizing is not supported there).
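A hedged sketch of what the new helper presumably boils down to:

    /* os-posix.c -- `daemonize` is the existing static flag checked by
     * os_daemonize()/os_setup_post() */
    int os_set_daemonize(bool d)
    {
        daemonize = d;
        return 0;
    }

    /* os-win32.c -- daemonizing is not supported, so only "off" succeeds */
    int os_set_daemonize(bool d)
    {
        return d ? -1 : 0;
    }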
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220303164814.284974-2-hreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
qemu_mutex_iothread_locked() may be used from coroutines. Standard
__thread variables cannot be used by coroutines. Use the coroutine TLS
macros instead.
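Assuming the new helpers live in qemu/coroutine-tls.h and follow a
get_<name>()/set_<name>() accessor scheme, the change presumably
looks roughly like this (a sketch, not the exact diff):

    #include "qemu/coroutine-tls.h"

    /* was: static __thread bool iothread_locked = false; */
    QEMU_DEFINE_STATIC_CO_TLS(bool, iothread_locked)

    bool qemu_mutex_iothread_locked(void)
    {
        /* safe to call from a coroutine, even after it migrated threads */
        return get_iothread_locked();
    }

The lock/unlock paths would then use set_iothread_locked(true/false)
instead of assigning to the __thread variable directly.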
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220222140150.27240-5-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
RCU may be used from coroutines. Standard __thread variables cannot be
used by coroutines. Use the coroutine TLS macros instead.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220222140150.27240-4-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
QEMU TLS macros must be used to make TLS variables safe with coroutines.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220222140150.27240-3-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Compiler optimizations can cache TLS values across coroutine yield
points, resulting in stale values from the previous thread when a
coroutine is re-entered by a new thread.
Serge Guelton developed an __attribute__((noinline)) wrapper and tested
it with clang and gcc. I formatted his idea according to QEMU's coding
style and wrote documentation.
The compiler can still optimize based on analyzing noinline code, so an
asm volatile barrier with an output constraint is required to prevent
unwanted optimizations.
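The pattern, stripped down to its essentials, presumably looks like
this (an illustrative sketch with a made-up variable; the real macros
wrap this kind of accessor):

    static __thread int my_tls_value;

    __attribute__((noinline))
    static int get_my_tls_value(void)
    {
        int *p = &my_tls_value;

        /* The output constraint keeps the compiler from caching the
         * TLS address across coroutine yield points. */
        asm volatile("" : "+rm" (p));
        return *p;
    }

    __attribute__((noinline))
    static void set_my_tls_value(int value)
    {
        int *p = &my_tls_value;

        asm volatile("" : "+rm" (p));
        *p = value;
    }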
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1952483
Suggested-by: Serge Guelton <sguelton@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220222140150.27240-2-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Split bdrv_co_invalidate_cache in two: the global state (under BQL)
code that takes care of permissions and running GS callbacks, and
the I/O code (->bdrv_co_invalidate_cache), which is left running in
the I/O coroutine.
The only side effect is that bdrv_co_invalidate_cache is no longer
recursive, and neither is any direct call to
bdrv_invalidate_cache().
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220209105452.1694545-6-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Following the bdrv_activate renaming, also change the names
of the respective callers.
bdrv_invalidate_cache_all -> bdrv_activate_all
blk_invalidate_cache -> blk_activate
test_sync_op_invalidate_cache -> test_sync_op_activate
No functional change intended.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220209105452.1694545-5-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This function is currently just a wrapper for bdrv_invalidate_cache(),
but in the future it will contain the part of bdrv_co_invalidate_cache()
that always has to be protected by the BQL, leaving the rest in the I/O
coroutine.
Replace all bdrv_invalidate_cache() invocations with bdrv_activate().
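In its initial form the new function is therefore trivial; a sketch
(return type and signature assumed):

    int bdrv_activate(BlockDriverState *bs, Error **errp)
    {
        /* For now just forward; the BQL-only (GS) parts of
         * bdrv_co_invalidate_cache() will move here in a later patch. */
        return bdrv_invalidate_cache(bs, errp);
    }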
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220209105452.1694545-4-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>