The disposition (action) for any given signal is global to the process.
When two threads run coroutine-sigaltstack's qemu_coroutine_new()
concurrently, they may interfere with each other: one of them may revert
the SIGUSR2 handler to SIG_DFL between the other thread (a) setting up
coroutine_trampoline() as the handler and (b) raising SIGUSR2. That
SIGUSR2 will then terminate the QEMU process abnormally.
We have to ensure that only one thread at a time can modify the
process-global SIGUSR2 handler. To do so, wrap the whole section where
that is done in a mutex.
Alternatively, we could for example have the SIGUSR2 handler always be
coroutine_trampoline(), so there would be no need to invoke sigaction()
in qemu_coroutine_new(). Laszlo has posted a patch to do so here:
https://lists.nongnu.org/archive/html/qemu-devel/2021-01/msg05962.html
However, given that coroutine-sigaltstack is more of a fallback
implementation for platforms that do not support ucontext, that change
may be a bit too invasive to be comfortable with. The mutex proposed
here may negatively impact performance, but the change is much simpler.
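For illustration, here is a minimal standalone sketch of the locking
idea (simplified; not the actual coroutine-sigaltstack code):

    #include <pthread.h>
    #include <signal.h>
    #include <string.h>

    static pthread_mutex_t sigusr2_mutex = PTHREAD_MUTEX_INITIALIZER;

    static void install_and_raise_sigusr2(void (*trampoline)(int))
    {
        struct sigaction sa, osa;

        /* Serialize the process-global sigaction() dance. */
        pthread_mutex_lock(&sigusr2_mutex);

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = trampoline;
        sa.sa_flags = SA_ONSTACK;       /* run on the coroutine's sigaltstack */
        sigfillset(&sa.sa_mask);
        sigaction(SIGUSR2, &sa, &osa);  /* (a) install the trampoline */

        raise(SIGUSR2);                 /* (b) enter it on the alternate stack */

        sigaction(SIGUSR2, &osa, NULL); /* restore the previous disposition */
        pthread_mutex_unlock(&sigusr2_mutex);
    }

With the lock held from (a) until the old disposition is restored, a
concurrent qemu_coroutine_new() can no longer revert the handler to
SIG_DFL in between.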
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210125120305.19520-1-mreitz@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
And consequently drop it from 297's skip list.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-11-mreitz@redhat.com>
And consequently drop it from 297's skip list.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210118105720.14824-10-mreitz@redhat.com>
Issuing 'stop' on the VM drains all nodes. If the mirror job has many
large requests in flight, this may lead to significant I/O, which can
look as if 'stop' made the job try to complete (which is exactly what
129 is supposed to verify does not happen).
We can limit the I/O in flight by limiting the buffer size, so mirror
will make very little progress during the 'stop' drain.
(We do not need to do anything about commit, which has a buffer size of
512 kB by default; or backup, which goes cluster by cluster. Once we
have asynchronous requests for backup, that will change, but then we can
fine-tune the backup job to only perform a single request on a very
small chunk, too.)
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-9-mreitz@redhat.com>
Before this patch, test_block_commit() performs an active commit, which
under the hood is a mirror job. If we want to test various block jobs,
we should perhaps run an actual commit job instead.
Doing so requires adding an overlay above the source node before the
commit is done (and then specifying the source node as the top node for
the commit job).
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-8-mreitz@redhat.com>
Throttling on the BB has not affected block jobs in a while, so it is
possible that one of the jobs in 129 finishes before the VM is stopped.
We can fix that by running the job from a throttle node.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-7-mreitz@redhat.com>
@busy is false when the job is paused, which happens all the time
because that is how jobs yield (e.g. for mirror at least since commit
565ac01f8d).
Back when 129 was added (2015), perhaps there was no better way of
checking whether the job was still actually running. Now we have the
@status field (as of 58b295ba52, i.e. 2018), which can give us exactly
that information.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-6-mreitz@redhat.com>
Instead of checking iotests.py only, check all Python files in the
qemu-iotests/ directory. Of course, most of them do not pass, so there
is an extensive skip list for now. (The only files that do pass are
209, 254, 283, and iotests.py.)
(Alternatively, we could have the opposite, i.e. an explicit list of
files that we do want to check, but I think it is better to check files
by default.)
Unless started in debug mode (./check -d), the output has no information
on which files are tested, so we will not have a problem e.g. with
backports, where some files may be missing when compared to upstream.
Besides the technical rewrite, some more things are changed:
- For the pylint invocation, PYTHONPATH is adjusted. This mirrors
setting MYPYPATH for mypy.
- Also, MYPYPATH is now derived from PYTHONPATH, so that we include
paths set by the environment. Maybe at some point we want to let the
check script add '../../python/' to PYTHONPATH so that iotests.py does
not need to do that.
- Passing --notes=FIXME,XXX to pylint suppresses warnings for TODO
comments. TODO is fine, we do not need 297 to complain about such
comments.
- The "Success" line from mypy's output is suppressed, because (A) it
does not add useful information, and (B) it would leak information
about the files having been tested to the reference output, which we
decidedly do not want.
Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210118105720.14824-3-mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
There are a couple of environment variables that we fetch with
os.environ.get() without supplying a default. Clearly they are required
and expected to be set by the ./check script (as evidenced by
execute_setup_common(), which checks for test_dir and
qemu_default_machine to be set, and aborts if they are not).
Using .get() this way has the disadvantage of returning an Optional[str]
type, which mypy will complain about when tests just assume these values
to be str.
Use [] instead, which raises a KeyError for environment variables that
are not set. When this exception is raised, catch it and move the abort
code from execute_setup_common() there.
Drop the 'assert iotests.sock_dir is not None' from iotest 300, because
that sort of thing is precisely what this patch wants to prevent.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-2-mreitz@redhat.com>
This patch completes the series with the COR-filter applied to
block-stream operations.
Adding the filter makes it possible to later implement discarding copied
regions in backing files during the block-stream job, to reduce disk
usage (we need control over permissions for that).
Also, the filter is now smart enough to do copy-on-read with a specified
base, so guest reads benefit even when block-streaming only part of the
backing chain.
Several iotests are slightly modified due to the filter insertion.
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201216061703.70908-14-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Add a direct link to the target bs for convenience and to simplify the
following commit, which will insert a COR filter above the target bs.
This is part of an original commit written by Andrey.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201216061703.70908-13-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
test_stream_parallel runs parallel stream jobs that intersect so that
the top of one is the base of another. This is OK now, but it would
become a problem if we inserted the filter, as one job would want to use
another job's filter as its above_base node.
The correct thing to do is to move to the new interface: a "bottom"
argument instead of base. This guarantees that the jobs' actions do not
intersect.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201216061703.70908-12-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The code already does not freeze the base node, and we try to prepare it
for the situation where the base node is changed during the operation.
In other words, block-stream does not own the base node.
Let's introduce a new interface that should replace the current one and
that is in better accordance with the code. Specifying the bottom node
instead of the base, and requiring it to be a non-filter, gives us the
following benefits:
- it drops the difference between above_base and base_overlay, which
  will be renamed to just bottom once the old interface is dropped
- it gives a clean way to work with parallel streams/commits on the same
  backing chain, which would otherwise become a problem when we
  introduce a filter for the stream job
- it is a cleaner interface: nobody will be surprised that the base node
  may disappear during block-stream when there is no word about "base"
  in the interface.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201216061703.70908-11-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The stream job, in stream_prepare(), calls bdrv_change_backing_file() to
change the backing file in the metadata of bs.
It may either use the backing-file parameter given by the user or just
take the filename of the base at job start.
The backing file format is determined from the base at job finish.
There are some problems with this design; we solve only two of them with
this patch:
1. Consider the scenario with backing-file unset. The current concept of
stream supports changing the base during the job (we don't freeze the
link to the base). So we should not save the base filename at job start
- let's determine the name of the base at job finish.
2. Using the direct base to determine the filename and format is not
very good: the base node may be a filter, so its filename may be JSON,
and its format_name is not suitable for storing into the qcow2 metadata
as the backing file format
- let's use unfiltered_base instead.
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[vsementsov: change commit subject, change logic in stream_prepare]
Message-Id: <20201216061703.70908-10-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
If the BDRV_REQ_PREFETCH flag is set, skip the idle read/write
operations in the COR driver. This can be taken into account for
optimizing the COR algorithms. At the moment, that check is made during
the block-stream job.
Add the BDRV_REQ_PREFETCH flag to the supported_read_flags of the
COR-filter.
Also, modify the comment for the BDRV_REQ_PREFETCH flag, as we are going
to use it on its own and pass it to the COR-filter driver for further
processing.
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201216061703.70908-9-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Add the new member supported_read_flags to the BlockDriverState
structure. It will control the flags set for copy-on-read operations.
Make the generic block layer evaluate the supported read flags before
they go to a block driver.
Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[vsementsov: use assert instead of abort]
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201216061703.70908-8-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The test case #310 is similar to #216 by Max Reitz. The difference is
that test #310 specifies a bottom node for the COR filter driver.
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[vsementsov: detach backing to test reads from top, limit to qcow2]
Message-Id: <20201216061703.70908-7-vsementsov@virtuozzo.com>
[mreitz: Add "# group:" line]
Signed-off-by: Max Reitz <mreitz@redhat.com>
Add an option to limit copy-on-read operations to a specified sub-chain
of the backing chain, to make the copy-on-read filter useful for the
block-stream job.
Suggested-by: Max Reitz <mreitz@redhat.com>
Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[vsementsov: change subject, modified to freeze the chain,
do some fixes]
Message-Id: <20201216061703.70908-6-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Provide the possibility to pass the 'filter-node-name' parameter to the
block-stream job as it is done for the commit block job.
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[vsementsov: comment indentation, s/Since: 5.2/Since: 6.0/]
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201216061703.70908-5-vsementsov@virtuozzo.com>
[mreitz: s/commit/stream/]
Signed-off-by: Max Reitz <mreitz@redhat.com>
Provide an API for COR-filter removal. Also, drop the filter child
permissions to those of an inactive state when the filter node is being
removed.
To insert the filter, the generic block layer function
bdrv_insert_node() can be used.
The new function bdrv_cor_filter_drop() may be considered an
intermediate solution until the QEMU permission update system has been
overhauled. Then we will be able to implement the API function
bdrv_remove_node() in the generic block layer.
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201216061703.70908-4-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Provide an API for inserting a node into a backing chain.
Suggested-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201216061703.70908-3-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Add support for the recently introduced functions
bdrv_co_preadv_part()
and
bdrv_co_pwritev_part()
to the COR-filter driver.
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201216061703.70908-2-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Unfortunately, commit "iotests: handle tmpfs" breaks running iotests
with -nbd -nocache, as _check_o_direct tries to create
$TEST_IMG.test_o_direct; but in the case of NBD, TEST_IMG is something
like nbd+unix:///..., and the test fails with the message
qemu-img: nbd+unix:///?socket[...]test_o_direct: Protocol driver
'nbd' does not support image creation, and opening the image
failed: Failed to connect to '/tmp/tmp.[...]/nbd/test_o_direct': No
such file or directory
Use TEST_DIR instead.
Fixes: cfdca2b9f9
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201218182012.47607-1-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into staging
# gpg: Signature made Mon 25 Jan 2021 09:05:51 GMT
# gpg: using RSA key EF04965B398D6211
# gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>" [marginal]
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211
* remotes/jasowang/tags/net-pull-request:
net: checksum: Introduce fine control over checksum type
net: checksum: Add IP header checksum calculation
net: checksum: Skip fragmented IP packets
net: Fix handling of id in netdev_add and netdev_del
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/philmd-gitlab/tags/sdmmc-20210124' into staging
SD/MMC patches
- Various improvements for SD cards in SPI mode (Bin Meng)
# gpg: Signature made Sun 24 Jan 2021 19:16:55 GMT
# gpg: using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]
# Primary key fingerprint: FAAB E75E 1291 7221 DCFD 6BB2 E3E3 2C2C DEAD C0DE
* remotes/philmd-gitlab/tags/sdmmc-20210124:
hw/sd: sd.h: Cosmetic change of using spaces
hw/sd: ssi-sd: Use macros for the dummy value and tokens in the transfer
hw/sd: ssi-sd: Fix the wrong command index for STOP_TRANSMISSION
hw/sd: ssi-sd: Add a state representing Nac
hw/sd: ssi-sd: Suffix a data block with CRC16
util: Add CRC16 (CCITT) calculation routines
hw/sd: sd: Drop sd_crc16()
hw/sd: sd: Support CMD59 for SPI mode
hw/sd: ssi-sd: Fix incorrect card response sequence
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
At present net_checksum_calculate() blindly calculates all types of
checksums (IP, TCP, UDP). Some NICs may have a per-type setting in
their BDs (buffer descriptors) to control which checksums should be
offloaded. To support such hardware behavior, introduce a 'csum_flag'
parameter to the net_checksum_calculate() API to allow fine control
over which type of checksum is calculated.
Existing users of this API are updated accordingly.
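As a rough illustration of how a caller might use the new parameter
(the CSUM_* flag names follow the commit's intent and are shown here as
an assumption, not as the definitive header contents):

    #include <stdbool.h>
    #include <stdint.h>
    #include "net/checksum.h"

    /* Offload only the checksum types the NIC model's BD flags request. */
    static void nic_offload_checksums(uint8_t *pkt, int len,
                                      bool ip, bool tcp, bool udp)
    {
        int csum_flag = 0;

        if (ip)  csum_flag |= CSUM_IP;
        if (tcp) csum_flag |= CSUM_TCP;
        if (udp) csum_flag |= CSUM_UDP;

        if (csum_flag) {
            net_checksum_calculate(pkt, len, csum_flag);
        }
    }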
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
At present net_checksum_calculate() only calculates the TCP/UDP checksum
in an IP packet, but assumes the IP header checksum to be provided by
the software; e.g., the Linux kernel always calculates the IP header
checksum. However, this might not always be the case; e.g., for an
IP-checksum-offload-enabled stack like VxWorks, the IP header checksum
can be zero.
This adds the checksum calculation of the IP header.
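For reference, the IP header checksum is the textbook RFC 1071
ones'-complement sum over the header with the checksum field zeroed
first; the sketch below is only an illustration, not the QEMU
implementation:

    #include <stddef.h>
    #include <stdint.h>

    /* Ones'-complement sum over the IPv4 header; the caller zeroes the
     * checksum field (bytes 10-11) beforehand and stores the result there
     * in network byte order. */
    static uint16_t ip_header_checksum(const uint8_t *hdr, size_t ihl_bytes)
    {
        uint32_t sum = 0;

        for (size_t i = 0; i + 1 < ihl_bytes; i += 2) {
            sum += (uint32_t)((hdr[i] << 8) | hdr[i + 1]);
        }
        while (sum >> 16) {
            sum = (sum & 0xffff) + (sum >> 16);   /* fold the carries */
        }
        return (uint16_t)~sum;
    }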
Signed-off-by: Guishan Qin <guishan.qin@windriver.com>
Signed-off-by: Yabing Liu <yabing.liu@windriver.com>
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Jason Wang <jasowang@redhat.com>
To calculate the TCP/UDP checksum we need the whole datagram. Unless the
hardware has some logic to collect all fragments and checksum the
reassembled datagram first, the calculation can only be done by the
network stack, which is normally the case for the NICs we have seen so
far.
Skip these fragmented IP packets to avoid checksum corruption.
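The check itself is simple; illustratively (using the BSD struct ip
rather than QEMU's raw-byte parsing), a packet is a fragment if the
More-Fragments bit is set or the fragment offset is non-zero:

    #include <arpa/inet.h>
    #include <netinet/ip.h>
    #include <stdbool.h>

    static bool ip_is_fragment(const struct ip *iph)
    {
        /* MF set or offset != 0 means this is one fragment of a larger
         * datagram, so a full TCP/UDP checksum cannot be computed here. */
        return (ntohs(iph->ip_off) & (IP_MF | IP_OFFMASK)) != 0;
    }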
Signed-off-by: Markus Carlstedt <markus.carlstedt@windriver.com>
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
CLI -netdev accumulates in option group "netdev".
Before commit 08712fcb85 "net: Track netdevs in NetClientState rather
than QemuOpt", netdev_add added to the option group, and netdev_del
removed from it, both HMP and QMP. Thus, every netdev had a
corresponding QemuOpts in this option group.
Commit 08712fcb85 dropped this for QMP netdev_add and both netdev_del.
Now a netdev has a corresponding QemuOpts only when it was created
with CLI or HMP. Two issues:
* QMP and HMP netdev_del can leave QemuOpts behind, breaking HMP
netdev_add. Reproducer:
$ qemu-system-x86_64 -S -display none -nodefaults -monitor stdio
QEMU 5.1.92 monitor - type 'help' for more information
(qemu) netdev_add user,id=net0
(qemu) info network
net0: index=0,type=user,net=10.0.2.0,restrict=off
(qemu) netdev_del net0
(qemu) info network
(qemu) netdev_add user,id=net0
upstream-qemu: Duplicate ID 'net0' for netdev
Try "help netdev_add" for more information
Fix by restoring the QemuOpts deletion in qmp_netdev_del(), but with
a guard, because the QemuOpts need not exist.
* QMP netdev_add loses its "no duplicate ID" check. Reproducer:
$ qemu-system-x86_64 -S -display none -qmp stdio
{"QMP": {"version": {"qemu": {"micro": 92, "minor": 1, "major": 5}, "package": "v5.2.0-rc2-1-g02c1f0142c"}, "capabilities": ["oob"]}}
{"execute": "qmp_capabilities"}
{"return": {}}
{"execute": "netdev_add", "arguments": {"type": "user", "id":"net0"}}
{"return": {}}
{"execute": "netdev_add", "arguments": {"type": "user", "id":"net0"}}
{"return": {}}
Fix by adding a duplicate ID check to net_client_init1() to replace
the lost one. The check is redundant for callers where QemuOpts
still checks, i.e. for CLI and HMP.
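Conceptually, the restored check looks something like the sketch below
(helper placement and message wording are assumptions for illustration;
the actual patch may implement it differently):

    #include <stdbool.h>
    #include "net/net.h"
    #include "qapi/error.h"

    /* Reject a netdev ID that is already in use, now that QemuOpts no
     * longer performs this check for QMP netdev_add. */
    static bool netdev_id_is_duplicate(const char *id, Error **errp)
    {
        if (qemu_find_netdev(id)) {
            error_setg(errp, "Duplicate ID '%s' for netdev", id);
            return true;
        }
        return false;
    }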
Reported-by: Andrew Melnichenko <andrew@daynix.com>
Fixes: 08712fcb85
Cc: qemu-stable@nongnu.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-tcg-20210124' into staging
Fix tcg constant temp overflow.
Fix running during atomic single-step.
Partial support for apple silicon.
Cleanups for accel/tcg.
# gpg: Signature made Sun 24 Jan 2021 18:08:57 GMT
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* remotes/rth-gitlab/tags/pull-tcg-20210124:
tcg: Restart code generation when we run out of temps
tcg: Toggle page execution for Apple Silicon
accel/tcg: Restrict cpu_io_recompile() from other accelerators
accel/tcg: Declare missing cpu_loop_exit*() stubs
accel/tcg: Restrict tb_gen_code() from other accelerators
accel/tcg: Move tb_flush_jmp_cache() to cputlb.c
accel/tcg: Make cpu_gen_init() static
tcg: Optimize inline dup_const for MO_64
qemu/compiler: Split out qemu_build_not_reached_always
tcg: update the cpu running flag in cpu_exec_step_atomic
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
QEMU coding convention prefers spaces over tabs.
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210123104016.17485-15-bmeng.cn@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
At present the code uses hardcoded numbers (0xff/0xfe) for the dummy
value and the block start token. Replace them with macros.
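For example (macro names chosen here for illustration only; the actual
names in hw/sd/ssi-sd.c may differ):

    /* Magic bytes on the SPI bus, named instead of hardcoded. */
    #define SSI_DUMMY_VALUE     0xff    /* bus idle / filler byte    */
    #define SSI_TOKEN_SINGLE    0xfe    /* start-of-data-block token */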
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210123104016.17485-12-bmeng.cn@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This fixes the wrong command index for STOP_TRANSMISSION, the command
required to interrupt a multiple block read, in the old code. It should
be CMD12 (0x4c), not CMD13 (0x4d).
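The arithmetic behind the two values: in SPI mode the first byte of a
command frame is 0x40 ORed with the command index, so (illustratively):

    /* 0x40 | 12 == 0x4c -> CMD12 (STOP_TRANSMISSION)
     * 0x40 | 13 == 0x4d -> CMD13 (SEND_STATUS), which is what the old
     * code mistakenly matched. */
    #define SD_SPI_CMD(index)   (0x40 | (index))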
Fixes: 775616c3ae ("Partial SD card SPI mode support")
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210123104016.17485-10-bmeng.cn@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Per the "Physical Layer Specification Version 8.00" chapter 7.5.2,
"Data Read", there is a minimum 8 clock cycles (Nac) after the card
response and before data block shows up on the data out line. This
applies to both single and multiple block read operations.
Current implementation of single block read already satisfies the
timing requirement as in the RESPONSE state after all responses are
transferred the state remains unchanged. In the next 8 clock cycles
it jumps to DATA_START state if data is ready.
However we need an explicit state when expanding our support to
multiple block read in the future. Let's add a new state PREP_DATA
explicitly in the ssi-sd state machine to represent Nac.
Note we don't change the single block read state machine to let it
jump from RESPONSE state to DATA_START state as that effectively
generates a 16 clock cycles Nac, which might not be safe. As the
spec says the maximum Nac shall be calculated from several fields
encoded in the CSD register, we don't want to bother updating CSD
to ensure our Nac is within range to complicate things.
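Conceptually, the extended state machine looks like this (enumerator
names are illustrative, not necessarily those used in hw/sd/ssi-sd.c):

    enum ssi_sd_sketch_state {
        SSI_SD_CMD,         /* receiving the 6-byte command frame          */
        SSI_SD_RESPONSE,    /* shifting out the command response           */
        SSI_SD_PREP_DATA,   /* new: idle 8 clock cycles to model Nac       */
        SSI_SD_DATA_START,  /* send the data start token (0xfe)            */
        SSI_SD_DATA_READ,   /* stream the data block followed by its CRC16 */
    };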
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210123104016.17485-9-bmeng.cn@gmail.com>
[PMD: Change VMState version id 4 -> 5]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Per the SD spec, a valid data block is suffixed with a 16-bit CRC
generated by the standard CCITT polynomial x^16 + x^12 + x^5 + 1. This
part is currently missing in the ssi-sd state machine. Without it, all
data block transfers fail in guest software because the expected CRC16
is missing on the data out line.
Fixes: 775616c3ae ("Partial SD card SPI mode support")
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210123104016.17485-8-bmeng.cn@gmail.com>
[PMD: Change VMState version id 3 -> 4,
check s->mode validity in post_load()]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Import CRC16 calculation routines from Linux kernel v5.10:
include/linux/crc-ccitt.h
lib/crc-ccitt.c
to QEMU:
include/qemu/crc-ccitt.h
util/crc-ccitt.c
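For reference, a bit-at-a-time version of the non-reflected CCITT
polynomial x^16 + x^12 + x^5 + 1 (the variant SD cards use for the
data-block CRC, with an initial value of 0) looks as follows; this is
only an illustration, not the imported table-driven code:

    #include <stddef.h>
    #include <stdint.h>

    /* CRC-16 with polynomial 0x1021, MSB first, no bit reflection. */
    static uint16_t crc16_ccitt_ref(uint16_t crc, const uint8_t *buf,
                                    size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)(buf[i] << 8);
            for (int bit = 0; bit < 8; bit++) {
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                     : (uint16_t)(crc << 1);
            }
        }
        return crc;
    }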
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20210123104016.17485-7-bmeng.cn@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[PMD: Restrict compilation to system emulation]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
commit f6fb1f9b31 ("sdcard: Correct CRC16 offset in sd_function_switch()")
changed the 16-bit CRC to be stored at offset 64. In fact, this CRC
calculation is completely wrong. The original code wants to calculate
the CRC16 of the first 64 bytes of sd->data[]; however, passing 64 as
the `width` to sd_crc16() actually counts 256 bytes starting from
`message` for the CRC16 calculation, which is not what we want.
Besides that, it seems the existing sd_crc16() algorithm does not match
the SD spec, which says the CRC16 is the CCITT one, but the calculation
does not produce the expected result. It turns out the CRC16 was never
transferred outside the SD core, as in sd_read_byte() we see:
if (sd->data_offset >= 64)
sd->state = sd_transfer_state;
Given the above reasons, let's drop it.
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Tested-by: Pragnesh Patel <pragnesh.patel@sifive.com>
Reviewed-by: Pragnesh Patel <pragnesh.patel@sifive.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210123104016.17485-6-bmeng.cn@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Some large translation blocks can generate so many unique
constants that we run out of temps to hold them. In this
case, longjmp back to the start of code generation and
restart with a smaller translation block.
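The general pattern, stripped of all TCG specifics, is the classic
setjmp()/longjmp() retry loop sketched below (purely illustrative, not
the actual tcg/ code):

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf restart_env;
    static int temps_used, temps_budget = 8;

    static void alloc_temp(void)
    {
        if (++temps_used > temps_budget) {
            longjmp(restart_env, 1);        /* abandon this attempt */
        }
    }

    static void translate_block(int max_insns)
    {
        temps_used = 0;
        for (int i = 0; i < max_insns; i++) {
            alloc_temp();                   /* pretend each insn needs a temp */
        }
        printf("translated %d insns\n", max_insns);
    }

    int main(void)
    {
        volatile int max_insns = 32;

        if (setjmp(restart_env)) {
            max_insns /= 2;                 /* retry with a smaller block */
        }
        translate_block(max_insns);
        return 0;
    }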
Buglink: https://bugs.launchpad.net/bugs/1912065
Tested-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
After the card is put into SPI mode, the CRC check for all commands
including CMD0 is done according to the CMD59 setting. But this command
is currently unimplemented. Simply allow the decoding of CMD59, but
leave the CRC unchecked.
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Tested-by: Pragnesh Patel <pragnesh.patel@sifive.com>
Reviewed-by: Pragnesh Patel <pragnesh.patel@sifive.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210123104016.17485-5-bmeng.cn@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Per the "Physical Layer Specification Version 8.00" chapter 7.5.1,
"Command/Response", there is a minimum 8 clock cycles (Ncr) before
the card response shows up on the data out line. However current
implementation jumps directly to the sending response state after
all 6 bytes command is received, which is a spec violation.
Add a new state PREP_RESP in the ssi-sd state machine to handle it.
Fixes: 775616c3ae ("Partial SD card SPI mode support")
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Tested-by: Pragnesh Patel <pragnesh.patel@sifive.com>
Reviewed-by: Pragnesh Patel <pragnesh.patel@sifive.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210123104016.17485-4-bmeng.cn@gmail.com>
[PMD: Change VMState version id 2 -> 3]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Pages can't be both writable and executable at the same time on Apple
Silicon. macOS provides a public API to toggle write protection [1] for
JIT applications, like TCG.
1. https://developer.apple.com/documentation/apple_silicon/porting_just-in-time_compilers_to_apple_silicon
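The core of the API usage is small; a simplified sketch is shown below
(the real patch wraps this in the qemu_thread_jit_* helpers mentioned
in the notes below, uses a more careful SDK-version guard, and the JIT
buffer must additionally be mmap()ed with MAP_JIT):

    #include <pthread.h>

    static inline void jit_write_begin(void)
    {
    #if defined(__APPLE__) && defined(__arm64__)
        pthread_jit_write_protect_np(0);   /* make the JIT region writable */
    #endif
    }

    static inline void jit_write_end(void)
    {
    #if defined(__APPLE__) && defined(__arm64__)
        pthread_jit_write_protect_np(1);   /* make the JIT region executable */
    #endif
    }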
Tested-by: Alexander Graf <agraf@csgraf.de>
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20210113032806.18220-1-r.bolshakov@yadro.com>
[rth: Inline the qemu_thread_jit_* functions;
drop the MAP_JIT change for a follow-on patch.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
As cpu_io_recompile() is only called within TCG accelerator
in cputlb.c, declare it locally.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210117164813.4101761-6-f4bug@amsat.org>
[rth: Adjust vs changed tb_flush_jmp_cache patch.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The cpu_loop_exit*() functions are declared in
accel/tcg/cpu-exec-common.c and are not available when the TCG
accelerator is not built. Add stubs so that linking without TCG
succeeds.
Problematic files:
- hw/semihosting/console.c in qemu_semihosting_console_inc()
- hw/ppc/spapr_hcall.c in h_confer()
- hw/s390x/ipl.c in s390_ipl_reset_request()
- hw/misc/mips_itu.c
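The stubs themselves only need to satisfy the linker, since these paths
must be unreachable without TCG; a minimal sketch of one of them (the
real stubs may assert differently):

    #include <stdlib.h>

    typedef struct CPUState CPUState;   /* opaque for the stub's purposes */

    void cpu_loop_exit(CPUState *cpu)
    {
        (void)cpu;
        abort();                        /* never reached in a TCG-less build */
    }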
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210117164813.4101761-5-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
tb_gen_code() is only called within the TCG accelerator; declare it locally.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210117164813.4101761-4-f4bug@amsat.org>
[rth: Adjust vs changed tb_flush_jmp_cache patch.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Move and make the function static, as the only users
are here in cputlb.c.
Suggested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
cpu_gen_init() is TCG-specific and only used in tcg/translate-all.c.
There is no need to export it to other accelerators; declare it static.
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210117164813.4101761-2-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>