We also pass --disable-plugins for the two mingw-based cross builds:
although they have dlltool, they seem to be unhappy linking.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
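A hedged illustration of the configure change described above; the cross prefix shown is the usual mingw64 one, but the exact job definition may differ:
../configure --cross-prefix=x86_64-w64-mingw32- --disable-plugins
make -j"$(nproc)"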
Maintaining two sets of containers for test building is silly. While
it makes sense for the QEMU cross-compile targets to have their own
fat containers built by lcitool, we might as well merge the other
random Debian-based compilers into the same one used on GitLab.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20231029145033.592566-15-alex.bennee@linaro.org>
Maintaining two sets of containers for test building is silly. While
it makes sense for the QEMU cross-compile targets to have their own
fat containers built by lcitool, we might as well merge the other
random Debian-based compilers into the same one used on GitLab.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20231029145033.592566-14-alex.bennee@linaro.org>
Maintaining two sets of containers for test building is silly. While
it makes sense for the QEMU cross-compile targets to have their own
fat containers built by lcitool, we might as well merge the other
random Debian-based compilers into the same one used on GitLab.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20231029145033.592566-13-alex.bennee@linaro.org>
Maintaining two sets of containers for test building is silly. While
it makes sense for the QEMU cross-compile targets to have their own
fat containers built by lcitool, we might as well merge the other
random Debian-based compilers into the same one used on GitLab.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20231029145033.592566-12-alex.bennee@linaro.org>
Maintaining two sets of containers for test building is silly. While
it makes sense for the QEMU cross-compile targets to have their own
fat containers built by lcitool, we might as well merge the other
random Debian-based compilers into the same one used on GitLab.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20231029145033.592566-11-alex.bennee@linaro.org>
Maintaining two sets of containers for test building is silly. While
it makes sense for the QEMU cross-compile targets to have their own
fat containers built by lcitool, we might as well merge the other
random Debian-based compilers into the same one used on GitLab.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20231029145033.592566-10-alex.bennee@linaro.org>
Maintaining two sets of containers for test building is silly. While
it makes sense for the QEMU cross-compile targets to have their own
fat containers built by lcitool, we might as well merge the other
random Debian-based compilers into the same one used on GitLab.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20231029145033.592566-9-alex.bennee@linaro.org>
sh4 is another target which doesn't work with bookworm compilers. To
keep on buster, move across to the debian-legacy-test-cross image and
update accordingly.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231030135715.800164-1-alex.bennee@linaro.org>
Maintaining two sets of containers for test building is silly. While
it makes sense for the QEMU cross-compile targets to have their own
fat containers built by lcitool, we might as well merge the other
random Debian-based compilers into the same one used on GitLab.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20231029145033.592566-7-alex.bennee@linaro.org>
We have the compiler and, with a few updates, a container that can
build QEMU, so we should at least run the check-tcg smoke tests.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20231029145033.592566-6-alex.bennee@linaro.org>
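A minimal sketch of the smoke-test run mentioned above; the target list is purely illustrative, not the one used by the real job:
../configure --target-list=hppa-linux-user
make -j"$(nproc)"
make check-tcg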
Having dropped alpha, we now also drop xtensa, as we don't have the
compiler in this image. It's not all doom and gloom though: a number
of other targets have gained softmmu TCG tests, so we can add them. We
will take care of the other targets with their own containers in
future commits.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20231029145033.592566-5-alex.bennee@linaro.org>
The current bookworm compiler doesn't build the static binaries due to
bug #1054412 and it might be a while before it gets fixed. The problem
of keeping older architecture compilers running isn't going to go away,
so let's prepare the ground. Create a legacy container and move some
tests around so the others can get upgraded.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20231029145033.592566-4-alex.bennee@linaro.org>
This job has been failing for weeks. Let's mark it as manual until
it gets fixed.
Message-Id: <82aa015a-ca94-49ce-beec-679cc175b726@redhat.com>
Acked-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Thomas Huth <thuth@redhat.com>
We move a couple of targets out of the avocado runs because there are
no tests to run. Tricore already has some coverage. The cris target
only really has check-tcg tests, but it's getting harder to find
anything that packages the compiler.
To reduce the noise of CANCEL messages, we also set AVOCADO_TAGS
appropriately so we filter down the number of tests we attempt.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20231009164104.369749-5-alex.bennee@linaro.org>
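A hedged example of the filtering mentioned above, assuming the AVOCADO_TAGS make variable is passed through to avocado's tag filter (the tag value is illustrative):
make check-avocado AVOCADO_TAGS="arch:aarch64"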
We need this to test some TPM stuff.
Reviewed-by: "Daniel P. Berrangé" <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20231009164104.369749-4-alex.bennee@linaro.org>
The Cirrus CI jobs have been non-gating for a while to let us build
confidence in their reliability. Aside from periodic dependency
problems when FreeBSD Ports switches to be based on a new FreeBSD
image version, the jobs have been reliable. It is thus worth making
them gating to prevent build failures being missed during merges.
Signed-off-by: "Daniel P. Berrangé" <berrange@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20230912184130.3056054-5-berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230914155422.426639-8-alex.bennee@linaro.org>
On the GitLab side we're invoking the Cirrus CI job using the
cirrus-run tool, which speaks to the Cirrus REST API. Cirrus
sometimes takes 5-10 minutes to actually schedule the task,
and thus the execution time of 'cirrus-run' inside GitLab will
be slightly longer than the execution time of the Cirrus CI
task.
Setting the timeout in the GitLab CI job should thus be done
in relation to the timeout set for the Cirrus CI job. While
Cirrus CI defaults to 60 minutes, it is better to set this
explicitly, and make the relationship between the jobs
explicit.
Signed-off-by: "Daniel P. Berrangé" <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20230912184130.3056054-4-berrange@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230914155422.426639-7-alex.bennee@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230914155422.426639-3-alex.bennee@linaro.org>
The FreeBSD CI job started to fail due to linking problems ... time
to update to the latest version to get this fixed.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20230823144533.230477-1-thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230829161528.2707696-6-alex.bennee@linaro.org>
The `ccache` tool can be very effective at reducing compilation times
when re-running pipelines with only minor changes each time. For example,
a fresh 'build-system-fedora' job will typically take 20 minutes on the
gitlab.com shared runners. With ccache this is reduced to as little as
6 minutes.
Normally meson would auto-detect the existence of ccache in $PATH and use
it automatically, but the way we wrap meson from configure breaks this,
as we're passing in a config file with explicitly set compiler paths.
Thus we need to add $CCACHE_WRAPPERSPATH to the front of $PATH. For
unknown reasons, though, when doing this in msys, gcc becomes unable to
invoke 'cc1' when run from meson. For msys we thus set CC='ccache gcc'
before invoking 'configure' instead.
A second problem with msys is that cache misses are incredibly
expensive, so enabling ccache massively slows down the build when
the cache isn't well populated. This is suspected to be a result of
the cost of spawning processes under the msys architecture. To deal
with this we set CCACHE_DEPEND=1, which enables ccache's 'depend mode'
strategy. This avoids extra spawning of the pre-processor during
cache misses, with the downside that ccache is less likely to find
a cache hit after semantically benign compiler flag changes.
This is the lesser of two evils, as otherwise we can't use ccache
at all under msys and remain inside the job time limit.
If people find that ccache hurts their pipelines, it can be disabled
by setting the CCACHE_DISABLE=1 environment variable in their GitLab
fork's CI settings.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230804111054.281802-2-berrange@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230829161528.2707696-2-alex.bennee@linaro.org>
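A hedged sketch of the environment described above; CCACHE_DEPEND and CCACHE_DISABLE are standard ccache variables, while the exact CI wiring may differ:
export PATH="$CCACHE_WRAPPERSPATH:$PATH"   # let meson pick up the ccache wrappers
export CCACHE_DEPEND=1                     # msys: avoid extra pre-processor invocations
../configure                               # on msys, CC='ccache gcc' is set instead
# to opt out entirely, set CCACHE_DISABLE=1 in a fork's GitLab CI settings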
Instead of having CI pick tomli from the vendored wheel at configure
time, place it in the containers.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This reverts commit e8e4298fea.
ensuregroup allows specifying both the acceptable versions of avocado
and a locked version to be used when avocado is not installed as a
system package. This lets us install avocado in pyvenv/ using
"mkvenv.py" and reuse the distro package on Fedora and CentOS Stream
(the only distros where it's available).
ensuregroup's usage of "(>=..., <=...)" constraints when evaluating
the distro package, and "==" constraints when installing it from PyPI,
makes it possible to avoid conflicts between the known-good version and
the package plugins included in the distro.
This is because package plugins have "==" constraints on the version
that is included in the distro, and using "pip install avocado==88.1"
on a venv that includes system packages will result in an error:
avocado-framework-plugin-varianter-yaml-to-mux 98.0 requires avocado-framework==98.0, but you have avocado-framework 88.1 which is incompatible.
avocado-framework-plugin-result-html 98.0 requires avocado-framework==98.0, but you have avocado-framework 88.1 which is incompatible.
But at the same time, if the venv does not include a system distribution
of avocado then we can install a known-good version and stick to LTS
releases.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1663
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
scripts/archive-source.sh needs meson in order to download the subprojects,
therefore meson needs to be part of the host environment in which VM-based
build jobs run.
Fixes: 2019cabfee ("meson: subprojects: replace submodules with wrap files", 2023-06-06)
Reported-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The FF_SCRIPT_SECTIONS=1 variable should ordinarily cause output from
each line of the job script to be presented in a collapsible section
with execution time listed.
While it works on Linux shared runners, when used with Windows runners
with PowerShell this option does not create any sections, and it
actually causes echoing of commands to be disabled, making the jobs
even harder to debug.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20230801130403.164060-9-berrange@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Building at -O2 adds 33% to the build time over -O0. IOW, a build that
takes 45 minutes at -O0 takes 60 minutes at -O2. Turning off debug
symbols drops it further, down to 38 minutes.
IOW, a "-O2 -g" build is 58% slower than a "-O0" build on msys in the
GitLab CI Windows shared runners.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20230801130403.164060-8-berrange@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
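As a hedged example, a configure invocation trading optimisation and debug info for build speed; both flags are standard QEMU configure options, though the actual msys job command line may differ:
../configure --disable-debug-info --extra-cflags=-O0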
The cache is used to hold the msys installer. Even if the build phase
fails, we should still populate the cache as the installer will be
valid for next time.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20230801130403.164060-6-berrange@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The gitlab cache is limited to only handle content within the
$CI_PROJECT_DIR hierarchy, and as such relative paths are always
implicitly relative to $CI_PROJECT_DIR.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20230801130403.164060-5-berrange@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
We currently reference an msys installer binary from mid-2022, which
means that, after installation, it immediately has to re-download a
bunch of newer content. This wastes precious CI time.
The msys project publishes an installer binary with a fixed URL that
always references the latest content. We cache the downloads in GitLab,
though, so once downloaded we would never re-fetch the installer,
leading back to the same problem.
To deal with this we also fetch the pgp signature for the installer
on every run, and compare that to the previously cached signature. If
the signature changes, we re-download the full installer.
This ensures we always have the latest installer for msys, while also
maximising use of the gitlab cache.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20230801130403.164060-4-berrange@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
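A hedged sketch of the caching scheme described above; the URL and file names are placeholders rather than the ones used by the real job:
URL="https://example.org/msys2-x86_64-latest.exe"   # fixed "latest" installer URL
curl -sSLo msys2-latest.sig "$URL.sig"              # the signature is tiny, fetch it every run
if ! cmp -s msys2-latest.sig cache/msys2-latest.sig; then
    curl -sSLo cache/msys2-latest.exe "$URL"        # signature changed: fetch the new installer
    cp msys2-latest.sig cache/msys2-latest.sig
fi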
It is hard to get visibility into where time is consumed in our Windows
msys jobs. Adding a few console log messages with timestamps will aid
our debugging.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20230801130403.164060-3-berrange@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Although they share a common parent, the two msys jobs still have
massive duplication in their script definitions that can easily be
collapsed.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20230801130403.164060-2-berrange@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
This keeps timing out on GitLab due to some qtests taking a long time.
As this is just ensuring the gcov machinery is working and not
attempting to be comprehensive, let's skip qtest in this run.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230630180423.558337-4-alex.bennee@linaro.org>
The coverage job wants to publish a coverage report on success, but the
tests might fail and in that case we need the meson logs for debugging.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230503145535.91325-3-berrange@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230630180423.558337-3-alex.bennee@linaro.org>
If not set explicitly, GitLab assumes 'when: on_success' as the
publishing criterion for artifacts. This is reasonable if the
artifact is an output deliverable of the job, but useless if the
artifact is a log file to be used for debugging job failures.
This change makes the desired criteria explicit for every job
that publishes artifacts.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230503145535.91325-2-berrange@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230630180423.558337-2-alex.bennee@linaro.org>
There are timeouts in the cross-i386-tci job that are related to plugins.
Restrict this job to basic TCI testing.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20230629130844.151453-1-richard.henderson@linaro.org>
Rename build directory to "build", like most other CI builds.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20230620153720.514882-2-marcandre.lureau@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
In forks, QEMU_CI=1 can be used to create a pipeline without auto-running
any jobs. In upstream, jobs always auto-run, which is the equivalent of
QEMU_CI=2. This change supports setting QEMU_CI=1 in upstream too, to
disable job auto-run. This can be used to preserve CI minutes when
re-pushing a branch to staging with a specific fix that only needs
testing in limited scenarios.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20230608164018.2520330-6-berrange@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
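An example of the push option described above, in the same style as the other ci.variable examples in this log:
git push gitlab staging -o ci.variable=QEMU_CI=1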
In upstream context we only run pipelines on staging branches, and
limited publishing jobs on the default branch.
We don't want to run pipelines on stable branches, or tags, because
the content will have already been tested on a staging branch before
getting pushed.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20230608164018.2520330-5-berrange@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
If the stable staging branches publish containers under the 'latest' tag
they will clash with containers published on the primary staging branch,
as well as with each other. This introduces logic that overrides the
container tag when jobs run against the stable staging branches.
The CI_COMMIT_REF_SLUG variable we use expands to the git branch name,
but with most special characters removed, such that it is valid as a
docker tag name, e.g. 'staging-8.0' will get a slug of 'staging-8-0'.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20230608164018.2520330-4-berrange@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
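A hedged sketch (shell pseudocode, not the actual .gitlab-ci.d rules) of the tag selection described above, where stable staging branches publish under their slug instead of 'latest':
case "$CI_COMMIT_REF_NAME" in
staging-[0-9]*) QEMU_CI_CONTAINER_TAG="$CI_COMMIT_REF_SLUG" ;;   # e.g. staging-8.0 -> staging-8-0
*)              QEMU_CI_CONTAINER_TAG=latest ;;
esac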
The CI rules have special logic for what happens in upstream. To enable
contributors who modify CI rules to test this logic, however, they need
to be able to override which repo is considered upstream. This
introduces the 'QEMU_CI_UPSTREAM' variable, which can be set at push time:
git push gitlab <branch> -o ci.variable=QEMU_CI_UPSTREAM=berrange
This makes it look as if my namespace is the actual upstream. Namespace
in this context refers to the path fragment in GitLab URLs that is above
the repository. Typically this will be the contributor's GitLab login
name.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20230608164018.2520330-3-berrange@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
We use a fixed container tag of 'latest' so that contributors' forks
don't end up with an ever-growing number of containers as they work
on throwaway feature branches.
This fixed tag causes problems running CI upstream in stable staging
branches, however, because a stable staging branch will publish old
container content that clashes with that needed by the primary staging
branch. This makes it impossible to reliably run CI pipelines in
parallel in upstream for different staging branches.
This introduces the $QEMU_CI_CONTAINER_TAG global variable as a way to
change which tag container publishing uses. Initially it can be set
by contributors as a git push option if they want to override the
default use of 'latest', e.g.
git push gitlab <branch> -o ci.variable=QEMU_CI_CONTAINER_TAG=fish
This is useful if contributors need to run pipelines for different
branches concurrently in their forks.
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20230608164018.2520330-2-berrange@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
We are not currently running a --disable-tcg test for arm64,
like we are for mips, ppc and s390x. We have a job for the
native aarch64 runner, but it is not run by default and it
is not helpful for normal developer testing without access
to qemu's private runner.
Use --without-default-features to eliminate most tests.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
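A hedged sketch of the kind of configuration such a job would exercise; these are real configure options, but the full job definition contains more than this:
../configure --cross-prefix=aarch64-linux-gnu- \
    --disable-tcg --without-default-features
make -j"$(nproc)"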
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* finish atomics revamp
* meson.build tweaks
* revert avocado update
* always upgrade/downgrade locally installed Python packages
* switch from submodules to subprojects
* remove --with-git= option
* rename --enable-pypi to --enable-download, control submodules and subprojects too
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (21 commits)
configure: remove --with-git-submodules=
build: remove git submodule handling from main makefile
meson: subprojects: replace berkeley-{soft,test}float-3 with wraps
pc-bios/s390-ccw: always build network bootloader
configure: move SLOF submodule handling to pc-bios/s390-ccw
meson: subprojects: replace submodules with wrap files
build: log submodule update from git-submodule.sh
git-submodule: allow partial update of .git-submodule-status
configure: rename --enable-pypi to --enable-download, control subprojects too
configure: remove --with-git= option
mkvenv: always pass locally-installed packages to pip
tests: Use separate virtual environment for avocado
Revert "tests/requirements.txt: bump up avocado-framework version to 101.0"
scsi/qemu-pr-helper: Drop support for 'old' libmultipath API
meson.build: Use -Wno-undef only for SDL2 versions that need it
meson.build: Group the audio backend entries in a separate summary section
meson.build: Group the network backend entries in a separate summary section
meson.build: Group the UI entries in a separate summary section
scripts: remove dead file
atomics: eliminate mb_read/mb_set
...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The only remaining user of submodules at build time is roms/SLOF,
which is handled in pc-bios/s390-ccw/Makefile. Remove the relevant
code from the main makefile.
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move the handling of the roms/SLOF submodule out of the main Makefile,
since we are going to remove submodules from the build process of QEMU.
Acked-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Compared to submodules, .wrap files have several advantages:
* option parsing and downloading is delegated to meson
* the commit is stored in a text file instead of a magic entry in the
git tree object
* we could stop shipping external dependencies that are only used as a
fallback, without breaking compilation on platforms that lack them.
For example, it may make sense to download dtc at build time, controlled
by --enable-download, even when building from a tarball. Right now,
this patch does the opposite: make-release treats dtc like libvfio-user
(which does not have a stable API and therefore hasn't found its way
into any distros) and keycodemap (which is a copylib, for better or
worse).
dependency() can fall back to a wrap automatically. However, this
is only possible for libraries that come with a .pc file, and this
is not very common for libfdt even though the upstream project in
principle provides it; it also removes the control that we provide with
--enable-fdt={system,internal}. Therefore, the logic to pick system
vs. internal libfdt is left untouched.
--enable-fdt=git is removed; it was already a synonym for
--enable-fdt=internal.
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
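As a hedged illustration, a wrap file uses meson's [wrap-git] syntax; the values below are placeholders, not QEMU's pinned ones (see subprojects/*.wrap in the tree for the real files):
cat > subprojects/example.wrap <<'EOF'
[wrap-git]
url = https://example.org/example.git
revision = 0123456789abcdef0123456789abcdef01234567
EOF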
This reverts commits eea2d14117 ("Makefile: remove $(TESTS_PYTHON)",
2023-05-26) and 9c6692db55 ("tests: Use configure-provided pyvenv for
tests", 2023-05-18).
Right now, there is a conflict between wanting a ">=" constraint when
using a distro-provided package and wanting a "==" constraint when
installing Avocado from PyPI; satisfying both would provide the best of
both worlds in terms of resiliency, for both distros that have the
required packages and distros that don't.
The conflict is also visible for meson, where we would like to install
the latest 0.63.x version but also accept a distro 1.1.x version.
But it is worse for avocado, for two reasons:
1) we cannot use an "==" constraint to install avocado if the venv
includes a system avocado. The distro will package plugins that have
"==" constraints on the version that is included in the distro, and, using
"pip install avocado==88.1" on a venv that includes system packages will
result in this error:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
avocado-framework-plugin-varianter-yaml-to-mux 98.0 requires avocado-framework==98.0, but you have avocado-framework 88.1 which is incompatible.
avocado-framework-plugin-result-html 98.0 requires avocado-framework==98.0, but you have avocado-framework 88.1 which is incompatible.
make[1]: Leaving directory '/home/berrange/src/virt/qemu/build'
2) we cannot use ">=" either if the venv does _not_ include a system
avocado, because that would result in the installation of v101.0, which
is the one we've just reverted.
So the idea is to encode the dependencies as an (acceptable, locked)
tuple, like this hypothetical TOML that would be committed inside
python/ and used by mkvenv.py:
[meson]
meson = { minimum = "0.63.0", install = "0.63.3", canary = "meson" }
[docs]
# 6.0 drops support for Python 3.7
sphinx = { minimum = "1.6", install = "<6.0", canary = "sphinx-build" }
sphinx_rtd_theme = { minimum = "0.5" }
[avocado]
avocado-framework = { minimum = "88.1", install = "88.1", canary = "avocado" }
Once this is implemented, it would also be possible to install avocado in
pyvenv/ using "mkvenv.py ensure", thus using the distro package on Fedora
and CentOS Stream (the only distros where it's available). But until
this is implemented, keep avocado in a separate venv. There is still the
benefit of using a single python for meson custom_targets and for sphinx.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Python should have been removed in this commit:
94b8b146df
Signed-off-by: Camilla Conte <cconte@redhat.com>
Message-Id: <20230531150824.32349-2-cconte@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The 'stable' and 'stable-dind' tags are not documented as supported
tags at:
https://hub.docker.com/_/docker
Looking at their content, they reflect the docker 19.x.x release
series, were last built in Dec 2020, and have 3 critical and 20
high-rated CVEs unfixed. This obsolete status is attested by this
commit:
606c63960a
The 'stable-dind' tag in particular appears buggy as it is unable to
resolve DNS for Fedora repos:
- Curl error (6): Couldn't resolve host name for https://mirrors.fedoraproject.org/metalink?repo=fedora-37&arch=x86_64&countme=1 [getaddrinfo() thread failed to start]
We used the 'stable' tag previously at the recommendation of GitLab
docs, but those docs are wrong and pending a fix:
https://gitlab.com/gitlab-org/gitlab/-/issues/409430
Fixes: 5f63a67adb
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Camilla Conte <cconte@redhat.com>
Message-Id: <20230531140654.1141145-1-berrange@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>