Make sure every test handler now gets a copy of wet_testsuite_data,
which we'll later use for client<->compositor synchronisation.
Signed-off-by: Daniel Stone <daniels@collabora.com>
This commit is truly horrible.
We want to run ASan with leak checking enabled in CI so we can catch
memory leaks before they're introduced. This works well with Pixman, and
with NIR-only drivers like iris or Panfrost.
But when we run under llvmpipe - which we do under CI - we start failing
because:
- Mesa pulls in llvmpipe via dlopen
- llvmpipe pulls in LLVM itself via DT_NEEDED
- initialising LLVM's global type/etc systems performs thread-local
allocations
- llvmpipe can't free those allocations since the application might
also be using LLVM
- Weston stops using GL and destroys all GL objects, leading to Mesa
unloading llvmpipe like it should
- with everything disappearing from the process's vmap, ASan can no
longer keep track of still-reachable pointers
- tests fail because LLVM is 'leaking'
The usual alternative is to LD_PRELOAD a shim which overrides
dlclose() to be a no-op. This is not usable here, because when
$LD_PRELOAD is not empty and ASan is not first in it, ASan immediately
errors out. Prepending ASan doesn't work, because we run our tests
through Meson (which also invokes Ninja), leading to LSan exploding
over CPython and Ninja, which is not what we're interested in.
It would be possible to inject _both_ ASan and a dlclose-does-nothing
shim DSO into the LD_PRELOAD environment for every test, but that seems
even worse, especially as Meson strongly discourages globbing for random
files in the root.
So, here we are, doing what we can: finding where swrast_dri.so (aka
llvmpipe) lives, stashing that in an environment variable, and
deliberately leaking a dlopen handle which we never close to ensure that
neither llvmpipe nor LLVM leave our process's address space before we
exit.
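A minimal sketch of the idea (the environment variable name below is
illustrative, not necessarily what the harness actually uses):

    #include <dlfcn.h>
    #include <stdlib.h>

    static void *swrast_handle;

    static void
    pin_software_renderer(void)
    {
            const char *path = getenv("WESTON_TEST_SWRAST_PATH");

            if (!path)
                    return;

            /* Deliberately never dlclose()'d: the handle is leaked so
             * the DSO (and LLVM behind it) stays mapped until exit,
             * letting ASan see the still-reachable allocations. */
            swrast_handle = dlopen(path, RTLD_NOW | RTLD_GLOBAL);
    }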
Signed-off-by: Daniel Stone <daniels@collabora.com>
This will be useful in CI, where we do not want to see any skips. If
something starts to skip, that's a mistake somewhere, and we want to
catch it.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Use consistent terminology with the code: index starts from zero,
numbering starts from one. Fixture 0 runs all fixtures.
Suggested-by: Marius Vlad <marius.vlad@collabora.com>
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
When there is a fixture setup array, list all fixture setups with their
numbers and names. This should help people pick a single fixture to
run and makes the --list output more interesting.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Instead of "fixture %d", use the proper fixture name if it exists or
nothing. Some places still show the fixture index because it is used on
the command line.
This makes the reports more readable.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Make it more explicit that the return value is NULL when there is no
array.
This patch makes the following patch smaller.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
The string from get_test_name() can be used for naming screenshot files and
other test outputs. Starting the name with the fixture number makes an
alphabetized listing of those output files look disorganized.
Let's change the test name to begin with the test (source) name, with the
fixture and element numbers as suffixes. That makes a file listing easier to
look through when you have multiple tests each saving multiple screenshot
files.
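As a rough illustration (the exact format here is made up, not necessarily
what get_test_name() produces):

    /* Compose "<test>-f<fixture>-e<element>" so files written by the
     * same test sort next to each other in a directory listing. */
    snprintf(name, sizeof(name), "%s-f%02d-e%02d",
             test_source_name, fixture_nr, element_nr);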
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
A future test wants to access the fixture data array for the currently running
fixture index to log the test description. This patch provides access to the
array index.
Rather than adding more global variables, I changed the type of the existing
one, which feels slightly cleaner.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This replaces the old test harness with a new one.
The old harness relied on fork()'ing each test, which makes tests independent
but harder to debug. The new harness runs client code in a thread instead of a
new process. A side-effect of no longer fork()'ing is that any failure will
cut a test series short. Fortunately we do not have any tests that are
expected to crash or fail.
The old harness executed 'weston' from Meson, with lots of setup as both
command line options and environment variables. The new harness executes
wet_main() instead: the test program itself calls the compositor main function
to execute the compositor in-process. Command line arguments are configured in
the test program itself, not in meson.build. Environment variables aside, you
can run a test by simply executing the test program, even if it is a plugin
test.
The new harness adds a new type of iteration: fixtures. For now, fixtures are
used to set up the compositor for tests that need a compositor. If necessary, a
fixture setup may include a data array of arbitrary type for executing the test
series for each element in the array. This will be most useful for running
screenshooting tests with both the Pixman and GL renderers.
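As a rough sketch of the fixture-array idea (the struct and field names here
are hypothetical, not the harness's actual API):

    /* One array element per compositor configuration; the whole test
     * series runs once for each element. */
    struct setup_args {
            const char *renderer_name;
    };

    static const struct setup_args my_setup_args[] = {
            { .renderer_name = "pixman" },
            { .renderer_name = "gl" },
    };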
The new harness outputs TAP formatted results into stdout. Meson is not
switched to consume TAP yet though, because it would require a Meson version
requirement bump and would not have any benefits at this time. OTOH, outputting
TAP is trivial and sets a clear precedent that random test chatter belongs on
stderr.
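For reference, the TAP on stdout looks roughly like this (the test names are
just examples), while everything else goes to stderr:

    1..3
    ok 1 roles
    ok 2 subsurface-shot
    not ok 3 plugin-registry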
This commit migrates only a few tests to actually make use of the new features:
roles is a basic client test, subsurface-shot is a client test that
demonstrates the fixture array, and plugin-registry is a plugin test. The rest
of the tests will be migrated later.
Once all tests are migrated, we can remove the test-specific setup from
meson.build, leaving only the actual build instructions in there.
The tests that are not yet migrated and the stand-alone tests see only a minor
change: they no longer fork() for each TEST(); otherwise they keep running as
before.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Nothing is using FAIL_TEST or FAIL_TEST_P, and that is good. Remove them so as
not to encourage their use.
If we need a test that should fail, it always needs to fail in a very specific
way which needs to be checked. For this we have e.g. expect_protocol_error().
We never want a fail-test to pass because it failed in a way we did not expect.
Therefore these macros are useless.
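For illustration, checking for a specific failure with a helper like
expect_protocol_error() looks roughly like this (the interface, error code,
and exact signature here are illustrative assumptions):

    /* Illustrative only: after issuing a request that must fail, check
     * that the compositor sent precisely the protocol error we expect. */
    expect_protocol_error(client, &wl_subcompositor_interface,
                          WL_SUBCOMPOSITOR_ERROR_BAD_SURFACE);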
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This avoids confusing it with the opaque struct weston_test from
protocol/weston-test.xml.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
When we move on to TAP, stdout will be reserved for TAP and stderr will be for
free-form chatter. Set an example that tests should use testlog() instead of
fprintf or printf so that their chatter ends up in the right place.
Most statements were already printing to stderr, so this just makes them a
little shorter. There are also some statements that printed to stdout and are
now corrected.
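A minimal sketch of such a helper, assuming it simply forwards to stderr:

    #include <stdarg.h>
    #include <stdio.h>

    /* Test chatter goes to stderr, keeping stdout free for TAP. */
    static void
    testlog(const char *fmt, ...)
    {
            va_list argp;

            va_start(argp, fmt);
            vfprintf(stderr, fmt, argp);
            va_end(argp);
    }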
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
The printf() format specifier "%m" is a glibc extension to print
the string returned by strerror(errno). While supported by other
libraries (e.g. uClibc and musl), it is not widely portable.
In Weston code the format string is often passed to a logging
function that calls other syscalls before the conversion of "%m"
takes place. If one such syscall modifies the value of errno,
the conversion of "%m" will incorrectly report the error string
corresponding to the new value of errno.
Remove all the occurrences of the specifier "%m" in Weston code
by using directly the string returned by strerror(errno).
While there, fix some minor indentation issues.
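As an illustration of the change (the message and path are made up):

    /* Before: relies on glibc's "%m" and on errno being untouched by the
     * time the logging function formats the string. */
    weston_log("failed to open '%s': %m\n", path);

    /* After: resolve the error string explicitly at the call site
     * (needs <string.h> and <errno.h>). */
    weston_log("failed to open '%s': %s\n", path, strerror(errno));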
Signed-off-by: Antonio Borneo <borneo.antonio@gmail.com>
The iteration counter cannot be used to detect non-iterated tests
defined with TEST and FAIL_TEST.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.co.uk>
Reviewed-by: Quentin Glidic <sardemff7+git@sardemff7.net>
Screenshot tests often want to use the test name for writing out images.
This is a helper to get the test name without writing it multiple times
in the source.
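Illustrative usage for naming a screenshot file:

    char fname[128];

    snprintf(fname, sizeof(fname), "%s.png", get_test_name());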
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.co.uk>
Reviewed-by: Emilio Pozuelo Monfort <emilio.pozuelo@collabora.co.uk>
Reviewed-by: Micah Fedke <micah.fedke@collabora.co.uk>
Reviewed-by: Quentin Glidic <sardemff7+git@sardemff7.net>
Mostly remove headers that aren't actually needed for anything.
Add stdint.h to permit dropping xf86drm.h, which is otherwise unneeded.
Signed-off-by: Bryce Harrington <bryce@osg.samsung.com>
Acked-by: Marek Chalupa <mchqwerty@gmail.com>
Tested-by: Pekka Paalanen <pekka.paalanen@collabora.co.uk>
Tests will now return the extra command line parameters they need
when run with --params.
Signed-off-by: Bryce Harrington <bryce@osg.samsung.com>
Reviewed-by: Pekka Paalanen <pekka.paalanen@collabora.co.uk>
We were calling exit(0) when tests were skipped, which counted
them as passed instead of skipped. Fix this by properly exiting
with 77 (which is what automake expects for skipped tests) from
the tests themselves, then returning 77 again from weston-test-runner
if all the tests were skipped. Finally, the weston-test.so module
catches weston-test-runner's exit code and uses it as its own exit code,
which is what automake will see and use.
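For illustration, a test that cannot run in the current environment now does
something like this (the condition is made up):

    /* 77 is the exit status automake interprets as "skipped". */
    if (access("/dev/dri/card0", R_OK | W_OK) < 0)
            exit(77);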
Signed-off-by: Emilio Pozuelo Monfort <emilio.pozuelo@collabora.co.uk>
The new TEST_P macro takes a function name and a "data" argument to
point to an arbitrary array of known size of test data. This allows
multiple tests to be run with different datasets. The array is stored
as a void * but advanced by a known size on each iteration.
The data for each invocation of the test is provided as a "data" argument;
it is the responsibility of the test to cast it to something sensible.
Also fixed single-test running to only run the tests specified.
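For example (the data and assertion here are illustrative):

    static const int buffer_widths[] = { 16, 64, 256 };

    /* The runner advances "data" by sizeof(buffer_widths[0]) per
     * iteration; the test casts it back to the element type. */
    TEST_P(buffer_width_test, buffer_widths)
    {
            const int *width = data;

            assert(*width > 0);
    }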
We handle FAIL_TEST tests by simply inverting the success flag. The
problem with this is that if a FAIL_TEST fails with a SIGSEGV, it will be
interpreted as passed. However, no code should ever cause a SEGV, or raise
any signal other than ABRT, and even ABRT only in the case of an assert()
that is meant to fail. We would probably need more sophistication for the
FAIL_TEST cases.
For now, just interpret any other signal than ABRT as a hard failure,
regardless of whether it is a TEST or FAIL_TEST. At least segfaults do not
cause false passes anymore.
Signed-off-by: Pekka Paalanen <ppaalanen@gmail.com>
This adds a weston-test-runner for the weston test extension and
some weston test client helper methods.
Convert keyboard-test to use the new test interface, runner,
and helper methods.
Fixes https://bugs.freedesktop.org/show_bug.cgi?id=56822
Signed-off-by: U. Artie Eoff <ullysses.a.eoff@intel.com>