Turns out these structures do not need to be in the public header, so
move them into a private header.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
No need to use both renderers for the tests, the Pixman one is enough.
This applies not only to the recently added kiosk-shell test, but also
to the older paint-node test.
Signed-off-by: Marius Vlad <marius.vlad@collabora.com>
If png_create_info_struct() fails, we should pass NULL to
png_destroy_read_struct(), and not the address of the info we just
failed to create.
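A minimal sketch of the corrected error path (illustrative only, with a
hypothetical helper name; not the exact code in this patch):

  #include <png.h>
  #include <stddef.h>

  static png_structp
  create_png_reader(png_infop *info_out)
  {
          png_structp png;

          png = png_create_read_struct(PNG_LIBPNG_VER_STRING,
                                       NULL, NULL, NULL);
          if (!png)
                  return NULL;

          *info_out = png_create_info_struct(png);
          if (!*info_out) {
                  /* The info struct was never created, so pass NULL
                   * here instead of the address of the pointer we
                   * just failed to fill. */
                  png_destroy_read_struct(&png, NULL, NULL);
                  return NULL;
          }

          return png;
  }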
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
In the next commits we'll add support for extracting the ICC
information from the images and use the CM&HDR protocol extension to
present them with the ICC data.
Currently the decorations, background and image content are presented
on the same surface. As we want to apply the ICC profile only to the
image content, move it to a subsurface.
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
Cosmetic change. Instead of accessing image->frame_widget in the widget
handlers, use the widget parameter.
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
In the next commits we'll add another widget to the code, so rename this
one to frame_widget.
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
This patch is for our CM&HDR protocol extension test.
According to the protocol, the compositor may take the time it needs
before sending 'ready' or 'failed' for a certain image description that
the client creates through the CM&HDR protocol extension.
In our CM&HDR tests, we were assuming that the image description would
be ready immediately. Do not assume that; instead, wait until the
compositor sends one of the events ('failed' or 'ready').
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
This patch is for our CM&HDR protocol extension implementation.
When a client requests to create an image description, it can only
start using it after receiving the 'ready' event. The compositor may
take the time it needs to create the backing color profile for such an
image description. But nothing guarantees that clients will do the
right thing.
This fixes some issues for misbehaving clients and also documents how
we handle image descriptions that are not ready.
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
This patch is for our CM&HDR protocol extension implementation.
When we gracefully fail to create an image description, we send the
'failed' event and the client can only destroy such an image
description. But nothing guarantees that clients will do the right
thing.
This fixes some issues for misbehaving clients and also documents how
we handle image descriptions whose creation gracefully failed
internally.
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
There's a TODO comment that would be very simple to implement, but we
preferred to wait for now, as there are discussions on the upstream
CM&HDR protocol MR that may change this. This patch documents that.
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
It doesn't make sense to stack the plane before it's useful - so only
put it in the compositor's plane list on output_enable. The opposite of
weston_output_enable is weston_compositor_remove_output, so release the
plane there.
This stops a crash when closing one of multiple windows for a nested
backend results in the output being freed while the plane is still on the
compositor's plane list.
Signed-off-by: Derek Foreman <derek.foreman@collabora.com>
Previously we assigned any paint node to the primary_plane of the output
it was on and marked it dirty.
This doesn't make sense if we're releasing the primary_plane.
Let's just delete the paint nodes and force a view list rebuild, which
will recreate them appropriately.
Signed-off-by: Derek Foreman <derek.foreman@collabora.com>
This should produce the best results on average for all kinds of apps on
any kind of display.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
The ICC profiles created for tests here are supposed to produce the same
results regardless of whether they are of the matrix-shaper or cLUT
form, and whether the compositor uses a colorimetric or perceptual
rendering intent. This is silly, but it fits our tests very well since
we mostly want to ensure correct computations in matrix and cLUT code
rather than meaningful results from different rendering intents.
When trying to switch the compositor from colorimetric to perceptual
rendering intent as required by the color-management protocol extension,
all and only the cLUT based tests failed (color-icc-output test).
The reason is that ICCv4 defines the perceptual PCS having a specific
non-zero black point. It requires ICC profiles to convert device black
to the PCS black and vice versa. However, matrix-shaper type ICC
profiles have no way to provide a perceptual transformation to/from PCS
separate from the colorimetric transformation. Hence, LittleCMS exempts
ICCv4 matrix-shaper profiles from the ICCv4 perceptual PCS definition.
Black point compensation (BPC) is always added by LittleCMS with the
perceptual rendering intent. If an ICC profile claims to be ICC version
4, the perceptual transformation in it is assumed to adhere to the
perceptual PCS black point, which is non-zero. Hence, DToB0 and BToD0
tags need to respect that so that BPC works correctly.
Before this patch, DToB0 and BToD0 transformations did not use the
correct PCS black point, so when BPC got added, the color space
conversion went wrong. This patch replicates the BPC algorithm that
LittleCMS uses in order to respect the perceptual PCS definition. This
will then cancel out with the BPC added by LittleCMS, producing the
expected color space conversion.
The problem arises only with cLUT ICC profiles because matrix-shaper
profiles are exempt: the black points between source (always
matrix-shaper sRGB profile for now) and destination color spaces match,
and no BPC is added by LittleCMS.
There is no way to ask LittleCMS to add its BPC at will, so we need to
copy that code from LittleCMS 2.16.
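For reference, the compensation is an affine map in XYZ, per channel,
that pins the D50 white point and maps the source black point to the
destination black point. A rough sketch of that mapping (illustrative,
not the code copied from LittleCMS):

  /* f(x) = a * x + b, chosen so that
   *   f(black_in)  = black_out
   *   f(white_D50) = white_D50
   */
  static double
  bpc_channel(double x, double black_in, double black_out, double white)
  {
          double a = (black_out - white) / (black_in - white);
          double b = white * (1.0 - a);

          return a * x + b;
  }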
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
The primaries and the white point are the fundamental definition of the
color spaces in these tests. Instead of hard-coding mat2XYZ, use
LittleCMS to derive the result from the fundamental definition.
This removes derived hard-coded constants, which is a benefit in itself.
How these constants were originally produced was not mentioned in
0c5860fafb but I was able to reproduce
them with python3:
import colour
import numpy as np
x = colour.RGB_COLOURSPACES['sRGB']
w_d50 = np.array([0.34567, 0.35850])
print(x.chromatically_adapt(w_d50, 'D50', 'Bradford'))
The result matches the hardcoded values to 3-4 decimals, and the same
holds for Adobe RGB. I printed the LittleCMS generated values as well,
and they match the Python ones up to roughly 4 decimals.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This is pure refactoring.
Ease readability by reducing code duplication between pre and post curve
powlin handling.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This is pure refactoring.
Ease readability by reducing code duplication between pre and post curve
linpow handling.
While at it, define symbols for the counts. This patch converts only
linpow. Powlin is converted in a follow-up.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
This is pure refactoring.
Ease readability by reducing code duplication between pre and post curve
LUT handling.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Build failed on the latest glibc (I think?), which caused this weird error:
/usr/include/bits/fcntl2.h:50:11: error: call to ‘__open_missing_mode’ declared with attribute error: open with O_CREAT or O_TMPFILE in second argument needs 3 arguments
In these three calls, open() was being called with the 'r' flag, whose
hex value is 0x72 and happens to include the O_CREAT bit (0x40), which
was causing this error. The correct flag to pass is O_RDONLY.
This issue has existed since the creation of that file; I'm surprised
it was working previously.
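A sketch of the change (hypothetical wrapper, not the literal diff):

  #include <fcntl.h>

  static int
  open_readonly(const char *path)
  {
          /* Before: open(path, 'r') -- 'r' is the character constant
           * 0x72, not a valid flags value, and it happens to contain
           * the O_CREAT bit (0x40), so glibc's fortified open()
           * insists on a third mode argument. */
          return open(path, O_RDONLY);
  }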
Signed-off-by: Emmanuel Gil Peyrot <linkmauve@linkmauve.fr>
gst_parse_launch can return non-NULL even though an error is set. This
indicates "a recoverable parsing error and you can try to play the
pipeline", however given that we don't (and likely can't) make any
attempt to correct the situation, we should treat this as fatal and not
try to carry on.
Signed-off-by: Ray Smith <rsmith@brightsign.biz>
We're currently calling ppoll() before calling xcb_wait_for_event(), which
may be due to initially trying to make this non-blocking.
However, xcb_wait_for_event() reads all events available - even if there
are more than one.
There are a handful of X properties we're sent that we don't explicitly
ask for, and if these end up in the same read, we could theoretically
end up in a poll() with nothing coming in.
Drop the extra ppoll() and just let xcb_wait_for_event() do the blocking
for us.
I'm hoping this fixes the occasional timeout in the xwayland test, but
it's a reasonable code simplification even if it doesn't.
Signed-off-by: Derek Foreman <derek.foreman@collabora.com>
This prevents a potential crash that users of
weston_layer_entry_insert()/weston_layer_entry_remove() would see when
moving views into a NULL layer (effectively unmapping the surface/view).
Users that have migrated to weston_view_move_to_layer() are immune to
this issue because that takes care of paint node destruction.
Signed-off-by: Marius Vlad <marius.vlad@collabora.com>
When reading back for the remote backends we need to convert the extents
of the damage (which is in global coordinates) to output coordinates
to read back the correct region.
We were doing this in a bespoke fashion by adding the output coordinates.
Instead, use weston_matrix_transform_rect() to transform the extents by
the output transform - which includes the scale factor.
This fixes output scale on RDP with the gl renderer.
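Schematically, the idea is the following (the by-value signature of
weston_matrix_transform_rect() is assumed here and the variable names
are illustrative):

  /* Damage extents are in global coordinates; the output matrix maps
   * global to output coordinates, applying transform and scale. */
  pixman_box32_t extents = *pixman_region32_extents(&damage);

  extents = weston_matrix_transform_rect(&output->matrix, extents);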
Signed-off-by: Derek Foreman <derek.foreman@collabora.com>
Without this fix, we have randomly been getting CI failures due to
LeakSanitizer itself crashing after all the tests in a program have
succeeded. This has been happening randomly for a long time, but
https://gitlab.freedesktop.org/wayland/weston/-/merge_requests/1486
made it very reliably repeatable in the job x86_64-debian-full-build
(and no other job) in the test-subsurface-shot program.
--- Fixture 2 (GL) ok: passed 4, skipped 0, failed 0, total 4
Tracer caught signal 11: addr=0x1b8 pc=0x7f6b3ba640f0 sp=0x7f6b2cc77d10
==489==LeakSanitizer has encountered a fatal error.
I was also able to get a core file after twiddling, but there it ended
up with lsan aborting itself rather than a segfault.
We got some clues that use_tls=0 might work around this, from
https://github.com/google/sanitizers/issues/1342
https://github.com/google/sanitizers/issues/1409
and some other projects that have cargo-culted the same workaround.
Using that causes more false leaks to appear, so they need to be
suppressed. I suppose we are not interested in catching leaks in
glib-using code, so I opted to suppress g_malloc0 altogether.
Pinpointing it better might have required much slower stack tracing.
wl_shm_buffer_begin_access() uses TLS, so no wonder it gets flagged.
ld-*.so is simply uninteresting to us, and it got flagged too.
Since this might have been fixed already in LeakSanitizer upstream, who
knows, leave some notes to revisit this when we upgrade that in CI.
This fix seems to make the branch of
https://gitlab.freedesktop.org/wayland/weston/-/merge_requests/1486
pass in my quick testing.
Suggested-by: Derek Foreman <derek.foreman@collabora.com>
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
In commit "color: add support to parametric curves in
weston_color_curve" we added support for some parametric curves in
Weston. This helps us to be more precise in some cases in which we'd
otherwise have to fall back to LUTs.
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
Until now, all the curves would be represented with 3x1D LUTs. Now we
support LINPOW and POWLIN curves (arbitrary names that we've picked).
We can use these curves to represent LittleCMS curves type 1, 4 and
their inverses -1, -4. The reason why we want that is that we gain
precision using the parametric curves (compared to the LUTs).
Surprisingly we had to increase the tolerance of the sRGB->adobeRGB MAT
test. Our analysis is that the inverse EOTF power-law curve with
exponent 1.0 / 2.2 amplifies errors more than the LUT, especially for
input (optical) values closer to zero.
That makes sense, because this curve is more sensitive to input values
closer to zero (i.e. a little input variation results in a lot of
output variation). And this model makes sense, as humans are more
capable of perceiving changes of light intensity in the dark.
But the downside of all that is that for input values closer to zero, a
little bit of noise increases the error significantly.
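For reference, the two LittleCMS parametric forms involved are type 1
and type 4; a sketch of what they compute (illustrative, not the Weston
LINPOW/POWLIN code itself):

  #include <math.h>

  /* Type 1:  y = x^g */
  static float
  curve_type_1(float x, float g)
  {
          return powf(x, g);
  }

  /* Type 4:  y = (a*x + b)^g  for x >= d
   *          y = c*x          for x <  d   (the sRGB-style curve) */
  static float
  curve_type_4(float x, float g, float a, float b, float c, float d)
  {
          if (x >= d)
                  return powf(a * x + b, g);
          return c * x;
  }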
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
In the next commit we'll add support for more color curves. So move
lut_3x1d into a union.
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
Given a certain curveset, get_parametric_curveset_params() returns true
if the curveset contains parametric curves. It also returns the
parameters of the curveset (for each curve) and whether the input
should be clamped or not.
This is not a generic function; it only works for certain well-behaved
curvesets. E.g. we return false if there are more than 3 curves (one
per color channel).
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
Currently in translate_curve_element() we always translate the curve
into a LUT. But in the future we'll be able to translate the curves to
parametric ones.
So move the current code to a new function
translate_curve_element_LUT(), so that in translate_curve_element() we
are able to call one of the two functions (_LUT() or _parametric()).
No behavior changes, just preparation for the upcoming patches.
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
Not a behavior change, but this allows us to decide what function
pointer to use within this function (instead of forcing callers to
decide that).
In the following commits this will be helpful. We'll add more curves
besides 3x1D LUTs and, depending on the curve, the function pointer
signature may differ.
Also, we now pass the xform directly to the function, and it can select
the curves depending on whether it is being called for a pre or a post
curve.
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
When we don't have cmsGetToneCurveSegment() at our disposal, we are not
able to inspect the LittleCMS color curves and convert them to Weston's
internal representation of color curves. In that case, we need to fall
back to a more generic solution (using LUTs).
For now we always fall back to LUTs, but in the next commits we'll add
support for inspecting the curves and converting them to the internal
representations that we'll add.
This will allow us to tweak the tolerance in the color-icc-output
tests. But if we keep running these tests on systems without
cmsGetToneCurveSegment(), they may fail.
We already have a LittleCMS version in the CI that has
cmsGetToneCurveSegment(). So skip color-icc-output when we don't have
this function.
Signed-off-by: Leandro Ribeiro <leandro.ribeiro@collabora.com>
Pointer values are hard to track for humans, being long numbers. Now
that we have a unique id for each color transformation, print that
instead of the pointer. It is a small number, easy for humans to track.
Transformation id numbers do get re-used aggressively, so you have to
keep track of what is being destroyed and created over time when reading
logs. Pointers had the same caveat, just a lot more random.
The prefix 't' indicates "transformation".
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Just like with color profiles, generate an ID for color transformations
as well. This is not needed by protocol or anything, it is just for
debugging purposes. A small ID is easier for humans than a long pointer
value.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Pointer values are hard to track for humans, being long numbers. Now
that we have a unique id for each color profile, print that instead of
the pointer. It is a small number, easy for humans to track.
Profile id numbers do get re-used aggressively, so you have to keep
track of what is being destroyed and created over time when reading
logs. Pointers had the same caveat, just a lot more random.
The prefix 'p' indicates "profile", just in case we use another id space
for some other thing similarly.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
A proper dependency on egl is missing for several backends as well as
for libshared. This dependency is necessary to pull in the correct
include directories from the egl.pc pkg-config file.
Signed-off-by: Jordan Williams <jordan@jwillikers.com>
The windowed output API is implemented by the Wayland, the X11 and the
headless backends. It's currently not possible to create a secondary
headless backend when the primary backend is Wayland or X11 because
the windowed output API would be registered twice. This commit
suffixes the windowed output API names with the backend name in order
to avoid clashes: "weston_windowed_output_api_<backend>_v2".
A use case for Wayland or X11 as primary backend and headless as
secondary is for instance to request output captures on the headless
backend to avoid read backs on the primary backend's render buffers.
Signed-off-by: Loïc Molinari <loic.molinari@collabora.com>
This is mostly an easy way to stream out content from the pipewire
backend.
Similarly to the rdp script, this can be used on the server after
checking the pipewire id. On the remote side the rdp script can be
used. The script mentions this in its usage notes.
Signed-off-by: Marius Vlad <marius.vlad@collabora.com>
Failure to do so might cause a crash if the output repaint happens
before the pipewire pipeline has started -- calling
pixman_region32_fini() on an uninitialized region.
Fixes 2abe4efcf7, "libweston/backends: Move damage flush into backends"
Signed-off-by: Marius Vlad <marius.vlad@collabora.com>
Plug async read back support into OpenGL ES 2 implementations using the
GL_NV_pixel_buffer_object, GL_OES_mapbuffer and GL_EXT_map_buffer_range
extensions.
Signed-off-by: Loïc Molinari <loic.molinari@collabora.com>
Using a fence sync triggered on read back completion allows us to know
precisely when it has completed. The timeout path is kept as a fallback
when fence syncs aren't available.
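Schematically (GL ES 3 names, illustrative only; the renderer's actual
bookkeeping differs):

  /* Right after issuing the read back into the PBO: */
  GLsync sync = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

  /* Later, poll without blocking; once the fence has signaled, the
   * pixels are ready to be mapped and copied. */
  GLenum res = glClientWaitSync(sync, 0, 0);
  if (res == GL_ALREADY_SIGNALED || res == GL_CONDITION_SATISFIED) {
          /* map the PBO and copy into the client buffer */
          glDeleteSync(sync);
  }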
Signed-off-by: Loïc Molinari <loic.molinari@collabora.com>
SHM buffer stride validation is duplicated in sync and async output
capture paths. Move it into a common path to avoid duplication.
Signed-off-by: Loïc Molinari <loic.molinari@collabora.com>
ReadPixels() implies a synchronous read back of the render buffer to
return pixel data. OpenGL ES 3 adds asynchronous read back support by
writing the pixel data into a dedicated buffer object. This commit
adds asynchronous read back support to the output capture code. It
spawns a read back request and schedules a timeout a few frames later
in order to store the pixels into the client SHM buffer.
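The asynchronous pattern, roughly (GL ES 3, illustrative; the real code
tracks this state per capture task):

  GLuint pbo;
  glGenBuffers(1, &pbo);
  glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
  glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL,
               GL_STREAM_READ);

  /* With a PBO bound, ReadPixels() returns immediately and the GPU
   * writes into the buffer object instead of client memory. */
  glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);

  /* A few frames later (or when a fence signals), map the buffer and
   * copy the pixels into the client's SHM buffer. */
  void *data = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0,
                                width * height * 4, GL_MAP_READ_BIT);
  /* ... copy 'data' out ... */
  glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
  glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
  glDeleteBuffers(1, &pbo);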
Signed-off-by: Loïc Molinari <loic.molinari@collabora.com>
There is no reason why cmlcms_fill_in_3dlut() would not work for
blend-to-output category, so the assert is a little misplaced.
However, there would be a bug if 3D LUT was used for blend-to-output,
because we should never fail to optimize that chain. Put the assert
where it belongs.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Stop special-casing the blend-to-output category, and pass it through
the same mechanisms and optimizations as all other transformations. In
the future, more curve types will be added to weston_color_transform,
meaning that blend-to-output does not always have to be a LUT. It could
become a parametric curve, which is more efficient and more precise to
compute, when VCGT does not exist.
Drop the special crafting of output_inv_eotf_vcgt LUT and replace it
with inv_eotf cms profile. inv_eotf will be combined with vcgt cms
profile as a chain as needed instead.
Blend-to-output transformations do not use a render intent, but we have
to tell cmsCreateMultiprofileTransformTHR() something, so arbitrarily
pick ICC-Absolute render intent for it.
Now all color transformations go through xform_realize_chain(), where
the documentation is improved.
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
We need it as a cms profile, so let's make it one to start with. We
even gain non-fatal error handling.
This will also be useful in rewriting output_inv_eotf_vcgt next.
The type change of vcgt_curves is required to be able to call
cmsCreateLinearizationDeviceLinkTHR(), even though everything about
vcgt_curves should be doubly const. The curves are populated on demand
and cached in cmsHPROFILE, so we also must not explicitly free them.
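Roughly (sketch; the lcms2 call is real, the surrounding variable names
are illustrative):

  cmsToneCurve *vcgt_curves[3]; /* filled in on demand from the VCGT tag */
  cmsHPROFILE vcgt_profile;

  vcgt_profile = cmsCreateLinearizationDeviceLinkTHR(lcms_ctx,
                                                     cmsSigRgbData,
                                                     vcgt_curves);
  /* The curves end up cached via the profile, so they are not freed
   * explicitly here. */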
Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>