tests/alpha-blending: use two_norm tolerance

Switch from a per-channel max error tolerance to a max two-norm (Euclidean
distance) error tolerance. Geometrically this means that previously the
accepted volume was a +/- tolerance cube around the reference point, and
now it is a sphere of tolerance radius. This makes the check slightly
stricter.

The real benefit is simplifying the code.

Signed-off-by: Pekka Paalanen <pekka.paalanen@collabora.com>
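
To make the geometric point concrete, here is a minimal C sketch of the two
acceptance rules for a single pixel's per-channel errors. It is not taken
from the Weston test source; the err[] array, the helper names, and the
assumption COLOR_CHAN_NUM == 3 are illustrative only.

#include <math.h>
#include <stdbool.h>

#define COLOR_CHAN_NUM 3	/* assumed: R, G, B */

/* Old rule: every channel error must lie strictly inside +/- tolerance,
 * i.e. inside a cube centred on the reference color. */
static bool
within_box_tolerance(const double err[COLOR_CHAN_NUM], double tolerance)
{
	int i;

	for (i = 0; i < COLOR_CHAN_NUM; i++) {
		if (err[i] <= -tolerance || err[i] >= tolerance)
			return false;
	}

	return true;
}

/* New rule: the error vector must lie inside a sphere of tolerance radius,
 * i.e. its Euclidean length (two-norm) must not exceed the tolerance. */
static bool
within_two_norm_tolerance(const double err[COLOR_CHAN_NUM], double tolerance)
{
	double sumsq = 0.0;
	int i;

	for (i = 0; i < COLOR_CHAN_NUM; i++)
		sumsq += err[i] * err[i];

	return sqrt(sumsq) <= tolerance;
}

Because the tolerance-radius sphere is inscribed in the +/- tolerance cube,
every error accepted by the two-norm rule is also accepted by the per-channel
rule, but not the other way around; hence the new check is slightly stricter.
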
Author: Pekka Paalanen, 2022-06-17 15:23:41 +03:00 (committed by Pekka Paalanen)
parent a0584e64cf
commit baf7ab5795
1 changed file with 2 additions and 6 deletions

@@ -229,7 +229,6 @@ check_blend_pattern(struct buffer *bg, struct buffer *fg, struct buffer *shot,
 	struct rgb_diff_stat diffstat = { .dump = dump, };
 	bool ret = true;
 	int x;
-	unsigned i;
 
 	for (x = 0; x < BLOCK_WIDTH * ALPHA_STEPS - 1; x++) {
 		if (!pixels_monotonic(shot_row, x))
@@ -239,11 +238,8 @@ check_blend_pattern(struct buffer *bg, struct buffer *fg, struct buffer *shot,
 				     &diffstat, space);
 	}
 
-	for (i = 0; i < COLOR_CHAN_NUM; i++) {
-		if (diffstat.rgb[i].min <= -tolerance ||
-		    diffstat.rgb[i].max >= tolerance)
-			ret = false;
-	}
+	if (diffstat.two_norm.max > tolerance)
+		ret = false;
 
 	rgb_diff_stat_print(&diffstat, __func__, 8);