Previously, Frames could set both these properties along with a
thickness to confusing effect: most shapes with the same shadowing were
only distinguishable at a thickness >= 2, and some not at all. This led
to a lot of creative but ultimately superfluous choices in the code.
Instead, let's streamline our options, automate thickness, and get
the right look without so much guesswork.
Plain shadowing has been consolidated into a single Plain style,
and 0 thickness can be had by setting style to NoFrame.
This can be used to convert a profile-dependent color to the L*a*b*
color space.
(I'd like to use this to implement the DeltaE (CIE 2000) algorithm,
which is a metric for how similar two colors are perceived.
(And I'd like to use that to evaluate color conversion roundtrip
quality, once I've implemented full conversions.)
Only implemented for matrix profiles so far.
This API won't be fast enough to color manage images, but let's
get something working before getting something fast.
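For reference, a minimal standalone sketch of the XYZ-to-Lab step (relative
to the D50 white point, which is what ICC's PCS uses); the names here are
made up for illustration and are not the actual API:

```cpp
#include <cmath>

struct Lab { float L, a, b; };

// Hypothetical standalone helper, not the actual API: converts PCS XYZ
// (relative to the D50 white point) to L*a*b*.
static Lab xyz_to_lab(float X, float Y, float Z)
{
    constexpr float Xn = 0.9642f, Yn = 1.0f, Zn = 0.8249f; // D50

    auto f = [](float t) {
        constexpr float delta = 6.0f / 29.0f;
        if (t > delta * delta * delta)
            return std::cbrt(t);
        return t / (3 * delta * delta) + 4.0f / 29.0f;
    };

    float fx = f(X / Xn), fy = f(Y / Yn), fz = f(Z / Zn);
    return { 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz) };
}
```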
We used to limit this number to two and name them after their usual
usage. The specification is broader, however, and you can find files that
use more tables, as in the following link:
20050519/tests/A_2_1-BF-01.htm
No real behavior change. (The two globals are now both initialized
at first use instead of before main(), but that should have no
observable effect. The motivation is readability.)
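As a sketch of the pattern (with hypothetical names), this is the
difference between a namespace-scope global and a function-local static,
which the language guarantees is initialized on first call:

```cpp
#include <string>

struct Palette {
    std::string name;
};

// Hypothetical stand-in for one of the globals mentioned above.
// Before: a namespace-scope global, constructed before main():
//     static Palette s_default_palette { "default" };
// After: a function-local static, constructed on first call (and
// thread-safe since C++11):
static Palette& default_palette()
{
    static Palette palette { "default" };
    return palette;
}
```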
Negative accumulation on the right-hand side of the accumulation bitmap
would wrap around to the left, causing a negative start for certain
lines, which resulted in horizontal streaks of lower alpha values.
Prevent this by not writing out of bounds :^)
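Roughly, the guard looks like this (the buffer layout and names here are
assumptions, not the actual rasterizer code):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical flat, row-major signed coverage accumulation buffer.
struct AccumulationBuffer {
    int width;
    int height;
    std::vector<float> data; // width * height signed coverage deltas

    AccumulationBuffer(int w, int h)
        : width(w)
        , height(h)
        , data(size_t(w) * h, 0.0f)
    {
    }

    void accumulate(int x, int y, float coverage)
    {
        // Without this bounds check, x == width would index
        // (y * width + width), i.e. the first pixel of the *next* scanline,
        // giving that line a spurious negative start value and the streaks
        // described above.
        if (x < 0 || x >= width || y < 0 || y >= height)
            return;
        data[size_t(y) * width + x] += coverage;
    }
};
```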
Fixes #18394
An oval approximated by quadratic bezier curves ended up with 2048 line
segments when rasterized to a bitmap of around 35 by 35 pixels, which
seems a bit much. :^)
By increasing the tolerance by an order of magnitude, that same oval is
now split up into 512 line segments, which is still more than enough for
a high-quality render.
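For context, a hedged sketch of the usual flatness-based subdivision for
quadratic beziers; this is generic code rather than the exact
implementation, but it shows why a larger tolerance means fewer segments:

```cpp
#include <cmath>
#include <vector>

struct Point { float x, y; };

static Point midpoint(Point a, Point b) { return { (a.x + b.x) / 2, (a.y + b.y) / 2 }; }

// Generic sketch: recursively split a quadratic bezier into line segments
// until the curve is within `tolerance` of its chord. A larger tolerance
// makes the "flat enough" test pass earlier, so fewer segments come out.
static void flatten_quadratic_bezier(Point p0, Point control, Point p1, float tolerance,
    std::vector<Point>& out_points)
{
    // The maximum deviation of a quadratic bezier from its chord is half the
    // distance from the control point to the chord's midpoint.
    Point chord_mid = midpoint(p0, p1);
    float error = std::hypot(control.x - chord_mid.x, control.y - chord_mid.y) / 2;
    if (error <= tolerance) {
        out_points.push_back(p1);
        return;
    }
    // de Casteljau split at t = 0.5 and recurse on both halves.
    Point left = midpoint(p0, control);
    Point right = midpoint(control, p1);
    Point middle = midpoint(left, right);
    flatten_quadratic_bezier(p0, left, middle, tolerance, out_points);
    flatten_quadratic_bezier(middle, right, p1, tolerance, out_points);
}
```

Each split reduces the error bound by roughly a factor of four, so a 10x
larger tolerance removing about two levels of subdivision (2048 -> 512
segments) is in the expected ballpark.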
read_webp_first_chunk() sensibly assumes that if decode_webp_header()
succeeds, there are at least sizeof(WebPFileHeader) bytes available.
But if the file size in the header was less than the size of the header,
decode_webp_header() would truncate the data to less than that and
happily report success. Now it no longer does that.
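The fix amounts to a sanity check along these lines (a standalone sketch
with stand-in names, not the literal decoder code):

```cpp
#include <cstddef>
#include <cstdint>

// Stand-in header struct so the sketch stands alone; the real decoder has
// its own definition. The size field counts the bytes after the first
// 8 bytes of the file.
struct WebPFileHeader {
    uint8_t riff[4];
    uint32_t file_size;
    uint8_t webp[4];
};

// Returns false (an error in the real decoder) instead of silently
// truncating the input below the size of the header that was already
// validated.
static bool compute_usable_size(uint32_t file_size_field, size_t available_bytes, size_t& out_usable_bytes)
{
    size_t claimed_total = size_t(file_size_field) + 8; // the field excludes the first 8 bytes
    if (claimed_total < sizeof(WebPFileHeader))
        return false; // header claims the file is smaller than the header itself
    out_usable_bytes = claimed_total < available_bytes ? claimed_total : available_bytes;
    return true;
}
```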
Found by clusterfuzz:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=57843&sort=-opened&can=1&q=proj%3Aserenity
No behavior change.
(Well, technically mix() uses the other simple implementation of lerp
and the two have slightly different behavior if the arguments are of
very different magnitude and in various corner cases. But in practice,
for ICC profiles, it shouldn't matter. For a few profiles I tested, it
didn't have a measurable effect.)
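For reference, the two textbook formulations being compared (generic
names, not the actual helpers):

```cpp
// The two forms agree mathematically, but in floating point they can return
// slightly different results in corner cases, e.g. when a and b have very
// different magnitudes, or at t == 1, where only the second form returns
// exactly b (for finite inputs).
static float lerp_form_1(float a, float b, float t) { return a + t * (b - a); }
static float lerp_form_2(float a, float b, float t) { return (1 - t) * a + t * b; }
```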
This now applies clipping to the destination bounding box before
painting, which cuts out a load of computation on clipped-away pixels
and allows using the faster set_physical_pixel() method.
Alongside this it also combines the source_transform and
inverse_transform outside the hot loop (which might cut things down
a little).
The `destination_quad.contains()` check is also removed in favour
of just checking if the mapped point is inside the source rect,
which takes less work to compute than checking the bounding box.
This takes this method down from 98% of the time to 10% of the
time when painting Google Street View (with no obvious issues).
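In outline (heavily simplified, with stand-in types and names; not the
actual painter code), the hot loop now looks something like this:

```cpp
#include <functional>

struct IntRect { int x, y, width, height; };
struct FloatPoint { float x, y; };
struct FloatRect {
    float x, y, width, height;
    bool contains(FloatPoint p) const
    {
        return p.x >= x && p.x < x + width && p.y >= y && p.y < y + height;
    }
};

// Structure-only sketch with stand-in types; not the real painter code.
// clipped_rect is the destination bounding box already intersected with the
// clip rect, and map_to_source is the combined transform, built once
// outside the loop.
static void paint_transformed_bitmap_sketch(IntRect clipped_rect, FloatRect source_rect,
    std::function<FloatPoint(FloatPoint)> const& map_to_source,
    std::function<void(int, int, FloatPoint)> const& set_physical_pixel_from_source)
{
    for (int y = clipped_rect.y; y < clipped_rect.y + clipped_rect.height; ++y) {
        for (int x = clipped_rect.x; x < clipped_rect.x + clipped_rect.width; ++x) {
            auto source_point = map_to_source({ x + 0.5f, y + 0.5f });
            // Cheaper than the old destination_quad.contains() check:
            if (!source_rect.contains(source_point))
                continue;
            set_physical_pixel_from_source(x, y, source_point);
        }
    }
}
```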
This is done to be consistent with enclosing_int_rect(), which is
used elsewhere to work out offsets in PaintStyles. Without this, you
can get an off-by-one in painting.
Since close_all_subpaths() appends while iterating, the vector can
end up being resized and the iterator invalidated. Previously, this
led to a crash/UAF in some cases.
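A sketch of the general hazard and one common fix (generic container
code, not the actual Path implementation):

```cpp
#include <vector>

// Appending to a vector can reallocate its storage, so range-for iterators
// (or pointers/references into it) taken before the append become dangling.
static void close_subpaths_buggy(std::vector<int>& segments)
{
    for (auto& segment : segments)   // iterator may be invalidated below
        segments.push_back(segment); // UB once the vector reallocates
}

// One common fix: iterate by index over the original size, so new elements
// are neither visited nor reached through stale iterators.
static void close_subpaths_fixed(std::vector<int>& segments)
{
    auto original_size = segments.size();
    for (size_t i = 0; i < original_size; ++i)
        segments.push_back(segments[i]);
}
```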
Prefix code decoding seems to work fairly well and produces a ton of
log output with `#define WEBP_DEBUG 1`, so remove the log lines.
(If needed it's always possible to just locally revert this commit.)
No behavior change, since WEBP_DEBUG isn't usually defined.
WebP lossless files that use a color indexing transform with <= 16
colors use pixel bundling to pack 2, 4, or 8 pixels into a single pixel.
If the image's width doesn't happen to be an exact multiple of the
bundling factor, we need to:
1. Use ceil_div() instead of just dividing the width by the bundling
factor
2. Remember the original width and use it instead of recomputing it as
the reduced width times the bundling factor
This commit makes both changes, and adds a simple test for it -- it at
least checks that the decoded images have the right size.
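A hedged sketch of the width bookkeeping described above (helper and
struct names are illustrative, not the actual decoder code):

```cpp
#include <cstdint>

static uint32_t ceil_div(uint32_t a, uint32_t b) { return (a + b - 1) / b; }

// Illustrative names: with few enough colors, several pixels get packed
// ("bundled") into one stored pixel, so the stored row is narrower than
// the image itself.
struct BundledWidths {
    uint32_t original_width; // keep this; stored_width * pixels_per_packed_pixel
                             // can overshoot it when the width isn't a multiple
    uint32_t stored_width;
};

static BundledWidths compute_widths(uint32_t image_width, uint32_t pixels_per_packed_pixel)
{
    // 1. ceil_div instead of plain division, so a partial final bundle still
    //    gets its own stored pixel.
    // 2. remember the original width instead of reconstructing it later.
    return { image_width, ceil_div(image_width, pixels_per_packed_pixel) };
}
```

For example, a 35-pixel-wide image bundled at 4 pixels per stored pixel
has a stored width of ceil_div(35, 4) = 9, while reconstructing the width
as 9 * 4 = 36 would be off by one.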
(I created these images myself in Photoshop, and used the same
technique as for Tests/LibGfx/test-inputs/catdog-alert-*.webp
to create images with a certain number of colors.)
See the lengthy comment added in this commit for details.
With this, the webp lossless decoder is feature complete :^)
(...except for bug fixes and performance improvements, as always.)
...in addition to modifying in-place. This is needed for bitpacking
support for the color indexing transform (and it could also be used
to make the color indexing transform return an indexed bitmap, which
is something we could do if that's the last transform that's applied).
No behavior change.
Reduces the time to run
Build/lagom/image ~/src/libwebp/webp_js/test_webp_wasm.webp -o tmp.png
from 0.5s to 0.25s.
Before, 60% of the time was spent decoding webp and 40% writing png.
Now, 16% of the time is spent decoding webp and 84% writing png.
That means png writing takes 0.2s, and webp decoding time went from
0.3s to 0.05s.
A template expression without explicit return type deduces its return
type as if for a function whose return type is declared auto. That
does deduce return-by-value, while `decltype(auto)` would deduce
return-by-reference. Explicitly saying `decltype(auto)` would work
too, but writing out the type is maybe easier to understand.
No behavior change other than being much faster.
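To illustrate the deduction rule with a plain standalone example
(unrelated to the code in question):

```cpp
#include <cstddef>
#include <vector>

static std::vector<int> v { 1, 2, 3 };

// Deduces `int`: plain auto return type deduction strips references, so the
// caller gets a copy of the element.
auto element_by_value(size_t i) { return v[i]; }

// Deduces `int&`: decltype(auto) preserves the value category of the return
// expression, so the caller gets a reference into the vector.
decltype(auto) element_by_reference(size_t i) { return v[i]; }
```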