mirror of https://github.com/RGBCube/serenity
Commit graph

1412 commits

Author SHA1 Message Date
Nico Weber
52d17afd7e WebP: Add test for vertical ALPH chunk filtering_method 2023-06-09 04:35:19 -07:00
Nico Weber
816674de36 WebP: Add test for horizontal ALPH chunk filtering_method 2023-06-09 04:35:19 -07:00
Nico Weber
4e92027513 WebP: Add missing spec quotes in a comment, tweak whitespace 2023-06-09 04:35:19 -07:00
Nico Weber
df79a2720b WebP/Lossy+Alpha: Implement filtering_method for ALPH chunk
The spec says "Decoders are not required to use this information in any
specified way" about this field, but that's probably a typo and belongs
in the previous section. At least, images in the wild look wrong
without this, for example
https://fjord.dropboxstatic.com/warp/conversion/dropbox/warp/en-us/dropbox/Integrations_4@2x.png?id=ce8269af-ef1a-460a-8199-65af3dd978a3&output_type=webp

Implementation-wise, this now copies both uncompressed and compressed
data to yet another buffer for processing the alpha, then does
filtering on that buffer, and then copies the filtered alpha data
into the final image. (The RGB data comes from a lossy webp.)
This is a bit wasteful and we could probably manage without that
local copy, but that'd make the code more convoluted, so this is
good enough for now, at least until we've added tests for this case.
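
For reference, undoing the horizontal and vertical ALPH filters looks
roughly like this, per my reading of the container spec. This is a sketch
with made-up names, not the decoder's actual code:

    #include <cstdint>
    #include <vector>

    // Undo the horizontal (1) or vertical (2) ALPH filter on a width*height
    // alpha plane, in place. The gradient filter (3) is glossed over here.
    void unfilter_alpha(std::vector<uint8_t>& alpha, int width, int height, int method)
    {
        auto at = [&](int x, int y) -> uint8_t& { return alpha[y * width + x]; };
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                if (x == 0 && y == 0)
                    continue; // the very first pixel uses a predictor of 0
                uint8_t predictor = 0;
                if (method == 1) // horizontal: predict from the pixel to the left
                    predictor = (x == 0) ? at(0, y - 1) : at(x - 1, y);
                else if (method == 2) // vertical: predict from the pixel above
                    predictor = (y == 0) ? at(x - 1, 0) : at(x, y - 1);
                at(x, y) = static_cast<uint8_t>(predictor + at(x, y)); // mod 256
            }
        }
    }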

No test, because I haven't yet found out how to create images in this
format.
2023-06-07 08:08:52 +02:00
Nico Weber
5793579d84 Revert "WebP: Add optional debugging code for dumping ALPH as standa..."
This reverts commit 7a4dd8ad79aff9e66e8ae31820b41582be0a3a6d.
Having it in git history is enough.
2023-06-07 08:08:52 +02:00
Nico Weber
fc3249a1ca WebP: Add optional debugging code for dumping ALPH as standalone file
We can quickly revert this again, but I thought it'd be nice to have
it in git history for future debugging in this area.
2023-06-07 08:08:52 +02:00
Nico Weber
bf9e730566 WebP: Log error if there is one
Otherwise, WebP files with a broken header just return "Decoding failed"
without any further details. This way, there's some debug logging with
more specifics.

Maybe we'll want to remove this again since it might lead to duplicate
error messages for files that have their error not in the header.
We'll see how this feels. (But most files don't have any errors, so
it's probably fine.)
2023-06-07 08:08:52 +02:00
Luke Wilde
8993a710f8 LibGfx: Make it possible to stroke a path with a paint style 2023-06-07 06:29:46 +02:00
Lucas CHOLLET
b78622ddf7 LibGfx/PortableFormat: Reject images with a maximum value of 0
These images can't contain any meaningful information, so no need to try
to decode them. Doing so results in a `SIGFPE`, as we divide by this
value later on.
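
A minimal sketch of the kind of guard this adds; the type and field names
are illustrative, not the actual PortableFormat code:

    #include <cstdint>
    #include <optional>

    struct NetpbmHeader {
        uint32_t width { 0 };
        uint32_t height { 0 };
        uint32_t max_value { 0 }; // "maxval" from the PGM/PPM header
    };

    // Refuse headers we can't meaningfully decode: a maxval of 0 carries no
    // information and is later used as a divisor when scaling samples, which
    // is what triggered the SIGFPE.
    std::optional<NetpbmHeader> validate(NetpbmHeader header)
    {
        if (header.max_value == 0)
            return std::nullopt;
        return header;
    }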

Fixes: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=57434&sort=-opened&can=1&q=proj%3Aserenity
2023-06-06 23:48:52 +02:00
Ben Wiederhake
6a351376aa Everywhere: Only use local includes where appropriate
If a local include does not point to a file in the repository, it should
be a system include instead.
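
For example (illustrative picks; WebPLoaderLossyTables.h stands in for any
header that sits next to the including file):

    // A quoted include must point to a file in the repository, typically one
    // right next to the including file:
    #include "WebPLoaderLossyTables.h"

    // Anything resolved through the include search path (other libraries,
    // system headers) uses angle brackets:
    #include <AK/Error.h>
    #include <LibGfx/Bitmap.h>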
2023-06-06 23:19:50 +02:00
Karol Kosek
970a3ef4d8 LibGfx: Fix partial loading of tall and interlaced PNG files
The function stopped copying data from a subimage when the _y_ value
exceeded the image _width_ value, resulting in an incomplete image.
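
A sketch of the shape of the bug, with hypothetical names standing in for
the PNGLoader internals:

    // copy_row and the dimension names are stand-ins for the real code.
    void copy_subimage_rows(int subimage_width, int subimage_height)
    {
        auto copy_row = [](int /*y*/) { /* blit one row of the subimage */ };

        // Buggy: the row counter was compared against the *width*, so a tall
        // subimage lost every row past row number `subimage_width`:
        //     for (int y = 0; y < subimage_width; ++y)
        //         copy_row(y);
        (void)subimage_width;

        // Fixed: compare against the height so every row gets copied.
        for (int y = 0; y < subimage_height; ++y)
            copy_row(y);
    }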
2023-06-06 19:55:51 +02:00
MacDue
c36b54a7bf LibGfx: Use stroke_to_fill() for rendering strokes in the AA painter
This probably should be cached somewhere, but this alone seems to give
nice results, and performance does not seem much worse.

Fixes #18519
2023-06-06 09:17:06 +02:00
MacDue
95a07bd4e5 LibGfx: Add Path::stroke_to_fill(thickness)
This function generates a new path, which can be filled to rasterize
a stroke of the original path (at whatever thickness you like). It
does this by convolving a circular pen with the path, so right now it
only supports round line caps.

Since filled paths now have good antialiasing, doing this results in
good stroked paths for "free". It also (for free) fixes stroked lines
with an opacity < 1, gives nice line joins, and makes it possible to
fill strokes with a paint style (e.g. a gradient or an image).

Algorithm from: https://keithp.com/~keithp/talks/cairo2003.pdf
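
Usage then looks roughly like this (hypothetical call site; the exact
fill_path() signature may differ):

    #include <LibGfx/AntiAliasingPainter.h>
    #include <LibGfx/Path.h>

    void draw_thick_squiggle(Gfx::AntiAliasingPainter& painter)
    {
        Gfx::Path path;
        path.move_to({ 10, 10 });
        path.line_to({ 90, 40 });
        path.quadratic_bezier_curve_to({ 120, 80 }, { 40, 90 });

        // Turn the stroke into a fillable outline, then fill it like any other
        // path (a solid color here, but a gradient or image paint style works too).
        auto stroke_outline = path.stroke_to_fill(4);
        painter.fill_path(stroke_outline, Gfx::Color(50, 50, 180));
    }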
2023-06-06 09:17:06 +02:00
MacDue
e853139cf0 LibGfx: Fix adding active edges in filled path rasterizer
This was previously masked by sorting the edges on max_y, but if the
last added edge pointed to an edge that ended on the current scanline,
that edge (and what it points to) would also end up added to the active
edges. These edges would then never get removed, and break things very
badly!
2023-06-04 05:40:39 +02:00
MacDue
6abc51d9f8 LibGfx: Simplify segmentizing paths
Remove SplitLineSegment and replace it with a FloatLine; nobody was
interested in its extra fields anymore. Also, remove the sorting of
the split segments; this really should not have been done here
anyway, and is not required by the rasterizer anymore. Keeping the
segments in stroke order will also make it possible to generate
stroked path geometry (in future).
2023-06-04 05:40:39 +02:00
Lucas CHOLLET
9e6d91032e LibGfx/JPEG: Use Error to propagate errors
This patch removes a long-standing remnant of the pre-`AK::Error` era.
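
The pattern this moves to looks roughly like this (AK's ErrorOr/TRY; the
helper names are hypothetical, not the actual JPEG decoder internals):

    #include <AK/Error.h>

    // Before, failure was signalled with a bool that the caller had to check
    // and translate by hand; now the Error travels up via TRY().
    ErrorOr<void> read_start_of_frame(bool segment_is_malformed)
    {
        if (segment_is_malformed)
            return Error::from_string_literal("JPEG: Malformed SOF segment");
        return {};
    }

    ErrorOr<void> decode_jpeg()
    {
        TRY(read_start_of_frame(false));
        return {};
    }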
2023-06-02 20:07:27 +02:00
Nico Weber
2452cf6b55 WebP/Lossy: Allow negative values from segment adjustment too
The spec doesn't talk about this happening in the text, but
`dequant_init()` in 20.4 processes segment adjustment and quantization
index adjustment in the same variable `q` before clamping.
Since we had to adjust the latter step in the previous commit, do
it for the former step too.

I haven't seen this happen in the wild yet (and now, I hopefully
never will notice it if it happens).
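
Concretely, the combined handling looks roughly like this; a sketch after
`dequant_init()` in section 20.4, with my own variable names, and assuming
0..127 as VP8's quantizer index range:

    #include <algorithm>

    // Both adjustments are applied to one signed variable, which is only
    // clamped at the end (the commits here only call out the clamp against 0).
    int adjusted_quantizer_index(int base_index, int segment_adjustment, int delta)
    {
        int q = base_index + segment_adjustment + delta; // may go negative
        return std::clamp(q, 0, 127);
    }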
2023-06-01 17:36:20 +02:00
Nico Weber
661b2d394d WebP/Lossy: Clamp negative quantization indices to zero
The spec doesn't talk about this happening in the text, but
`dequant_init()` in 20.4 stores `q` in an int and clamps that
to 0 later.
2023-06-01 17:36:20 +02:00
Nico Weber
c2ec97dd79 WebP: Remove nonsensical comment
It's up to callers of the ImageDecoderPlugin to honor loop_count().
The ImageDecoderPlugin doesn't have to look at it when decoding frames.

No behavior change.
2023-06-01 16:23:46 +02:00
Nico Weber
287e2655cb WebP/Lossy: Add a missing clamp
The spec says that the AC dequantization factor for Y2 data should
be at least 8, so do that.

This only has a very small effect (only the first two AC table
entries are < 8 after multiplying with 155 / 100, so this would
have only a small effect on brightness), and this case is hit
exactly 0 times in all my test images.  But it's still good to match
the spec.
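
In code, the added clamp is roughly this (names illustrative, not the
decoder's actual variables):

    #include <algorithm>

    // The Y2 AC dequantization factor is the AC table value scaled by 155/100
    // and, per the spec, must be at least 8.
    int y2_ac_dequantization_factor(int ac_table_value)
    {
        return std::max(ac_table_value * 155 / 100, 8);
    }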
2023-06-01 16:23:46 +02:00
Nico Weber
13daa29d81 WebP/Lossy: Add const annotations to functions in Tables.h
No behavior change.
2023-06-01 16:23:46 +02:00
Nico Weber
24aa302e88 WebP/Lossy: Reduce size of MacroblockMetadata from 80 to 20 bytes
For a 1024x1024 image, this saves about a quarter MiB of memory while
decoding (compared to the decompressed image data itself needing
4 MiB). Not a huge win, but also very easy to do, so might as well.

No behavior change, no measurable performance impact.
2023-06-01 16:23:46 +02:00
Jelle Raaijmakers
4df3b5e1d2 LibGfx: Do not use divisions when calculating font subpixel offsets
No functional or performance changes; they were probably already
optimized away by the compiler.
2023-06-01 15:13:57 +02:00
Jelle Raaijmakers
5bac9df865 LibGfx: Remove SSE version of Color::blend()
I could not discover proof that this is actually faster than the non-SSE
version. In addition, for these relatively simple structures, the
compiler is often sufficiently smart to generate SSE code itself.

For a synthetic font benchmark I wrote, this results in a nice 11%
decrease in runtime.
2023-06-01 15:13:47 +02:00
Jelle Raaijmakers
5d0da8c096 LibGfx: Optimize Painter::blit_filtered()
For some reason, we were decoding the source color twice for every pixel
in the innermost loop of `blit_filtered`. This makes sure we only
decode the source color once, and rearranges the code to improve
readability.

For my synthetic font rendering benchmark, this improves glyph rendering
performance by ~9%.
2023-06-01 12:23:24 +02:00
Andi Gallo
62c7fcd836 LibGfx: Multiply alpha channels for vector fonts, when necessary
When the background color has an alpha < 255, we can't copy over the
pixel alpha.
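
Roughly (a sketch with hypothetical names; `coverage` is the grayscale
glyph value):

    #include <cstdint>

    uint8_t glyph_pixel_alpha(uint8_t coverage, uint8_t color_alpha)
    {
        // For an opaque color the coverage can be copied over directly (see
        // the glyph rendering speedup further down); for a translucent color
        // the two have to be multiplied, or glyphs come out more opaque than
        // requested.
        if (color_alpha == 255)
            return coverage;
        return static_cast<uint8_t>(coverage * color_alpha / 255);
    }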
2023-06-01 09:22:41 +02:00
MacDue
6685656d2d LibGfx: Fix winding order of segments on elliptical arcs
for_each_line_segment_on_elliptical_arc() flips the start/end points
for negative theta deltas. When doing this we have to make sure the
line segments emitted swap the start/end points back, so that the
(correct) winding order can be calculated from them.

This makes nonzero fills not totally broken for a lot of SVGs.
2023-06-01 06:25:00 +02:00
MacDue
48fa8f97d3 LibGfx: Implement new antialiased filled path rasterizer
This is an implementation of the scanline edge-flag algorithm for
antialiased path filling described here:
https://mlab.taik.fi/~kkallio/antialiasing/EdgeFlagAA.pdf

The initial implementation does not try to implement every possible
optimization in favour of keeping things simple. However, it does
support:

   - Both evenodd and nonzero fill rules
   - Applying paint styles/gradients
   - A range of samples per pixel (8, 16, 32)
   - Very nice antialiasing :^)

This replaces the previous path filling code, which only really applied
antialiasing in the x-axis.

There are some very nice improvements around the web with this change,
especially for small icons. Strokes are still a bit wonky, as they don't
yet use this rasterizer, but I think it should be possible to convert
them to do so.
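
For intuition, here's a toy that boils the idea down to sample-based
coverage with an even-odd fill: supersample each pixel row, fill spans
between sorted edge crossings on every subsample scanline, and turn the
per-pixel hit count into an alpha value. This is only a concept sketch,
not the LibGfx implementation:

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct Point { float x, y; };

    int main()
    {
        constexpr int width = 24, height = 16, samples = 16;
        std::vector<Point> polygon { { 2, 2 }, { 22, 5 }, { 8, 14 } };
        std::vector<float> coverage(width * height, 0.0f);

        for (int y = 0; y < height; ++y) {
            for (int s = 0; s < samples; ++s) {
                float sample_y = y + (s + 0.5f) / samples;

                // Collect the x positions where polygon edges cross this
                // subsample scanline.
                std::vector<float> crossings;
                for (size_t i = 0; i < polygon.size(); ++i) {
                    Point a = polygon[i];
                    Point b = polygon[(i + 1) % polygon.size()];
                    if ((a.y <= sample_y) == (b.y <= sample_y))
                        continue;
                    crossings.push_back(a.x + (sample_y - a.y) / (b.y - a.y) * (b.x - a.x));
                }
                std::sort(crossings.begin(), crossings.end());

                // Even-odd rule: the interior lies between alternating pairs
                // of crossings. Each covered sample adds 1/samples.
                for (size_t i = 0; i + 1 < crossings.size(); i += 2) {
                    int x0 = std::max(0, static_cast<int>(crossings[i]));
                    int x1 = std::min(width - 1, static_cast<int>(crossings[i + 1]));
                    for (int x = x0; x <= x1; ++x)
                        coverage[y * width + x] += 1.0f / samples;
                }
            }
        }

        // Print the accumulated coverage as ASCII art; the soft edges are the
        // antialiasing.
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x)
                putchar(" .:-=+*#%@"[static_cast<int>(coverage[y * width + x] * 9.99f)]);
            putchar('\n');
        }
    }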
2023-06-01 06:25:00 +02:00
MacDue
e4adaa2d20 LibGfx: Make PaintStyle::paint() a public function
It's a pain (and silly) to make this private and add every user as a
friend.
2023-06-01 06:25:00 +02:00
Jelle Raaijmakers
5339b54b5d LibGfx: Improve glyph rendering speed for vector fonts
The glyph bitmap is a grayscale image that is multiplied with the
requested color provided to `Gfx::Painter::draw_glyph()` to get the
final glyph bitmap that can be blitted.

Using `Gfx::Color::multiply()` is unnecessary however: by simply taking
the destination color and copying over the glyph bitmap's alpha value,
we can prevent four multiplications and divisions per pixel.
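
In other words, roughly (a sketch, not the actual Painter code):

    #include <cstdint>

    struct Rgba { uint8_t r, g, b, a; };

    // For an opaque requested color, keep its RGB as-is and use the glyph's
    // gray value as the alpha; blending then gives the same result as
    // multiplying every channel, without the four multiply/divide pairs.
    Rgba glyph_pixel(Rgba requested_color, uint8_t glyph_gray)
    {
        return { requested_color.r, requested_color.g, requested_color.b, glyph_gray };
    }

(A translucent requested color still needs its alpha multiplied in; that's
what the "Multiply alpha channels for vector fonts" commit above handles.)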

In an artificial benchmark I wrote, this improved glyph rendering
performance by ~9%.
2023-06-01 06:18:57 +02:00
Nico Weber
8a40b49b8b WebP/Lossy: Use 8-bit buffers for prediction and YUV data
This is safe because:

* prediction only computes averages, or explicitly clamps for
  TM_PRED / B_TM_PRED. Since the inputs are in [0, 255], the outputs
  are too.
* The sum of the IDCT output and the prediction buffer is immediately
  clamped back to [0, 255]

No behavior change, and matches what both libwebp and the reference
implementation in rfc6386 do.
2023-05-31 22:38:36 +02:00
Nico Weber
ffae065593 WebP/Lossy: Clamp right after summing IDCT output, instead of later
https://datatracker.ietf.org/doc/html/rfc6386#section-14.5 says:

"""
The summing procedure is fairly straightforward, having only a couple
of details.  The prediction and residue buffers are both arrays of
16-bit signed integers.  Each individual (Y, U, and V pixel) result
is calculated first as a 32-bit sum of the prediction and residue,
and is then saturated to 8-bit unsigned range (using, say, the
clamp255 function defined above) before being stored as an 8-bit
unsigned pixel value.
"""

It's IMHO not 100% clear if the clamping is supposed to happen
immediately (so that it affects prediction inputs for the next
macroblock) or later.

But vp8_dixie_idct_add() on page 173 in
https://datatracker.ietf.org/doc/html/rfc6386#section-20.8 does:

    recon[0] = CLAMP_255(predict[0] + ((a1 + d1 + 4) >> 3));

So it does look like it should happen immediately.
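
For reference, a sketch of what clamping immediately means here, with
clamp255 written out the way the RFC describes it:

    #include <cstdint>

    // Saturate to the 8-bit unsigned range.
    uint8_t clamp255(int value)
    {
        if (value < 0)
            return 0;
        if (value > 255)
            return 255;
        return static_cast<uint8_t>(value);
    }

    // Clamping right here means the reconstructed pixel is already in
    // [0, 255] when it's used as a prediction input for the next macroblock.
    uint8_t reconstruct(uint8_t predicted, int16_t residual)
    {
        return clamp255(predicted + residual);
    }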

(I'm a bit confused why the spec then says "The prediction and residue
buffers are both arrays of 16-bit signed integers", since the
prediction buffer can just be a u8 buffer now, without changing
behavior.)
2023-05-31 22:38:36 +02:00
Nico Weber
cf934f9bfc WebP/Lossy: Move two enums closer to the struct that uses them
I accidentally inserted a bunch of code in between.

No behavior change.
2023-05-31 16:40:40 +02:00
Nico Weber
830a3a25dc WebP/Lossy: Add a missing clamp() in TM_PRED prediction
The spec has that clamp at the end of
https://datatracker.ietf.org/doc/html/rfc6386#section-12.2:

    The exact algorithm is as follows:
    [...]
               b[r][c] = clamp255(L[r]+ A[c] - P);

For the test images I'm looking at, it doesn't seem to make a
dramatic difference, but omitting it in `B_TM_PRED` did make
a dramatic difference, so add it. (Also, the spec demands it.)
2023-05-31 16:22:49 +02:00
Nico Weber
40e1ec6cf9 WebP/Lossy: Remove an unnecessary branch
`predicted_y_above` is initialized to a row of 127s, so we can just
read from it even in the first macroblock row.

No behavior change.
2023-05-31 15:28:41 +02:00
Nico Weber
a2d8de180c WebP/Lossy: Add support for images with more than one partition
Each secondary partition has an independent BooleanDecoder.
Their bitstreams interleave per macroblock row: the first
macroblock row is read from the first decoder, the second from the
second, ..., until it wraps around again.
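
So picking the decoder for a macroblock row is just a modulo (sketch;
names hypothetical):

    #include <cstddef>
    #include <vector>

    struct BooleanDecoder { /* reads bits from one partition's bitstream */ };

    // Row 0 reads from partition 0, row 1 from partition 1, ..., wrapping
    // around once every partition has been used.
    BooleanDecoder& decoder_for_macroblock_row(
        std::vector<BooleanDecoder>& partition_decoders, size_t macroblock_row)
    {
        return partition_decoders[macroblock_row % partition_decoders.size()];
    }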

All partitions share a single prediction state though: The second
macroblock row (which reads coefficients off the second decoder) is
predicted using the result of decoding the first macroblock row (which
reads coefficients off the first decoder).

So if I understand things right, in theory the coefficient reading could
be parallelized, but prediction can't be. (IDCT can also be
parallelized, but that's true with just a single partition too.)

I created the test image by running

    examples/cwebp -low_memory -partitions 3 -o foo.webp \
        ~/src/serenity/Tests/LibGfx/test-inputs/4.webp

using a cwebp hacked up as described in #19149. Since creating
multi-partition lossy webps requires hacking up `cwebp`, they're likely
very rare in practice. (But maybe other programs using the libwebp API
create them.)

Fixes #19149.

With this, webp lossy support is complete (*) :^)

And with that, webp support is complete: Lossless, lossy, lossy with
alpha, animated lossless, animated lossy, animated lossy with alpha all
work.

(*: Loop filtering isn't implemented yet, which has a minor visual
effect on the output, but one that's only visible when carefully
comparing a webp decoded without loop filtering to the same image
decoded with it. It's still technically a part of the spec that's
missing.

The upsampling of UV in the YUV->RGB code is also low-quality. This
produces somewhat visible banding in practice in some images (e.g.
in the fire breather's face in 5.webp), so we should probably improve
that at some point. Our JPG decoder has the same issue.)
2023-05-31 14:07:15 +02:00
Nico Weber
f3beff0930 WebP/Lossy: Tweak some comments 2023-05-30 06:14:56 +02:00
Nico Weber
74b50c046b WebP/Lossy: Check that file contains enough data for first partition 2023-05-30 06:14:56 +02:00
Nico Weber
f8e4a0a268 WebP/Lossy: Implement prediction and inverse DCT
This could be a bit prettier, but it works :^)
2023-05-29 19:44:45 +02:00
Nico Weber
fe6960286c WebP/Lossy: Add function for inverse 4x4 DCT from spec 2023-05-29 19:44:45 +02:00
Nico Weber
d1b5eec154 WebP/Lossy: Implement macroblock coefficient decoding
This basically adds the line

    u8 token = TRY(
        tree_decode(decoder, COEFFICIENT_TREE,
        header.coefficient_probabilities[plane][band][tricky],
        last_decoded_value == DCT_0 ? 2 : 0));

and calls it once for the 16 coefficients of a discrete cosine transform
that covers a 4x4 pixel subblock.

And then it does this 24 or 25 times per macroblock, for the 4x4 Y
subblocks and the 2x2 each U and V subblocks (plus an optional Y2 block
for some macroblocks).
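
As a rough outline of how often that happens per macroblock (helper names
hypothetical, not the decoder's actual code):

    // Per macroblock: optionally one Y2 block, then 16 luma subblocks (a 4x4
    // grid of 4x4-pixel blocks) and 4 subblocks each for U and V (2x2 grids).
    void decode_macroblock_coefficients(bool has_y2_block)
    {
        auto decode_subblock_coefficients = [](char const* /*plane*/, int /*index*/) {
            // reads up to 16 tokens with tree_decode(), as shown above
        };

        if (has_y2_block)
            decode_subblock_coefficients("Y2", 0);
        for (int i = 0; i < 16; ++i)
            decode_subblock_coefficients("Y", i);
        for (int i = 0; i < 4; ++i)
            decode_subblock_coefficients("U", i);
        for (int i = 0; i < 4; ++i)
            decode_subblock_coefficients("V", i);
    }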

It then adds a whole bunch of machinery to be able to compute `plane`,
`band`, and in particular `tricky` (which depends on whether the
corresponding left or above subblock has non-zero coefficients).

(It also does dequantization, and does an inverse Walsh-Hadamard
transform when needed, to end up with complete DCT coefficients
in all cases.)
2023-05-29 10:41:53 -06:00
Nico Weber
b7483b636d WebP/Lossy: Add static data needed for decoding coefficients
Also add `vp8_short_inv_walsh4x4_c()` from the spec for the inverse
Walsh-Hadamard transform. The YUV output data must bitwise match the
behavior of the spec, so there isn't a ton of flexibility in how to
implement this particular function.
2023-05-29 10:41:53 -06:00
Nico Weber
d15ae9fa93 WebP/Lossy: It's 'macroblock', not 'metablock'
Somehow my brain decided to change the name of this concept.
Not sure why; the spec consistently uses 'macroblock'.

No behavior change.
2023-05-28 18:01:31 +02:00
Shannon Booth
15151d7a4c LibGfx: Add Color::from_named_css_color_string
This is factored out from Color::from_string and determines the color
only from performing an ASCII case-insensitive search over the named
CSS colors.
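
A sketch of the lookup it performs; the table here is a tiny illustrative
excerpt, not the full CSS named-color list or the actual LibGfx code:

    #include <array>
    #include <cctype>
    #include <cstdint>
    #include <optional>
    #include <string_view>

    struct NamedColor { std::string_view name; uint32_t rgb; };

    constexpr std::array named_css_colors {
        NamedColor { "rebeccapurple", 0x663399u },
        NamedColor { "tomato", 0xFF6347u },
    };

    std::optional<uint32_t> from_named_css_color_string(std::string_view input)
    {
        auto equals_ignoring_ascii_case = [](std::string_view a, std::string_view b) {
            if (a.size() != b.size())
                return false;
            for (size_t i = 0; i < a.size(); ++i)
                if (std::tolower(static_cast<unsigned char>(a[i]))
                    != std::tolower(static_cast<unsigned char>(b[i])))
                    return false;
            return true;
        };
        for (auto const& color : named_css_colors)
            if (equals_ignoring_ascii_case(input, color.name))
                return color.rgb;
        return std::nullopt;
    }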
2023-05-28 13:24:37 +02:00
Nico Weber
ea54c58930 WebP/Lossy: Variable naming fix for constants from last pull request 2023-05-27 15:25:00 -06:00
Nico Weber
bd5290dd45 WebP/Lossy: Add code to read macroblock metadata 2023-05-27 15:25:00 -06:00
Nico Weber
8159709c83 WebP/Lossy: Add static data tables for reading macroblock metadata 2023-05-27 15:25:00 -06:00
Nico Weber
8defd55349 WebP/Lossy: Add code to read the frame header in the first partition 2023-05-27 08:31:03 -06:00
Nico Weber
24a3687986 WebP/Lossy: Add WebPLoaderLossyTables.h
This will contain several of the fixed data tables from the VP8 spec.
For starters, it contains the tables needed to read the frame header
in the first partition. These tables are needed to read the
probabilities of the metablock prediction modes, which in turn will
be needed to read the metablock prediction modes themselves.
2023-05-27 08:31:03 -06:00
Nico Weber
bbc1f57d1e WebP/Lossy: Add a comment with a summary of the file format 2023-05-27 08:31:03 -06:00