This enum used to store very precise state about the decoding process;
let's simplify it by only including two steps: HeaderDecoded and
BitmapDecoded.
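A minimal sketch of what the simplified enum could look like (the value
names come from the message above; the real enum may carry a few more
states, e.g. for errors):

```cpp
enum class State {
    HeaderDecoded,
    BitmapDecoded,
};
```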
https://www.haiku-os.org/images/bg-page.png has an sRGB chunk with a
size of 0, for example.
Just ignoring the chunk instead of assuming that the image is sRGB
and has Perceptual rendering intent matches what libpng does
(cf `png_handle_sRGB()`), so let's do that too.
(All other chunk handlers are still strict about size.)
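A toy sketch of the new behavior (illustrative code, not the actual
PNGLoader function): a chunk whose payload isn't exactly one byte is
skipped rather than treated as an error:

```cpp
#include <cstdint>
#include <optional>
#include <span>

// Returns the rendering intent byte, or nothing if the chunk is malformed
// (e.g. zero-size), mirroring libpng's lenient png_handle_sRGB() behavior.
static std::optional<uint8_t> parse_srgb_chunk(std::span<uint8_t const> chunk_data)
{
    if (chunk_data.size() != 1)
        return {};
    return chunk_data[0];
}
```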
In order to ease the removal of this function, let's provide a no-op
default implementation so decoders can stop implementing it one by one.
First step of #19893
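The pattern is something like the following sketch (class and method
names are hypothetical, not the actual interface from #19893):

```cpp
class DecoderPlugin {
public:
    virtual ~DecoderPlugin() = default;

    // No-op default implementation: subclasses can drop their overrides
    // one by one until the virtual can be removed entirely.
    virtual void legacy_hook() { }
};
```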
The Adobe specification doesn't even consider JPEG images with a single
component. So let's not consider the content of the App14 segment for
grayscale images.
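A toy sketch of the idea (illustrative names and enum values, not the
actual decoder code):

```cpp
#include <cstddef>

enum class ColorTransform { Grayscale, YCbCr, YCCK };

// A single-component frame is grayscale, so whatever the App14 segment
// claims about the color transform is simply not consulted.
static ColorTransform effective_color_transform(size_t component_count, ColorTransform app14_transform)
{
    if (component_count == 1)
        return ColorTransform::Grayscale;
    return app14_transform;
}
```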
This is clearly something I missed during the first implementation. The
specification is crystal clear about it: "The quantization elements
shall be specified in zig-zag scan order."
This patch fixes the weird behavior we had when using the quantization
table.
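A toy sketch of the fix (illustrative code, not the decoder's actual
types): each quantization value read from the stream lands at its
zig-zag position instead of being stored sequentially:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <span>

// Standard zig-zag scan order for an 8x8 block (scan index -> raster index).
static constexpr std::array<uint8_t, 64> zigzag_map {
     0,  1,  8, 16,  9,  2,  3, 10,
    17, 24, 32, 25, 18, 11,  4,  5,
    12, 19, 26, 33, 40, 48, 41, 34,
    27, 20, 13,  6,  7, 14, 21, 28,
    35, 42, 49, 56, 57, 50, 43, 36,
    29, 22, 15, 23, 30, 37, 44, 51,
    58, 59, 52, 45, 38, 31, 39, 46,
    53, 60, 61, 54, 47, 55, 62, 63,
};

static void read_quantization_table(std::span<uint8_t const, 64> raw, std::array<uint16_t, 64>& table)
{
    for (size_t i = 0; i < 64; ++i)
        table[zigzag_map[i]] = raw[i];
}
```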
This adds a decoder for the TinyVG vector format (https://tinyvg.tech/).
TinyVG is a very simple binary vector format, but it is good enough to
represent a lot of SVGs, without needing the full web engine.
The main use case for Serenity is for scalable icons (which .tvg easily
handles).
The ideal size is the size at which the user will display the image. Raster
formats should ignore this parameter, but vector formats can use
it to generate a bitmap of the ideal size.
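A toy sketch of the idea (not the actual LibGfx interface; names are
illustrative): a vector decoder rasterizes at the requested size, while
a raster decoder keeps its intrinsic size:

```cpp
#include <optional>

struct IntSize {
    int width { 0 };
    int height { 0 };
};

struct VectorDecoder {
    IntSize intrinsic_size { 48, 48 };

    // Rasterize at the size the image will actually be displayed at, if known.
    IntSize output_size(std::optional<IntSize> ideal_size) const
    {
        return ideal_size.value_or(intrinsic_size);
    }
};
```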
Decoding progressive JPEGs involves much more complicated logic than
sequential JPEGs. Thanks to template specialization, this patch allows us
to skip the additional cost of progressive support when it's not needed.
It gives a nice 10% improvement on sequential JPEGs :^)
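The pattern looks roughly like this self-contained sketch (illustrative
names, not the actual decoder code): a compile-time parameter lets the
sequential instantiation drop the progressive-only branches entirely:

```cpp
#include <cstdio>

template<bool IsProgressive>
static void decode_scan()
{
    if constexpr (IsProgressive) {
        // Progressive-only work (spectral selection, refinement passes, ...).
        std::puts("progressive-only pass");
    }
    // Work shared by both modes.
    std::puts("shared decoding");
}

int main()
{
    decode_scan<false>(); // Sequential JPEGs never pay for the progressive branch.
    decode_scan<true>();
}
```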
The assertion fails if the point is outside of the rect. This was
introduced in #18970 and causes Serenity to crash when changing a monitor
to 2x resolution, if the cursor ends up outside of the new screen after
resizing.
Added a test to reproduce.
This solution is a middle ground between re-computing `cos` every time
and a much more mathematically complicated approach (as we have in the
decoder).
While still being far from optimal, it already gives us a 10x
improvement, not that bad :^)
Co-authored-by: Tim Flynn <trflynn89@pm.me>
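Something along these lines (an illustrative sketch, not the actual
encoder code): the 8x8 DCT cosine terms are computed once up front
instead of calling `cos` for every pixel:

```cpp
#include <array>
#include <cmath>
#include <numbers>

static constexpr int N = 8;

// One cos() call per (frequency, position) pair, done a single time.
static std::array<double, N * N> make_cosine_table()
{
    std::array<double, N * N> table {};
    for (int u = 0; u < N; ++u)
        for (int x = 0; x < N; ++x)
            table[u * N + x] = std::cos((2 * x + 1) * u * std::numbers::pi / (2 * N));
    return table;
}

// The DCT loops then index this table instead of recomputing the cosines.
static auto const cosine_table = make_cosine_table();
```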
This encoder is very naive, as it only outputs SOF0 images and uses
pre-defined Huffman tables.
There is also a small bug with quantization which makes the output
degrade in quality more than it should.
There was a TODO questioning whether breaking on >4bpp images
was the correct behaviour when RLE4 was detected. There is no
indication in the spec that RLE4 can be used with anything >4bpp,
so I believe this doesn't require additional follow-up.
MS Spec:
https://learn.microsoft.com/en-us/windows/win32/gdi/bitmap-compression
simple-vp8l-alpha-used-false.webp is a copy of simple-vp8l.webp,
with the byte at offset 0x18 changed from 0x10 to 0x00 -- that
is, the bit in the VP8L header that stores `is_alpha_used` is cleared.
We would already allocate a BGRx8888 instead of a BGRA8888 bitmap,
but keep actual alpha data in the `x` channel.
That led to at least `image` still writing a PNG with an alpha channel.
So explicitly set the alpha channel to 0xff when is_alpha_used is false,
to make sure all consumers of decoded lossless webp data have behavior
consistent with other webp readers.
In practice, webp encoders usually don't write files that have
`is_alpha_used` set to false and then write actual alpha data to their
output. So this is rarely observable. However, for example for
lossy+ALPH webp files, the lossless webp used to store the ALPH channel
has `is_alpha_used` set to false and all channels but green are 0
(since the lossless green channel stores the alpha channel of a
lossy+ALPH webp). So if we dump such a bitmap to a standalone webp
file (e.g. with the temporary debugging code in fc3249a1ca),
then without this commit here, `image` would convert that webp to
a fully transparent webp, while other webp software would correctly
display the green image with opaque alpha.
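A toy sketch of the fix described above (not the actual LibGfx code):
after decoding, if the header says alpha is unused, every pixel's alpha
byte is forced to opaque:

```cpp
#include <cstdint>
#include <vector>

static void force_opaque_alpha(std::vector<uint32_t>& argb_pixels, bool is_alpha_used)
{
    if (is_alpha_used)
        return;
    for (auto& pixel : argb_pixels)
        pixel |= 0xff00'0000; // Set A to 0xff; the color channels are untouched.
}
```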
Methods defined in header files should generally be `inline`,
not `static`.
`static` means that each translation unit will have its own local copy
of the function when the function isn't inlined and it's up to the
linker's identical code folding to hopefully merge the potentially many
copies in the program. `inline` means that the linker can put the
identical copies in a comdat and merge them by name, without having to
compare contents.
No behavior change.
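An illustrative example of the guideline (hypothetical function, not
from the actual change):

```cpp
// some_header.h
// Prefer this: one comdat copy shared by all translation units, merged by name.
inline int saturating_byte(int value)
{
    return value < 0 ? 0 : value > 255 ? 255 : value;
}
// `static int saturating_byte(...)` here would instead give every including TU
// its own local copy, relying on identical code folding to deduplicate them.
```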
It seems that for very narrow, tall paths it is possible for the dxdy value
to round to a value that (after many repeated additions) overshoots
the desired end x. This caused a (rather rare) crash on this 3D canvas
demo: https://www.kevs3d.co.uk/dev/html5logo/. For now, let's just avoid
a crash here. This does not make any noticeable visual difference on
the demos I tried.
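A toy sketch of the kind of guard this implies (illustrative names, not
the actual rasterizer code):

```cpp
#include <algorithm>

// Accumulated dxdy steps can overshoot the edge's end x due to rounding,
// so clamp before using the result as e.g. a scanline index.
static float advance_edge_x(float x, float dxdy, float end_x)
{
    return std::min(x + dxdy, end_x);
}
```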
Before this change we always returned the font's point size as the
x-height, which was basically never correct.
We now get it from the OS/2 table (if one with version >= 2 is available
in the file). Otherwise we fall back to using the ascent of the 'x'
glyph. Most fonts appear to have a sufficiently modern OS/2 table.
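Roughly this logic, as an illustrative sketch (not the actual LibGfx
font code):

```cpp
#include <optional>

struct Os2Table {
    int version { 0 };
    float sx_height { 0 };
};

// Prefer sxHeight from an OS/2 table of version >= 2, otherwise fall back
// to the ascent of the 'x' glyph.
static float x_height(std::optional<Os2Table> const& os2, float x_glyph_ascent)
{
    if (os2 && os2->version >= 2)
        return os2->sx_height;
    return x_glyph_ascent;
}
```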
Some of these functions can be called millions of times even for images
of moderate size. Inlining these functions really helps the compiler and
gives performance improvements of up to 10%.
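The kind of helper in question looks like this toy sketch (illustrative;
the attribute below is one way to force inlining on GCC/Clang):

```cpp
[[gnu::always_inline]] inline int fixed_point_multiply(int a, int b)
{
    return (a * b) >> 8; // Tiny 8.8 fixed-point product called once per pixel.
}
```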
Inside each scan, raw data is read with the following rules:
- Each `0x00` that is preceded by `0xFF` should be discarded
- If multiple `0xFF` bytes follow each other, only consider one.
That, plus the fact that we don't know the size of the scan beforehand,
made us put a prepared stream in a vector for easy use later on.
This patch removes this duplication by performing these operations in a
stream-friendly way.
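As a toy sketch of the two unstuffing rules above (illustrative code,
not the actual implementation):

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

struct ScanReader {
    std::vector<uint8_t> data;
    std::size_t position { 0 };

    std::optional<uint8_t> read_byte()
    {
        if (position >= data.size())
            return {};
        uint8_t byte = data[position++];
        if (byte == 0xFF) {
            // Only consider one byte out of a run of 0xFF fill bytes.
            while (position < data.size() && data[position] == 0xFF)
                ++position;
            // Discard a 0x00 that is preceded by 0xFF (byte stuffing).
            if (position < data.size() && data[position] == 0x00)
                ++position;
        }
        return byte;
    }
};
```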
This class is similar, to some extent, to an implementation of `Stream`,
but specialized for JPEG reading.
Technically, it is composed of a `Vector` to buffer the input and
provides a simple interface to read bytes.
This patch aims for better, more specialized control over the input data
in the future. The encapsulation already allows us to get rid of the last
`seek` in the decoder, and thus gives us the ability to use the decoder
with a raw `Stream`.
As it provides faster `read` routines, this patch already reduces the
time spent on reading.
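In spirit, the class is something like this self-contained sketch
(illustrative, using standard types instead of the real AK ones): a
vector-backed buffer refilled from the underlying source, exposing a
simple byte-read interface:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <optional>
#include <vector>

class BufferedReader {
public:
    explicit BufferedReader(std::FILE* source)
        : m_source(source)
    {
    }

    std::optional<uint8_t> read_byte()
    {
        if (m_position >= m_buffer.size() && !refill())
            return {};
        return m_buffer[m_position++];
    }

private:
    bool refill()
    {
        m_buffer.resize(4096);
        auto read_count = std::fread(m_buffer.data(), 1, m_buffer.size(), m_source);
        m_buffer.resize(read_count);
        m_position = 0;
        return read_count > 0;
    }

    std::FILE* m_source { nullptr };
    std::vector<uint8_t> m_buffer;
    std::size_t m_position { 0 };
};
```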