This might be useful for converting data from arbitrary profiles to
sRGB.
For now, this only encodes the transfer function and puts in zero values
for chromaticities, whitepoint, and chromatic adaptation matrix.
This makes the profile unusable for now. But I've spent a very long time
reading things and need to check in some code, and it's some progress.
The encoded transfer function exactly matches the one in GIMP's built-in
sRGB ICC profile (but not the Compact-ICC-Profiles v4 one or the
RawTherapee v4 one -- I'll add a comment about why later.)
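For reference, what gets encoded is the standard sRGB EOTF as an ICC
'para' curve of function type 3. A minimal sketch using the ideal
IEC 61966-2-1 constants (not the actual LibGfx code; different profiles
round or tweak these constants slightly, which is presumably why they
don't all match byte-for-byte):

```cpp
// Sketch only. ICC 'para' function type 3 is:
//   Y = (a*X + b)^g  for X >= d
//   Y = c*X          for X <  d
struct ParametricCurve {
    float g, a, b, c, d;
};

// Ideal sRGB constants per IEC 61966-2-1:
constexpr ParametricCurve srgb_transfer_function {
    .g = 2.4f,
    .a = 1.0f / 1.055f,   // ~0.9479
    .b = 0.055f / 1.055f, // ~0.0521
    .c = 1.0f / 12.92f,   // ~0.0774
    .d = 0.04045f,
};
```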
This returns the font's size (distance between ascender and descender)
in pixels, rounded up to the nearest integer.
This is the number we want to use in a lot of UI code, so let's have
a friendly API for it instead of ceil'ing the pixel_size() in a million
random places.
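Roughly, the new accessor is just a ceil of the float pixel size; a
sketch (the real method and field names in LibGfx may differ):

```cpp
// Sketch; assumes a float pixel_size() accessor like the existing one.
int pixel_size_rounded_up() const
{
    return static_cast<int>(ceilf(pixel_size()));
}
```

So UI code can write `font.pixel_size_rounded_up()` instead of
sprinkling `ceilf(font.pixel_size())` everywhere.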
This API is used by LibWeb's text painter. Bring it up to date with the
glyph width computations performed in draw_text_line() used by other GUI
applications.
Similar to the FontDatabase, this will be needed for Ladybird to find
emoji images. We now generate just the file name of the emoji image in
LibUnicode, and look for that file in the specified path (defaulting to
/res/emoji) at runtime.
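Roughly, the flow looks like this (names and types are illustrative,
not the actual LibUnicode/Ladybird API):

```cpp
// Sketch: LibUnicode only produces the file name (e.g. "U+1F600.png");
// the caller joins it with whatever emoji directory it was configured
// with, which defaults to /res/emoji.
ByteString emoji_image_path(ByteString const& file_name,
    ByteString const& emoji_directory = "/res/emoji")
{
    return ByteString::formatted("{}/{}", emoji_directory, file_name);
}
```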
Scans with only one component are by definition not interleaved, meaning
that their values are stored sequentially in the stream. Grayscale images
were supported thanks to a hack, by forcing the subsampling to 1.
Now we properly support grayscale images with other subsampling factors
(even if that doesn't make much sense) and, more generally, scans with
only one component and any sampling factors.
While this solution is more general than the last one, it also feels a
bit hackish. We should probably refactor the way we iterate over
components and macroblocks. But that's work for later, especially when
we add support for subsampling factors other than 4:2:2.
Huffman streams are encountered in the scan segment. They serve no
purpose outside this segment, hence they shouldn't outlive the scan.
Please note that this patch changes behavior. The stream is now reset
after each scan.
A scan can contain fewer components than the full image. However, if
there are multiple components, they have to follow the ordering of the
frame header. This means that we can loop over the image's components
and skip those that aren't part of the scan.
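In code, the idea is roughly this (field names are assumptions about
the decoder's internals, not its exact structure):

```cpp
// Sketch: walk the frame's components in frame-header order and skip
// the ones the current scan doesn't include.
for (auto const& component : frame.components) {
    bool is_in_scan = false;
    for (auto const& scan_component : scan.components) {
        if (scan_component.id == component.id)
            is_in_scan = true;
    }
    if (!is_in_scan)
        continue;
    // ... decode this component's data units ...
}
```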
For now, we exit after the first scan without needing to parse `EOI`.
However, to read scans in a loop we will need to properly detect and
parse `EOI`.
This patch brings us closer to the spec's point of view. And while it
makes no functional changes, it reduces the number of places where you
can misuse scan-specific data and improves support for multiple scans.
Nobody made use of the ErrorOr return value, and it just added more
chance of confusion, since it was not clear whether failing to sniff an
image should return an error or false. The answer was false; if you
returned an Error, you'd crash the ImageDecoder.
Turns out extended-lossless-animated.webp did have a loop count of 0.
So I opened it in Hex Fiend and changed the byte at position 42
(which is the first byte of the little-endian u16 storing the loop
count) to 0x2A, so that the test can compare the loop count to something
not 0.
This reorganizes things so that:
* When initially decoding chunks, we only store pointers to
their data and don't look at the contents
* We allow pausing decoding after decoding the first chunk, since
that's where the dimensions are stored, and we don't need to read
more than that if we only care about dimensions. (Currently
inconsequential, but maybe we want to get dimensions after
receiving the first few bytes off the network in the future.)
* We then have separate methods to interpret chunk data
(only for the first few bytes which store the size, so far.)
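A rough sketch of the shape this takes (names are illustrative, not the
exact decoder types):

```cpp
// Phase 1: split the input into chunks without interpreting them.
struct Chunk {
    u32 type { 0 };      // FourCC
    ReadonlyBytes data;  // view into the input buffer, not yet decoded
};

// Decoding can stop after the first chunk if the caller only wants
// dimensions; interpretation happens later in dedicated methods, e.g.:
ErrorOr<IntSize> decode_size_from_first_chunk(Chunk const&);
```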
This is for lossy compression, in which case a WebP file is
a single VP8 key frame.
This only parses the 10-byte frame header, which contains image
dimensions (and some other things).
For now, just dbgln_if() all data. Eventually we'll want to use at
least width and height.
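For reference, the 10 bytes break down like this (per RFC 6386); the
snippet is a sketch of reading them, not the decoder's actual code:

```cpp
// data is assumed to be ReadonlyBytes pointing at the start of the frame.
// Bytes 0-2: frame tag (key frame bit, version, show_frame, first partition size)
// Bytes 3-5: start code 0x9d 0x01 0x2a (key frames only)
// Bytes 6-7: u16le; low 14 bits = width,  top 2 bits = horizontal scale
// Bytes 8-9: u16le; low 14 bits = height, top 2 bits = vertical scale
u32 frame_tag = data[0] | (data[1] << 8) | (data[2] << 16);
bool is_key_frame = (frame_tag & 1) == 0;
u16 raw_width = data[6] | (data[7] << 8);
u16 raw_height = data[8] | (data[9] << 8);
int width = raw_width & 0x3fff;
int height = raw_height & 0x3fff;
```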
No behavior change.
(Well, technically, this now correctly sets the state to Error
if the first chunk is none of 'VP8 ', 'VP8L', or 'VP8X'. But no
*interesting* behavior change.)
This reverts commit eb1ef59603c13c43b87c099c43c4d118dc8441f6.
The idea of saving the clip box to handle `overflow: hidden`
turned out to break painting if a box is painted before its containing
block (which is possible if the box has a negative z-index).
I drew the two webp files in Photoshop and saved them using the
"Save a Copy..." dialog, with ICC profile and all other boxes checked.
(I also tried saving with all the boxes unchecked, but it still wrote an
extended webp instead of a basic file.)
The lossless file exposed a bug: I didn't handle chunk padding
correctly before this patch.
At the moment, this processes the RIFF chunk structure and extracts
the ICCP chunk, so that `icc` can now print ICC profiles embedded
in webp files. (And are image files really more than containers
of icc profiles?)
It doesn't even decode image dimensions yet.
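Roughly, the container walk looks like this (a sketch with illustrative
names, not the actual decoder code):

```cpp
// WebP layout: "RIFF" <u32le file size> "WEBP", then chunks of
// <FourCC> <u32le size> <payload, padded to an even length>.
static Optional<ReadonlyBytes> find_iccp_chunk(ReadonlyBytes data)
{
    size_t offset = 12; // skip the RIFF/WEBP header
    while (offset + 8 <= data.size()) {
        StringView type { data.slice(offset, 4) };
        u32 size = data[offset + 4] | (data[offset + 5] << 8)
            | (data[offset + 6] << 16) | (static_cast<u32>(data[offset + 7]) << 24);
        if (offset + 8 + size > data.size())
            break;
        if (type == "ICCP"sv)
            return data.slice(offset + 8, size);
        offset += 8 + size + (size & 1); // odd-sized payloads get a pad byte
    }
    return {};
}
```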
The lossy format is a VP8 video frame. Once we get to that, we
might want to move all the image decoders into a new LibImageDecoders
that depends on both LibGfx and LibVideo. (Other newer image formats
like heic and avif also use video frames for image data.)
We could make UnknownTagData hold on to undecoded, raw input data and
write that back out when serializing. But for now, we don't.
On the flip side, this _does_ write unknown tags that have known types.
We could have a mode where we drop unknown tags with known types.
But for now, we don't have that either.
With this, we can for example reencode
/Library/ColorSync/Profiles/WebSafeColors.icc and icc (and other
tools) can dump the output icc file. The 'ncpi' tag with type 'ncpi'
is dropped while writing it, while the unknown tag 'dscm' with
known type 'mluc' is written to the output. (That file is a v2 file,
so 'desc' has to have type 'desc' instead of type 'mluc' which
it would have in v4 files -- 'dscm' emulates an 'mluc' description
in v2 files.)
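The write-side filtering described above boils down to something like
this (a sketch; container and function names are assumptions about the
LibGfx ICC code, not its exact API):

```cpp
// Tags whose *type* we couldn't parse are represented as UnknownTagData
// and get skipped; unknown tags with a known type are written normally.
for (auto const& entry : tag_table) {
    if (is<UnknownTagData>(*entry.value))
        continue; // e.g. 'ncpi' with type 'ncpi' gets dropped
    TRY(write_tag(stream, entry.key, *entry.value)); // 'dscm' ('mluc') is kept
}
```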