`CMYKBitmap::to_low_quality_rgb()` morally still does the same thing,
but it has a slightly more scary name, and it doesn't use this exact
function. So let's toss it :^)
CMYK data describes which inks a printer should use to print a color.
If a screen is supposed to display a color that looks similar to
what the printer produces, the result is very different from what
Color::from_cmyk() produces. (It's also printer-dependent.)
There are many ICC profiles describing printing processes. It doesn't
matter too much which one we use -- most of them look somewhat
similar, and they all look dramatically better than Color::from_cmyk().
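To make the contrast concrete, here is a sketch of the kind of purely
formula-based mapping meant here; the helper name and exact rounding are
illustrative, and Color::from_cmyk() may differ in detail. An ICC profile
instead runs the values through measured, per-process lookup tables.
```
#include <cstdint>

struct Rgb { uint8_t r, g, b; };

// Naive CMYK -> RGB: no printer model involved, just arithmetic.
Rgb naive_cmyk_to_rgb(float c, float m, float y, float k)
{
    auto channel = [](float value) { return static_cast<uint8_t>(value * 255.0f + 0.5f); };
    return {
        channel((1 - c) * (1 - k)),
        channel((1 - m) * (1 - k)),
        channel((1 - y) * (1 - k)),
    };
}
```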
This patch adds a function to download a zip file that Adobe offers
on their web site. They even have a page for redistribution:
https://www.adobe.com/support/downloads/iccprofiles/icc_eula_win_dist.html
(That one leads to a broken download though, so this downloads the
end-user version.)
In case we have to move off this download at some point, there are also
a whole bunch of profiles at https://www.color.org/registry/index.xalter
that "may be used, embedded, exchanged, and shared without restriction".
The Adobe zip contains a whole bunch of other useful and fun profiles,
so I went with it.
For now, this only unzips the USWebCoatedSWOP.icc file though, and
installs it in ${CMAKE_BINARY_DIR}/Root/res/icc/Adobe/CMYK/. In
Serenity builds, this will make it to /res/icc/Adobe/CMYK in the
disk image. And in Lagom builds, after #23016, this is the
lagom res staging directory that tools can install via
Core::ResourceImplementation. `pdf` and `MacPDF` already do that,
`TestPDF` now does it too.
The final piece is that LibPDF then loads the profile from there
and uses it for DeviceCMYK color conversions.
(Doing file access from the bowels of a library is a bit weird,
especially in a system that has sandboxing built in. But LibGfx does
that in FontDatabase too already, and LibPDF uses that, so it's not a
new problem.)
In cases where the stacking context painting requires a separate
bitmap, the destination position needs to be translated by the
scrolling offset to ensure it ends up in the correct position.
See #22821 for a previous attempt. This attempt should settle
things once and for all.
The OpenType render path adjusts by `-font_ascender * -y_scale` in
Glyf::Glyph::append_simple_path(), so that's what we need to undo
to draw at the font's baseline.
(OpenType::Font::metrics() returns ascender scaled by y_scale already,
so no need to have the scale here where we undo the shift.)
Previously, we called `baseline()` which just returns the font's
font size, which is pretty meaningless:
https://tonsky.me/blog/font-size/
https://simoncozens.github.io/fonts-and-layout/opentype.html#vertical-metrics-hhea-and-os2
Also, conceptually it makes sense to translate up by the ascender
to get from the upper edge of the glyph to the baseline.
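A minimal sketch of the relationship being used; the function names are made
up for illustration:
```
// append_simple_path() already shifted the outline down by the scaled
// ascender, so the glyph's top edge and its baseline differ by exactly
// that amount.
float baseline_y_for(float glyph_top_y, float scaled_ascender)
{
    return glyph_top_y + scaled_ascender;
}

float glyph_top_y_for(float baseline_y, float scaled_ascender)
{
    return baseline_y - scaled_ascender;
}
```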
https://llvm.org/devmtg/2022-11/slides/TechTalk5-WhatDoesItTakeToRunLLVMBuildbots.pdf
has an xref table that starts like so:
```
xref
0 214
0000000002 65535 f
0000924663 00000 n
0000000003 00000 f
0000000000 00000 f
0000000016 00000 n
0000000160 00000 n
0000000263 00000 n
```
This is a list of objects in the PDF file. The lines ending with 'f'
mean that this object is "free", that is, it's not stored in the file.
In this file, objects 0, 2, and 3 are free. For free objects, the first
number is the object number of the next free object: Object 0 refers to
object 2, 2 to 3, and 3 back to 0 (since it's the last free object).
The lines ending with "n" are actual objects; here the first number is
a byte offset to where that object is stored in the file.
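For reference, a rough sketch of what each entry carries; the field names are
illustrative, not LibPDF's actual representation:
```
#include <cstdint>

struct XRefEntrySketch {
    // For an "n" entry: the byte offset of the object in the file.
    // For an "f" entry: the object number of the next free object.
    uint64_t offset_or_next_free_object { 0 };
    uint16_t generation_number { 0 }; // The second number on the line.
    bool in_use { false };            // true for "n", false for "f".
};
```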
Furthermore, the file contains
```
/Outlines 2 0 R
```
in its root object, meaning that object 2 stores the page outlines.
Since object 2 is set as free, there is no object 2. But the spec
says that an invalid object reference is just the null object.
This patch makes us return null objects for references to free
objects, and it also makes us treat a null object as /Outlines value
the same as not having /Outlines in the first place.
Fixes #23023 -- we can now open that file. (We don't render it super
well, but only for already-known reasons.)
Since I found it a bit confusing: XRefTable has two related methods
here:
1. has_object() returns whether an object was explicitly listed in an
xref table. The first number right after `xref` is the start
index. So if an xref table were to start with `10`, we'd implicitly
create 10 trailing objects for which has_object() would return false
2. is_object_in_use() returns true if an object that was in a table
(i.e. one where has_object() returns true) was listed with 'n' and
false if it was listed with 'f'.
DocumentParser::parse_object_with_index() should probably return a null
object for the `!has_object()` case as well instead of VERIFY()ing
that has_object() is true. But I haven't seen this in the wild yet,
so keeping as-is for now.
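Putting the two methods together, here is a self-contained sketch of the
behavior described above; it mirrors the description, not LibPDF's actual
XRefTable:
```
#include <cstddef>
#include <vector>

struct SketchXRefTable {
    struct Entry {
        bool in_use { false }; // 'n' entries are in use, 'f' entries are free.
    };
    size_t starting_index { 0 }; // The first number right after `xref`.
    std::vector<Entry> entries;

    // Was this object explicitly listed in an xref table?
    bool has_object(size_t index) const
    {
        return index >= starting_index && index - starting_index < entries.size();
    }

    // Listed with 'n' (true) or with 'f' (false)?
    bool is_object_in_use(size_t index) const
    {
        return has_object(index) && entries[index - starting_index].in_use;
    }
};

// What this patch changes: a reference to an object that is listed but free
// now resolves to the null object. (The !has_object() case still VERIFY()s,
// as noted above.)
bool reference_resolves_to_null(SketchXRefTable const& table, size_t index)
{
    return table.has_object(index) && !table.is_object_in_use(index);
}
```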
This change addresses an issue with overflow clipping in scenarios
where `overflow: hidden` is applied to boxes nested within elements
with `overflow: scroll`.
Fixes https://github.com/SerenityOS/serenity/issues/22733
JPEGs can store a `restart_interval`, which controls how many
minimum coded units (MCUs) apart the stream state resets.
This can be used for error correction, decoding parts of a jpeg
in parallel, etc.
We tried to use
```
u32 i = vcursor * context.mblock_meta.hpadded_count + hcursor;
i % (context.dc_restart_interval *
     context.sampling_factors.vertical *
     context.sampling_factors.horizontal) == 0
```
to check if we were at a restart interval boundary.
`hcursor` is the horizontal offset into 8x8 blocks, vcursor the
vertical offset, and hpadded_count stores how many 8x8 blocks
we have per row, padded to a multiple of the sampling factor.
This isn't quite right if hcursor isn't divisible by both
the vertical and horizontal sampling factor. Tweak things so
that they work.
Also rename `i` to `number_of_mcus_decoded_so_far`, since that's what
it is, at least now.
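Roughly, the corrected idea is to count whole MCUs instead of 8x8 blocks.
A sketch follows; the variable names mirror the description above, and this
is the idea rather than a verbatim copy of the code:
```
#include <cstdint>

// hcursor/vcursor advance in steps of the sampling factors, and
// hpadded_count is padded to a multiple of the horizontal factor, so
// dividing everything down to MCU units first gives a clean count.
uint32_t number_of_mcus_decoded_so_far(uint32_t hcursor, uint32_t vcursor,
                                       uint32_t hpadded_count,
                                       uint32_t horizontal_factor,
                                       uint32_t vertical_factor)
{
    uint32_t mcus_per_row = hpadded_count / horizontal_factor;
    return (vcursor / vertical_factor) * mcus_per_row
        + hcursor / horizontal_factor;
}

// The restart check then becomes:
//   number_of_mcus_decoded_so_far(...) % dc_restart_interval == 0
```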
For the test case, I converted an existing image to a ppm:
```
Build/lagom/bin/image -o out.ppm \
    Tests/LibGfx/test-inputs/jpg/12-bit.jpg
```
Then I resized it to 102x77px in Photoshop and saved it again.
Then I turned it into a jpeg like so:
```
path/to/cjpeg \
    -outfile Tests/LibGfx/test-inputs/jpg/odd-restart.jpg \
    -sample 2x2,1x1,1x1 -quality 5 -restart 3B out.ppm
```
The trick here is to:
a) Pick a size that's not divisible by the data size width (8),
and that when rounded to a block size (13) still isn't divisible
by the subsample factor -- done by picking a width of 102.
b) Pick a huffman table that doesn't happen to contain the bit
pattern for a restart marker, so that reading a restart marker
from the bitstream as data causes a failure (-quality 5 happens
to do this)
c) Pick a restart interval where we fail to skip it if our calculation
is off (-restart 3B)
Together with #22987, fixes #22780.
This change makes hit-testing more consistent in the handling of hidden
overflow by reusing the same clip-rectangles.
Also, it fixes bugs where the box is visible for hit-testing even
though it is clipped by the hidden overflow of the containing block.
Hit-testing relies on updated clip rectangles and containing scroll
offsets, so it's necessary to ensure that paintables have these elements
updated.
This also removes the enclosing scroll offsets update from
`Internals::hit_test()`, as it is no longer needed.
Paintable boxes should not hold information stored in device pixels.
That information should only be converted from CSS pixels at the time
painting command recording occurs.
Non-interleaved files always have an MCU of one data unit.
(A "data unit" is an 8x8 tile of pixels, and an "MCU" is a
"minium coded unit", e.g. 2x2 data units for luminance and
1 data unit each for Cr and Cb for a YCrCb image with
4:2:0 subsampling.)
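A tiny sketch of that rule, as a hypothetical helper rather than the
decoder's actual code:
```
#include <cstdint>

struct DataUnitsPerMcu {
    uint32_t horizontal;
    uint32_t vertical;
};

// Interleaved scans pack sampling_factor_h x sampling_factor_v luma data
// units (plus the chroma ones) into each MCU; non-interleaved scans always
// use a single 8x8 data unit per MCU.
DataUnitsPerMcu luma_data_units_per_mcu(bool scan_is_interleaved,
                                        uint32_t sampling_factor_h,
                                        uint32_t sampling_factor_v)
{
    if (!scan_is_interleaved)
        return { 1, 1 };
    return { sampling_factor_h, sampling_factor_v };
}
```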
For the test case, I converted an existing image to a ppm:
```
Build/lagom/bin/image -o out.ppm \
    Tests/LibGfx/test-inputs/jpg/12-bit.jpg
```
Then I converted it to grayscale and saved it as a pgm in Photoshop.
Then I turned it into a weird jpeg like so:
```
path/to/cjpeg \
    -outfile Tests/LibGfx/test-inputs/jpg/grayscale_mcu.jpg \
    -sample 2x2 -restart 3 out.pgm
```
Makes 3 of the 5 jpegs that fail to decode in #22780 decode successfully.
Where it was straightforward to do so, I've updated the users to also
use ByteStrings for their file paths, but most of them have a temporary
String::from_byte_string() call instead.
That's all this function reads from Component.
Also rename from validate_luma_and_modify_context() to
validate_sampling_factors_and_modify_context().
No behavior change.
Many widget classes need to run substantial initialization code after
they have been set up from GML. With this change, an
initialize_fallibles() function is called if available, allowing the
initialization to be invoked from the GML setup automatically. This
means that the GML-generated creation function can now be used directly
for many more cases, and reduces code duplication.
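A hedged sketch of how the generated setup code might invoke the hook; the
detection via a `requires` check and the ErrorOr<void> return type are
assumptions for illustration, not necessarily what the GML compiler emits:
```
#include <AK/Error.h>
#include <AK/Try.h>

template<typename WidgetType>
ErrorOr<void> run_post_gml_initialization(WidgetType& widget)
{
    // Call the optional hook only if the widget class provides it.
    if constexpr (requires { widget.initialize_fallibles(); })
        TRY(widget.initialize_fallibles());
    return {};
}
```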
This allows positioning a child SVG relative to its parent SVG.
Note: These have been implemented as CSS properties because, in SVG 2,
these are geometry properties that can be used in CSS (see
https://www.w3.org/TR/SVG/geometry.html), but there is not much browser
support for this. It is nicer to implement than the ad-hoc SVG
attribute parsing though, so I feel it may make sense to port the rest
of the attributes specified here (which should fix some issues with
viewport relative sizes).
The hit-testing position is now shifted by the scroll offsets before
performing any checks for containment. This is implemented by assigning
each PaintableBox/InlinePaintable an offset corresponding to the scroll
frame in which it is contained. The non-scroll-adjusted position is
still passed down when recursing to children, because each assigned
offset already accumulates the offsets of nested scroll frames.
With this change, hit testing works in the Inspector.
Fixes https://github.com/SerenityOS/serenity/issues/22068
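A rough sketch of the adjustment with stand-in types (not LibWeb's actual
API): the accumulated scroll offset assigned to a paintable is applied to the
hit-test position before the containment check, while children still receive
the original position and apply their own offsets.
```
struct Point { float x, y; };

struct Rect {
    float x, y, w, h;
    bool contains(Point p) const
    {
        return p.x >= x && p.x < x + w && p.y >= y && p.y < y + h;
    }
};

// Containment check for one paintable box during hit-testing.
bool box_is_hit(Rect absolute_rect, Point position, Point accumulated_scroll_offset)
{
    Point adjusted { position.x + accumulated_scroll_offset.x,
                     position.y + accumulated_scroll_offset.y };
    return absolute_rect.contains(adjusted);
}
// When recursing into children, the original `position` is passed along;
// each descendant applies its own accumulated offset.
```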
Since we might enter Internals::hit_test() before the enclosing scroll
offsets are updated in the paintables tree during pre-paint, this
update needs to be enforced.
This is a fix for a regression introduced in 0bf82f748f.
All CSS transforms need to be removed from the clip rectangle before
applying it. However, it is still necessary to calculate it with
applied transforms to find the correct intersection of all clip
rectangles in the containing block chain.
With this change, clip rectangles for boxes with hidden overflow or the
clip property are no longer calculated during the recording of painting
commands. Instead, it has moved to the "pre-paint" phase, along with
the assignment of scrolling offsets, and works in the following way:
1. The paintable tree is traversed to collect all paintable boxes that
have hidden overflow or use the CSS clip property. For each of these
boxes, the "final" clip rectangle is calculated by intersecting clip
rectangles in the containing block chain for a box.
2. The paintable tree is traversed another time, and a clip rectangle
is assigned for each paintable box contained by a node with hidden
overflow or the clip property.
This way, clipping becomes much easier during the painting commands
recording phase, as it only concerns the use of already assigned clip
rectangles. The same approach is applied to handle scrolling offsets.
Also, clip rectangle calculation is now implemented more correctly, as
we no longer stop at the stacking context boundary while intersecting
clip rectangles in the containing block chain.
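A hedged sketch of the first pass with stand-in types (the real paintable
classes look different); the second pass is summarized in a comment:
```
#include <algorithm>
#include <optional>

struct Rect {
    float x, y, w, h;
    Rect intersected(Rect const& other) const
    {
        float left = std::max(x, other.x);
        float top = std::max(y, other.y);
        float right = std::min(x + w, other.x + other.w);
        float bottom = std::min(y + h, other.y + other.h);
        return { left, top, std::max(0.0f, right - left), std::max(0.0f, bottom - top) };
    }
};

struct Box {
    Box* containing_block { nullptr };
    std::optional<Rect> own_clip_rect; // From hidden overflow or `clip`, if any.
};

// Pass 1: for a clipping box, intersect every clip rect found on its
// containing block chain (no longer stopping at stacking context boundaries).
std::optional<Rect> compute_final_clip_rect(Box const& box)
{
    std::optional<Rect> result;
    for (auto const* ancestor = &box; ancestor; ancestor = ancestor->containing_block) {
        if (!ancestor->own_clip_rect)
            continue;
        if (!result)
            result = ancestor->own_clip_rect;
        else
            result = result->intersected(*ancestor->own_clip_rect);
    }
    return result;
}

// Pass 2: walk the paintable tree again and store the nearest clipping
// node's final rect on every paintable it contains, so recording painting
// commands only needs to read the already-assigned value.
```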
Fixes:
https://github.com/SerenityOS/serenity/issues/22932
https://github.com/SerenityOS/serenity/issues/22883
https://github.com/SerenityOS/serenity/issues/22679
https://github.com/SerenityOS/serenity/issues/22534
Now that these algorithms are a HeapFunction as opposed to SafeFunction,
the problem mentioned in the FIXME is no longer applicable as these
functions are GC-allocated like everything else.