This is a hack that allows block-level replaced elements to be flex
items. Flexbox layout currently assumes (in many places) that it's
always possible to create an independent formatting context for each of
its items.
This element (SVGClipPathElement) doesn't actually support anything at
the moment, but it
still massively speeds up painting performance on Wikipedia! :^)
How? Because we no longer paint SVG <path> elements found inside
<clipPath> elements. SVGClipPathElement::create_layout_node() returns
nullptr which stops the layout tree builder from recursing further into
the subtree, and so the <path> element never gets a layout or paint box.
Mousing over Wikipedia now barely breaks 50% CPU usage on my machine :^)
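Roughly, the mechanism looks like this (the exact override signature is
an assumption):

```cpp
// Sketch only; the real override may take different parameters.
RefPtr<Layout::Node> SVGClipPathElement::create_layout_node(NonnullRefPtr<CSS::StyleProperties>)
{
    // Returning nullptr stops the layout tree builder from descending into
    // this element's subtree, so children like <path> never get a layout
    // box, and therefore never get painted.
    return nullptr;
}
```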
There were two main issues with these functions (client_width() and
client_height()):
1. They were not updating layout before inspecting metrics.
2. They were not returning viewport metrics for the root and body
elements when appropriate.
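A rough sketch of the fixed shape of one of these getters; the helper
names here are assumptions:

```cpp
int Element::client_width()
{
    // Issue 1: make sure layout is up to date before reading any metrics.
    document().update_layout();

    // Issue 2: the root element and <body> should report viewport metrics.
    if (this == document().document_element() || this == document().body())
        return document().browsing_context()->viewport_rect().width();

    if (!layout_node() || !layout_node()->is_box())
        return 0;
    return static_cast<int>(static_cast<Layout::Box const&>(*layout_node()).content_width());
}
```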
Percentage stroke widths are resolved against the scaled viewport size
which we were retrieving by calling client_width() and client_height()
on the element. Now that those accessors may trigger layout, we can no
longer use them from the stroke_width() getter, which is itself used
*from within* layout.
Previously, we would create a new Gfx::ScaledFont whenever we needed one
for an element's computed style. This worked fine on Acid3 since the use
of web fonts was extremely limited.
In the wild, web fonts obviously get used a lot more, so let's have a
per-point-size font cache for them.
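Something along these lines, assuming a FontLoader-style class that
holds the downloaded vector font (all names are assumptions):

```cpp
// Assumed members: a vector font plus
// HashMap<float, NonnullRefPtr<Gfx::ScaledFont>> m_cached_fonts;
RefPtr<Gfx::Font> FontLoader::font_with_point_size(float point_size)
{
    // Reuse a previously created scaled font for this size if we have one.
    if (auto it = m_cached_fonts.find(point_size); it != m_cached_fonts.end())
        return it->value;
    if (!m_vector_font)
        return nullptr;
    auto font = adopt_ref(*new Gfx::ScaledFont(*m_vector_font, point_size, point_size));
    m_cached_fonts.set(point_size, font);
    return font;
}
```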
Block the default favicon loader from replacing the favicon when a
favicon loaded through a link tag is already active. This way, favicons
from link tags are prioritized over the default favicons from
`/favicon.ico` or the Serenity default icon.
When a favicon has been loaded, trigger a favicon update at the
document level. Of all the link tags in the head, the last favicon that
loaded successfully should be shown.
If a favicon fails to load, try the next icon in reverse tree order.
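The selection logic is roughly this (every name here is an assumption):

```cpp
void Document::check_favicon_after_loading_link_resource()
{
    // Walk the icon <link> candidates in reverse tree order and use the
    // first one that actually produced a usable bitmap.
    auto candidates = icon_link_elements_in_tree_order(); // hypothetical helper
    for (ssize_t i = static_cast<ssize_t>(candidates.size()) - 1; i >= 0; --i) {
        if (candidates[i]->load_favicon_bitmap_if_available())
            return; // This icon wins, and blocks the default /favicon.ico loader.
    }
    // No link-tag favicon could be loaded; let the default favicon loader run.
    load_default_favicon();
}
```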
When a font resource finishes loading, we need to make sure the element
using it gets a chance to re-layout, even if the font-family property
didn't change.
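Roughly (names assumed):

```cpp
void FontLoader::resource_did_load()
{
    // Build the usable font object from the downloaded bytes.
    m_vector_font = try_create_font_from_resource(); // hypothetical helper

    // Force a relayout so elements that already resolved their style with a
    // fallback font pick up the freshly loaded one, even though no CSS
    // property value changed.
    m_document.invalidate_layout();
}
```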
AFAICT, css-values-4 tells us that clamping numbers to a range where
min > max is okay. That means we can't use AK::clamp(), since it will
VERIFY that max >= min.
This patch adds a css_clamp() helper (locally in FlexFormattingContext
for now).
This fixes an issue where a bunch of sites were crashing due to the
VERIFY in AK::clamp().
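The helper is presumably something like this, following the rule that
an inverted range simply lets the minimum win:

```cpp
template<typename T>
[[nodiscard]] constexpr T css_clamp(T const& value, T const& min, T const& max)
{
    // Unlike AK::clamp(), this does not require min <= max.
    return AK::max(min, AK::min(value, max));
}
```

With that, css_clamp(5.0f, 10.0f, 2.0f) yields 10.0f instead of
tripping an assertion.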
We were mistakenly treating inline replaced elements as if they were the
start of a regular display:inline element. This meant that we collected
the horizontal start and end metrics from the box model, and then added
those to the inline-level item produced by InlineLevelIterator.
This effectively meant that <img>, <svg> and other replaced elements got
double-sized values for margin/border/padding on the left and right
sides. (Which manifested as a mysterious margin around the element.)
Previously, we were running the "load fonts if needed" machinery at the
start of every style computation. That was a lot of unnecessary work,
especially on sites with lots of style rules, since we had to traverse
every style sheet to see if any @font-face rules needed loading.
With this patch, we now load fonts once per sheet, right after adding
it to a document's style sheet list.
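The new trigger point looks something like this (names are
assumptions):

```cpp
void StyleSheetList::add_sheet(NonnullRefPtr<CSSStyleSheet> sheet)
{
    // Kick off any @font-face loads exactly once, when the sheet joins the
    // document's style sheet list, instead of re-scanning every sheet at
    // the start of each style computation.
    m_document.style_computer().load_fonts_from_sheet(*sheet);

    m_sheets.append(move(sheet));
    m_document.invalidate_style();
}
```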
Give the Paintable class some basic helpers for traversing the paint
tree. Note that they actually piggy-back on the layout tree for links
between nodes.
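The helpers are thin wrappers over the layout node links, roughly
(accessor names assumed):

```cpp
Paintable const* Paintable::first_child() const
{
    auto* layout_child = layout_node().first_child();
    return layout_child ? layout_child->paintable() : nullptr;
}

Paintable const* Paintable::next_sibling() const
{
    auto* layout_sibling = layout_node().next_sibling();
    return layout_sibling ? layout_sibling->paintable() : nullptr;
}
```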
When cloning the PaintContext, we should be using the painter backed by
the bitmap created for this stacking context layer.
Fixes: 54c3053bc3 ("LibWeb: Preserve paint state when painting...")
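In other words, something like this sketch (layer_rect,
paint_layer_contents and the clone() parameters are assumptions):

```cpp
// The cloned context must draw into the painter that targets this layer's
// bitmap, not into the original painter.
auto bitmap = MUST(Gfx::Bitmap::try_create(Gfx::BitmapFormat::BGRA8888, layer_rect.size()));
Gfx::Painter layer_painter(*bitmap);
auto layer_context = context.clone(layer_painter); // clone with the bitmap-backed painter
paint_layer_contents(layer_context);                // hypothetical helper
```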
This is used to skip downloading fonts in formats that we don't support.
Currently we only support TTF as far as I am aware.
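Conceptually the check is (names and format handling assumed):

```cpp
for (auto const& source : font_face.sources()) {
    // Skip sources whose format() hint names something we can't decode yet;
    // entries without a hint are still attempted.
    if (source.format.has_value() && !source.format->equals_ignoring_case("truetype"sv))
        continue;
    start_loading_font(source.url); // hypothetical helper
    break;
}
```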
Unusually, the parts of a `src` appear in a fixed order, which makes
the parsing more nested than loop-based.
Like An+B, this is an old construct that does not fit well with modern
CSS syntax, so things get a bit hairy! We have to determine which
tokens match the grammar for `<urange>`, then turn those back into a
string, and then parse the string differently from normal. Thankfully
the spec describes in detail how to do that. :^)
This is not 100% correct, since we are not using the original source
text (referred to in the spec as the "representation") of the tokens,
but just converting them to strings in a manual, ad-hoc way.
Re-engineering the Tokenizer to keep that original text was too much of
a tangent for today. In any case, we do parse `U+4???`, `U+0-100`,
`U+1234`, and similar, so good enough for now!
"Component value" is the term used in the spec, and it doesn't conflict
with any other types, so let's use the shorter name. :^)
Also, this doesn't need to be friends with the Parser any more.