Before this change, we always derived a box's baseline from its last
child, even if the last child didn't have any line boxes inside.
This caused baselines to slip further down vertically than expected.
There are more baseline alignment issues to fix, but this one was
responsible for a fair chunk of trouble. :^)
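As a rough illustration of the fix, here is a standalone sketch with
hypothetical types (not the actual LibWeb code): the baseline source is
now the last child that actually has line boxes, rather than simply the
last child.

```cpp
#include <optional>
#include <vector>

struct ChildBox {
    bool has_line_boxes { false };
    float baseline { 0 };
};

// Walk the children from the end and skip any child without line boxes.
std::optional<float> baseline_from_children(std::vector<ChildBox> const& children)
{
    for (auto it = children.rbegin(); it != children.rend(); ++it) {
        if (it->has_line_boxes)
            return it->baseline;
    }
    // No child contributes a baseline; the caller falls back to something else.
    return std::nullopt;
}
```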
I misunderstood the spec step for checking whether the host 'ends with a
number'. We can't simply check whether the host ends with a number; the
spec defines an actual algorithm for this check, which is needed to avoid
treating every host that merely ends with a number as an IPv4 host.
Implement this missing step, and add a test to cover this.
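Roughly, the spec's check looks only at the last dotted label of the host
and asks whether that whole label is a number (plain digits, or an
IPv4-number form such as hex with a 0x prefix). A simplified standalone
sketch of the idea (not the actual LibWeb code, and it omits some
IPv4-number forms the spec handles):

```cpp
#include <cctype>
#include <string>
#include <vector>

static bool ends_in_a_number(std::string const& host)
{
    // Split the host on '.' and drop a single trailing empty label ("host.").
    std::vector<std::string> parts;
    std::string current;
    for (char c : host) {
        if (c == '.') {
            parts.push_back(current);
            current.clear();
        } else {
            current += c;
        }
    }
    parts.push_back(current);
    if (parts.size() > 1 && parts.back().empty())
        parts.pop_back();

    std::string const& last = parts.back();
    if (last.empty())
        return false;

    // The whole last label must be a number; a label that merely *contains*
    // digits (e.g. "co2") does not count.
    bool all_digits = true;
    for (char c : last) {
        if (!std::isdigit(static_cast<unsigned char>(c)))
            all_digits = false;
    }
    if (all_digits)
        return true;

    // A hex IPv4-number form like "0x7f" also counts.
    if (last.size() > 2 && last[0] == '0' && (last[1] == 'x' || last[1] == 'X')) {
        for (size_t i = 2; i < last.size(); ++i) {
            if (!std::isxdigit(static_cast<unsigned char>(last[i])))
                return false;
        }
        return true;
    }
    return false;
}
```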
After the CSSPixels implementation evolved from a wrapper around a double
into a fixed-point implementation with saturating arithmetic, it makes
sense to have separate tests for it.
Using fixed-point saturating arithmetic for CSSPixels allows us to avoid
accumulating floating-point errors.
This implementation is not complete yet: saturating arithmetic is
currently implemented only for addition. But it is enough to not regress
any of the layout tests we have :)
See https://github.com/SerenityOS/serenity/issues/18566
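For illustration, a standalone sketch of what saturating fixed-point
addition looks like (hypothetical class and constants, not the actual
CSSPixels code): the sum is computed in a wider integer type and clamped
to the 32-bit range instead of wrapping around.

```cpp
#include <cstdint>
#include <limits>

class FixedPoint {
public:
    static constexpr int fractional_bits = 6; // assumption for illustration only

    static FixedPoint from_int(int value)
    {
        return FixedPoint(static_cast<int32_t>(value) * (1 << fractional_bits));
    }

    FixedPoint operator+(FixedPoint const& other) const
    {
        // Add in 64 bits, then clamp ("saturate") to the 32-bit range instead
        // of letting the value wrap around.
        int64_t wide = static_cast<int64_t>(m_raw) + other.m_raw;
        if (wide > std::numeric_limits<int32_t>::max())
            wide = std::numeric_limits<int32_t>::max();
        else if (wide < std::numeric_limits<int32_t>::min())
            wide = std::numeric_limits<int32_t>::min();
        return FixedPoint(static_cast<int32_t>(wide));
    }

    double to_double() const { return m_raw / static_cast<double>(1 << fractional_bits); }

private:
    explicit FixedPoint(int32_t raw)
        : m_raw(raw)
    {
    }

    int32_t m_raw { 0 };
};
```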
Returning a reference resulted in Mail's use of Promise causing a crash.
Also, awaiting an already-awaited promise is an odd thing to do anyway,
so let's just make it release the resolved/rejected value instead of
returning a reference to it.
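A simplified standalone sketch of the shape of that change (hypothetical
promise-like type, not the actual LibCore Promise API): the awaiting side
receives the result by value, moved out of the promise, rather than a
reference into the promise's storage.

```cpp
#include <optional>
#include <utility>

template<typename Result>
class SimplePromise {
public:
    void resolve(Result value) { m_result = std::move(value); }
    bool is_resolved() const { return m_result.has_value(); }

    // Hand the result out by value, leaving the promise empty. Awaiting the
    // same promise twice is then an obvious programming error instead of a
    // dangling reference into the promise's storage.
    Result await_result()
    {
        Result value = std::move(*m_result);
        m_result.reset();
        return value;
    }

private:
    std::optional<Result> m_result;
};
```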
Co-Authored-By: Valtteri Koskivuori <vkoskiv@gmail.com>
Follow the computing column measures section of the specification, which
gives an algorithm for setting intrinsic percentage widths when spanning
columns are involved.
This is intended to be used in adding test-js tests where there is
different behaviour between the AST interpreter and bytecode mode.
In particular, this is useful for tests which fail in AST, but pass in
bytecode, as the AST interpreter is run in CI but bytecode is not.
This image uses the modular encoding with a very simple prediction tree.
It also makes use of two features: upsampling (x2 factor) and a
non-standard bit depth (10 bits). The file has been generated on
https://jxl-art.surma.technology/, with the following input:

    Width 64
    Height 64
    Upsample 2
    Bitdepth 10
    if N > 300
    - NE -6
    - W 6
Change how we store the type of columns. It was used where the
specification only distinguishes between percent and everything else, so
it makes more sense to store and use it as a boolean.
The specification says we should distribute excess width proportionally
to the width of the cell, not to the preferred increment. Doing the
latter leads to distributing all excess width to just the cells which
demand some increment, even if it's very modest. Moreover, there's code
that partially implements the correct criterion just below the code we
remove here.
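A minimal standalone sketch of the distribution rule (hypothetical types,
not the actual table layout code): each cell receives a share of the
excess proportional to its current width.

```cpp
#include <vector>

struct Cell {
    float width { 0 };
};

void distribute_excess_width(std::vector<Cell>& cells, float excess)
{
    float total_width = 0;
    for (auto const& cell : cells)
        total_width += cell.width;
    if (total_width <= 0 || excess <= 0)
        return;
    // Share the excess in proportion to each cell's width, not to how much
    // extra width each cell asked for.
    for (auto& cell : cells)
        cell.width += excess * (cell.width / total_width);
}
```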
These passes have not been shown to actually optimize any JS, and tests
have become very flaky with optimizations enabled. Until some measurable
benefit is shown, remove the optimization passes to reduce overhead of
maintaining bytecode operations and to reduce CI churn. The framework
for optimizations will live on in git history, and can be restored once
proven useful.
The Heap::uproot_cell() API was used to implement markAsGarbage() which
was used in 3 tests to forcibly destroy a value, even if it had
references on the stack or elsewhere.
This patch rewrites the 3 tests that used this mechanism to be
structured in a way that allows garbage collection to collect the values
as intended without hacks. And now that the uprooting mechanism is no
longer needed, it's uprooted as well.
This fixes 3 test-js tests in bytecode mode. :^)
I added this file thinking it was necessary for the wpt run command.
However, it's only needed for updating expectations metadata. Since wpt
run always regenerates MANIFEST.json before updating expectations, we
can safely delete this file from the repository.
Reordering these calls allows us to ensure that all decoders are able to
return the size of the image before they are requested to decode the
whole bitmap.
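The property being checked, as a standalone sketch (hypothetical
interface, not the actual LibGfx decoder API): the size has to be
answerable up front, while the full bitmap decode may happen lazily
later.

```cpp
struct Size {
    int width { 0 };
    int height { 0 };
};

class ImageDecoderLike {
public:
    virtual ~ImageDecoderLike() = default;

    // Must be answerable from the header alone...
    virtual Size size() = 0;

    // ...while this may lazily decode the whole image on first use.
    virtual void decode_whole_bitmap() = 0;
};

// The test simply queries the size first and only then asks for the bitmap.
void check_size_is_available_before_decoding(ImageDecoderLike& decoder, Size expected)
{
    Size size = decoder.size();
    // e.g. compare size.width/size.height against expected here
    (void)size;
    (void)expected;
    decoder.decode_whole_bitmap();
}
```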
The Test262RunnerHandler class in test-test262 was made to spawn a
subprocess, connect to its input/output/error pipes, and obtain its
return value.
Later on, a copy of this implementation was added to TestSed with
modifications, such as adding support for reading from the output pipes
as well.
Unify these two implementations into a new Core::Command class. This new
implementation is more closely modeled on the TestSed implementation
due to the extra functionality it implemented.
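A rough standalone sketch of the mechanism both of those implementations
share (plain POSIX calls; this is not the Core::Command API itself):
spawn a child, wire pipes up to its stdin/stdout, write input, read
output, and collect the exit status.

```cpp
#include <string>
#include <sys/wait.h>
#include <unistd.h>

struct CommandResult {
    int exit_code { -1 };
    std::string output;
};

static CommandResult run_command(char const* path, char* const argv[], std::string const& input)
{
    int stdin_pipe[2];
    int stdout_pipe[2];
    if (pipe(stdin_pipe) < 0 || pipe(stdout_pipe) < 0)
        return {};

    pid_t pid = fork();
    if (pid < 0)
        return {};
    if (pid == 0) {
        // Child: wire the pipe ends up as stdin/stdout, then exec the program.
        dup2(stdin_pipe[0], STDIN_FILENO);
        dup2(stdout_pipe[1], STDOUT_FILENO);
        close(stdin_pipe[0]);
        close(stdin_pipe[1]);
        close(stdout_pipe[0]);
        close(stdout_pipe[1]);
        execvp(path, argv);
        _exit(127);
    }

    // Parent: close the child's ends, feed it input, then read its output.
    close(stdin_pipe[0]);
    close(stdout_pipe[1]);
    (void)write(stdin_pipe[1], input.data(), input.size()); // error handling elided
    close(stdin_pipe[1]);

    CommandResult result;
    char buffer[4096];
    ssize_t nread = 0;
    while ((nread = read(stdout_pipe[0], buffer, sizeof(buffer))) > 0)
        result.output.append(buffer, static_cast<size_t>(nread));
    close(stdout_pipe[0]);

    int status = 0;
    waitpid(pid, &status, 0);
    if (WIFEXITED(status))
        result.exit_code = WEXITSTATUS(status);
    return result;
}
```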
Once we've resolved the used flex item width & height, we should allow
percentage flex item sizes to resolve against them instead of forcing
flex items to always treat percentages as auto while doing intrinsic
sizing layout.
Regressed in 8dd489da61.
When specifying either `background-position-x: right` or
`background-position-y: bottom` without an offset value, no
EdgeStyleValue was created.
However, the spec says the offset should be optional.
Now, if you do not provide an offset, it creates the EdgeStyleValue
with a default offset of 0 pixels.
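A simplified standalone sketch of the new fallback (hypothetical types
and names, not the actual LibWeb parser code): when only the edge keyword
is present, the offset defaults to 0 pixels instead of the parser giving
up.

```cpp
#include <optional>

enum class Edge {
    Left,
    Right,
    Top,
    Bottom,
};

struct LengthOffset {
    double pixels { 0 };
};

struct EdgeValue {
    Edge edge;
    LengthOffset offset;
};

// `background-position-x: right` parses with no offset token; treat that the
// same as an explicit offset of 0px.
static EdgeValue make_edge_value(Edge edge, std::optional<LengthOffset> parsed_offset)
{
    return EdgeValue { edge, parsed_offset.value_or(LengthOffset { 0 }) };
}
```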
We do this by piggybacking on FormattingContext helpers instead of
reinventing the wheel in FlexFormattingContext.
This fixes an issue where `min-width: fit-content` (and other
layout-dependent values) were treated as 0 on flex items.
This makes the cookie banners look okay on https://microsoft.com/ :^)
This is just a straight (and fairly inefficient) implementation of IPv6
parsing and serialization from the URL spec.
Note that we don't use AK::IPv6Address here because the URL spec
requires a specific serialization behavior.
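As an illustration of the kind of rule the spec pins down, a simplified
standalone sketch of an IPv6 serializer following the URL spec's approach
(not the actual LibWeb code): the longest run of zero pieces (longer than
one) is collapsed to "::" and the remaining 16-bit pieces are written as
lowercase hex.

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <string>

static std::string serialize_ipv6(std::array<uint16_t, 8> const& pieces)
{
    // Find the first longest run of consecutive zero pieces.
    int compress_start = -1;
    int compress_length = 0;
    for (int i = 0; i < 8;) {
        if (pieces[i] != 0) {
            ++i;
            continue;
        }
        int j = i;
        while (j < 8 && pieces[j] == 0)
            ++j;
        if (j - i > compress_length) {
            compress_start = i;
            compress_length = j - i;
        }
        i = j;
    }
    // Only runs longer than one piece get compressed.
    if (compress_length < 2)
        compress_start = -1;

    std::string output;
    for (int i = 0; i < 8; ++i) {
        if (i == compress_start) {
            output += (i == 0) ? "::" : ":";
            i += compress_length - 1; // skip the rest of the zero run
            continue;
        }
        char buffer[8];
        std::snprintf(buffer, sizeof(buffer), "%x", static_cast<unsigned>(pieces[i]));
        output += buffer;
        if (i != 7)
            output += ':';
    }
    return output;
}
```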
If an inline-block has a percentage height that relies on the auto
height of the containing block, it should always resolve to the
automatic height of the box, regardless of the percentage value. This
change may seem confusing, but it aligns with the behavior of other
engines.
Unlike all other primitives, elliptical arcs are non-trivial to
manipulate; it's tricky to correctly apply a Gfx::AffineTransform to
them. Prior to this change, Path::copy_transformed() was still
incorrectly applying transforms such as flips and skews to arcs.
This patch very closely approximates arcs with cubic beziers (I can not
visually spot any differences), which can then be easily and correctly
transformed in all cases.
Most of the maths here was taken from:
https://mortoray.com/rendering-an-svg-elliptical-arc-as-bezier-curves/
(which came from https://www.joecridge.me/content/pdf/bezier-arcs.pdf,
now a dead link).
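A standalone sketch of the approximation technique (simplified to the
unrotated case, with hypothetical types; not the actual LibGfx code):
split the arc into segments of at most 90 degrees, and for each segment
place the cubic control points a distance of k = 4/3 * tan(delta/4) along
the tangents at the segment endpoints.

```cpp
#include <cmath>
#include <vector>

struct Point {
    double x { 0 };
    double y { 0 };
};

struct CubicBezier {
    Point start, control_1, control_2, end;
};

static std::vector<CubicBezier> approximate_elliptical_arc(
    Point center, double radius_x, double radius_y, double start_angle, double end_angle)
{
    constexpr double pi = 3.14159265358979323846;

    std::vector<CubicBezier> curves;
    double sweep = end_angle - start_angle;
    int segment_count = static_cast<int>(std::ceil(std::fabs(sweep) / (pi / 2)));
    if (segment_count == 0)
        return curves;

    double delta = sweep / segment_count;
    // Control points sit at k * tangent from each segment endpoint.
    double k = (4.0 / 3.0) * std::tan(delta / 4.0);

    auto point_at = [&](double angle) {
        return Point { center.x + radius_x * std::cos(angle), center.y + radius_y * std::sin(angle) };
    };
    auto tangent_at = [&](double angle) {
        // Derivative of the ellipse parameterization (unnormalized).
        return Point { -radius_x * std::sin(angle), radius_y * std::cos(angle) };
    };

    double angle = start_angle;
    for (int i = 0; i < segment_count; ++i) {
        double next_angle = angle + delta;
        Point p0 = point_at(angle);
        Point p1 = point_at(next_angle);
        Point t0 = tangent_at(angle);
        Point t1 = tangent_at(next_angle);
        curves.push_back({ p0,
            { p0.x + k * t0.x, p0.y + k * t0.y },
            { p1.x - k * t1.x, p1.y - k * t1.y },
            p1 });
        angle = next_angle;
    }
    return curves;
}
```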
This tests that we can successfully parse the "everything" TVG files,
which make use of every feature in TinyVG.
Test files taken from https://github.com/TinyVG/examples (MIT).
This test proves the ability of TransformStream to execute
caller-supplied code in the flush callback, and have access to
TransformStreamDefaultController.
This test proves the ability of TransformStream to execute
caller-supplied code in the start callback, and have access to
TransformStreamDefaultController.
This test proves the ability of TransformStream to execute
caller-supplied code in the transform callback that can transform
incoming chunks, and have access to TransformStreamDefaultController.
There are two parts to this fix:
- First, StyleProperties::transformations() would previously omit calc()
values entirely when returning the list of transformations. This was
very confusing to StackingContext which then tried to index into the
list based on faulty assumptions. Fix this by emitting calc values.
- Second, StackingContext::get_transformation_matrix() now always calls
resolve() on length-percentages. This takes care of actually resolving
calc() values. If no reference value for percentages is provided, we
default to 0px (see the sketch below).
This stops LibWeb from asserting on websites with calc() in transform
values, such as https://qt.io/ :^)
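A simplified standalone sketch of that resolve step (hypothetical types,
not the actual LibWeb API): a transform value made of a length part and a
percentage part, where the percentage resolves against a reference length
that defaults to 0px when none is available.

```cpp
#include <optional>

struct TransformValue {
    double length_px { 0 };  // fixed length part, in pixels
    double percentage { 0 }; // percentage part, e.g. 50 for 50%
};

static double resolve_transform_value(TransformValue const& value,
    std::optional<double> reference_length_px)
{
    // With no reference box available, percentages (including the percentage
    // part of a calc() expression) resolve against 0px.
    double reference = reference_length_px.value_or(0.0);
    return value.length_px + (value.percentage / 100.0) * reference;
}
```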