A compressed ALPH chunk is a lossless webp bitstream, but without
the 5-byte "header" that stores width, height, is-alpha-channel-used
(it never is for an ALPH chunk since the ALPH chunk gets the alpha
data out of the lossless webp's green channel), and version fields.
For that reason, this cuts decode_webp_chunk_VP8L() into the
header-reading part and the remaining part, so that the remaining
part can be called by the ALPH reading routine.
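Roughly, the split looks like this (a minimal sketch; the helper names and
signatures here are illustrative, not necessarily the ones in the patch):
    // Sketch only: names and types approximate the split described above.
    struct VP8LHeader {
        u16 width { 0 };
        u16 height { 0 };
        bool is_alpha_used { false };
        u8 version { 0 };
    };
    // Reads the 5-byte VP8L header (signature byte plus packed width/height/alpha/version).
    static ErrorOr<VP8LHeader> decode_webp_chunk_VP8L_header(ReadonlyBytes data);
    // Decodes the lossless bitstream that follows the header. The ALPH reading
    // routine can call this directly, since ALPH data carries no VP8L header.
    static ErrorOr<NonnullRefPtr<Gfx::Bitmap>> decode_webp_chunk_VP8L_contents(VP8LHeader const&, ReadonlyBytes lossless_data);
    static ErrorOr<NonnullRefPtr<Gfx::Bitmap>> decode_webp_chunk_VP8L(ReadonlyBytes data)
    {
        auto header = TRY(decode_webp_chunk_VP8L_header(data));
        return decode_webp_chunk_VP8L_contents(header, data.slice(5));
    }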
Lossy webp files with a (losslessly) compressed alpha channel can
be found in the wild. Since we can't decode lossy webp data yet,
change the `#if 0` in decode_webp_chunk_VP8() to `#if 1` to test this.
ALPH chunks are only used to give lossy webp frames an alpha channel,
and lossy decompression isn't implemented yet. So this can currently
never be hit in practice -- but for debugging and testing, I put in
some code behind `#if 0` for now that fake-decompresses a lossy webp
frame by returning an empty bitmap.
But this also doesn't implement compressed ALPH chunks yet, and I
couldn't find any lossy-webp-with-alpha files that use uncompressed
alpha channels. So the code here isn't really tested.
If someone comes along who wants to implement lossy webp decoding,
they now only need to implement decode_webp_chunk_VP8() and everything
might Just Work.
It also makes it possible to implement alpha chunk decoding before
implementing lossy decoding (by making decode_webp_chunk_VP8()
return an empty black bitmap for testing).
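The `#if 0` testing stub can be as simple as this (sketch; `VP8Header` and the
exact Bitmap creation call are assumptions on my part):
    static ErrorOr<NonnullRefPtr<Gfx::Bitmap>> decode_webp_chunk_VP8(VP8Header const& header)
    {
    #if 0
        // Fake "decode": hand back an empty (zero-filled, i.e. black/transparent)
        // bitmap of the right size, so the ALPH code path can be exercised
        // without a real lossy decoder.
        return Gfx::Bitmap::create(Gfx::BitmapFormat::BGRA8888, { header.width, header.height });
    #else
        return Error::from_string_literal("lossy webp decoding is not yet implemented");
    #endif
    }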
That way, animated and non-animated webp files use the same code path
to decode images. That will make it easier to add handling for lossy
decompression and for alpha chunks.
No behavior change.
The width of the table wrapper needs to be set before the width of the
table box inside it can be calculated. Otherwise, TFC will compute the
wrong width, because it assumes the width of the containing block is 0.
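Roughly, the ordering this needs in the parent formatting context (sketch; the
exact helpers and where the width comes from are approximations):
    // Give the wrapper a definite content width first, so the table formatting
    // context inside doesn't see a 0-width containing block.
    auto& wrapper_state = m_state.get_mutable(table_wrapper);
    wrapper_state.set_content_width(used_wrapper_width); // hypothetical value source
    // Only now lay out the wrapper's children (the table box itself).
    layout_inside(table_wrapper, LayoutMode::Normal, available_space);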
There is a NOTE in our implementation of these steps which states that
the effective overload set only contains overloads with the correct
number of arguments. While this is true, we should not skip the steps to
clamp the inspected argument count to that correct number. Otherwise, we
will dereference past the end of the overload set's type list as we
blindly iterate over the user-provided arguments.
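The missing clamp boils down to something like this (sketch; variable names
are illustrative):
    // Without the clamp, argument_count can exceed the size of this overload's
    // type list, and the loop below reads past the end of `overload.types`.
    auto argument_count = min(arguments.size(), overload.types.size());
    for (size_t i = 0; i < argument_count; ++i) {
        auto const& type = overload.types[i];
        // ... inspect arguments[i] against `type` to narrow down the overload ...
    }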
Fixes #18670.
"The official project language is American English […]."
5d2e915623/CONTRIBUTING.md (L30)
Here's a quick count of the occurrences of the word "behavio(u)r":
$ git grep -IPioh 'behaviou?r' | sort | uniq -c | sort -n
2 BEHAVIOR
24 Behaviour
32 behaviour
407 Behavior
992 behavior
Therefore, it is clear that "behaviour" (56 occurrences) should be
regarded as a typo, and "behavior" (1401 occurrences) should be preferred.
Note that the occurrences in LibJS are intentionally NOT changed,
because they are taken verbatim from the specification. Hence:
$ git grep -IPioh 'behaviou?r' | sort | uniq -c | sort -n
2 BEHAVIOR
10 behaviour
24 Behaviour
407 Behavior
1014 behavior
With this, lossless animated webp files work :^)
(Missing: Loop count handling is not yet implemented, and alpha blending
between frames isn't done in linear space.)
Old pattern:
foo.resolved(node, reference_value).to_px(node)
New pattern:
foo.to_px(node, reference_value)
Also, the reference value for to_px() is a CSSPixels, which means we
don't have to synthesize a CSS::Length just for this call.
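So the new-style declaration is presumably along these lines (sketch; the
exact class and parameter names may differ):
    // Resolve directly against a CSSPixels reference value, without first
    // synthesizing a CSS::Length for it:
    CSSPixels to_px(Layout::Node const& node, CSSPixels reference_value) const;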
It's not safe to hold on to a pointer to the cache slot across layout
work, since the nested layout may end up causing new entries to get
added to the cache, potentially invalidating a cache slot pointer.
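In other words (illustrative sketch, not the actual cache type or call sites):
    // Risky: `slot` points into the cache's backing storage. If the nested layout
    // inserts new entries, the table may grow/rehash and `slot` goes stale.
    auto* slot = &cache.ensure(key);
    perform_nested_layout(box); // may add entries to `cache`
    *slot = measurement;        // writes through a possibly-invalidated pointer
    // Safer: look the slot up again once the nested layout work is done.
    perform_nested_layout(box);
    cache.ensure(key) = measurement;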
Non-finite CSSPixels quantities should never make their way into hash
tables. If this ever happens, let's catch it closer to the source
instead of letting things cascade into confusion.
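Something like this, close to where the value is produced (sketch; the
underlying-number accessor and cache name are assumptions):
    // Catch non-finite values before they can become HashMap keys.
    VERIFY(isfinite(width.value()));
    VERIFY(isfinite(height.value()));
    size_cache.set({ width, height }, computed_size);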
If a Proxy has an undefined trap, it falls back to the target's
internal_has_property, which then checks the target's prototype for
the requested property. If the Proxy's prototype is set to the Proxy
itself, it ends up checking itself in a loop, causing a stack overflow.
It's not enough to invalidate only layout, since changes to the DOM tree
may also cause different selectors to apply.
This brings Acid3 back to a score of 100/100. The problem was that a
:last-child selector wasn't rechecked after removing a node that was
previously the last child of its parent.
Regression from f36cbd3b65.
This fixes an issue where we sometimes pass an empty Main::Arguments to
GUI::Application::create().
Also, this mimics the behavior that Application::construct() had, which
only iterated over argv when more than one argument was passed to it.
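Sketch of the guard (illustrative only; the real function does more than this):
    ErrorOr<NonnullRefPtr<Application>> Application::create(Main::Arguments const& arguments)
    {
        auto application = TRY(adopt_nonnull_ref_or_enomem(new (nothrow) Application));
        // Match the old Application::construct() behavior: only look at argv when
        // more than one argument was passed, so an empty Main::Arguments is fine.
        if (arguments.argc > 1) {
            // ... process arguments.argv[1 .. argc-1] ...
        }
        return application;
    }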
This necessitated returning `nullptr` instead of just `{}` in a lot of
places. Also, some temporary hackiness in `parse_css_value()`: That
returns a special `ParseError` type already, so we now have a
`FIXME_TRY()` macro which logs the error and then returns a generic
`ParseError::InternalError` value. Eventually this macro will go away,
once I figure out how to deal with this more nicely.
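For reference, FIXME_TRY() is roughly this shape (sketch; the exact logging
text differs):
    #define FIXME_TRY(expression)                                           \
        ({                                                                   \
            auto&& _result = (expression);                                   \
            if (_result.is_error()) {                                        \
                /* FIXME: Propagate this error instead of swallowing it. */  \
                dbgln("FIXME_TRY: {}", _result.error());                     \
                return ParseError::InternalError;                            \
            }                                                                \
            _result.release_value();                                         \
        })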
Although the algorithm for sizing tracks (rows or columns) is defined
once for both dimensions in the specification
(https://www.w3.org/TR/css-grid-2/#algo-track-sizing), we have
implemented it twice separately for sizing rows and columns.
In addition to code duplication, another issue is that these
implementations of the same algorithm have already diverged in some
places, and this divergence is likely to become even worse as our
implementation evolves.
This change unifies the code for both dimensions into one method that
runs track sizing.
While this change brings a bit of collateral damage (border.html and
minmax.html got changes in layout snapshots), it ultimately brings more
benefits, because now we can evolve layout for both rows and columns
without duplicating the code :)
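The unified entry point is roughly shaped like this (sketch; names are
approximate):
    enum class GridDimension {
        Row,
        Column,
    };
    void GridFormattingContext::run_track_sizing(AvailableSpace const& available_space, GridDimension dimension)
    {
        auto& tracks = dimension == GridDimension::Column ? m_grid_columns : m_grid_rows;
        auto available_size = dimension == GridDimension::Column ? available_space.width : available_space.height;
        // One implementation of https://www.w3.org/TR/css-grid-2/#algo-track-sizing,
        // driven by `dimension` instead of being copy-pasted for rows and columns.
        // The numbered steps below all operate on `tracks` within `available_size`:
        // 1. Initialize track sizes
        // 2. Resolve intrinsic track sizes
        // 3. Maximize tracks
        // 4. Expand flexible tracks
        // 5. Stretch 'auto' tracks
    }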