We got some errors while loading https://twinings.co.uk/ about this
interface being missing, and it looked fairly simple, so I sketched it
out.
Note that I did leave some FIXMEs where it's not clear exactly which
metrics we should be returning.
This object is available as `window.internals` (or just `internals`) and
is only accessible while running in "test mode".
This first version only has one API: gc(), which triggers a garbage
collection immediately.
In the future, we can add more APIs here to help us test parts of the
engine that are hard or impossible to reach via public web APIs.
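A minimal sketch of how a test might use it:

    // Only present in test mode; guard so the snippet is harmless elsewhere.
    if (window.internals)
        internals.gc(); // collect garbage right now instead of waiting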
Every caller only ever expects percent encoding to occur, so let's just
remove this flag altogether. At the same time, replace some
DeprecatedString with StringView.
Add the CanvasTextDrawingStyles mixin with the textAlign and
textBaseline attributes. Update fill_text in CanvasRenderingContext2D
to offset the text rect according to the textAlign and textBaseline
attributes. Also wrote a simple HTML example to showcase the new
features.
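A minimal sketch of the new attributes in action (not the HTML example
that ships with the patch):

    const canvas = document.createElement("canvas");
    const ctx = canvas.getContext("2d");
    ctx.font = "20px sans-serif";
    ctx.textAlign = "center";    // horizontal anchor relative to the given x
    ctx.textBaseline = "middle"; // vertical anchor relative to the given y
    ctx.fillText("Hello", canvas.width / 2, canvas.height / 2);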
Responsive images that had an image source selected based on viewport
size would only update when the images were first fetched, meaning the
last valid image source would become stuck in an image element. By
implementing the last step for reacting to environment changes, we can
run the proper updates even when the image does not need to be fetched.
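For instance, an image whose candidate is picked from the viewport
width (hypothetical file names) is now re-evaluated on resize:

    const img = new Image();
    img.srcset = "small.png 480w, large.png 1024w";
    img.sizes = "100vw";
    document.body.append(img);
    // Resizing the window now re-runs source selection instead of
    // leaving the first chosen candidate stuck.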
- Requesting an unsupported image type will now fall back to PNG
  (which is now always the case); the sketch below shows this,
- Errors should return 'data:,' instead of an empty string,
- Added spec comments
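A quick sketch of the behaviour described above (the requested type
string is arbitrary):

    const canvas = document.createElement("canvas");
    // An unsupported type falls back to PNG:
    canvas.toDataURL("image/bogus"); // "data:image/png;base64,..."

    // A bitmap with no pixels is one of the error cases:
    const blank = document.createElement("canvas");
    blank.width = 0;
    blank.toDataURL(); // "data:," instead of ""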
Parsing 'data:' URLs took its own route: it never set the standard URL
fields like path, query or fragment (only the scheme) and instead gave
us separate methods called `data_payload()`, `data_mime_type()`, and
`data_payload_is_base64()`.
Because parsing 'data:' didn't use standard fields, running the
following JS code:
new URL('#a', 'data:text/plain,hello').toString()
not only cleared the path, since URLParser doesn't check for data from
the data_payload() function (making the result 'data:#a'), but also
crashed the program, because we forbid having an empty MIME type when
serializing to a string.
With this change, 'data:' URLs will be parsed like every other URL.
To decode the 'data:' URL contents, one needs to call process_data_url()
on a URL, which will return a struct containing the MIME type along with
the already-decoded data! :^)
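With that, the snippet above resolves like any other URL with an opaque
path:

    new URL('#a', 'data:text/plain,hello').toString()
    // => "data:text/plain,hello#a" (no crash, payload preserved)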
This patch changes the way StructuredSerializeInternal() serializes
strings by storing four bytes into a 32-bit entry, instead of one code
point per 32-bit entry.
StructuredDeserialize() has also been changed to deserialize strings by
the same ruleset.
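As a rough sketch of the packing idea (the byte order within an entry
is an assumption here, not taken from the patch):

    // Pack four bytes per 32-bit entry; "hello" needs 2 entries, not 5.
    function packBytes(bytes) {
        const entries = [];
        for (let i = 0; i < bytes.length; i += 4) {
            let entry = 0;
            for (let j = 0; j < 4 && i + j < bytes.length; ++j)
                entry |= bytes[i + j] << (8 * j);
            entries.push(entry >>> 0);
        }
        return entries;
    }
    packBytes(new TextEncoder().encode("hello")); // => [0x6c6c6568, 0x6f]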
In order to follow the spec text to achieve this, we need to change the
underlying representation of a host in AK::URL to the deserialized
format.
Before this, we were parsing the host and then immediately serializing
it again.
Making that change resulted in a whole bunch of fallout.
After this change, callers can access the serialized data through
this concept-host-serializer. The functional end result of this
change is that IPv6 hosts are now correctly serialized to be
surrounded with '[' and ']'.
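For example:

    new URL('http://[::1]/').host // => "[::1]", brackets included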
This patch implements "react to changes in the environment" from the
HTML spec and hooks HTMLImageElement up with viewport rect change
notifications (from the browsing context).
This fixes the issue where we'd load a low-resolution image and not
switch to a high-resolution image after resizing the window.
Note that we currently can't resolve calc() values without a layout
node, so when normalizing an image's source set, we'll flush any pending
layout updates and hope that gives us an up-to-date layout node.
I've left a FIXME about implementing this in a more elegant and less
layout-thrashy way, as that will require more architectural work.
Required by Atlassian to continue with their authorization process.
Also used by the SerenityOS FAQ redirect on the website, by the
Bootstrap documentation for going to older versions from the dropdown,
and likely by several other sites.
Before this change, we would process each image as it finished
downloading. This often led to a situation where we'd decode 1 image,
schedule a layout, do the layout, then decode another image, schedule
a layout, do the layout, etc. Basically decoding and layouts would get
interleaved even though we had multiple images fetched and ready for
decoding.
This patch adds a simple BatchingDispatcher thingy that HTMLImageElement
uses to batch the handling of successful fetches.
With this, the number of layouts while loading https://shopify.com/ goes
from 48 to 6, and the page loads noticeably faster. :^)
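The real dispatcher is a C++ class; the idea, roughly sketched in JS
(the deferral mechanism here is an arbitrary stand-in):

    class BatchingDispatcher {
        #queue = [];
        enqueue(callback) {
            this.#queue.push(callback);
            // Defer the flush so everything queued meanwhile runs together.
            if (this.#queue.length === 1)
                queueMicrotask(() => this.#flush());
        }
        #flush() {
            const batch = this.#queue;
            this.#queue = [];
            for (const callback of batch)
                callback(); // handle one finished fetch; layout follows once
        }
    }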
This fixes an assertion on https://amazon.com/ since WindowProxy
would advertise "0" as an own property key, but then act like it was
a bogus property when actually queried for it directly.
Rather than splitting the Iterator type and its AOs into two files,
let's combine them into one file to match every other JS runtime object
that we have.
This is an editorial change in the ECMA-262 spec. See:
956e5af
This splits the GetIterator AO into two AOs, to remove some recursion
and to (soon) remove optional parameters.
This does not implement extra functionality on top of the basic parser,
but allows multiple places in LibWeb to call the 'correct' interface for
when it is fully implemented.
Now that we use the HTML image loading algorithm from the spec, we can
implement `complete` correctly.
This (finally) fixes an issue where images were not loading on
https://twinings.co.uk/ :^)
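A small sketch of what "correctly" means here (the file name is made
up):

    const img = new Image();
    img.complete;            // true: no src or srcset is set yet
    img.src = "photo.png";
    img.complete;            // false while the fetch is still in flight
    img.onload = () => img.complete; // true once the image is available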
Since the underlying HTML::Window can change, caching property accesses
on WindowProxy is not as simple as remembering the shape. Let's disable
caching here for now. We can come back to it in the future when we have
no low-hanging fruit left. :^)
Fixes an assertion failure on https://twinings.co.uk/
This function now takes an optional out parameter for callers who would
like to know what kind of property we ended up getting.
This will be used to implement inline caching for property lookups.
Also, to prepare for adding more forms of caching, the out parameter
is a struct CacheablePropertyMetadata rather than just an offset. :^)
The main missing features are rootMargin, proper nested browsing
context support and content clip/clip-path support.
This makes images appear on some sites, such as YouTube and
howstuffworks.com.
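rootMargin is an IntersectionObserver option, so this is observer
territory; the lazy-loading pattern such sites rely on looks roughly
like this (the data-src attribute is a common convention, not something
from this patch):

    const observer = new IntersectionObserver((entries, observer) => {
        for (const entry of entries) {
            if (!entry.isIntersecting)
                continue;
            entry.target.src = entry.target.dataset.src;
            observer.unobserve(entry.target);
        }
    });
    for (const img of document.querySelectorAll("img[data-src]"))
        observer.observe(img);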
We've peppered this workaround around the code base as needed in a few
different ways. This adds a helper class to perform this workaround in
order to simplify doing so, and ensure cleanup occurs in a RAII fashion.
This also makes it easier to grep for places where this workaround is
employed.
This fixes an issue where a BOM at the head of a style sheet would be
passed verbatim to the parser, which would then interpret it as an ident
token and (after some confusion) fail to parse the first rule, but then
carry on with the rest of the sheet.
This allows increasing and decreasing the media volume by 10% with the
up and down arrow keys, respectively. This also allows toggling the mute
state with the M key.
This allows seeking backwards and forwards by 5 seconds with the left
and right arrow keys, respectively. This also allows seeking to the
beginning and end of the media track with the home and end keys.