To abort the processing of any nested invocations of the tokenizer,
simply returning is enough in this case.
When processing the pending parsing-blocking script, the
is_ready_to_be_parser_executed() check should be applied to the
blocking script, not the original script.
Now that the processResponseConsumeBody algorithm receives the internal
response body of the fetched object, we do not need to go out of our way
to read its body from outside of fetch.
However, several elements do still need to manually inspect the internal
response for other data, such as response headers and status. Note that
HTMLScriptElement already does the new workaround as a proper spec step.
This allows us to figure out where a specific CSS property comes from,
which is going to be used in a future commit to uniquely identify
running animations.
We have to check that the entry in CrossOriginProperties is the one
actually requested by the caller before executing the body of the
loop. This fixes a crash triggered by YouTube iframe embedding.
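
A minimal standalone sketch of the fix, using hypothetical stand-in
types (the real CrossOriginProperties entries and lookup live elsewhere
in the engine):

    #include <optional>
    #include <string>
    #include <vector>

    // Stand-in for an entry in the CrossOriginProperties list (illustrative only).
    struct CrossOriginPropertyEntry {
        std::string property;
        bool needs_get { false };
        bool needs_set { false };
    };

    // Only act on the entry whose property matches what the caller asked for;
    // previously the loop body ran for every entry, which could crash.
    static std::optional<CrossOriginPropertyEntry> find_cross_origin_property(
        std::vector<CrossOriginPropertyEntry> const& entries,
        std::string const& requested_property)
    {
        for (auto const& entry : entries) {
            if (entry.property != requested_property)
                continue;
            return entry;
        }
        return std::nullopt;
    }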
Instead of eagerly populating the stack trace with a textual
representation of every call frame, just store the raw source code range
(code, start offset, end offset). From that, we can generate the full
rich backtrace when requested, and save ourselves the trouble otherwise.
This makes test-wasm take ~7 seconds on my machine instead of ~60. :^)
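
A rough sketch of the idea with hypothetical names (the actual engine
types differ): keep only the source range per frame and build the text
lazily.

    #include <cstddef>
    #include <memory>
    #include <string>
    #include <vector>

    // Each captured frame stores only a cheap reference to the source plus a
    // byte range, instead of a pre-formatted description.
    struct CapturedFrame {
        std::shared_ptr<std::string const> code;
        size_t start_offset { 0 };
        size_t end_offset { 0 };
    };

    // The rich textual backtrace is only generated when someone asks for it.
    static std::string render_backtrace(std::vector<CapturedFrame> const& frames)
    {
        std::string result;
        for (auto const& frame : frames) {
            result += frame.code->substr(frame.start_offset,
                frame.end_offset - frame.start_offset);
            result += '\n';
        }
        return result;
    }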
This PR corrects the signs of the start/end points that are passed
to elliptical_arc_to(). It also removes the bogus path.close()
after creating the arc. Now .arc() seems to work correctly.
BrowsingContext::set_active_document() may end up asking for the Page's
top-level browsing context before the Page has one. As a workaround for
now, add an API that asks whether it's initialized.
Long-term we should find a cleaner solution.
This fixes an issue where loading an iframe would cause the current
browser tab title to get overwritten with an empty string.
The problem is that nested browsing contexts can be considered "top
level" during their initialization, but only one browsing context is
ever the Page::top_level_browsing_context(), so that's what we check.
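
A sketch of the workaround, under the assumption that the Page simply
holds a pointer that may not be set yet (names approximate):

    struct BrowsingContext { /* elided */ };

    class Page {
    public:
        // Callers that may run during initialization can check this first
        // instead of hitting a null dereference below.
        bool top_level_browsing_context_is_initialized() const
        {
            return m_top_level_browsing_context != nullptr;
        }

        BrowsingContext& top_level_browsing_context()
        {
            return *m_top_level_browsing_context;
        }

    private:
        BrowsingContext* m_top_level_browsing_context { nullptr };
    };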
We now create a flex container inside the input element's UA shadow tree
and add the placeholder and non-placeholder text as flex items (wrapped
in elements whose style we can manipulate).
This fixes the visual glitch where the placeholder would appear below
the bounding box of the input element. It also allows us to align the
text vertically inside the input element (like we're supposed to).
In order to achieve this, I had to make two small architectural changes
to layout tree building:
- Elements can now report that they represent a given pseudo element.
This allows us to instantiate the ::placeholder pseudo element as an
actual DOM element inside the input element's UA shadow tree.
- We no longer create a separate layout node for the shadow root itself.
Instead, children of the shadow root are treated as if they were
children of the DOM element itself for the purpose of layout tree
building (sketched below).
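
A self-contained sketch of the second change, using stand-in node types
rather than the real DOM and layout classes:

    #include <memory>
    #include <vector>

    // Stand-in DOM node for illustration only.
    struct Node {
        std::vector<std::unique_ptr<Node>> children;
        std::unique_ptr<Node> shadow_root; // only set on elements hosting a shadow tree
    };

    static void build_layout_subtree(Node& node);

    static void build_layout_children(Node& node)
    {
        // If the element hosts a shadow root, don't create a layout node for
        // the shadow root itself; treat its children as children of the host.
        auto& effective_children = node.shadow_root ? node.shadow_root->children
                                                    : node.children;
        for (auto& child : effective_children)
            build_layout_subtree(*child);
    }

    static void build_layout_subtree(Node& node)
    {
        // ... create the layout node for `node` itself here ...
        build_layout_children(node);
    }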
That's what this class really is; in fact that's what the first line of
the comment says it is.
This commit does not rename the main files, since those will contain
other time-related classes in a little bit.
This fixes a plethora of rounding problems on many websites.
In the future, we may want to replace this with fixed-point arithmetic
(bug #18566) for performance (and consistency with other engines),
but in the meantime this makes the web look a bit better. :^)
There are a lot more things that could be converted to doubles, which
would reduce the amount of casting necessary in this patch.
We can do that incrementally, however.
Some of the live HTMLCollection objects only ever contain children of
their root node. When we know that's the case, we can avoid doing a
full subtree traversal of all descendants and only visit the children.
This cuts the ECMA262 spec loading time by over 10 seconds. :^)
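
A standalone sketch of the difference, with a stand-in node type (the
real HTMLCollection filtering is more involved):

    #include <functional>
    #include <memory>
    #include <vector>

    // Stand-in DOM node for illustration only.
    struct TreeNode {
        std::vector<std::unique_ptr<TreeNode>> children;
    };

    // Full subtree traversal: visits every descendant of `root`.
    static void for_each_descendant(TreeNode& root,
        std::function<void(TreeNode&)> const& callback)
    {
        for (auto& child : root.children) {
            callback(*child);
            for_each_descendant(*child, callback);
        }
    }

    // When a collection can only ever match direct children of its root,
    // visiting just the children is enough and far cheaper on huge documents.
    static void for_each_child(TreeNode& root,
        std::function<void(TreeNode&)> const& callback)
    {
        for (auto& child : root.children)
            callback(*child);
    }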
Rather than queueing microtasks ad nauseam to check if a media element
has a new source candidate, let the media element tell us when it might
have a new child to inspect. This removes endless CPU churn in cases
where there is never a candidate that we can play.
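
A hypothetical sketch of the shape of this change (the real resource
selection algorithm is far more involved):

    // The element remembers that it is waiting for a source candidate, and
    // the DOM insertion path pokes it when a child is inserted, instead of
    // a microtask loop polling forever.
    class MediaElement {
    public:
        void mark_waiting_for_source_candidate() { m_waiting_for_source = true; }

        // Called from the element's children-changed hook.
        void children_changed()
        {
            if (!m_waiting_for_source)
                return;
            m_waiting_for_source = false;
            try_next_source_candidate();
        }

    private:
        void try_next_source_candidate()
        {
            // ... run the resource selection steps against the new child ...
        }

        bool m_waiting_for_source { false };
    };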
In order to separate the SVG content from the rest of the engine, it
gets its very own Page, PageClient, top-level browsing context, etc.
Unfortunately, we do have to get the palette and CSS/device pixel ratios
from the host Page for now; maybe that's something we could refactor in
the future.
Note that this doesn't work visually yet, since we don't calculate the
intrinsic sizes & ratio for SVG images. That comes next. :^)
This allows the painting subsystem to request a bitmap with the exact
size needed for painting, instead of being limited to "just give me a
bitmap" (which was perfectly enough for raster images, but not for
vector graphics).
The existing implementation moves down into a new subclass called
AnimatedBitmapDecodedImageData.
The purpose of this change is to create an extension point where we can
plug in an SVG renderer. :^)
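
A rough sketch of the resulting class shape; names and signatures are
illustrative, not the engine's actual API:

    #include <memory>

    struct IntSize {
        int width { 0 };
        int height { 0 };
    };

    struct Bitmap { /* pixel data elided */ };

    // The abstract extension point: painting now passes the size it actually
    // wants, so a vector-graphics subclass can rasterize at that exact size.
    class DecodedImageData {
    public:
        virtual ~DecodedImageData() = default;
        virtual std::shared_ptr<Bitmap const> bitmap(IntSize preferred_size) const = 0;
    };

    // The old raster behavior moves into a subclass and simply ignores the
    // requested size, returning the decoded frame as before.
    class AnimatedBitmapDecodedImageData final : public DecodedImageData {
    public:
        virtual std::shared_ptr<Bitmap const> bitmap(IntSize) const override
        {
            return m_current_frame;
        }

    private:
        std::shared_ptr<Bitmap const> m_current_frame;
    };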
If linking fails, we throw a JS exception, and if there's no execution
context on the VM stack at that time, we assert in VM::current_realm().
This is a hack to prevent crashing on failed module loads. Long term we
need to rewrite module loading since it has been refactored to share
code differently between HTML and ECMA262.
Instead of recomputing the state whenever someone asks for it, we now
cache it when the attribute is added/changed/removed.
Before this change, HTMLElement::is_editable() was 6.5% of CPU time
when furiously resizing Hacker News. After, it's less than 0.5%. :^)
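
A simplified sketch of the caching approach, assuming an attribute-changed
hook (the real attribute values and inheritance rules have more cases):

    #include <optional>
    #include <string>
    #include <string_view>

    enum class ContentEditableState {
        True,
        False,
        Inherit,
    };

    class EditableElement {
    public:
        // Recompute the cached state only when the attribute is added, changed,
        // or removed, instead of re-parsing it on every is_editable() call.
        void attribute_changed(std::string_view name, std::optional<std::string> const& value)
        {
            if (name != "contenteditable")
                return;
            if (!value.has_value())
                m_state = ContentEditableState::Inherit;
            else if (value->empty() || *value == "true")
                m_state = ContentEditableState::True;
            else if (*value == "false")
                m_state = ContentEditableState::False;
            else
                m_state = ContentEditableState::Inherit;
        }

        ContentEditableState content_editable_state() const { return m_state; }

    private:
        ContentEditableState m_state { ContentEditableState::Inherit };
    };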
The spec seems to neglect the potential nullity of an image's pending
request in various cases.
Let's protect against crashing and mark these cases with a FIXME about
figuring out whether they are really spec bugs or not.
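
An illustrative guard for one such case, with stand-in types (the actual
image request machinery is more complex):

    struct ImageRequest { /* elided */ };

    struct ImageElement {
        ImageRequest* pending_request { nullptr };

        void abort_pending_request()
        {
            // FIXME: The spec appears to assume a pending request always exists
            //        at this point; figure out whether that's actually a spec bug.
            if (!pending_request)
                return;
            // ... abort and clear the pending image request ...
        }
    };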