There are a whole bunch of SVG attributes, and we shouldn't mix them in
with the HTML attributes. This patch adds some of them to the new
namespace, but there are more to be added. :^)
This is a very partial implementation, as some features (like 2 of the
possible constructor types, iteration, and the getAll method) are
missing, and others are not implemented due to the currently missing
URL built-in.
This namespace will be used for all interfaces defined in the URL
specification, like URL and URLSearchParams.
This has the unfortunate side-effect of requiring us to use the fully
qualified AK::URL name whenever we want to refer to the AK class, so
this commit also fixes all such references.
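A tiny illustration of the name clash (heavily simplified; the real
namespaces of course contain much more):

```cpp
namespace AK { class URL { }; }
namespace Web { namespace URL { class URL { }; } }

using namespace AK;
using namespace Web;

int main()
{
    // URL plain_url;  // error: `URL` is ambiguous here (the class AK::URL
    //                 // vs. the Web::URL namespace pulled in above)
    AK::URL qualified_url; // the fully qualified name resolves it
    (void)qualified_url;
}
```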
getComputedStyle(element) now returns a ComputedCSSStyleDeclaration
object, which is a live view of the computed style of a given element.
This works by ComputedCSSStyleDeclaration being a wrapper around an
element pointer. When you ask it for a CSS property, it gets the latest
computed style values from the element and returns them as a
CSS::StyleProperty object.
This first cut adds support for computed 'color' and 'display'.
In case the element doesn't have a corresponding node in the layout
tree, we fall back to using specified style instead. This is achieved by
performing an on-the-fly style resolution for the individual element and
then grabbing the requested property from that resolved style.
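To make the live-view idea concrete, here is a minimal standalone
sketch; the types are simplified stand-ins, not the actual LibWeb
classes:

```cpp
#include <cstdio>
#include <map>
#include <string>

// Stand-in for DOM::Element and its computed style storage.
struct Element {
    std::map<std::string, std::string> computed_values;
};

// Holds only a reference to the element and re-reads the element's
// current values on every lookup, so the view is always up to date.
class ComputedStyleView {
public:
    explicit ComputedStyleView(Element& element)
        : m_element(element)
    {
    }

    std::string property(std::string const& name) const
    {
        auto it = m_element.computed_values.find(name);
        return it != m_element.computed_values.end() ? it->second : "";
    }

private:
    Element& m_element;
};

int main()
{
    Element element;
    element.computed_values["color"] = "rgb(0, 0, 0)";

    ComputedStyleView style(element);
    std::printf("%s\n", style.property("color").c_str());

    // Mutate the element; the view reflects the change automatically.
    element.computed_values["color"] = "rgb(255, 0, 0)";
    std::printf("%s\n", style.property("color").c_str());
}
```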
The spec allows us to optionally return from these for any reason.
Our reason is that we don't have all the infrastructure in place yet to
implement them.
This patch attaches an HTML::EventLoop to the main thread JS::VM used
for JavaScript bindings in the web engine.
The goal here is to model the various task scheduling mechanisms of the
HTML specification.
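A toy model of the direction this is headed (not the spec's full
processing model, which also interleaves microtask checkpoints and
rendering opportunities):

```cpp
#include <cstdio>
#include <deque>
#include <functional>

// Toy event loop: tasks queued from various sources are drained in order.
class EventLoop {
public:
    void queue_task(std::function<void()> task)
    {
        m_task_queue.push_back(std::move(task));
    }

    void run()
    {
        while (!m_task_queue.empty()) {
            auto task = std::move(m_task_queue.front());
            m_task_queue.pop_front();
            task();
        }
    }

private:
    std::deque<std::function<void()>> m_task_queue;
};

int main()
{
    EventLoop loop;
    loop.queue_task([] { std::puts("task 1"); });
    loop.queue_task([] { std::puts("task 2"); });
    loop.run();
}
```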
The direct-Document-access DOMTreeModel is no longer used, since the DOM
Inspector has to access the Document remotely over IPC. This commit
removes it, and renames DOMTreeJSONModel to take its place, since it no
longer has to differentiate itself from the non-JSON one.
In case that didn't make sense:
- Delete DOMTreeModel
- Rename DOMTreeJSONModel -> DOMTreeModel
The DOM specification says that the primary use case for these is to
give Promises abort semantics. It is also a prerequisite for Fetch,
as it is used to make Fetch abortable.
This is needed so that all headers and files exist on disk, allowing
the SonarCloud analyzer to find them when executing the compilation
commands contained in compile_commands.json, without actually building.
Co-authored-by: Andrew Kaster <akaster@serenityos.org>
Gather the custom commands for each of the 6 bindings generated targets
for libjs_js_wrapper invocations into some lists so that we can foreach
over the lists instead of having 6 copy-pasted commands with one or two
things modified for each one.
As additional refactoring, use the target_sources command to inform
CMake about
additional source files for LibWeb, but only after it's been declared as
a library via add_library. Also avoid use of the write_if_different
script and use cmake -E copy_if_different instead. This lets us express
the actions in rules that CMake understands without going to an external
source file. It exposes a few optimization opportunities for the code
generators to accept an output filename instead of always going to
stdout.
Change all the places that were including the deprecated parser, to
include the new one instead, and then delete the old parser code.
`ParentNode::query_selector[_all]()` now treat their input as a
comma-separated list of selectors, instead of just one, and return
elements that match any of the selectors in that list. This is according
to these specs:
- querySelector/querySelectorAll:
https://dom.spec.whatwg.org/#ref-for-dom-parentnode-queryselector%E2%91%A0
- selector matching algorithm:
https://www.w3.org/TR/selectors-4/#match-against-tree
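As an illustration of the matching rule, here is a small self-contained
sketch; the types and the single-selector matcher are simplified
stand-ins for the real selector engine:

```cpp
#include <string>
#include <vector>

// Stand-in element type; in LibWeb this would be DOM::Element.
struct Element {
    std::string tag_name;
};

// Hypothetical single-selector matcher; here it only handles type
// selectors. The real engine parses selectors into structured form.
static bool matches_single(Element const& element, std::string const& selector)
{
    return element.tag_name == selector;
}

// An element matches a selector list if it matches ANY selector in the
// list, per https://www.w3.org/TR/selectors-4/#match-against-tree.
static bool matches_selector_list(Element const& element,
    std::vector<std::string> const& selectors)
{
    for (auto const& selector : selectors) {
        if (matches_single(element, selector))
            return true;
    }
    return false;
}

int main()
{
    Element div { "div" };
    std::vector<std::string> selectors { "span", "div" };
    return matches_selector_list(div, selectors) ? 0 : 1;
}
```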
This allows you to invoke the HTML document parser and retrieve a
document as though it was loaded as a web page, minus any scripting
ability.
This does not currently support XML parsing.
This is used by YouTube (or more accurately, Web Components Polyfills)
to polyfill templates.
This introduces a new DOMTreeJSONModel, which provides the Model for the
InspectorWidget when the Browser is running using the
OutOfProcessWebView.
This Model is constructed with a JSON object received via IPC from the
WebContentServer.
Our "frame" concept very closely matches what the web specs call a
"browsing context", so let's rename it to that. :^)
The "main frame" becomes the "top-level browsing context",
and "sub-frames" are now "nested browsing contexts".
This allows the JS side to access the wasm memory, assuming it's
exported by the module.
This can be used to draw stuff on the wasm side and display it from
the JS side, for example :^)
This impl is *extremely* simple and is missing a lot of things. It's
also not particularly spec-compliant in some places, but it's definitely
a start :^)
This patch implements the HTML specification's "encoding sniffing
algorithm", which is used when no encoding can be obtained from the
Content-Type header, either because it doesn't contain a charset=...
value or because the file has not been opened via HTTP (as with local
files).
It also modifies the creator of the HTMLDocumentParser to use the new
HTMLDocumentParser::create_with_uncertain_encoding static method, which
runs the encoding sniffing algorithm before instantiating the parser.
This now allows us to load local HTML pages (or remote pages without a
charset specified in the 'Content-Type' header) with a non-UTF-8
encoding such as 'windows-1252'. This would previously crash the
browser. :^)
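For a rough idea of the sniffing order, here is a simplified standalone
sketch; the real spec algorithm also prescans the input bytes for a
<meta charset=...> declaration and has locale-dependent fallbacks:

```cpp
#include <cstddef>
#include <string>

// Returns a best-guess encoding for a byte buffer that arrived without
// a charset from the Content-Type header.
std::string sniff_encoding(unsigned char const* data, std::size_t size)
{
    // 1. A byte order mark wins outright.
    if (size >= 3 && data[0] == 0xEF && data[1] == 0xBB && data[2] == 0xBF)
        return "UTF-8";
    if (size >= 2 && data[0] == 0xFE && data[1] == 0xFF)
        return "UTF-16BE";
    if (size >= 2 && data[0] == 0xFF && data[1] == 0xFE)
        return "UTF-16LE";

    // 2. Prescan the first bytes for a <meta charset=...> declaration.
    //    (Elided here; this is the bulk of the spec's algorithm.)

    // 3. Fall back to a default encoding.
    return "UTF-8";
}

int main()
{
    unsigned char const utf8_bom[] = { 0xEF, 0xBB, 0xBF, '<' };
    return sniff_encoding(utf8_bom, sizeof(utf8_bom)) == "UTF-8" ? 0 : 1;
}
```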
The current ProtocolServer was really only used for requests, and with
the recent introduction of the WebSocket service, long-lasting
connections with another server are no longer part of it. To better
reflect
this, this commit renames it to RequestServer.
This commit also changes the existing 'protocol' portal to 'request',
the existing 'protocol' user and group to 'request', and most mentions
of the 'download' aspect of the request to 'request' when relevant, to
make everything consistent across the system.
Note that LibProtocol still exists as-is, but the more generic Client
class and the more specific Download class have both been renamed to a
more accurate RequestClient and Request to match the new names.
This commit only changes names, not behaviors.
The WebSocket bindings match the original specification from the
WHATWG living standard, but do not match the later update of the
standard that involves Fetch. The Fetch update will be handled later
since the changes would also affect XMLHttpRequest.
HTMLCollection is an awkward legacy interface from the DOM spec.
It provides a live view of a DOM subtree, with some kind of filtering
that determines which elements are part of the collection.
We now return HTMLCollection objects from these APIs:
- getElementsByClassName()
- getElementsByName()
- getElementsByTagName()
This initial implementation does not do any kind of caching, since that
is quite a tricky problem, and there will be plenty of time for tricky
problems later on when the engine is more mature.
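To illustrate what "live" means here, a small standalone sketch
(simplified stand-ins, not the real HTMLCollection):

```cpp
#include <functional>
#include <string>
#include <vector>

struct Node {
    std::string tag_name;
    std::vector<Node*> children;
};

// The collection stores a root and a filter, and walks the subtree on
// every access instead of caching matches.
class LiveCollection {
public:
    LiveCollection(Node& root, std::function<bool(Node const&)> filter)
        : m_root(root)
        , m_filter(std::move(filter))
    {
    }

    // Re-walk the tree each time, so DOM mutations are always reflected.
    std::vector<Node*> items() const
    {
        std::vector<Node*> result;
        collect(m_root, result);
        return result;
    }

private:
    void collect(Node& node, std::vector<Node*>& out) const
    {
        if (m_filter(node))
            out.push_back(&node);
        for (auto* child : node.children)
            collect(*child, out);
    }

    Node& m_root;
    std::function<bool(Node const&)> m_filter;
};

int main()
{
    Node root { "html", {} };
    Node div { "div", {} };
    root.children.push_back(&div);

    LiveCollection divs(root, [](Node const& n) { return n.tag_name == "div"; });
    auto count_before = divs.items().size(); // 1

    Node another { "div", {} };
    root.children.push_back(&another);
    auto count_after = divs.items().size(); // 2, no re-query needed
    return (count_before == 1 && count_after == 2) ? 0 : 1;
}
```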
HTMLInputElement now inherits from FormAssociatedElement, which will
be a common base for the handful of elements that need to track their
owner form (and register with it for the form.elements collection.)
At the moment, the owner form is assigned during DOM insertion/removal
of an HTMLInputElement. I didn't implement any of the legacy behaviors
defined by the HTML parsing spec yet.
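A toy sketch of the ownership tracking (the names differ from LibWeb's
actual classes): the form-associated base registers with its owner form
on DOM insertion and unregisters on removal.

```cpp
#include <vector>

class FormAssociated;

// Backs the form.elements collection.
class Form {
public:
    void add_element(FormAssociated&);
    void remove_element(FormAssociated&);
    std::vector<FormAssociated*> const& elements() const { return m_elements; }

private:
    std::vector<FormAssociated*> m_elements;
};

class FormAssociated {
public:
    // Called on DOM insertion/removal to keep the owner form in sync.
    void set_form(Form* form)
    {
        if (m_form)
            m_form->remove_element(*this);
        m_form = form;
        if (m_form)
            m_form->add_element(*this);
    }

private:
    Form* m_form { nullptr };
};

void Form::add_element(FormAssociated& element) { m_elements.push_back(&element); }
void Form::remove_element(FormAssociated& element) { std::erase(m_elements, &element); }

int main()
{
    Form form;
    FormAssociated input;
    input.set_form(&form);   // element inserted under the form
    input.set_form(nullptr); // element removed again
    return form.elements().empty() ? 0 : 1;
}
```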
Instead of all interested parties needing to write out the code to get
the cookie value for a load request, add a static helper to do it in
one location.
This moves the cookie parsing steps out of CookieJar into their own file
inside LibWeb. It makes sense for the cookie structures to be in LibWeb
for a couple reasons:
1. There are some steps in the spec that will need to partially happen
from LibWeb, such as the HttpOnly attribute.
2. Parsing the cookie string will be safer if it happens in the OOP tab
rather than the main Browser process. Then if the parser blows up due
to a malformed cookie, only that tab will be affected.
3. Cookies in general are a Web concept not specific to a browser.
The HTML <label> element is special in that it may be associated with
some other <input> element. When the label element is clicked, the input
element should be activated.
To achieve this, a LabelableNode base class is introduced to provide an
interface for "labelable" elements to handle mouse events on their
associated labels. This not only allows clicking the label to activate
the input, but dragging the mouse from the label to the input (and vice
versa) while the mouse button is held will also activate the input.
As of this commit, this infrastructure is not hooked up to any elements.
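For a sense of the intended shape, a toy sketch (the hook names here
are illustrative, not the actual LibWeb interface):

```cpp
#include <cstdio>

// "Labelable" elements receive mouse events that happened on their
// associated <label>.
class LabelableNode {
public:
    virtual ~LabelableNode() = default;
    virtual void handle_associated_label_mousedown() { }
    virtual void handle_associated_label_mouseup() { }
};

class CheckBox final : public LabelableNode {
public:
    void handle_associated_label_mouseup() override
    {
        m_checked = !m_checked; // clicking the label activates the input
        std::printf("checked: %s\n", m_checked ? "true" : "false");
    }

private:
    bool m_checked { false };
};

int main()
{
    CheckBox box;
    // The label would call these when it sees the matching mouse events.
    box.handle_associated_label_mousedown();
    box.handle_associated_label_mouseup();
}
```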
A FrameHostElement is an HTML element (<frame> or <iframe>) that may
have a content frame that participates in the frame tree.
This basically just moves code from <iframe> to a separate base class
so we can share it with <frame> once we implement <frame>.
Use the new CustomGet/CustomPut wrapper mechanism to intercept gets and
puts on CSSStyleDeclaration objects. This allows content to get and set
individual CSS properties from JavaScript. :^)
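As a toy illustration of the interception idea (the hook names and
signatures here are stand-ins, not the actual LibJS mechanism):

```cpp
#include <map>
#include <string>

// A wrapper that routes named gets/puts to CSS property lookups/updates.
class StyleDeclarationWrapper {
public:
    // Stands in for the CustomGet hook: read the named CSS property.
    std::string custom_get(std::string const& property_name) const
    {
        auto it = m_properties.find(property_name);
        return it != m_properties.end() ? it->second : "";
    }

    // Stands in for the CustomPut hook: set the named CSS property.
    void custom_put(std::string const& property_name, std::string value)
    {
        m_properties[property_name] = std::move(value);
    }

private:
    std::map<std::string, std::string> m_properties;
};

int main()
{
    StyleDeclarationWrapper style;
    style.custom_put("color", "red"); // like element.style.color = "red"
    return style.custom_get("color") == "red" ? 0 : 1;
}
```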