This patch makes IteratorRecord an Object. Although it's not exposed to
author code, this allows us to store it in a VM register, which means we
no longer need to convert it back and forth between IteratorRecord and
Object when accessing it from bytecode.
The big win here is avoiding 3 [[Get]] accesses on every iteration step
of for..of loops. There are also a bunch of smaller efficiencies gained.
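As a rough sketch of where those [[Get]]s come from (simplified, and not
literally what the bytecode executes), each step of a for..of loop keeps
reaching back into the iterator record, whose iterator, next method, and
done flag were previously plain object properties:
    // Simplified desugaring of `for (const p of values)`, ignoring
    // abrupt completions. The iterator record bundles the iterator, its
    // next method, and a done flag, and every step touches them. When
    // the record was a plain Object, each of those was a [[Get]]; as a
    // dedicated cell they become direct slot reads.
    const values = [1, 2, 3];
    const iterator = values[Symbol.iterator]();
    const nextMethod = iterator.next;
    let result = nextMethod.call(iterator);
    while (!result.done) {
        const p = result.value;
        // ...loop body...
        result = nextMethod.call(iterator);
    }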
20% speed-up on this microbenchmark:
    function go(a) {
        for (const p of a) {
        }
    }
    const a = [];
    a.length = 1_000_000;
    go(a);
For now, we handle this by creating a synthetic async function to wrap
the top-level module code. This allows us to piggyback on the async
function driver wrapper mechanism.
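Conceptually (the wrapper name below is made up for illustration), a
module whose top-level code contains an await is executed as if that
code had been moved into an async function, which the existing async
function driver already knows how to suspend and resume:
    // Module top-level code like:
    //     const value = await Promise.resolve(42);
    //     export default value;
    // is driven roughly as if it had been wrapped like this:
    const runModuleTopLevel = async () => {
        const value = await Promise.resolve(42);
        return value;
    };
    runModuleTopLevel();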
This ensures that repeated loads of the same module succeed. (There is a
specific requirement that the exact same module object be returned for
multiple loads of the same referrer + specifier.)
Note that we don't check the referrer at the moment; that's a FIXME.
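As an illustration of that requirement (the specifier below is
hypothetical, and this assumes a module context with top-level await),
importing the same specifier twice must hand back the exact same
namespace object:
    // Two dynamic imports of the same specifier from the same referrer
    // must resolve to the exact same module namespace object.
    const first = await import("./some-module.mjs");
    const second = await import("./some-module.mjs");
    console.log(first === second); // true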
In particular, this patch focuses on:
- Updating the old "import assertions" to the new "import attributes"
  (see the syntax example after this list)
- Allowing realms as module import referrer
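For reference, the syntactic difference looks like this (the JSON module
path is hypothetical):
    // Old "import assertions" syntax:
    //     import config from "./config.json" assert { type: "json" };
    // New "import attributes" syntax:
    import config from "./config.json" with { type: "json" };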
This allows them to participate in the ownership graph and fixes a
lifetime issue in module loading found by ASAN.
Co-Authored-By: networkException <networkexception@serenityos.org>
This allows the bytecode interpreter to call a builtin's C++
implementation directly, without making a JavaScript call, just as the
JIT does.
Kraken test speedups: imaging-gaussian-blur.js (1.5x) and
audio-oscillator.js (1.2x)
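As a rough illustration (a hypothetical microbenchmark, not taken from
those Kraken tests), it is hot loops dominated by builtin calls like
this that benefit, assuming Math.sqrt is on the recognized builtin list:
    // Each Math.sqrt call can dispatch straight to the C++ builtin
    // implementation instead of going through a full JavaScript call.
    function sumOfRoots(count) {
        let sum = 0;
        for (let i = 0; i < count; ++i)
            sum += Math.sqrt(i);
        return sum;
    }
    sumOfRoots(1_000_000);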
We don't have the facilities to implement this method fully (namely, a
fully realized SharedArrayBuffer). But we can implement enough to
validate the values passed in by the user.
We don't have the facilities to implement these methods fully (namely, a
fully realized SharedArrayBuffer). But we can implement enough to
validate the values passed in by the user.
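As an example of the kind of validation that is possible without real
blocking support (assuming Atomics.wait is among the methods in
question), the argument checks the spec requires happen before any wait
would occur:
    // Atomics.wait requires an Int32Array (or BigInt64Array) backed by
    // a SharedArrayBuffer and an in-bounds index; those checks can be
    // implemented and tested even without a fully realized
    // SharedArrayBuffer.
    try {
        Atomics.wait(new Int32Array(4), 0, 0); // not shared -> TypeError
    } catch (e) {
        console.log(e instanceof TypeError); // true
    }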
From the test262 documentation, this flag means:
    The test file should only be run when the [[CanBlock]] property of
    the Agent Record executing the file is `false`.
This patch stubs out the accessor for that internal slot and skips
tests with the CanBlockIsFalse flag if that internal slot is true.
The number of registers in a call frame never changes, so we can
allocate it at the end of the CallFrame object and save ourselves the
cost of allocating separate Vector storage for every call frame.
Instead of allocating these in a mixture of ways, we now always put
them on the malloc heap, and keep an intrusive linked list of them
that we can iterate for GC marking purposes.
This required setting things up so that all function objects can plop
a PrimitiveString there instead of an AK string.
This is a step towards making ExecutionContext easier to allocate.
(Instead of MarkedVector<Value>.) This is a step towards not storing
argument lists in MarkedVector<Value> at all. Note that they still end
up in MarkedVectors since that's what ExecutionContext has.
This greatly reduces the number of compilations necessary when functions
declaring local functions are re-executed.
For example, Octane/typescript.js goes from 58080 bytecode executables
to 960.
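A hypothetical illustration of the pattern that benefits: every call to
the outer function creates a fresh function instance for its local
function, but those instances can now share one compiled bytecode
executable instead of each triggering a new compilation:
    function makeAdder(x) {
        // A new `add` function object is created on every call to
        // makeAdder(), but they can all share a single bytecode
        // executable for `add`.
        function add(y) {
            return x + y;
        }
        return add;
    }
    for (let i = 0; i < 1000; ++i)
        makeAdder(i)(1);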
This patch adds two macros to declare per-type allocators:
- JS_DECLARE_ALLOCATOR(TypeName)
- JS_DEFINE_ALLOCATOR(TypeName)
When used, they add a type-specific CellAllocator that the Heap will
delegate allocation requests to.
The result of this is that GC objects of the same type always end up
within the same HeapBlock, drastically reducing the ability to perform
type confusion attacks.
It also improves HeapBlock utilization, since each block now has cells
sized exactly to the type used within that block. (Previously we only
had a handful of block sizes available, and most GC allocations ended
up with a large amount of slack in their tails.)
There is a small performance hit from this, but I'm sure we can make
up for it elsewhere.
Note that the old size-based allocators still exist, and we fall back
to them for any type that doesn't have its own CellAllocator.
Array.length is magical (since it has to reflect the number of elements
in the object's property storage).
We now handle it specially in jitted code, giving us a massive speed-up
on Kraken/ai-astar.js (and probably many other things as well) :^)
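For reference, the behavior that makes it magical: length always
reflects the indexed property storage rather than being an ordinary
stored value:
    const a = [];
    a[99] = "x";           // writing past the end grows the array
    console.log(a.length); // 100: length tracks the indexed storage
    a.length = 10;         // assigning to length truncates the array
    console.log(a[99]);    // undefined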