Previously, the variable and lexical environments were already kept in a
NativeFunction call. However, when we (try to) call a private method from
within an async function, we go through async_block_start, which sets up
a NativeFunction to call.
This is technically not exactly as the spec describes it, as that
requires you to actually "continue" the context. Since we don't have
that concept (yet), we use this as an implementation detail to access
the private environment from within a native function.
Note that this does not allow general private environment access, since
most things are blocked by the parser already.
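A minimal sketch of the scenario this enables (names are illustrative):

    class Example {
        #secret() { return 42; }
        async getSecret() {
            // The async body runs through async_block_start's NativeFunction,
            // which now carries the private environment needed to resolve #secret.
            return this.#secret();
        }
    }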
Other engines don't return NaN as long as there is at least one digit
after the dot for milliseconds. We were much stricter and required
exactly three digits.
But there is real-world usage of different digit counts, such as
Discord sending timestamps with three extra trailing zeros.
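For example, both of these now parse instead of returning NaN (the
timestamps are illustrative):

    Date.parse("2022-01-07T11:59:33.1Z");      // one digit of milliseconds
    Date.parse("2022-01-07T11:59:33.123000Z"); // three extra trailing zeros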
When the initialization statement of a for-loop uses 'let', we must
create a new environment for each iteration of the for loop. The
bindings of the initialization statement are copied over to the new
environment. Since the bindings are created in the same order each time,
we can use that order to directly initialize the bindings and avoid any
O(n) lookups in this hot loop.
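This per-iteration environment is what makes each closure below capture
a distinct `i`:

    const callbacks = [];
    for (let i = 0; i < 3; ++i)
        callbacks.push(() => i);
    callbacks.map((callback) => callback()); // [0, 1, 2]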
Similar to the direct getter and setter in DeclarativeEnvironment, there
are cases where we already know the index of a binding and can avoid an
O(n) lookup to re-find that index.
This reduces the size of the DeclarativeEnvironment from 72 bytes to 48
bytes. These savings help in the context of nested for-loops which use
'let' to bind the initial variable declarations. In this case, the spec
dictates we create a new environment for each loop iteration by way of
the CreatePerIterationEnvironment AO.
In particular, test262's generated RegExp tests contain many loops of
the form:
    for (let i = 0; i < a_number_on_the_order_of_10; ++i)
        for (let j = 0; j < a_number_on_the_order_of_10_thousand; ++j)
This results in creating hundreds of thousands of environments.
Constructing the HashMap in DeclarativeEnvironment was by far the most
expensive thing when making JavaScript function calls.
As it turns out, we don't really need this to be a HashMap in the first
place: lookups are cached (by EnvironmentCoordinate) after the first
access, so after that we were not even looking in the HashMap and went
directly to the bindings Vector instead.
This reduces function_declaration_instantiation() from 16% to 9% when
idling in "Biolab Disaster". It also reduces has_binding() from 3% to
1% on the same content.
With these changes, we now actually get to idle a little bit between
game frames on my machine. :^)
This initial version lays down the basic foundation of IDL overload
resolution, but much of it will have to be replaced with the actual IDL
overload resolution algorithms once we start implementing more complex
IDL overloading scenarios.
Before this, the event loop was spun until the state of the promise was
no longer pending. However, it is possible that a promise has already
been fulfilled or rejected by the time we await it. This could then lead
to a crash below, as we would not pump the event loop in such cases.
Although this change is in LibJS, it really only impacts usage of LibJS
within an EventLoop environment, such as LibWeb.
Instead of checking the state of the promise, we now check that success
has a value, which can only happen if either the fulfilled or rejected
closure set up by await is called.
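At the JS level, the previously-crashing situation boils down to
awaiting a promise that has already settled:

    const value = await Promise.resolve(42); // already fulfilled when awaited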
This follows the ECMA402 spec and means String.prototype.localeCompare
will automatically become actually locale-aware once StringCompare is
implemented based on UTS #10.
This commit adds an initial implementation (without any real locale
support) of Collator Compare Functions, as well as the matching
CompareStrings AO. These two are used to implement the ECMA402 versions
of String.prototype.localeCompare() and Intl.Collator.prototype.compare().
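For example (results reflect the current non-locale-aware comparison):

    new Intl.Collator().compare("a", "b"); // negative: "a" sorts before "b"
    "a".localeCompare("b");                // same, now via the ECMA402 path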
Resolves one FIXME where we can now pass a realm, and sets the length
correctly in a bunch of places that previously didn't.
Also reduces the number of "format function name string from arbitrary
PropertyKey" implementations, although two more remain present in the
AST (used with ECMAScriptFunctionObjects, which is a different beast).
Also take a length argument and set the name and length properties
internally, instead of at the call site. Additionally, allow passing a
realm, prototype, and prefix.
We were previously manually initializing them instead of just calling
GlobalObject::initialize_constructor, which aside from duplicating code
also meant we didn't set the required name property.
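In JS-observable terms, these changes are about built-ins consistently
getting the name and length the spec requires, e.g.:

    Array.prototype.push.name;   // "push"
    Array.prototype.push.length; // 1
    Promise.name;                // "Promise"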
The JS behaviour of exponentiation on two number-typed values is
not a simple matter of forwarding to ::pow(double, double). So this
factors out the Math.pow logic to allow it to be shared with
Value::exp.
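For example, the spec mandates NaN in cases where ::pow returns 1:

    1 ** Infinity;    // NaN per spec, but ::pow(1.0, INFINITY) == 1.0
    1 ** NaN;         // NaN per spec, but ::pow(1.0, NAN) == 1.0
    (-1) ** Infinity; // NaN per spec, but ::pow(-1.0, INFINITY) == 1.0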
We have a fair amount of hard-coded keywords / aliases that can now be
replaced with real data from BCP 47. As a result, this also changes the
awkward way we were previously generating keys. Before, we were more or
less generating keywords as a CSV list of keys, e.g. for the "nu" key,
we'd generate "latn,arab,grek" (ordered by locale preference). Then at
runtime, we'd split on the comma. We now just generate spans of keywords
directly.
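For context, these keywords back a locale's Unicode extension keys,
e.g. the "nu" (numbering system) key:

    new Intl.NumberFormat("en-u-nu-arab").format(123); // "١٢٣"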
Parse JSON floating point literals properly, no longer throwing a
SyntaxError when the decimal portion of the number exceeds the capacity
of a u32.
Added tests to AK/TestJSON and LibJS/builtins/JSON/JSON.parse.
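For example (the value is chosen so the decimal digits overflow a u32):

    JSON.parse("0.12345678912345"); // previously a SyntaxError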
The ECMA verbiage for modulus is the mathematical definition implemented
by fmod, so let's just use that rather than trying to reimplement all
the edge cases.
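For instance, fmod already matches the spec's sign rule:

    5.5 % 2; // 1.5
    -5 % 3;  // -2 (the result takes the sign of the dividend)
    5 % -3;  // 2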
The same expression is not allowed to contain both the logical
operators (&& and ||) and the coalescing operator (??).
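For example:

    a && b ?? c;   // SyntaxError
    (a && b) ?? c; // fine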
This patch changes how "forbidden" tokens are handled, using a
finite set instead of a Vector. This supports much more efficient
merging of the forbidden tokens when propagating forward, and
allows returning forbidden tokens to parent contexts.
Before, this was a mix of different strategies, but copy_data_properties
does all of that in a spec way.
This fixes numeric properties in object spreading, and ensures that any
new properties added during spreading are not taken into account.
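Both fixes are observable from JS:

    ({ ...{ 0: "a", 1: "b" } }); // numeric keys are now copied correctly

    const source = {
        get a() {
            source.b = 1; // added while spreading; must not end up in the copy
            return 0;
        },
    };
    ({ ...source }); // { a: 0 }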
A common use case in JS is pushing items into an array in a loop.
A simple test case of 100_000 pushes took around 20 seconds.
This is because any pushed element is, by definition, beyond the
current size of the array. This meant calling grow_storage_if_needed,
which then grew the storage by 25%. But this was done on every single
push, growing the array just a little bigger than its current capacity.
Now we first use the capacity of the array, and only grow if the array
is actually full.
This decreases the time for 100_000 pushes to around 0.35 seconds.
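A minimal version of the measured pattern:

    const array = [];
    for (let i = 0; i < 100_000; ++i)
        array.push(i);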
One remaining problem is that we never shrink the capacity of the
array, but that was already an issue before this change.