This is taken from the abandoned error stacks proposal, which
already serves as the source of truth for the setter. The getter only
requires the this value to be an object - if it's not an Error object,
it returns undefined.
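For illustration, the getter behavior described above looks roughly
like this from script:

    const { get } = Object.getOwnPropertyDescriptor(Error.prototype, "stack");
    get.call(new Error("oops")); // the error's backtrace string
    get.call({});                // undefined - an object, but not an Error
    get.call(1);                 // throws a TypeError - not an object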
I have not compared this behavior to the non-standard implementations of
the stack property in other engines, but presumably the spec authors
already did that work.
This change gets the Sentry browser SDK working to a point where it can
actually send uncaught exceptions via the API :^)
By using the same NativeFunction constructor as plain ErrorConstructor
and passing the name, TypeError & co. will now include their name in
backtraces and such.
Eventually we should probably rely on [[InitialName]] for this, but for
now that's how it works.
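Based on the description above, the observable effect is roughly this
(exact backtrace formatting is up to the engine):

    const error = new TypeError("oops");
    console.log(error.stack);
    // The frame for the native TypeError constructor is now labeled
    // "TypeError" instead of showing up without a name.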
This is an editorial change in the Intl spec:
7c13db4
This also normalizes the spelling of the "Internal slots" heading in
Intl.Collator, which is another editorial change in the Intl spec:
ec064bd
The steps for creating a DeclarativeEnvironment for each iteration of a
for-loop can be done equivalently to the spec without following the spec
directly. For each binding created in the loop's init expression, we:
1. Create a new binding in the new environment.
2. Grab the current value of the binding in the old environment.
3. Set the value in the new environment to the old value.
This can be replaced by initializing the bindings vector in the new
environment directly with the bindings in the old environment (but only
copying the bindings of the init statement).
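For context, that per-iteration copy is what gives each iteration of a
'let' for-loop its own binding:

    const callbacks = [];
    for (let i = 0; i < 3; ++i)
        callbacks.push(() => i);
    callbacks.map((callback) => callback()); // [0, 1, 2] - one 'i' each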
On the code path where we are setting a TypedArray from another
TypedArray of the same type, we forgo the spec text and simply do a
memmove between the two ArrayBuffers. However, we forgot to apply
source's byte offset on this code path.
This meant if we tried setting a TypedArray from a TypedArray we got
from .subarray(), we would still copy from the start of the subarray's
ArrayBuffer.
This is because .subarray() returns a new TypedArray with the same
ArrayBuffer but the new TypedArray has a smaller length and a byte
offset that the rest of the codebase is responsible for applying.
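A small example of the affected case (both sides are Uint8Array, so we
take the memmove fast path):

    const source = new Uint8Array([1, 2, 3, 4, 5, 6, 7, 8]);
    const target = new Uint8Array(4);
    // .subarray(4) shares source's ArrayBuffer and only records a byte
    // offset, which the fast path previously ignored.
    target.set(source.subarray(4));
    // target is now [5, 6, 7, 8]; before this fix it ended up as
    // [1, 2, 3, 4], copied from the start of the shared ArrayBuffer.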
This affected pako when it was decompressing a zlib stream that had
multiple zlib chunks in it. To read from the second chunk, it would
set the zlib window TypedArray from the .subarray() of the chunk offset
in the stream's TypedArray. This effectively made the decompressed data
from the second chunk a mish-mash of old data that looked completely
scrambled. It would also cause all future decompression using the same
pako Inflate instance to appear scrambled.
As a pako comment aptly puts it:
> Call updatewindow() to create and/or update the window state.
> Note: a memory error from inflate() is non-recoverable.
This allows us to properly decompress the large compressed payloads
that Discord Gateway sends down to the Discord client. For example,
for an account that's only in the Serenity Discord, one of the payloads
is a 20 KB zlib compressed blob that has two chunks in it.
Surprisingly, this is not covered by test262! I imagine this would have
been caught earlier if there was such a test :^)
This is an editorial change in the Temporal spec.
See: 26a4c4f
No behavioral change as we already did this correctly, but I changed
some implicit JS::Value creations to explicit ones.
This is an editorial change in the Temporal spec.
See:
- 7fbdd28
- f666243
- 8c7d066
- 307d108
- d9ca402
In practical terms this means we can now get rid of a couple of awkward
assertion steps that were no-ops anyway, since the types are enforced
by the compiler.
This is an editorial change in the Temporal spec.
See: 983902e
We already had these defined as structs, but now they're properly
defined in the spec (as opposed to the previous anonymous records), and
we don't have to make up our own names anymore :^)
Note that while we usually don't include 'record' in the name, in this
case 'Duration Record' would clash with the Duration object.
Additionally, later editorial changes introduce CreateFooRecord AOs, so
let's just go with FooRecord structs here.
Previously, the variable and lexical environments were already kept in
a NativeFunction call. However, when we (try to) call a private method
from within an async function, we go through async_block_start, which
sets up a NativeFunction to call.
This is technically not exactly as the spec describes it, as that
requires you to actually "continue" the context. Since we don't have
that concept (yet), we use this as an implementation detail to access
the private environment from within a native function.
Note that this does not allow general private environment access, since
most things are blocked by the parser already.
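A minimal case that exercises this path:

    class Foo {
        #greet() { return "hello"; }

        async greetLater() {
            // The async body runs through async_block_start's
            // NativeFunction, which now carries the private environment
            // needed to resolve #greet.
            return this.#greet();
        }
    }
    new Foo().greetLater().then(console.log); // "hello"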
Other engines don't give NaN if there is at least one digit after the
dot for milliseconds, whereas we were much stricter and required
exactly three digits.
But there is real-world usage of different numbers of digits, such as
Discord sending timestamps with three extra trailing zeros.
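For instance, both of these should now parse to a timestamp instead of
NaN; the second has the three extra trailing zeros mentioned above, and
only the exact three-digit form worked before:

    Date.parse("2021-07-01T12:34:56.5Z");      // one digit
    Date.parse("2021-07-01T12:34:56.123000Z"); // six digits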
Similar to the direct getter and setter in DeclarativeEnvironment, there
are cases where we already know the index of a binding and can avoid an
O(n) lookup to re-find that index.
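A rough sketch of the idea (names are made up here, not the actual
LibJS interfaces):

    interface Binding { name: string; value: unknown; }

    class SketchEnvironment {
        private bindings: Binding[] = [];

        // O(n) scan; only needed until the caller has cached the index.
        findBindingIndex(name: string): number {
            return this.bindings.findIndex((binding) => binding.name === name);
        }

        // Direct O(1) accessors for callers that already know the index.
        getBindingByIndex(index: number): unknown {
            return this.bindings[index].value;
        }

        setBindingByIndex(index: number, value: unknown): void {
            this.bindings[index].value = value;
        }
    }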
This reduces the size of the DeclarativeEnvironment from 72 bytes to 48
bytes. This saving helps in the context of nested for-loops which use
'let' to bind the initial variable declarations. In this case, the spec
dictates we create a new environment for each loop iteration by way of
the CreatePerIterationEnvironment AO.
In particular, test262's generated RegExp tests contain many loops of
the form:
for (let i = 0; i < a_number_on_the_order_of_10; ++i)
for (let j = 0; j < a_number_on_the_order_of_10_thousand; ++j)
This results in creating hundreds of thousands of environments.
Constructing the HashMap in DeclarativeEnvironment was by far the most
expensive thing when making JavaScript function calls.
As it turns out, we don't really need this to be a HashMap in the first
place, as lookups are cached (by EnvironmentCoordinate) after the first
access, so after that we were not even looking in the HashMap, going
directly to the bindings Vector instead.
This reduces function_declaration_instantiation() from 16% to 9% when
idling in "Biolab Disaster". It also reduces has_binding() from 3% to
1% on the same content.
With these changes, we now actually get to idle a little bit between
game frames on my machine. :^)
This initial version lays down the basic foundation of IDL overload
resolution, but much of it will have to be replaced with the actual IDL
overload resolution algorithms once we start implementing more complex
IDL overloading scenarios.
Before this, the event loop was spun until the state of the promise was
no longer pending; however, it is possible that a promise has already
been fulfilled/rejected when awaiting it. This could then lead to a
crash below, as it would not pump the event loop in such cases.
Although this change is in LibJS, it really only impacts usage of LibJS
within an EventLoop environment such as LibWeb.
Instead of checking the state of the promise, we now check that success
has a value, which can only happen if either the fulfilled or rejected
closure set up by await is called.