We are currently allocating in Set's constructor to create the set's
underlying Map. This can cause GC to occur before the member is actually
initialized, at which point we crash in Set::visit_edges trying to
visit a member that does not exist.
Instead, create the Map in Set::initialize, where we can allocate. Also
change Map to be stored as a normal JS heap-allocated object, rather
than as a stack variable.
Although this is not quite what the spec says, web reality is that a
CallExpression as the lhs target of an assignment should not give a
SyntaxError, only a ReferenceError once the assignment is executed.
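A minimal illustration of that behaviour (the function names here are
made up for the example):

```js
function foo() {}

// Parses without complaint; the error only appears once the assignment
// actually runs, and it is a ReferenceError, not a SyntaxError.
try {
    foo() = 1;
} catch (e) {
    console.log(e instanceof ReferenceError); // true
}

// Never executed, so this must not raise any error at all.
function neverCalled() {
    foo() = 1;
}
```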
This state was only used by the parser to report an error with the
appropriate location. Removing it shrinks ObjectExpression from 120
bytes down to just 56, which saves roughly 2.5 MiB when loading
Twitter.
Using source offsets directly means we don't have to synthesize any
SourceRanges, and it sidesteps the fact that SourceRange returns
invalid values for the last range of a file.
This issue could be triggered by evaluating a module twice. If the
module failed during evaluation, its cycle root was never set even
though its status was already "evaluated". This then failed in step 3
of Evaluate, which assumes the module has a cycle root. This is a spec
issue, see: https://github.com/tc39/ecma262/issues/2823
To fix this, we check whether the module itself has a top-level
capability before going to the cycle root.
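A sketch of the kind of input that hits this, assuming a hypothetical
module ./fails.mjs whose top-level code throws:

```js
// ./fails.mjs (hypothetical):
//   throw new Error("evaluation failed");

// Evaluating the same failing module twice should reject both times
// with the original error instead of tripping over the missing cycle
// root.
import("./fails.mjs").catch((e) => console.log("first:", e.message));
import("./fails.mjs").catch((e) => console.log("second:", e.message));
```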
Co-authored-by: Jamie Mansfield <jmansfield@cadixdev.org>
The bytecode interpreter can execute generator functions while the AST
interpreter cannot. This simply creates a new bytecode interpreter, if
one doesn't already exist, when a generator function is executed.
Doing so automatically switches to the bytecode interpreter to execute
any future code until it exits the generator.
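For example, ordinary generator code like the following now runs even
when the rest of the script is handled by the AST interpreter; the
generator body is executed by the newly created bytecode interpreter:

```js
function* counter() {
    yield 1;
    yield 2;
}

// Iterating the generator is what triggers the switch to bytecode.
console.log([...counter()]); // [1, 2]
```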
Previously, throw and return completions would not be executed inside
the generator. This is incorrect, as throw and return need to perform
unwinds which can potentially execute more code inside the generator,
such as finally blocks.
This is done by also passing the completion type alongside the
passed-in value. The continuation block will immediately extract the
type and value and perform the appropriate operation for the given
type.
For normal completions, this is continuing as normal.
For throw completions, it will perform `throw <value>`.
For return completions, it will perform `return <value>`, which is a
`Yield return` in this case due to being inside a generator.
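All three completion types are observable from script; a minimal
sketch:

```js
function* g() {
    try {
        const received = yield 1; // value passed to next() lands here
        console.log("normal:", received);
    } finally {
        console.log("finally runs");
    }
}

// Normal completion: the value given to next() becomes the result of
// `yield 1`.
let it = g();
it.next();
it.next("hello"); // logs "normal: hello", then "finally runs"

// Return completion: the finally block executes before the generator
// reports { value: 42, done: true }.
it = g();
it.next();
it.return(42); // logs "finally runs"

// Throw completion: `throw <value>` happens at the suspended yield, so
// the finally block runs and the error propagates back to the caller.
it = g();
it.next();
try {
    it.throw(new Error("boom")); // logs "finally runs"
} catch (e) {
    console.log("caught:", e.message); // "caught: boom"
}
```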
This also refactors GeneratorObject to properly send the completion
type and value across to the generator instead of trying to operate on
the completions itself.
This is a prerequisite for yield*, as it performs special iterator
operations when receiving a throw/return completion and does not
complete the generator like the regular yield would.
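For reference, this is the yield* behaviour referred to above: a throw
completion is forwarded to the inner iterator, and if that iterator
handles it, the outer generator keeps going instead of completing:

```js
function* inner() {
    try {
        yield 1;
    } catch (e) {
        yield "inner caught: " + e.message;
    }
}

function* outer() {
    yield* inner(); // forwards throw/return to the inner iterator
    yield "outer continues";
}

const it = outer();
console.log(it.next());                   // { value: 1, done: false }
console.log(it.throw(new Error("boom"))); // { value: "inner caught: boom", done: false }
console.log(it.next());                   // { value: "outer continues", done: false }
```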
There's still more work to be done to bring GeneratorObject::execute
closer to the spec. It's mostly a restructuring of the existing
GeneratorObject::next_impl.
Unwind contexts need to be preserved as we exit and re-enter a
generator.
For example, the code below would previously crash when returning from
the try statement after yielding: we lost the unwind context when
yielding, but still had a LeaveUnwindContext instruction emitted by
`perform_needed_unwinds` when generating the return statement.
```js
function* a() {
    try {
        return (yield 1);
    } catch {}
}
const iter = a();
iter.next();
iter.next();
```
For performance, it is desirable to defer evaluation of intrinsics that
are stored on the GlobalObject for every created Realm. To support this,
Object now maintains a global storage map to store lambdas that will
return the associated intrinsic when evaluated. Once accessed, the
intrinsic is moved from this global map to normal Object storage.
To prevent this flow from becoming observable, when a deferred intrinsic
is stored, we still place an empty object in the normal Object storage.
This is so we still create the metadata for the object, and in doing so,
can preserve the insertion order of the Object storage. Otherwise,
this would be observable by way of Object.getOwnPropertyDescriptors.
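The deferral itself can be pictured with a plain JS analogy (this is
not the engine's C++ mechanism, and the names below are made up): a
lazy accessor that builds the value on first access and then replaces
itself with a data property. Unlike the engine's approach, the pending
getter here would be visible through the property descriptor.

```js
function defineLazyIntrinsic(object, name, create) {
    Object.defineProperty(object, name, {
        configurable: true,
        get() {
            const value = create();
            // Replace the accessor with a plain data property so later
            // lookups don't pay for the getter again.
            Object.defineProperty(object, name, {
                value,
                writable: true,
                configurable: true,
            });
            return value;
        },
    });
}

defineLazyIntrinsic(globalThis, "myIntrinsic", () => {
    console.log("constructed on first access");
    return {};
});

globalThis.myIntrinsic; // logs "constructed on first access"
globalThis.myIntrinsic; // already materialized, nothing logged
```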
This changes Intrinsics to not initialize most of its constructors and
prototypes right away. We still initialize a few that are needed before
some others are created, though we may eventually be able to "link"
dependencies at compile time to avoid this.
And make it capable of printing to any Core::Stream.
This is useful on its own and can be used in a number of places, so move
it out and make it available as JS::print().
Turns out we have to be a little more lenient when dealing with the
slightly hostile inputs in test262-parser-tests.
This fix papers over some problematic situations by returning a fallback
range if we can't figure out exactly where an error occurred.
I've added a FIXME about returning nicer values.
Generating Error objects got a lot slower after the introduction of
SourceCode in b0b022507b.
This was noticeable with `test-js`, which generates a lot of errors,
so walking the source code over and over to compute (line, column)
was eating a ton of time.
This patch makes repeated lookups a lot faster by building a cache
of line break offsets in the source code. The cache is built on first
offset lookup, so we only pay for this in code that actually throws.
On my machine, this takes `test-js` runtime from 6.7 sec to 4.3 sec.
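The idea, sketched in JS rather than the engine's C++ (the helper name
is made up): collect the offsets of all line breaks once, then answer
every offset-to-(line, column) query with a binary search over them.

```js
function makeLineColumnLookup(source) {
    // Built once per source; this is the cache.
    const lineBreaks = [];
    for (let i = 0; i < source.length; ++i) {
        if (source[i] === "\n")
            lineBreaks.push(i);
    }
    return (offset) => {
        // Count the line breaks strictly before `offset` (lower bound).
        let low = 0;
        let high = lineBreaks.length;
        while (low < high) {
            const mid = (low + high) >> 1;
            if (lineBreaks[mid] < offset)
                low = mid + 1;
            else
                high = mid;
        }
        const line = low + 1; // 1-based
        const column = low === 0 ? offset + 1 : offset - lineBreaks[low - 1];
        return { line, column };
    };
}

const lookup = makeLineColumnLookup("let a = 1;\nlet b = 2;\nthrow new Error();\n");
console.log(lookup(0));  // { line: 1, column: 1 }
console.log(lookup(15)); // { line: 2, column: 5 }
```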