This kills 2 birds with one stone:
1. It makes sure that generated check_exception() calls in the finalizer
don't misread the pending exception as having been caused by their
matching operation.
2. It implicitly ensures that finally blocks terminated by a return
statement overwrite any pending exception, since they never execute
the ContinuePendingUnwind operation that restores the stashed
exception.
This additional logic is required in the JIT (as opposed to the
interpreter), since the JIT uses the exception register to store and
check the possibly-exceptional results from each individual operation,
while the interpreter only modifies it when an operation has thrown an
exception.
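A rough standalone sketch of the stash/restore dance, with made-up
stand-ins for the exception register and the stash (none of these names
are the real LibJS/JIT ones):

    #include <cstdio>
    #include <optional>
    #include <string>
    #include <utility>

    // Hypothetical stand-ins for the JIT's exception register and the
    // stash used while a finalizer runs.
    static std::optional<std::string> exception_register;
    static std::optional<std::string> stashed_exception;

    // Mirrors the generated check_exception() call emitted after each
    // operation: it must only see exceptions thrown by that operation.
    static bool check_exception()
    {
        return exception_register.has_value();
    }

    static void enter_finalizer()
    {
        // Stash and clear the pending exception so the finalizer's own
        // check_exception() calls don't misattribute it.
        stashed_exception = std::exchange(exception_register, std::nullopt);
    }

    static void continue_pending_unwind()
    {
        // Restores the stashed exception; a finalizer that returns never
        // reaches this point, so the stashed exception is simply dropped.
        exception_register = std::exchange(stashed_exception, std::nullopt);
    }

    int main()
    {
        exception_register = "TypeError"; // a pending exception on entry
        enter_finalizer();
        // ... finalizer body: its check_exception() calls see nothing ...
        printf("inside finalizer, pending? %d\n", check_exception());
        continue_pending_unwind();
        printf("after finalizer, pending? %d\n", check_exception());
    }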
This reverts commit 0daebef727.
Finally blocks do not unconditionally swallow pending exceptions.
This resolves #21759 and fixes the two remaining failing test-js tests.
If Interpreter::run_and_return_frame is called with a specific entry
point, we now map that to a native instruction address, which the JIT
code jumps to after the function prologue.
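Roughly, the lookup amounts to something like the following standalone
sketch (types and fields here are invented for illustration, not the
actual NativeExecutable API):

    #include <cstdint>
    #include <cstdio>
    #include <unordered_map>

    // Hypothetical mapping kept per native executable from the start of
    // each bytecode instruction to its offset in the generated code.
    struct FakeNativeExecutable {
        std::unordered_map<size_t, size_t> bytecode_to_native_offset;
        uintptr_t code_base { 0x1000 };

        // Resolve the requested bytecode entry point to a native address
        // that the function prologue can jump to.
        uintptr_t entry_point_address(size_t bytecode_offset) const
        {
            auto it = bytecode_to_native_offset.find(bytecode_offset);
            if (it == bytecode_to_native_offset.end())
                return code_base; // unknown entry point: start from the top
            return code_base + it->second;
        }
    };

    int main()
    {
        FakeNativeExecutable executable;
        executable.bytecode_to_native_offset = { { 0, 0 }, { 24, 0x80 }, { 56, 0x140 } };
        printf("jump target: %#zx\n", (size_t)executable.entry_point_address(24));
    }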
We were trying to stringify the stack trace without its last element
even when the traceback was empty, leading to a loop bound of
(size_t)(0 - 1) and an out-of-bounds access of m_traceback[0].
Instead, just return an empty string in that case.
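The shape of the fix, as a standalone sketch (Frame and the function
name are made up; only the early return for the empty case matters):

    #include <cstddef>
    #include <cstdio>
    #include <string>
    #include <vector>

    // Hypothetical stand-in for a traceback entry.
    struct Frame {
        std::string function_name;
    };

    // Bail out early instead of letting size() - 1 wrap around to
    // SIZE_MAX when the traceback is empty.
    static std::string stringify_without_last(std::vector<Frame> const& traceback)
    {
        if (traceback.empty())
            return {};
        std::string result;
        for (size_t i = 0; i < traceback.size() - 1; ++i)
            result += traceback[i].function_name + "\n";
        return result;
    }

    int main()
    {
        printf("%s", stringify_without_last({ { "inner" }, { "outer" }, { "top" } }).c_str());
        printf("[%s]\n", stringify_without_last({}).c_str());
    }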
Fixes #21747
The previous implementation was calling `backtrace()` for every
function call, which is quite slow.
Instead, this implementation provides VM::stack_trace() which unwinds
the native stack, maps it through NativeExecutable::get_source_range
and combines it with source ranges from interpreted call frames.
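In spirit, the combination looks something like this standalone sketch
(the frame layout and the range lookup are stand-ins for the real VM
and NativeExecutable::get_source_range):

    #include <cstdint>
    #include <cstdio>
    #include <optional>
    #include <string>
    #include <vector>

    struct SourceRange {
        std::string filename;
        size_t line { 0 };
    };

    // A frame is either interpreted (and already carries a source range)
    // or jitted (and only has a native return address to map back).
    struct CallFrame {
        std::optional<SourceRange> interpreted_range;
        uintptr_t native_return_address { 0 };
    };

    // Stand-in for NativeExecutable::get_source_range().
    static std::optional<SourceRange> source_range_for_address(uintptr_t)
    {
        return SourceRange { "program.js", 42 };
    }

    static std::vector<SourceRange> stack_trace(std::vector<CallFrame> const& frames)
    {
        std::vector<SourceRange> trace;
        for (auto const& frame : frames) {
            if (frame.interpreted_range.has_value())
                trace.push_back(*frame.interpreted_range);
            else if (auto range = source_range_for_address(frame.native_return_address))
                trace.push_back(*range);
        }
        return trace;
    }

    int main()
    {
        std::vector<CallFrame> frames {
            { SourceRange { "caller.js", 7 }, 0 },
            { std::nullopt, 0x4242 },
        };
        for (auto const& range : stack_trace(frames))
            printf("%s:%zu\n", range.filename.c_str(), range.line);
    }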
Previously these handlers duplicated code and used formats that
were different from the one Error.prototype.stack uses.
Now they use the same Error::stack_string function, which accepts
a new parameter for compacting stack traces with repeating frames.
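The compaction itself boils down to collapsing consecutive identical
frames, roughly like this sketch (the function name and parameter are
illustrative, not the real Error::stack_string signature):

    #include <cstdio>
    #include <string>
    #include <vector>

    // Collapse runs of identical frames into a single annotated line.
    static std::string stack_string(std::vector<std::string> const& frames, bool compact)
    {
        std::string result;
        for (size_t i = 0; i < frames.size();) {
            size_t repeats = 1;
            while (compact && i + repeats < frames.size() && frames[i + repeats] == frames[i])
                ++repeats;
            result += "    at " + frames[i];
            if (repeats > 1)
                result += " (" + std::to_string(repeats) + " recursive calls)";
            result += "\n";
            i += repeats;
        }
        return result;
    }

    int main()
    {
        printf("%s", stack_string({ "recurse", "recurse", "recurse", "main" }, true).c_str());
    }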
This patch updates various parts of the script fetching implementation
to match the current specification.
Notably, the implementation of changes to the import assertions /
attributes proposal is not part of this patch (series).
This works by walking a backtrace until the currently executing
native executable is found, and then mapping the native address
to its bytecode instruction.
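As a standalone sketch of that walk and the address-to-bytecode mapping
(the layout and names are assumptions, not the real data structures):

    #include <algorithm>
    #include <cstdint>
    #include <cstdio>
    #include <iterator>
    #include <vector>

    // Hypothetical layout: the generated code spans [code_base, code_base
    // + code_size) and each entry records where a bytecode instruction's
    // native code starts.
    struct Mapping {
        size_t native_offset;
        size_t bytecode_offset;
    };

    struct FakeNativeExecutable {
        uintptr_t code_base { 0x4000 };
        size_t code_size { 0x200 };
        std::vector<Mapping> mappings; // sorted by native_offset

        bool contains(uintptr_t address) const
        {
            return address >= code_base && address < code_base + code_size;
        }

        // Map a native address back to the bytecode instruction it belongs to.
        size_t bytecode_offset_for(uintptr_t address) const
        {
            size_t offset = address - code_base;
            auto it = std::upper_bound(mappings.begin(), mappings.end(), offset,
                [](size_t value, Mapping const& mapping) { return value < mapping.native_offset; });
            if (it == mappings.begin())
                return 0;
            return std::prev(it)->bytecode_offset;
        }
    };

    int main()
    {
        FakeNativeExecutable executable;
        executable.mappings = { { 0x00, 0 }, { 0x40, 8 }, { 0xa0, 16 } };
        std::vector<uintptr_t> backtrace { 0x9999, 0x4050, 0x1234 };
        // Walk the backtrace until an address inside the executable shows up.
        for (auto address : backtrace) {
            if (!executable.contains(address))
                continue;
            printf("bytecode offset: %zu\n", executable.bytecode_offset_for(address));
            break;
        }
    }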
These are then restored upon `ContinuePendingUnwind`.
This stops us from forgetting where we need to jump when we run extra
try-catch blocks inside finally blocks.
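A very loose sketch of the idea, assuming the saved values are kept on
a stack (the names and the exact contents of the saved state are
guesses for illustration):

    #include <cstdio>
    #include <optional>
    #include <string>
    #include <utility>
    #include <vector>

    // Hypothetical saved unwind state: a possibly-pending exception plus
    // the place the finalizer has to continue to afterwards.
    struct SavedUnwind {
        std::optional<std::string> exception;
        int resume_target { -1 };
    };

    // A stack rather than a single slot is what keeps nested try-catch
    // blocks inside a finally from clobbering the outer unwind's target.
    static std::vector<SavedUnwind> saved_unwind_stack;

    static void enter_unwind(std::optional<std::string> exception, int resume_target)
    {
        saved_unwind_stack.push_back({ std::move(exception), resume_target });
    }

    static SavedUnwind continue_pending_unwind()
    {
        auto saved = saved_unwind_stack.back();
        saved_unwind_stack.pop_back();
        return saved;
    }

    int main()
    {
        enter_unwind("RangeError", 3); // outer finally
        enter_unwind(std::nullopt, 7); // inner try-catch inside that finally
        printf("inner resumes at %d\n", continue_pending_unwind().resume_target);
        printf("outer resumes at %d\n", continue_pending_unwind().resume_target);
    }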
Co-Authored-By: Jesús "gsus" Lapastora <cyber.gsuscode@gmail.com>
This is currently only used in the bytecode dump to annotate where
unwinds lead for each block, but will be hooked up to the virtual
machine in
the next commit.