This patch makes it possible for JS::Object::internal_set() to populate
a CacheablePropertyMetadata, and uses this to implement a basic
monomorphic cache for the most common form of property write access.
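In rough shape, the cache remembers the shape of the object and the slot
offset from the last successful write. A minimal sketch, assuming an
illustrative metadata layout and a hypothetical put_direct() fast-path
helper (not the actual LibJS API):

    // Sketch of a monomorphic property-write cache. Each PutById site
    // keeps one entry: the shape it last saw, and the slot offset that
    // internal_set() reported back through the metadata out-parameter.
    struct CacheablePropertyMetadata {
        Shape const* shape { nullptr }; // shape the cached offset is valid for
        u32 property_offset { 0 };      // slot index within the object's storage
    };

    void put_by_id(Object& object, PropertyKey const& key, Value value,
        CacheablePropertyMetadata& cache)
    {
        // Fast path: same shape as on the last write, so the property
        // still lives in the same slot and we can store directly.
        if (cache.shape == &object.shape()) {
            object.put_direct(cache.property_offset, value); // hypothetical helper
            return;
        }
        // Slow path: full lookup; internal_set() repopulates the cache
        // when the write hit a plain own data property. (Error handling
        // elided for brevity.)
        (void)object.internal_set(key, value, Value { &object }, &cache);
    }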
Now that the x86-specific Assembler will be compiled on every
architecture, we can't rely on void* being the right width.
This also fixes compilation on targets where void* is a different
width than u64 (WASM in particular).
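Concretely, immediates have to be stored as fixed-width integers rather
than pointers. A sketch of the pattern, with an illustrative Operand
shape:

    #include <AK/Types.h>

    // On wasm32, sizeof(void*) == 4 while JS values are 64 bits wide,
    // so a pointer-sized immediate would truncate. Store immediates as
    // u64 and widen pointers explicitly instead of casting via void*.
    struct Operand {
        u64 immediate { 0 }; // always 64 bits, whatever sizeof(void*) is
    };

    static Operand make_immediate(void const* ptr)
    {
        // FlatPtr is pointer-sized; the static_cast widens it to 64 bits
        // on 32-bit targets and is a no-op on 64-bit ones.
        return Operand { static_cast<u64>(reinterpret_cast<FlatPtr>(ptr)) };
    }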
This is in preparation for making LibJIT support multiple architectures.
Assembler is now typedef'ed to the assembler for the specific
target architecture.
Additionally, there's now JIT_ARCH_SUPPORTED which is defined on
architectures which LibJIT supports.
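The selection could look roughly like this (the include path and the
X86_64Assembler name are assumptions for illustration):

    // LibJIT/Assembler.h (sketch): pick the architecture-specific
    // assembler and advertise JIT availability to consumers.
    #include <AK/Platform.h>

    #if ARCH(X86_64)
    #    include <LibJIT/X86_64/Assembler.h>
    #    define JIT_ARCH_SUPPORTED 1
    namespace JIT {
    using Assembler = X86_64Assembler;
    }
    #endif

Consumers can then wrap JIT-only code paths in #ifdef JIT_ARCH_SUPPORTED
so that unsupported architectures build the interpreter alone.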
This makes JS::JIT::Compiler less architecture-specific
and unifies stack alignment into a single operation,
where previously we aligned the stack separately for preserved
registers and for stack arguments.
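As a sketch of the unified computation (the helper name is illustrative;
align_up_to() is AK's):

    #include <AK/StdLibExtras.h>
    #include <AK/Types.h>

    // Sketch: compute one padding amount that keeps rsp 16-byte aligned
    // at the call site, covering preserved registers and stack arguments
    // together instead of padding each group on its own. (ABI details
    // such as the pushed return address are glossed over here.)
    static size_t call_site_padding(size_t saved_register_count, size_t stack_argument_count)
    {
        size_t pushed_bytes = sizeof(u64) * (saved_register_count + stack_argument_count);
        return align_up_to(pushed_bytes, static_cast<size_t>(16)) - pushed_bytes;
    }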
This kills two birds with one stone:
1. It makes sure generated check_exception() calls in the finalizer
don't misread the pending exception as caused by their matching
operation.
2. It implicitly ensures that finally blocks terminated by a return
statement overwrite any pending exceptions, since they will never
execute the ContinuePendingUnwind operation that restores the
stashed exception.
This additional logic is required in the JIT (as opposed to the
interpreter), since the JIT uses the exception register to store and
check the possibly-exceptional results from each individual operation,
while the interpreter only modifies it when an operation has thrown an
exception.
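A minimal sketch of the finalizer-entry codegen, assuming Compiler
helpers like store_vm_register() and the convention that an empty Value
in the exception register means "no exception" (illustrative, not the
actual LibJS code):

    // Clear the exception register when entering a finalizer, so that
    // the finalizer's own check_exception() calls only observe
    // exceptions raised by the finalizer's own operations.
    void Compiler::compile_enter_finalizer()
    {
        // Load the encoded empty Value into a scratch GPR...
        m_assembler.mov(
            Assembler::Operand::Register(GPR0),
            Assembler::Operand::Imm(Value().encoded()));
        // ...and store it into the VM's exception register.
        store_vm_register(Bytecode::Register::exception(), GPR0);
    }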
This reverts commit 0daebef727.
Finally blocks do not unconditionally swallow pending exceptions.
This resolves #21759 and fixes the two remaining failing test-js tests.
If Interpreter::run_and_return_frame is called with a specific entry
point, we now map that to a native instruction address, which the JIT
code jumps to after the function prologue.
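Roughly, with an illustrative offset table and calling convention:

    // Sketch: translate the requested bytecode entry point into a
    // native address and pass it to the generated code; the prologue
    // jumps there when the address is non-zero.
    void NativeExecutable::run(VM& vm, size_t entry_point_bytecode_offset)
    {
        FlatPtr entry_point_address = 0;
        if (entry_point_bytecode_offset != 0) {
            // Hypothetical table built during compilation:
            // bytecode offset -> offset into the generated code.
            auto native_offset = m_bytecode_offset_to_native_offset.get(entry_point_bytecode_offset).value();
            entry_point_address = bit_cast<FlatPtr>(m_code) + native_offset;
        }
        using JITCode = void (*)(VM&, FlatPtr entry_point_address);
        bit_cast<JITCode>(m_code)(vm, entry_point_address);
    }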
The previous implementation was calling `backtrace()` for every
function call, which is quite slow.
Instead, this implementation provides VM::stack_trace() which unwinds
the native stack, maps it through NativeExecutable::get_source_range
and combines it with source ranges from interpreted call frames.
This works by walking a backtrace until the currently executing
native executable is found, and then mapping the native address
to its bytecode instruction.
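Putting it together, a rough sketch of the walk, assuming an
execinfo-style backtrace and illustrative helper names
(get_source_range()'s signature is simplified here):

    #include <execinfo.h>

    // Sketch (not the actual LibJS code): unwind the native stack,
    // find the frame belonging to the currently executing
    // NativeExecutable, and turn its address into a source range.
    Vector<SourceRange> VM::stack_trace() const
    {
        void* addresses[64];
        int count = backtrace(addresses, 64);

        Vector<SourceRange> ranges;
        for (int i = 0; i < count; ++i) {
            auto address = bit_cast<FlatPtr>(addresses[i]);
            auto const* native = currently_executing_native_executable(); // hypothetical
            if (native && native->code_region_contains(address)) {
                // Map native address -> bytecode instruction -> source range.
                ranges.append(native->get_source_range(address));
                break;
            }
        }
        // ...then combine these with source ranges from interpreted
        // call frames to produce the full trace...
        return ranges;
    }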