This clears the handler pointer of the current unwind context
before jumping to it. This is necessary to avoid looping infinitely
when an exception is thrown from the handler; in that case,
control flow should go to the finalizer instead.
This mirrors how unwind_context.handler_called is used in the
Bytecode::Interpreter.
`try { throw 1 } catch (e) { throw 2 } finally {}` now runs
without looping infinitely in the catch block.
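Roughly the idea, as a standalone sketch with hypothetical names (not the actual LibJS unwind context types):

```cpp
#include <cstdio>
#include <optional>

// Hypothetical, simplified unwind context for illustration only.
struct UnwindContext {
    std::optional<int> handler;   // label of the catch block, if any
    std::optional<int> finalizer; // label of the finally block, if any
};

// Picks the target an exception should unwind to. Clearing `handler` before
// handing it out means that a second exception, thrown from inside the catch
// block itself, falls through to the finalizer instead of re-entering the
// catch block forever.
int unwind_target(UnwindContext& context)
{
    if (context.handler.has_value()) {
        int target = *context.handler;
        context.handler.reset(); // the fix: clear the handler before jumping to it
        return target;
    }
    if (context.finalizer.has_value())
        return *context.finalizer;
    return -1; // no local target; propagate to the caller
}

int main()
{
    UnwindContext context { .handler = 1, .finalizer = 2 };
    std::printf("%d\n", unwind_target(context)); // 1: `throw 1` lands in the catch block
    std::printf("%d\n", unwind_target(context)); // 2: `throw 2` from the catch block lands in finally
}
```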
Instead of emitting the lengthy exception checking/handling routine
at every site, we now emit only a check for the presence of an
exception and a jump to a common exception handler.
This code size optimization saves 2.08MiB on Kraken/ai-astar.js
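A minimal sketch of the shape of the emitted code (hypothetical emitter and labels, not the actual JIT::Compiler API):

```cpp
#include <cstdio>

// Each fallible operation is followed by only two instructions: a test for a
// pending exception and a conditional jump to one shared handler, instead of
// the full inlined handling routine at every site.
static void emit_check_exception()
{
    std::printf("cmp qword ptr [vm_exception], 0\n");
    std::printf("jne common_exception_handler\n");
}

int main()
{
    std::printf("call cxx_some_operation\n");
    emit_check_exception();
    std::printf("call cxx_other_operation\n");
    emit_check_exception();
    // The lengthy handling routine is emitted exactly once, here:
    std::printf("common_exception_handler:\n");
    // (unwinding, storing the exception, dispatching to the handler, ...)
}
```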
The cxx_new_* functions have the exact same signature as the underlying
functions they redirect to, so there's no need for them. Removing them
saves us a couple of opcodes.
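For illustration (hypothetical names), the kind of wrapper being removed; since the signature matches exactly, the native call can simply target the underlying function directly:

```cpp
struct VM { };
struct Value { long encoded; };

// The underlying function the JIT wants to call.
Value new_bigint(VM&, long value) { return Value { value }; }

// Before: a redirecting wrapper with the exact same signature. It adapts
// nothing, so removing it and calling new_bigint() directly saves the extra
// call the trampoline used to cost.
Value cxx_new_bigint(VM& vm, long value)
{
    return new_bigint(vm, value);
}

int main()
{
    VM vm;
    (void)cxx_new_bigint(vm, 123);
    return 0;
}
```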
Instead of pushing and popping every single caller-saved register,
we can optimize code size (and speed!) by only pushing the one register
we actually care about: RDI (since it holds our VM&).
This means that native calls may clobber every other caller-saved
register, so this is something that you have to be aware of when
emitting native calls in the JIT.
This reduces code size on Kraken/ai-astar.js by 553 KiB and makes
execution time ~6% faster as well! :^)
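As a sketch, the sequence around a native call now looks roughly like this (illustrative output, not the actual JIT::Assembler API):

```cpp
#include <cstdio>

// Only RDI, which holds the VM&, is preserved across the call. All other
// caller-saved registers (RAX, RCX, RDX, RSI, R8-R11) may be clobbered by the
// callee, so JIT code emitting native calls must not keep live values there.
static void emit_native_call(char const* callee)
{
    std::printf("push rdi\n");        // keep the VM& alive across the call
    std::printf("call %s\n", callee); // may clobber every other caller-saved register
    std::printf("pop rdi\n");         // restore the VM&
}

int main()
{
    emit_native_call("cxx_some_operation");
}
```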
Instead of JIT::Assembler making the decision for everyone and spilling
every caller-saved register in the ABI onto the stack, we now leave
that decision to users of JIT::Assembler.
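One way such an API could look, sketched with hypothetical names (the real interface is whatever JIT::Assembler's native call helper exposes):

```cpp
#include <cstdio>
#include <vector>

// The caller of native_call() names the registers it needs to survive the
// call; the assembler no longer spills the whole caller-saved set on its own.
enum class Reg { RAX, RCX, RDX, RSI, RDI };

static char const* name(Reg reg)
{
    switch (reg) {
    case Reg::RAX: return "rax";
    case Reg::RCX: return "rcx";
    case Reg::RDX: return "rdx";
    case Reg::RSI: return "rsi";
    case Reg::RDI: return "rdi";
    }
    return "?";
}

static void native_call(char const* callee, std::vector<Reg> const& preserved)
{
    for (auto reg : preserved)
        std::printf("push %s\n", name(reg));
    std::printf("call %s\n", callee);
    for (auto it = preserved.rbegin(); it != preserved.rend(); ++it)
        std::printf("pop %s\n", name(*it));
}

int main()
{
    // This call site only needs the VM& in RDI to survive; nothing else is spilled.
    native_call("cxx_some_operation", { Reg::RDI });
}
```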
Instead of emitting the "restore callee-saved registers and return"
sequence again and again, just emit it once at the end of the generated
code, and have everyone jump to it.
This is a code size optimization that saves 207KiB on Kraken/ai-astar.js
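A sketch of the resulting layout (illustrative register list and emitter, not the actual generated code):

```cpp
#include <cstdio>

// Every return site now emits a single jump to a shared label...
static void emit_return()
{
    std::printf("jmp common_exit\n");
}

// ...and the "restore callee-saved registers and return" epilogue is emitted
// exactly once, at the end of the generated code.
static void emit_common_exit()
{
    std::printf("common_exit:\n");
    std::printf("  pop r15\n");
    std::printf("  pop r14\n");
    std::printf("  pop r13\n");
    std::printf("  pop r12\n");
    std::printf("  pop rbx\n");
    std::printf("  ret\n");
}

int main()
{
    emit_return();
    emit_return();
    emit_common_exit();
}
```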
In the JIT, this restores the bytecode interpreter's original
exception-throwing behaviour for calls.
This also fixes 8 of the 10 failing test-js tests when running with the
JIT enabled.
This replaces the existing sized immediate operands with a unified
immediate operand that leaves the size handling to the assembler,
instead of the user.
This has two benefits:
1. The user doesn't need to know which specific operand size the
instruction expects when using it.
2. The assembler automatically chooses the minimal operand size that
fits the given value, resulting in smaller code size without any
additional effort from the user (see the sketch below).
While the change is small, it still has a noticeable effect on
performance (since it increases the I$ hit rate), resulting in a 5%
speedup on Kraken/ai-astar.js
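A sketch of the second point, with hypothetical types (the real operand lives in the assembler): the user wraps a value in a single Imm operand and the assembler picks the smallest encoding that fits.

```cpp
#include <cstdint>
#include <cstdio>

// Unified immediate operand: just a value, no size baked in by the user.
struct Imm {
    uint64_t value;
};

// The assembler picks the smallest encoding that can represent the value;
// smaller encodings mean smaller code and a better I$ hit rate.
static int encoded_size_in_bytes(Imm imm)
{
    if (imm.value <= 0x7f)
        return 1; // fits an imm8
    if (imm.value <= 0x7fffffff)
        return 4; // fits an imm32
    return 8;     // needs a full imm64
}

int main()
{
    std::printf("%d\n", encoded_size_in_bytes(Imm { 5 }));          // 1
    std::printf("%d\n", encoded_size_in_bytes(Imm { 100000 }));     // 4
    std::printf("%d\n", encoded_size_in_bytes(Imm { 1ull << 40 })); // 8
}
```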