Almost everything in LibJS includes Cell.h, so don't force all of that
code to also include AK/TypeCasts.h and AK/String.h. Instead, include
them where they are actually used and required.
Previously, EnvironmentRecord was a JS::Object. This was done because
GlobalObject inherited from EnvironmentRecord. Now that this is no
longer the case, we can simplify things by making EnvironmentRecord
inherit from Cell directly.
This also removes the need for environment records to have a shape,
which was awkward. The shape itself will be removed in the following patch.
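In sketch form (assuming Cell's usual pure-virtual class_name(); all of
the actual environment machinery is elided):

    #include <LibJS/Runtime/Cell.h>

    namespace JS {

    // Before: class EnvironmentRecord : public Object { ... };
    // After: just a GC cell, with no Shape and no property storage.
    class EnvironmentRecord : public Cell {
    public:
        virtual char const* class_name() const override { return "EnvironmentRecord"; }
    };

    }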
Mark the entirety of a heap block's storage as poisoned at construction.
Unpoison all of a Cell's memory before allocating it, and re-poison as
much as possible on deallocation. Unfortunately, the entirety of the
FreelistEntry must be kept unpoisoned in order for reallocation to work
correctly.
Decreasing the size of FreelistEntry or adding a larger redzone to Cells
would make the instrumentation even better.
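A sketch of what this looks like with the standard ASan interface
macros; the class shape, field names, and sizes here are assumptions,
not the real HeapBlock:

    #include <sanitizer/asan_interface.h>
    #include <new>
    #include <stddef.h>

    struct Cell; // opaque for this sketch

    struct FreelistEntry {
        FreelistEntry* next { nullptr };
    };

    class HeapBlock {
    public:
        explicit HeapBlock(size_t cell_size)
            : m_cell_size(cell_size)
        {
            // Poison the entire storage up front; nothing is a live Cell yet.
            // (Real code would guard the macros behind an ASan feature check.)
            ASAN_POISON_MEMORY_REGION(m_storage, sizeof(m_storage));
        }

        Cell* allocate()
        {
            char* memory = nullptr;
            if (m_freelist) {
                memory = reinterpret_cast<char*>(m_freelist);
                m_freelist = m_freelist->next;
            } else if ((m_next_fresh + 1) * m_cell_size <= sizeof(m_storage)) {
                memory = m_storage + m_next_fresh++ * m_cell_size;
            }
            if (!memory)
                return nullptr;
            // Unpoison all of the cell's memory before handing it out.
            ASAN_UNPOISON_MEMORY_REGION(memory, m_cell_size);
            return reinterpret_cast<Cell*>(memory);
        }

        void deallocate(Cell* cell)
        {
            auto* entry = new (cell) FreelistEntry { m_freelist };
            m_freelist = entry;
            // Re-poison as much as possible: everything except the
            // FreelistEntry itself, which has to stay unpoisoned so the
            // next allocation can read and reuse it.
            ASAN_POISON_MEMORY_REGION(
                reinterpret_cast<char*>(entry) + sizeof(FreelistEntry),
                m_cell_size - sizeof(FreelistEntry));
        }

    private:
        size_t m_cell_size { 0 };
        size_t m_next_fresh { 0 };
        FreelistEntry* m_freelist { nullptr };
        char m_storage[16 * 1024];
    };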
Use this to avoid creating a 16-byte cell allocator on x86_64, where the
size of FreelistEntry is 24 bytes. Every JS::Cell must be at least the
size of a FreelistEntry or things start crashing, so the 16-byte
allocator was wasted on that platform.
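The allocator setup then boils down to something like this fragment
(the names and the size list are assumed):

    // A freed Cell gets turned into a FreelistEntry in place, so every
    // cell size must be able to hold one; skip size classes that are
    // too small on this platform (e.g. 16 bytes on x86_64, where
    // sizeof(FreelistEntry) is 24).
    for (size_t cell_size : { 16, 32, 64, 128, 256, 512, 1024, 3072 }) {
        if (cell_size >= sizeof(FreelistEntry))
            m_allocators.append(make<CellAllocator>(cell_size));
    }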
This is the coarsest-grained ASAN instrumentation possible for the LibJS
heap. Future instrumentation could add redzones to heap block
allocations, and poison the entire heap block, unpoisoning only the used
cells at the CellAllocator level.
The previous VERIFY() call checked that aligned_alloc() didn't return
MAP_FAILED. When out of memory, aligned_alloc() returns a null pointer,
so let's check for that instead.
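In other words, something along these lines (using HeapBlock::block_size
as both size and alignment is an assumption):

    // aligned_alloc() is not mmap(): on failure it returns nullptr,
    // never MAP_FAILED.
    auto* block = static_cast<HeapBlock*>(
        aligned_alloc(HeapBlock::block_size, HeapBlock::block_size));
    VERIFY(block); // was: VERIFY(block != MAP_FAILED)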
This patch adds a BlockAllocator to the GC heap, where we now cache up to
64 HeapBlock-sized mmaps that get recycled when allocating HeapBlocks.
This improves test-js runtime performance by ~35%, pretty cool! :^)
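A minimal sketch of such a cache, assuming POSIX mmap() and an
AK::Vector; the real BlockAllocator may well differ:

    #include <AK/Assertions.h>
    #include <AK/Vector.h>
    #include <sys/mman.h>

    class BlockAllocator {
    public:
        static constexpr size_t max_cached_blocks = 64;

        void* allocate_block()
        {
            // Prefer recycling a previously mmap'ed block.
            if (!m_blocks.is_empty())
                return m_blocks.take_last();
            void* ptr = mmap(nullptr, HeapBlock::block_size, PROT_READ | PROT_WRITE,
                MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
            VERIFY(ptr != MAP_FAILED);
            return ptr;
        }

        void deallocate_block(void* block)
        {
            // Keep up to 64 blocks around for reuse; unmap the rest.
            if (m_blocks.size() >= max_cached_blocks) {
                munmap(block, HeapBlock::block_size);
                return;
            }
            m_blocks.append(block);
        }

    private:
        Vector<void*> m_blocks;
    };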
So far we only have two states: Live and Dead. In the future, we can
add additional states to support incremental sweeping and/or
multi-stage cell destruction.
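i.e. something along these lines (the field layout is assumed):

    class Cell {
    public:
        enum class State : unsigned char {
            Live,
            Dead,
            // Later: more states for incremental sweeping and/or
            // multi-stage destruction.
        };

        State state() const { return m_state; }
        void set_state(State state) { m_state = state; }

    private:
        State m_state { State::Live };
    };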
If we're able to allocate cells from a freelist, we should always
prefer that over the lazy freelist, since this may further defer
faulting in additional memory for the HeapBlock.
Thanks to @gunnarbeutner for pointing this out. :^)
HeapBlock now implements the same lazy freelist as LibC malloc() does,
where new blocks start out in a "bump allocator" mode that gets used
until we've bump-allocated all the way to the end of the block.
Then we fall back to the old freelist style as before.
This means we don't have to pre-initialize the freelist on HeapBlock
construction. This defers page faults and reduces memory usage for
blocks where not all cells get used. :^)
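Reusing the shape from the ASAN sketch above (field names assumed, ASan
calls omitted), the whole allocation path boils down to:

    Cell* HeapBlock::allocate()
    {
        // Recycled cells first: their memory has been touched before,
        // so taking one never faults in a new page.
        if (m_freelist) {
            auto* entry = m_freelist;
            m_freelist = entry->next;
            return reinterpret_cast<Cell*>(entry);
        }
        // Otherwise bump-allocate a fresh, never-touched cell; the
        // block's pages are only faulted in as this index advances.
        if ((m_next_fresh + 1) * m_cell_size <= sizeof(m_storage))
            return reinterpret_cast<Cell*>(m_storage + m_next_fresh++ * m_cell_size);
        return nullptr; // Block is exhausted.
    }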
Absolutely massive allocations of more than 1024 bytes would go into the
largest size class, which was 3172 bytes. 3172 happens to not be 8-byte
aligned (3172 % 8 == 4), which made UBSAN very sad on x86_64. Change the
largest allocator to be 3072 bytes, which is in fact a multiple of 8 :^)
SPDX License Identifiers are a more compact / standardized
way of representing file license information.
See: https://spdx.dev/resources/use/#identifiers
This was done with the `ambr` search and replace tool.
ambr --no-parent-ignore --key-from-file --rep-from-file key.txt rep.txt *
This should allow creating intrusive lists that hold smart pointers,
while remaining zero-cost (compared to the impl before this commit) when
holding raw pointers :^)
As a sidenote, this also adds a `RawPtr<T>` type, which is just
equivalent to `T*`.
Note that this does not actually use such functionality, but is only
expected to pave the way for #6369, to replace NonnullRefPtrVector<T>
with intrusive lists.
As it is with zero-cost things, this makes the interface a bit less nice
by requiring the type name of what an `IntrusiveListNode` holds (and
optionally its container, if not RawPtr), and also requiring the type of
the container (normally `RawPtr`) on the `IntrusiveList` instance.
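If I have the template parameters right, usage looks something like
this (the exact spelling and header locations are assumptions):

    #include <AK/IntrusiveList.h>
    #include <AK/NonnullRefPtr.h>
    #include <AK/RefCounted.h>

    // Raw-pointer flavor: the node's container defaults to RawPtr<T>,
    // and the list stays cost-free.
    struct Entry {
        IntrusiveListNode<Entry> m_node;
    };
    IntrusiveList<Entry, RawPtr<Entry>, &Entry::m_node> entries;

    // Smart-pointer flavor: the node must also name its container.
    struct Thing : public RefCounted<Thing> {
        IntrusiveListNode<Thing, NonnullRefPtr<Thing>> m_node;
    };
    IntrusiveList<Thing, NonnullRefPtr<Thing>, &Thing::m_node> things;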
This commit makes the user-facing StdLibExtras templates and utilities
arguably nicer-looking by removing the need to reach into the
wrapper structs they generate to get the value/type needed.
The C++ standard library had to invent `_v` and `_t` variants (likely
because of backwards compat), but we don't need to cater to any codebase
except our own, so might as well have good things for free. :^)
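Roughly the difference, with RemoveConst/IsSame standing in for the
whole family:

    // Before: reach into the wrapper struct for the result.
    using ValueType = typename RemoveConst<T>::Type;
    static constexpr bool is_same = IsSame<A, B>::value;

    // After: the template itself is the result.
    using ValueType = RemoveConst<T>;
    static constexpr bool is_same = IsSame<A, B>;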
(...and ASSERT_NOT_REACHED => VERIFY_NOT_REACHED)
Since all of these checks are done in release builds as well,
let's rename them to VERIFY to prevent confusion, as everyone is
used to assertions being compiled out in release builds.
We can introduce a new ASSERT macro that is specifically for debug
checks, but I'm doing this wholesale conversion first since we've
accumulated thousands of these already, and it's not immediately
obvious which ones are suitable for ASSERT.
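The mechanical change, with illustrative conditions:

    // Before: reads like a debug-only assertion, but it is compiled
    // into release builds too.
    ASSERT(index < m_size);
    ASSERT_NOT_REACHED();

    // After: the name matches the behavior.
    VERIFY(index < m_size);
    VERIFY_NOT_REACHED();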
Allocate GC heap blocks with mmap(MAP_RANDOMIZED) for ASLR.
This may very well be too aggressive in terms of fragmentation, and we
can figure out ways to scale that back once it becomes a big problem.
For now, this makes the GC heap a lot less predictable for an attacker.
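Presumably the allocation ends up looking something like this
(MAP_RANDOMIZED is a SerenityOS-specific mmap() flag; block sizing and
alignment handling are simplified away):

    #include <sys/mman.h>

    // Ask the kernel for a randomized placement instead of the usual,
    // predictable address-space layout.
    void* memory = mmap(nullptr, HeapBlock::block_size, PROT_READ | PROT_WRITE,
        MAP_ANONYMOUS | MAP_PRIVATE | MAP_RANDOMIZED, -1, 0);
    VERIFY(memory != MAP_FAILED);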