The name-to-section lookup table was only used in a handful of places,
and none of them were calling it nearly enough to justify building
a cache for it in the first place. So let's get rid of it and reduce
startup time by a little bit. :^)
It's a lot faster to iterate the GNU hash tables if we don't compute
the length of every candidate symbol name up front, only to reject most
of them after comparing the first character anyway. :^)
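A minimal sketch of the idea (the names here are illustrative, not the
actual loader code): compare the raw C strings directly and bail out on
the first mismatching byte, instead of wrapping each candidate name in
a length-computing StringView first.

    // Sketch: most candidates differ in the first byte, so we never
    // touch the rest of either string, let alone call strlen().
    static bool symbol_name_matches(char const* candidate, char const* wanted)
    {
        while (*candidate && *candidate == *wanted) {
            ++candidate;
            ++wanted;
        }
        return *candidate == *wanted;
    }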
When performing a global symbol lookup, we were recomputing the symbol
hashes once for every dynamic object searched. The hash function sat
at the very top (15%) of a program startup profile.
With this change, the hash function is no longer visible among the top
stacks in the profile. :^)
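The fix, roughly (a sketch with illustrative names like Symbol and
s_global_objects; the hash function itself is the standard GNU hash):
compute the hash once per lookup and hand it to every object searched.

    #include <AK/Optional.h>
    #include <stdint.h>

    // The standard GNU hash function used by DT_GNU_HASH tables.
    static uint32_t gnu_hash(char const* name)
    {
        uint32_t hash = 5381;
        for (; *name; ++name)
            hash = hash * 33 + (uint8_t)*name;
        return hash;
    }

    // Sketch: hash once, then search each object with the cached value.
    Optional<Symbol> lookup_global_symbol(char const* name)
    {
        auto hash = gnu_hash(name);
        for (auto& object : s_global_objects) {
            auto symbol = object.lookup_symbol(name, hash);
            if (symbol.has_value())
                return symbol;
        }
        return {};
    }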
This logging mode was unusable anyway, since it spams way too much.
The dynamic loader is in a pretty good place now, so I think
it's okay for us to drop some of the bring-up debug logging. :^)
Also, we have to be careful with dbgln_if(FOO_DEBUG, "{}", foo())
where foo() is something expensive, since the arguments may be
evaluated even when !FOO_DEBUG.
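For example (a sketch; FOO_DEBUG stands in for any of the debug flags,
and expensive_dump() is a hypothetical costly call):

    // The arguments are evaluated before dbgln_if() ever looks at the
    // flag, so this pays for expensive_dump() even when FOO_DEBUG is 0:
    dbgln_if(FOO_DEBUG, "state: {}", expensive_dump());

    // Guarding the whole statement avoids the cost entirely:
    if constexpr (FOO_DEBUG)
        dbgln("state: {}", expensive_dump());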
Let's use a stronger type than void* for this since we're talking
specifically about a virtual address and not necessarily a pointer
to something actually in memory (yet).
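Something along these lines (a simplified sketch of such a wrapper;
Serenity's actual VirtualAddress class has more to it, and FlatPtr is
its pointer-sized integer type):

    class VirtualAddress {
    public:
        VirtualAddress() = default;
        explicit VirtualAddress(FlatPtr address)
            : m_address(address)
        {
        }

        bool is_null() const { return m_address == 0; }
        FlatPtr get() const { return m_address; }
        void* as_ptr() const { return reinterpret_cast<void*>(m_address); }
        VirtualAddress offset(FlatPtr o) const { return VirtualAddress(m_address + o); }

    private:
        FlatPtr m_address { 0 };
    };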
It was very confusing how these functions used the "undefined" state
of Symbol to signal lookup failure. Let's use Optional<T> to make things
a bit more understandable.
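Roughly (a sketch; the signature details are illustrative):

    // Before: returned a Symbol whose is_undefined() you had to
    // remember to check. After: failure can't be ignored at the call site.
    Optional<Symbol> lookup_symbol(StringView name) const;

    auto symbol = object.lookup_symbol("printf");
    if (!symbol.has_value())
        return {}; // not found; propagate the empty Optional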
Tweak the PLT trampoline to avoid generating textrels in LibC.
This allows us to share all the LibC mappings, reducing per-process
memory consumption by ~200 KB. :^)
Patch originally by @nico.
To support this, I had to reorganize the "load_elf" function into two
passes. First we map all the dynamic objects, to get their symbols
into the global lookup table. Then we link all the dynamic objects.
So many read-only GOT's! :^)
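The two-pass structure, sketched (names are illustrative):

    // Pass 1: map every object, so every symbol is visible in the
    // global lookup table before any relocation runs.
    for (auto& loader : loaders)
        loader.map();

    // Pass 2: with all symbols resolvable, perform the relocations.
    for (auto& loader : loaders)
        loader.link();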
The dynamic loader will now mark RELRO segments read-only after
performing relocations. This is pretty cool!
Note that this only applies to main executables so far;
RELRO support for shared libraries will require some reorganization
of the dynamic loader.
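The mechanism itself is a single mprotect() over the PT_GNU_RELRO
range once relocations are done; a sketch (variable names are
illustrative, and the range is assumed already page-aligned):

    #include <sys/mman.h>

    // After the last relocation has been applied, revoke write access.
    if (mprotect(relro_region_base, relro_region_size, PROT_READ) < 0)
        perror("mprotect(PT_GNU_RELRO)");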
For a data segment that starts at a non-zero offset into a 4KB page and
crosses a 4KB page boundary, we were failing to pad the VM allocation,
which would cause the memcpy() to fail.
Make sure we round segment bases down and segment ends up to page
boundaries, and the issue goes away.
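In other words (a sketch of the arithmetic; FlatPtr is Serenity's
pointer-sized integer type):

    // Round the base down and the end up to 4KB boundaries, so a
    // segment that starts mid-page and crosses into the next page
    // is fully covered by the allocation.
    FlatPtr base = segment_address & ~(FlatPtr)(PAGE_SIZE - 1);
    FlatPtr end = (segment_address + segment_size + PAGE_SIZE - 1) & ~(FlatPtr)(PAGE_SIZE - 1);
    size_t allocation_size = end - base;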
This achieves two things:
- Programs can now intentionally perform arbitrary syscalls by calling
syscall(). This allows us to work on things like syscall fuzzing;
see the sketch after this list.
- It restricts the ability of userspace to make syscalls to a single
4KB page of code. In order to call the kernel directly, an attacker
must now locate this page and call through it.
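A sketch of what the first point enables (this assumes LibC's
<syscall.h> wrapper and the SC_getpid constant; any syscall number
would do for a fuzzer):

    #include <stdio.h>
    #include <syscall.h>

    int main()
    {
        // Drive the kernel directly; SC_getpid is just a benign example
        // of an "arbitrary" syscall a fuzzer could substitute.
        int pid = syscall(SC_getpid);
        printf("getpid via syscall(): %d\n", pid);
        return 0;
    }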
Using the text segment for the VM reservation ran into trouble when
there was a discrepancy between the p_filesz and p_memsz.
Simplify this mechanism and avoid trouble by making the reservation
as a MAP_PRIVATE | MAP_NORESERVE throwaway mapping instead.
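The reservation now looks roughly like this (a sketch; the size
variable is illustrative):

    #include <sys/mman.h>

    // A throwaway reservation covering the whole object. The real
    // segments are mapped over it with MAP_FIXED afterwards.
    void* reservation = mmap(nullptr, total_mapping_size, PROT_NONE,
        MAP_ANONYMOUS | MAP_PRIVATE | MAP_NORESERVE, -1, 0);
    if (reservation == MAP_FAILED)
        perror("mmap reservation");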
Fixes #5225.
Also, before calling the main program entry function, inform the kernel
that no more syscall regions can be registered.
This effectively bans syscalls from everywhere except LibC and
LibPthread. Pretty neat! :^)
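The "no more regions" handoff is a single call right before jumping to
the entry point; sketched (the msyscall mechanism is Serenity-specific,
and the exact invocation here is illustrative):

    // Tell the kernel the set of syscall-capable regions is now final.
    // After this, not even the dynamic loader can add another region.
    syscall(SC_msyscall, nullptr);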
load_from_image() becomes map() and link(). This allows us to map
an object before mapping its dependencies.
This solves an issue where fixed-position executables (like GCC)
would clash with the ASLR placement of their own shared libraries.
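With map() and link() separated, the loader can claim that fixed range
first; a sketch (illustrative names):

    // Map the main object first: if it's a fixed-position executable,
    // this claims its address range before ASLR places any dependency.
    main_loader.map();
    for (auto& dependency : dependencies)
        dependency.map();

    // Only then resolve symbols and relocate everything.
    main_loader.link();
    for (auto& dependency : dependencies)
        dependency.link();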
Validation was happening in two steps: some in the constructor, and
some later on, in load_from_image().
This made no sense, so just move all the validation into the constructor.
Refactor DynamicLoader construction with a try_create() helper so that
we can call mmap() before making a loader. This way the loader doesn't
need to have an "mmap failed" state.
This patch also takes care of determining the ELF file size in
try_create() instead of expecting callers to provide it.
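The shape of the helper, sketched (return type and allocation details
are illustrative, not the exact Serenity API):

    RefPtr<DynamicLoader> DynamicLoader::try_create(int fd, String filename)
    {
        // Determine the file size here instead of making callers pass it in.
        struct stat st;
        if (fstat(fd, &st) < 0)
            return {};

        // If the mmap() fails, no loader object ever comes to exist.
        void* data = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (data == MAP_FAILED)
            return {};

        return adopt_ref(*new DynamicLoader(fd, move(filename), data, st.st_size));
    }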
Section names are referred to by offset and length. We do not check
(and probably should not check) whether these names overlap in any way.
This opened the door to many sections (in this example: about 2700)
forcing ELF::Image::m_sections to contain endless copies of the same
huge string (in this case: 882K).
Fix this by loading only the first PAGE_SIZE bytes of each name.
Since section names are only relevant for relocations and debug
information and most section names are hard-coded (and far below 4096
bytes) anyway, this should be no restriction at all for 'normal'
executables.
Found by OSS-Fuzz:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=29187
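The clamp is a one-liner at the point where the name is read; sketched
(variable names are illustrative):

    #include <string.h>

    // Read at most PAGE_SIZE bytes of the name. Hard-coded section
    // names are far shorter than 4096 bytes, so this never truncates
    // a legitimate one.
    size_t name_length = strnlen(name_start, PAGE_SIZE);
    StringView section_name { name_start, name_length };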
Previously, regions were stored in a vector, and pointers to regions
in that vector were taken and stored. The problem is that the vector
kept being appended to after those pointers were taken: with enough
regions present, the vector would eventually need a resize, which
moved its storage and left the previously taken pointers dangling
into memory we had just freed.
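The failure mode in miniature (Region and make_region() are stand-ins):

    Vector<Region> regions;
    regions.append(make_region(0));
    Region* pointer = &regions.last(); // fine, for now...

    for (int i = 1; i < 10000; ++i)
        regions.append(make_region(i)); // ...until an append forces a resize

    // The Vector's storage has moved; `pointer` now points into freed memory.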
Fixes #5160
To support upcoming W^X changes in the kernel, the dynamic loader needs
to be careful about the order in which permissions are added to shared
library text segments.
We now start by mapping text segments read-only (no-write, no-exec).
If relocations are needed, we make them writable, and then finally,
for all text segments, we finish by making them read+exec.
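The ordering, as a sketch for one text segment (illustrative names):
the segment is never writable and executable at the same time.

    #include <sys/mman.h>

    // 1. Map read-only: no write, no exec.
    void* text = mmap(nullptr, text_size, PROT_READ, MAP_PRIVATE, fd, text_offset);

    // 2. Writable (still not executable) only while relocating.
    if (needs_text_relocations) {
        mprotect(text, text_size, PROT_READ | PROT_WRITE);
        apply_text_relocations(text);
    }

    // 3. Finally, read+exec for good.
    mprotect(text, text_size, PROT_READ | PROT_EXEC);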
Use mmap() with the new MAP_RANDOMIZED flag to load shared libraries at
random addresses in each process.
To avoid address space collisions, we start by doing a large chunk mmap
that covers enough VM for both text and data, then we unmap and remap
the data segment separately, once we know everything will fit.
This is pretty cool! :^)
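Sketch of the placement dance (MAP_RANDOMIZED is a Serenity-specific
mmap() flag; the size variables are illustrative):

    #include <sys/mman.h>

    // One randomized reservation big enough for text + data together...
    void* chunk = mmap(nullptr, text_size + data_size, PROT_NONE,
        MAP_ANONYMOUS | MAP_PRIVATE | MAP_RANDOMIZED, -1, 0);

    // ...then give the data part back and remap it properly, now that
    // we know the whole library fits at this address.
    void* data_start = (char*)chunk + text_size;
    munmap(data_start, data_size);
    mmap(data_start, data_size, PROT_READ | PROT_WRITE,
        MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);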