About half of the Processor code is common across architectures, so
let's share it with a templated base class. Other code that can be
shared with small adjustments, like the FPUState and TrapFrame
functions, is adjusted here as well. Functions which cannot be shared
trivially (without internal refactoring) are left alone for now.
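A minimal sketch of the shape this takes, assuming a CRTP-style base class (the names and members here are illustrative, not the exact SerenityOS declarations):

    #include <AK/Types.h>

    // Architecture-independent logic lives in the templated base class; each
    // architecture's Processor supplies the pieces that differ.
    template<typename ProcessorT>
    class ProcessorBase {
    public:
        // Shared bookkeeping, identical on x86_64 and aarch64.
        bool in_critical() const { return m_in_critical > 0; }
        void enter_critical() { ++m_in_critical; }

        void leave_critical()
        {
            if (--m_in_critical == 0)
                static_cast<ProcessorT&>(*this).check_invoke_scheduler();
        }

    private:
        u32 m_in_critical { 0 };
    };

    // The per-architecture class derives from the shared base and implements
    // what cannot be shared trivially.
    class Processor final : public ProcessorBase<Processor> {
    public:
        void check_invoke_scheduler();
    };
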
Stack pointer alignment is enforced by the hardware, and an exception is
generated when the stack pointer is not properly aligned. This brings us
closer to booting the aarch64 kernel on bare metal.
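For context, AArch64 raises an SP alignment fault when the stack pointer is used as a base register while not 16-byte aligned; a tiny sketch of the constraint (the helper name is made up for illustration):

    #include <AK/Types.h>

    // Hypothetical helper: round a stack pointer down to the 16-byte boundary
    // the hardware alignment check expects.
    constexpr FlatPtr aligned_stack_pointer(FlatPtr stack_pointer)
    {
        return stack_pointer & ~FlatPtr { 0xf };
    }

    static_assert(aligned_stack_pointer(0x1008) == 0x1000);
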
This is the only kernel issue blocking us from running the test suite.
Having userspace backtraces printed to the debug console during crashes
isn't vital to the system's function, so let's just return an empty
trace and print a FIXME instead of crashing.
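Roughly like this, assuming the aarch64 stub mirrors the x86_64 signature (sketch only; the real prototype may differ):

    // Sketch of the stub: log a FIXME and hand back an empty trace so the
    // crash path can keep going instead of panicking in the kernel.
    ErrorOr<Vector<FlatPtr, 32>> Processor::capture_stack_trace(Thread&, size_t)
    {
        dbgln("FIXME: Implement Processor::capture_stack_trace() for aarch64");
        return Vector<FlatPtr, 32> {};
    }
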
Setting the page table base register (ttbr0_el1) is not enough, as it
will not flush the TLB caches, in contrast to x86_64, where setting the
CR3 register does flush the caches. This commit adds the necessary code
to properly flush the TLB caches when context switching. It also changes
Processor::flush_tlb_local to use the vmalle1 variant, as previously we
would be flushing the TLBs of all the cores in the inner-shareable
domain.
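Roughly, the local flush looks like this (a sketch, not the exact code); tlbi vmalle1 invalidates only this core's stage-1 EL1 entries, while the previously used inner-shareable variant (vmalle1is) broadcasts the invalidation to every core in the domain:

    // Sketch: invalidate this core's TLB entries after a local mapping change.
    static inline void flush_tlb_local()
    {
        asm volatile(
            "dsb ishst \n" // make earlier page-table writes visible
            "tlbi vmalle1 \n" // invalidate all stage-1 EL1 entries, this core only
            "dsb ish \n" // wait for the invalidation to complete
            "isb \n" // resynchronize the instruction stream
            ::: "memory");
    }
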
The details of the specific interrupt bits that must be turned on are
irrelevant to the sys$execve implementation. Abstract them away into the
Processor implementations using the InterruptsState enum.
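A sketch of the abstraction with illustrative names (the real accessors live on Processor and may be named differently):

    // Architecture-neutral description of whether interrupts are enabled.
    enum class InterruptsState {
        Enabled,
        Disabled,
    };

    // Hypothetical hooks each architecture implements: reading and writing
    // RFLAGS.IF on x86_64, or the DAIF.I bit on aarch64.
    InterruptsState current_interrupts_state();
    void restore_interrupts_state(InterruptsState);

    // A caller like sys$execve then only saves and restores the abstract state:
    void example_caller()
    {
        auto previous_interrupts_state = current_interrupts_state();
        // ... work that may mask and unmask interrupts ...
        restore_interrupts_state(previous_interrupts_state);
    }
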
Forked processes already have an existing value for the link register,
which we can't overwrite. But since they're forked, the original link
register value that points to exit_kernel_thread was already saved
somewhere on their stack, so it's OK not to set it.
This commit adds Processor::set_thread_specific_data, which factors out
the architecture-specific way of setting the thread-specific data. The
function is implemented for aarch64 and x86_64, and the call sites are
changed to use it instead.
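On aarch64 this boils down to writing TPIDR_EL0, the register userspace reads for TLS; a sketch (the argument type and the x86_64 path shown in the comment are illustrative):

    // aarch64: point the userspace thread-pointer register at the TLS block.
    void Processor::set_thread_specific_data(VirtualAddress thread_specific_data)
    {
        asm volatile("msr tpidr_el0, %0" ::"r"(thread_specific_data.get()));
    }

    // x86_64: the equivalent is writing the FS base via its MSR, e.g.
    //     wrmsr(MSR_FS_BASE, thread_specific_data.get());
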
This sets up the correct ThreadRegisters state when a process is
exec'ed, which happens when the first userspace application is executed.
It also changes Processor.cpp to get the stack pointer from the
ThreadRegisters.
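A sketch of the state involved on aarch64, with illustrative field and helper names:

    // Hypothetical helper showing what an exec'ed thread's registers need.
    void setup_userspace_registers(ThreadRegisters& regs, FlatPtr entry_ip, FlatPtr userspace_sp)
    {
        regs.elr_el1 = entry_ip;    // eret resumes execution here, in EL0
        regs.sp_el0 = userspace_sp; // the userspace stack pointer
        regs.spsr_el1 = 0;          // M[3:0] = 0b0000 selects EL0t, interrupts unmasked
    }
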
The code in PageDirectory.cpp now keeps track of the registered page
directories, and actually sets the TTBR0_EL1 to the page table base of
the currently executing thread. When context switching, we now also
change the TTBR0_EL1 to the page table base of the thread that we
context switch into.
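A sketch of what switching the address space looks like (illustrative; the bookkeeping of registered page directories is omitted):

    // Point translation at the new page tables; the write itself does not
    // invalidate old translations, so flush explicitly (see the
    // flush_tlb_local() sketch above).
    static void activate_page_directory(FlatPtr page_table_base)
    {
        asm volatile(
            "msr ttbr0_el1, %0 \n"
            "isb \n" ::"r"(page_table_base));
        flush_tlb_local();
    }
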
Various places in the kernel were manually checking the cs register on
x86_64; to share this with aarch64, a function is added to RegisterState
and the call sites are updated. While we're here, the PreviousMode enum
is renamed to ExecutionMode.
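A sketch of the shared accessor (names are illustrative; on aarch64 the information comes from SPSR_EL1 rather than cs):

    enum class ExecutionMode {
        Kernel,
        User,
    };

    // Call sites ask the RegisterState instead of poking at cs directly.
    ExecutionMode RegisterState::previous_mode() const
    {
    #if ARCH(X86_64)
        // The privilege level lives in the low two bits of cs; non-zero is userspace.
        return (cs & 3) != 0 ? ExecutionMode::User : ExecutionMode::Kernel;
    #elif ARCH(AARCH64)
        // SPSR_EL1.M[3:0] == 0b0000 means the exception was taken from EL0.
        return (spsr_el1 & 0xf) == 0 ? ExecutionMode::User : ExecutionMode::Kernel;
    #endif
    }
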
Until now the kernel was always executing with SP_EL0, as this made the
initial dropping to EL1 a bit easier. This commit changes this behaviour
to use the corresponding SP_ELx for each exception level.
To make sure that the execution of the C++ code can continue, the
current stack pointer is copied into the corresponding SP_ELx just
before dropping an exception level.
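The copy is just a couple of instructions right before the eret that lowers the exception level; a sketch, assuming we drop from EL2 to EL1 with SPSR_EL2.M selecting EL1h so that SP_EL1 becomes the active stack pointer:

    // Mirror the current stack pointer into SP_EL1 so C++ execution can
    // continue on the same stack after the drop.
    asm volatile(
        "mov x0, sp \n"
        "msr sp_el1, x0 \n" ::: "x0");
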
There's a similar RDRAND register (encoded as 's3_3_c2_c4_0') that can
be added if needed. The RNG CPU feature on AArch64 guarantees the
existence of both the RDSEED and RDRAND registers simultaneously, in
contrast to x86-64, where the respective instructions are independent of
each other.
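A sketch of reading the RDSEED-like register via its encoded name (in the Arm ARM these are the RNDR/RNDRRS registers of FEAT_RNG; the helper name here is made up):

    #include <AK/Types.h>

    // Sketch: read the reseeded random-number register (s3_3_c2_c4_1, the
    // RDSEED analogue). On failure the CPU sets PSTATE.NZCV to 0b0100 and
    // returns 0, so the flags tell us whether the value is usable.
    static bool read_random_seed(u64& value)
    {
        u64 success;
        asm volatile(
            "mrs %0, s3_3_c2_c4_1 \n"
            "cset %1, ne \n" // NZCV is 0b0000 on success
            : "=r"(value), "=r"(success)
            :
            : "cc");
        return success != 0;
    }
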
This is a separate file that behaves similarly to the Prekernel for
x86_64: it makes sure the CPU is dropped to EL1, enables the MMU, and
makes sure the CPU is running in high virtual memory. This code then
jumps to the kernel's usual init function.
And use it in the code that will be part of the early boot process. The
PANIC macro and dbgln functions cannot be used there, as they access
global variables, which do not work in the early boot process since the
MMU is not yet enabled.
This requires two new functions, context_first_init and
restore_context_and_eret. With this code in place, we are now running
the first idle thread! :^)
This changes the stack pointer to the initial_thread's stack pointer and
pushes two pointers onto the stack that point to the initial_thread. The
function then jumps to the ip of the initial_thread, which will be
thread_context_first_enter, and hangs there because that function is not
yet implemented.
This does not handle everything correctly yet, such as setting the
correct state for running userspace applications, however this should be
enough to get kernel scheduling to work.