mirror of https://github.com/RGBCube/serenity synced 2025-05-20 14:55:08 +00:00
Commit graph

1018 commits

Author SHA1 Message Date
Timon Kruiper
a146a19636 Kernel: Make Syscalls/ptrace.cpp buildable for aarch64 2023-01-27 20:47:08 +00:00
Timon Kruiper
cab725cdfb Kernel/aarch64: Implement set_return_reg and capture_syscall_params 2023-01-27 20:47:08 +00:00
Timon Kruiper
293ece6fad Kernel/aarch64: Add stub for copy_ptrace_registers_into_kernel_registers 2023-01-27 20:47:08 +00:00
Sam Atkins
3cbc0fdbb0 Kernel: Remove declarations for non-existent methods 2023-01-27 20:33:18 +00:00
Timon Kruiper
2896d7796d Kernel/aarch64: Set Access Permission (writable bit) on PageTableEntry
This will cause page faults to be generated. Since the previous commits
introduced page fault handling, we can now actually handle these faults
correctly.
2023-01-27 11:41:43 +01:00
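A minimal sketch of the descriptor bit involved, assuming the standard AArch64
stage-1 descriptor layout (the constant and helper names are illustrative, not
the kernel's PageTableEntry API): AP[2] (bit 7) marks a page read-only at EL1,
so a write to a mapping with that bit set generates the page fault described
above.

    #include <cstdint>

    // AArch64 stage-1 descriptor: AP[2] (bit 7) == 1 means read-only.
    // Name is illustrative; not the actual PageTableEntry member.
    constexpr uint64_t ACCESS_PERMISSION_READ_ONLY = 1ull << 7;

    constexpr uint64_t set_read_only(uint64_t descriptor, bool read_only)
    {
        return read_only ? (descriptor | ACCESS_PERMISSION_READ_ONLY)
                         : (descriptor & ~ACCESS_PERMISSION_READ_ONLY);
    }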
Timon Kruiper
1d58663298 Kernel/aarch64: Implement switching page directories
The code in PageDirectory.cpp now keeps track of the registered page
directories, and actually sets the TTBR0_EL1 to the page table base of
the currently executing thread. When context switching, we now also
change the TTBR0_EL1 to the page table base of the thread that we
context switch into.
2023-01-27 11:41:43 +01:00
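A minimal sketch of the register access this relies on, assuming AArch64
inline assembly in C++ (the helper names are illustrative, not the actual
SerenityOS functions): the current TTBR0_EL1 can be read with mrs, and on a
context switch the incoming thread's page table base is written back with msr.

    #include <cstdint>

    inline uint64_t read_ttbr0_el1()
    {
        uint64_t value;
        asm volatile("mrs %0, ttbr0_el1" : "=r"(value));
        return value;
    }

    inline void write_ttbr0_el1(uint64_t page_table_base)
    {
        asm volatile("msr ttbr0_el1, %0" ::"r"(page_table_base));
        asm volatile("isb"); // ensure the new translation base is in effect
    }

Depending on how ASIDs are (or are not) used, a TLB invalidation may also be
needed after such a switch.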
Timon Kruiper
1f30a5e4d9 Kernel/aarch64: Store and initialize TTBR0_EL1 in ThreadRegisters 2023-01-27 11:41:43 +01:00
Timon Kruiper
9e90932bfb Kernel/aarch64: Add helper to get the current TTBR0_EL1 2023-01-27 11:41:43 +01:00
Timon Kruiper
d9946c8e89 Kernel/aarch64: Keep track of root page table and kernel directory table 2023-01-27 11:41:43 +01:00
Timon Kruiper
697c5ca5e5 Kernel: Move Memory/PageDirectory.{cpp,h} to arch-specific directory
The handling of page tables is very architecture specific, so belongs
in the Arch directory. Some parts were already architecture-specific,
however this commit moves the rest of the PageDirectory class into the
Arch directory.

While we're here the aarch64/PageDirectory.{h,cpp} files are updated to
be aarch64 specific, by renaming some members and removing x86_64
specific code.
2023-01-27 11:41:43 +01:00
Timon Kruiper
55d756a813 Kernel/aarch64: Implement initial page fault handling
The shared code is moved to a common PageFault.cpp file.
2023-01-27 11:41:43 +01:00
Timon Kruiper
a532d28905 Kernel/aarch64: Add stub for handle_safe_access_fault 2023-01-27 11:41:43 +01:00
Timon Kruiper
ade27fa6b9 Kernel: Refactor PageFault for use in the aarch64 port
The class used to look at the x86_64-specific exception code to figure
out what kind of page fault happened; this refactor allows aarch64 to
use the same class.
2023-01-27 11:41:43 +01:00
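A sketch of what such an architecture-neutral fault description could look
like (field and type names are illustrative, not the actual Kernel::PageFault
interface): each architecture decodes its own fault information (the x86_64
exception code or the aarch64 fault status) into the shared shape, and the
generic handler works only with that.

    #include <cstdint>

    enum class AccessType { Read, Write, InstructionFetch };

    struct PageFault {
        uintptr_t virtual_address { 0 };
        AccessType access { AccessType::Read };
        bool page_was_present { false }; // protection violation vs. not mapped
        bool from_user_mode { false };
    };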
Timon Kruiper
fb10774862 Kernel: Factor out PreviousMode into RegisterState::previous_mode
Various places in the kernel were manually checking the cs register on
x86_64; to share this with aarch64, a function is added to RegisterState
and the call sites are updated. While we're here, the PreviousMode enum
is renamed to ExecutionMode.
2023-01-27 11:41:43 +01:00
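A minimal sketch of the x86_64 side of this, assuming the usual meaning of the
low two bits of cs (the requested privilege level); the method name matches
the commit, the rest is illustrative.

    #include <cstdint>

    enum class ExecutionMode { Kernel, User };

    struct RegisterState {
        uint64_t cs { 0 }; // saved code segment selector on x86_64

        // Ring 3 (low two bits of cs set) means we came from user mode.
        ExecutionMode previous_mode() const
        {
            return (cs & 3) == 3 ? ExecutionMode::User : ExecutionMode::Kernel;
        }
    };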
Timon Kruiper
247109cee6 Kernel/aarch64: Execute kernel with SP_EL1 instead of SP_EL0
Until now the kernel was always executing with SP_EL0, as this made the
initial dropping to EL1 a bit easier. This commit changes this behaviour
to use the corresponding SP_ELx for each exception level.

To make sure that the execution of the C++ code can continue, the
current stack pointer is copied into the corresponding SP_ELx just
before dropping an exception level.
2023-01-27 11:41:43 +01:00
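A rough sketch of that handover, assuming it runs while still above EL1 (the
function name is illustrative; the real code lives in the aarch64 boot path):
the current stack pointer is mirrored into SP_EL1 right before dropping the
exception level, so C++ code keeps running on the same stack.

    // Illustrative only; valid when executed at EL2 or EL3.
    inline void mirror_sp_into_sp_el1()
    {
        asm volatile(
            "mov x0, sp\n"
            "msr sp_el1, x0\n"
            ::: "x0");
    }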
Timon Kruiper
05659debd1 Kernel/aarch64: Move exception handler to Interrupts.cpp
Also dump the registers in a nicer format.
2023-01-27 11:41:43 +01:00
Liav A
1f9d3a3523 Kernel/PCI: Hold a reference to DeviceIdentifier in the Device class
There are now 2 separate classes for almost the same object type:
- EnumerableDeviceIdentifier, which is used in the enumeration code for
  all PCI host controller classes. This is allowed to be moved and
  copied, as it doesn't support ref-counting.
- DeviceIdentifier, which inherits from EnumerableDeviceIdentifier. This
  class uses ref-counting, and is not allowed to be copied. It has a
  spinlock member in its structure to allow safely executing complicated
  IO sequences on a PCI device and its space configuration.
  There's a static method that allows a quick conversion from
  EnumerableDeviceIdentifier to DeviceIdentifier while creating a
  NonnullRefPtr out of it.

The reason for doing this is for the sake of integrity and reliability of
the system in 2 places:
- Ensure that "complicated" tasks that rely on manipulating PCI device
  registers are done in a safe manner. For example, determining a PCI
  BAR space size requires multiple reads and writes to the same register,
  and if another CPU tries to do something else with our selected
  register, then the result will be a catastrophe.
- Allow the PCI API to have a unified form around a shared object which
  actually holds much more data than the PCI::Address structure. This is
  fundamental if we want to do certain types of optimizations, and be
  able to support more features of the PCI bus in the foreseeable
  future.

This patch already has several implications:
- All PCI::Device(s) hold a reference to a DeviceIdentifier structure
  originally given by the PCI::Access singleton. This means that
  all instances of DeviceIdentifier structures are located in one place,
  and all references are pointing to that location. This ensures that
  locking the operation spinlock will take effect in all the appropriate
  places.
- We no longer support adding PCI host controllers and then immediately
  allow for enumerating it with a lambda function. It was found that
  this method is extremely broken and too complicated to work reliably
  with the new paradigm being introduced in this patch. This
  means that for Volume Management Devices (Intel VMD devices), we
  simply first enumerate the PCI bus for such devices in the storage
  code, and if we find a device, we attach it in the PCI::Access method
  which will scan for devices behind that bridge and will add new
  DeviceIdentifier(s) objects to its internal Vector. Afterwards, we
  just continue as usual with scanning for actual storage controllers,
  so we will find the corresponding NVMe controllers if there were any
  behind that VMD bridge.
2023-01-26 23:04:26 +01:00
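A standalone model of the split described above, using standard types in place
of the kernel's ref-counting and spinlock primitives (all names and fields are
illustrative, not the actual SerenityOS declarations):

    #include <cstdint>
    #include <memory>
    #include <mutex>

    // Enumeration-time identifier: a plain value type, freely copied and moved.
    struct EnumerableDeviceIdentifier {
        uint32_t bus { 0 };
        uint32_t device { 0 };
        uint32_t function { 0 };
    };

    // Long-lived identifier: shared (ref-counted) and never copied, with a lock
    // that serializes multi-step config-space sequences such as sizing a BAR.
    struct DeviceIdentifier : EnumerableDeviceIdentifier {
        explicit DeviceIdentifier(EnumerableDeviceIdentifier const& id)
            : EnumerableDeviceIdentifier(id)
        {
        }
        DeviceIdentifier(DeviceIdentifier const&) = delete;
        DeviceIdentifier& operator=(DeviceIdentifier const&) = delete;

        std::mutex operation_lock; // models the per-device spinlock
    };

    // Every device object holds a shared pointer to the same DeviceIdentifier,
    // so taking operation_lock has effect across all users of that device.
    using DeviceIdentifierHandle = std::shared_ptr<DeviceIdentifier>;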
konrad
95c469ca4c Kernel: Move Aarch64 MMU debug message into memory manager initializer
Doing so unifies startup debug messages visually.
2023-01-25 23:17:36 +01:00
konrad
e1c50b83e1 Kernel: Use RDSEED assembly snippet to seed RNG on Aarch64
There's a similar RDRAND register (encoded as 's3_3_c2_c4_0') to be
added if needed. The RNG CPU feature on Aarch64 guarantees the existence
of both the RDSEED and RDRAND registers simultaneously, in contrast to
x86-64, where the respective instructions are independent of each other.
2023-01-25 23:17:36 +01:00
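A hedged sketch of reading the reseeded random-number register such a snippet
is built on: s3_3_c2_c4_1 is the RNDRRS (RDSEED-like) encoding, while the
RDRAND-style s3_3_c2_c4_0 (RNDR) read works the same way. The helper name is
illustrative, and the code requires FEAT_RNG.

    #include <cstdint>

    // Returns false if the hardware could not produce entropy
    // (PSTATE.Z is set and the register reads as 0).
    inline bool read_rndrrs(uint64_t& value)
    {
        uint64_t ok;
        asm volatile(
            "mrs %0, s3_3_c2_c4_1\n" // RNDRRS (reseeded random)
            "cset %1, ne\n"          // success iff Z == 0
            : "=r"(value), "=r"(ok)
            :
            : "cc");
        return ok != 0;
    }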
konrad
7c8e61f4d1 Kernel: Unify x86-64 assembly snippets naming for RDSEED & RDRAND 2023-01-25 23:17:36 +01:00
Timon Kruiper
c4a3af12fc Kernel/aarch64: Change base address of the kernel to 0x2000000000
This is the same address that the x86_64 kernel runs at, and allows us
to run the kernel at a high virtual memory address. Since we now run
completely in high virtual memory, we can also unmap the identity
mapping. Additionally, some changes in MMU.cpp are required to boot
successfully.
2023-01-24 14:54:44 +00:00
Timon Kruiper
3bc122fcef Kernel/aarch64: Ensure global variable accesses work without MMU enabled
Since we link the kernel at a high virtual memory address, the addresses
of global variables are also at virtual addresses. To be able to access
them without the MMU enabled, we have to subtract the
KERNEL_MAPPING_BASE.
2023-01-24 14:54:44 +00:00
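A small sketch of that adjustment, assuming the kernel base address named in
the nearby commit (the helper name is illustrative): with the MMU off, the
linked virtual address of a global has to be rebased to its physical location
before it can be dereferenced.

    #include <cstdint>

    // 0x2000000000 is the kernel base mentioned above; the helper is illustrative.
    constexpr uintptr_t KERNEL_MAPPING_BASE = 0x2000000000;

    template<typename T>
    inline T* physical_location_of(T* linked_address)
    {
        auto raw = reinterpret_cast<uintptr_t>(linked_address);
        return reinterpret_cast<T*>(raw - KERNEL_MAPPING_BASE);
    }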
Timon Kruiper
ebdb899d3d Kernel/aarch64: Add pre_init function that sets up the CPU and MMU
This is a separate file that behaves similarly to the Prekernel for
x86_64: it makes sure the CPU is dropped to EL1, the MMU is enabled,
and the CPU is running in high virtual memory. This code then jumps to
the usual init function of the kernel.
2023-01-24 14:54:44 +00:00
Timon Kruiper
5db32ecbe1 Kernel/aarch64: Access MMIO using mapping in high virtual memory
This ensures that we can unmap the identity mapping of the kernel in
physical memory.
2023-01-24 14:54:44 +00:00
Timon Kruiper
91d0451999 Kernel/aarch64: Use relative addressing in boot.S
As the kernel is now linked at a high address in virtual memory, we
cannot use absolute addresses, as they refer to those high virtual
addresses. At this point in the boot process we are still running with
the MMU off, so we have to make sure the accesses use physical memory
addresses.
2023-01-24 14:54:44 +00:00
Timon Kruiper
a581cae4d4 Kernel/aarch64: Add function to MMU.cpp to unmap identity mapping
This function will be used once the kernel runs in high virtual memory
to unmap the identity mapping as userspace will later on use this memory
range instead.
2023-01-24 14:54:44 +00:00
Timon Kruiper
150c52e420 Kernel/aarch64: Add {panic,dbgln}_without_mmu
And use it in the code that will be part of the early boot process.

The PANIC macro and dbgln functions cannot be used, as they access
global variables, which do not work in the early boot process, since the
MMU is not yet enabled.
2023-01-24 14:54:44 +00:00
Timon Kruiper
69c49b3d00 Kernel/aarch64: Map kernel and MMIO in high virtual memory
In the upcoming commits, we'll change the kernel to run at a virtual
address in high memory. This commit prepares for that by making sure the
kernel and MMIO are mapped into high virtual memory.
2023-01-24 14:54:44 +00:00
Jelle Raaijmakers
2428ba3832 Kernel: Remove dbgln when unregistering an unhandled x86_64 interrupt
A lot of interrupt numbers are initialized with the unhandled interrupt
handler. Whenever a new handler is registered on one of these
interrupts, the old handler is unregistered first. Let's not be verbose
about this since it is perfectly normal.
2023-01-20 15:22:42 +01:00
Jelle Raaijmakers
5f85f1abaa Kernel: Simplify (un)registering interrupt logic
Lose a level of indentation and remove a superfluous `handler_slot`
check.
2023-01-20 15:22:42 +01:00
konrad
78d6de2ec1 Kernel: Make a slightly better demo for Aarch64 multiprocessing 2023-01-18 22:58:42 +01:00
konrad
5791072280 Kernel: Detect Aarch64 virtual address bit width with CPU ID registers 2023-01-18 22:58:42 +01:00
konrad
401fc6afae Kernel: Detect Aarch64 physical address bit width with CPU ID registers 2023-01-18 22:58:42 +01:00
konrad
66c65f6e2c Kernel: Add and use accessors to read from Aarch64 CPU ID registers
Accessors for the following registers are updated and put in use:
* ID_AA64ISAR0_EL1, Instruction Set Attribute Register 0

Accessors for the following registers are added and put in use:
* ID_AA64ISAR1_EL1, Instruction Set Attribute Register 1
* ID_AA64ISAR2_EL1, Instruction Set Attribute Register 2
* ID_AA64MMFR1_EL1, AArch64 Memory Model Feature Register 1
* ID_AA64MMFR2_EL1, AArch64 Memory Model Feature Register 2
* ID_AA64MMFR3_EL1, AArch64 Memory Model Feature Register 3
* ID_AA64MMFR4_EL1, AArch64 Memory Model Feature Register 4
* ID_AA64PFR0_EL1, AArch64 Processor Feature Register 0
* ID_AA64PFR1_EL1, AArch64 Processor Feature Register 1
* ID_AA64PFR2_EL1, AArch64 Processor Feature Register 2
* ID_AA64ZFR0_EL1, AArch64 SVE Feature ID register 0
* ID_AA64SMFR0_EL1, AArch64 SME Feature ID register 0
* ID_AA64DFR0_EL1, AArch64 Debug Feature Register 0
* ID_AA64DFR1_EL1, AArch64 Debug Feature Register 1

Additionally, a few CPU features are detected with
* TCR_EL1, Translation Control Register

but the detection mechanism using it (for LPA/LPA2) is probably wrong,
as this is a control register, not an ID register, and needs further work.

Finally, the following registers are provided. The former is already
used, while the latter is given for future use:
* MIDR_EL1, Main ID Register
* AIDR_EL1, Auxiliary ID Register
2023-01-18 22:58:42 +01:00
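A minimal sketch of what one such accessor looks like, plus a feature check
built on it (the function names are illustrative; the register and field are
architectural): ID_AA64ISAR0_EL1 bits [63:60] (RNDR) being non-zero indicates
that the FEAT_RNG registers used by the RDSEED/RDRAND snippets above are
present.

    #include <cstdint>

    // Illustrative accessor following the pattern described above.
    inline uint64_t read_id_aa64isar0_el1()
    {
        uint64_t value;
        asm volatile("mrs %0, ID_AA64ISAR0_EL1" : "=r"(value));
        return value;
    }

    // ID_AA64ISAR0_EL1.RNDR (bits [63:60]) != 0 means FEAT_RNG is implemented.
    inline bool detect_feat_rng()
    {
        return ((read_id_aa64isar0_el1() >> 60) & 0xF) != 0;
    }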
konrad
6979cf230e Kernel: Print Aarch64 CPU features during CPU initialization 2023-01-18 22:58:42 +01:00
konrad
a8e9591bac Kernel: Split Aarch64 CPU setup into two stages
The former aims to bring the processor itself into the desired state,
while the latter allows for additional initialization once the heap is
available.
2023-01-18 22:58:42 +01:00
konrad
97dce5d001 Kernel: Add Aarch64 CPU feature detection 2023-01-18 22:58:42 +01:00
konrad
0f81fb03f2 Kernel: Introduce stages in Aarch64 CPU initialization phase
Dropping to each exception level is now more explicit.
2023-01-18 22:58:42 +01:00
konrad
c08f059340 Kernel: Add CPUFeature enumeration for Aarch64 CPUs
Also, name & description mappings for the enumeration are provided.
2023-01-18 22:58:42 +01:00
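A sketch of the shape this could take, assuming a bitmask-style enumeration
(the enumerator list is illustrative, not the kernel's). The name
`cpu_feature_to_name` matches the naming settled on in the commit below, and a
`cpu_feature_to_description` counterpart would follow the same pattern.

    #include <cstdint>
    #include <string_view>

    // Illustrative subset; the real enumeration covers many more features.
    enum class CPUFeature : uint64_t {
        RNG = 1ull << 0,
        PAN = 1ull << 1,
    };

    constexpr std::string_view cpu_feature_to_name(CPUFeature feature)
    {
        switch (feature) {
        case CPUFeature::RNG:
            return "RNG";
        case CPUFeature::PAN:
            return "PAN";
        }
        return "Unknown";
    }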
konrad
823aab8296 Kernel: Use a descriptive name for x86-64 cpu_feature_to_string_view
Settled on `cpu_feature_to_name`, as that naming is more descriptive,
and a similarly named `cpu_feature_to_description` function will be
provided for Aarch64.
2023-01-18 22:58:42 +01:00
Liav A
3d87445c82 Kernel: Restore setting i8042 scan code set to scan code set 2 sequence
This seems to work perfectly OK on my ICH7 test machine and also on
QEMU, so it is probably OK to restore this.
This will ensure we always get scan code set 1 input, because we enable
scan code set 2 and PS/2 translation on the first (keyboard) port.
2023-01-06 11:09:56 +01:00
Liav A
0f7cc468b2 Kernel: Make i8042 controller initialization sequence more robust
The setting of the scan code set sequence is removed, as it's buggy and
could lead the controller to fail immediately when doing a self-test
afterwards. We will restore it when we understand how to do so safely.

Allow the user to determine a preferred detection path with a new kernel
command line argument. The default option is to check i8042 presence
with an ACPI check and, if necessary, an "aggressive" test to determine
i8042 existence in the system.
Also, keep the i8042 controller pointer on the stack, and don't assign
the m_i8042_controller member pointer if the controller does not exist.
2023-01-06 11:09:56 +01:00
Ben Wiederhake
c25bef59aa Kernel: Repair build for aarch64
This broke in 6fd478b6ce due to
insufficient testing on my part. Sorry!
2023-01-05 19:47:07 +01:00
Nico Weber
7f4680a377 Kernel/aarch64: Remove counterproductive volatile
Should not be needed, and triggers -Wvolatile in gcc.
See discussion on #16790.
2023-01-05 19:45:27 +01:00
Evan Smal
288a73ea0e Kernel: Add dmesgln_pci logging for Kernel::PCI
A virtual method named device_name() was added to
Kernel::PCI to support logging the PCI::Device name
and address using dmesgln_pci. Previously, PCI::Device
did not store the device name.

All devices inheriting from PCI::Device now use dmesgln_pci where
they previously used dmesgln.
2023-01-05 01:44:19 +01:00
Ben Wiederhake
65b420f996 Everywhere: Remove unused includes of AK/Memory.h
These instances were detected by searching for files that include
AK/Memory.h, but don't match the regex:

\\b(fast_u32_copy|fast_u32_fill|secure_zero|timing_safe_compare)\\b

This regex is pessimistic, so there might be more files that don't
actually use any memory function.

In theory, one might use LibCPP to detect things like this
automatically, but let's do this one step after another.
2023-01-02 20:27:20 -05:00
Ben Wiederhake
f07847e099 Everywhere: Remove unused includes of AK/Concepts.h
These instances were detected by searching for files that include
AK/Concepts.h, but don't match the regex:

\\b(AnyString|Arithmetic|ArrayLike|DerivedFrom|Enum|FallibleFunction|Flo
atingPoint|Fundamental|HashCompatible|Indexable|Integral|IterableContain
er|IteratorFunction|IteratorPairWith|OneOf|OneOfIgnoringCV|SameAs|Signed
|SpecializationOf|Unsigned|VoidFunction)\\b

(Without the linebreaks.)

This regex is pessimistic, so there might be more files that don't
actually use any concepts.

In theory, one might use LibCPP to detect things like this
automatically, but let's do this one step after another.
2023-01-02 20:27:20 -05:00
Ben Wiederhake
c2a900b853 Everywhere: Remove unused includes of AK/StdLibExtras.h
These instances were detected by searching for files that include
AK/StdLibExtras.h, but don't match the regex:

\\b(abs|AK_REPLACED_STD_NAMESPACE|array_size|ceil_div|clamp|exchange|for
ward|is_constant_evaluated|is_power_of_two|max|min|mix|move|_RawPtr|RawP
tr|round_up_to_power_of_two|swap|to_underlying)\\b

(Without the linebreaks.)

This regex is pessimistic, so there might be more files that don't
actually use any "extra stdlib" functions.

In theory, one might use LibCPP to detect things like this
automatically, but let's do this one step after another.
2023-01-02 20:27:20 -05:00
Ben Wiederhake
6fd478b6ce Everywhere: Remove unused includes of AK/Format.h
These instances were detected by searching for files that include
AK/Format.h, but don't match the regex:

\\b(CheckedFormatString|critical_dmesgln|dbgln|dbgln_if|dmesgln|FormatBu
ilder|__FormatIfSupported|FormatIfSupported|FormatParser|FormatString|Fo
rmattable|Formatter|__format_value|HasFormatter|max_format_arguments|out
|outln|set_debug_enabled|StandardFormatter|TypeErasedFormatParams|TypeEr
asedParameter|VariadicFormatParams|v_critical_dmesgln|vdbgln|vdmesgln|vf
ormat|vout|warn|warnln|warnln_if)\\b

(Without the linebreaks.)

This regex is pessimistic, so there might be more files that don't
actually use any formatting functions.

Observe that this revealed that Userland/Libraries/LibC/signal.cpp is
missing an include.

In theory, one might use LibCPP to detect things like this
automatically, but let's do this one step after another.
2023-01-02 20:27:20 -05:00
kleines Filmröllchen
a6a439243f Kernel: Turn lock ranks into template parameters
This step would ideally not have been necessary (it increases the amount
of refactoring and templates necessary, which in turn increases build
times), but it gives us a couple of nice properties:
- SpinlockProtected inside Singleton (a very common combination) can now
  obtain any lock rank just via the template parameter. It was not
  previously possible to do this with SingletonInstanceCreator magic.
- SpinlockProtected's lock rank is now mandatory; this is the majority
  of cases and allows us to see where we're still missing proper ranks.
- The type already informs us what lock rank a lock has, which aids code
  readability and (possibly, if gdb cooperates) lock mismatch debugging.
- The rank of a lock can no longer be dynamic, which is not something we
  wanted in the first place (or made use of). Locks randomly changing
  their rank sounds like a disaster waiting to happen.
- In some places, we might be able to statically check that locks are
  taken in the right order (with the right lock rank checking
  implementation) as rank information is fully statically known.

This refactoring further exposes the fact that Mutex has no lock rank
capabilities, which is not fixed here.
2023-01-02 18:15:27 -05:00
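A standalone sketch of that pattern, with a standard mutex standing in for the
rank-checked spinlock (type and member names are illustrative, not the actual
SerenityOS types): the rank is a non-type template parameter, so it is fixed
at compile time and visible in the type.

    #include <mutex>

    enum class LockRank {
        None,
        MemoryManager,
        Process,
    };

    template<typename T, LockRank Rank>
    class SpinlockProtected {
    public:
        template<typename Callback>
        decltype(auto) with(Callback callback)
        {
            std::lock_guard guard(m_lock); // would acquire a spinlock of rank Rank
            return callback(m_value);
        }

    private:
        std::mutex m_lock; // stand-in for the rank-checked spinlock
        T m_value {};
    };

    // Usage sketch:
    //   SpinlockProtected<int, LockRank::Process> counter;
    //   counter.with([](int& value) { ++value; });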