
Kernel: Make quickmap functions VERIFY that MM lock is held

The quickmap_page() and unquickmap_page() functions are used to map a
single physical page at a kernel virtual address for temporary access.

These use the per-CPU quickmap buffer in the page tables, and access to
this is guarded by the MM lock. To prevent bugs, quickmap_page() should
not *take* the MM lock, but rather verify that it is already held!

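To illustrate the discipline the message describes, here is a minimal, self-contained C++ sketch, assuming a simplified spinlock with an own_lock() ownership query; Spinlock, m_held, and quickmap_like_helper are illustrative stand-ins, not the actual SerenityOS kernel types:

#include <atomic>
#include <cassert>

// Simplified spinlock that remembers whether it is currently held, so a
// callee can assert ownership instead of re-acquiring the lock.
class Spinlock {
public:
    void lock()
    {
        while (m_locked.exchange(true, std::memory_order_acquire)) { }
        m_held = true;
    }
    void unlock()
    {
        m_held = false;
        m_locked.store(false, std::memory_order_release);
    }
    bool own_lock() const { return m_held; }

private:
    std::atomic<bool> m_locked { false };
    bool m_held { false }; // simplified: the real kernel records which CPU holds it
};

Spinlock s_mm_lock;

// The callee asserts that its caller already holds the lock (VERIFY in
// kernel terms). Taking the lock here instead would silently mask callers
// that forgot to acquire it.
void quickmap_like_helper()
{
    assert(s_mm_lock.own_lock());
    // ... touch the lock-guarded per-CPU quickmap state ...
}
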
This exposed two situations where we were using quickmap without holding
the MM lock during page fault handling. This patch is forced to fix
these issues (which is great!) :^)
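The caller-side shape this enforces looks roughly like the sketch below, with illustrative stand-ins (MMLock, ScopedLocker, the *_like functions) rather than the actual SerenityOS types; the real code uses SpinlockLocker as the scoped guard:

#include <cassert>
#include <mutex>

// Illustrative lock that can report ownership, plus a scoped guard in the
// spirit of SpinlockLocker.
struct MMLock {
    void lock() { m_mutex.lock(); m_held = true; }
    void unlock() { m_held = false; m_mutex.unlock(); }
    bool own_lock() const { return m_held; }
    std::mutex m_mutex;
    bool m_held { false };
};

struct ScopedLocker {
    explicit ScopedLocker(MMLock& lock) : m_lock(lock) { m_lock.lock(); }
    ~ScopedLocker() { m_lock.unlock(); }
    MMLock& m_lock;
};

MMLock s_mm_lock;

void quickmap_page_like() { assert(s_mm_lock.own_lock()); /* map the page */ }
void unquickmap_page_like() { assert(s_mm_lock.own_lock()); /* unmap it */ }

// A fault-handling path must now hold the lock across the whole
// quickmap/unquickmap pair; the asserts above catch any path that does not.
void page_fault_path_like()
{
    ScopedLocker locker(s_mm_lock);
    quickmap_page_like();
    // ... read or write the temporarily mapped page ...
    unquickmap_page_like();
} // lock released when 'locker' goes out of scope
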
Author: Andreas Kling
Date:   2021-08-22 12:57:58 +02:00
Parent: 1b9916439f
Commit: a930877f31

3 changed files with 11 additions and 7 deletions


@@ -1004,9 +1004,9 @@ PageTableEntry* MemoryManager::quickmap_pt(PhysicalAddress pt_paddr)
 u8* MemoryManager::quickmap_page(PhysicalAddress const& physical_address)
 {
     VERIFY_INTERRUPTS_DISABLED();
+    VERIFY(s_mm_lock.own_lock());
     auto& mm_data = get_data();
     mm_data.m_quickmap_prev_flags = mm_data.m_quickmap_in_use.lock();
-    SpinlockLocker lock(s_mm_lock);
     VirtualAddress vaddr(KERNEL_QUICKMAP_PER_CPU_BASE + Processor::current_id() * PAGE_SIZE);
     u32 pte_idx = (vaddr.get() - KERNEL_PT1024_BASE) / PAGE_SIZE;
@@ -1025,7 +1025,7 @@ u8* MemoryManager::quickmap_page(PhysicalAddress const& physical_address)
 void MemoryManager::unquickmap_page()
 {
     VERIFY_INTERRUPTS_DISABLED();
-    SpinlockLocker lock(s_mm_lock);
+    VERIFY(s_mm_lock.own_lock());
     auto& mm_data = get_data();
     VERIFY(mm_data.m_quickmap_in_use.is_locked());
     VirtualAddress vaddr(KERNEL_QUICKMAP_PER_CPU_BASE + Processor::current_id() * PAGE_SIZE);