
Kernel/Locking: Add lock rank tracking per thread to find deadlocks

This change adds a static lock hierarchy / ranking to the Kernel with
the goal of reducing / finding deadlocks when running with SMP enabled.

We have seen quite a few lock-ordering deadlocks (locks taken in a
different order on two different code paths). As locks in the system are
properly annotated, these facilities will find such locking protocol
violations automatically.
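
To make the failure mode concrete, here is a minimal userspace analog of
such a lock-ordering deadlock, sketched with std::mutex and std::thread
rather than the kernel's own lock types; the lock and function names are
made up for illustration only.

#include <mutex>
#include <thread>

std::mutex lock_a;
std::mutex lock_b;

// Code path 1 takes lock_a, then lock_b.
void path_one()
{
    std::lock_guard<std::mutex> first(lock_a);
    std::lock_guard<std::mutex> second(lock_b); // blocks if path_two already holds lock_b
}

// Code path 2 takes the same locks in the opposite order. If each thread
// has already acquired its first lock, both block forever waiting for the
// other one to let go.
void path_two()
{
    std::lock_guard<std::mutex> first(lock_b);
    std::lock_guard<std::mutex> second(lock_a); // blocks if path_one already holds lock_a
}

int main()
{
    std::thread t1(path_one);
    std::thread t2(path_two);
    t1.join(); // may never return if the unlucky interleaving occurs
    t2.join();
}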

The `LockRank` enum documents the various locks in the system and their
rank. The implementation guarantees that a thread holding one or more
locks of a lower rank cannot acquire an additional lock with a rank that
is greater than or equal to any of the currently held locks.
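
As a rough, compilable sketch of how this tracking could work (not this
commit's actual implementation: the rank names and the exact bitmask check
below are assumptions, while track_lock_acquire/track_lock_release mirror
the declarations added in the diff), consider:

#include <cassert>
#include <cstdint>

// Illustrative ranks only; the real enum lives in Kernel/Locking/LockRank.h.
enum class LockRank : uint32_t {
    None = 0x0,
    Thread = 0x1,
    Process = 0x2,
    MemoryManager = 0x4,
};

struct LockRankTracking {
    uint32_t held_rank_mask { 0 }; // OR of the ranks of every lock currently held

    void track_lock_acquire(LockRank rank)
    {
        auto bit = static_cast<uint32_t>(rank);
        if (bit == 0)
            return; // unranked locks are not tracked
        // The rank being acquired must be strictly lower than every rank
        // already held, i.e. no held bit may sit at or below the new bit.
        assert((held_rank_mask & ((bit << 1) - 1)) == 0);
        held_rank_mask |= bit;
    }

    void track_lock_release(LockRank rank)
    {
        held_rank_mask &= ~static_cast<uint32_t>(rank);
    }
};

A lock annotated with a rank would presumably call track_lock_acquire() on
the current thread when taken and track_lock_release() when dropped, so a
violation is caught at the exact acquisition site that breaks the ordering.
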
Brian Gianforcaro 2021-09-07 02:40:31 -07:00 committed by Andreas Kling
parent 0718afa773
commit 066b0590ec
5 changed files with 128 additions and 0 deletions

Kernel/Thread.h

@@ -29,6 +29,7 @@
 #include <Kernel/Library/ListedRefCounted.h>
 #include <Kernel/Locking/LockLocation.h>
 #include <Kernel/Locking/LockMode.h>
+#include <Kernel/Locking/LockRank.h>
 #include <Kernel/Locking/SpinlockProtected.h>
 #include <Kernel/Memory/VirtualRange.h>
 #include <Kernel/Scheduler.h>
@@ -1083,6 +1084,9 @@ public:
 u32 saved_critical() const { return m_saved_critical; }
 void save_critical(u32 critical) { m_saved_critical = critical; }
+void track_lock_acquire(LockRank rank);
+void track_lock_release(LockRank rank);
 [[nodiscard]] bool is_active() const { return m_is_active; }
 [[nodiscard]] bool is_finalizable() const
@@ -1302,6 +1306,7 @@ private:
 Kernel::Mutex* m_blocking_lock { nullptr };
 u32 m_lock_requested_count { 0 };
 IntrusiveListNode<Thread> m_blocked_threads_list_node;
+LockRank m_lock_rank_mask { LockRank::None };
 #if LOCK_DEBUG
 struct HoldingLockInfo {