
Kernel: Don't finalize a thread while it still has code running

After marking a thread for death we might end up finalizing the thread
while it still has code to run, e.g. via:

Thread::block -> Thread::dispatch_one_pending_signal
-> Thread::dispatch_signal -> Process::terminate_due_to_signal
-> Process::die -> Process::kill_all_threads -> Thread::set_should_die

This marks the thread for death. It isn't destroyed at this point
though.
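
(For illustration only: a minimal user-space sketch, using hypothetical
names rather than the real kernel types, of the shape of the problem.
The blocker is a stack object that registers itself and relies on its
destructor running before its stack frame disappears, yet the thread
can be marked for death while that frame is still live.)

#include <algorithm>
#include <cassert>
#include <vector>

struct FakeBlocker;
static std::vector<FakeBlocker*> g_blockers; // may be touched by other threads

struct FakeBlocker {
    FakeBlocker() { g_blockers.push_back(this); }
    ~FakeBlocker()
    {
        // Must run before the stack frame holding this object goes away,
        // otherwise g_blockers keeps a dangling pointer.
        g_blockers.erase(std::remove(g_blockers.begin(), g_blockers.end(), this), g_blockers.end());
    }
};

static bool g_should_die = false;

void fake_block()
{
    FakeBlocker blocker; // lives in this frame, like the blocker in Thread::block
    g_should_die = true; // stands in for set_should_die(): marked, not destroyed
    // ... the frame is still live here; ~FakeBlocker() has not run yet ...
}

int main()
{
    fake_block();
    assert(g_blockers.empty()); // holds only because the frame fully unwound
}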

The scheduler then gets invoked via:

Thread::block -> Thread::relock_process

At that point we still have a registered blocker on the stack frame
belonging to Thread::block. Thread::relock_process drops the critical
section, which allows the scheduler to run.
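
(A hedged sketch of that timeline; the function names below are
stand-ins rather than the real kernel API. It only shows where the
preemption window opens relative to the blocker's lifetime.)

#include <cstdio>

// Placeholder steps for the corresponding actions inside Thread::block.
static void enter_critical() { std::puts("critical section entered"); }
static void register_blocker() { std::puts("blocker registered; it lives on this stack frame"); }
static void dispatch_pending_signals() { std::puts("signal dispatched -> thread marked for death"); }
static void relock_process()
{
    // Dropping the critical section lets the scheduler preempt us while
    // Thread::block's frame (and its blocker) is still on the stack.
    std::puts("critical section dropped; scheduler may now run");
}
static void unregister_blocker() { std::puts("blocker destructor runs and unregisters it"); }

int main()
{
    enter_critical();
    register_blocker();
    dispatch_pending_signals();
    relock_process();
    unregister_blocker(); // only reached if we are allowed to keep running
}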

When the thread is then scheduled out, the scheduler sets the thread
state to Thread::Dying, which allows the finalizer to destroy the Thread
object and its associated resources, including the kernel stack.
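
(A rough sketch, again with hypothetical names, of why Dying is the
point of no return: once a finalizer pass sees the thread in Dying it
may free the kernel stack, taking any stack-allocated blocker with it.)

#include <cstdlib>
#include <vector>

enum class State { Running, Dying };

struct FakeThread {
    State state { State::Running };
    unsigned char* kernel_stack { nullptr };
};

// Hypothetical finalizer pass: reap every thread that has reached Dying.
void finalize_dying_threads(std::vector<FakeThread*>& threads)
{
    for (auto* thread : threads) {
        if (thread->state != State::Dying)
            continue;
        // Freeing the kernel stack destroys everything still living in its
        // frames, e.g. a blocker that Thread::block has not unregistered yet.
        std::free(thread->kernel_stack);
        thread->kernel_stack = nullptr;
    }
}

int main()
{
    FakeThread thread;
    thread.kernel_stack = static_cast<unsigned char*>(std::malloc(4096));
    std::vector<FakeThread*> threads { &thread };
    thread.state = State::Dying;     // pre-fix: set while block() still runs
    finalize_dying_threads(threads); // the stack is gone after this point
}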

This probably also affects objects other than blockers which rely on
their destructor being run; however, the problem was most noticeable
with blockers because they are allocated on the stack of the dying
thread and cause an access violation when another thread touches a
blocker which belonged to the now-dead thread.

Fixes #7823.
Gunnar Beutner 2021-06-06 11:40:11 +02:00 committed by Andreas Kling
parent cab2ee5ea2
commit 3c2a6a25da
2 changed files with 12 additions and 18 deletions


@@ -175,8 +175,6 @@ bool Scheduler::pick_next()
 {
     VERIFY_INTERRUPTS_DISABLED();

-    auto current_thread = Thread::current();
-
     // Set the m_in_scheduler flag before acquiring the spinlock. This
     // prevents a recursive call into Scheduler::invoke_async upon
     // leaving the scheduler lock.
@@ -194,22 +192,6 @@ bool Scheduler::pick_next()
     ScopedSpinLock lock(g_scheduler_lock);

-    if (current_thread->should_die() && current_thread->state() == Thread::Running) {
-        // Rather than immediately killing threads, yanking the kernel stack
-        // away from them (which can lead to e.g. reference leaks), we always
-        // allow Thread::wait_on to return. This allows the kernel stack to
-        // clean up and eventually we'll get here shortly before transitioning
-        // back to user mode (from Processor::exit_trap). At this point we
-        // no longer want to schedule this thread. We can't wait until
-        // Scheduler::enter_current because we don't want to allow it to
-        // transition back to user mode.
-        if constexpr (SCHEDULER_DEBUG)
-            dbgln("Scheduler[{}]: Thread {} is dying", Processor::id(), *current_thread);
-        current_thread->set_state(Thread::Dying);
-    }
     if constexpr (SCHEDULER_RUNNABLE_DEBUG) {
         dump_thread_list();
     }