Instead of having two separate implementations of AK::RefCounted, one
for userspace and one for kernelspace, there is now RefCounted and
AtomicRefCounted.
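A minimal sketch of the split, with simplified bodies that are not the actual AK headers; the only difference that matters here is how the counter is mutated:

```cpp
#include <atomic>
#include <cstddef>

template<typename T>
class RefCounted {
public:
    void ref() const { ++m_ref_count; }
    void unref() const
    {
        if (--m_ref_count == 0)
            delete static_cast<T const*>(this);
    }

private:
    mutable size_t m_ref_count { 1 }; // plain counter: no cross-thread sharing
};

template<typename T>
class AtomicRefCounted {
public:
    void ref() const { m_ref_count.fetch_add(1, std::memory_order_relaxed); }
    void unref() const
    {
        // acquire/release so the destroying thread sees all prior writes
        if (m_ref_count.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete static_cast<T const*>(this);
    }

private:
    mutable std::atomic<size_t> m_ref_count { 1 };
};
```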
This mechanism was unsafe to use in any multithreaded context, since
the hook function was invoked on a raw pointer *after* decrementing
the local ref count.
Since we don't use it for anything anymore, let's just get rid of it.
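For context, a hypothetical reconstruction of the removed hook (names and signature are invented for illustration; the real mechanism is gone) showing why the ordering races:

```cpp
#include <atomic>

class HookedRefCounted {
public:
    using OneRefLeftHook = void (*)(HookedRefCounted*);

    void ref() const { m_ref_count.fetch_add(1, std::memory_order_relaxed); }

    void unref() const
    {
        auto previous = m_ref_count.fetch_sub(1, std::memory_order_acq_rel);
        if (previous == 2 && m_hook) {
            // The decrement is already visible to other threads at this point.
            // Another thread can now drop the final reference and delete the
            // object, so the raw pointer handed to the hook may be dangling.
            m_hook(const_cast<HookedRefCounted*>(this));
        }
        if (previous == 1)
            delete this;
    }

    void set_one_ref_left_hook(OneRefLeftHook hook) { m_hook = hook; }

protected:
    virtual ~HookedRefCounted() = default;

private:
    mutable std::atomic<unsigned> m_ref_count { 1 };
    OneRefLeftHook m_hook { nullptr };
};
```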
Look for a remove_from_secondary_lists() member function on the ref-counting
target and, if present, call it *while the lock is held*.
This allows listed-ref-counted objects to be present in multiple lists
and still have synchronized removal on final unref.
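A minimal sketch of the idea; the concept and the std::mutex-protected list are assumptions standing in for the real kernel types:

```cpp
#include <list>
#include <mutex>

// Detect an optional remove_from_secondary_lists() on the ref-counting target.
template<typename T>
concept HasRemoveFromSecondaryLists = requires(T t) { t.remove_from_secondary_lists(); };

template<typename T>
struct LockedList {
    std::mutex lock;
    std::list<T*> objects;

    // Called on the final unref of `object`; the list lock stays held while the
    // object is unlinked from this list and from any secondary lists it is on.
    void remove_on_last_unref(T& object)
    {
        std::lock_guard guard(lock);
        objects.remove(&object);
        if constexpr (HasRemoveFromSecondaryLists<T>)
            object.remove_from_secondary_lists(); // still under the lock
        // The caller destroys the object only after the lock is released.
    }
};
```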
When doing the last unref() on a listed-ref-counted object, we keep
the list locked while mutating the ref count. The destructor itself
is invoked after unlocking the list.
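Roughly, the last-unref path looks like this (a sketch with assumed names such as decrement_ref_count() and a std::mutex in place of the kernel's own locking):

```cpp
#include <mutex>

template<typename T, typename List>
void listed_unref(T& object, List& list, std::mutex& list_lock)
{
    bool destroy = false;
    {
        std::lock_guard guard(list_lock);
        // Mutate the ref count while the list is locked, so nobody can find the
        // object through the list and re-ref it after we commit to destroying it.
        if (object.decrement_ref_count() == 0) {
            list.remove(&object);
            destroy = true;
        }
    }
    // Invoke the destructor only after the list lock has been released, so the
    // destructor is free to take other locks without lock-order trouble.
    if (destroy)
        delete &object;
}
```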
This was racy with weakable classes, since their weak pointer factory
still pointed to the object after we'd decided to destroy it. That
opened a small time window where someone could try to strong-ref a weak
pointer to an object after it was removed from the list, but just before
the destructor got invoked.
This patch closes the race window by explicitly revoking all weak
pointers while the list is locked.
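Extending the sketch above with the same assumed names, the fix revokes the weak pointer factory inside the locked section, before anyone can upgrade a weak pointer to a dying object:

```cpp
#include <mutex>

// Detect weakable targets (assumed interface; stands in for AK's Weakable).
template<typename T>
concept RevokesWeakPtrs = requires(T t) { t.revoke_weak_ptrs(); };

template<typename T, typename List>
void listed_unref(T& object, List& list, std::mutex& list_lock)
{
    bool destroy = false;
    {
        std::lock_guard guard(list_lock);
        if (object.decrement_ref_count() == 0) {
            list.remove(&object);
            if constexpr (RevokesWeakPtrs<T>) {
                // Revoke while still holding the list lock: after this point,
                // nobody can strong-ref a stale weak pointer in the window
                // between list removal and the destructor running.
                object.revoke_weak_ptrs();
            }
            destroy = true;
        }
    }
    if (destroy)
        delete &object;
}
```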
TmpFS inodes rely on the call to Inode::one_ref_left() to unregister
themselves from the inode cache in TmpFS.
When moving various kernel classes to ListedRefCounted for safe unref()
while participating in lists, I forgot to make ListedRefCounted check
for (and call) one_ref_left() & will_be_destroyed() on the CRTP class.
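A sketch of the missing piece, using the hook names mentioned above; the concept checks and the notify helper are assumed scaffolding, not the real ListedRefCounted internals:

```cpp
// Detect the optional hooks on the CRTP class and call them at the right counts.
template<typename T>
concept HasOneRefLeft = requires(T t) { t.one_ref_left(); };

template<typename T>
concept HasWillBeDestroyed = requires(T t) { t.will_be_destroyed(); };

template<typename T>
void notify_ref_count_hooks(T& object, unsigned new_ref_count)
{
    if constexpr (HasOneRefLeft<T>) {
        if (new_ref_count == 1)
            object.one_ref_left(); // e.g. a TmpFS inode unregisters from the inode cache
    }
    if constexpr (HasWillBeDestroyed<T>) {
        if (new_ref_count == 0)
            object.will_be_destroyed();
    }
}
```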
This class implements the emerging "ref-counted object that participates
in a lock-protected list that requires safe removal on unref()" pattern
as a base class that can be inherited from in place of RefCounted<T>.
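A condensed sketch of the pattern under the same assumptions (standard-library lock and list standing in for the kernel's own primitives):

```cpp
#include <atomic>
#include <list>
#include <mutex>

// Simplified stand-in for a lock-protected list of all instances of T.
template<typename T>
struct ProtectedList {
    std::mutex lock;
    std::list<T*> objects;
};

// CRTP base: inherit from this instead of RefCounted<T> when T lives on a
// lock-protected list and must be unlinked from it safely on the final unref().
template<typename T>
class ListedRefCounted {
public:
    void ref() const { m_ref_count.fetch_add(1, std::memory_order_relaxed); }

    void unref() const
    {
        auto& self = const_cast<T&>(static_cast<T const&>(*this));
        bool destroy = false;
        {
            auto& list = T::all_instances();
            std::lock_guard guard(list.lock);
            // Decrement under the list lock so nobody can find the object
            // through the list and re-ref it after we commit to destroying it.
            if (m_ref_count.fetch_sub(1, std::memory_order_acq_rel) == 1) {
                list.objects.remove(&self);
                destroy = true;
            }
        }
        // Destruction happens only after the list lock has been released.
        if (destroy)
            delete &self;
    }

private:
    mutable std::atomic<unsigned> m_ref_count { 1 };
};

// Usage sketch: the derived class exposes its lock-protected list to the base.
class Thing : public ListedRefCounted<Thing> {
public:
    static ProtectedList<Thing>& all_instances()
    {
        static ProtectedList<Thing> s_list;
        return s_list;
    }
};
```

The earlier sketches (secondary-list removal, weak pointer revocation, the one_ref_left()/will_be_destroyed() hooks) would hang off the same locked section; they are omitted here to keep the skeleton short.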