
Kernel: Use a shared physical page for zero-filled pages until written

This patch adds a globally shared zero-filled PhysicalPage that will
be mapped into every slot of every zero-filled AnonymousVMObject until
that page is written to, achieving CoW-like zero-filled pages.
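
To make the CoW-like behavior concrete, here is a minimal sketch of how a
write fault against the shared zero page could be resolved. This is not the
actual SerenityOS fault handler; the helpers allocate_zeroed_page() and
remap_vmobject_page() are hypothetical stand-ins.

PageFaultResponse handle_write_fault_on_shared_zero_page(AnonymousVMObject& vmobject, size_t page_index)
{
    auto& slot = vmobject.physical_pages()[page_index];
    ASSERT(slot == MM.shared_zero_page());

    // Swap the globally shared zero page for a private, freshly zeroed page.
    auto private_page = allocate_zeroed_page(); // hypothetical helper
    if (!private_page)
        return PageFaultResponse::ShouldCrash; // out of memory; illustrative handling
    slot = move(private_page);

    // Remap the slot read/write so the faulting instruction can retry.
    remap_vmobject_page(vmobject, page_index); // hypothetical helper
    return PageFaultResponse::Continue;
}

Only the faulting VMObject's slot is touched; every other object still maps
the shared zero page, which is what makes the scheme CoW-like.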

Initial testing shows that this doesn't actually achieve any sharing yet
but it seems like a good design regardless, since it may reduce the
number of page faults taken by programs.

If you look at the refcount of MM.shared_zero_page() it will have quite
a high refcount, but that's just because everything maps it everywhere.
If you want to see the "real" refcount, you can build with the
MAP_SHARED_ZERO_PAGE_LAZILY flag, and we'll defer mapping of the shared
zero page until the first not-present (NP) read fault.

I've left this behavior behind a flag for future testing of this code.
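
Under the same caveats (illustrative names, not the real kernel API), the
lazy variant behind the flag would look roughly like this: the constructor
leaves the slots null, and the first not-present read fault installs the
shared zero page on demand.

PageFaultResponse handle_not_present_read_fault(AnonymousVMObject& vmobject, size_t page_index)
{
    auto& slot = vmobject.physical_pages()[page_index];
    if (!slot) {
        // First touch of this page: map the global zero page read-only.
        // A later write fault will swap in a private page, as sketched above.
        slot = MM.shared_zero_page();
        remap_vmobject_page_read_only(vmobject, page_index); // hypothetical helper
        return PageFaultResponse::Continue;
    }
    // Slot already populated; some other handler owns this fault.
    return PageFaultResponse::ShouldCrash; // illustrative fallback
}

With this path, the refcount of MM.shared_zero_page() only counts pages that
have actually been read, which is why the lazy build reveals the "real"
amount of sharing.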
Andreas Kling 2020-02-15 13:12:02 +01:00
parent a4d857e3c5
commit c624d3875e
5 changed files with 41 additions and 8 deletions

Kernel/VM/AnonymousVMObject.cpp

@@ -25,6 +25,7 @@
  */
 #include <Kernel/VM/AnonymousVMObject.h>
 #include <Kernel/VM/MemoryManager.h>
+#include <Kernel/VM/PhysicalPage.h>
 
 NonnullRefPtr<AnonymousVMObject> AnonymousVMObject::create_with_size(size_t size)
@@ -51,6 +52,10 @@ NonnullRefPtr<AnonymousVMObject> AnonymousVMObject::create_with_physical_page(Ph
 AnonymousVMObject::AnonymousVMObject(size_t size)
     : VMObject(size)
 {
+#ifndef MAP_SHARED_ZERO_PAGE_LAZILY
+    for (size_t i = 0; i < page_count(); ++i)
+        physical_pages()[i] = MM.shared_zero_page();
+#endif
 }
 
 AnonymousVMObject::AnonymousVMObject(PhysicalAddress paddr, size_t size)
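
Note the #ifndef in the constructor: in the default (eager) build, every slot
is filled with the shared zero page up front, so untouched pages never take a
read fault at all. With MAP_SHARED_ZERO_PAGE_LAZILY defined, the loop is
compiled out, slots stay null, and the mapping is deferred to the first
not-present read fault as described in the commit message.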