Mirror of https://github.com/RGBCube/serenity (synced 2025-05-18 11:05:06 +00:00)

Kernel: Start implementing purgeable memory support

It's now possible to get purgeable memory by using mmap(MAP_PURGEABLE).
Purgeable memory has a "volatile" flag that can be set using madvise():

- madvise(..., MADV_SET_VOLATILE)
- madvise(..., MADV_SET_NONVOLATILE)

When in the "volatile" state, the kernel may take away the underlying
physical memory pages at any time, without notifying the owner.
This gives you a guilt discount when caching very large things. :^)

Setting a purgeable region back to non-volatile reports whether the
kernel took the memory away while it was volatile. Concretely, if
madvise(..., MADV_SET_NONVOLATILE) returns 1, the memory was purged
while volatile, and whatever was in that piece of memory needs to be
reconstructed before use.
Commit dbb644f20c (parent 7248c34e35)
Andreas Kling, 2019-12-09 19:12:38 +01:00
13 changed files with 196 additions and 9 deletions


@@ -0,0 +1,41 @@
#include <Kernel/VM/PhysicalPage.h>
#include <Kernel/VM/PurgeableVMObject.h>

NonnullRefPtr<PurgeableVMObject> PurgeableVMObject::create_with_size(size_t size)
{
    return adopt(*new PurgeableVMObject(size));
}

PurgeableVMObject::PurgeableVMObject(size_t size)
    : AnonymousVMObject(size)
{
}

PurgeableVMObject::PurgeableVMObject(const PurgeableVMObject& other)
    : AnonymousVMObject(other)
{
}

PurgeableVMObject::~PurgeableVMObject()
{
}

NonnullRefPtr<VMObject> PurgeableVMObject::clone()
{
    return adopt(*new PurgeableVMObject(*this));
}

int PurgeableVMObject::purge()
{
    LOCKER(m_paging_lock);
    if (!m_volatile)
        return 0;
    // Drop every committed physical page and report how many were freed.
    int purged_page_count = 0;
    for (size_t i = 0; i < m_physical_pages.size(); ++i) {
        if (m_physical_pages[i])
            ++purged_page_count;
        m_physical_pages[i] = nullptr;
    }
    m_was_purged = true;
    return purged_page_count;
}