
Kernel: Optimize VM range deallocation a bit

Previously, when deallocating a range of VM, we would sort and merge
the range list. This was quite slow for large processes.

This patch optimizes VM deallocation in the following ways:

- Use binary search instead of linear scan to find the place to insert
  the deallocated range.

- Insert at the right place immediately, removing the need to sort.

- Merge the inserted range with any adjacent range(s) in-line instead
  of doing a separate merge pass into a list copy.

- Add Traits<Range> to inform Vector that Range objects are trivial
  and can be moved using memmove().

I've also added an assertion that deallocated ranges are actually part
of the RangeAllocator's initial address range.

I've benchmarked this using g++ to compile Kernel/Process.cpp.
With these changes, compilation goes from ~41 sec to ~35 sec.
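
To make the strategy above concrete, here is a minimal, self-contained sketch of an insert-and-merge deallocation path. It is not the SerenityOS RangeAllocator: the Range struct, the RangeList class, and the use of std::vector and std::lower_bound are stand-ins assumed purely for illustration of "binary-search the insertion point, insert once into the already-sorted list, merge with touching neighbors in place".

// Minimal sketch only; not the actual SerenityOS RangeAllocator.
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Range {
    uintptr_t base { 0 };
    size_t size { 0 };
    uintptr_t end() const { return base + size; }
};

class RangeList {
public:
    explicit RangeList(Range total_range)
        : m_total_range(total_range)
    {
    }

    // Return a freed range to the sorted list of available ranges.
    void deallocate(Range freed)
    {
        // Deallocated ranges must lie within the allocator's initial address range.
        assert(freed.base >= m_total_range.base && freed.end() <= m_total_range.end());

        // Binary search for the first available range starting at or after the
        // freed range, instead of scanning linearly.
        auto it = std::lower_bound(m_ranges.begin(), m_ranges.end(), freed,
            [](const Range& a, const Range& b) { return a.base < b.base; });
        size_t index = it - m_ranges.begin();

        // Insert at the right place immediately; the list stays sorted, so no
        // separate sort-and-merge pass is needed.
        m_ranges.insert(it, freed);

        // Merge with the preceding range if the two are adjacent.
        if (index > 0 && m_ranges[index - 1].end() == m_ranges[index].base) {
            m_ranges[index - 1].size += m_ranges[index].size;
            m_ranges.erase(m_ranges.begin() + index);
            --index;
        }

        // Merge with the following range if the two are adjacent.
        if (index + 1 < m_ranges.size() && m_ranges[index].end() == m_ranges[index + 1].base) {
            m_ranges[index].size += m_ranges[index + 1].size;
            m_ranges.erase(m_ranges.begin() + index + 1);
        }
    }

private:
    Range m_total_range;         // the allocator's initial address range
    std::vector<Range> m_ranges; // available ranges, kept sorted by base address
};

The Traits<Range> part of the commit is orthogonal to this logic: declaring the type trivial lets Vector relocate elements with memmove() when it inserts or removes in the middle, rather than running move constructors element by element.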
commit ad3f931707
parent 502626eecb
Author: Andreas Kling
Date: 2020-01-19 13:18:27 +01:00
4 changed files with 53 additions and 31 deletions


@@ -359,15 +359,19 @@ public:
     }
 
     template<typename C>
-    void insert_before_matching(T&& value, C callback)
+    void insert_before_matching(T&& value, C callback, int first_index = 0, int* inserted_index = nullptr)
     {
-        for (int i = 0; i < size(); ++i) {
+        for (int i = first_index; i < size(); ++i) {
             if (callback(at(i))) {
                 insert(i, move(value));
+                if (inserted_index)
+                    *inserted_index = i;
                 return;
             }
         }
         append(move(value));
+        if (inserted_index)
+            *inserted_index = size() - 1;
     }
 
     Vector& operator=(const Vector& other)
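
For reference, here is a hypothetical caller of the extended insert_before_matching(); only the signature shown in the hunk above is taken from the commit, while the Range struct and the return_range() function are assumptions for illustration. The intent of the two new parameters is visible here: first_index lets a caller that has already narrowed down the position skip earlier elements, and inserted_index reports where the value landed so adjacent elements can be merged afterwards.

// Hypothetical caller sketch; everything except the method signature is illustrative.
#include <AK/Vector.h>

struct Range {
    unsigned long base { 0 };
    unsigned long size { 0 };
};

void return_range(Vector<Range>& sorted_ranges, Range freed)
{
    int inserted_index = 0;
    sorted_ranges.insert_before_matching(
        Range(freed),
        [&](auto& existing) { return existing.base >= freed.base; },
        0,                // first_index: a binary-search result could be passed here
        &inserted_index); // out-parameter: where the new range ended up
    // sorted_ranges[inserted_index - 1] and [inserted_index + 1] are now the
    // candidates for coalescing adjacent ranges.
}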