It walks all the live Inode objects and flushes pending metadata changes
wherever needed.
This could be optimized by keeping a separate list of dirty Inodes,
but let's not get ahead of ourselves.
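Roughly, the walk amounts to the sketch below; the list and the dirty-flag helpers are illustrative names, not the actual ones:

    #include <vector>

    class Inode {
    public:
        virtual ~Inode() = default;
        bool is_metadata_dirty() const { return m_metadata_dirty; }
        virtual void flush_metadata() { m_metadata_dirty = false; }

    protected:
        bool m_metadata_dirty { false };
    };

    // Every live Inode is assumed to register itself here on construction.
    static std::vector<Inode*> s_live_inodes;

    void sync()
    {
        // Walk all live inodes and write back any pending metadata changes.
        // (Keeping a separate dirty list would avoid touching clean inodes.)
        for (auto* inode : s_live_inodes) {
            if (inode->is_metadata_dirty())
                inode->flush_metadata();
        }
    }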
Use a little template magic to have Retainable::release() call out to
T::will_be_destroyed() if such a function exists before actually calling
the destructor. This gives us full access to virtual functions in the
pre-destruction code.
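Here's a rough sketch of the trick, assuming a CRTP-style Retainable<T> and a C++17 detection idiom; the real class may spell it differently:

    #include <type_traits>
    #include <utility>

    // Detect whether T has a will_be_destroyed() member (void_t idiom).
    template<typename T, typename = void>
    struct has_will_be_destroyed : std::false_type {
    };

    template<typename T>
    struct has_will_be_destroyed<T, std::void_t<decltype(std::declval<T&>().will_be_destroyed())>> : std::true_type {
    };

    template<typename T>
    class Retainable {
    public:
        void retain() { ++m_retain_count; }
        void release()
        {
            if (--m_retain_count == 0) {
                // Call the hook before the destructor runs, so virtual
                // dispatch still works inside will_be_destroyed().
                if constexpr (has_will_be_destroyed<T>::value)
                    static_cast<T*>(this)->will_be_destroyed();
                delete static_cast<T*>(this);
            }
        }

    private:
        int m_retain_count { 1 };
    };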
This synchronous approach to inodes is silly, obviously. I need to rework
it so that the in-memory CoreInode object is the canonical inode, and then
we just need a sync() that flushes pending changes to disk.
The kernel now bills processes for time spent in kernelspace and userspace
separately. The accounting is forwarded to the parent process in reap().
This makes the "time" builtin in bash work.
Use this to implement short-form output in ls.
I'm too tired to make a nice column formatting algorithm.
I just wanted something concise when I type "ls".
...by adding a new class called Ext2Inode that inherits CoreInode.
The idea is that a vnode will wrap a CoreInode rather than InodeIdentifier.
Each CoreInode subclass can keep whatever caches it likes.
Right now, Ext2Inode caches the list of block indices since it can be very
expensive to retrieve.
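A rough picture of the layout, with illustrative member names:

    #include <cstdint>
    #include <vector>

    class CoreInode {
    public:
        virtual ~CoreInode() = default;
        // Shared metadata, retain/release, etc. would live here.
    };

    class Ext2Inode final : public CoreInode {
    public:
        const std::vector<uint32_t>& block_list()
        {
            // Walking the (possibly indirect) block pointers on disk is
            // expensive, so only do it once and keep the result around.
            if (!m_block_list_cached) {
                m_block_list_cache = compute_block_list_from_disk();
                m_block_list_cached = true;
            }
            return m_block_list_cache;
        }

    private:
        std::vector<uint32_t> compute_block_list_from_disk()
        {
            // The real version reads direct and indirect block pointers
            // from the on-disk inode; stubbed out here.
            return {};
        }

        std::vector<uint32_t> m_block_list_cache;
        bool m_block_list_cached { false };
    };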
I was surprised to find that dup()'ed fds don't share the close-on-exec flag.
That means it has to be stored separately from the FileDescriptor object.
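To illustrate (names are made up), the flag lives next to the shared pointer in the per-process fd table rather than inside the shared object:

    #include <cstddef>
    #include <memory>
    #include <vector>

    struct FileDescriptor {
        // Shared open-file state: inode, current offset, etc.
    };

    // Per-slot entry in the process fd table. dup()'ed fds share the same
    // FileDescriptor, but each slot has its own close-on-exec flag, since
    // POSIX says FD_CLOEXEC belongs to the descriptor, not the open file.
    struct FileDescriptorAndFlags {
        std::shared_ptr<FileDescriptor> descriptor;
        bool close_on_exec { false };
    };

    struct Process {
        std::vector<FileDescriptorAndFlags> fds;
    };

    int sys_dup(Process& process, int old_fd)
    {
        // Share the FileDescriptor, but the new slot starts with
        // close-on-exec cleared regardless of the old slot's setting.
        process.fds.push_back({ process.fds[static_cast<std::size_t>(old_fd)].descriptor, false });
        return static_cast<int>(process.fds.size()) - 1;
    }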
Pass the file name in a stack-allocated buffer instead of using an AK::String
when iterating directories. This dramatically reduces the number of cycles
spent traversing the filesystem.
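The idea, sketched with illustrative names:

    #include <cstddef>
    #include <cstring>
    #include <functional>

    static constexpr std::size_t max_name_length = 255;

    // One directory entry; the name lives in a fixed-size buffer so no heap
    // allocation is needed per entry.
    struct DirectoryEntry {
        char name[max_name_length + 1];
        unsigned inode_index { 0 };
    };

    // Copy one on-disk name into a stack-allocated entry and hand it to the
    // callback. Compare with building an AK::String for every entry.
    void emit_entry(const char* on_disk_name, std::size_t length, unsigned inode_index,
                    const std::function<bool(const DirectoryEntry&)>& callback)
    {
        DirectoryEntry entry {};
        if (length > max_name_length)
            length = max_name_length;
        std::memcpy(entry.name, on_disk_name, length);
        entry.name[length] = '\0';
        entry.inode_index = inode_index;
        callback(entry);
    }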
First of all, change sys$mmap to take a struct SC_mmap_params since our
syscall calling convention can't handle more than 3 arguments.
This exposed a bug in Syscall::invoke(): its inline assembly was missing clobber lists.
It was a bit confusing to debug. :^)
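The struct looks roughly like this (field names are illustrative and the syscall plumbing is stubbed out; this is i386 code, so pointers fit in 32 bits):

    #include <cstdint>

    // mmap() has six parameters, so they get packed into one struct and a
    // single pointer is passed through the syscall.
    struct SC_mmap_params {
        uint32_t addr;
        uint32_t size;
        int32_t prot;
        int32_t flags;
        int32_t fd;
        int32_t offset;
    };

    // Stand-in for the real syscall entry; the real one is inline assembly
    // (and needs a proper clobber list, as noted above).
    static uint32_t invoke_syscall(uint32_t function, uint32_t arg1)
    {
        (void)function;
        (void)arg1;
        return 0;
    }

    void* my_mmap(void* addr, uint32_t size, int prot, int flags, int fd, int offset)
    {
        SC_mmap_params params {
            static_cast<uint32_t>(reinterpret_cast<uintptr_t>(addr)),
            size,
            prot,
            flags,
            fd,
            offset,
        };
        constexpr uint32_t SC_mmap = 1; // placeholder syscall number
        // Only one pointer crosses the syscall boundary; the kernel copies
        // the struct in and unpacks the six values.
        uint32_t rc = invoke_syscall(SC_mmap, static_cast<uint32_t>(reinterpret_cast<uintptr_t>(&params)));
        return reinterpret_cast<void*>(static_cast<uintptr_t>(rc));
    }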
This is dirty but pretty cool! If we have a pending, unmasked signal for
a process that's blocked inside the kernel, we set up alternate stacks
for that process and unblock it to execute the signal handler.
A slightly different return trampoline is used here: since we need to
get back into the kernel, a dedicated syscall is used (sys$sigreturn.)
This restores the TSS contents of the process to the state it was in
while we were originally blocking in the kernel.
NOTE: There's currently only one "kernel resume TSS" so signal nesting
definitely won't work.
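Very roughly, and with made-up helper names (the real code manipulates TSS contents and stack pointers directly), the dispatch decision is:

    struct Process {
        bool has_unmasked_pending_signal() const;
        bool is_blocked_in_kernel() const;
        void save_kernel_resume_tss();        // single slot, hence no nesting
        void set_up_alternate_signal_stacks();
        void unblock();
        void reroute_to_signal_handler();     // CS:EIP -> handler + trampoline
    };

    void dispatch_pending_signal(Process& process)
    {
        if (!process.has_unmasked_pending_signal())
            return;
        if (process.is_blocked_in_kernel()) {
            // The process is asleep inside the kernel: stash its current TSS
            // so sys$sigreturn can resume the interrupted kernel code later,
            // switch it onto alternate stacks and wake it up so the handler
            // gets to run.
            process.save_kernel_resume_tss();
            process.set_up_alternate_signal_stacks();
            process.unblock();
        }
        process.reroute_to_signal_handler();
    }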
It only works for sending a signal to a process that's in userspace code.
We implement reception by synthesizing a PUSHA+PUSHF in the receiving process
(operating on values in the TSS.)
The TSS CS:EIP is then rerouted to the signal handler and a tiny return
trampoline is constructed in a dedicated region in the receiving process.
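Synthesizing PUSHA+PUSHF just means writing the equivalent words onto the process's user stack ourselves, using the values saved in the TSS. A sketch with hypothetical helpers (the actual memory writes are omitted):

    #include <cstdint>

    struct TSS32 {
        uint32_t eax, ecx, edx, ebx, esp, ebp, esi, edi;
        uint32_t eflags, eip, cs;
    };

    // Hypothetical helper; the real kernel writes `value` into the memory
    // backing the process's user stack. Here only the ESP bookkeeping is
    // modeled.
    static void push_value_on_user_stack(TSS32& tss, uint32_t value)
    {
        tss.esp -= sizeof(uint32_t);
        (void)value;
    }

    void synthesize_pusha_and_pushf(TSS32& tss)
    {
        // PUSHA pushes EAX, ECX, EDX, EBX, original ESP, EBP, ESI, EDI.
        uint32_t original_esp = tss.esp;
        push_value_on_user_stack(tss, tss.eax);
        push_value_on_user_stack(tss, tss.ecx);
        push_value_on_user_stack(tss, tss.edx);
        push_value_on_user_stack(tss, tss.ebx);
        push_value_on_user_stack(tss, original_esp);
        push_value_on_user_stack(tss, tss.ebp);
        push_value_on_user_stack(tss, tss.esi);
        push_value_on_user_stack(tss, tss.edi);
        // PUSHF saves the flags so the trampoline can POPF them on the way
        // back out of the handler.
        push_value_on_user_stack(tss, tss.eflags);
        // After this, tss.cs:tss.eip gets rerouted to the signal handler.
    }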
Also hacked up /bin/kill to be able to send arbitrary signals (kill -N PID)
Implemented some syscalls: dup(), dup2(), getdtablesize().
FileHandle is now a Retainable, since that's needed for dup()'ed fds.
I didn't really test any of this beyond a basic smoke check.
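A little userspace program like this (not part of the commit) exercises all three:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main()
    {
        int saved_stdout = dup(STDOUT_FILENO); // keep a copy of fd 1
        int file_fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (file_fd < 0 || saved_stdout < 0)
            return 1;

        dup2(file_fd, STDOUT_FILENO); // fd 1 now refers to out.txt
        printf("this goes into out.txt\n");
        fflush(stdout);

        dup2(saved_stdout, STDOUT_FILENO); // restore the original stdout
        printf("back on the terminal, fd table size: %d\n", getdtablesize());

        close(file_fd);
        close(saved_stdout);
        return 0;
    }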