Apparently we need to poll the drive for its status after each sector we
read if we're not doing DMA. Previously we only did it at the start,
which resulted in every sector after the first in a batch having 12 bytes
of garbage on the end. This manifested as silent read corruption.
serial_debug will output all the kprintf and dbgprintf data to COM1 at
57600 baud (8-N-1). This is particularly useful for debugging the boot
process on live hardware.
Note: it must be the first parameter in the boot cmdline.
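For example, a cmdline might look like this (the root= parameter is just
an illustrative second parameter, not something this change introduces):

```
serial_debug root=/dev/hda
```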
Since this key number doesn't appear to collide with anything on the
US keymap, I was thinking we could get away with supporting a hybrid
US/UK keymap. :^)
Once we've converted from an Ethernet frame to an IPv4 packet, we can
pass the IPv4Packet around instead of the EthernetFrameHeader.
Also add some more code to ignore invalid-looking packets.
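The handoff looks roughly like this (a sketch; the accessor names on
EthernetFrameHeader/IPv4Packet are assumptions):

```cpp
// Sketch: peel off the Ethernet header and hand the IPv4 packet onwards.
static void handle_ethernet_frame(const EthernetFrameHeader& eth, size_t frame_size)
{
    if (eth.ether_type() != EtherType::IPv4)
        return;
    // Ignore runt frames that can't possibly hold an IPv4 header.
    if (frame_size < sizeof(EthernetFrameHeader) + sizeof(IPv4Packet))
        return;
    auto& packet = *reinterpret_cast<const IPv4Packet*>(eth.payload());
    // From here on, everything downstream works on the IPv4Packet only.
    handle_ipv4_packet(packet);
}
```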
Remove the global hash tables and replace them with InlineLinkedLists.
This significantly reduces the kernel heap pressure from doing many
small mmap()'s.
Using a HashTable to track "all instances of Foo" is only useful if we
actually need to look up entries by some kind of index. And since they
are HashTable (not HashMap), the pointer *is* the index.
Since we have the pointer, we can just use it directly. Duh.
This increases sizeof(VMObject) by two pointers, but removes a global
table that had an entry for every VMObject, which cost more overall.
It also avoids all the general hash tabling business when creating or
destroying VMObjects. Generally we should do more of this. :^)
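For reference, the pattern is roughly this (a sketch; all_instances() is
my placeholder name for the list accessor):

```cpp
// The two extra pointers live inside the object itself, so creating and
// destroying a VMObject is a constant-time link/unlink with no hashing.
class VMObject : public InlineLinkedListNode<VMObject> {
public:
    VMObject() { all_instances().append(this); }
    ~VMObject() { all_instances().remove(this); }

    // InlineLinkedListNode<VMObject> links through these.
    VMObject* m_next { nullptr };
    VMObject* m_prev { nullptr };

    static InlineLinkedList<VMObject>& all_instances();
};

InlineLinkedList<VMObject>& VMObject::all_instances()
{
    static InlineLinkedList<VMObject> list;
    return list;
}
```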
This comprises five small changes:
* Keep a counter for tx/rx packets/bytes per TCP socket (sketch below)
* Keep a counter for tx/rx packets/bytes per network adapter
* Expose that data in /proc/net_tcp and /proc/netadapters
* Convert /proc/netadapters to JSON
* Fix up ifconfig to read the JSON from netadapters
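The counter bookkeeping is roughly this shape (member and method names
here are illustrative, not the actual ones):

```cpp
// One of these per TCP socket and one per network adapter.
struct PacketCounters {
    u32 packets_in { 0 };
    u32 packets_out { 0 };
    u32 bytes_in { 0 };
    u32 bytes_out { 0 };
};

// On every transmitted segment, bump both the socket's and the adapter's
// counters; the receive path mirrors this with the _in fields.
void TCPSocket::did_send(size_t size)
{
    m_counters.packets_out++;
    m_counters.bytes_out += size;
    m_adapter->counters().packets_out++;
    m_adapter->counters().bytes_out += size;
}
```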
We were only doing this in Process::deallocate_region(), which meant
that kernel-only Regions never gave back their VM.
With this patch, we can start reusing freed-up address space! :^)
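The shape of the fix is roughly this (a sketch; the exact hook point and
accessor names are assumptions on my part):

```cpp
// Give the virtual range back whenever any Region dies, not just on the
// Process::deallocate_region() path, so kernel-only Regions release
// their address space too.
Region::~Region()
{
    if (m_page_directory)
        m_page_directory->range_allocator().deallocate(range());
}
```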
This is not perfect as it uses a lot of VM, but since the buffers are
supposed to be temporary it's not super terrible.
This could be improved by giving back the unused VM to the kernel's
RangeAllocator after finishing the buffer building.
Each Function is a heap allocation, so let's make an effort to avoid
doing that during scheduling. Because of header dependencies, I had to
put the runnables iteration helpers in Thread.h, which is a bit meh but
at least this cuts out all the kmalloc() traffic in pick_next().
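The helpers are plain function templates, roughly like this (a sketch;
the runnable-list name is illustrative):

```cpp
// The callback's type is deduced, so nothing gets wrapped in a
// heap-allocating Function<...> during scheduling.
template<typename Callback>
inline void Thread::for_each_runnable(Callback callback)
{
    for (auto* thread = g_runnable_threads->head(); thread;) {
        auto* next = thread->next();
        if (callback(*thread) == IterationDecision::Break)
            return;
        thread = next;
    }
}
```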
If kmalloc backtraces are enabled, things don't go super well when the
backtrace code itself calls kmalloc()...
With this fixed, it's basically possible to get all kmalloc backtraces
in the debugger by running (as root):
sysctl kmalloc_stacks=1
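The fix amounts to a recursion guard, something like this (names are
illustrative, not the real ones):

```cpp
static bool g_dumping_backtrace = false;

void* kmalloc(size_t size)
{
    void* ptr = kmalloc_impl(size); // the underlying allocator
    if (g_kmalloc_stacks_enabled && !g_dumping_backtrace) {
        g_dumping_backtrace = true;
        dump_backtrace(); // may call kmalloc(); the flag breaks the loop
        g_dumping_backtrace = false;
    }
    return ptr;
}
```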
This makes VMObject 8 bytes smaller since we can use the array size as
the page count.
The size() is now also computed from the page count instead of being
a separate value. This means sizes are always a multiple of PAGE_SIZE,
which is sane.
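In other words, roughly (a sketch; the container type here is a stand-in
for whatever the kernel actually uses):

```cpp
class VMObject : public RefCounted<VMObject> {
public:
    size_t page_count() const { return m_physical_pages.size(); }
    // Derived, so it's always a multiple of PAGE_SIZE by construction.
    size_t size() const { return page_count() * PAGE_SIZE; }

private:
    Vector<RefPtr<PhysicalPage>> m_physical_pages;
};
```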
InodeVMObject is a VMObject with an underlying Inode in the filesystem.
AnonymousVMObject has no Inode.
I'm happy that InodeVMObject::inode() can now return Inode& instead of
VMObject::inode() returning Inode*. :^)
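The hierarchy now looks roughly like this (a sketch; the member types are
approximations):

```cpp
class VMObject : public RefCounted<VMObject> {
public:
    virtual bool is_inode() const { return false; }
};

class AnonymousVMObject final : public VMObject {
    // No inode here at all.
};

class InodeVMObject final : public VMObject {
public:
    virtual bool is_inode() const override { return true; }
    // Always has an inode, so we can hand out a reference.
    Inode& inode() { return *m_inode; }

private:
    NonnullRefPtr<Inode> m_inode;
};
```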
This wasn't really thought-through, I was just trying anything to see
if it would make WindowServer faster. This doesn't seem to make much of
a difference either way, so let's just not do it for now.
It's easy to bring back if we think we need it in the future.
The VMObject name was always either the owning region's name, or the
absolute path of the underlying inode.
We can reconstitute this information if needed; there's no need to keep
copies of these strings around.
This class works by eagerly allocating 1MB of virtual memory but only
adding physical pages on demand. In other words, when you append to it,
its memory usage will increase by 1 page whenever you append across a
page boundary (4KB).
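The append path is roughly this (a sketch; the class and member names
are guesses):

```cpp
void GrowableBuffer::append(const u8* data, size_t size)
{
    size_t first_page = m_size / PAGE_SIZE;
    size_t end_page = (m_size + size + PAGE_SIZE - 1) / PAGE_SIZE;
    // Back every 4KB page this write touches with a physical page;
    // this is a no-op for pages that are already committed.
    for (size_t page = first_page; page < end_page; ++page)
        ensure_physical_page(page);
    memcpy(m_data + m_size, data, size);
    m_size += size;
}
```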
Instead of dumping the dying thread's backtrace in the signal handling
code, wait until we're finalizing the thread. Since signalling happens
during scheduling, the less work we do there the better.
Basically the less that happens during a scheduler pass the better. :^)
This makes several significant changes to the networking stack.
* Significant refactoring of the TCP state machine. Right now it's
probably more fragile than it used to be, but handles quite a lot
more of the handshake process.
* `TCPSocket` holds a `NetworkAdapter*`, assigned during `connect()` or
`bind()`, whichever comes first.
* `listen()` is now virtual in `Socket` and intended to be implemented
in its child classes (a sketch follows this list).
* `listen()` no longer works without `bind()` - this is a bit of a
regression, but listening sockets didn't work at all before, so it's
not possible to observe the regression.
* A file is exposed at `/proc/net_tcp`, which is a JSON document listing
the current TCP sockets with a bit of metadata.
* There's an `ETHERNET_VERY_DEBUG` flag for dumping packets' contents out
to `kprintf`. It is, indeed, _very debug_.
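The listen() changes boil down to roughly this shape (a sketch; error
codes and helper names are approximations):

```cpp
class Socket : public RefCounted<Socket> {
public:
    // Child classes that support listening override this.
    virtual KResult listen(size_t) { return KResult(-ENOTSUP); }
};

KResult TCPSocket::listen(size_t backlog)
{
    // Without a prior bind(), there's no local address/port to listen on.
    if (!local_port())
        return KResult(-EINVAL);
    set_backlog(backlog);
    set_state(State::Listen);
    return KSuccess;
}
```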