Each of these strings would previously rely on StringView's char const*
constructor overload, which would call __builtin_strlen on the string.
Since we now have operator ""sv, we can replace these with much simpler
versions. This opens the door to being able to remove
StringView(char const*).
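As a purely illustrative sketch (the string literal here is a
placeholder, not one taken from the actual diff), a call site changes
roughly like this:

    // Before: implicit StringView(char const*), which runs __builtin_strlen
    // on the literal at runtime.
    StringView name = "example";

    // After: operator ""sv produces a StringView with a compile-time length.
    StringView name = "example"sv;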
No functional changes.
This commit moves the length calculations out to be directly on the
StringView users. This is an important step towards the goal of removing
StringView(char const*), as it moves the responsibility of calculating
the size of the string to the user of the StringView (which will prevent
naive uses causing OOB access).
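A hypothetical before/after of such a call site (raw_path and the use
of strlen are assumptions for illustration):

    // Before: StringView(char const*) measures the string internally.
    StringView path { raw_path };

    // After: the caller computes the length explicitly, so the size
    // calculation is visible (and auditable) at the point of use.
    StringView path { raw_path, strlen(raw_path) };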
This is not explicitly specified by POSIX, but is supported by other
*nixes, already supported by our sys$bind, and expected by various
programs. While we're here, also clean up the user memory copies a bit.
We were previously assuming that the how value was a bitfield, but that
is not the case, so we must explicitly check for SHUT_RDWR when
deciding on the read and write shutdowns.
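A minimal sketch of the corrected logic, with hypothetical helper
names:

    // `how` is an enumeration (SHUT_RD, SHUT_WR, SHUT_RDWR), not a bitfield,
    // so SHUT_RDWR must be matched explicitly rather than tested as flags.
    bool const shut_read = how == SHUT_RD || how == SHUT_RDWR;
    bool const shut_write = how == SHUT_WR || how == SHUT_RDWR;
    if (shut_read)
        shutdown_for_reading();   // hypothetical helper
    if (shut_write)
        shutdown_for_writing();   // hypothetical helper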
To prevent a race condition in case we received the ARP response in the
window between creating and initializing the Thread Blocker and the
actual blocking, we were checking if the IP address was updated in the
ARP table just before starting to block.
Unfortunately, the condition was partially flipped, which meant that if
the table was updated with the IP address we would still end up
blocking, at which point we would never end up unblocking, which
would result in LookupServer locking up as well.
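Conceptually, the pre-block check should bail out when the entry is
already present; the names below are illustrative, not the actual
kernel code:

    // Re-check the ARP table after setting up the blocker but before
    // blocking, so a response that raced in is not missed.
    auto entry = arp_table_lookup(ip_address);    // hypothetical lookup
    if (entry.has_value())
        return entry.release_value();             // already resolved: don't block
    block_until_arp_reply_or_timeout();           // hypothetical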
This change unifies the naming convention for kernel tasks.
The goal of this change is to:
- Make the task names more descriptive, so users can more
easily understand their purpose in System Monitor.
- Unify the naming convention so they are consistent.
For the same reason we ignore interfaces without an IP address when
choosing a route for an outgoing packet, we should also ignore interfaces without
IP addresses when updating the ARP table on incoming packets from
local addresses.
On an interface with a null address, the mask checking would always
result in zero, which resulted in the system updating the ARP table on
almost every incoming packet from any address (private or public).
This patch fixes this behavior by only applying the check to interfaces
with valid addresses, so the ARP table no longer gets constantly
hammered.
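Roughly, the updated check has to skip adapters whose address is still
unset, since masking with a null address (and netmask) matches nearly
every sender; plain integers and hypothetical helpers stand in for the
kernel types here:

    // With a null interface address and netmask, both sides of the
    // comparison collapse to zero, so almost every source looks "local".
    u32 const if_addr = adapter_ipv4_address();   // 0 when unconfigured
    u32 const if_mask = adapter_ipv4_netmask();
    if (if_addr == 0)
        return;   // ignore interfaces without a valid address
    if ((source_ip & if_mask) == (if_addr & if_mask))
        update_arp_table(source_ip, source_mac);  // only local senders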
Closes #13713
Previously the routing table did not store the route flags. This adds
basic support for them and exposes them in the /proc directory so that
a userspace caller can query a route and identify its type.
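For reference, these are the conventional flag values many *nix
systems use for routes; the exact names and encoding in this kernel's
/proc output may differ:

    // Conventional routing flags (as in typical <net/route.h> headers).
    enum RouteFlags : u16 {
        RTF_UP      = 0x1,  // route is usable
        RTF_GATEWAY = 0x2,  // destination is reached via a gateway
        RTF_HOST    = 0x4,  // host route (as opposed to a network route)
    };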
It doesn't make sense after the introduction of the routing table,
which allows having multiple gateways per interface, and it isn't used
by any userspace programs anymore.
Previously the system had no concept of assigning different routes for
different destination addresses as the default gateway IP address was
directly assigned to a network adapter. This default gateway was
statically assigned and any update would remove the previously existing
route.
This patch is a beginning step towards implementing #180. It implements
a simple global routing table that is referenced during the routing
process. With this implementation it is now possible for a user or
service (e.g. DHCP) to dynamically add routes to the table.
The routing table will select the most specific route when possible. It
will select any direct match between the destination and routing entry
addresses. If the destination address overlaps between multiple entries,
the Kernel will use the longest prefix match, i.e. the largest number
of matching bits between the destination address and the route's
address. In the event that no entries are found for a specific
destination address, this implementation supports setting a default
route for any specified interface.
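A minimal sketch of that selection rule follows; the types and names
are illustrative, not the kernel's actual routing code:

    struct Route {
        u32 destination { 0 };
        u32 netmask { 0 };
        u32 gateway { 0 };
    };

    // Pick the matching entry with the longest prefix (most set bits in
    // the netmask). A default route (netmask 0) matches everything, so it
    // only wins when no more specific entry matches.
    Route const* select_route(Vector<Route> const& table, u32 destination)
    {
        Route const* best = nullptr;
        int best_prefix = -1;
        for (auto const& route : table) {
            if ((destination & route.netmask) != (route.destination & route.netmask))
                continue;
            int prefix = __builtin_popcount(route.netmask);
            if (prefix > best_prefix) {
                best_prefix = prefix;
                best = &route;
            }
        }
        return best;
    }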
This is a small first step towards enhancing the system's routing
capabilities. Future enhancements would include referencing a
configuration file at boot to load pre-defined static routes.
Instead, hold the lock while we copy the contents to a stack-based
Vector, then iterate over it without any locking.
Because we rely on heap allocations, we need to propagate errors back
in case of an OOM condition. Therefore, both the PCI::enumerate API
function and PCI::Access::add_host_controller_and_enumerate_attached_devices
now return ErrorOr<void> to propagate errors. An OOM error can only
occur when enumerating the m_device_identifiers vector under a spinlock
and trying to expand the temporary Vector, which will then be used
locklessly to actually iterate over the PCI::DeviceIdentifier objects.
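Put together, the pattern looks roughly like this; the member names,
and the assumption that Vector provides a try_extend() helper, are
illustrative rather than exact:

    ErrorOr<void> enumerate(Function<void(DeviceIdentifier const&)> callback)
    {
        Vector<DeviceIdentifier> identifiers;
        {
            // Copy under the spinlock; growing the Vector may fail on OOM,
            // which TRY() propagates to the caller as an Error.
            SpinlockLocker locker(m_access_lock);
            TRY(identifiers.try_extend(m_device_identifiers));
        }
        // Iterate locklessly over the stack-based copy.
        for (auto const& identifier : identifiers)
            callback(identifier);
        return {};
    }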
This prevents a kernel panic found in CI when m_receive_queue's size is
queried and found to be non-zero, then a different thread clears the
queue, and finally the first thread continues into the if block and
calls the queue's first() method, which then fails an assertion that
the queue's size is non-zero.
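The fix is essentially to perform the emptiness check and the dequeue
under a single lock acquisition; the member names below are
assumptions:

    // Holding the same lock across the check and take_first() prevents
    // another thread from clearing the queue in between.
    MutexLocker locker(m_receive_lock);
    if (m_receive_queue.is_empty())
        return false;   // hypothetical: nothing to receive yet
    auto packet = m_receive_queue.take_first();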
We were frequently dropping packets when downloading large files.
Then we had to wait for TCP retransmission which slowed things down.
This patch dramatically improves E1000 throughput by increasing the
number of RX/TX buffers from 32/8 to 256/256.
The largest chunk of JavaScript from Discord now downloads in roughly
1 second instead of 7 seconds. :^)
Rename the bound socket accessor from socket() to bound_socket().
Also return RefPtr<LocalSocket> instead of a raw pointer, to make it
harder for callers to mess up.
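In effect the accessor changes roughly like this:

    // Before: easy to misread, and easy to keep a dangling raw pointer.
    LocalSocket* socket();

    // After: the name states what it returns, and the RefPtr keeps the
    // socket alive for as long as the caller holds it.
    RefPtr<LocalSocket> bound_socket();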
1. When receiving FIN while in FinWait1, we now reply with ACK
in addition to the FinWait1->Closing transition.
2. When receiving FIN|ACK while in FinWait1, we now reply with
ACK and transition from FinWait1->TimeWait.
3. When receiving FIN while in FinWait2, we now reply with ACK.
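In state-machine form, the added handling is approximately as follows;
the flag and helper names are modeled on the commit text, not copied
from the code:

    switch (state()) {
    case State::FinWait1:
        if ((flags & (TCPFlags::FIN | TCPFlags::ACK)) == (TCPFlags::FIN | TCPFlags::ACK)) {
            send_ack();                  // hypothetical helper
            set_state(State::TimeWait);  // FinWait1 -> TimeWait
        } else if (flags & TCPFlags::FIN) {
            send_ack();
            set_state(State::Closing);   // FinWait1 -> Closing
        }
        break;
    case State::FinWait2:
        if (flags & TCPFlags::FIN)
            send_ack();                  // then continue towards TimeWait
        break;
    default:
        break;
    }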
This commit removes the usage of HashMap in Mutex, thereby making Mutex
be allocation-free.
In order to achieve this, several simplifications were made to Mutex,
removing unused code paths and extra VERIFYs:
* We no longer support 'upgrading' a shared lock holder to an
  exclusive holder when it is the only shared holder and it did not
  unlock the lock before relocking it as exclusive. NOTE: Unlike the
  rest of these changes, this scenario is not VERIFY-able in an
  allocation-free way; as a result, the new LOCK_SHARED_UPGRADE_DEBUG
  debug flag was added, which lets Mutex allocate in order to detect
  such cases when debugging a deadlock.
* We no longer support checking if a Mutex is locked by the current
  thread when the Mutex was not locked exclusively; the shared version
  of this check was not used anywhere (see the sketch after this list).
* We no longer support force unlocking/relocking a Mutex if the Mutex
  was not locked exclusively; the shared versions of these functions
  were not used anywhere.
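For instance, the remaining locked-by-current-thread check only needs
to look at the exclusive holder, which requires no allocation; the
member names here are assumptions:

    bool Mutex::is_exclusively_locked_by_current_thread() const
    {
        return m_mode == Mode::Exclusive && m_holder == Thread::current();
    }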
Apologies for the enormous commit, but I don't see a way to split this
up nicely. In the vast majority of cases it's a simple change. A few
extra places can use TRY instead of manual error checking though. :^)
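As an example of the TRY pattern (the specific call site here is
hypothetical):

    // Before: manual error checking on an ErrorOr<...> return value.
    auto buffer_or_error = ByteBuffer::create_uninitialized(size);
    if (buffer_or_error.is_error())
        return buffer_or_error.release_error();
    auto buffer = buffer_or_error.release_value();

    // After: TRY() either returns the error to the caller or unwraps the value.
    auto buffer = TRY(ByteBuffer::create_uninitialized(size));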