This fetches information from /var/run/utmp and shows the interactive
sessions in a little list. More things can be added here to make it
more interesting. :^)
To keep track of ongoing terminal sessions, we now have a sort-of
traditional /var/run/utmp file, like other Unix systems.
Unlike other Unix systems, however, ours is of course JSON. :^)
The /bin/utmpupdate program is used to update the file, which is
not writable by regular user accounts. This helper program is
set-GID "utmp".
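Purely for illustration, an entry might look something like this
(field names are hypothetical here, not the actual format):

{
    "/dev/pts/0": {
        "pid": 42,
        "uid": 100,
        "login_at": "2020-08-17T09:30:00Z"
    }
}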
I suspected an error in CircularDuplexStream::read(Bytes, size_t). That
turned out not to be the case, but this test case is useful regardless.
The following script was used to generate the test:
import gzip

uncompressed = []
for _ in range(0x100):
    uncompressed.append(1)
for _ in range(0x7e00):
    uncompressed.append(0)
for _ in range(0x100):
    uncompressed.append(1)

compressed = gzip.compress(bytes(uncompressed))
compressed = ", ".join(f"0x{byte:02x}" for byte in compressed)

print(f"""\
TEST_CASE(gzip_decompress_repeat_around_buffer)
{{
    const u8 compressed[] = {{
        {compressed}
    }};

    u8 uncompressed[0x8000];
    Bytes{{ uncompressed, sizeof(uncompressed) }}.fill(0);
    Bytes{{ uncompressed, 0x100 }}.fill(1);
    Bytes{{ uncompressed + 0x7f00, 0x100 }}.fill(1);

    const auto decompressed = Compress::GzipDecompressor::decompress_all({{ compressed, sizeof(compressed) }});

    EXPECT(compare({{ uncompressed, sizeof(uncompressed) }}, decompressed.bytes()));
}}
""", end="")
Addresses #3394, which was caused by dereferencing a null `test_name`
when no arguments were given to /bin/tt. Additionally, arguments
other than the defined tests now print usage and exit to avoid
running the default test.
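A rough sketch of the shape of the guard (hypothetical standalone code,
not the actual diff; print_usage_and_exit is a made-up helper):

#include <stdio.h>
#include <stdlib.h>

static void print_usage_and_exit()
{
    fprintf(stderr, "usage: tt <test-name>\n");
    exit(1);
}

int main(int argc, char** argv)
{
    // Previously, a null test_name would get dereferenced further down.
    const char* test_name = argc > 1 ? argv[1] : nullptr;
    if (!test_name)
        print_usage_and_exit();
    // ... dispatch to the selected test, rejecting unknown names ...
    return 0;
}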
This only queries a single NTP server, only does a point-to-point
request, doesn't do any filtering, doesn't display the response
in any useful format, and is generally very bare-bones.
But maybe, over time, it can learn to query more servers, do
filtering, run as a service that keeps state over time to
improve filtering, adjust the system time, and maybe eventually
even run as an NTP server.
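For a sense of scale, a bare-bones point-to-point query really is tiny.
Here's a rough standalone sketch of the idea (assumed code, not this
utility; the server name and the minimal error handling are placeholders):

#include <arpa/inet.h>
#include <netdb.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
    // 48-byte NTP packet; first byte encodes LI=0, VN=4, Mode=3 (client).
    uint8_t packet[48] = { 0x23 };

    addrinfo hints {};
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_DGRAM;
    addrinfo* address = nullptr;
    if (getaddrinfo("pool.ntp.org", "123", &hints, &address) != 0)
        return 1;

    int fd = socket(address->ai_family, address->ai_socktype, 0);
    sendto(fd, packet, sizeof(packet), 0, address->ai_addr, address->ai_addrlen);
    recv(fd, packet, sizeof(packet), 0);

    // Transmit timestamp: 32-bit seconds since 1900-01-01, at offset 40.
    uint32_t seconds_be;
    memcpy(&seconds_be, packet + 40, sizeof(seconds_be));
    // 2208988800 seconds lie between the NTP epoch (1900) and the Unix epoch (1970).
    printf("unix time: %u\n", ntohl(seconds_be) - 2208988800u);

    freeaddrinfo(address);
    close(fd);
    return 0;
}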
And also mark strlcpy() and strlcat() with __attribute__((warn_unused_result)).
Since our code is warning-free, this ensures we never misuse those functions.
(Or that we're very sure about what we're doing when we turn off the
warning for a particular piece of code.)
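For reference, the declarations now look roughly like this (the exact
placement of the attribute in the header may differ):

size_t strlcpy(char* dest, const char* src, size_t size) __attribute__((warn_unused_result));
size_t strlcat(char* dest, const char* src, size_t size) __attribute__((warn_unused_result));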
There was a logic mistake in the code that computes the offset between
columns: it forgot to account for the extra characters printed after a name.
The comment was accurate, but the code didn't match the comment.
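To illustrate the class of bug (hypothetical code, not the actual diff):

#include <stddef.h>

// The distance between column starts must include whatever is printed
// after each name (padding, suffix characters), not just the name itself.
size_t column_stride(size_t longest_name_length, size_t characters_after_name)
{
    // Dropping `characters_after_name` here is exactly the mistake:
    // columns then start too early and run into each other.
    return longest_name_length + characters_after_name;
}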
Now we have an actual stream implementation that can read arbitrary
DEFLATE-encoded data (dynamic codes aren't supported yet), even if
the blocks are really large.
And all of that happens with a single buffer of 32KiB. DEFLATE is
amazing!
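The reason a single 32KiB buffer is enough: DEFLATE back-references never
reach more than 32KiB into already-emitted output, so a circular history
window covers every case. A hypothetical sketch of the idea (not the
LibCompress implementation):

#include <stddef.h>
#include <stdint.h>

constexpr size_t window_size = 32 * 1024;

struct HistoryWindow {
    uint8_t data[window_size] {};
    size_t total_written { 0 };

    void emit(uint8_t byte)
    {
        data[total_written % window_size] = byte;
        ++total_written;
    }

    // Resolve a <length, distance> pair: copy `length` bytes starting
    // `distance` bytes back (a valid stream guarantees distance <=
    // total_written). Overlapping copies (distance < length) naturally
    // repeat the most recent output, which DEFLATE exploits.
    void copy_back_reference(size_t length, size_t distance)
    {
        for (size_t i = 0; i < length; ++i)
            emit(data[(total_written - distance) % window_size]);
    }
};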
The fact that a `MarkedValueList` had to be created was just annoying,
so here's an alternative.
This patchset also removes some (now) unneeded MarkedValueList.h includes.
While this _does_ add a point of failure, it'll be a pretty bad day when
Google goes down.
And this is unlikely to put a (positive) dent in their incoming
requests, so let's just roll with it until we have our own TLS server.
A malicious caller of ifconfig could have caused the ifr_name field to
lack NUL-termination. I don't think this was an actual problem, though, as
the Kernel always forces NUL-termination by overwriting ifr_name's last byte
with NUL.
However, it feels better to do it properly.
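Something like this pattern (assumed shape, not the exact diff):

#include <net/if.h>
#include <string.h>

static void prepare_request(ifreq& request, const char* interface_name)
{
    memset(&request, 0, sizeof(request));
    strncpy(request.ifr_name, interface_name, IFNAMSIZ - 1);
    request.ifr_name[IFNAMSIZ - 1] = '\0'; // always NUL-terminated
}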
LibJS doesn't store stacks for exception objects, so this
only amends test-common.js's __expect() with an optional
`details` function that can produce a more detailed error
message, and it lets test-js.cpp read and print that
error message. I added the optional details parameter to
a few matchers, most notably toBe(), which now prints the
expected and actual values.
It'd be nice to have line numbers of failures, but that
seems hard to do with the current design, and this is already
a big improvement over the previous state.
Previously, the implementation would produce one Vector<u8> which
would contain the whole decompressed data. That can be a lot, and it
can even exhaust memory.
With these changes it is still necessary to store the whole input data
in one piece (I am working on that next), but the output can be read
block by block. (That's not optimal either because blocks can be
arbitrarily large, but it's good for now.)
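Usage then looks roughly like this (names are approximate, not the
exact LibCompress API):

#include <stddef.h>
#include <stdint.h>

// Drain the decompressor in fixed-size chunks instead of materializing
// the whole output as one giant Vector<u8>.
template<typename Stream, typename Consumer>
void drain_decompressed(Stream& stream, Consumer consume)
{
    uint8_t buffer[4096];
    while (!stream.eof()) {
        size_t nread = stream.read({ buffer, sizeof(buffer) });
        consume(buffer, nread);
    }
}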