It appeared that we sometimes failed to invoke synchronous commands on
the GPU. To temporarily fix this, wait 10 milliseconds for commands to
complete before failing.
According to the specification, modesetting can be invoked without
flushing the framebuffer and without a DMA transfer of the framebuffer
rendering.
Before checking whether offset_in_region + num_bytes of the transfer
descriptor together exceed NUM_TRANSFER_REGION_PAGES * PAGE_SIZE, verify
that adding these two parameters does not simply overflow, which could
lead to an out-of-bounds read/write.
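A minimal sketch of such a check, assuming AK::Checked is usable here
and the surrounding function returns ErrorOr:

    Checked<size_t> end_offset = offset_in_region;
    end_offset += num_bytes;
    if (end_offset.has_overflow())
        return Error::from_errno(EOVERFLOW);
    if (end_offset.value() > NUM_TRANSFER_REGION_PAGES * PAGE_SIZE)
        return Error::from_errno(EOVERFLOW);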
Fixes #17518.
Returning literal strings is not the proper action here, because we
should always assume that the error could be propagated back to
userland, so we need to keep a valid errno when returning an Error.
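For instance (the condition shown is purely illustrative):

    if (!context_exists)
        return Error::from_errno(EINVAL); // userland gets a meaningful errno
    // rather than something like:
    //     return Error::from_string_literal("Context does not exist");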
This header has always been fundamentally a Kernel API file. Move it
where it belongs. Include it directly in Kernel files, and make
Userland applications include it via sys/ioctl.h rather than directly.
Instead of using a clunky switch-case paradigm, we now have all drivers
declare two methods for their adapter class - create and probe.
These methods are linked in each PCIGraphicsDriverInitializer structure,
in a new s_initializers static list of them.
Then, when we probe for a PCI device, we use each probe method and if
there's a match, then the corresponding create method is called.
As a result of this change, it's much easier to add more drivers and
the initialization code is more readable.
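A rough sketch of the shape this takes (the exact signatures and entries
may differ slightly):

    struct PCIGraphicsDriverInitializer {
        ErrorOr<bool> (*probe)(PCI::DeviceIdentifier const&) = nullptr;
        ErrorOr<NonnullLockRefPtr<GenericGraphicsAdapter>> (*create)(PCI::DeviceIdentifier const&) = nullptr;
    };

    static constexpr PCIGraphicsDriverInitializer s_initializers[] = {
        { IntelNativeGraphicsAdapter::probe, IntelNativeGraphicsAdapter::create },
        { VirtIOGraphicsAdapter::probe, VirtIOGraphicsAdapter::create },
        // ...
    };

    // When probing a PCI device: try each probe(), call the matching create().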
A virtual method named device_name() was added to Kernel::PCI::Device
to support logging the PCI::Device name and address using dmesgln_pci.
Previously, PCI::Device did not store the device name.
All devices inheriting from PCI::Device now use dmesgln_pci where
they previously used dmesgln.
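A hedged illustration of what that could look like (the override, the
macro arguments and the log message are this sketch's choices):

    // In a driver inheriting from PCI::Device:
    virtual StringView device_name() const override { return "VirtIOGraphicsAdapter"sv; }

    // Call sites switch from dmesgln to dmesgln_pci, which prepends the
    // device name and PCI address:
    dmesgln_pci(*this, "Enabling device");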
This step would ideally not have been necessary (it increases the
amount of refactoring and templates necessary, which in turn increases
build times), but it gives us a couple of nice properties:
- SpinlockProtected inside Singleton (a very common combination) can now
obtain any lock rank just via the template parameter. It was not
previously possible to do this with SingletonInstanceCreator magic.
- SpinlockProtected's lock rank is now mandatory; this is the majority
of cases and allows us to see where we're still missing proper ranks.
- The type already informs us what lock rank a lock has, which aids code
readability and (possibly, if gdb cooperates) lock mismatch debugging.
- The rank of a lock can no longer be dynamic, which is not something we
wanted in the first place (or made use of). Locks randomly changing
their rank sounds like a disaster waiting to happen.
- In some places, we might be able to statically check that locks are
taken in the right order (with the right lock rank checking
implementation) as rank information is fully statically known.
This refactoring further exposes the fact that Mutex has no lock rank
capabilities, which is not fixed here.
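As a rough sketch of what the template parameter buys us (member type,
name and rank are illustrative):

    // The rank is now part of the type: mandatory, statically known, immutable.
    SpinlockProtected<HashMap<u32, PerContextState>, LockRank::None> m_context_map;

    // Previously the rank was a runtime constructor argument, roughly:
    //     SpinlockProtected<HashMap<u32, PerContextState>> m_context_map { LockRank::None };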
This has been done in multiple ways:
- Each time we modeset the resolution via the VirtIOGPU DisplayConnector
we ensure that the framebuffer is updated with the new resolution.
- Each time the cursor is updated we ensure that the framebuffer console
  is marked dirty, so the IO Work Queue task that is scheduled to check
  whether it is dirty will flush the surface.
- We only initialize a framebuffer console after we ensure that at the
  very least a DisplayConnector has been set with a known resolution.
- We only call GenericFramebufferConsole::enable() when enabling the
console after the important variables of the console (m_width, m_pitch
and m_height) have been set.
This is necessary to allow transferring frame buffers larger than
~500x500 pixels back to user space. Until the buffer management is
improved this allows us to at least test the existing game ports.
This happens to be a sad truth for the VirtIOGPU driver - it lacked any
error propagation measures and generally relied on clunky assumptions
that most operations with the GPU device are infallible, although in
reality many of them can fail, so we do need to handle errors.
To fix this, synchronous GPU commands no longer rely on the wait queue
mechanism; instead we introduce a timeout-based mechanism, similar to
how other Kernel drivers use polling under the assumption that hardware
could get stuck in an error state, allowing us to abort gracefully.
Then, we change most of the VirtIOGraphicsAdapter methods to propagate
errors properly to the original callers, to ensure that if a synchronous
GPU command failed, either the Kernel or userspace could do something
meaningful about this situation.
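A simplified sketch of the timeout-based waiting (constants and helper
names are illustrative, not the driver's exact ones):

    ErrorOr<void> wait_for_command_completion(Atomic<bool> const& command_done)
    {
        constexpr u64 max_wait_in_microseconds = 10'000; // e.g. ~10 ms
        u64 waited = 0;
        while (!command_done.load()) {
            if (waited >= max_wait_in_microseconds)
                return Error::from_errno(EBUSY); // device appears stuck; abort gracefully
            microseconds_delay(100);
            waited += 100;
        }
        return {};
    }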
The performance that we achieve from this technique is visually worse
compared to turning off this feature, so let's not use this until we
figure out why it happens.
As is, we never *deallocate* them, so we will run out eventually.
Creating a context, or allocating a context ID, now returns ErrorOr if
there are no available free context IDs.
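A hedged sketch of the allocation path, assuming the in-use context IDs
are tracked in a simple boolean array (all names are illustrative):

    ErrorOr<u32> allocate_context_id()
    {
        for (u32 id = 0; id < m_context_id_in_use.size(); ++id) {
            if (!m_context_id_in_use[id]) {
                m_context_id_in_use[id] = true;
                return id;
            }
        }
        // No free context IDs left: report it instead of asserting.
        return Error::from_errno(ENOENT);
    }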
`number_of_fixmes--;` :^)
Until now, our kernel has reimplemented a number of AK classes to
provide automatic internal locking:
- RefPtr
- NonnullRefPtr
- WeakPtr
- Weakable
This patch renames the Kernel classes so that they can coexist with
the original AK classes:
- RefPtr => LockRefPtr
- NonnullRefPtr => NonnullLockRefPtr
- WeakPtr => LockWeakPtr
- Weakable => LockWeakable
The goal here is to eventually get rid of the Lock* classes in favor of
using external locking.
Instead of having two separate implementations of AK::RefCounted, one
for userspace and one for kernelspace, there is now RefCounted and
AtomicRefCounted.
All users that relied on the default constructor use a None lock rank
for now. This will make it easier to remove LockRank in the future and
to actually annotate the ranks by searching for None.
There's no point in keeping this method: we don't really care whether a
graphics adapter is VGA compatible, because we no longer use this method
anywhere.
Each of these strings would previously rely on StringView's char const*
constructor overload, which would call __builtin_strlen on the string.
Since we now have operator ""sv, we can replace these with much simpler
versions. This opens the door to being able to remove
StringView(char const*).
No functional changes.
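For example (the message is illustrative):

    // Before: goes through StringView(char const*), which calls __builtin_strlen.
    dbgln("{}", StringView("some message"));
    // After: operator""sv bakes the length in at compile time.
    dbgln("{}", "some message"sv);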
The mmap interface was removed when we introduced the DisplayConnector
class, as it was quite unsafe to use and didn't handle switching between
graphical and text modes safely. By using the SharedFramebufferVMObject,
we can elegantly coordinate the switch by remapping the attached
mmap'ed Memory::Region(s) with different mappings, letting WindowServer
believe its mappings are still valid while they actually point to a
different physical range until we are back in graphical mode (after a
switch from text mode).
Most drivers take advantage of the fact that we know where the actual
framebuffer is in physical memory space, so the SharedFramebufferVMObject
is created with that information. However, the VirtIO driver is
different in that respect, because it relies on DMA transactions to show
graphics on the framebuffer, so its SharedFramebufferVMObject is created
with that in mind, to support an arbitrary framebuffer location in
physical memory space.
Keeping the exact details of a dirty rectangle doesn't make any sense
when we just flush the entire screen, so just keep a simple boolean
value to know if the screen needs to be flushed or not.
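Conceptually, the console just keeps a boolean flag (member and helper
names here are purely illustrative):

    void mark_dirty() { m_needs_flush = true; }

    void flush_if_needed()
    {
        if (!m_needs_flush)
            return;
        flush_entire_screen(); // hypothetical helper; we flush the whole surface anyway
        m_needs_flush = false;
    }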
The old methods can already be considered deprecated, and now that we
have removed framebuffer devices entirely, we can safely remove these
methods too, which simplifies the GenericGraphicsAdapter class a lot.
We shouldn't expose the VirtIO GPU3DDevice constructor as a public
method, so instead, let's use the usual pattern of a static construction
method that invokes the constructor internally.
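A hedged sketch of that pattern (base class, return type and method name
are assumptions of this sketch):

    class GPU3DDevice : public CharacterDevice {
    public:
        static NonnullLockRefPtr<GPU3DDevice> create(VirtIOGraphicsAdapter&);

    private:
        // Only reachable through create().
        explicit GPU3DDevice(VirtIOGraphicsAdapter&);
    };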
In most cases it's safe to abort the requested operation and move on;
however, in some places it's not yet clear how to handle these failures,
so we use the MUST() wrapper to force a kernel panic for now.
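Illustrating the two cases (the method names are illustrative):

    // Safe to abort: propagate the error to the caller and stop here.
    TRY(transfer_framebuffer_data_to_host(rect));

    // No clear recovery path yet: MUST() panics the kernel if this fails.
    MUST(flush_displayed_image(rect));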
This code attempts to copy the `Protocol::Resource3DSpecification`
struct into request, starting at `Protocol::ResourceCreate3D::target`
member of the `Protocol::ResourceCreate3D` struct.
The problem is that the `Protocol::Resource3DSpecification` struct
does not have the trailing `u32 padding` that the `ResourceCreate3D`
struct has, leading to the memcpy overrunning the struct and corrupting
32 bits of data trailing it.
Found by SonarCloud:
- Memory copy function overflows the destination buffer.
These fences should not be needed, since we force the use of
synchronous operations through synchronous_virtio_gpu_command. The use
of these fences also causes severe lag when SERENITY_GL is enabled.
This commit flips VirtIOGPU back to using a Mutex for its operation
lock (instead of a spinlock). This is necessary for avoiding a few
system hangs when queuing actions on the driver from multiple
processes, which becomes much more of an issue when using VirGL from
multiple userspace processes.
This does result in a few code paths where we inevitably have to grab
a mutex from inside a spinlock; the only way to fix both issues is to
move to issuing asynchronous virtio gpu commands.
Function-local `static constexpr` variables can be `constexpr`. This
can reduce memory consumption, binary size, and offer additional
compiler optimizations.
These changes result in a stripped x86_64 kernel binary size reduction
of 592 bytes.
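For example (the variable is illustrative):

    // Before: static storage duration, so the constant may occupy space in the binary.
    static constexpr size_t max_scanline_length = 4096;
    // After: a plain constexpr local is just a compile-time value.
    constexpr size_t max_scanline_length = 4096;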
When GraphicsManagement initializes the drivers we can disable the
bootloader framebuffer console. Right now we don't yet fully destroy
the no longer needed console as it may be in use by another CPU.
Apologies for the enormous commit, but I don't see a way to split this
up nicely. In the vast majority of cases it's a simple change. A few
extra places can use TRY instead of manual error checking though. :^)
Previously we would crash the process immediately when a promise
violation was found during a syscall. This is error prone, as we
don't unwind the stack. This means that in certain cases we can
leak resources, like an OwnPtr / RefPtr tracked on the stack. Or
even leak a lock acquired in a ScopeLockLocker.
To remedy this situation we move the promise violation handling to
the syscall handler, right before we return to user space. This
allows the code to follow the normal unwind path, and guarantees
there is no longer any cleanup that needs to occur.
The Process::require_promise() and Process::require_no_promises()
functions were modified to return ErrorOr<void> so we enforce that
the errors are always propagated by the caller.
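A hedged sketch of the resulting call shape inside a syscall handler
(the syscall itself is hypothetical):

    ErrorOr<FlatPtr> Process::sys$example()
    {
        // A violation is now recorded and returned as an error, so the stack
        // unwinds normally instead of the process being crashed mid-syscall.
        TRY(require_promise(Pledge::stdio));
        // ...
        return 0;
    }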