This adds a virtual base class for GPU devices located in LibGPU.
The OpenGL context now only talks to this device-agnostic interface.
Currently the device interface is simply a copy of the existing SoftGPU
interface to get things going :^)
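As an illustration, a minimal sketch of such a device-agnostic base class,
with hypothetical method names standing in for the real interface:

```cpp
// Illustrative only: names and signatures are stand-ins, not the actual
// LibGPU declarations.
namespace GPU {

// Device-agnostic base class; concrete backends (e.g. the software
// rasterizer) implement the pure virtual entry points.
class Device {
public:
    virtual ~Device() = default;

    virtual void draw_primitives() = 0;
    virtual void clear_color() = 0;
};

}
```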
This introduces a new device-independent base class for Images in LibGPU
that also keeps track of the device from which it was created in order
to prevent assigning images across devices.
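A rough sketch of the idea, assuming a hypothetical token-based check;
the actual member names may differ:

```cpp
// Hypothetical sketch: an image remembers the device that created it, so
// cross-device assignment can be rejected.
namespace GPU {

class Image {
public:
    explicit Image(void const* ownership_token)
        : m_ownership_token(ownership_token)
    {
    }
    virtual ~Image() = default;

    // A device compares this token against itself before accepting the image.
    void const* ownership_token() const { return m_ownership_token; }

private:
    void const* m_ownership_token { nullptr };
};

}
```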
This introduces a new abstraction layer, LibGPU, that serves as the
usermode interface to GPU devices. To get started, we just move the
DeviceConfig there and make sure everything still works :^)
We now support generating top-left submatrices from a `Gfx::Matrix`
and we move the normal transformation calculation into
`SoftGPU::Device`. No functional changes.
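For context, normals are transformed by the inverse transpose of the
model-view matrix's top-left 3x3 so they stay perpendicular to surfaces
under non-uniform scaling. A sketch, where the method names are assumptions
about the `Gfx::Matrix` API:

```cpp
// Assumed API: extract the top-left 3x3 submatrix, then build the normal
// transform from it. (The inverse transpose equals the transposed inverse.)
auto normal_transform = model_view_transform.submatrix_from_topleft<3>()
                            .inverse()
                            .transpose();
```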
Currently, LibSoftGPU is still OpenGL-minded in that it uses a
coordinate system with the origin of `(0, 0)` at the lower-left of
textures, buffers and window coordinates. Because we are blitting to a
`Gfx::Bitmap` that has the origin at the top-left, we need to flip the
Y-coordinates somewhere in the rasterization logic.
We used to do this during conversion of NDC-coordinates to window
coordinates. This resulted in some incorrect behavior when
rasterization did not pass through the vertex transformation logic,
e.g. when calling `glDrawPixels`.
This changes the coordinate system to OpenGL's throughout, only to blit
the final color buffer upside down to the target bitmap. This fixes
drawing to the depth buffer directly resulting in upside down images.
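A minimal sketch of that final flipped blit; `copy_scanline` and
`color_buffer_scanline` are hypothetical helpers:

```cpp
// Copy scanlines in reverse order: OpenGL's row 0 is at the bottom, while
// Gfx::Bitmap's row 0 is at the top.
void blit_color_buffer_to(Gfx::Bitmap& target)
{
    auto height = target.height();
    for (int y = 0; y < height; ++y)
        copy_scanline(target.scanline(y), color_buffer_scanline(height - 1 - y));
}
```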
Between the OpenGL client and server, a lot of data type and color
conversion needs to happen. We are performing these conversions both in
`LibSoftGPU` and `LibGL`, which is not ideal. Additionally, some
concepts like the color, depth and stencil buffers should share their
logic but have separate implementations.
This is the first step towards generalizing our `LibSoftGPU` frame
buffer: a generalized `Typed3DBuffer` is introduced for arbitrary 3D
value storage and retrieval, and `Typed2DBuffer` wraps around it to
provide an easy-to-use 2D pixel buffer. The color, depth and stencil
buffers are replaced by `Typed2DBuffer` and are now managed by the new
`FrameBuffer` class.
The `Image` class now uses multiple `Typed3DBuffer`s for layers and
mipmap levels. Additionally, the textures are now always stored as
BGRA8888, only converting between formats when reading or writing
pixels.
Ideally this refactor should have no functional changes, but some
graphical glitches in Grim Fandango seem to be fixed and most OpenGL
ports get an FPS boost on my machine. :^)
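A rough sketch of the buffer layering, with simplified names and signatures;
the real LibSoftGPU templates are more elaborate:

```cpp
#include <cstddef>
#include <vector>

// Arbitrary 3D value storage and retrieval.
template<typename T>
class Typed3DBuffer {
public:
    Typed3DBuffer(int width, int height, int depth)
        : m_width(width)
        , m_height(height)
        , m_data(static_cast<size_t>(width) * height * depth)
    {
    }

    T& at(int x, int y, int z)
    {
        return m_data[(static_cast<size_t>(z) * m_height + y) * m_width + x];
    }

private:
    int m_width { 0 };
    int m_height { 0 };
    std::vector<T> m_data;
};

// Easy-to-use 2D pixel buffer wrapping a depth-1 3D buffer; the color, depth
// and stencil buffers are each an instance of this, owned by FrameBuffer.
template<typename T>
class Typed2DBuffer {
public:
    Typed2DBuffer(int width, int height)
        : m_buffer(width, height, 1)
    {
    }

    T& at(int x, int y) { return m_buffer.at(x, y, 0); }

private:
    Typed3DBuffer<T> m_buffer;
};
```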
This function was added as a FIXME but was then arbitrarily invoked in
the rest of `Device`. We are better off removing this FIXME for now and
reevaluating the introduction of multithreading later on, so the code is not
littered with useless empty function calls.
This implements an 8-bit front stencil buffer. Stencil operations are
SIMD-optimized. LibGL changes include:
* New `glStencilMask` and `glStencilMaskSeparate` functions
* New context parameter `GL_STENCIL_CLEAR_VALUE`
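A small usage sketch of the standard GL calls involved:

```cpp
glClearStencil(0);                    // clear value, readable via GL_STENCIL_CLEAR_VALUE
glClear(GL_STENCIL_BUFFER_BIT);
glStencilMask(0xFF);                  // allow writes to all 8 stencil bits
glStencilMaskSeparate(GL_BACK, 0x0F); // restrict writes for back-facing polygons
```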
Implements support for `glRasterPos` and updating the raster position's
window coordinates through `glBitmap`. The input for `glRasterPos` is
an object position that needs to go through the same vertex
transformations as our regular triangles.
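A usage sketch with the standard GL calls:

```cpp
// The raster position is transformed like a regular vertex; glBitmap then
// advances its window coordinates by (xmove, ymove) after drawing.
glRasterPos2f(10.0f, 20.0f);
GLubyte const dot[] = { 0x80 };              // a 1x1 bitmap
glBitmap(1, 1, 0.0f, 0.0f, 5.0f, 0.0f, dot); // draw, then move raster x by 5
```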
When `GL_COLOR_MATERIAL` is enabled, specific material parameters can
be overwritten by the current color per-vertex during the lighting
calculations. Which parameter is overwritten is controlled by
`glColorMaterial`.
Also move the lighting calculations _before_ clipping, because the spec
says so. As a result, we interpolate the resulting vertex color instead
of the input color.
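A usage sketch of the standard GL calls:

```cpp
// With GL_COLOR_MATERIAL enabled, the current color overrides the selected
// material parameters per-vertex during the lighting calculations.
glEnable(GL_COLOR_MATERIAL);
glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);
glColor3f(1.0f, 0.0f, 0.0f); // now also drives the ambient + diffuse material
```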
Previously, this was only set in the OpenGL context, as the old
architecture did all of the transformation in LibGL before passing the
transformed triangles on to the rasterizer. As this has now changed, and
we require the vertex data to be in eye-space before we can apply
lighting, we need to pass this flag along as well via the GPU options.
Most of the T&L stuff is, like on an actual GPU, now done inside of
LibSoftGPU. As such, it no longer makes sense to have specific values
like the scene ambient color inside of LibGL as part of the GL context.
These have now been moved into LibSoftGPU and use the same pattern as
the render options to set/get.
These two functions have been turned from stubs into functions that
actually do something. They now set the corresponding material data
member based on the value passed into the `pname` argument.
Co-authored-by: Stephan Unverwerth <s.unverwerth@serenityos.org>
This implements the `glLightf{v}` family of functions used to set
lighting parameters per light in the GL. It also fixes an incorrect
prototype for the user exposed version of `glLightf{v}` in which
`params` was not marked as `const`.
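Standard GL usage, with the now-`const` `params` pointer:

```cpp
GLfloat const diffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse);
glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 45.0f);
```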
Since the alpha blend configuration should not change between most calls
of `draw_primitives`, it makes no sense to reinitialize the blend factors
for every rasterized triangle.
The alpha blend factors are now set up whenever the device config
changes. The blend factors are stored in struct AlphaBlendFactors.
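A hypothetical, simplified shape of the precomputed state; the real struct
holds the full set of resolved source/destination factors:

```cpp
// Resolved once whenever the device config changes, instead of once per
// rasterized triangle. Member names here are illustrative only.
struct AlphaBlendFactors {
    float source_factor { 1.f };      // e.g. resolved from GL_SRC_ALPHA
    float destination_factor { 0.f }; // e.g. resolved from GL_ONE_MINUS_SRC_ALPHA
};
```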
This adds member functions Device::rasterize_triangle() and
Device::shade_fragments(). They were previously free-standing
functions/lambdas, which led to a lot of parameters being passed around.
This displays statistics regarding frame timings and number of pixels
rendered.
Timings are based on the time between draw_debug_overlay() invocations.
This measures the actual number of frames presented to the user vs. wall
clock time, so this also includes everything the app might do besides
rendering.
Triangles are counted after clipping. This number might actually be
higher than the number of triangles coming from LibGL.
Pixels are counted after the initial scissor and coverage test. Pixels
rejected here are not counted. Shaded pixels is the percentage of all
pixels that made it to the shading stage. Blended pixels is the
percentage of shaded pixels that were alpha blended to the color buffer.
Overdraw measures how many pixels were shaded vs. how many pixels the
render target has. For example, a 640x480 render target has 307200
pixels; if exactly that many pixels are shaded, the overdraw number will
read 0%, while 614400 shaded pixels will read as an overdraw of 100%.
Sampler calls is simply the number of times `sampler.sample_2d()` was
called.
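The overdraw arithmetic as a sketch (illustrative names):

```cpp
#include <cstdint>

// 307200 shaded pixels on a 640x480 target reads 0%; 614400 reads 100%.
int overdraw_percent(uint64_t shaded_pixels, uint64_t render_target_pixels)
{
    return static_cast<int>(shaded_pixels * 100 / render_target_pixels) - 100;
}
```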
Texture coordinate generation is the concept of automatically
generating vertex texture coordinates instead of using the coordinates
provided through `glTexCoord`.
This commit implements support for:
* The `GL_TEXTURE_GEN_Q/R/S/T` capabilities
* The `GL_OBJECT_LINEAR`, `GL_EYE_LINEAR`, `GL_SPHERE_MAP`,
`GL_REFLECTION_MAP` and `GL_NORMAL_MAP` modes
* Object and eye plane coefficients (write-only at the moment)
This changeset allows Tux Racer to render its terrain :^)
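A usage sketch of the now-supported standard GL calls:

```cpp
// Generate S/T from eye-space geometry via a sphere map (simple environment
// mapping):
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);

// Object plane coefficients for GL_OBJECT_LINEAR (write-only for now):
GLfloat const s_plane[] = { 1.0f, 0.0f, 0.0f, 0.0f };
glTexGenfv(GL_S, GL_OBJECT_PLANE, s_plane);
```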
This follows the OpenGL 1.5 spec much more closely. In particular, we
need to store the eye coordinates, since they are used in texture
coordinate generation and fog fragment depth calculation.
* LibGL now supports the `GL_NORMALIZE` capability
* LibSoftGPU transforms and normalizes the vertices' normals
Normals are heavily used in texture coordinate generation, to be
implemented in a future commit.
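A usage sketch (standard GL):

```cpp
// Non-uniform scaling in the model-view matrix changes the length of
// transformed normals; GL_NORMALIZE renormalizes them for correct lighting.
glEnable(GL_NORMALIZE);
glScalef(2.0f, 1.0f, 1.0f);
glNormal3f(0.0f, 0.0f, 1.0f); // transformed and renormalized in LibSoftGPU
```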
This adds a method `info()` to SoftGPU that returns the name of the
hardware vendor and device, as well as the number of texture units.
LibGL uses the returned texture unit count to initialize its internal
texture unit array.
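A plausible shape for the returned info, with field names assumed from the
description above:

```cpp
#include <AK/String.h>

// Hypothetical sketch; the actual LibSoftGPU declaration may differ.
struct DeviceInfo {
    String vendor_name;
    String device_name;
    unsigned num_texture_units { 0 };
};
```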
Replaces the `GLenum` used in `RasterizerConfig` to select the draw
buffer with a simple boolean that disables color output when the draw
buffer is set to `GL_NONE` on the OpenGL side.