The GPU (driver) is now responsible for reading pixels from and writing
pixels to user data. The client (LibGL) is responsible for specifying
how that user data should be interpreted or written.
This allows us to centralize all pixel format conversion in one class,
`LibSoftGPU::PixelConverter`. For both the input and output image, it
takes a specification containing the image dimensions, the pixel type
and the selection (basically a clipping rect), and converts the pixels
from the input image to the output image.
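As a rough sketch, such a converter's interface could look like the
following; all type names and members here are illustrative, not the
actual LibSoftGPU declarations:

    #include <cstdint>

    // Illustrative only; the real LibSoftGPU::PixelConverter may differ.
    enum class PixelFormat {
        RGBA8888,
        BGRA8888,
    };

    struct ImageSelection {
        int x { 0 };
        int y { 0 };
        int width { 0 };
        int height { 0 };
    };

    struct ImageSpecification {
        int width { 0 };          // full row width of the image, in pixels
        int height { 0 };
        PixelFormat format { PixelFormat::RGBA8888 };
        ImageSelection selection; // rect of pixels to read or write
    };

    class PixelConverter {
    public:
        PixelConverter(ImageSpecification input, ImageSpecification output)
            : m_input(input)
            , m_output(output)
        {
        }

        // Read each pixel in the input selection, convert it and write it
        // to the corresponding position in the output selection.
        void convert(uint32_t const* input_data, uint32_t* output_data) const
        {
            for (int row = 0; row < m_input.selection.height; ++row) {
                for (int column = 0; column < m_input.selection.width; ++column) {
                    auto source = (m_input.selection.y + row) * m_input.width
                        + m_input.selection.x + column;
                    auto destination = (m_output.selection.y + row) * m_output.width
                        + m_output.selection.x + column;
                    output_data[destination] = convert_pixel(input_data[source]);
                }
            }
        }

    private:
        uint32_t convert_pixel(uint32_t pixel) const
        {
            if (m_input.format == m_output.format)
                return pixel;
            // RGBA8888 <-> BGRA8888: swap the red and blue channels.
            return (pixel & 0x00ff00ff)
                | ((pixel & 0xff000000) >> 16)
                | ((pixel & 0x0000ff00) << 16);
        }

        ImageSpecification m_input;
        ImageSpecification m_output;
    };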
Effectively, this means we now support almost all OpenGL 1.5 formats,
and all custom conversion logic has disappeared from:
- `glDrawPixels`
- `glReadPixels`
- `glTexImage2D`
- `glTexSubImage2D`
The new logic is still unoptimized, but on my machine I experienced no
noticeable slowdown. :^)
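For illustration, an upload like the following (GL_BGRA, a format
available since OpenGL 1.2) now goes through the shared conversion path
instead of per-function logic; the pixel data here is made up:

    #include <GL/gl.h>

    // Hypothetical 2x2 BGRA texture upload.
    GLubyte pixels[2 * 2 * 4] = {
        // B     G     R     A
        0x00, 0x00, 0xff, 0xff,  0x00, 0xff, 0x00, 0xff,
        0xff, 0x00, 0x00, 0xff,  0xff, 0xff, 0xff, 0xff,
    };
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2, 2, 0, GL_BGRA,
        GL_UNSIGNED_BYTE, pixels);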
These enums are used to indicate byte alignment when reading from and
writing to textures. The `GL_UNPACK_ROW_LENGTH` value was reimplemented
to support overriding the row width of the source data.
This sets the length of a row for the image to be transferred, measured
in pixels. When a rectangle narrower than this value is transferred,
the remaining pixels of each row are skipped.
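For example, to upload a 16x16 region out of a 64x64 RGBA image kept in
memory (the buffer contents here are placeholders):

    #include <GL/gl.h>

    GLubyte image[64 * 64 * 4] = {};

    // Each source row is 64 pixels wide, even though we only transfer
    // 16 pixels per row; the remaining 48 pixels are skipped.
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 64);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 16, 16, GL_RGBA,
        GL_UNSIGNED_BYTE, image);

    // Restore the default (0 means: use the transfer width).
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);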
This extracts the sampler functionality into its own class.
Beginning with OpenGL 3.3, samplers are actual objects, separate
from textures. It makes sense to do this already, as it also
cleans up code organization quite a bit.
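For context, this is what the client-facing sampler object API looks
like in OpenGL 3.3 (assuming a 3.3 context and function loader); the
class extracted here is LibSoftGPU-internal, not this API:

    // Sampling state lives in its own object and is bound to a texture
    // unit, independent of any texture.
    GLuint sampler = 0;
    glGenSamplers(1, &sampler);
    glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glSamplerParameteri(sampler, GL_TEXTURE_WRAP_T, GL_REPEAT);

    // Sample texture unit 0 with this state, whatever texture is bound there.
    glBindSampler(0, sampler);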