BMP files encode the direction of the rows with the sign of the height.
Our BMP decoder already makes all the proper checks; however, when
constructing the Gfx::Bitmap, we didn't actually make the height positive.
Boog neutralized :^)
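For reference, the shape of the fix, sketched with illustrative names (the
real code deals with Gfx::Bitmap and the decoder's own context type):

    // Hypothetical sketch: a negative height means rows are stored top-down,
    // and the bitmap must always be allocated with the absolute value.
    #include <cstdlib>

    struct DecodedHeader {
        int width { 0 };
        int height { 0 }; // negative => rows are stored top-down
    };

    struct BitmapDimensions {
        int width;
        int height; // always positive when allocating the bitmap
        bool top_down;
    };

    BitmapDimensions dimensions_from_header(DecodedHeader const& header)
    {
        return {
            header.width,
            std::abs(header.height), // the missing abs() was the boog
            header.height < 0,
        };
    }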
This was the only remaining codec that produced IndexedN bitmaps.
By removing them, we'll be able to get rid of those formats and simplify
the Bitmap and Painter classes.
It is unlikely this is needed anymore, and as pointed out things should
now safely return OOM if the bitmap is too large to allocate.
Also, no recently added decoders respected this limit anyway.
Fixes #20872
IFF was a generic container file format that was popular on the Amiga,
since it was the only file format supported by Deluxe Paint.
ILBM is an image format popular in the late eighties/nineties
that uses the IFF container.
This is a first version of the decoder that only supports
byterun-compressed files with bpp <= 8.
Only the minimal chunks are decoded: CMAP, BODY, BMHD.
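For context, byterun decompression (ByteRun1, the PackBits-style scheme used
by ILBM BODY chunks) roughly works like this; a hedged sketch, not the
decoder's actual code:

    // ByteRun1: a control byte n >= 0 copies the next n+1 bytes literally,
    // n in [-127, -1] repeats the next byte (1 - n) times, and -128 is a no-op.
    #include <cstdint>
    #include <vector>

    std::vector<uint8_t> decompress_byterun(std::vector<uint8_t> const& input)
    {
        std::vector<uint8_t> output;
        size_t i = 0;
        while (i < input.size()) {
            auto control = static_cast<int8_t>(input[i++]);
            if (control >= 0) {
                // Literal run: copy the next (control + 1) bytes.
                for (int n = 0; n <= control && i < input.size(); ++n)
                    output.push_back(input[i++]);
            } else if (control != -128) {
                // Repeat run: replicate the next byte (1 - control) times.
                if (i >= input.size())
                    break;
                uint8_t value = input[i++];
                for (int n = 0; n < 1 - control; ++n)
                    output.push_back(value);
            }
        }
        return output;
    }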
I am planning to add support for the following variants:
- EHB (32 colours + the same 32 at half brightness)
- HAM6 / HAM8 (special modes that allowed displaying the whole Amiga
palette of 4096 / 262 144 colours)
- TrueColor (24-bit)
Things that could be fun to do:
- Still images could be animated using color cycle information
Due to the way JPEG XL encodes its lossless images, it is sometimes
useful to embed a larger image and crop the result at the end. This
patch adds the functionality to crop a frame.
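As a rough illustration (types and names here are hypothetical, not the
decoder's API), cropping boils down to copying the visible window out of the
oversized frame:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Image {
        int width { 0 };
        int height { 0 };
        std::vector<uint32_t> pixels; // row-major ARGB

        uint32_t pixel(int x, int y) const { return pixels[static_cast<size_t>(y) * width + x]; }
    };

    // Copy the rectangle starting at (x0, y0) with the final image dimensions.
    Image crop(Image const& frame, int x0, int y0, int cropped_width, int cropped_height)
    {
        Image result { cropped_width, cropped_height, {} };
        result.pixels.resize(static_cast<size_t>(cropped_width) * cropped_height);
        for (int y = 0; y < cropped_height; ++y)
            for (int x = 0; x < cropped_width; ++x)
                result.pixels[static_cast<size_t>(y) * cropped_width + x] = frame.pixel(x0 + x, y0 + y);
        return result;
    }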
Note that JPEG XL supports image composition (almost like layers in
image editing software) and I tried to make these changes a step toward
image composition. It's a small step, as we are still unable
to read multiple frames, and we only support the `kReplace` blending
mode.
The image was previously managed as an output parameter of the
`read_frame` function. This reorganisation also brings us closer to the
spec, as it specifies that each frame has its own image that will
later be composed into a greater bitmap.
While used far less often than in classical JPEG images, the YCbCr color
space can also appear in JPEG XL images. Supporting it makes us properly
decode the "The Smoke Machine" image on https://jpegxl.info/jxl-art.html
On top of the Brotli/ANS compression, the entropy stream can have
another compression layer, this time using LZ77. This patch adds support
for this last layer, which makes us decode the `lz77_flower` test. This
double-compression is also often used for ICC profiles.
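Conceptually, this layer works like any LZ77 scheme: the entropy stream
yields either literal symbols or (length, distance) pairs that copy from
already-decoded output. A hedged sketch of the copy step (the real decoder
plugs this into the Brotli/ANS symbol loop):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Append `length` symbols copied from `distance` positions back in the
    // already-decoded stream. Copies may overlap their own output, so the
    // loop runs symbol by symbol. Assumes distance <= decoded.size().
    void copy_lz77_match(std::vector<uint32_t>& decoded, size_t length, size_t distance)
    {
        size_t start = decoded.size() - distance;
        for (size_t i = 0; i < length; ++i)
            decoded.push_back(decoded[start + i]);
    }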
Since the introduction of the JPEG XL decoder, we always read the
`orientation` field in the `ImageMetadata` bundle. This patch allows us
to render the bitmap according to this transformation.
The idea behind this class is to provide an abstraction for decoders.
It allows them to use the class as if it were a normal `Bitmap`; however,
under the hood, this class will honor a given orientation, as specified
by the Exif standard. This class is introduced to be used within the
JPEG XL decoder, but it should be possible to use it for every
Exif-compatible format.
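The core of the class is a coordinate mapping: callers write pixel (x, y) in
the decoded (unoriented) image, and the class stores it at the position the
Exif orientation asks for. A rough sketch of that mapping (illustrative, not
the class's actual interface):

    struct Point {
        int x;
        int y;
    };

    // width/height are the dimensions of the decoded (unoriented) image.
    // For orientations 5-8 the destination bitmap has swapped dimensions.
    Point oriented_position(int orientation, Point p, int width, int height)
    {
        switch (orientation) {
        case 2: return { width - 1 - p.x, p.y };              // flip horizontally
        case 3: return { width - 1 - p.x, height - 1 - p.y }; // rotate 180
        case 4: return { p.x, height - 1 - p.y };             // flip vertically
        case 5: return { p.y, p.x };                          // transpose
        case 6: return { height - 1 - p.y, p.x };             // rotate 90 clockwise
        case 7: return { height - 1 - p.y, width - 1 - p.x }; // transverse
        case 8: return { p.y, width - 1 - p.x };              // rotate 90 counter-clockwise
        case 1:
        default: return p;                                    // normal
        }
    }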
We don't do anything particular with images that contain a custom
ColourEncoding yet, but at least it allows us to know how to decode the
metadata of such images.
Complex distributions - distributions that are encoded using an internal
symbol decoder with a single distribution - are very often used for LZ77-
compressed images. This is a requirement for the lz77_flower test case.
This type of cluster is an exception and has its own, simpler rules for
being read. This is a requirement for supporting complex
distributions, as they use a symbol decoder initialized with a single
distribution.
This allows us to read many more images. This decoder is one of the two
possibilities (along with Brotli) that can be used for modular images. All
the logic is directly taken from the spec.
One of the images that can now be decoded is "Lucifer's Dominion:
Synthesis", which can be found on `https://jpegxl.info/art/`. It also
makes us pass one more test of the conformance test suite, namely
"alpha_triangles".
JPEG XL supports two types of entropy encoding: the first one is
Huffman-based (Brotli) and the second one is based on ANS. To introduce
the latter, we start by storing the `Vector` of distributions in a
`Variant`. This will allow us to choose which entropy decoder we use
during execution.
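The shape of the change, sketched with standard types (the real code uses
AK::Variant and the decoder's own distribution types, so everything below is
a placeholder):

    #include <variant>
    #include <vector>

    struct BrotliDistribution { /* prefix-code tables ... */ };
    struct ANSDistribution { /* ANS frequency tables ... */ };

    // Either all distributions are Brotli-style or all are ANS-based; which
    // one applies is only known once the stream's header has been read.
    using Distributions = std::variant<std::vector<BrotliDistribution>,
        std::vector<ANSDistribution>>;

    void decode_with(Distributions& distributions)
    {
        std::visit([](auto& dists) {
            // Dispatch to the entropy decoder matching the stored alternative.
            (void)dists;
        }, distributions);
    }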
This predictor is much more complicated than the others. Indeed, to be
computed, it needs its own value for other pixels. As you can guess,
implementing it involved the introduction of a structure to hold that
data.
Fundamentally, this predictor uses the error between the predicted value
and the true value (aka decoded value) of surrounding pixels. One of
these computed errors (namely max_error) is used as a property, so
this patch also solves a FIXME in `get_properties`.
To ease access to values that are close together in the channel and to
move them around, this patch adds a `Neighborhood` struct which holds
this data. It is used in `prediction()`, where it allowed us to simplify
the signature and to remove the explicit retrieval of the underlying
data.
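Roughly, the helper bundles the already-decoded neighbours of the current
pixel; field names below follow the usual JPEG XL notation (W = west,
N = north, ...), but the struct in the patch may differ, and the edge
handling here is simplified:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Neighborhood {
        int32_t N {};  // pixel directly above
        int32_t NW {}; // above-left
        int32_t NE {}; // above-right
        int32_t NN {}; // two rows above
        int32_t W {};  // directly left
        int32_t WW {}; // two columns left
    };

    Neighborhood retrieve_neighborhood(std::vector<int32_t> const& channel, int width, int x, int y)
    {
        // Simplified: clamp out-of-bounds accesses to the channel's edge
        // (the spec's exact edge-case rules differ slightly).
        auto pixel = [&](int px, int py) -> int32_t {
            px = std::clamp(px, 0, width - 1);
            py = std::max(py, 0);
            return channel[static_cast<size_t>(py) * width + px];
        };
        return {
            .N = pixel(x, y - 1),
            .NW = pixel(x - 1, y - 1),
            .NE = pixel(x + 1, y - 1),
            .NN = pixel(x, y - 2),
            .W = pixel(x - 1, y),
            .WW = pixel(x - 2, y),
        };
    }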
All this work allows us to decode the default image that appears when
loading `https://jxl-art.surma.technology`. However, we still render it
incorrectly due to the lack of support for orientation values different
from 1.
The first implementation of this property was just plain wrong. It looks
like this property isn't used a lot, as I found the issue by reviewing
the code and not because of a specific image.
The test image is a 32x32 mosaic of alternating black and yellow pixels;
it was generated using this code:
Bitdepth 8
RCT 1
Width 32
Height 32
if W-WW-NW+NWW > -300
- Set -1000
- Set 900
These methods are slightly more convenient than storing the Bytes
separately. However, it feels unsanitary to reach in and access this
data directly. Both of the users of these already have the
[Readonly]Bytes available in their constructors, and can easily avoid
using these methods, so let's remove them entirely.
Now that we are able to read extra channels, it's time to include them
in the final bitmap. They are usually at a smaller resolution than the
final bitmap, and the first step in rendering them is upscaling. Luckily,
this is not necessary for rendering the `alpha_nonpremultiplied` case of
the conformance test suite, so, as usual, I implemented this rendering
function as a check + no-op.
Then, we simply test if an alpha channel is present and emit the
corresponding data when creating the bitmap.
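A sketch of that last step (names are illustrative; the actual code writes
into a Gfx::Bitmap): if an alpha extra channel exists, use it for the pixel's
alpha component, otherwise emit fully opaque pixels.

    #include <cstddef>
    #include <cstdint>
    #include <optional>
    #include <vector>

    uint32_t to_argb(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
    {
        return (uint32_t(a) << 24) | (uint32_t(r) << 16) | (uint32_t(g) << 8) | b;
    }

    std::vector<uint32_t> compose_bitmap(
        std::vector<uint8_t> const& red,
        std::vector<uint8_t> const& green,
        std::vector<uint8_t> const& blue,
        std::optional<std::vector<uint8_t>> const& alpha)
    {
        std::vector<uint32_t> bitmap(red.size());
        for (size_t i = 0; i < red.size(); ++i) {
            uint8_t a = alpha.has_value() ? (*alpha)[i] : 255; // opaque when no alpha channel
            bitmap[i] = to_argb(red[i], green[i], blue[i], a);
        }
        return bitmap;
    }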
Finally, it means that we are now capable of rendering images with a
full-size alpha channel, like `alpha_nonpremultiplied`. In other words,
we now successfully decode one of the images of the official test suite!