The old implementation of PassMode had only been tested with a single
image, and let's say that it didn't survive long in the wild. A few
cases were not considered:
- We only supported VerticalMode right after PassMode.
- It can happen that a token needs to be used but not consumed from the
reference line.
With that fix, we are able to decode every single PDF file from the
1000-file zip "0000" (except 0000871.pdf, which uses byte alignment).
This is massive progress compared to the hundreds of errors that we were
previously getting.
It turns out that hmtx and OS/2 table values _are_ used when
rendering OpenType fonts for PDFs: hmtx is used for the left-side bearing
value (which is read in `Painter::draw_glyph()`), and OS/2 is used
for the ascender, which Type0's CIDFontType2::draw_glyph()
and TrueTypeFont::draw_glyph() read.
So instead of not reading these tables at all, try to read them, but
tolerate failures and just ignore the tables in that case.
Follow-up to #23276.
(I've seen weird glyph positioning from not reading the hmtx table.
I haven't seen any problems caused by not reading the OS/2 table yet,
but since the PDF code does use the ascender value, let's read that
too.)
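A minimal sketch of that pattern, with placeholder names (load_hmtx_table()
and the surrounding variables are assumptions, not the actual API): attempt
the read, and on failure behave exactly as if the table were absent.

    // Hypothetical helper returning ErrorOr, standing in for the real
    // table-loading code.
    Optional<Hmtx> hmtx;
    if (auto hmtx_or_error = load_hmtx_table(hmtx_slice, number_of_glyphs, number_of_h_metrics); !hmtx_or_error.is_error())
        hmtx = hmtx_or_error.release_value();
    // With hmtx unset, rendering behaves as it did when we didn't read the
    // table at all.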
Use `{:#x}` instead of `0x{:x}`: the former automatically adapts the
prefix to binary and octal output, and is what we already use in the
majority of cases.
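For illustration, with AK's dbgln (expected output in the comments):

    dbgln("{:#x}", 42);  // "0x2a"
    dbgln("{:#b}", 42);  // "0b101010"
    dbgln("0x{:b}", 42); // "0x101010" -- hardcoded prefix no longer matches the base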
Patch generated by:
    rg -l '0x\{' | xargs sed -i '' -e 's/0x{:/{:#/'
I ran it 4 times (until it stopped changing things) since each
invocation only converted one instance per line.
No behavior change.
Check if we have a cmap before dereferencing it again.
Fixes a crash on page 8 of 0000188.pdf now that the font no
longer fails to load due to a missing name table.
Looks like this is a Type2 TrueType font, where we don't provide
an external cmap. How this font is supposed to work without a cmap
I don't know -- but for now, we no longer crash on it, and draw
some of the text with the previous font (which happens to work
fine in this particular case).
Kind of reverts #21675, but #21744 made that better.
4 of my 1000 test PDFs complained "Invalid table offset or length in
font" before.
For example, in 0000203.pdf, these tags had length 0: 'cvt ', 'fpgm',
'prep', 'name', 'OS/2'. (Generally it's tables that aren't needed
for rendering PDFs, and the PDF writer figured it's easier to zero
out these tables instead of omitting them altogether for some reason.)
Increases number of PDFs that render without diagnostics from
765 to 767.
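Roughly the relaxed validation, as a sketch (the record field and buffer
names are assumptions): a zero-length table record is treated like an
omitted table instead of failing the whole font.

    if (table_record.length == 0)
        continue; // same as if the table had been omitted
    if (Checked<u32>::addition_would_overflow(table_record.offset, table_record.length)
        || table_record.offset + table_record.length > buffer.size())
        return Error::from_string_literal("Invalid table offset or length in font");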
It is sometimes truncated in fonts embedded in PDFs, and the data
is not needed to render PDFs. 2 of my 1000 test PDFs used to
complain "Could not load OS2 v1: Not enough data", and 1 complained
"Could not load OS2 v2: Not enough data".
Increases number of PDFs that render without diagnostics from
764 to 765 (and decreases the number of distinct error messages
from 27 to 25).
It is sometimes truncated in fonts embedded in PDFs, and the data
is not needed to render PDFs. 26 of my 1000 test files complained
"Could not load Hmtx: Not enough data" before.
Increases number of PDFs that render without diagnostics from
743 to 764.
It is often missing in fonts embedded in PDFs. 75 of my 1000 test
files complained "Font is missing Name" when trying to read fonts
before.
Increases number of PDFs that render without diagnostics from
682 to 743.
Before, we used to reject profiles where the creation datetime was
invalid per spec. But invalid dates happen in practice (most commonly,
all fields set to 0). They don't affect profile conversion at all,
so be lenient about this, in exchange for slightly more wordy code
in the places that want to show the creation datetime.
Fixes a crash rendering page 2 of
https://fredrikbk.com/publications/copy-and-patch.pdf
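A minimal sketch of the lenient approach, assuming the header keeps the raw
dateTimeNumber and a parse_date_time_number() helper does the spec-level
validation (both names are assumptions): validation moves from profile
construction into the accessor, so a bogus date only matters if someone
asks for it.

    ErrorOr<time_t> Profile::creation_timestamp() const
    {
        // Only now do we care whether the stored date is valid per spec.
        return parse_date_time_number(m_header.creation_date_time);
    }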
Our current CLUT code is pretty slow. A one-element cache can make
it quite a bit faster for images that have long runs of a single
color, such as illustrations.
It also seems to help some with photos (e.g. `0000711.pdf` page 1) --
I suppose even they have enough repeating pixels for it to be worth
it.
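A minimal standalone sketch of the idea (LastPixelCache, convert_pixel, and
slow_clut_lookup are illustrative names, not the LibGfx API): remember the
last source pixel and its converted value, and reuse the result while the
input bytes don't change.

    #include <algorithm>
    #include <array>
    #include <cstdint>
    #include <span>
    #include <vector>

    struct LastPixelCache {
        std::vector<uint8_t> input;
        std::array<float, 3> output {};
    };

    template<typename SlowLookup>
    std::array<float, 3> convert_pixel(std::span<uint8_t const> source,
        LastPixelCache& cache, SlowLookup&& slow_clut_lookup)
    {
        // Long runs of identical pixels take this early return and skip the
        // expensive CLUT interpolation entirely.
        if (!cache.input.empty() && std::ranges::equal(source, cache.input))
            return cache.output;
        cache.input.assign(source.begin(), source.end());
        cache.output = slow_clut_lookup(source);
        return cache.output;
    }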
Some numbers, with `pdf --render-bench`:
`0000711.pdf --page 1` (high-res photo)
before: 2.9s
after: 2.43s
`0000277.pdf --page 19` (my nemesis PDF, large-ish illustration)
before: 2.17s
after: 0.58s (!)
`0000502.pdf --page 2` (wat hoe dat)
before: 0.66s
after: 0.27s
`0000521.pdf --page 10` (Japanese)
before: 0.52s
after: 0.29s
`0000364.pdf --page 1` (4 min test case)
before: 0.48s
after: 0.19s
Thanks to that last one, reduces the time for
`time Meta/test_pdf.py ~/Downloads/0000` from 4m22s to 1m28s.
Helps quite a bit with #23157 (but high-res photos are still too
slow).
This reverts commit a4b2e5b27b.
This was just plain wrong. I remember it making sense and fixing
something, but that was probably due to local changes. It should never
have landed on master, my bad.
This function was called over and over in `manage_extra_channels()`,
even though the result depends only on the metadata. Instead, we now call it
once and store the result.
While all callers of lerp_nd created a Vector with inline_capacity of 4
to ensure no heap allocation would take place for most inputs, lerp_nd
took a reference to a Vector with 0 inline_capacity, which meant a
new vector was allocated on each call.
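An analogous sketch using standard types (the real code uses AK::Vector with
differing inline capacities; the function names here are made up): a
std::vector const& parameter forces callers holding a stack buffer to
materialize a heap-allocated temporary, while a span parameter binds to
either without allocating.

    #include <array>
    #include <numeric>
    #include <span>
    #include <vector>

    static float sum_by_vector(std::vector<float> const& v) { return std::accumulate(v.begin(), v.end(), 0.0f); }
    static float sum_by_span(std::span<float const> v) { return std::accumulate(v.begin(), v.end(), 0.0f); }

    int main()
    {
        std::array<float, 4> coords { 0.1f, 0.2f, 0.3f, 0.4f };
        sum_by_vector({ coords.begin(), coords.end() }); // allocates a temporary vector
        sum_by_span(coords);                             // no allocation
        return 0;
    }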
VectorN::length() returns the vector length (aka the square root of the
dot product of the vector with itself), not the count of components.
This is seemingly both wrong and slow.
If this is passed in, we don't use the cmap table from the font,
but use the external lookup object instead.
This conveniently happens to create a place where we can put all the
Cmap configuration logic that was a bit spread out before.
No behavior change yet.
A tile is basically a strip with a user-defined width. With that in
mind, adding support for them is quite straightforward. As a lot of the
common code was named after 'strips', to avoid future confusion I
renamed everything that interacts with either strips or tiles to use a
more general term: 'segment'.
Note that tiled images are supposed to always have a 'TileOffsets' tag
instead of 'StripOffsets'. However, this doesn't seem to be enforced by
encoders, so we accept either of them.
The test case was generated with the following Python script:

    import pyvips
    img = pyvips.Image.new_from_file('deflate.tiff')
    img.write_to_file('tiled.tiff',
        compression=pyvips.ForeignTiffCompression.DEFLATE,
        tile=True, tile_width=64, tile_height=64)
This variable stores the number of rows from the beginning of the image,
unlike `row`, which stores the number of rows relative to the start
of the current segment.
The first marker is always white in CCITT streams, so lines starting
with a black pixel encode a symbol meaning 0 white pixels. Decoding
then proceeds with a black symbol. We used to set the symbol's
color based on `column == 0`, which is wrong in this situation.
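A minimal standalone illustration of the rule (decoder internals omitted):
each line starts with a white run, possibly of length 0, and the color of
every following run simply alternates, so it cannot be derived from the
current column.

    #include <cstddef>
    #include <cstdint>

    enum class CcittColor : uint8_t { White, Black };

    // When the first white run has length 0, the column is still 0 as the
    // black run gets decoded, so column-based color selection picks white
    // again by mistake.
    static CcittColor color_of_next_run(size_t runs_decoded_in_current_line)
    {
        return runs_decoded_in_current_line % 2 == 0 ? CcittColor::White : CcittColor::Black;
    }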
The `read_tag()` function is not mandated to keep the reading head at a
meaningful position, so we also need to align the pointer after the last
tag. This solves a bug where reading the last field of an IFD, which is
placed after the tags, was incorrect.
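As a sketch (stream and variable names are assumptions; byte-order handling
omitted): a classic TIFF IFD is a u16 entry count, then 12-byte entries, then
a trailing 4-byte offset, so we seek explicitly past the entry array before
reading that trailing field.

    // read_tag() may have left the stream elsewhere, e.g. after chasing a
    // value stored outside the entry itself.
    TRY(stream.seek(ifd_offset + 2 + entry_count * 12, SeekMode::SetPosition));
    auto offset_after_tags = TRY(stream.read_value<u32>());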
Every TIFF container is composed of a main IFD. Some of its entries
can be pointers to sub-IFDs. We are now able to explore these
underlying structures. Note that we don't do anything with them yet.
JPEGStream::byte_offset() now returns an offset relative to the start
of the stream, instead of relative to the buffered part.
No behavior change except if JPEG_DEBUG is set.
We now reject fonts where the active cmap subtable is in a format
we can't read yet, instead of silently drawing squares for all glyphs.
This doesn't fire at all for my 1000-file PDF test set, but seems
like a good thing to check.
(Instead of duplicating the switch, I first tried making a
glyph_id_for_code_point_or_else() that returns ErrorOr<u32> and then
making both glyph_id_for_code_point() and validate_format_can_be_read()
call that, but I liked how that worked out less -- it felt too clever.)
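Roughly what the up-front check looks like, as a sketch (the set of formats
listed here is illustrative, not the exact list the reader supports):

    ErrorOr<void> Cmap::Subtable::validate_format_can_be_read() const
    {
        switch (format()) {
        case Format::ByteEncoding:
        case Format::SegmentToDelta:
        case Format::TrimmedTable:
        case Format::SegmentedCoverage:
            return {};
        default:
            return Error::from_string_literal("Unimplemented cmap subtable format");
        }
    }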