...and keep a forwarding header around in VP9, so we don't have to
update all references to the class there.
In time, we probably want to merge LibGfx/ImageDecoders and LibVideo
into LibMedia, but for now I need just this class for the lossy
webp decoder. So move just that one class over.
No behavior change.
This adds a new WorkerThread class to run one task asynchronously,
and allow waiting for that thread to finish its work.
TileContexts are placed into one vector per tile column, with the
streams they will read from created ahead of time. Once those are ready, the threads can
start their work on each vector separately. The main thread waits for
those tasks to finish, then sums up the syntax element counts for each
tile that was decoded.
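Roughly, the pattern looks like the sketch below; it uses std::thread and
placeholder types in place of the new WorkerThread class and the real tile
structures, so the names and signatures are illustrative only:

    #include <cstdint>
    #include <functional>
    #include <thread>
    #include <vector>

    // Illustrative stand-in for a tile's decoding state.
    struct TileContext {
        uint64_t syntax_element_count { 0 };
    };

    // Each worker decodes the tiles of one column from its pre-created stream.
    static void decode_tile_column(std::vector<TileContext>& column)
    {
        for (auto& tile : column)
            tile.syntax_element_count += 1; // placeholder for the real decoding work
    }

    int main()
    {
        std::vector<std::vector<TileContext>> tile_columns(4, std::vector<TileContext>(2));

        // One worker per tile column; each thread works on its own vector.
        std::vector<std::thread> workers;
        for (auto& column : tile_columns)
            workers.emplace_back(decode_tile_column, std::ref(column));

        // The main thread waits for the workers, then sums up the counts.
        for (auto& worker : workers)
            worker.join();

        uint64_t total = 0;
        for (auto const& column : tile_columns)
            for (auto const& tile : column)
                total += tile.syntax_element_count;
        return total == 8 ? 0 : 1;
    }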
This doesn't appear to have had a measurable impact on performance,
and behavior is the same.
With the tiles using independent BooleanDecoders with their own
backing BitStreams, we're even one step closer to threaded tiles!
As new demuxers are added, this will get quite full of files, so it'll
be good to have a separate folder for these.
To avoid too many chained namespaces, the Containers subdirectory is
not also a namespace, but the Matroska folder is, to keep the many
classes holding parsed Matroska information from cluttering the Video
namespace.
Otherwise, we end up propagating those dependencies into targets that
link against that library, which creates unnecessary link-time
dependencies.
Also included are changes to re-add now-missing dependencies to tools
that actually need them.
This file will be the basis for abstracting away the out-of-thread or
later out-of-process decoding from applications displaying videos. For
now, the demuxer is hardcoded to be MatroskaParser, since that is all
we support so far. The demuxer should later be selected based on the
file header.
The playback and decoding are currently all done on one thread using
timers. The design of the code is such that adding threading should
be trivial, at least based on an earlier version of the code. For now,
though, it's better that this runs in one thread, as the multithreaded
approach causes the Video Player to lock up permanently after a few
frames are decoded.
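In rough outline, the single-threaded flow is the sketch below; the names
are illustrative stand-ins, not the actual classes:

    #include <chrono>
    #include <thread>

    // Illustrative stand-ins for the decoder and the display path.
    struct Frame {
        std::chrono::milliseconds timestamp { 0 };
    };

    static bool decode_next_frame(Frame& frame, int& index)
    {
        if (index >= 10)
            return false;
        frame.timestamp = std::chrono::milliseconds(index++ * 40); // ~25 fps
        return true;
    }

    static void display(Frame const&) { /* hand the frame to the UI */ }

    int main()
    {
        // Decode, wait until the frame's presentation time, display, repeat;
        // all on the same thread (the real code drives this with event-loop timers).
        auto start = std::chrono::steady_clock::now();
        int index = 0;
        Frame frame;
        while (decode_next_frame(frame, index)) {
            std::this_thread::sleep_until(start + frame.timestamp);
            display(frame);
        }
        return 0;
    }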
The class is virtual and has one subclass, SubsampledYUVFrame, which
is used by the VP9 decoder to return a single frame. The
output_to_bitmap(Bitmap&) function can be used to set pixels on an
existing bitmap of the correct size to the RGB values that
should be displayed. The to_bitmap() function will allocate a new bitmap
and fill it using output_to_bitmap.
This new class also implements bilinear scaling of the subsampled U and
V planes so that subsampled videos' colors will appear smoother.
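The relationship between the two functions is roughly the following sketch;
the types here are simplified stand-ins, not the actual LibVideo/LibGfx
classes or signatures:

    #include <cstdint>
    #include <memory>
    #include <vector>

    // Simplified stand-in for a bitmap of RGB pixels.
    struct Bitmap {
        int width { 0 };
        int height { 0 };
        std::vector<uint32_t> pixels; // packed RGB
    };

    struct VideoFrame {
        virtual ~VideoFrame() = default;

        // Write RGB values into an existing bitmap of the correct size.
        virtual void output_to_bitmap(Bitmap& bitmap) = 0;

        // Allocate a new bitmap and fill it using output_to_bitmap().
        std::unique_ptr<Bitmap> to_bitmap()
        {
            auto bitmap = std::make_unique<Bitmap>();
            bitmap->width = width();
            bitmap->height = height();
            bitmap->pixels.resize(static_cast<size_t>(width()) * height());
            output_to_bitmap(*bitmap);
            return bitmap;
        }

        virtual int width() const = 0;
        virtual int height() const = 0;
    };

In this shape, a subclass like SubsampledYUVFrame only has to implement
output_to_bitmap() (including the U/V upscaling) and gets to_bitmap() for
free.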
This adds a struct called CodingIndependentCodePoints and related enums
that video codecs use to define the color space of their frames, so that
frames can be converted from that color space when displaying a video.
Pre-multiplied matrices and lookup tables are stored to avoid most of
the floating point division and exponentiation in the conversion.
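The CICP fields themselves are the four code points standardized in
ITU-T H.273. A simplified sketch of what such a struct looks like, with only
a few of the enum values spelled out (the real enums are more complete and
may be named differently):

    #include <cstdint>

    // A small subset of the code point values defined in ITU-T H.273.
    enum class ColorPrimaries : uint8_t { BT709 = 1, Unspecified = 2, BT2020 = 9 };
    enum class TransferCharacteristics : uint8_t { BT709 = 1, Unspecified = 2, SRGB = 13 };
    enum class MatrixCoefficients : uint8_t { BT709 = 1, Unspecified = 2, BT2020NonConstantLuminance = 9 };
    enum class ColorRange : uint8_t { Studio, Full };

    struct CodingIndependentCodePoints {
        ColorPrimaries color_primaries { ColorPrimaries::Unspecified };
        TransferCharacteristics transfer_characteristics { TransferCharacteristics::Unspecified };
        MatrixCoefficients matrix_coefficients { MatrixCoefficients::Unspecified };
        ColorRange color_range { ColorRange::Studio };
    };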
This changes MotionVector by removing the cpp file and moving all
functions to the header, where they are now declared as constexpr
so that they can be compile-time evaluated in LookupTables.h.
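For example, a header-only, constexpr-friendly vector type can be used in
compile-time tables roughly like this (a sketch, not the actual class):

    #include <array>

    struct MotionVector {
        int row { 0 };
        int column { 0 };

        constexpr MotionVector operator+(MotionVector const& other) const
        {
            return { row + other.row, column + other.column };
        }
    };

    // Because the operations are constexpr, tables of vectors can be built
    // at compile time, as in a LookupTables.h-style header.
    constexpr std::array<MotionVector, 2> example_table {
        MotionVector { 0, -1 } + MotionVector { -1, 0 },
        MotionVector { 1, 0 },
    };
    static_assert(example_table[0].row == -1 && example_table[0].column == -1);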
The first keyframe of the test video can be decoded with these changes.
Raw memory allocations in the Parser have been replaced with Vector or
Array to avoid memory leaks and out-of-bounds accesses.
With this patch we are finally done with section 6.4.X of the spec :^)
The only parsing left to be done is 6.5.X, motion vector prediction.
Additionally, this patch fixes how MVs were being stored in the parser.
Originally, due to the spec naming two very different values very
similarly, these properties had totally wrong data types, but this has
now been rectified.
This was required to correctly parse more than one frame's
height/width data. Additionally, start handling failure
a little more gracefully. Since we don't fully parse a tile before
starting to parse the next tile, we will now no longer make it past
the first tile mark, meaning we should now handle that scenario well.
The class that was previously named Decoder handled section 6.X.X of
the spec, which actually deals with parsing out the syntax of the data,
not the actual decoding logic which is specified in section 8.X.X.
The new Decoder class will be in charge of owning and running the
Parser, as well as implementing all of the decoding behavior.
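In outline, the ownership now looks roughly like this (a sketch of the
split, not the actual interfaces):

    // Section 6.X.X: parses the syntax elements out of the bit stream.
    class Parser {
    public:
        bool parse_frame() { return true; } // placeholder
    };

    // Section 8.X.X: the Decoder owns and runs the Parser, then reconstructs
    // the frame from the parsed syntax elements.
    class Decoder {
    public:
        bool decode_frame()
        {
            if (!m_parser.parse_frame())
                return false;
            // ... prediction, inverse transforms, reconstruction, etc. ...
            return true;
        }

    private:
        Parser m_parser;
    };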
This patch brings all of the previous work together and starts to
actually parse and decode frame information. Currently it only parses
the uncompressed header data (section 6.2 of the spec).
The VP9 specification requires a special decoding process to parse a
lot of the data read from a frame, so this BitStream wrapper implements
that behavior. These processes are defined in section 9 of the
VP9 spec.
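The core of those processes is the boolean (arithmetic) decoder from
section 9.2, which reads each boolean against an 8-bit probability and
renormalizes its range from the underlying bit stream. A simplified,
self-contained sketch of that process (not the wrapper's actual interface):

    #include <cstdint>
    #include <vector>

    class BooleanDecoder {
    public:
        explicit BooleanDecoder(std::vector<uint8_t> data)
            : m_data(std::move(data))
        {
            m_value = read_bits(8); // init: the first byte seeds the value
        }

        // Returns false with a probability of roughly `probability` / 256.
        bool read_bool(uint8_t probability)
        {
            uint32_t split = 1 + (((m_range - 1) * probability) >> 8);
            bool result;
            if (m_value < split) {
                m_range = split;
                result = false;
            } else {
                m_range -= split;
                m_value -= split;
                result = true;
            }
            while (m_range < 128) { // renormalize from the bit stream
                m_value = (m_value << 1) | read_bits(1);
                m_range <<= 1;
            }
            return result;
        }

    private:
        uint32_t read_bits(int count)
        {
            uint32_t bits = 0;
            for (int i = 0; i < count; i++) {
                uint32_t bit = 0;
                if (m_bit_position < m_data.size() * 8)
                    bit = (m_data[m_bit_position / 8] >> (7 - m_bit_position % 8)) & 1;
                m_bit_position++;
                bits = (bits << 1) | bit;
            }
            return bits;
        }

        std::vector<uint8_t> m_data;
        size_t m_bit_position { 0 };
        uint32_t m_value { 0 };
        uint32_t m_range { 255 };
    };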
This commit initializes the LibVideo library and implements parsing
basic Matroska container files. Currently, it will only parse audio
and video tracks.
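Matroska is built on EBML, where element IDs and sizes are stored as
variable-length integers: the number of leading zero bits in the first byte
gives the total length. A minimal sketch of reading one such size field
(illustrative only, not the parser's actual code):

    #include <cstdint>
    #include <cstdio>
    #include <optional>
    #include <vector>

    // Reads one EBML variable-length integer and strips its length marker.
    static std::optional<uint64_t> read_vint(std::vector<uint8_t> const& data, size_t& position)
    {
        if (position >= data.size())
            return std::nullopt;
        uint8_t first = data[position];
        int length = 1;
        while (length <= 8 && !(first & (0x80 >> (length - 1))))
            length++;
        if (length > 8 || position + length > data.size())
            return std::nullopt;

        uint64_t value = first & (0xFF >> length); // drop the marker bit
        for (int i = 1; i < length; i++)
            value = (value << 8) | data[position + i];
        position += length;
        return value;
    }

    int main()
    {
        // The bytes 0x42 0x86 encode a two-byte vint with value 0x286 (646).
        std::vector<uint8_t> data { 0x42, 0x86 };
        size_t position = 0;
        if (auto value = read_vint(data, position))
            printf("%llu\n", static_cast<unsigned long long>(*value));
        return 0;
    }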