I've realized that it probably makes more sense to change the input
transfer characteristics to treat these as sRGB, since doing the color
conversion in linear light derived from the BT.709 transfer function
doesn't really make sense. If content creation applications expect media
players to display BT.709 content without any conversion, that means they
expect players to treat it as sRGB, since that's what most displays use.
It most likely also means they process it as sRGB internally, so we
should do the same for our color primaries conversion.
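
For reference, a minimal sketch of the two transfer functions in question
(function names are mine, not from the codebase): treating BT.709-tagged
content as sRGB means decoding samples to linear light with the sRGB EOTF
rather than with the inverse of the BT.709 OETF.

```rust
/// sRGB EOTF: decodes an encoded sample in [0, 1] to linear light.
/// Treating BT.709-tagged content as sRGB means using this function.
fn srgb_eotf(encoded: f32) -> f32 {
    if encoded <= 0.04045 {
        encoded / 12.92
    } else {
        ((encoded + 0.055) / 1.055).powf(2.4)
    }
}

/// Inverse BT.709 OETF, for comparison; this is what a strict reading
/// of the BT.709 transfer characteristics code point would call for.
fn bt709_inverse_oetf(encoded: f32) -> f32 {
    if encoded < 0.081 {
        encoded / 4.5
    } else {
        ((encoded + 0.099) / 1.099).powf(1.0 / 0.45)
    }
}
```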
This moves the setting of code points on CICP structs entirely into
member functions, so that code which needs to set these code points can
be much cleaner.
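
A hypothetical sketch of the resulting API (method names are illustrative,
and the struct fields are the ones sketched under the next commit below):

```rust
impl CodingIndependentCodePoints {
    /// Overrides a single code point in place, letting callers adjust
    /// one field without restating the others.
    pub fn set_transfer_characteristics(&mut self, tc: TransferCharacteristic) {
        self.transfer_characteristics = tc;
    }

    /// A convenient preset for BT.709 video whose transfer
    /// characteristics we treat as sRGB, per the change above.
    pub fn bt709_srgb() -> Self {
        Self {
            color_primaries: ColorPrimaries::Bt709,
            transfer_characteristics: TransferCharacteristic::Srgb,
            matrix_coefficients: MatrixCoefficients::Bt709,
            video_range: VideoRange::Limited,
        }
    }
}
```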
This adds a struct called CodingIndependentCodePoints, along with related
enums, which video codecs use to define the color space that frames must
be converted from when displaying a video.
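
A minimal sketch of what such a struct and enums might look like. The
discriminants follow the code point numbering in ITU-T H.273, but the
variant set is trimmed to a few common values and the names are
illustrative, not necessarily the ones used here:

```rust
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct CodingIndependentCodePoints {
    pub color_primaries: ColorPrimaries,
    pub transfer_characteristics: TransferCharacteristic,
    pub matrix_coefficients: MatrixCoefficients,
    pub video_range: VideoRange,
}

/// Discriminant values follow ITU-T H.273 colour_primaries.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum ColorPrimaries {
    Bt709 = 1,
    Unspecified = 2,
    Bt601 = 6, // SMPTE 170M
}

/// Discriminant values follow ITU-T H.273 transfer_characteristics.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum TransferCharacteristic {
    Bt709 = 1,
    Unspecified = 2,
    Srgb = 13, // IEC 61966-2-1
}

/// Discriminant values follow ITU-T H.273 matrix_coefficients.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum MatrixCoefficients {
    Bt709 = 1,
    Unspecified = 2,
    Bt601 = 6, // SMPTE 170M
}

/// Mirrors the H.273 video_full_range_flag.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum VideoRange {
    Limited = 0,
    Full = 1,
}
```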
Pre-multiplied matrices and lookup tables are stored to avoid most of
the floating-point division and exponentiation in the conversion.
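
A hypothetical sketch of that precomputation, assuming 8-bit samples
(names are illustrative): the division and exponentiation run once per
possible input value when the table is built, and the two 3x3 matrices
(YCbCr-to-RGB and primaries conversion) are multiplied once up front, so
each pixel only needs a table lookup and a single matrix multiply.

```rust
/// Builds a linearization table once so per-pixel decoding becomes a
/// plain array index instead of a division and a powf call.
fn build_to_linear_lut(eotf: impl Fn(f32) -> f32) -> [f32; 256] {
    let mut lut = [0.0f32; 256];
    for (i, slot) in lut.iter_mut().enumerate() {
        *slot = eotf(i as f32 / 255.0);
    }
    lut
}

/// Pre-multiplies two 3x3 matrices (e.g. primaries conversion times
/// YCbCr-to-RGB) into the single matrix applied per pixel.
fn matmul3(a: [[f32; 3]; 3], b: [[f32; 3]; 3]) -> [[f32; 3]; 3] {
    let mut out = [[0.0f32; 3]; 3];
    for r in 0..3 {
        for c in 0..3 {
            for k in 0..3 {
                out[r][c] += a[r][k] * b[k][c];
            }
        }
    }
    out
}
```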