LibJS: Apply source's byte offset in TA#set when both TAs have same type
On the code path where we are setting a TypedArray from another TypedArray of the same type, we forgo the spec text and simply do a memmove between the two ArrayBuffers. However, we forgot to apply source's byte offset on this code path.

This meant that if we tried setting a TypedArray from a TypedArray we got from .subarray(), we would still copy from the start of the subarray's ArrayBuffer. This is because .subarray() returns a new TypedArray that shares the same ArrayBuffer, but with a smaller length and a byte offset that the rest of the codebase is responsible for applying.

This affected pako when it was decompressing a zlib stream containing multiple zlib chunks. To read from the second chunk, it would set the zlib window TypedArray from the .subarray() of the chunk offset in the stream's TypedArray. This effectively made the decompressed data from the second chunk a mish-mash of old data that looked completely scrambled. It would also cause all future decompression using the same pako Inflate instance to appear scrambled. As a pako comment aptly puts it:

> Call updatewindow() to create and/or update the window state.
> Note: a memory error from inflate() is non-recoverable.

This allows us to properly decompress the large compressed payloads that the Discord Gateway sends down to the Discord client. For example, for an account that's only in the Serenity Discord, one of the payloads is a 20 KB zlib-compressed blob containing two chunks.

Surprisingly, this is not covered by test262! I imagine this would have been caught earlier if there were such a test :^)
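As a minimal sketch of the bug, assuming a Uint8Array (the new regression test below exercises the same pattern):

const source = new Uint8Array([1, 2, 3, 4]);
const target = new Uint8Array(2);

// .subarray(2, 4) is a view over source's ArrayBuffer with byteOffset 2.
target.set(source.subarray(2, 4), 0);

// Expected: Uint8Array [3, 4].
// Before this fix, the same-type fast path copied from byte 0 of the
// shared buffer, producing Uint8Array [1, 2] instead.
console.log(target);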
Parent: edcec09aa4
Commit: a54fdd5212
2 changed files with 51 additions and 1 deletion
@@ -732,7 +732,7 @@ JS_DEFINE_NATIVE_FUNCTION(TypedArrayPrototype::set)
             // ii. Perform SetValueInBuffer(targetBuffer, targetByteIndex, Uint8, value, true, Unordered).
             // iii. Set srcByteIndex to srcByteIndex + 1.
             // iv. Set targetByteIndex to targetByteIndex + 1.
-            target_buffer->buffer().overwrite(target_byte_index, source_buffer->buffer().data(), limit - target_byte_index);
+            target_buffer->buffer().overwrite(target_byte_index, source_buffer->buffer().data() + source_byte_index, limit - target_byte_index);
         } else {
             // a. Repeat, while targetByteIndex < limit,
             while (target_byte_index < limit) {
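For context on the one-line fix above: a TypedArray returned by .subarray() shares its source's ArrayBuffer, so a raw byte copy must start at the view's byteOffset rather than at byte 0. A quick sketch of those semantics:

const stream = new Uint16Array([10, 20, 30, 40]);
const view = stream.subarray(2, 4);

console.log(view.buffer === stream.buffer); // true - same ArrayBuffer
console.log(view.byteOffset); // 4 - two Uint16 elements in; must be added when copying bytes
console.log(view.length); // 2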
@@ -0,0 +1,50 @@
+const TYPED_ARRAYS = [
+    { array: Uint8Array, maxUnsignedInteger: 2 ** 8 - 1 },
+    { array: Uint8ClampedArray, maxUnsignedInteger: 2 ** 8 - 1 },
+    { array: Uint16Array, maxUnsignedInteger: 2 ** 16 - 1 },
+    { array: Uint32Array, maxUnsignedInteger: 2 ** 32 - 1 },
+    { array: Int8Array, maxUnsignedInteger: 2 ** 7 - 1 },
+    { array: Int16Array, maxUnsignedInteger: 2 ** 15 - 1 },
+    { array: Int32Array, maxUnsignedInteger: 2 ** 31 - 1 },
+    { array: Float32Array, maxUnsignedInteger: 2 ** 24 - 1 },
+    { array: Float64Array, maxUnsignedInteger: Number.MAX_SAFE_INTEGER },
+];
+
+const BIGINT_TYPED_ARRAYS = [
+    { array: BigUint64Array, maxUnsignedInteger: 2n ** 64n - 1n },
+    { array: BigInt64Array, maxUnsignedInteger: 2n ** 63n - 1n },
+];
+
+// FIXME: Write out a full test suite for this function. This currently only performs a single regression test.
+describe("normal behavior", () => {
+    // Previously, we didn't apply source's byte offset on the code path for setting a typed array
+    // from another typed array of the same type. This means the result array would previously contain
+    // [maxUnsignedInteger - 3(n), maxUnsignedInteger - 2(n)] instead of [maxUnsignedInteger - 1(n), maxUnsignedInteger]
+    test("two typed arrays of the same type code path applies source's byte offset", () => {
+        TYPED_ARRAYS.forEach(({ array, maxUnsignedInteger }) => {
+            const firstTypedArray = new array([
+                maxUnsignedInteger - 3,
+                maxUnsignedInteger - 2,
+                maxUnsignedInteger - 1,
+                maxUnsignedInteger,
+            ]);
+            const secondTypedArray = new array(2);
+            secondTypedArray.set(firstTypedArray.subarray(2, 4), 0);
+            expect(secondTypedArray[0]).toBe(maxUnsignedInteger - 1);
+            expect(secondTypedArray[1]).toBe(maxUnsignedInteger);
+        });
+
+        BIGINT_TYPED_ARRAYS.forEach(({ array, maxUnsignedInteger }) => {
+            const firstTypedArray = new array([
+                maxUnsignedInteger - 3n,
+                maxUnsignedInteger - 2n,
+                maxUnsignedInteger - 1n,
+                maxUnsignedInteger,
+            ]);
+            const secondTypedArray = new array(2);
+            secondTypedArray.set(firstTypedArray.subarray(2, 4), 0);
+            expect(secondTypedArray[0]).toBe(maxUnsignedInteger - 1n);
+            expect(secondTypedArray[1]).toBe(maxUnsignedInteger);
+        });
+    });
+});
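A rough sketch of the pako-style access pattern described in the commit message; the names and sizes here are illustrative, not pako's actual internals:

// Hypothetical: refill an inflate window from the second chunk's offset
// within a larger stream buffer.
const streamBuffer = new Uint8Array(20 * 1024); // e.g. a 20 KB gateway payload
const inflateWindow = new Uint8Array(1024);

const secondChunkOffset = 512; // illustrative offset of the second zlib chunk
inflateWindow.set(streamBuffer.subarray(secondChunkOffset, secondChunkOffset + inflateWindow.length), 0);
// Before the fix, the copied bytes came from offset 0 instead, scrambling the
// window and every later inflate() call on the same instance.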