Get rid of the old, roundabout way of invalidating the rule cache by
incrementing the StyleSheetList "generation".
Instead, when something wants to invalidate the rule cache, just have it
directly invalidate the rule cache. This makes it much easier to see
what's happening anyway.
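Roughly, the new shape looks like this sketch (illustrative names, not the exact LibWeb API): invalidation just drops the cache, and the next lookup rebuilds it.

```
#include <memory>

struct RuleCache { /* buckets of matched rules, etc. */ };

class StyleComputer {
public:
    // Callers used to bump a StyleSheetList "generation" counter that the
    // cache compared against lazily; now they simply drop the cache.
    void invalidate_rule_cache() { m_rule_cache = nullptr; }

    RuleCache const& rule_cache()
    {
        if (!m_rule_cache)
            m_rule_cache = std::make_unique<RuleCache>(); // rebuilt on demand
        return *m_rule_cache;
    }

private:
    std::unique_ptr<RuleCache> m_rule_cache;
};
```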
Previously, we were creating a user-agent shadow tree when constructing
a layout tree. This meant that we did DOM manipulation (and consequently
style invalidation) during layout tree construction, which made things
very hard to reason about in Layout::TreeBuilder.
Simplify everything by just creating the UA shadow tree when the input
element is inserted into a parent node instead.
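A minimal sketch of the new flow, with stand-in types (the real HTMLInputElement and Node are much more involved):

```
struct Node { };

struct HTMLElement : Node {
    virtual ~HTMLElement() = default;
    virtual void inserted_into(Node&) { }
};

struct HTMLInputElement : HTMLElement {
    // Build the user-agent shadow tree as soon as the element joins the
    // DOM tree, instead of during layout tree construction.
    void inserted_into(Node& parent) override
    {
        HTMLElement::inserted_into(parent);
        create_shadow_tree_if_needed();
    }

    void create_shadow_tree_if_needed()
    {
        if (m_has_shadow_tree)
            return;
        m_has_shadow_tree = true;
        // ...append the inner editable text node, placeholder, etc...
    }

    bool m_has_shadow_tree { false };
};
```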
Style computation always happens *before* layout, so we can't rely on
things having (or not having) layout nodes, as that information will
always be one step behind.
Instead, we have to use the DOM to find all the information we need.
The style update mechanism was happily ignoring shadow subtrees.
Fix this by checking if an element has a shadow root, and recursing into
it if needed.
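With hypothetical, heavily simplified types, the recursion looks roughly like this:

```
#include <vector>

struct Node {
    std::vector<Node*> children;
    Node* shadow_root { nullptr }; // set only on elements hosting a shadow tree
    bool needs_style_update { false };

    void update_style()
    {
        if (needs_style_update) {
            // ...recompute this node's style...
            needs_style_update = false;
        }
        // Recurse into the shadow tree, if any, then into the light children.
        if (shadow_root)
            shadow_root->update_style();
        for (auto* child : children)
            child->update_style();
    }
};
```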
Before this change, style invalidation didn't propagate upwards across
shadow boundaries, so our shadow trees were sitting there with invalid
style, never actually getting updated.
This is taken from the abandoned error stacks proposal, which
already serves as the source of truth for the setter. It only requires
the this value to be an object - if it's not an Error object, the getter
returns undefined.
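In sketch form, the getter's logic is something like this (stand-in types, not the LibJS API; undefined is modeled as an empty optional):

```
#include <optional>
#include <string>

struct Object { virtual ~Object() = default; };
struct Error : Object { std::string stack_string; };

// The proposal's getter only requires |this| to be an object (a non-object
// would throw a TypeError before we get here). For a non-Error object it
// returns undefined, modeled here as an empty optional.
std::optional<std::string> stack_getter(Object& this_object)
{
    auto* error = dynamic_cast<Error*>(&this_object);
    if (!error)
        return std::nullopt;
    return error->stack_string;
}
```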
I have not compared this behavior to the non-standard implementations of
the stack property in other engines, but presumably the spec authors
already did that work.
This change gets the Sentry browser SDK working to a point where it can
actually send uncaught exceptions via the API :^)
By using the same NativeFunction constructor as plain ErrorConstructor
and passing the name, TypeError & co. will now include their name in
backtraces and such.
Eventually we should probably rely on [[InitialName]] for this, but for
now that's how it works.
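Schematically, with stand-in types (the real LibJS constructors take more arguments):

```
#include <string>
#include <utility>

struct NativeFunction {
    explicit NativeFunction(std::string name)
        : m_name(std::move(name)) // the name that shows up in backtraces
    {
    }
    std::string m_name;
};

// Each error constructor now forwards its own name, just like the plain
// ErrorConstructor does.
struct TypeErrorConstructor : NativeFunction {
    TypeErrorConstructor()
        : NativeFunction("TypeError")
    {
    }
};
```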
This is an editorial change in the Intl spec:
7c13db4
This also normalizes the spelling of the "Internal slots" heading in
Intl.Collator, which is another editorial change in the Intl spec:
ec064bd
Previously, we were assigning tab actions only to the newly active tab on
a tab change, and removing the same actions from the previous tab.
Unfortunately, this also happened when making a new tab, which meant
that you could trick the cell editor into jumping to the new sheet and
writing there.
To fix this, every view will always have on_selection_changed
and on_selection_dropped assigned. I haven't seen much difference in
the memory usage, so I guess it'll be fine :)
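A hedged sketch of the idea, with illustrative names: the callbacks get wired up when a view is created, not when its tab becomes active.

```
#include <functional>

struct SheetView {
    std::function<void()> on_selection_changed;
    std::function<void()> on_selection_dropped;
};

SheetView make_view()
{
    SheetView view;
    // Assigned once, for every view, so a freshly created (but inactive)
    // tab can't end up as the target of the cell editor.
    view.on_selection_changed = [] { /* update the cell editor */ };
    view.on_selection_dropped = [] { /* clear the cell editor */ };
    return view;
}
```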
In object binding, we would attempt to get NonnullRefPtr<Identifier>
from alias on the alias.has<Empty>() code path. In this case, we need
to get it from name instead.
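With std::variant standing in for the Variant used in the parser, the fixed lookup is roughly:

```
#include <memory>
#include <string>
#include <variant>

struct Empty { };
struct Identifier { std::string string; };

// Simplified binding entry: `alias` is either absent (Empty) or an
// Identifier; `name` always holds one.
struct BindingEntry {
    std::variant<Empty, std::shared_ptr<Identifier>> alias;
    std::shared_ptr<Identifier> name;
};

std::shared_ptr<Identifier> binding_identifier(BindingEntry const& entry)
{
    // The bug: on the Empty path we must fall back to `name` instead of
    // trying to pull an Identifier out of `alias`.
    if (std::holds_alternative<Empty>(entry.alias))
        return entry.name;
    return std::get<std::shared_ptr<Identifier>>(entry.alias);
}
```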
The update block can generate bytecode that refers to the lexical
environment, so we have to end the scope after it has been generated.
Previously the Jump to the update block would terminate the block,
causing us to leave the lexical environment just before jumping to the
update block.
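In terms of generator steps (hypothetical names, heavily simplified), the fix reorders things like this:

```
#include <cstdio>

// Stand-ins for the relevant codegen steps.
static void generate_loop_body() { std::puts("...body bytecode..."); }
static void generate_update_block() { std::puts("...update bytecode (may read the lexical environment)..."); }
static void end_lexical_scope() { std::puts("LeaveLexicalEnvironment"); }
static void emit_jump_to_update() { std::puts("Jump -> update block"); }

int main()
{
    generate_loop_body();
    // Before: the jump terminated the block and we left the scope here,
    // before the update bytecode had even been generated.
    generate_update_block(); // generated while the scope is still open
    end_lexical_scope();     // only now do we leave the lexical environment
    emit_jump_to_update();
}
```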
After we terminate a block (e.g. break, continue), we cannot generate
any more bytecode for the block. This caused us to crash with this
example code:
```
a = 0;
switch (a) {
    case 0:
        break;
        console.log("hello world");
}
```
Anything after a block terminating instruction is considered
unreachable code, so we can safely skip any statements after it.
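A minimal sketch of the guard (stand-in types; the real Bytecode::Generator tracks this per basic block):

```
#include <memory>
#include <vector>

struct Generator {
    bool current_block_terminated { false };
};

struct Statement {
    virtual ~Statement() = default;
    virtual void generate_bytecode(Generator&) const = 0;
};

static void generate_statement_list(Generator& generator,
    std::vector<std::unique_ptr<Statement>> const& statements)
{
    for (auto const& statement : statements) {
        statement->generate_bytecode(generator);
        // Break/Continue/Return terminate the current block; anything
        // after them is unreachable, so stop emitting.
        if (generator.current_block_terminated)
            break;
    }
}
```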
Now that we have y-axis (gain) logarithmic display, we should also have
x-axis (frequency) logarithmic display; that's how our ears work. This
can be turned off with an option, but it generally looks much nicer.
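As a sketch, the log mapping from frequency bin to x position could look like this (purely illustrative names and scaling):

```
#include <cmath>
#include <cstddef>

// Map a frequency bin onto a log-frequency axis of the given pixel width.
double x_for_bin(size_t bin, size_t bin_count, double width)
{
    return width * std::log(static_cast<double>(bin) + 1.0)
        / std::log(static_cast<double>(bin_count) + 1.0);
}
```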
For DSP reasons I can't explain myself (yet, sorry), the short-time
Fourier transform (STFT) is much more accurate and aesthetically pleasing
when the windows that select the samples for the STFT overlap. This
implements that behavior by storing the previous samples and performing
the windowed FFT over both those and the current samples. This gives us
50% overlap between windows, a common standard that is nice to look at.
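A sketch of the 50% overlap, assuming an in-place fft() like the one in AK (names and buffer handling simplified):

```
#include <cmath>
#include <complex>
#include <numbers>
#include <vector>

// Keep the previous window's samples around and transform [previous|current]
// each frame: a hop of one window length over a two-window span = 50% overlap.
void process_frame(std::vector<float> const& current,
    std::vector<float>& previous,
    std::vector<std::complex<double>>& fft_buffer)
{
    size_t const n = current.size();
    previous.resize(n, 0.0f); // zeros on the very first frame
    fft_buffer.resize(2 * n);
    for (size_t i = 0; i < 2 * n; ++i) {
        double sample = i < n ? previous[i] : current[i - n];
        // Hann window across the combined 2n samples.
        double window = 0.5 - 0.5 * std::cos(2.0 * std::numbers::pi * i / (2.0 * n - 1.0));
        fft_buffer[i] = sample * window;
    }
    previous = current; // becomes the first half of the next frame's input
    // fft(fft_buffer); // in-place transform (assumed to exist)
}
```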
The input to the FFT was distorted by the usage of fabs on the samples.
It led to a big DC offset and a distorted spectrum. Simply removing fabs
improves the quality of the spectrum a lot.
The FFT input should be windowed to reduce spectral leakage. This also
improves the visual quality of the spectrum.
Also, there's no need to do an FFT of the whole buffer if we only mean to
render 64 bars. An 8192-point FFT may smooth out fast local changes, but
at a 44100 Hz sample rate that's about 200 ms worth of sound, which
significantly reduces FPS.
A better approach for a fluent visualization is to do small FFTs at the
current playing position inside the current buffer.
There may be a better way to get the current playing position, but for
now it's implemented as an estimate based on how many frames were
already rendered with the current buffer.
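The estimate amounts to something like this (illustrative names; assumes the buffer is at least one FFT window long):

```
#include <cstddef>

// Estimate where playback is inside the current buffer from how many
// frames we've already rendered against it.
size_t estimate_play_position(size_t frames_rendered, size_t samples_per_frame,
    size_t buffer_size, size_t fft_size)
{
    size_t position = frames_rendered * samples_per_frame;
    // Clamp so a full FFT window still fits inside the buffer
    // (assumes buffer_size >= fft_size).
    if (position + fft_size > buffer_size)
        position = buffer_size - fft_size;
    return position;
}
```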
Also, I picked the y-axis log scale as the default because there's usually
a big difference in energy between low and high frequency bands, and the
log scale looks nicer.
Visualization widgets should only have to specify how many samples they
need per frame and provide a render method which receives all the data
relevant to drawing the next frame.
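In interface terms, something like this (illustrative, not the exact widget API):

```
#include <cstddef>
#include <vector>

struct VisualizationWidget {
    virtual ~VisualizationWidget() = default;
    // How much audio this widget wants per frame...
    virtual size_t samples_needed_per_frame() const = 0;
    // ...and everything it needs to draw that frame, handed to it directly.
    virtual void render(std::vector<float> const& samples) = 0;
};
```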
Although it's nice to have this as an option, it should be the default
to adjust higher frequencies, as they intrinsically have less energy than
lower frequencies.
Windows are used in many DSP related applications. A prominent use case
is spectral analysis, where windowing the signal before doing spectral
analysis mitigates spectral leakage.
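For reference, here's a Hann window, one of the common choices (purely illustrative; the actual implementation may offer several window types):

```
#include <cmath>
#include <cstddef>
#include <numbers>
#include <vector>

// Hann window: tapers the ends of the analysis frame to zero, which
// reduces the spectral leakage caused by cutting the signal abruptly.
std::vector<double> hann_window(size_t size)
{
    std::vector<double> window(size);
    for (size_t i = 0; i < size; ++i)
        window[i] = 0.5 * (1.0 - std::cos(2.0 * std::numbers::pi * i / (size - 1)));
    return window;
}
```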
Several related improvements to our Fast Fourier Transform
implementation:
- FFT now operates on spans, allowing it to work with container types
other than Vector. The FFT is intended to transform the input data in
place anyway.
- FFT is now constexpr, moving the implementation to the header and
removing the cpp file. This means that if we have static collections
of samples, we can transform them at compile time.
- sample_data.data() weirdness is now gone.
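The new shape is roughly the following: a simplified sketch using the standard library rather than AK. Note that strictly portable compile-time evaluation would also need constexpr cos/sin (which GCC provides as builtins); treat the constexpr here as the intent rather than a guarantee.

```
#include <complex>
#include <cstddef>
#include <numbers>
#include <span>
#include <utility>

// In-place iterative radix-2 Cooley-Tukey FFT over a span.
// data.size() must be a power of two.
template<typename T>
constexpr void fft(std::span<std::complex<T>> data, bool invert = false)
{
    size_t const n = data.size();
    // Bit-reversal permutation.
    for (size_t i = 1, j = 0; i < n; ++i) {
        size_t bit = n >> 1;
        for (; j & bit; bit >>= 1)
            j ^= bit;
        j ^= bit;
        if (i < j)
            std::swap(data[i], data[j]);
    }
    // Butterfly passes over doubling sub-transform lengths.
    for (size_t len = 2; len <= n; len <<= 1) {
        T angle = 2 * std::numbers::pi_v<T> / len * (invert ? -1 : 1);
        std::complex<T> wlen(std::cos(angle), std::sin(angle));
        for (size_t i = 0; i < n; i += len) {
            std::complex<T> w(1);
            for (size_t j = 0; j < len / 2; ++j) {
                auto u = data[i + j];
                auto v = data[i + j + len / 2] * w;
                data[i + j] = u + v;
                data[i + j + len / 2] = u - v;
                w *= wlen;
            }
        }
    }
    if (invert) {
        for (auto& x : data)
            x /= static_cast<T>(n);
    }
}
```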