The JavaScript console can be opened with Control+I, or using
the menu option. The console is currently a text box with JS
syntax highlighting which will send commands to the document's
interpreter. All output is printed to an HTML view in the console.
The output goes to an HtmlView so that, in the long run, we can easily
support complex output such as expandable views for JS objects.
Previously, holding Control while using the left/right arrow keys
to navigate through a TextEditor would only be helpful if the document
had spans. Now, if there are no spans, it will navigate to the
next "word break", defined to be the threshold where text changes
from alphanumeric to non-alphanumeric, or vice versa.
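Roughly, the check is just a predicate over adjacent characters; here is a
minimal sketch (hypothetical helper names, not the actual TextEditor code):
#include <cctype>
#include <cstddef>
#include <string>

// Hypothetical helper, not the actual TextEditor code.
static bool is_word_char(char c)
{
    return std::isalnum(static_cast<unsigned char>(c)) != 0;
}

// Returns the index of the next word break to the right of `cursor`: the
// first position where the text flips between alphanumeric and
// non-alphanumeric (or the end of the text).
static size_t next_word_break(std::string const& text, size_t cursor)
{
    if (cursor >= text.size())
        return text.size();
    bool starting_class = is_word_char(text[cursor]);
    size_t i = cursor + 1;
    while (i < text.size() && is_word_char(text[i]) == starting_class)
        ++i;
    return i;
}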
We can now parse a little DOM like this:
<!DOCTYPE html>
<html>
<head></head>
<body>
<div></div>
</body>
</html>
This is pretty slow work, but the incremental progress is satisfying!
This patch adds a new HTMLDocumentParser class. It keeps a tokenizer
object internally and feeds itself one token at a time from it.
The names and idioms in this class are expressed as closely to the
actual HTML parsing spec as possible, to make development as easy
and bug-free as possible. :^)
This is going to become pretty large, but it's pretty cool!
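For a rough idea of the shape (hypothetical, heavily simplified types, not the
actual LibWeb classes), the main loop pulls one token at a time from the
tokenizer and dispatches it based on the current insertion mode:
#include <optional>

enum class InsertionMode { Initial, BeforeHTML, BeforeHead, InHead, InBody };

struct HTMLToken {
    // DOCTYPE, StartTag, EndTag, Character, EndOfFile, ...
};

struct HTMLTokenizer {
    // The real tokenizer produces tokens from the input stream; this stub
    // just signals "no more tokens".
    std::optional<HTMLToken> next_token() { return {}; }
};

class HTMLDocumentParser {
public:
    void run()
    {
        // Feed ourselves one token at a time, mirroring the spec's
        // tree construction loop.
        while (auto token = m_tokenizer.next_token())
            process_using_the_rules_for(m_insertion_mode, *token);
    }

private:
    void process_using_the_rules_for(InsertionMode, HTMLToken const&)
    {
        // Per-insertion-mode handling of the token goes here.
    }

    HTMLTokenizer m_tokenizer;
    InsertionMode m_insertion_mode { InsertionMode::Initial };
};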
When hit testing encountered a block with inline children, we assumed
that the inline children are nothing but text boxes. An inline-block
box is actually a block child of a block with inline children, so we
have to handle that scenario as well. :^)
Fixes #2353.
Instead of emitting data-bearing tokens immediately, do it lazily at
the next state change. This allows us to accumulate full bursts of
text in between tags instead of having one token per character. :^)
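Conceptually it works something like this (a sketch with made-up names, not
the real tokenizer code): character data is buffered and only flushed into a
single token when the state machine changes state:
#include <string>
#include <vector>

struct Token {
    std::string data; // one token per burst of text, not one per character
};

class Tokenizer {
public:
    void on_character(char c) { m_pending_text += c; }

    void on_state_change()
    {
        // Lazily emit the accumulated text as one data-bearing token.
        if (!m_pending_text.empty()) {
            m_output.push_back({ m_pending_text });
            m_pending_text.clear();
        }
    }

private:
    std::string m_pending_text;
    std::vector<Token> m_output;
};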
This patch adds `Array.prototype.reduce()` method to LibJS Runtime.
The implementation is (to the best of my knowledge) conformant to ECMA262.
The test `Array.prototype-generic-functions.js` demonstrates that the
function can be applied to other objects besides `Array`.
Let's treat it as zero like the ECMAScript spec does in toInteger().
That way we can use to_i32() and don't have to care about weird
input values where a number is expected, e.g.
"foo".charAt() === "f"
"foo".charAt("bar") === "f"
"foo".charAt(0) === "f"
Lagom now builds under macOS. Only two minor adjustments were required:
* LibCore TCP/UDP code can't use `SOCK_{NONBLOCK,CLOEXEC}` on macOS,
so it uses ioctl() and fcntl() instead (see the sketch below)
* LibJS `Heap` code: pthread usage ported to macOS
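The portable replacement for the first point looks roughly like this (a
sketch, not the actual LibCore change): create the socket without the flags
and apply the equivalent modes afterwards:
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/socket.h>

// Sketch only: macOS has no SOCK_NONBLOCK or SOCK_CLOEXEC, so the
// equivalent modes are applied after socket() returns.
static int create_nonblocking_cloexec_socket()
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    int enabled = 1;
    ioctl(fd, FIONBIO, &enabled);   // same effect as SOCK_NONBLOCK
    fcntl(fd, F_SETFD, FD_CLOEXEC); // same effect as SOCK_CLOEXEC

    return fd;
}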
This makes it a compile error to omit the END_STATE. Also add some more
missing END_STATEs exposed by this (nice!)
Thanks to @predmond for suggesting the multi-pair trick! :^)
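The idea, sketched here with simplified macros (not necessarily the exact ones
in the tokenizer): BEGIN_STATE opens more than one scope and only END_STATE
closes them all, so leaving out an END_STATE can no longer compile:
// BEGIN_STATE opens two scopes and only END_STATE closes both, so omitting
// an END_STATE leaves the switch with unbalanced braces -> compile error.
#define BEGIN_STATE(state) case State::state: { {
#define END_STATE } break; }

enum class State { Data, TagOpen };

void step(State state)
{
    switch (state) {
        BEGIN_STATE(Data)
            // ... handle the Data state ...
        END_STATE

        BEGIN_STATE(TagOpen)
            // ... handle the TagOpen state ...
        END_STATE
    }
}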
While the compiler provides __SIZE_TYPE__ for declaring size_t,
there's unfortunately no __SSIZE_TYPE__ for ssize_t.
However, we can trick the preprocessor into doing what we want anyway
by doing "#define unsigned signed" before using __SIZE_TYPE__ again.
This commit adds back suggestion pagination, and makes it 10x better.
Also adds a "< page m of n >" indicator at the bottom if there are more
suggestions than would fit in a page.
It properly handles cycling forwards and backwards :^)
`CompletionSuggestion(text, ForSearch)` creates a suggestion whose only
purpose is to be compared against.
This constructor skips initialising the views.
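Schematically (a simplified, hypothetical shape of the class, not the real
code), a tag argument selects a lightweight constructor that only stores the
text used for comparison:
#include <string>
#include <utility>

struct CompletionSuggestion {
    struct ForSearchTag { };
    static constexpr ForSearchTag ForSearch { };

    // Full constructor: also sets up the display views.
    explicit CompletionSuggestion(std::string text)
        : text(std::move(text))
    {
        // ... initialize display/trailing views here ...
    }

    // Search-only constructor: skips view initialization entirely, since the
    // object only exists to be compared against.
    CompletionSuggestion(std::string text, ForSearchTag)
        : text(std::move(text))
    {
    }

    bool operator==(CompletionSuggestion const& other) const { return text == other.text; }

    std::string text;
};
In this sketch, a lookup would build a throwaway
CompletionSuggestion(partial_text, CompletionSuggestion::ForSearch) and
compare it against the full suggestions.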
Previously, double-clicking would select the range around your click up
until it found a space, and in the browser's location bar this behavior
didn't suffice. Now, it will select the range around your click until
there is a "word break". A word break is considered to be when your
selection changes from being alphanumeric to being non alphanumeric, or
vice versa.
This file is required for building the git port.
It had already been added once, but was removed again when the CI script
for license header checks was added, as it seemed irrelevant at the time.
In order to actually view the web as it is, we're gonna need a proper
HTML parser. So let's build one!
This patch introduces the Web::HTMLTokenizer class, which currently
operates on a StringView input stream where it fetches (ASCII only atm)
codepoints and tokenizes according to the HTML spec tokenization algo.
The tokenizer state machine looks a bit weird but is written in a way
that tries to mimic the spec as closely as possible, in order to make
development easier and bugs less likely.
This initial version is far from finished, but it can parse a trivial
document with a DOCTYPE and open/close tags. :^)
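To give a flavor of the approach (a hypothetical, heavily simplified sketch,
not the Web::HTMLTokenizer API), here is a tiny ASCII-only loop that only
knows a Data state and a TagName state:
#include <cstdio>
#include <string>
#include <string_view>

enum class State { Data, TagName };

// Just enough state machine to chew through something like "<p>hi</p>".
void tokenize(std::string_view input)
{
    State state = State::Data;
    std::string tag_name;

    for (char c : input) {
        switch (state) {
        case State::Data:
            if (c == '<') {
                state = State::TagName;
                tag_name.clear();
            } else {
                std::printf("Character token: '%c'\n", c);
            }
            break;
        case State::TagName:
            if (c == '>') {
                std::printf("Tag token: <%s>\n", tag_name.c_str());
                state = State::Data;
            } else {
                tag_name += c;
            }
            break;
        }
    }
}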
When we flush a FILE, we behave differently depending on whether we're reading from
the file or writing to it:
* If we're writing, we actually write out the buffered data.
* If we're reading, we just drop the buffered (read ahead) data.
After flushing, stdio should keep no buffered state about a FILE beyond what
is true of the underlying file. This includes the file
position (offset). When flushing writes, this is taken care of automatically,
but dropping the buffer is not enough to achieve that when reading. This commit
fixes that by seeking back explicitly in that case.
One way the problem manifested itself was upon fseek(SEEK_CUR) calls, as the
position of the underlying file was oftentimes different to the logical position
of the FILE. Since FILE::seek() already calls FILE::flush() prior to actually
modifying the position, fixing FILE::flush() to sync the positions is enough to
fix that issue.
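Schematically, the read-side flush now looks like this (a hypothetical
FILE-like struct, not the real stdio internals):
#include <sys/types.h>
#include <unistd.h>

// Dropping the read-ahead buffer alone would leave the underlying fd ahead of
// the logical position, so we also seek back by the amount that was buffered
// but never handed to the caller.
struct BufferedFile {
    int fd { -1 };
    size_t buffered { 0 }; // bytes read ahead from the fd into the buffer
    size_t consumed { 0 }; // bytes of the buffer already given to the caller

    bool flush_read_buffer()
    {
        size_t unread = buffered - consumed;
        if (unread > 0) {
            // Rewind the fd so its offset matches the stream's logical position.
            if (lseek(fd, -static_cast<off_t>(unread), SEEK_CUR) < 0)
                return false;
        }
        buffered = consumed = 0; // drop the read-ahead data
        return true;
    }
};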
This patch adds a GetterSetterPair object. Values can now store pointers
to objects of this type. These objects are created when using
Object.defineProperty and providing an accessor descriptor.
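In shape it is roughly the following (a simplified sketch, not the exact LibJS
class): a heap object holding the two accessor functions, either of which may
be absent when the descriptor only supplies one of them:
struct Object { };
struct Function : Object { };

struct GetterSetterPair : Object {
    Function* getter { nullptr }; // invoked on property reads
    Function* setter { nullptr }; // invoked on property writes
};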