I had been thinking about the difference between text editing and structured editing, and it dawned on me that the latter will never win, simply because editing structurally is not "natural" and because such editing is unforgiving of "mistakes". Sadly, text editing is good by default; structured editing is not, by design.
We have a grammar, so we use it. But do grammars have to be policemen all the time? After all, we humans created grammar; and I have to think that we created it by convention to make sense of the otherwise chaotic barrage of communicative sounds we invented. So if natural language grammar is intended to be "guidelines and lighthouses to the islands of information in the oceans of noise", why shouldn't computer language grammar be the same?
The problem, of course, is that such a grammar would be non-deterministic. How would you specify a terminal symbol with any certainty when any other symbol could intervene and would have to be considered noise in a very context-sensitive way?
I wonder if there's another way; and the rest of this post will record my thoughts to date on this other way.
Imagine each symbol of the language's alphabet as a cell waiting for user input. Imagine each non-terminal in the language's grammar also as a similar cell, only it waits for symbols to activate. The moment a symbol appears in the input (this could be the read of the next symbol from a file, or the next key pressed by the user), the symbol cell grabs it and announces that it's now "active". This triggers "interested" non-terminals to claim that symbol as part of themselves, although the ownership is shared until all symbols of a particular non-terminal are satisfied, at which point it claims those symbols for itself and announces that it is active. This triggers the next level of non-terminals to wake up, and so forth.
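To make the mechanics concrete, here is a minimal Python sketch of that cascade. All the names here (SymbolCell, NonTerminalCell, feed, notify) are mine, and hard parts like symbol ordering, repeated symbols, and resolving shared ownership between competing non-terminals are deliberately glossed over:

```python
class SymbolCell:
    """Waits for one terminal symbol; activates the moment it appears."""
    def __init__(self, symbol):
        self.symbol = symbol
        self.active = False
        self.watchers = []            # "interested" non-terminal cells

    def feed(self, datum):
        if not self.active and datum == self.symbol:
            self.active = True        # grab the input and announce it
            for w in self.watchers:
                w.notify(self)


class NonTerminalCell:
    """Waits on lower cells; activates once every part it needs has fired."""
    def __init__(self, name, parts):
        self.name = name
        self.parts = parts
        self.claimed = set()          # ownership is shared until complete
        self.active = False
        self.watchers = []
        for p in parts:
            p.watchers.append(self)   # register interest in each part

    def notify(self, part):
        self.claimed.add(part)
        if len(self.claimed) == len(self.parts) and not self.active:
            self.active = True
            print(f"recognized <{self.name}>")
            for w in self.watchers:   # wake the next level of non-terminals
                w.notify(self)
```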
This sounds pretty close to how Agent-based systems work, hence the term in the title. If I were Microsoft, I'd probably call it ActiveParsing™.
The key advantage of such parsing would be that ambiguous input could be handled by the system. Also, input that doesn't match any production rule in the grammar would still be allowed into the system, which means that erroneous user input is tolerated, not berated.
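Continuing the toy sketch above: noise that matches no cell is simply ignored rather than rejected. The tiny grammar here (greeting ::= 'h' 'i') is purely illustrative:

```python
h, i = SymbolCell("h"), SymbolCell("i")
greeting = NonTerminalCell("greeting", [h, i])

for ch in "h?i":                      # '?' is noise: no cell claims it
    for cell in (h, i):
        cell.feed(ch)
# prints "recognized <greeting>" -- the stray '?' is tolerated, not berated
```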
In addition, such a system would be incremental from the ground up: although it is backed by a grammar, input is not constrained to "start at the top and end at the bottom of the grammar". It is also inherently event-driven (see design notes below) and therefore should be able to react to editing actions better. This ties in well with "natural" structural editing.
Finally, such a system would allow for hybrid languages that embed other languages within themselves, e.g., HTML+CSS+JS.
I cannot imagine that this hasn't been thought of before, as always. My Goog-fu not being up to snuff, however, I was not able to ferret out anything concrete. Of course, I did find a lot of references in the NLP literature to agent-based systems, automata-based parsers and the like, but they usually devolved into statistics or deep math to prove their value, which I couldn't comprehend.
Design Notes
- The system works on heartbeats called ticks. Everything happens within the span of a single tick.
- There are cells. Cells wait for external input. All cells are guaranteed to see a new datum entering the system within the same tick, and they activate (or not) during that same tick for that particular datum. A cell's activation recognizes a single symbol or character.
- There are cells that wait on other cells for input. These cells activate when all the cells they need are activated. Their activation recognizes a token or non-terminal.
- The input could be control input (such as the delete key) as well. This would result in previously activated cells becoming deactivated, changing the state of the system. The impact would be felt over the next few ticks, but it would be local to only those parts of the grammar actually affected by the change, not universal as is typical in standard parsing systems.
- At all levels, activation triggers an event that can be handled outside the system. In particular, recognition of a non-terminal could be handled by highlighting it as a known keyword, identifier, etc. This is how syntax highlighting would happen.
- At a sufficiently high level, activation of a non-terminal means something larger than syntax highlighting: validation, interpretation/compilation or execution can happen.
- Finally, all cells can be supplied with a prerequisite activation signal which would HAVE to be "on" for them to activate. This would allow controlling the "viewport" of visible cells easily enough to constrain the calculation to the part of the text that is visible. (A sketch of this tick/gating machinery follows these notes.)
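Here is a minimal sketch of the tick, deactivation and prerequisite machinery from these notes. Again, all the names (Cell, TickSystem, step, and so on) are hypothetical, and deactivation here simply clears every cell for brevity, whereas the note above envisions a ripple that stays local to the affected parts of the grammar:

```python
class Cell:
    """One grammar cell with a gating prerequisite and undoable activation."""
    def __init__(self, symbol, visible=lambda: True):
        self.symbol = symbol
        self.active = False
        self.visible = visible        # prerequisite signal, e.g. "in viewport"

    def feed(self, datum):
        if self.visible() and datum == self.symbol:
            self.active = True        # only gated-on cells participate

    def deactivate(self):
        self.active = False           # control input undoes prior activation


class TickSystem:
    """Heartbeat: every datum reaches every cell within the same tick."""
    def __init__(self, cells):
        self.cells = cells
        self.tick = 0

    def step(self, datum):
        self.tick += 1
        if datum == "DEL":            # control input, not a symbol
            for c in self.cells:      # (global here for brevity; the note
                c.deactivate()        #  above proposes a local ripple)
        else:
            for c in self.cells:
                c.feed(datum)


viewport_on = [False]                 # cell "a" starts "off-screen"
cells = [Cell("a", visible=lambda: viewport_on[0]), Cell("b")]
ticks = TickSystem(cells)
ticks.step("a")                       # ignored: "a" is gated off
viewport_on[0] = True
ticks.step("a")                       # now "a" activates
ticks.step("DEL")                     # control input deactivates cells
ticks.step("b")
print([c.active for c in cells])      # [False, True]
```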
Update
Found it! An undated paper on Object Oriented Parallel Parsing for Context-free Grammars. I couldn't find a date on it, but its references to Symbolics Lisp machines date it to at least the 90s, if not the 80s. Google search terms: "parallel parsing computer language".