Ocean of Awareness

Jeffrey Kegler's blog about Marpa, his new parsing algorithm, and other topics of interest


Mon, 10 Mar 2014


Evolvable languages

Ideally, if a syntax is useful and clear, and a programmer can easily read it at a glance, you should be able to add it to an existing language. In this post, I will describe a modest incremental change to the Perl syntax.

It's one I like, but whether you like it too is beside the point, for two reasons. First, it's simply intended as an example of language evolution. Second, regardless of its merits, it is unlikely to happen, because of the way that Perl 5 is parsed. In this post I will demonstrate a way of writing a parser so that this change, or others, can be made in a straightforward way, and without designing your language into a corner.

When initializing a hash, Perl 5 allows you to use not just commas, but also the so-called "wide comma" (=>). The wide comma is suggestive visually, and it also has some smarts about what a hash key is: the hash key is always converted into a string, so the wide comma knows that in a key-value pair like this:

    key1 => 711,

that key1 is intended as a string.
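
A quick illustration of that auto-quoting: with a bareword on its left, the wide comma and an ordinary comma with an explicitly quoted key are exactly equivalent.

    # These two initializations build identical hashes:
    my %h1 = ( key1 => 711 );
    my %h2 = ( 'key1', 711 );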

But what about something like this?

  {
   company name => 'Kamamaya Technology',
   employee 1 => first name => 'Jane',
   employee 1 => last name => 'Doe',
   employee 1 => title => 'President',
   employee 2 => first name => 'John',
   employee 2 => last name => 'Smith',
   employee 3 => first name => 'Clarence',
   employee 3 => last name => 'Darrow',
  }

Here I think the intent is obvious -- to create an employee database in the form of a hash of hashes, allowing spaces in the keys. In Data::Dumper format, the result would look like:

{
              'employee 2' => {
                                'last name' => '\'Smith\'',
                                'first name' => '\'John\''
                              },
              'company name' => '\'Kamamaya Technology\'',
              'employee 3' => {
                                'last name' => '\'Darrow\'',
                                'first name' => '\'Clarence\''
                              },
              'employee 1' => {
                                'title' => '\'President\'',
                                'last name' => '\'Doe\'',
                                'first name' => '\'Jane\''
                              }
            }

And in fact, that is the output of the script in this Github gist, which parses the previous "extended Perl 5" snippet using a Marpa grammar before passing it on to Perl.

Perl 5 does not allow a syntax like this, and looking at its parsing code will tell you why -- it's already a maintenance nightmare. The extension I've described above could, in theory, be added to Perl 5, but doing so would aggravate an already desperate maintenance situation.

Now, depending on taste, you may be just as happy that you'll never see the extensions I have just outlined in Perl 5. But I don't think it is as easy to be happy about a parsing technology that quickly paints the languages which use it into a corner.

How it works

The code is in a Github gist. For the purposes of the example, I've implemented a toy subset of Perl. But this approach has been shown to scale. There are full Marpa-powered parsers of C, ECMAScript, XPath, and liberal HTML.

Marpa is a general BNF parser, which means that anything you can write in BNF, Marpa can parse. For practical parsing, what matters are the grammars that can be parsed in linear time, and with Marpa that class is vast, including all the classes of grammar currently in practical use. To describe the class of grammars that Marpa parses in linear time, assume that you have a parser that works left to right, deterministically, with infinite lookahead that uses regular expressions. (A parser like this is called LR-regular.) If such an LR-regular parser can parse your grammar, then you can be sure that Marpa will parse that grammar in linear time, and without doing the lookahead. (Instead Marpa tracks possibilities in a highly-optimized table.) Marpa also parses many grammars that are not LR-regular in linear time, but the LR-regular grammars alone are very likely to include any class of grammar that you will be interested in parsing. They easily include all those that can be parsed using yacc, recursive descent or regular expressions.

Marpa excels at those special hacks so necessary in recursive descent and other techniques. Marpa allows you to define events that will stop it at symbols or rules, both before and after. While stopped, you can hand processing over to your own custom code. Your custom code can feed your own tokens to the parse for as long as you like. In doing so, it can consult Marpa to determine exactly what symbols and rules have been recognized and which ones are expected. Once finished with custom processing, you can then ask Marpa to pick up again at any point you wish.
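
A minimal sketch of that mechanism, assuming a grammar whose DSL declares a pause-before event on a lexeme (the lexeme and event names here are invented for illustration, and $grammar and $input are assumed to be set up already):

    use Marpa::R2;

    # In the DSL: :lexeme ~ word pause => before event => 'before word'
    my $recce  = Marpa::R2::Scanless::R->new( { grammar => $grammar } );
    my $length = length $input;
    my $pos    = $recce->read( \$input );
    while ( $pos < $length ) {
        for my $event ( @{ $recce->events() } ) {
            my ($name) = @{$event};
            if ( $name eq 'before word' ) {
                # Marpa has stopped; custom code takes over.
                my ( $start, $span ) = $recce->pause_span();
                # We can ask Marpa what it will accept next ...
                my $expected = $recce->terminals_expected();
                # ... and feed it a token of our own choosing.
                $recce->lexeme_read( 'word', $start, $span,
                    substr( $input, $start, $span ) );
            }
        }
        # Hand control back to the syntax-driven parse.
        $pos = $recce->resume();
    }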

The craps game is over

The bottom line is that if you can describe your language extension in BNF, or in BNF plus some hacks, you can rely on Marpa parsing it in reasonable time. Language design has been like shooting craps in a casino that sets you up to win a lot of the first rolls before the laws of probability grind you down. Marpa changes the game.

To learn more

Marpa::R2 is available on CPAN. A list of my Marpa tutorials can be found here. There are new tutorials by Peter Stuifzand and amon. The Ocean of Awareness blog focuses on Marpa, and it has an annotated guide. Marpa has a web page that I maintain and Ron Savage maintains another. For questions, support and discussion, there is the "marpa parser" Google Group.

Comments

Comments on this post can be made in Marpa's Google group.


posted at: 19:48 | direct link to this entry

§         §         §

Tue, 25 Feb 2014


Significant newlines? Or semicolons?

Should statements have explicit terminators, like the semicolon of Perl and the C language? Or should they avoid the clutter, and separate statements by giving whitespace syntactic significance and a real effect on the semantics, as is done in Python and Javascript?

Actually we don't have to go either way. As an example, let's look at some BNF-ish DSL. It defines a small calculator. At first glance, it looks as if this language has taken the significant-whitespace route -- there certainly are no explicit statement terminators.

:default ::= action => ::first
:start ::= Expression
Expression ::= Term
Term ::=
      Factor
    | Term '+' Term action => do_add
Factor ::=
      Number
    | Factor '*' Factor action => do_multiply
Number ~ digits
digits ~ [\d]+
:discard ~ whitespace
whitespace ~ [\s]+
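
For context, here is roughly how such a DSL is brought to life. This sketch follows the synopsis of Marpa::R2's top-level document, which (as noted below) is also the basis of this example; assume $dsl holds the grammar text above:

    use Marpa::R2;

    my $grammar = Marpa::R2::Scanless::G->new( { source => \$dsl } );
    my $recce   = Marpa::R2::Scanless::R->new(
        { grammar => $grammar, semantics_package => 'My_Actions' } );

    my $input = '42 * 1 + 7';
    $recce->read( \$input );
    my $value_ref = $recce->value();
    die 'No parse' if not defined $value_ref;
    print ${$value_ref}, "\n";    # prints 49

    # The action names in the DSL resolve to subs in the semantics package.
    sub My_Actions::do_add      { my ( undef, $t1, undef, $t2 ) = @_; $t1 + $t2 }
    sub My_Actions::do_multiply { my ( undef, $f1, undef, $f2 ) = @_; $f1 * $f2 }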

The rule is that there isn't one

If we don't happen to like the layout of the above DSL, and rearrange it in various ways, we'll find that everything we try works. If we become curious about exactly what the rules for newlines are, and look at the documentation, we won't find any. That's because there aren't any.

We can see this by thoroughly messing up the line structure:

:default ::= action => ::first :start ::= Expression Expression ::= Term
Term ::= Factor | Term '+' Term action => do_add Factor ::= Number |
Factor '*' Factor action => do_multiply Number ~ digits digits ~
[\d]+ :discard ~ whitespace whitespace ~ [\s]+

The script will continue to run just fine.

How does it work?

How does it work? Actually, pose the question this way: Can a human reader tell where the statements end? If the reader is not used to reading BNF, he might have trouble with this particular example but, for a language that he knows, the answer is simple: Yes, of course he can. So really the question is, why do we expect the parser to be so stupid that it cannot?

The only trick is that this is done without trickery. Marpa's DSL is written in itself, and Marpa's self-grammar describes exactly what a statement is and what it is not. The Marpa parser is powerful enough to simply take this self-describing DSL and act on it, finding where statements begin and end, much as a human reader is able to.

To learn more

This example was produced with the Marpa parser. Marpa::R2 is available on CPAN. The code for this example is based on that in the synopsis for its top-level document, but it is isolated conveniently in a Github gist.

A list of my Marpa tutorials can be found here. There are new tutorials by Peter Stuifzand and amon. The Ocean of Awareness blog focuses on Marpa, and it has an annotated guide. Marpa has a web page that I maintain and Ron Savage maintains another. For questions, support and discussion, there is the "marpa parser" Google Group. Comments on this post can be made there.


posted at: 15:30 | direct link to this entry

§         §         §

Thu, 01 Aug 2013


A Marpa-powered C parser

Jean-Damien Durand has just released MarpaX::Languages::C::AST, which parses the C language into an abstract syntax tree (AST). MarpaX::Languages::C::AST has been tested against Perl's C source code, as well as Marpa's own C source.

Because it is Marpa-powered, MarpaX::Languages::C::AST works differently from other C parsers. In the past, C parsers have been syntax-driven -- parsing was based on a BNF description of the C grammar. More recently, C parsers have used hand-written recursive descent -- they have been procedurally-driven.

MarpaX::Languages::C::AST uses both approaches. Marpa has the advantage that it makes full knowledge of the state of the parse available to the programmer, so that procedural logic and syntax-driven parsing can reinforce each other. The result is a combined lexer/parser which is very compact and easy to understand. Among the potential applications are special-purpose compilers and other tools that need a precise parse of real-world C.

The implementation

A few of Jean-Damien's implementation choices are worth noting. A C parser can take one of two strategies: approximate or precise. A compiler has, of course, to be precise. Tools, such as cross-referencers, often decide to be approximate, or sloppy. Sloppiness is easier to implement and has other advantages: A sloppy tool can tolerate missing C flags: what the C flags should be can be one of the things it guesses at.

Of the two strategies, Jean-Damien decided to go with "precise". MarpaX::Languages::C::AST follows the C11 standard, with either GCC or Microsoft extensions. This has the advantage that MarpaX::Languages::C::AST could be used as the front end of a compiler.

Because MarpaX::Languages::C::AST's purpose is to take things as far as an AST, and let the user take over, it does not implement those constraints usually enforced in post-processing. One example of a post-syntactic constraint is the one that bans "case" labels outside of switch statements. Perhaps a future version can include a default "first phase" post-processor to enforce the constraints from the standards. As currently implemented, the user can check for and enforce these constraints in any way he likes. This makes extensions and customizations easier, and I think of those as the major purpose of MarpaX::Languages::C::AST.

The parsing strategy

Those familiar with C parsing and its special issues may be interested in Jean-Damien's approach to them. MarpaX::Languages::C::AST is, with a few exceptions, syntax-driven -- the parser works from Marpa's SLIF, an extended BNF variant. The SLIF-driven logic is sufficient to deal with the if-then-else issue. Marpa handles right recursion in linear time, so the if-then-else issue could have been dealt with by rewriting the relevant rules. But Jean-Damien wanted his BNF to follow the grammar in the standards closely, and he decided to use Marpa's rule ranking facility instead.

More complicated is the ambiguity in C between variable names and types, which actually takes C beyond BNF and context-free grammars into context-sensitive territory. Most C parsers deal with this using lexer or post-processing hacks. Marpa allows the parser to do this more elegantly. Marpa knows the parsing context at all times and can communicate this to a user's customized code. Marpa also has the ability to use the parsing context to decide when to switch control from the syntax-driven logic to a user's customized procedural logic, and for the syntax-driven logic to take control back when the procedural logic wants to give it back. This allows the variable-name-versus-type ambiguity to be handled by specifically targeted code which knows the full context of the decisions it needs to make. This code can be written more directly, simply and clearly than was possible with previous parsing methods.
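
A sketch of how such a handoff might look (my own illustration, using invented lexeme and event names, not the actual code of MarpaX::Languages::C::AST). Inside the usual read/resume event loop, the parser pauses before each identifier-like token, and a symbol table decides which of two lexemes to feed back:

    # %typedef records names declared via typedef so far.
    if ( $name eq 'before word' ) {
        my ( $start, $span ) = $recce->pause_span();
        my $word  = substr( $source, $start, $span );
        # Same characters, two possible tokens: context decides.
        my $token = $typedef{$word} ? 'TYPEDEF_NAME' : 'IDENTIFIER';
        $recce->lexeme_read( $token, $start, $span, $word );
    }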

Compilers?

Above I mentioned special-purpose compilers. What about production compilers? MarpaX::Languages::C::AST's upper layers are in Perl, so the speed, while acceptable for special-purpose tools, will probably not be adequate for production. Perhaps a future Marpa-powered C parser will rewrite those upper layers in C, and make the race more interesting.

To learn more

Marpa::R2 is available on CPAN. A list of my Marpa tutorials can be found here. There are new tutorials by Peter Stuifzand and amon. The Ocean of Awareness blog focuses on Marpa, and it has an annotated guide. Marpa also has a web page. For questions, support and discussion, there is the "marpa parser" Google Group. Comments on this post can be made there.


posted at: 15:57 | direct link to this entry

§         §         §

Sun, 07 Jul 2013


Parsing Ada Lovelace

The application

Abstract Syntax Forests (ASF's) are my most recent project. I am adding ASF's to my Marpa parser. Marpa has long supported ambiguous parsing, and allowed users to iterate through, and examine, all the parses of an ambiguous parse. This was enough for most applications.

Even applications which avoid ambiguity benefit from better ways to detect and locate it. And there are applications that require the ability to select among and manipulate very large sets of ambiguous parses. Prominent among these is Natural Language Processing (NLP). This post will introduce an experiment. Marpa in fact seems to have some potential for NLP.

Writing an efficient ASF is not a simple matter. The naive implementation is to generate a complete set of fully expanded abstract syntax trees (AST's). This approach consumes resources that can become exponential in the size of the input. Translation: the naive implementation quickly becomes unusably slow. Marpa optimizes by aggressively identifying identical subtrees of the AST's. Especially in highly ambiguous parses, many subtrees are identical, and this optimization is often a big win.
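
To see why naive expansion explodes, consider this tiny ambiguous grammar (my own example, not from the experiment described below). Its parse counts grow with the Catalan numbers, which are exponential in the input length:

    use Marpa::R2;

    my $dsl = <<'END_OF_DSL';
    :default ::= action => [values]
    :start ::= S
    S ::= S S
    S ::= 'x'
    END_OF_DSL

    my $grammar = Marpa::R2::Scanless::G->new( { source => \$dsl } );
    my $recce   = Marpa::R2::Scanless::R->new( { grammar => $grammar } );
    $recce->read( \( 'x' x 4 ) );

    # Iterate through all parses of the ambiguous input.
    my $count = 0;
    $count++ while defined $recce->value();
    print "$count\n";    # prints 5, the Catalan number C(3);
                         # for 'x' x 10 it would be C(9) = 4862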

Ada Lovelace

My primary NLP example at this point is a quote from Ada Lovelace. It is a long sentence, possibly the longest, from her Notes -- 158 words. A disadvantage of this example is that it is not typical of normal NLP. By modern standards it is an unusually long and complex sentence. An advantage of it, and my reason for the choice, is that it stresses the parser.

The "Note A" from which this sentence is taken is one of Ada's notes on a translation of a paper on the work of her mentor and colleague, Charles Babbage. Ada's "Notes" are longer than the original paper, and far more important. In these "Notes" Ada makes the first distinction between a computer and a calculator, and between software and hardware. In their collaboration, Babbage did all of the hardware design, and he wrote most of the actual programs in her paper. But these two revolutionary ideas, and their elaboration, are Ada's.

Why would Babbage ignore obvious implications of his own invention? The answer is that, while these implications are obvious to us, they simply did not fit into the 1843 view of the world. In those days, algebra was leading-edge math. The ability to manipulate equations was considered an extremely advanced form of reason. For Babbage and his contemporaries, that sort of ability to reason certainly suggested the ability to distinguish between good and evil, and this in turn suggested possession of a soul. Ada's "Notes" were written more than 25 years after Mary Shelley, while visiting Ada's father in Switzerland, wrote the novel Frankenstein. For Ada's contemporaries, announcing that you planned to create a machine that composed music, or did advanced mathematical reasoning, was not very different from announcing that you planned to assemble a human being in your lab.

Ada was the daughter of the poet Byron. For her, pushing boundaries was a family tradition. Babbage was happy to leave these matters to Ada. As Babbage's son put it, his father

considered the Paper by Menabrea, translated with notes by Lady Lovelace, published in volume 3 of Taylor's 'Scientific Memoirs', as quite disposing of the mathematical aspect of the invention. My business now is not with that.

On reading Ada

Ada's notes are worth reading, but the modern reader has to be prepared to face several layers of difficulty.

Ada's quote

Those who view mathematical science, not merely as a vast body of abstract and immutable truths, whose intrinsic beauty, symmetry and logical completeness, when regarded in their connexion together as a whole, entitle them to a prominent place in the interest of all profound and logical minds, but as possessing a yet deeper interest for the human race, when it is remembered that this science constitutes the language through which alone we can adequately express the great facts of the natural world, and those unceasing changes of mutual relationship which, visibly or invisibly, consciously or unconsciously to our immediate physical perceptions, are interminably going on in the agencies of the creation we live amidst: those who thus think on mathematical truth as the instrument through which the weak mind of man can most effectually read his Creator's works, will regard with especial interest all that can tend to facilitate the translation of its principles into explicit practical forms.

Ada, the bullet point version

Ada's sentence may look like what happens when two pickups carrying out-of-date dictionaries to the landfill run into each other on the way. But there is, in fact, a good deal of structure and meaning in all those words. Let's take it as bullet points:

1. Mathematical science is a vast body of abstract and immutable truths, whose intrinsic beauty, symmetry and logical completeness entitle it to a prominent place in the interest of all profound and logical minds.

2. But mathematics has a yet deeper interest for the human race, because it is the only language through which we can adequately express the great facts of the natural world, and the unceasing changes of mutual relationship that go on in the creation we live amidst.

3. Mathematical truth is therefore the instrument through which the weak mind of man can most effectually read his Creator's works.

4. So everything that helps translate the principles of mathematics into explicit practical forms deserves our especial interest.

Ada is connecting her new science of software to the history of thought in the West, something which readers of the time would expect her to do. Bullet point 1 alludes to the Greek view of mathematics, especially Plato's. Bullet point 2 alludes to the scientific view, as pioneered by Galileo and Newton. Bullet point 3 alludes to the post-Classical world view, especially the Christian one. But while the language is Christian, Ada's idea is one that Einstein would have had no trouble with. And bullet 4 is the call to action.

When we come to discuss the parse in detail, we'll see that it follows this structure. As an aside, note Ada's mention of "logical completeness" as one of the virtues of math. Gödel came along nearly a century later and showed this vision, which went back to the Greeks, was an illusion. So Ada did not predict everything. On the other hand, Gödel's result was also a complete surprise to Johnny von Neumann, who was in the room that day.

The experiment so far

I've gotten Marpa to grind through this sentence, using the same framework as the Stanford NLP demo. That demo, in fact, refuses to even attempt any sentence longer than 70 words, so my Ada quote needs to be broken up. Even on the smaller pieces, the Stanford demo becomes quite slow. Marpa, by contrast, grinds through the whole thing quickly. The Stanford demo is based on a CYK parser, and presumably is O(n³) -- cubic. Marpa seems to be exhibiting linear behavior.

Promising as this seems for Marpa, its first results may not hold up as the experiment gets more realistic. So far, I've only given Marpa enough English grammar and vocabulary to parse this one sentence. That is enough to make the grammar very complex and ambiguous, but even so it must be far less complex and ambiguous than the one behind the Stanford demo. Marpa will never have time worse than O(n³), but it's quite possible that if Marpa's grammar were as ambiguous as the Stanford one, Marpa would be no faster. Marpa, in fact, could turn out to be slower by some linear factor.

There may never be a final decision based on speed. Marpa might turn out to represent one approach, good for certain purposes. And, especially when speed is indecisive, other abilities can prove more important.

To learn more

Marpa::R2 is available on CPAN. A list of my Marpa tutorials can be found here. There are new tutorials by Peter Stuifzand and amon. The Ocean of Awareness blog focuses on Marpa, and it has an annotated guide. Marpa also has a web page. For questions, support and discussion, there is the "marpa parser" Google Group. Comments on this post can be made there.


posted at: 11:24 | direct link to this entry

§         §         §

Mon, 17 Jun 2013


Marpa v. Parse::RecDescent: a rematch

The application

In a recent post, I looked at an unusual language which serializes arrays and strings, using a mixture of counts and parentheses. Here is an example:

A2(A2(S3(Hey)S13(Hello, World!))S5(Ciao!))

The language is of special interest for comparison against recursive descent because, while simple, it requires procedural parsing -- a purely declarative BNF approach will not work. So it's a chance to find out if Marpa can play the game that is recursive descent's specialty.
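
The heart of the problem, and of the solution, is a token whose length is computed at parse time. Here is a sketch of the core move (mine, simplified; the lexeme and event names are invented): when Marpa pauses before a string body, custom code has already seen the count, and it feeds Marpa exactly that many characters as one lexeme.

    # Paused before the body of S<count>( ... ) in the read/resume
    # loop; $count holds the number just parsed.  "Exactly $count
    # characters" is what pure BNF cannot say.
    if ( $name eq 'before string body' ) {
        my ($start) = $recce->pause_span();
        my $body = substr( $input, $start, $count );
        $recce->lexeme_read( 'string body', $start, $count, $body );
    }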

The previous post focused on how to use Marpa to mix procedural and declarative parsing together smoothly, from a coding point of view. It only hinted at another aspect: speed. Over the last year, Marpa has greatly improved its speed for this kind of application. The latest release of Marpa::R2 now clocks in more than 100 times faster than Parse::RecDescent for the longest input tested.

The benchmark

    Length   Marpa::R2   Marpa::XS   Parse::RecDescent
      1000       1.569       2.938              13.616
      2000       2.746       7.067              62.083
      3000       3.935      13.953             132.549
     10000      12.270     121.654            1373.171

(All times are in seconds.)

Parse::RecDescent is pure Perl, while Marpa is based on a parse engine in a library written in hand-optimized C. You'd expect Marpa to win this race and it did.

And it is nice to see that the changes from Marpa::XS to Marpa::R2 have paid off. Included in the table are the Marpa numbers from my 2012 benchmark of Marpa::XS. Marpa::R2 has a new interface and an internal lexer, and now beats Marpa::XS by a factor of up to 10.

While the benchmarked language is ideally suited to show recursive descent to advantage, the input lengths were picked to emphasize Marpa's strengths. Marpa optimizes by doing a lot of precomputation, and is written with long inputs in mind. Though these days, a 500K source, longer than the longest tested, would not exactly set a new industry record.

To learn more

There are fuller descriptions of the language in Flavio's post and code, and my recent post on how to write a parser for this language. I talk more about the benchmark's methodology in my post on the 2012 benchmark.

Marpa::R2 is available on CPAN. A list of my Marpa tutorials can be found here. There is a new tutorial by Peter Stuifzand. The Ocean of Awareness blog focuses on Marpa, and it has an annotated guide. Marpa also has a web page. For questions, support and discussion, there is a Google Group: marpa-parser@googlegroups.com. Comments on this post can be made there.


posted at: 06:47 | direct link to this entry

§         §         §