
Parsing III (Top-down parsing: recursive descent & LL(1))

Roadmap (Where are we?)
- Previously: we set out to study parsing
  - Specifying syntax: context-free grammars, ambiguity
  - Top-down parsers: the algorithm & its problem with left recursion; left-recursion removal
- Today: predictive top-down parsing

  - The LL(1) condition
  - Simple recursive descent parsers
  - Table-driven LL(1) parsers

Picking the Right Production
- If it picks the wrong production, a top-down parser may backtrack
- The alternative is to look ahead in the input & use context to pick correctly
- How much lookahead is needed?
  - In general, an arbitrarily large amount; use the Cocke-Younger-Kasami (CYK) algorithm or Earley's algorithm
  - Fortunately, large subclasses of CFGs can be parsed with limited lookahead

    - Most programming language constructs fall in those subclasses

Predictive Parsing
- Basic idea: given A → α | β, the parser should be able to choose between α & β
- FIRST sets: for some rhs α ∈ G, define FIRST(α) as the set of tokens that appear as the first symbol in some string that derives from α
  - That is, x ∈ FIRST(α) iff α ⇒* x γ, for some γ
- We will defer the problem of how to compute FIRST sets until we look at the LR(1) table construction algorithm

The LL(1) Property
- If A → α and A → β both appear in the grammar, we would like FIRST(α) ∩ FIRST(β) = ∅
- This would allow the parser to make a correct choice with a lookahead of exactly one symbol!
- This is almost correct (see the next slide)

Predictive Parsing
- What about ε-productions? They complicate the definition of LL(1)
- If A → α and A → β and ε ∈ FIRST(α), then we need to ensure that FIRST(β) is disjoint from FOLLOW(A), too
  (FOLLOW(A) is the set of all words in the grammar that can legally appear immediately after an A)
- For a production A → α, define FIRST+(α) as
  - FIRST(α) ∪ FOLLOW(A), if ε ∈ FIRST(α)
  - FIRST(α), otherwise
- Then, a grammar is LL(1) iff A → α and A → β implies FIRST+(α) ∩ FIRST+(β) = ∅
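  Example (a hypothetical two-production fragment): suppose A → b C and A → ε, with FOLLOW(A) = { d, eof }. Then FIRST+(b C) = { b } and FIRST+(ε) = { ε } ∪ FOLLOW(A) = { ε, d, eof }. The two sets are disjoint, so one token of lookahead always selects the correct production.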

Predictive Parsing
- Given a grammar that has the LL(1) property
  - Can write a simple routine to recognize each lhs
  - Code is both simple & fast
- Consider A → β1 | β2 | β3, with FIRST+(β1), FIRST+(β2), & FIRST+(β3) pairwise disjoint:

      /* find an A */
      if (current_word ∈ FIRST(β1))
          find a β1 and return true
      else if (current_word ∈ FIRST(β2))
          find a β2 and return true
      else if (current_word ∈ FIRST(β3))
          find a β3 and return true
      else
          report an error and return false

- Grammars with the LL(1) property are called predictive grammars, because the parser can predict the correct expansion at each point in the parse
- Parsers that capitalize on the LL(1) property are called predictive parsers; one kind of predictive parser is the recursive descent parser
- Of course, there is more detail to "find a βi" (§ 3.3.4 in EAC)

Recursive Descent Parsing
- Recall the expression grammar, after transformation:

       1  Goal   → Expr
       2  Expr   → Term Expr'
       3  Expr'  → + Term Expr'
       4         | - Term Expr'
       5         | ε
       6  Term   → Factor Term'
       7  Term'  → * Factor Term'
       8         | / Factor Term'
       9         | ε
      10  Factor → ( Expr )
      11         | number
      12         | id

- This produces a parser with six mutually recursive routines: Goal, Expr, EPrime, Term, TPrime, Factor
- Each recognizes one NT or T
- The term "descent" refers to the direction in which the parse tree is built
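  For instance, computing the sets by hand for the grammar above: FIRST+(+ Term Expr') = { + }, FIRST+(- Term Expr') = { - }, and FIRST+(ε) = { ε } ∪ FOLLOW(Expr') = { ε, ), eof }. These are pairwise disjoint, so EPrime can always choose its production with one token of lookahead; the other non-terminals check out the same way.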

Recursive Descent Parsing (Procedural)
- A couple of routines from the expression parser:

      Goal( )
          word ← nextWord( );
          if (Expr( ) = true & word = EOF)
              then proceed to next step;
              else return false;                  // looking for EOF, found word instead

      Expr( )
          if (Term( ) = false)
              then return false;
              else return EPrime( );

      Factor( )
          if (word = "(") then
              word ← nextWord( );
              if (Expr( ) = false)
                  then return false;
              else if (word ≠ ")") then
                  report syntax error;
                  return false;
          else if (word ≠ num and word ≠ ident) then
              report syntax error;                // looking for Number or Identifier, found word instead
              return false;
          word ← nextWord( );
          return true;

- EPrime, Term, & TPrime follow the same basic lines (Figure 3.7, EAC)
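For comparison, here is a compact runnable sketch of the same recognizer in Python rather than the book's pseudocode. The token representation (a list of strings with an implicit "eof" appended) is an assumption made for this illustration:

    class Parser:
        """Token stream helper: tokens is a list of strings, e.g. ["a", "+", "2"]."""
        def __init__(self, tokens):
            self.tokens = list(tokens) + ["eof"]
            self.pos = 0
        def word(self):
            return self.tokens[self.pos]
        def next_word(self):
            self.pos += 1

    def goal(p):                          # Goal -> Expr
        return expr(p) and p.word() == "eof"

    def expr(p):                          # Expr -> Term Expr'
        return term(p) and eprime(p)

    def eprime(p):                        # Expr' -> + Term Expr' | - Term Expr' | ε
        if p.word() in ("+", "-"):
            p.next_word()
            return term(p) and eprime(p)
        return True                       # ε: a valid lookahead is in FOLLOW(Expr')

    def term(p):                          # Term -> Factor Term'
        return factor(p) and tprime(p)

    def tprime(p):                        # Term' -> * Factor Term' | / Factor Term' | ε
        if p.word() in ("*", "/"):
            p.next_word()
            return factor(p) and tprime(p)
        return True

    def factor(p):                        # Factor -> ( Expr ) | number | id
        if p.word() == "(":
            p.next_word()
            if not expr(p) or p.word() != ")":
                return False              # looking for ")", found something else
            p.next_word()
            return True
        if p.word() != "eof" and (p.word().isdigit() or p.word().isidentifier()):
            p.next_word()
            return True
        return False                      # looking for Number or Identifier

    # e.g. goal(Parser(["a", "+", "2", "*", "(", "b", "-", "1", ")"]))  ->  True

Each routine mirrors one non-terminal, exactly as the slide describes; error reporting is reduced to returning False to keep the sketch short.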

Recursive Descent Parsing
- To build a parse tree:
  - Augment the parsing routines to build nodes
  - Pass nodes between routines using a stack
  - Node for each symbol on the rhs
  - Action is to pop the rhs nodes, make them children of the lhs node, and push this subtree
- For example, Expr( ) becomes:

      Expr( )
          result ← true;
          if (Term( ) = false)
              then return false;
          else if (EPrime( ) = false)
              then result ← false;
          else begin                      // success: build a piece of the parse tree
              build an Expr node
              pop EPrime node
              pop Term node
              make EPrime & Term children of Expr
              push Expr node
          end;
          return result;

- To build an abstract syntax tree:
  - Build fewer nodes
  - Put them together in a different order
- This is a preview of Chapter 4
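A brief sketch of the same idea in Python, reusing the Parser class from the earlier sketch. It uses return values instead of an explicit node stack, and a simplified atom-only term; both are simplifications for illustration, not the slide's stack-based scheme:

    # AST-building variant of Expr/Expr'. A node is a nested tuple (op, left, right);
    # leaves are the token strings; None signals a syntax error.

    def parse_expr(p):                    # Expr -> Term Expr'
        left = parse_atom(p)
        return None if left is None else parse_eprime(p, left)

    def parse_eprime(p, left):            # Expr' -> + Term Expr' | - Term Expr' | ε
        while p.word() in ("+", "-"):     # iteration in place of the right recursion
            op = p.word()
            p.next_word()
            right = parse_atom(p)
            if right is None:
                return None
            left = (op, left, right)      # fewer nodes: one operator node per operator
        return left

    def parse_atom(p):                    # simplified Term: a single number or identifier
        tok = p.word()
        if tok != "eof" and (tok.isdigit() or tok.isidentifier()):
            p.next_word()
            return tok
        return None

    # parse_expr(Parser(["a", "+", "2", "-", "b"]))  ->  ('-', ('+', 'a', '2'), 'b')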

Left Factoring
- What if my grammar does not have the LL(1) property? Sometimes, we can transform the grammar
- The algorithm (a code sketch follows the example below):
  - ∀ A ∈ NT, find the longest prefix α that occurs in two or more right-hand sides of A
  - If α ≠ ε, then replace all of the A productions
        A → α β1 | α β2 | … | α βn | γ
    with
        A → α Z | γ
        Z → β1 | β2 | … | βn
    where Z is a new element of NT
  - Repeat until no common prefixes remain

Left Factoring
- A graphical explanation for the same idea:
      A → α β1 | α β2 | α β3
  becomes
      A → α Z
      Z → β1 | β2 | β3
  (the original slide draws this as a diagram: the common prefix α is shared once, and the new NT Z chooses among β1, β2, β3)

Left Factoring (An example)
- Consider the following fragment of the expression grammar:

      Factor → Identifier
             | Identifier [ ExprList ]
             | Identifier ( ExprList )

      FIRST(rhs1) = { Identifier }
      FIRST(rhs2) = { Identifier }
      FIRST(rhs3) = { Identifier }

- After left factoring, it becomes:

      Factor    → Identifier Arguments
      Arguments → [ ExprList ]
                | ( ExprList )
                | ε

      FIRST(rhs1)  = { Identifier }
      FIRST(rhs2)  = { [ }
      FIRST(rhs3)  = { ( }
      FIRST+(rhs4) = { ε } ∪ FOLLOW(Factor)

- This form describes the same language, and it has the LL(1) property
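A minimal sketch of the left-factoring algorithm in Python, assuming a grammar stored as a dict from each non-terminal to a list of right-hand-side tuples (the representation and helper names are illustrative, not from the slides):

    def common_prefix(a, b):
        # longest common prefix of two rhs tuples
        n = 0
        while n < min(len(a), len(b)) and a[n] == b[n]:
            n += 1
        return a[:n]

    def left_factor(grammar):
        """Repeatedly factor the longest prefix shared by two or more alternatives of an NT."""
        grammar = {nt: [tuple(rhs) for rhs in alts] for nt, alts in grammar.items()}
        changed = True
        while changed:
            changed = False
            for nt, alts in list(grammar.items()):
                # find the longest prefix α that occurs in two or more right-hand sides
                best = ()
                for i in range(len(alts)):
                    for j in range(i + 1, len(alts)):
                        p = common_prefix(alts[i], alts[j])
                        if len(p) > len(best):
                            best = p
                if not best:
                    continue
                keep     = [a for a in alts if a[:len(best)] != best]
                factored = [a[len(best):] or ("ε",) for a in alts if a[:len(best)] == best]
                new_nt = nt + "'"            # Z: a fresh NT (a real tool would guarantee freshness)
                grammar[nt] = keep + [best + (new_nt,)]
                grammar[new_nt] = factored
                changed = True
        return grammar

    # Example: the fragment above
    g = {"Factor": [("Identifier",),
                    ("Identifier", "[", "ExprList", "]"),
                    ("Identifier", "(", "ExprList", ")")]}
    print(left_factor(g))
    # {'Factor':  [('Identifier', "Factor'")],
    #  "Factor'": [('ε',), ('[', 'ExprList', ']'), ('(', 'ExprList', ')')]}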

Left Factoring Graphically
- Before (no basis for choice, since each rhs begins with Identifier):
      Factor → Identifier
             | Identifier [ ExprList ]
             | Identifier ( ExprList )
- After (the next word determines the correct choice):
      Factor → Identifier, then one of: ε, [ ExprList ], ( ExprList )

Left Factoring (Generality)
- Question: by eliminating left recursion and left factoring, can we transform an arbitrary CFG to a form where it meets the LL(1) condition (and can be parsed predictively with a single token of lookahead)?
- Answer: given a CFG that doesn't meet the LL(1) condition, it is undecidable whether or not an equivalent LL(1) grammar exists
- Example: { a^n 0 b^n | n ≥ 1 } ∪ { a^n 1 b^(2n) | n ≥ 1 } has no LL(1) grammar

Language that Cannot Be LL(1)
- Example: { a^n 0 b^n | n ≥ 1 } ∪ { a^n 1 b^(2n) | n ≥ 1 } has no LL(1) grammar

      G → a A b
        | a B b b
      A → a A b
        | 0
      B → a B b b
        | 1

- Problem: the parser needs an unbounded number of a's before it can determine whether it is in the A group or the B group

Recursive Descent (Summary)
1. Build FIRST (and FOLLOW) sets
2. Massage the grammar to have the LL(1) condition
   a. Remove left recursion
   b. Left factor it
3. Define a procedure for each non-terminal
   a. Implement a case for each right-hand side
   b. Call procedures as needed for non-terminals
4. Add extra code, as needed
   a. Perform context-sensitive checking
   b. Build an IR to record the code

- Can we automate this process?

FIRST and FOLLOW Sets
- FIRST(α): for a string of grammar symbols α, define FIRST(α) as the set of tokens that appear as the first symbol in some string that derives from α
  - That is, x ∈ FIRST(α) iff α ⇒* x γ, for some γ
- FOLLOW(A): for some A ∈ NT, define FOLLOW(A) as the set of symbols that can occur immediately after A in a valid sentence
  - FOLLOW(S) = { EOF }, where S is the start symbol
- To build FIRST+ sets, we also need FOLLOW sets
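A fixed-point sketch of the standard FIRST/FOLLOW computation in Python; the grammar representation and names are assumptions for illustration (same dict-of-tuples form as the left-factoring sketch above):

    # Grammar: dict mapping each NT to a list of rhs tuples; "ε" marks an empty rhs.
    # Any symbol that is not a key of the dict is treated as a terminal.

    def first_sets(grammar):
        FIRST = {nt: set() for nt in grammar}

        def first_of(rhs):
            """FIRST of a string of symbols (a whole rhs or a suffix of one)."""
            out = set()
            for sym in rhs:
                if sym == "ε":
                    out.add("ε")
                    break
                if sym not in grammar:              # terminal
                    out.add(sym)
                    break
                out |= FIRST[sym] - {"ε"}
                if "ε" not in FIRST[sym]:
                    break
            else:                                   # every symbol can derive ε
                out.add("ε")
            return out

        changed = True
        while changed:                              # iterate to a fixed point
            changed = False
            for nt, alts in grammar.items():
                for rhs in alts:
                    new = first_of(rhs)
                    if not new <= FIRST[nt]:
                        FIRST[nt] |= new
                        changed = True
        return FIRST, first_of

    def follow_sets(grammar, start, first_of):
        FOLLOW = {nt: set() for nt in grammar}
        FOLLOW[start].add("eof")
        changed = True
        while changed:
            changed = False
            for nt, alts in grammar.items():
                for rhs in alts:
                    for i, sym in enumerate(rhs):
                        if sym not in grammar:      # only NTs have FOLLOW sets
                            continue
                        tail = first_of(rhs[i + 1:])        # {"ε"} if nothing follows sym
                        new = tail - {"ε"}
                        if "ε" in tail:
                            new |= FOLLOW[nt]
                        if not new <= FOLLOW[sym]:
                            FOLLOW[sym] |= new
                            changed = True
        return FOLLOW

    # FIRST+ of a production A -> rhs is then:
    #   first_of(rhs) | FOLLOW[A]   if "ε" is in first_of(rhs), else just first_of(rhs)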

Building Top-down Parsers
- Given an LL(1) grammar, and its FIRST & FOLLOW sets:
  - Emit a routine for each non-terminal
    - A nest of if-then-else statements to check the alternate rhs's
    - Each returns true on success and throws an error on false
  - Simple, working (if perhaps ugly) code
  - This automatically constructs a recursive-descent parser
- Improving matters:
  - I don't know of a system that does this
  - A nest of if-then-else statements may be slow; a good case-statement implementation would be better
  - What about a table to encode the options? Interpret the table with a skeleton parser, as we did in scanning

Building Top-down Parsers
- Strategy:
  - Encode knowledge in a table
  - Use a standard skeleton parser to interpret the table
- Example: the non-terminal Factor has three expansions: ( Expr ), Identifier, or Number
- Table might look like:

      Terminal symbols:     +     -     *     /     (    Id.   Num.   EOF
      Factor                --    --    --    --    10    12    11     --

  (-- marks an error entry, e.g. on "+"; a number names the rule to expand by, e.g. expand Factor by rule 11 when the lookahead is a Number)

Building Top-down Parsers
- Building the complete table:
  - Need a row for every NT & a column for every T
  - Need a table-driven interpreter for the table

LL(1) Skeleton Parser

      word ← nextWord( )
      push EOF onto Stack
      push the start symbol onto Stack
      TOS ← top of Stack
      loop forever
          if TOS = EOF and word = EOF then
              report success and exit            // exit on success
          else if TOS is a terminal or EOF then
              if TOS matches word then
                  pop Stack                      // recognized TOS
                  word ← nextWord( )
              else report error looking for TOS
          else                                   // TOS is a non-terminal
              if TABLE[TOS, word] is A → B1 B2 … Bk then
                  pop Stack                      // get rid of A
                  push Bk, Bk-1, …, B1           // in that order, so B1 ends up on top
              else report error expanding TOS
          TOS ← top of Stack
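The same skeleton as runnable Python; the table format, a dict keyed by (non-terminal, lookahead token) whose value is the right-hand side to push, is an assumption for illustration:

    # Table-driven LL(1) skeleton. An ε-production is stored as the empty tuple ().
    # "eof" marks the end of the input.

    def ll1_parse(table, start, tokens):
        nonterminals = {nt for (nt, _) in table}
        tokens = list(tokens) + ["eof"]
        pos = 0
        stack = ["eof", start]
        while True:
            tos = stack[-1]
            word = tokens[pos]
            if tos == "eof" and word == "eof":
                return True                           # success
            if tos not in nonterminals:               # terminal (or eof)
                if tos != word:
                    raise SyntaxError(f"looking for {tos}, found {word}")
                stack.pop()                           # recognized tos
                pos += 1
            else:                                     # non-terminal: consult the table
                rhs = table.get((tos, word))
                if rhs is None:
                    raise SyntaxError(f"error expanding {tos} on {word}")
                stack.pop()                           # get rid of A
                stack.extend(reversed(rhs))           # push Bk ... B1, so B1 is on top

    # Usage, given a table built from FIRST+ sets (see the construction sketch below):
    #   ll1_parse(table, "Goal", ["id", "+", "num", "*", "id"])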

Building Top-down Parsers
- Building the complete table:
  - Need a row for every NT & a column for every T
  - Need an algorithm to build the table
- Filling in TABLE[X, y], X ∈ NT, y ∈ T:
  1. The entry is the rule X → β, if y ∈ FIRST(β)
  2. The entry is the rule X → ε, if y ∈ FOLLOW(X) and X → ε ∈ G
  3. The entry is "error" if neither 1 nor 2 defines it
- If any entry is defined multiple times, G is not LL(1)
- This is the LL(1) table construction algorithm
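A sketch of that construction in Python, reusing the FIRST/FOLLOW helpers sketched earlier; it folds rules 1 and 2 above into one step by using FIRST+ (the grammar form and names are illustrative assumptions):

    # Build TABLE[(A, y)] = rhs for every production A -> rhs and every y in FIRST+(rhs).
    # A conflicting entry means G is not LL(1).

    def build_ll1_table(grammar, start):
        FIRST, first_of = first_sets(grammar)             # from the earlier sketch
        FOLLOW = follow_sets(grammar, start, first_of)
        table = {}
        for A, alts in grammar.items():
            for rhs in alts:
                fp = first_of(rhs)                        # FIRST(rhs)
                if "ε" in fp:                             # FIRST+ adds FOLLOW(A)
                    fp = (fp - {"ε"}) | FOLLOW[A]
                body = () if rhs == ("ε",) else rhs
                for y in fp:                              # one entry per lookahead token
                    if (A, y) in table and table[(A, y)] != body:
                        raise ValueError(f"not LL(1): two entries for TABLE[{A}, {y}]")
                    table[(A, y)] = body
        return table

    # Usage sketch, with the expression grammar written in the dict form used above:
    #   g = {"Goal":   [("Expr",)],
    #        "Expr":   [("Term", "Expr'")],
    #        "Expr'":  [("+", "Term", "Expr'"), ("-", "Term", "Expr'"), ("ε",)],
    #        "Term":   [("Factor", "Term'")],
    #        "Term'":  [("*", "Factor", "Term'"), ("/", "Factor", "Term'"), ("ε",)],
    #        "Factor": [("(", "Expr", ")"), ("number",), ("id",)]}
    #   table = build_ll1_table(g, "Goal")
    #   ll1_parse(table, "Goal", ["id", "+", "number", "*", "id"])   ->  True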
