This document covers Unit II of the Theory of Computation, focusing on Regular Languages and Context-Free Grammars. It discusses properties of regular sets, regular expressions, and methods for converting finite automata to regular expressions. Additionally, it includes concepts like identity rules, Arden's theorem, and the state elimination method for constructing finite automata.


Theory of

Computation
UNIT-II
BY
V.SRAVANTHI
SYLLABUS

UNIT - II
Regular Languages: Regular sets, regular expressions, identity rules,
Constructing finite Automata for a given regular expression, Conversion
of Finite Automata to Regular expressions.
Context Free Grammars: Definition, Ambiguity in context free grammars.
Simplification of Context Free Grammars. Chomsky normal form,
Greibach normal form, Enumeration of properties of CFLs (proofs
omitted), Chomsky's hierarchy of languages.
Regular Languages:
A language is said to be regular if and only if
some finite state machine recognizes it.
A language is not regular if:
 It is not recognized by any FSM
 Recognizing it requires memory
Regular languages are also called rational
languages.
A regular language can be expressed using a regular
expression.
Example: L = {ε, a, aa, aaa, …} is the regular set given by RE = a*
Regular Set: Any set that represents the value of the Regular
Expression is called a Regular Set.

Properties of Regular Sets:


Property 1. The union of two regular sets is regular.
Proof − Let us take two regular expressions
RE1 = a(aa)* and RE2 = (aa)*. So,
L1 = {a, aaa, aaaaa, .....}
(Strings of odd length, excluding Null)
L2 = {ε, aa, aaaa, aaaaaa, .......}
(Strings of even length, including Null)
L1 ∪ L2 = {ε, a, aa, aaa, aaaa, aaaaa, aaaaaa, .......}
(Strings of all possible lengths, including Null)
RE (L1 ∪ L2) = a*, which is a regular expression itself.
Hence, proved.
Property 2. The intersection of two regular sets
is regular.
Proof − Let us take two regular expressions
RE1 = a(a*) and RE2 = (aa)*
So, L1 = {a, aa, aaa, aaaa, ....}
(Strings of all possible lengths, excluding Null)
L2 = {ε, aa, aaaa, aaaaaa, .......}
(Strings of even length, including Null)
L1 ∩ L2 = {aa, aaaa, aaaaaa, .......}
(Strings of even length, excluding Null)
RE (L1 ∩ L2) = aa(aa)*,
which is a regular expression itself.
Hence, proved.
Property 3. The complement of a regular set is regular
Proof −Let us take a regular expression −RE = (aa)*
So, L = {ε, aa, aaaa, aaaaaa, .......}
(Strings of even length including Null)
The complement of L is the set of all strings that are not in L.
So, L’ = {a, aaa, aaaaa, .....}
(Strings of odd length excluding Null)
RE (L’) = a(aa)*
which is a regular expression itself. Hence, proved

Property 4. The difference of two regular sets is regular.


Proof −Let us take two regular expressions −
RE1 = a (a*) and RE2 = (aa)*
So, L1 = {a, aa, aaa, aaaa, ....}
(Strings of all possible lengths excluding Null)
L2 = { ε, aa, aaaa, aaaaaa,.......}
(Strings of even length including Null)
L1 – L2 = {a, aaa, aaaaa, aaaaaaa, ....}
(Strings of all odd lengths excluding Null)
RE (L1 – L2) = a (aa)* which is a regular expression. Hence, proved.
Property 5. The reversal of a regular set is regular.
Proof − We have to prove that LR is also regular if L is a regular set.
Let L = {01, 10, 11}
RE (L) = 01 + 10 + 11
LR = {10, 01, 11}
RE (LR) = 10 + 01 + 11, which is regular. Hence, proved.

Property 6. The closure of a regular set is regular.


Proof − If L = {a, aaa, aaaaa, .......} (Strings of odd length, excluding Null),
i.e., RE (L) = a(aa)*,
then L* = {ε, a, aa, aaa, aaaa, aaaaa, ……} (Strings of all lengths, including
Null)
RE (L*) = a*
Hence, proved.
Property 7. The concatenation of two regular sets is regular.
Proof −
Let RE1 = (0+1)*0 and RE2 = 01(0+1)*
Here, L1 = {0, 00, 10, 000, 010, ......} (Set of strings ending in 0)
and L2 = {01, 010,011,.....} (Set of strings beginning with 01)
Then, L1 L2 = {001, 0010, 0011, 0001, 00010, 00011, 1001, 10010, .............}
(Set of strings containing 001 as a substring), which can be represented by
the RE (0 + 1)*001(0 + 1)*
Hence, proved
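
These set identities can be spot-checked mechanically. The sketch below (an illustration, not part of the original slides) uses Python's re module to confirm, for all strings of a's up to a fixed length, that the union from Property 1 equals a* and the intersection from Property 2 equals aa(aa)*:

```python
import re

# Enumerate all strings over {a} up to a bounded length and compare
# membership in the languages used in Properties 1 and 2.
def matches(pattern, s):
    return re.fullmatch(pattern, s) is not None

for n in range(0, 12):
    s = "a" * n
    # Property 1: L(a(aa)*) U L((aa)*) should equal L(a*)
    in_union = matches(r"a(aa)*", s) or matches(r"(aa)*", s)
    assert in_union == matches(r"a*", s)
    # Property 2: L(aa*) intersect L((aa)*) should equal L(aa(aa)*)
    in_inter = matches(r"aa*", s) and matches(r"(aa)*", s)
    assert in_inter == matches(r"aa(aa)*", s)

print("Properties 1 and 2 agree on all strings of a's up to length 11")
```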
Regular Expression

Just as finite automata are used to recognize patterns of
strings, regular expressions are used to generate patterns of
strings. A regular expression is an algebraic formula whose
value is a pattern consisting of a set of strings, called the
language of the expression.
Operands in a regular expression can be:
characters from the alphabet over which the regular
expression is defined.
variables whose values are any pattern defined by a regular
expression.
epsilon which denotes the empty string containing no
characters.
null which denotes the empty set of strings
Operators used in regular expressions

Union: If R1 and R2 are regular expressions, then R1 | R2


(also written as R1 U R2 or R1 + R2) is also a regular
expression.
L(R1|R2) = L(R1) U L(R2).
Concatenation: If R1 and R2 are regular expressions,
then R1R2 (also written as R1.R2) is also a regular
expression.
L(R1R2) = L(R1) concatenated with L(R2).
Kleene closure: If R1 is a regular expression, then R1*
(the Kleene closure of R1) is also a regular expression.
L(R1*) = epsilon U L(R1) U L(R1R1) U L(R1R1R1) U ...
Closure has the highest precedence, followed by
concatenation, followed by union.
A Regular Expression can be recursively defined as
follows −
1. ε is a Regular Expression indicates the language
containing an empty string. (L (ε) = {ε})
2. φ is a Regular Expression denoting an empty
language. (L (φ) = { })
3. x is a Regular Expression where L = {x}
4. If X is a Regular Expression denoting the
language L(X) and Y is a Regular Expression denoting
the language L(Y), then
a. X + Y is a Regular Expression corresponding to the
language L(X) ∪ L(Y) where L(X+Y) = L(X) ∪ L(Y).
b. X . Y is a Regular Expression corresponding to the
language L(X) . L(Y) where L(X.Y) = L(X) . L(Y)
c. X* is a Regular Expression corresponding to the
language L(X*) where L(X*) = (L(X))*
5. Any expression obtained by applying rules 1 to 4 a
finite number of times is a Regular Expression.
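
This recursive definition translates directly into a small evaluator. The sketch below is illustrative only — the tuple-based encoding of regular expressions and the function name lang are my own — and it computes the set of strings of length ≤ n denoted by an expression built from rules 1–4:

```python
# Regular expressions as nested tuples:
#   ("eps",), ("phi",), ("sym", "a"),
#   ("union", X, Y), ("concat", X, Y), ("star", X)
def lang(rex, maxlen):
    """Return the set of strings of length <= maxlen in L(rex)."""
    kind = rex[0]
    if kind == "eps":
        return {""}
    if kind == "phi":
        return set()
    if kind == "sym":
        return {rex[1]} if len(rex[1]) <= maxlen else set()
    if kind == "union":                      # L(X + Y) = L(X) U L(Y)
        return lang(rex[1], maxlen) | lang(rex[2], maxlen)
    if kind == "concat":                     # L(X.Y) = L(X) . L(Y)
        return {u + v
                for u in lang(rex[1], maxlen)
                for v in lang(rex[2], maxlen)
                if len(u + v) <= maxlen}
    if kind == "star":                       # L(X*) = (L(X))*
        result, frontier = {""}, {""}
        base = lang(rex[1], maxlen)
        while frontier:
            frontier = {u + v for u in frontier for v in base
                        if v and len(u + v) <= maxlen} - result
            result |= frontier
        return result
    raise ValueError("unknown operator")

# Example: (0 + 1)* restricted to strings of length <= 2
zero_or_one = ("union", ("sym", "0"), ("sym", "1"))
print(sorted(lang(("star", zero_or_one), 2)))
# ['', '0', '00', '01', '1', '10', '11']
```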
Some RE Examples

Regular Expression : Regular Set

(0 + 10*) : L = {0, 1, 10, 100, 1000, 10000, …}

(0*10*) : L = {1, 01, 10, 010, 0010, …}

(0 + ε)(1 + ε) : L = {ε, 0, 1, 01}

(a + b)* : Set of strings of a's and b's of any length, including the null string. So L = {ε, a, b, aa, ab, bb, ba, aaa, …}

(a + b)*abb : Set of strings of a's and b's ending with the string abb. So L = {abb, aabb, babb, aaabb, ababb, …}

(11)* : Set consisting of an even number of 1's, including the empty string. So L = {ε, 11, 1111, 111111, …}

(aa)*(bb)*b : Set of strings consisting of an even number of a's followed by an odd number of b's. So L = {b, aab, aabbb, aabbbbb, aaaab, aaaabbb, …}

(aa + ab + ba + bb)* : Strings of a's and b's of even length, obtained by concatenating any combination of the strings aa, ab, ba and bb, including null. So L = {ε, aa, ab, ba, bb, aaab, aaba, …}

**Example**
Identity rules for regular
expression
 The two regular expressions P and Q are
equivalent (denoted as P = Q) if and only if P
represents the same set of strings as Q does.
 Let P, Q and R be regular expressions;
then the identity rules are as follows −


1. εR = Rε = R
2. ε* = ε (ε is the null string)
3. Φ* = ε (Φ is the empty set)
4. ΦR = RΦ = Φ
5. Φ + R = R
6. R + R = R
7. RR* = R*R = R+
8. (R*)* = R*
9. ε + RR* = R*
10. (P + Q)R = PR + QR
11. (P + Q)* = (P*Q*)* = (P* + Q*)*
12. R*(ε + R) = (ε + R)R* = R*
13. (R + ε)* = R*
14. ε + R* = R*
15. (PQ)*P = P(QP)*
16. R*R + R = R*R
17. R+ = RR* = R*R
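
Identities such as rule 11, (P+Q)* = (P*Q*)* = (P*+Q*)*, can be spot-checked on bounded-length strings. A minimal sketch, assuming P = a and Q = b and using Python's re module (where union + is written |):

```python
import re
from itertools import product

def matches(pattern, s):
    return re.fullmatch(pattern, s) is not None

# Check identity 11 with P = a, Q = b on all strings over {a, b} of length <= 6.
patterns = [r"(a|b)*", r"(a*b*)*", r"((a*)|(b*))*"]
for n in range(0, 7):
    for chars in product("ab", repeat=n):
        s = "".join(chars)
        results = {matches(p, s) for p in patterns}
        assert len(results) == 1, f"identity fails on {s!r}"

print("(P+Q)* = (P*Q*)* = (P*+Q*)* agree on all strings up to length 6")
```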
Arden's Theorem

The Arden's Theorem is useful for checking


the equivalence of two regular expressions as
well as in the conversion of DFA to a regular
expression.
Arden’s theorem state that: “If P and Q are
two regular expressions over “∑”, and if P
does not contain “∈” , then the following
equation in R given by R = Q + RP has a
unique solution i.e., R = QP*.”
1. Proof that R = QP* is a solution of R = Q + RP
2. Proof that R = QP* is the unique solution of R = Q + RP

Proof-1: solution
R = Q + RP ......(i)
Now, replacing R by QP* on the right-hand side, we get
R = Q + QP*P
Taking Q as common,
R = Q(∈ + P*P) = QP*
(As we know that ∈ + R*R = R*.) Hence proved. Thus, R = QP* is a
solution of the equation R = Q + RP.
Proof-2: uniqueness

Consider R = Q + RP.
Now, replace R on the right-hand side by Q + RP:
R = Q + (Q + RP)P = Q + QP + RP^2
Again, replace R by Q + RP:
R = Q + QP + (Q + RP)P^2 = Q + QP + QP^2 + RP^3 . . .
Repeating this n times,
R = Q + QP + QP^2 + .. + QP^n + RP^(n+1)
Now, replace R by QP*; we get
R = Q + QP + QP^2 + .. + QP^n + QP*P^(n+1)
Taking Q as common,
R = Q(∈ + P + P^2 + .. + P^n + P*P^(n+1)) = QP*
[As ∈ + P + P^2 + .. + P^n + P*P^(n+1) represents the closure of P]
Hence proved. Thus, R = QP* is the unique solution of the equation R = Q + RP.
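
Arden's theorem can also be spot-checked on bounded-length strings. The sketch below is illustrative only: P = 0 and Q = 1 are arbitrary choices, and the check enumerates all strings over {0, 1} up to a fixed length to confirm that R = QP* satisfies R = Q + RP.

```python
import re
from itertools import product

def lang(pattern, alphabet="01", maxlen=8):
    """All strings over the alphabet of length <= maxlen matching the RE."""
    out = set()
    for n in range(maxlen + 1):
        for chars in product(alphabet, repeat=n):
            s = "".join(chars)
            if re.fullmatch(pattern, s):
                out.add(s)
    return out

P, Q, MAX = "0", "1", 8
R = lang(Q + P + "*", maxlen=MAX)          # candidate solution R = QP* = 10*

# Right-hand side Q + RP, truncated to the same length bound.
rhs = lang(Q, maxlen=MAX) | {r + p for r in R for p in lang(P, maxlen=MAX)
                             if len(r + p) <= MAX}

assert R == rhs                             # R = Q + RP on bounded strings
print("R = QP* satisfies R = Q + RP on all strings up to length", MAX)
```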
Converting DFA to RE

There are three ways:
1. Arden's theorem
2. State elimination method
3. Table method (or) Kleene's theorem
Construct a regular expression corresponding
to the automaton given below.

Solution −
Here the initial state is q1 and the final state is q2.
Now we write down the equations −
q1 = q1 0 + ε
q2 = q1 1 + q2 0
q3 = q2 1 + q3 0 + q3 1
Now, we will solve these three equations −
q1 = ε + q1 0
So, q1 = ε(0)* = 0*   [By Arden's theorem, with Q = ε and P = 0]
q2 = 0*1 + q2 0
So, q2 = 0*1(0)*   [By Arden's theorem]
Since q2 is the final state, the regular expression is 0*10*.
For completeness,
q3 = q2 1 + q3 0 + q3 1
q3 = 0*10*1 + q3 (0 + 1)   [form Q + RP]
q3 = 0*10*1 (0 + 1)*   [By Arden's theorem]
State Elimination method
Rule-1 :
 The initial state of the DFA must not have

any incoming edge.


 If there exists any incoming edge to the

initial state, then create a new initial state


having no incoming edge to it.
Example-
Rule 2: There must exist only one final state
in the DFA.
If there exists multiple final states in the DFA,
then convert all the final states into non-final
states and create a new single final state.
Rule-3 :
The final state of the DFA must not have any
outgoing edge.

If there exists any outgoing edge from the final
state, then create a new final state having no
outgoing edge from it.
Rule 4:
 Eliminate all the intermediate states one by

one.
 These states may be eliminated in any order.
 In the end, Only an initial state going to the

final state will be left.


 The cost of this transition is the required

regular expression.
NOTE
 The state elimination method can be applied
to any finite automaton
 (NFA, ∈-NFA, DFA, etc.)
**Example**
3. Table method (or) Kleene's
theorem
(or) Rij method
Rij^(k) = Rij^(k-1) + Rik^(k-1) (Rkk^(k-1))* Rkj^(k-1)

Where i = from (start) state
j = to state
k = iteration number (paths may pass through states numbered 1..k)
First time consider i = 1, j = 1, k = 1,
i.e., R11^(1)
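
A minimal sketch of this recurrence in Python (illustrative only: the 0-indexed states, the string representation of REs with 'e' standing for ε, and the example DFA are my own assumptions; the output is correct but left unsimplified):

```python
def dfa_to_re(states, delta, start, finals):
    """Kleene's construction.  R[i][j] holds an RE (as a string, 'e' = ε) for
    the strings taking state i to state j; after iteration k the paths may pass
    through intermediate states 0..k (0-indexed here, 1..n on the slide)."""
    n = len(states)

    def union(x, y):
        if x is None: return y
        if y is None: return x
        return f"({x}+{y})"

    def concat(x, y):
        if x is None or y is None: return None
        if x == "e": return y
        if y == "e": return x
        return x + y

    def star(x):
        if x is None or x == "e": return "e"
        return f"({x})*"

    # Base case k = 0: direct transitions, plus ε when i == j.
    R = [[None] * n for _ in range(n)]
    for i, p in enumerate(states):
        for j, q in enumerate(states):
            labels = [a for (s, a), t in delta.items() if s == p and t == q]
            r = None if not labels else (
                labels[0] if len(labels) == 1 else "(" + "+".join(labels) + ")")
            R[i][j] = union("e", r) if i == j else r

    # Inductive step: Rij^(k) = Rij^(k-1) + Rik^(k-1) (Rkk^(k-1))* Rkj^(k-1)
    for k in range(n):
        R = [[union(R[i][j], concat(concat(R[i][k], star(R[k][k])), R[k][j]))
              for j in range(n)] for i in range(n)]

    i0 = states.index(start)
    parts = [R[i0][states.index(f)] for f in finals if R[i0][states.index(f)]]
    return "+".join(parts) if parts else "Φ"

# Hypothetical 2-state DFA accepting strings over {0,1} that end in 1.
states = ["q1", "q2"]
delta = {("q1", "0"): "q1", ("q1", "1"): "q2",
         ("q2", "0"): "q1", ("q2", "1"): "q2"}
print(dfa_to_re(states, delta, "q1", ["q2"]))
```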

***Examples***
Constructing Finite automata for given
Regular Expression
Equivalence of NFA & RE
1. Convert the following RE into its
equivalent NFA:
1 (0 + 1)* 0
Solution
We will concatenate three expressions "1", "(0 + 1)*" and "0"
Conversion RE to FA
Examples…
Pumping Lemma
There are two Pumping Lemmas, defined for:
1. Regular Languages
2. Context-Free Languages
Example:
1. Prove that L = {a^n b^n | n ≥ 0} is not
regular.
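A brief sketch of the standard argument for this example: assume L = {a^n b^n | n ≥ 0} is regular, so the pumping lemma gives a pumping length p. Take w = a^p b^p ∈ L. In any split w = xyz with |xy| ≤ p and |y| > 0, the substring y lies entirely within the leading a's, so the pumped string xy^2z has more a's than b's and is not in L. This contradiction shows that L is not regular (intuitively, an FSM has no memory with which to count the a's).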
Regular Grammar
It can be defined as G={V,T,P,S} where
V=Set of symbols called Non terminals
T=Set of symbols called Terminals
P=Set of Production rules
S=start symbol where S belongs to V
Regular Grammar can be of two forms:
 Right Linear Regular Grammar
 Left Linear Regular Grammar

Right Linear Regular Grammar


 All the non-terminals on the right-hand side occur at the
rightmost place, i.e., at the right end.


Example :
A ⇢ a, A ⇢ aB, A ⇢ ∈
where, A and B are non-terminals,
a is terminal, and ∈ is empty string

Left Linear Regular Grammar


 All the non-terminals on the right-hand side occur at the
leftmost place, i.e., at the left end.


Example :
A ⇢ a, A ⇢ Ba, A ⇢ ∈ where, A and B are non-terminals, a
is terminal, and ∈ is empty string
Conversion of RE to RG
1. Convert RE to NFA with epsilon
2. Convert NFA with epsilon to FA
3. Convert FA to RG
states -Non terminals
Transitions--Productions

**Examples**
Conversion of RG to RE
a) Convert RG to FA
b) Convert FA to RE(Arden’s theorem)

**Examples**
Context free grammar

 Context free grammar is a formal grammar which is used


to generate all possible strings in a given formal
language.
 Context free grammar G can be defined by four tuples as:
G= (V, T, P, S)
T describes a finite set of terminal symbols.

V describes a finite set of non-terminal symbols

P describes a set of production rules

S is the start symbol.

 In CFG, the start symbol is used to derive the string. You


can derive the string by repeatedly replacing a non-
terminal by the right-hand side of a production, until all
non-terminals have been replaced by terminal symbols.
NOTE:
1. Every RG is a CFG.
2. The family of regular languages is a proper
subset of the family of CFLs.

Example
 The grammar ({A}, {a, b, c}, P, A),

P : A → aA, A → abc.

***CFG construction
Examples**
Capabilities of CFG

 Context free grammar is useful to describe


most of the programming languages.
 If the grammar is properly designed then an
efficient parser can be constructed
automatically.
 Using the features of associativity &
precedence information, suitable grammars for
expressions can be constructed.
 Context free grammar is capable of describing
nested structures like: balanced parentheses,
matching begin-end, corresponding if-then-
else's & so on.
Derivation tree (or) Parse
tree
 Derivation tree is a graphical representation for the derivation
of the given production rules for a given CFG. It is the simple
way to show how the derivation can be done to obtain some
string from a given set of production rules. The derivation tree
is also called a parse tree.
 Parse tree follows the precedence of operators. The deepest
sub-tree is traversed first, so the operator in the parent node
has lower precedence than the operator in the sub-tree.
 A parse tree contains the following properties:
1. The root node is always labelled with the start symbol.
2. The derivation is read from left to right.
3. The leaf nodes are always terminal nodes.
4. The interior nodes are always non-terminal nodes.

**Examples**
Sentential Form and Partial Derivation Tree
 A partial derivation tree is a sub-tree of a
derivation tree/parse tree such that, for every node, either all
of its children are in the sub-tree or none of
them are in the sub-tree.
Example
 If in any CFG the productions are −

S → AB, A → aaA | ε, B → Bb| ε


 the partial derivation tree can be the

following −

If a partial derivation tree contains the root S, its
yield (the string of leaves read left to right)
is called a sentential form.
Leftmost and Rightmost Derivation of a String

 Leftmost derivation − A leftmost


derivation is obtained by applying
production to the leftmost variable in each
step.

 Rightmost derivation − A rightmost


derivation is obtained by applying
production to the rightmost variable in each
step.
Example
 Let any set of production rules in a CFG be

 X → X+X | X*X |X| a over an alphabet {a}.


 The leftmost derivation for the string "a+a*a" may

be
 The stepwise derivation of the above string is shown

as below −
 X → X+X
→ a+X
→ a+X*X
→ a+a*X
→ a+a*a
The rightmost derivation for the above
string "a+a*a" may be −
 The stepwise derivation of the above string is

shown as below −
X → X*X
→ X*a
→ X+X*a
→ X+a*a
→ a+a*a

**Example**
Ambiguity in CFG
 If a context free grammar G has more than
one derivation tree for some string w ∈
L(G), it is called an ambiguous grammar.
There exist multiple right-most or left-most
derivations for some string generated from
that grammar.
Example: Check whether the grammar G with
production rules −X → X+X | X*X |X| a is
ambiguous or not.
Solution:
 Let’s find out the derivation tree for the string

"a+a*a". It has two leftmost derivations.


Derivation 1 −
X → X+X
→ a+X
→ a+X*X
→ a+a*X
→ a+a*a
Derivation 2 −
X → X*X
→ X+X*X
→ a+X*X
→ a+a*X
→ a+a*a
Since there are two parse trees for a single
string "a+a*a", the grammar G is ambiguous.

**Examples**
Simplification of Context Free Grammars.
Simplification of grammar means reduction of
grammar by removing useless symbols.
The properties of reduced grammar :
1. Each variable (i.e. non-terminal) and each
terminal of G appears in the derivation of
some word in L.
2. There should not be any production of the form X →
Y, where X and Y are non-terminals.
3. If ε is not in the language L, then there
should be no production X → ε.
Removal of Useless Symbols

 A symbol is useless if it is not reachable
from the start symbol or does not take part in
the derivation of any string of the
language. Such a symbol is
known as a useless symbol. Similarly, a
variable is useless if it cannot be reached from
the start symbol or does not derive any terminal
string. That variable is known as a useless variable.
Example:
T → aaB | abA | aaT
A → aA
B → ab | b
C → ad
 Here C is useless because it can never be
reached from the start symbol T (not reachable).

 A → aA never terminates (A derives no terminal string), so A is useless.

Final G is
T → aaB | aaT
B → ab | b
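
Both checks used in this example — does a variable derive some terminal string, and is it reachable from the start symbol — are fixed-point computations. A minimal sketch, assuming grammars are stored as a dict from a variable to a list of right-hand-side strings, with lowercase letters as terminals and uppercase letters as variables:

```python
def remove_useless(grammar, start):
    """grammar: dict mapping a variable to a list of right-hand-side strings."""
    # Pass 1: variables that derive some terminal string (generating).
    generating, changed = set(), True
    while changed:
        changed = False
        for var, rhss in grammar.items():
            if var not in generating and any(
                    all(c.islower() or c in generating for c in rhs)
                    for rhs in rhss):
                generating.add(var)
                changed = True
    g1 = {v: [r for r in rhss if all(c.islower() or c in generating for c in r)]
          for v, rhss in grammar.items() if v in generating}

    # Pass 2: symbols reachable from the start symbol.
    reachable, frontier = {start}, [start]
    while frontier:
        v = frontier.pop()
        for rhs in g1.get(v, []):
            for c in rhs:
                if c.isupper() and c not in reachable:
                    reachable.add(c)
                    frontier.append(c)
    return {v: rhss for v, rhss in g1.items() if v in reachable}

# The example grammar from the slide (A never terminates, C is unreachable).
g = {"T": ["aaB", "abA", "aaT"], "A": ["aA"], "B": ["ab", "b"], "C": ["ad"]}
print(remove_useless(g, "T"))   # {'T': ['aaB', 'aaT'], 'B': ['ab', 'b']}
```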
Elimination of ε Production

The productions of type S → ε are called ε-productions.
These types of productions can only be
removed from those grammars that do not generate
ε.
 Step 1: First find out all nullable non-terminal
variables which derive ε.

 Step 2: For each production A → α, construct all
productions A → x, where x is obtained from α by
removing one or more of the nullable non-terminals found in step 1.

 Step 3: Now combine the result of step 2 with the
original productions and remove the ε-productions.


Example:
Remove the ε-productions from the following
CFG while preserving the language it generates.
S → XYX
X → 0X | ε
Y → 1Y | ε
Solution:
While removing the ε-productions, we are
deleting the rules X → ε and Y → ε. To preserve
the meaning of the CFG, we add the productions
obtained by placing ε on the right-hand side
wherever X and Y appear.
S → XYX
If the first X on the right-hand side is ε, then S → YX.
Similarly, if the last X on the R.H.S. is ε, then S → XY.
If Y = ε, then S → XX.
If Y and one X are ε, then S → X.
If both X are replaced by ε, then S → Y.
Now, S → XY | YX | XX | X | Y
Now let us consider
X → 0X
If we place ε at right-hand side for X then,
X→0
X → 0X | 0
Similarly Y → 1Y | 1
Collectively we can rewrite the CFG with removed ε production as
S → XY | YX | XX | X | Y
X → 0X | 0
Y → 1Y | 1
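
Steps 1–3 can be written as a short routine: a fixed-point pass finds the nullable variables, then every production is expanded by dropping each combination of nullable symbols. A minimal, illustrative sketch (the dict encoding and the convention that "" stands for ε are my own assumptions):

```python
from itertools import combinations

def remove_epsilon(grammar):
    """Remove X -> ε productions, adding the right-hand sides obtained by
    deleting any subset of nullable variables (never adding ε itself)."""
    # Step 1: find nullable variables (fixed point).
    nullable, changed = set(), True
    while changed:
        changed = False
        for var, rhss in grammar.items():
            if var not in nullable and any(
                    all(c in nullable for c in rhs) for rhs in rhss):
                nullable.add(var)
                changed = True

    # Step 2: for each production, drop every combination of nullable symbols.
    new_grammar = {}
    for var, rhss in grammar.items():
        new_rhss = set()
        for rhs in rhss:
            positions = [i for i, c in enumerate(rhs) if c in nullable]
            for k in range(len(positions) + 1):
                for drop in combinations(positions, k):
                    candidate = "".join(c for i, c in enumerate(rhs)
                                        if i not in drop)
                    if candidate:              # Step 3: discard ε-productions
                        new_rhss.add(candidate)
        new_grammar[var] = sorted(new_rhss)
    return new_grammar

# The example grammar from the slide; "" stands for ε.
g = {"S": ["XYX"], "X": ["0X", ""], "Y": ["1Y", ""]}
print(remove_epsilon(g))
# {'S': ['X', 'XX', 'XY', 'XYX', 'Y', 'YX'], 'X': ['0', '0X'], 'Y': ['1', '1Y']}
```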
Removing Unit Productions

The unit productions are the productions in


which one non-terminal gives another non-
terminal. Use the following steps to remove
unit production:
Step 1: To remove X → Y, add production X →
a to the grammar rule whenever Y → a occurs
in the grammar.
Step 2: Now delete X → Y from the grammar.
Step 3: Repeat step 1 and step 2 until all
unit productions are removed.
Example:
S → 0A | 1B | C
A → 0S | 00
B→1|A
C → 01
Solution:
S → C is a unit production. But while removing S → C
we have to consider what C gives. So, we can add a
rule to S.
 S → 0A | 1B | 01

Similarly, B → A is also a unit production so we can


modify it as B → 1 | 0S | 00
Thus finally we can write CFG without unit
production as
S → 0A | 1B | 01
A → 0S | 00
B → 1 | 0S | 00
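
The same removal can be done mechanically: first compute all unit pairs (X, Y) with X deriving Y using unit productions only, then give X every non-unit production of Y. A minimal sketch, assuming the dict-of-lists grammar encoding used in the earlier sketches (single uppercase letters are variables):

```python
def remove_unit(grammar):
    """Remove productions of the form X -> Y where Y is a single variable."""
    variables = set(grammar)

    def is_unit(rhs):
        return len(rhs) == 1 and rhs in variables

    # All unit pairs (X, Y) such that X derives Y by unit steps only.
    pairs, changed = {(v, v) for v in variables}, True
    while changed:
        changed = False
        for x, y in list(pairs):
            for rhs in grammar[y]:
                if is_unit(rhs) and (x, rhs) not in pairs:
                    pairs.add((x, rhs))
                    changed = True

    # For every unit pair (X, Y), X gets all of Y's non-unit productions.
    return {x: sorted({rhs for (a, y) in pairs if a == x
                       for rhs in grammar[y] if not is_unit(rhs)})
            for x in grammar}

# The example grammar from the slide.
g = {"S": ["0A", "1B", "C"], "A": ["0S", "00"], "B": ["1", "A"], "C": ["01"]}
print(remove_unit(g))
# {'S': ['01', '0A', '1B'], 'A': ['00', '0S'], 'B': ['00', '0S', '1'], 'C': ['01']}
```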
Chomsky Normal Form(CNF)
A CFG(context free grammar) is in CNF(Chomsky
normal form) if all production rules satisfy one of
the following conditions:

1. The start symbol generating ε. For example, S → ε.


2. A non-terminal generating two non-terminals.
For example, S → AB.
3. A non-terminal generating a terminal. For
example, S → a.
Example:
G1 = {S → AB, S → c, A → a, B → b}
G2 = {S → aA, A → a, B → c}
 The production rules of Grammar G1 satisfy

the rules specified for CNF, so the grammar


G1 is in CNF.
 However, the production rule of Grammar

G2 does not satisfy the rules specified for


CNF as S → aA contains terminal followed by
non-terminal. So the grammar G2 is not in
CNF.
Steps for converting CFG into CNF

Step 1: Eliminate the start symbol from the RHS. If the start


symbol S occurs on the right-hand side of any production, create a
new production: S1 → S
where S1 is the new start symbol.
Step 2: In the grammar, remove the null, unit and useless
productions.
Step 3: Eliminate terminals from the RHS of the production if
they exist with other non-terminals or terminals. For example,
production S → aA can be decomposed as:
S → RA
R→a
Step 4: Eliminate RHS with more than two non-terminals. For
example, S → ASB can be decomposed as:
S → RB
R → AS
Example:
Convert the given CFG to CNF. Consider the given
grammar G1:
S → a | aA | B
A → aBB | ε
B → Aa | b
Solution:
Step 1: We will create a new production S1 → S, as
the start symbol S appears on the RHS. The
grammar will be:
S1 → S
S → a | aA | B
A → aBB | ε
B → Aa | b
Step 2: As grammar G1 contains A → ε null production, its
removal from the grammar yields:
S1 → S
S → a | aA | B
A → aBB
B → Aa | b | a
Now, as grammar G1 contains Unit production S → B, its
removal yield:
S1 → S
S → a | aA | Aa | b
A → aBB
B → Aa | b | a
Also remove the unit production S1 → S, its removal from the
grammar yields:
S1 → a | aA | Aa | b
S → a | aA | Aa | b
A → aBB
B → Aa | b | a
Step 3: In the production rules S1 → aA | Aa, S → aA | Aa, A → aBB
and B → Aa, the terminal a occurs on the RHS together with non-terminals. So we
replace that terminal a with a new non-terminal X:
S1 → a | XA | AX | b
S → a | XA | AX | b
A → XBB
B → AX | b | a
X → a
Step 4: In the production rule A → XBB, the RHS has more than two
non-terminals, so we break it up:
S1 → a | XA | AX | b
S → a | XA | AX | b
A → RB
B → AX | b | a
X → a
R → XB
Hence, for the given grammar, this is the required CNF.
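
Steps 3 and 4 are mechanical, so a small routine can carry them out. The sketch below is illustrative only: the dict/list grammar encoding and the fresh variables R1, R2, … are my own choices, the ε-, unit- and useless-production removal of Step 2 is assumed to have been done already, and the binarization splits from the right, so the helper variables differ from the slide's X and R even though the resulting grammar is equivalent.

```python
def to_cnf_body(grammar):
    """CNF Steps 3 and 4 only.  Productions are lists of symbols; a lowercase
    symbol is a terminal, anything else is a variable."""
    new_rules = {}
    fresh = {}                     # terminal -> fresh variable that derives it
    counter = [0]

    def new_var():
        counter[0] += 1
        return f"R{counter[0]}"

    def var_for(terminal):
        if terminal not in fresh:
            v = new_var()
            fresh[terminal] = v
            new_rules[v] = [[terminal]]
        return fresh[terminal]

    result = {}
    for var, rhss in grammar.items():
        out = []
        for rhs in rhss:
            # Step 3: in RHSs of length >= 2, replace terminals by fresh variables.
            if len(rhs) >= 2:
                rhs = [var_for(s) if s.islower() else s for s in rhs]
            # Step 4: break RHSs with more than two variables into binary rules.
            while len(rhs) > 2:
                helper = new_var()
                new_rules[helper] = [rhs[-2:]]      # helper -> last two symbols
                rhs = rhs[:-2] + [helper]
            out.append(rhs)
        result[var] = out
    result.update(new_rules)
    return result

# The grammar reached after Step 2 in the example above.
g = {"S1": [["a"], ["a", "A"], ["A", "a"], ["b"]],
     "S":  [["a"], ["a", "A"], ["A", "a"], ["b"]],
     "A":  [["a", "B", "B"]],
     "B":  [["A", "a"], ["b"], ["a"]]}
for lhs, rhss in to_cnf_body(g).items():
    print(lhs, "→", " | ".join("".join(r) for r in rhss))
```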
Greibach Normal Form (GNF)

A CFG(context free grammar) is in GNF(Greibach


normal form) if all the production rules satisfy
one of the following conditions:
1. A start symbol generating ε.
For example, S → ε.
2. A non-terminal generating a terminal.
For example, A → a.
3. A non-terminal generating a terminal which
is followed by any number of non-terminals.
For example, S → aASB.
Example:
G1 = {S → aAB | aB, A → aA| a, B → bB | b}
G2 = {S → aAB | aB, A → aA | ε, B → bB | ε}
 The production rules of Grammar G1 satisfy

the rules specified for GNF, so the grammar


G1 is in GNF.
 However, the production rules of Grammar

G2 do not satisfy the rules specified for


GNF, as A → ε and B → ε contain ε (only the start
symbol may generate ε). So the grammar G2
is not in GNF.
Steps for converting CFG into
GNF
Step 1: Conversion of the grammar into its CNF.

In case the given grammar is not present in CNF, first convert it into
CNF.
Step 2: Rename the non-terminal symbols as A1, A2, ..., An (Ai
notation) in ascending order of i.

Step 3: For every production of the form Ai → Aj α :
If i = j, it is left recursion, so eliminate it.
If i < j, it is acceptable for GNF.
If i > j, it is not yet in the required order; substitute for Aj to make i < j.

Step 4: If any production rule is still not in GNF, convert the
production rule given in the grammar into GNF form (by substituting for the
leading non-terminal until every RHS begins with a terminal).
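
A brief note on the left-recursion case in Step 3 (the standard technique, stated here for reference): a set of productions A → Aα1 | Aα2 | … | β1 | β2 | …, where no βi begins with A, is replaced by
A → β1 | β2 | … | β1A′ | β2A′ | …
A′ → α1 | α2 | … | α1A′ | α2A′ | …
which generates the same language without left recursion.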
**EXAMPLE**
Ambiguity elimination in
CFG
Removal of Ambiguity :
1. Precedence –
 If different operators are used, we will consider the

precedence of the operators. The three important


characteristics are :

1. The level at which the production is present denotes the


priority of the operator used.
2. The production at higher levels will have operators with less
priority. In the parse tree, the nodes which are at top levels
or close to the root node will contain the lower priority
operators.
3. The production at lower levels will have operators with
higher priority. In the parse tree, the nodes which are at
lower levels or close to the leaf nodes will contain the
higher priority operators.
2. Associativity –
 If the same precedence operators are in

production, then we will have to consider the


associativity.

 If the associativity is left to right, then we have to
introduce left recursion in the production. The
parse tree will also be left recursive and grow on
the left side.
 +, -, *, / are left associative operators.
 If the associativity is right to left, then we have to
introduce right recursion in the productions. The
parse tree will also be right recursive and grow on
the right side.
 ^ is a right associative operator.
Example
Chomsky's hierarchy of languages

Grammar   Languages                Recognizing Automaton
Type-3    Regular                  Finite-state automaton
Type-2    Context-free             Non-deterministic pushdown automaton
Type-1    Context-sensitive        Linear-bounded non-deterministic Turing machine
Type-0    Recursively enumerable   Turing machine
Enumeration of properties of
CFLs
Context-free languages are closed under −

 Union
 Concatenation
 Kleene Star operation

Union
Let L1 and L2 be two context free languages. Then L1 ∪ L2 is also context free.

Example
Let L1 = {a^n b^n, n > 0}. The corresponding grammar G1 will have P: S1 → aS1b | ab

Let L2 = {c^m d^m, m ≥ 0}. The corresponding grammar G2 will have P: S2 → cS2d | ε

Union of L1 and L2, L = L1 ∪ L2 = {a^n b^n} ∪ {c^m d^m}

The corresponding grammar G will have the additional production S → S1 | S2


Concatenation
If L1 and L2 are context free languages, then L1L2 is also context free.

Example
Concatenation of the languages L1 and L2, L = L1L2 = {a^n b^n c^m d^m}

The corresponding grammar G will have the additional production
S → S1 S2

Kleene Star
If L is a context free language, then L* is also context free.

Example
Let L = {a^n b^n, n ≥ 0}. The corresponding grammar G will have P: S → aSb | ε

Kleene Star: L1 = {a^n b^n}*

The corresponding grammar G1 will have the additional productions S1 → SS1 | ε


Context-free languages are not closed under

 Intersection − If L1 and L2 are context

free languages, then L1 ∩ L2 is not


necessarily context free.
 Intersection with Regular Language − If

L1 is a regular language and L2 is a context


free language, then L1 ∩ L2 is a context free
language.
 Complement − If L1 is a context free

language, then L1’ may not be context free.
