Transformational Grammar.
These are grammars in which a sentence can be represented structurally in two stages. Obtaining different structures from sentences having the same meaning is undesirable in language-understanding systems; sentences with the same meaning should always correspond to the same internal knowledge structures. In the first stage, the basic structure of the sentence is analyzed to determine the grammatical constituent parts; the second stage is just the reverse of the first, and reveals the surface structure of the sentence, the way the sentence is used in speech or in writing. Alternatively, we can say that applying the transformation rules can change a sentence from passive voice to active voice and vice versa. Consider, for example, the following pair of sentences.
Ram ate the apple. (active voice)
The apple was eaten by Ram. (passive voice)
The two sentences above are different sentences, yet they have the same meaning; thus they illustrate what a transformational grammar captures. These grammars were never widely used in computational models of natural language. Applications of this grammar include changing voice (active to passive and passive to active), changing a question to declarative form, etc.
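To make the voice-changing application concrete, here is a minimal sketch (not Chomsky's formal rules) of a transformation that maps an active-voice (subject, verb, object) triple to its passive form; the verb table and function names are assumptions for this illustration.

```python
# Toy illustration of a voice transformation rule. The idea: both surface
# forms share one underlying (agent, action, object) structure, which is
# what a language-understanding system should store.

PAST_PARTICIPLE = {"ate": "eaten", "wrote": "written", "cut": "cut"}

def active_to_passive(subject, verb, obj):
    """Transform an active-voice triple into its passive-voice sentence."""
    participle = PAST_PARTICIPLE[verb]
    return f"{obj} was {participle} by {subject}"

print(active_to_passive("Ram", "ate", "the apple"))
# -> the apple was eaten by Ram
```

Both "Ram ate the apple" and its passive form here come from the same triple, which is exactly the property transformational grammar exploits.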
Case Grammars (FILLMORE’s Grammar)
Case grammars use the functional relationships between noun phrases and verbs to uncover the deeper case structure of a sentence. Generally, in English sentences, the difference between different surface forms of a sentence is quite negligible. In the early 1970s, Fillmore described the different cases of an English sentence. He extended Chomsky's transformational grammars by focusing more on the semantic aspects of a sentence. In case grammar, a sentence is defined as being composed of a proposition P and a modality constituent M, where M comprises mood, tense, aspect, negation and so on. Thus we can represent a sentence as
S → M + P
P → V + C1 + C2 + … + Cn
where P is the set of relationships among the verb and noun phrases (each Ci is a case) and M is the modality constituent.
The tree representation of a case grammar identifies the words by their modality and case. The cases may be related to the actions performed by agents, or to the location and direction of actions; cases may also be instrumental or objective. For example, in "Ram cuts the apple with a knife", knife fills the instrumental case. In fig 8.5 the modality constituent is the negation part, eat is the verb, and Ram and apple are the nouns under cases C1 and C2 respectively. Case frames are provided for verbs to identify their allowable cases; they specify which relationships are required and which are optional.
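The S → M + P decomposition and the idea of a case frame can be sketched as data structures; the field names (agentive, objective, instrumental) follow Fillmore's case labels, but the dictionary layout is an assumption for this sketch.

```python
# A case-grammar representation of "Ram cuts the apple with a knife":
# a modality constituent M plus a proposition P holding the verb and
# its cases.

sentence = {
    "modality": {"tense": "present", "negation": False},   # M
    "proposition": {                                       # P
        "verb": "cut",
        "agentive": "Ram",       # C1: the agent performing the action
        "objective": "apple",    # C2: the object acted upon
        "instrumental": "knife", # optional case: the instrument used
    },
}

# A case frame for a verb lists which cases are required and which are
# optional, as described in the text.
CASE_FRAME = {
    "cut": {"required": ["agentive", "objective"],
            "optional": ["instrumental"]},
}

def frame_ok(s):
    """Check that the sentence supplies every required case of its verb."""
    frame = CASE_FRAME[s["proposition"]["verb"]]
    return all(case in s["proposition"] for case in frame["required"])

print(frame_ok(sentence))
```

A sentence missing a required case (say, "cuts the apple" with no agent) would fail this check, which is how case frames constrain analysis.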
Semantic Grammars
In systems based on semantic grammars, sentences are analyzed and their words matched to the symbols contained in the lexicon entries. Semantic grammars are suitable for use in systems with restricted grammars, since their computational power is limited.
Context Free Grammar (CFG)
A grammar in which each production has exactly one non-terminal symbol on its left-hand side and at least one symbol on its right-hand side is called a context free grammar. A CFG is a four tuple (V, Σ, S, P), where V is the set of non-terminal symbols, Σ is the set of terminal symbols, S ∈ V is the start symbol, and P is the set of productions.
Each non-terminal symbol in a grammar denotes a language, namely the set of terminal strings derivable from it. The non-terminals are written in capital letters and the terminals in small letters. Some properties of the CFG formalism are that context free languages are closed under union, concatenation and Kleene star, that CFGs can describe nested recursive structure which regular grammars cannot, and that membership of a string can be decided in polynomial time (for example by the CYK algorithm).
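A small CFG can be written out directly as the four tuple (V, Σ, S, P); the particular grammar and sentence vocabulary below are assumptions chosen for illustration, not taken from the text.

```python
# A toy context free grammar as a four tuple (V, SIGMA, START, P).
# Non-terminals are capitalized, terminals are lower-case, matching the
# convention described above.

V = {"S", "NP", "VP", "ART", "N", "V"}     # non-terminal symbols
SIGMA = {"the", "dog", "bone", "ate"}      # terminal symbols
START = "S"                                # start symbol
P = {                                      # productions: exactly one
    "S":   [["NP", "VP"]],                 # non-terminal on the left,
    "NP":  [["ART", "N"]],                 # at least one symbol on the
    "VP":  [["V", "NP"]],                  # right
    "ART": [["the"]],
    "N":   [["dog"], ["bone"]],
    "V":   [["ate"]],
}

def is_context_free(productions, nonterminals):
    """Verify the defining CFG property: every left-hand side is a single
    non-terminal, and every right-hand side is non-empty."""
    return all(lhs in nonterminals and all(len(rhs) >= 1 for rhs in rhss)
               for lhs, rhss in productions.items())

print(is_context_free(P, V))
```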
PARSING PROCESS
The parser is a computer program which accepts the natural language sentence
as input and generates an output structure suitable for analysis. The lexicon is a
dictionary of words where each word contains some syntactic, some semantic
and possibly some pragmatic information. The entry in the lexicon will contain
a root word and its various derivatives. The information in the lexicon is needed
to help determine the function and meanings of the words in a sentence. The
basic parsing technique is shown in the figure.
Types of Parsing
1. Top down Parsing
2. Bottom up Parsing
Let us discuss these two parsing techniques and how they work for input sentences.
Top down Parsing
Top down parsing starts with the start symbol and proceeds towards the goal, the input sentence. We can say it is the process of constructing the parse tree starting at the root and proceeding towards the leaves. It is a strategy for analyzing unknown data relationships by hypothesizing general parse tree structures and then considering whether the known fundamental structures are compatible with the hypothesis. In top down parsing, the words of the sentence are replaced by their categories, such as verb phrase (VP), noun phrase (NP), prepositional phrase (PP), pronoun (PRO), etc. Let us consider some examples to illustrate top down parsing, using both the symbolic and the graphical representation. Starting from the sentence symbol, we expand productions until the words of the complete sentence are derived, using symbols such as PP, NP, VP, ART, N, V and so on. Examples of top down parsers are LL (Left-to-right, leftmost derivation) parsers, recursive descent parsers, etc.
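A recursive descent parser, the classic top down technique named above, can be sketched in a few lines; the grammar and test sentence are assumptions for this sketch.

```python
# Minimal top-down (recursive descent) parsing: start from a symbol and
# try to expand it, production by production, until the input words are
# derived. Symbols without productions are treated as terminals.

GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["ART", "N"]],
    "VP":  [["V", "NP"]],
    "ART": [["the"]],
    "N":   [["dog"], ["bone"]],
    "V":   [["ate"]],
}

def parse(symbol, words, pos):
    """Try to derive words[pos:] starting from symbol.
    Return the position after the derived span, or None on failure."""
    if symbol not in GRAMMAR:              # terminal: must match the input
        if pos < len(words) and words[pos] == symbol:
            return pos + 1
        return None
    for rhs in GRAMMAR[symbol]:            # try each production in turn
        p = pos
        for sym in rhs:
            p = parse(sym, words, p)
            if p is None:
                break                      # this production failed
        else:
            return p                       # whole right-hand side matched
    return None

words = "the dog ate the bone".split()
print(parse("S", words, 0) == len(words))  # True: sentence is derived
```

A full parse succeeds exactly when the derivation consumes every word, i.e. the returned position equals the sentence length.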
Bottom up Parsing
In this parsing technique the process begins with the sentence, and the words of the sentence are replaced by their relevant symbols. The approach was first suggested by Yngve (1955). It is also called shift-reduce parsing. In bottom up parsing, construction of the parse tree starts at the leaves and proceeds towards the root. Bottom up parsing is a strategy for analyzing unknown data relationships that attempts to identify the most fundamental units first and then to infer higher-order structures from them. This process occurs in the analysis of both natural languages and computer languages. It is common for bottom up parsers to take the form of general parsing engines that can either parse directly or generate a parser for a specific programming language, given a specification of its grammar.
ART ADJ N V NP
→ ART ADJ N VP
→ ART NP VP
→ NP VP
→ S
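The reduction sequence above can be reproduced with a small bottom-up sketch: repeatedly find a right-hand side in the symbol sequence and reduce it to its left-hand side until no rule applies. The rules below are assumptions chosen to match the sequence shown.

```python
# Bottom-up reduction: each rule is (right-hand side, left-hand side);
# matching a right-hand side in the sequence replaces it by the
# left-hand side, building the tree from the leaves towards the root.

RULES = [
    (["V", "NP"], "VP"),
    (["ADJ", "N"], "NP"),
    (["ART", "NP"], "NP"),
    (["NP", "VP"], "S"),
]

def reduce_all(symbols):
    """Repeatedly apply the first matching rule; record each step."""
    steps = [list(symbols)]
    changed = True
    while changed:
        changed = False
        for rhs, lhs in RULES:
            for i in range(len(symbols) - len(rhs) + 1):
                if symbols[i:i + len(rhs)] == rhs:
                    symbols = symbols[:i] + [lhs] + symbols[i + len(rhs):]
                    steps.append(list(symbols))
                    changed = True
                    break
            if changed:
                break
    return steps

for step in reduce_all(["ART", "ADJ", "N", "V", "NP"]):
    print(" ".join(step))
```

Running this prints exactly the five-step reduction from ART ADJ N V NP down to S.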
Deterministic Parsing
A deterministic parser is one which permits only one choice for each word category. That means there is only one replacement possibility for every word category, and each word category has a different test condition. At each stage of parsing, the correct choice must be taken, because in deterministic parsing backtracking to a previous position is not possible: the parser can only move forward. If the parser makes an incorrect choice, it cannot proceed further. This situation arises when one word belongs to more than one word category, such as noun and verb, or adjective and verb. The deterministic parsing network is shown in the figure.
Non-Deterministic Parsing
Non-deterministic parsing allows different arcs to be labeled with the same test, so the parser cannot always uniquely decide which arc to take next. In non-deterministic parsing, backtracking is possible: if at some point the parser does not find the correct word, it may backtrack to one of its previous nodes and resume parsing from there. The parser has to guess the proper constituent and then backtrack if the guess later proves to be wrong. Compared to deterministic parsing, this procedure can handle a wider range of sentences, since the parser can backtrack at any state. A non-deterministic parsing network is shown in the figure.
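The need for backtracking arises precisely when a word satisfies more than one category. The sketch below (grammar, lexicon, and sentence are assumptions) parses "i saw the saw", where "saw" is both noun and verb: the parser tries an alternative, fails, backtracks to the saved position, and tries the next arc.

```python
# Non-deterministic (backtracking) parsing with an ambiguous lexicon.
# Pre-terminal categories (ART, N, V, PRO) are resolved by lexicon lookup;
# phrase categories are expanded by trying alternatives in turn.

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["ART", "N"], ["PRO"]],
    "VP": [["V", "NP"]],
}
LEXICON = {
    "i": ["PRO"],
    "the": ["ART"],
    "saw": ["N", "V"],   # ambiguous: noun or verb
}

def parse(symbol, words, pos):
    """Return the position after deriving symbol from words[pos:], else None."""
    if symbol in GRAMMAR:
        for rhs in GRAMMAR[symbol]:   # try each arc; on failure, backtrack
            p = pos                   # to the saved position and try the next
            for sym in rhs:
                p = parse(sym, words, p)
                if p is None:
                    break
            else:
                return p
        return None
    # pre-terminal category: accept the word if the lexicon allows it
    if pos < len(words) and symbol in LEXICON.get(words[pos], []):
        return pos + 1
    return None

words = "i saw the saw".split()
print(parse("S", words, 0) == len(words))  # True
```

Here NP → ART N fails on "i", the parser backtracks and succeeds with NP → PRO; a deterministic parser, unable to backtrack, would be stuck after the first wrong guess.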