Unit 7 - Pragmatics, Discourse, Dialogue, and Natural Language Generation
✅ Session-wise Breakdown
Session 1: Introduction to Pragmatics & Discourse – Reference Resolution & Phenomena
Session 2: Co-reference – Syntactic/Semantic Constraints & Pronoun Resolution
Session 3: Discourse Structure & Coherence
Session 4: Dialogue Systems – Turns, Grounding, Dialogue Acts
Session 5: Natural Language Generation – Concepts & Architecture
Session 6: Discourse Planning – Text Schemata & Rhetorical Structures + Recap
✅ Practice Exercise
Instructions: Identify the referent of each pronoun or referring expression.
1. Ram met Sita. He gave her a book.
2. If you see it, don’t touch it. The snake is dangerous.
3. When they arrived, the teachers were already waiting.
4. Rita loves singing. The microphone was her best friend. → (Bridging)
✅ Mini Quiz
Q1: Define anaphora and give one example.
Q2: What’s the difference between cataphora and exophora?
Q3: Why is reference resolution important in translation?
Q4: Identify the referent in: “Mohan found a bag. It was full of books.”
📝 Summary
Pragmatics helps machines interpret implied meaning using context.
Discourse allows understanding beyond individual sentences.
Reference resolution is central to maintaining coherence across sentences.
Different phenomena like anaphora, cataphora, and exophora affect language interpretation.
📌 2. Semantic Constraints
Based on meaning and real-world knowledge
Not all co-references are semantically plausible, even when grammatically well-formed
Example:
“The apple was angry.”
❌ Semantically invalid – apples can’t be angry
Correct:
“The man was angry.” ✅
🔍 Example:
“Riya met Nisha at the cafe. She ordered tea.”
➡ Who is she?
➡ What syntactic and semantic clues help decide?
Follow-up: Ask students to rephrase to remove ambiguity.
✅ Mini Quiz
Q1: Define co-reference with one example.
Q2: Name and explain one syntactic and one semantic constraint.
Q3: Why is pronoun resolution challenging in NLP?
Q4: What does the Hobbs algorithm do?
📝 Summary
Co-reference is when two expressions refer to the same entity.
Pronoun resolution ensures clarity and cohesion across sentences.
Syntactic and semantic constraints guide valid references.
Algorithms like Hobbs and neural models automate resolution.
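The constraints above can be combined into a toy resolver. The sketch below (plain Python, illustrative data) resolves the pronouns in "Ram met Sita. He gave her a book." using only two cues: gender agreement (a semantic constraint) and recency (prefer the nearest matching antecedent). It is a deliberate simplification, not the Hobbs algorithm, which searches syntactic parse trees.

```python
# Minimal pronoun-resolution sketch: gender agreement + recency.
# Candidate mentions are listed in order of appearance (illustrative data).

CANDIDATES = [
    ("Ram", "male"),
    ("Sita", "female"),
]

PRONOUN_GENDER = {"he": "male", "him": "male", "she": "female", "her": "female"}

def resolve(pronoun, candidates):
    """Return the most recent candidate whose gender matches the pronoun."""
    gender = PRONOUN_GENDER.get(pronoun.lower())
    for mention, g in reversed(candidates):  # recency: scan backwards
        if g == gender:
            return mention
    return None  # no compatible antecedent found

print(resolve("He", CANDIDATES))   # -> Ram
print(resolve("her", CANDIDATES))  # -> Sita
```

Students can extend this with number agreement ("they") or animacy ("it") to see how quickly hand-written constraints run out, motivating learned models.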
Session 3: Discourse Structure & Text Coherence
🔍 Example:
“Rina picked up the phone. She heard nothing. She hung up.”
🧪 Classroom Activity
Instructions:
Provide students a jumbled paragraph.
Ask them to rearrange the sentences into a coherent flow.
Example Jumbled Lines:
1. She took the call.
2. Rina’s phone rang.
3. It was her friend.
4. They talked about the event.
➡ Correct Order: 2 → 1 → 3 → 4
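The reordering activity can also be framed computationally: score each ordering by how many entities adjacent sentences share, and pick the highest-scoring one. The entity annotations below are hand-made for illustration (they assume pronouns are already resolved, e.g. "She" → Rina, "It" → the call).

```python
# Toy coherence scorer: orderings whose adjacent sentences mention more
# of the same entities score higher. Entity sets are hand-annotated.
from itertools import permutations

entities = {
    1: {"Rina", "phone", "call"},    # "She took the call."
    2: {"Rina", "phone"},            # "Rina's phone rang."
    3: {"Rina", "friend", "call"},   # "It was her friend."
    4: {"Rina", "friend", "event"},  # "They talked about the event."
}

def score(order):
    """Sum entity overlap between each pair of adjacent sentences."""
    return sum(len(entities[a] & entities[b]) for a, b in zip(order, order[1:]))

best = max(permutations(entities), key=score)
print(best)  # -> (2, 1, 3, 4)
```

The highest-scoring ordering matches the correct order from the activity, which illustrates the intuition behind entity-based coherence models.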
✅ Mini Quiz
Q1: What is discourse structure?
Q2: Define coherence with one NLP example.
Q3: What are rhetorical relations? Name two.
Q4: How does coreference resolution contribute to coherence?
📝 Summary
Discourse structure defines how texts are logically connected and organized.
Text coherence ensures meaning flows across sentences.
NLP systems need to model discourse relations, references, and text flow to behave
intelligently in real contexts.
🧪 In-Class Activity
Task: Label the dialogue acts in the following exchange.
A: “Hey, can you help me with this report?”
B: “Sure. What do you need help with?”
A: “I don’t understand the last section.”
B: “Let’s go through it together.”
A: “Thanks a lot.”
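One way to ground the activity is a keyword-based tagger. The act inventory and rules below are illustrative (real systems use trained classifiers over much richer features), but they label this particular exchange plausibly.

```python
# Minimal keyword-based dialogue-act tagger for the exchange above.

def tag_act(utterance):
    """Assign one dialogue act to an utterance using surface cues."""
    u = utterance.lower().strip()
    if "thanks" in u or "thank you" in u:
        return "THANK"
    if u.startswith(("sure", "ok", "yes")):
        return "ACCEPT"            # agreement token opens the turn
    if u.endswith("?"):
        # questions asking the hearer to act are requests
        return "REQUEST" if ("can you" in u or "could you" in u) else "QUESTION"
    if "let's" in u or "let us" in u:
        return "PROPOSE"
    return "STATEMENT"

dialogue = [
    "Hey, can you help me with this report?",
    "Sure. What do you need help with?",
    "I don't understand the last section.",
    "Let's go through it together.",
    "Thanks a lot.",
]
for turn in dialogue:
    print(f"{tag_act(turn):9s} {turn}")
```

Note that B's first turn actually carries two acts (an acceptance followed by a question); a single-label tagger picks only the opening act, which is itself a useful discussion point about turns versus utterances.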
✅ Mini Quiz
Q1: What’s the difference between a turn and an utterance?
Q2: Define grounding with a real-life example.
Q3: Name four common dialogue acts with examples.
Q4: Why is grounding important in task-oriented dialogue?
📝 Summary
Dialogues in NLP involve structured turn-taking with clear intentions (acts).
Grounding ensures mutual understanding between participants.
Dialogue acts help machines assign semantic roles to utterances.
NLP dialogue systems must model turns, context, acts, and response generation for
intelligent conversation.
Session 5: Natural Language Generation (NLG) – Concepts & Architecture
Example input (a structured meaning representation):
{
"location": "Kathmandu",
"weather": "rain",
"day": "tomorrow"
}
💡 Applications of NLG
Domain Use Case
Weather Reporting Generate city-wise summaries automatically
Chatbots Produce polite, coherent responses from intents
Finance Summarize stock movements or financial data in plain English
News Generation Auto-generate reports for sports, elections, weather, etc.
Medical Reports Translate clinical data into readable summaries
💬 Output (example):
“Tomorrow in Kathmandu, expect rainy weather with moderate temperatures.”
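A template-based realizer for the structured input shown in this session can be sketched in a few lines. The template wording and adjective lookup are illustrative, and since the input carries no temperature field, the generated sentence is slightly shorter than the sample output above.

```python
# Template-based surface realization from a structured meaning representation.

data = {"location": "Kathmandu", "weather": "rain", "day": "tomorrow"}

WEATHER_ADJ = {"rain": "rainy", "sun": "sunny", "snow": "snowy"}

def realize(d):
    """Fill a fixed sentence template with values from the input dict."""
    adj = WEATHER_ADJ.get(d["weather"], d["weather"])  # fall back to raw value
    return f"{d['day'].capitalize()} in {d['location']}, expect {adj} weather."

print(realize(data))  # -> Tomorrow in Kathmandu, expect rainy weather.
```

Contrast this with a neural generator: the template guarantees fluency and faithfulness but cannot vary its phrasing, which is exactly the trade-off asked about in Q2 below.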
✅ Mini Quiz
Q1: What are the three stages of an NLG pipeline?
Q2: Give one difference between template-based and neural-based NLG.
Q3: How would you generate a response from structured weather data?
Q4: Name two real-world domains where NLG is used effectively.
📝 Summary
NLG enables machines to generate language from data or meaning representations.
It involves content planning, sentence structuring, and surface realization.
NLG powers chatbots, summarizers, AI writers, and reporting systems.
Systems can range from rule-based templates to large neural language models.
Label the rhetorical relations:
Cause-Effect: slippery roads → accidents
Contrast: accidents vs. quick officer response
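These relation labels map directly onto connective choice during generation. A minimal discourse-planning sketch (relation names and clause wording are illustrative):

```python
# Toy discourse-planning step: pick a connective from the rhetorical
# relation linking two clauses, then realize them as one sentence.

CONNECTIVES = {
    "cause-effect": "so",
    "contrast": "however",
    "elaboration": "in fact",
}

def combine(clause_a, clause_b, relation):
    """Join two clauses with the connective that signals the relation."""
    return f"{clause_a}, {CONNECTIVES[relation]} {clause_b}."

print(combine("The roads were slippery", "several accidents occurred",
              "cause-effect"))
print(combine("There were many accidents", "the officers responded quickly",
              "contrast"))
```

Swapping the relation label changes only the connective, showing why discourse planning must happen before surface realization: the planner decides the relation, the realizer merely signals it.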
✅ Mini Quiz
Q1: What is a text schema? Give one example.
Q2: Name three rhetorical relations and an example of each.
Q3: Why is discourse planning important in NLG?
Q4: How do connectives like “however” or “so” help in text generation?
📝 Final Takeaway
Mastering Module 7 enables students to:
Understand how context, reference, and structure affect language understanding
Build smarter dialogue systems
Implement or evaluate NLG pipelines
Analyze coherence and logic in AI-generated or human-written text
✅ Module 7 – Examination Questions