ai assignment


5. Show that a dominant strategy equilibrium is a Nash equilibrium, but not vice versa.

Let’s break this down into two parts:

✅ Part 1: Dominant Strategy Equilibrium ⇒ Nash Equilibrium

A dominant strategy is a strategy that gives a better payoff regardless of what others do.
In a dominant strategy equilibrium, each player is playing their dominant strategy.

A Nash equilibrium is a situation where no player has an incentive to deviate, given the other
players' strategies.

Since a dominant strategy is a best response to every combination of the other players' strategies, it is in particular a best response to the strategies they play in the equilibrium. Therefore:

Every dominant strategy equilibrium is a Nash equilibrium. ✅

❌ Part 2: Nash Equilibrium ⇏ Dominant Strategy Equilibrium

Let’s show a counterexample:

Example: Coordination Game

          L        R
   U    2, 2     0, 0
   D    0, 0     1, 1

• (U, L) is a Nash equilibrium: neither player wants to deviate.

• (D, R) is also a Nash equilibrium.

But there are no dominant strategies — what’s best depends on what the other player does.

Therefore:
Not every Nash equilibrium is a dominant strategy equilibrium. ❌

✅ Final Answer:

A dominant strategy equilibrium is always a Nash equilibrium,


but a Nash equilibrium may not involve dominant strategies.
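
To make this concrete, here is a small Python sketch (not part of the required answer; the payoffs dictionary and helper names are just illustrative) that enumerates the pure-strategy Nash equilibria of the coordination game above and checks whether the row player has a dominant strategy:

from itertools import product

# Payoffs of the coordination game above: (row payoff, column payoff).
payoffs = {
    ("U", "L"): (2, 2), ("U", "R"): (0, 0),
    ("D", "L"): (0, 0), ("D", "R"): (1, 1),
}
rows, cols = ["U", "D"], ["L", "R"]

def is_nash(r, c):
    # No player can gain by unilaterally deviating from (r, c).
    u_row, u_col = payoffs[(r, c)]
    row_ok = all(payoffs[(r2, c)][0] <= u_row for r2 in rows)
    col_ok = all(payoffs[(r, c2)][1] <= u_col for c2 in cols)
    return row_ok and col_ok

def row_strategy_is_dominant(r):
    # r must be a best response to every column strategy.
    return all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0]
               for c in cols for r2 in rows)

print([(r, c) for r, c in product(rows, cols) if is_nash(r, c)])
# -> [('U', 'L'), ('D', 'R')]: two Nash equilibria
print([r for r in rows if row_strategy_is_dominant(r)])
# -> []: the row player has no dominant strategy

The same check applied to the column player also finds no dominant strategy, confirming that both Nash equilibria exist even though neither player has a dominant strategy.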



You said:
ChatGPT said:

Here are the answers to your questions based on the images:

Q1. Define the following in your own words:

a. Multiagent system:
A multiagent system is a system in which multiple autonomous entities (agents) interact with each
other, often cooperating or competing, to achieve individual or common goals.

b. Multibody planning:
Multibody planning refers to planning the movements and interactions of multiple bodies or agents
in a shared environment without collisions and while achieving their objectives.

c. Coordination problem:
A coordination problem occurs when multiple agents must choose their actions in such a way that
they achieve the best collective outcome, often requiring communication or conventions.

d. Agent design:
Agent design is the process of defining how an agent perceives its environment, makes decisions,
and takes actions to achieve its goals effectively in a multiagent setting.

e. Mechanism design:
Mechanism design is a field in economics and game theory that involves designing systems or rules
(mechanisms) to achieve desired outcomes, assuming that agents will act in their own self-interest.

f. Cooperative game:
A cooperative game is a game where players can form coalitions and negotiate binding agreements
to maximize their collective payoffs.

g. Non-cooperative game:
A non-cooperative game is a game where players make decisions independently and cannot form
binding agreements, focusing on strategies that maximize their individual payoffs.

Let me know if you'd like these answers made more concise, or if you want handwritten versions or
examples added!


You said:

yes

ChatGPT said:

Great! Here's a more concise version of each definition, still keeping them clear and accurate. This
would be helpful if you're writing by hand or want a crisp version for quick revision.

Concise Definitions:
a. Multiagent system:
A system with multiple agents interacting to achieve goals, either cooperatively or competitively.

b. Multibody planning:
Planning safe and goal-oriented movement for multiple bodies or agents in a shared space.

c. Coordination problem:
A situation where agents must align their actions to reach a mutually beneficial outcome.

d. Agent design:
Creating how an agent senses, decides, and acts to meet its objectives.

e. Mechanism design:
Designing rules or systems to lead self-interested agents to desired outcomes.

f. Cooperative game:
A game where players can form binding agreements to improve collective outcomes.

g. Non-cooperative game:
A game where each player acts independently to maximize personal payoff.



You said:

1. Give an example of the map colouring problem in constraint satisfaction. 2. Mom, Dad, Baby, Student, Teacher, and Guide are lining up next to each other in six linear spots labeled 1 to 6, one to a spot. Baby needs to line up between Mom and Dad. Student and Teacher need to be next to each other. Guide needs to be at one end, in spot 1 or 6. Formulate this problem as a CSP: list the variables, their domains, and the constraints. Encode unary constraints as a constraint rather than pruning the domain. (No need to solve the problem, just provide variables, domains and constraints.)

ChatGPT said:

Here you go—both answers are presented clearly and in a CSP-friendly format.

1. Example of Map Colouring Problem in Constraint Satisfaction

Problem:
Color a map of 4 regions (A, B, C, D) such that no two adjacent regions share the same color. Use 3
colors: Red, Green, Blue.

Variables:

• A, B, C, D (each represents a region)

Domains:

• {Red, Green, Blue} for each region

Constraints:

• A ≠ B (A and B are adjacent)

• A ≠ C

• B ≠ C

• C ≠ D

This is a classical graph coloring problem in CSP, where each region is a node and edges represent
adjacency.
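
Just as an optional illustration (a real CSP solver would use backtracking with constraint propagation), here is a minimal brute-force sketch in Python that enumerates every valid coloring for the constraints above:

from itertools import product

regions = ["A", "B", "C", "D"]
colors = ["Red", "Green", "Blue"]
adjacent = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]  # constraints above

def consistent(assignment):
    # No two adjacent regions may share a color.
    return all(assignment[x] != assignment[y] for x, y in adjacent)

solutions = [dict(zip(regions, combo))
             for combo in product(colors, repeat=len(regions))
             if consistent(dict(zip(regions, combo)))]
print(len(solutions), "valid colorings, e.g.", solutions[0])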

2. Family Lineup Problem as CSP

Given People: Mom, Dad, Baby, Student, Teacher, Guide


Positions: Spots 1 to 6 (linear)

CSP Formulation

Variables:
Each person is a variable.

• V = {Mom, Dad, Baby, Student, Teacher, Guide}

Domains:
Each variable can be assigned to one of the 6 positions:

• D(X) = {1, 2, 3, 4, 5, 6} for all X ∈ V

Constraints:

Unary Constraints (Guide):

• Guide = 1 OR Guide = 6
→ Guide ∈ {1, 6}

Binary Constraints (Student and Teacher):

• |Student - Teacher| = 1
→ They must be adjacent.

Ternary Constraint (Mom, Dad, Baby):

• Baby is between Mom and Dad


→ (Mom < Baby < Dad) OR (Dad < Baby < Mom)

All-different Constraint:

• All people must occupy different positions

→ All variables take distinct values: Alldiff(Mom, Dad, Baby, Student, Teacher, Guide), i.e. every pair of variables is assigned a different spot.
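
The question only asks for the formulation, but as an optional sanity check, here is a small brute-force sketch in Python; iterating over permutations enforces the all-different constraint implicitly, and the remaining constraints are checked directly:

from itertools import permutations

people = ["Mom", "Dad", "Baby", "Student", "Teacher", "Guide"]

def satisfies(pos):
    # pos maps each person to a spot 1..6.
    guide_at_end = pos["Guide"] in (1, 6)                     # unary constraint
    st_adjacent = abs(pos["Student"] - pos["Teacher"]) == 1   # binary constraint
    baby_between = (pos["Mom"] < pos["Baby"] < pos["Dad"]     # ternary constraint
                    or pos["Dad"] < pos["Baby"] < pos["Mom"])
    return guide_at_end and st_adjacent and baby_between

solutions = [dict(zip(people, spots))
             for spots in permutations(range(1, 7))
             if satisfies(dict(zip(people, spots)))]
print(len(solutions), "assignments satisfy all constraints; e.g.", solutions[0])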


You said:

4. What is the horizon effect? wins the game on a map that is a tree? 5. What is quiescence? How does it help in improving the minimax search procedure? 6. Discuss how well the standard approach to game playing would apply to games such as tennis, pool, and croquet, which take place in a continuous physical state space.

ChatGPT said:

Great set of questions from the topic of game trees and search algorithms. Here's a concise and
clear explanation of each:

4. What is Horizon Effect?

Horizon effect refers to a limitation in game tree search (like minimax with depth-limited lookahead),
where the algorithm cannot see far enough ahead to detect a major outcome (like a win or loss)
beyond the search depth.

Example: An unavoidable loss lies 5 moves ahead, but the search is limited to 4 moves. The algorithm might make a move that delays the loss temporarily (pushing it beyond the "horizon"), incorrectly judging the position to be safe.

It gives a false sense of security by failing to look beyond a certain "horizon" (depth limit).
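
To show where the horizon sits in code, here is a minimal depth-limited minimax sketch on a hypothetical toy game tree (the averaging static_value is only a stand-in for a real evaluation function):

# Toy game tree as nested lists; plain numbers are terminal utilities.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]

def static_value(node):
    # Evaluation at the horizon: guess the value of whatever we can still see.
    if isinstance(node, list):
        return sum(static_value(child) for child in node) / len(node)
    return node

def minimax(node, depth, maximizing):
    # The fixed depth cutoff is the "horizon".
    if depth == 0 or not isinstance(node, list):
        return static_value(node)
    values = [minimax(child, depth - 1, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

print(minimax(tree, depth=2, maximizing=True))  # full lookahead: true value 3
print(minimax(tree, depth=1, maximizing=True))  # cut off early: misleading estimate

With the shallow cutoff the root looks much better than it really is, which is exactly the kind of misjudgment the horizon effect describes.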

Wins the Game on a Map That is a Tree?

If you're referring to solving a game (like minimax-based) on a map that is structured as a tree, then:

• Yes, a game with a tree-structured map (no cycles, each state reached in exactly one way) is simpler to solve.

• In such a case, minimax search with alpha-beta pruning can perform optimally and guarantee a win (if one exists), since:

o There's no need to check for repeated states.

o There are no cycles, so no loop checking is needed.

o The structure enables efficient pruning.

So, if a player has a winning strategy, it can be discovered efficiently on a tree-structured game
graph.
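
For completeness, here is a compact alpha-beta sketch over the same kind of nested-list toy tree (illustrative only, not a full game implementation):

import math

def alphabeta(node, alpha, beta, maximizing):
    # Minimax with alpha-beta pruning over a nested-list game tree.
    if not isinstance(node, list):          # leaf: terminal utility
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # prune: the opponent avoids this branch
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]   # hypothetical tree-structured game
print(alphabeta(tree, -math.inf, math.inf, True))   # -> 3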

5. What is Quiescence? How It Helps Minimax?

Quiescence is the idea of avoiding evaluation at volatile positions in a game—i.e., positions where
things are about to change drastically (like major captures in chess).
A quiescent state is one that is stable enough to be accurately evaluated.
Quiescence Search:

• When the search reaches a leaf node at the depth limit and the position is not quiescent, the search is extended selectively until a stable (quiet) position is reached.

• It avoids "evaluation noise" caused by abrupt tactical changes just beyond the search limit.

Benefit: Reduces the horizon effect by extending the search only in "dangerous" positions.
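
A minimal, self-contained sketch of the idea (all names here are hypothetical; in chess, the tactical continuations would be the pending captures):

def quiescence(state, maximizing):
    # At the depth limit, keep searching forcing moves until the position is quiet.
    if state["quiet"] or not state["tactical"]:
        return state["eval"]        # stable position: safe to trust the evaluation
    values = [quiescence(s, not maximizing) for s in state["tactical"]]
    return max(values) if maximizing else min(values)

# A leaf that looks winning statically (+5) but has a forced recapture:
noisy_leaf = {
    "eval": +5, "quiet": False,
    "tactical": [{"eval": -4, "quiet": True, "tactical": []}],
}
print(noisy_leaf["eval"])                         # naive evaluation at the horizon: +5
print(quiescence(noisy_leaf, maximizing=True))    # quiescence value: -4

A depth-limited minimax would call quiescence at its depth cutoff instead of the plain evaluation function, so the returned value reflects the forced tactical sequence rather than a misleading snapshot.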

6. Applicability of Standard Game Playing (Minimax etc.) to Physical Games

Games like tennis, pool, and croquet operate in continuous physical state spaces, unlike chess or tic-tac-toe, which are discrete and symbolic.

Challenges with Standard Approach (Minimax, etc.):

• Continuous State Space: Standard minimax assumes a finite, discrete set of moves. These games have infinitely many possible shots or strokes.

• Uncertainty: Outcomes are affected by wind, surface friction, and player fatigue, so the result of an action is not fully predictable.

• Real-Time Constraints: Minimax is computationally expensive and too slow for real-time physical actions.

• Physics-Based Modeling: Ball trajectories, collisions, etc. need to be simulated, which is beyond symbolic reasoning.

What Works Better:

• Reinforcement Learning

• Neural Networks (deep learning to estimate outcomes)

• Motion planning and physics engines for precise control

So, standard symbolic AI game-playing doesn’t work well; more robotics-oriented, sensor-driven,
and learning-based approaches are used in such games.

