AI Assignment
A dominant strategy is a strategy that gives a better payoff regardless of what others do.
In a dominant strategy equilibrium, each player is playing their dominant strategy.
A Nash equilibrium is a situation where no player has an incentive to deviate, given the other
players' strategies.
Since a dominant strategy is the best choice regardless of what the others do, it is automatically a best response to whatever the other players play. Therefore:
Every dominant strategy equilibrium is a Nash equilibrium. ✅
Now consider the following game:
        L       R
  U   2, 2    0, 0
  D   0, 0    1, 1
This game has two pure-strategy Nash equilibria, (U, L) and (D, R), but neither player has a dominant strategy: what is best depends on what the other player does. Therefore:
Not every Nash equilibrium is a dominant strategy equilibrium. ❌
✅ Final Answer: Every dominant strategy equilibrium is a Nash equilibrium, but not every Nash equilibrium is a dominant strategy equilibrium.
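To illustrate the distinction (this sketch is an editorial addition, not part of the original answer), here is a short Python check that enumerates the pure-strategy Nash equilibria of the 2x2 game above by testing mutual best responses; the payoff values are taken from the matrix.

# Pure-strategy Nash equilibria of the 2x2 game above (illustrative sketch).
# payoffs[(row_move, col_move)] = (row player's payoff, column player's payoff)
payoffs = {
    ("U", "L"): (2, 2), ("U", "R"): (0, 0),
    ("D", "L"): (0, 0), ("D", "R"): (1, 1),
}
rows, cols = ["U", "D"], ["L", "R"]

def is_nash(r, c):
    # (r, c) is a Nash equilibrium if each move is a best response to the other.
    best_row_payoff = max(payoffs[(r2, c)][0] for r2 in rows)
    best_col_payoff = max(payoffs[(r, c2)][1] for c2 in cols)
    return payoffs[(r, c)] == (best_row_payoff, best_col_payoff)

print([(r, c) for r in rows for c in cols if is_nash(r, c)])
# -> [('U', 'L'), ('D', 'R')]: two Nash equilibria, yet neither player has a dominant strategy.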
You said:
Define the following terms: a. multiagent system, b. multibody planning, c. coordination problem, d. agent design, e. mechanism design, f. cooperative game, g. non-cooperative game.
ChatGPT said:
a. Multiagent system:
A multiagent system is a system in which multiple autonomous entities (agents) interact with each
other, often cooperating or competing, to achieve individual or common goals.
b. Multibody planning:
Multibody planning refers to planning the movements and interactions of multiple bodies or agents
in a shared environment without collisions and while achieving their objectives.
c. Coordination problem:
A coordination problem occurs when multiple agents must choose their actions in such a way that
they achieve the best collective outcome, often requiring communication or conventions.
d. Agent design:
Agent design is the process of defining how an agent perceives its environment, makes decisions,
and takes actions to achieve its goals effectively in a multiagent setting.
e. Mechanism design:
Mechanism design is a field in economics and game theory that involves designing systems or rules
(mechanisms) to achieve desired outcomes, assuming that agents will act in their own self-interest.
f. Cooperative game:
A cooperative game is a game where players can form coalitions and negotiate binding agreements
to maximize their collective payoffs.
g. Non-cooperative game:
A non-cooperative game is a game where players make decisions independently and cannot form
binding agreements, focusing on strategies that maximize their individual payoffs.
Let me know if you'd like these answers made more concise, or if you want handwritten versions or
examples added!
You said:
yes
ChatGPT said:
Great! Here's a more concise version of each definition, still keeping them clear and accurate. This
would be helpful if you're writing by hand or want a crisp version for quick revision.
Concise Definitions:
a. Multiagent system:
A system with multiple agents interacting to achieve goals, either cooperatively or competitively.
b. Multibody planning:
Planning safe and goal-oriented movement for multiple bodies or agents in a shared space.
c. Coordination problem:
A situation where agents must align their actions to reach a mutually beneficial outcome.
d. Agent design:
Creating how an agent senses, decides, and acts to meet its objectives.
e. Mechanism design:
Designing rules or systems to lead self-interested agents to desired outcomes.
f. Cooperative game:
A game where players can form binding agreements to improve collective outcomes.
g. Non-cooperative game:
A game where each player acts independently to maximize personal payoff.
You said:
1. Give an example of the map colouring problem in constraint satisfaction. 2. Mom, Dad, Baby, Student, Teacher, and Guide are lining up next to each other in six linear spots labeled 1 to 6, one to a spot. Baby needs to line up between Mom and Dad. Student and Teacher need to be next to each other. Guide needs to be at one end, in spot 1 or 6. Formulate this problem as a CSP: list the variables, their domains, and the constraints. Encode unary constraints as a constraint rather than pruning the domain. (No need to solve the problem, just provide variables, domains and constraints.)
ChatGPT said:
Here you go—both answers are presented clearly and in a CSP-friendly format.
1. Map Colouring Problem as a CSP
Problem:
Color a map of 4 regions (A, B, C, D) such that no two adjacent regions share the same color. Use 3
colors: Red, Green, Blue.
Variables:
One variable per region: A, B, C, D.
Domains:
Each variable has the domain {Red, Green, Blue}.
Constraints:
A ≠ B (A and B are adjacent)
A ≠ C
B ≠ C
C ≠ D
This is a classical graph coloring problem in CSP, where each region is a node and edges represent
adjacency.
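As a supplementary illustration (not part of the original answer), here is a minimal Python backtracking solver for exactly this map-colouring CSP; the adjacency pairs mirror the constraints listed above.

# Minimal backtracking solver for the map-colouring CSP above (illustrative sketch).
regions = ["A", "B", "C", "D"]
colors = ["Red", "Green", "Blue"]
adjacent = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]

def consistent(assignment):
    # No two adjacent regions that are both assigned may share a color.
    return all(assignment[x] != assignment[y]
               for x, y in adjacent if x in assignment and y in assignment)

def backtrack(assignment):
    if len(assignment) == len(regions):
        return assignment
    var = next(r for r in regions if r not in assignment)
    for color in colors:
        assignment[var] = color
        if consistent(assignment):
            result = backtrack(assignment)
            if result:
                return result
        del assignment[var]
    return None

print(backtrack({}))  # e.g. {'A': 'Red', 'B': 'Green', 'C': 'Blue', 'D': 'Red'}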
2. Line-Up Problem: CSP Formulation
Variables:
Each person is a variable: Mom, Dad, Baby, Student, Teacher, Guide.
Domains:
Each variable can be assigned one of the 6 positions: {1, 2, 3, 4, 5, 6}.
Constraints:
Unary constraint on Guide, encoded as a constraint rather than by pruning the domain:
Guide = 1 OR Guide = 6 (i.e., Guide ∈ {1, 6})
Student and Teacher must be adjacent:
|Student − Teacher| = 1
Baby must line up between Mom and Dad:
(Mom < Baby AND Baby < Dad) OR (Dad < Baby AND Baby < Mom)
All-different constraint (one person per spot):
AllDiff(Mom, Dad, Baby, Student, Teacher, Guide)
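For illustration only (an editorial addition, not the original answer), the formulation can be sanity-checked in Python by brute force over all 720 assignments; "between" is interpreted here as Baby's spot lying strictly between Mom's and Dad's spots.

# Brute-force check of the line-up CSP (illustrative sketch).
from itertools import permutations

people = ["Mom", "Dad", "Baby", "Student", "Teacher", "Guide"]

def satisfies(a):
    # a maps each person to a spot in 1..6; all-different holds by construction.
    baby_between = a["Mom"] < a["Baby"] < a["Dad"] or a["Dad"] < a["Baby"] < a["Mom"]
    adjacent = abs(a["Student"] - a["Teacher"]) == 1
    guide_at_end = a["Guide"] == 1 or a["Guide"] == 6   # unary constraint kept as a constraint
    return baby_between and adjacent and guide_at_end

solutions = [dict(zip(people, spots))
             for spots in permutations(range(1, 7))
             if satisfies(dict(zip(people, spots)))]
print(len(solutions), "satisfying assignments; for example:", solutions[0])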
You said:
4. What is the horizon effect? wins the game on a map that is a tree? 5. What is quiescence? How does it help in improving the minimax search procedure? 6. Discuss how well the standard approach to game
playing would apply to games such as tennis, pool, and croquet, which take place in a continuous
physical state space.
ChatGPT said:
Great set of questions from the topic of game trees and search algorithms. Here's a concise and
clear explanation of each:
4. Horizon Effect
The horizon effect refers to a limitation in game-tree search (like minimax with depth-limited lookahead), where the algorithm cannot see far enough ahead to detect a major outcome (like a win or loss) beyond the search depth.
Example: A losing move exists 5 steps ahead, but the search is limited to 4 steps. The algorithm
might make a move that delays the loss temporarily (pushing it beyond the “horizon”), incorrectly
thinking it's safe.
It gives a false sense of security by failing to look beyond a certain "horizon" (depth limit).
If you're referring to solving a game (like minimax-based) on a map that is structured as a tree, then:
Yes, a game with a tree-structured map (no cycles, each state reached in exactly one way) is
simpler to solve.
In such a case, minimax search with alpha-beta pruning can perform optimally and guarantee a win (if one exists), since each state is reached in exactly one way: there are no cycles or repeated states to detect, and the entire game tree can be searched.
So, if a player has a winning strategy, it can be discovered by searching the tree-structured game graph.
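To make this concrete, here is a small self-contained Python sketch (an editorial illustration, assuming an explicit game tree given as nested lists) of minimax with alpha-beta pruning; on a tree there are no repeated states, so no cycle checking or transposition table is needed.

# Minimax with alpha-beta pruning over an explicit game tree (illustrative sketch).
# A node is either a number (terminal utility for MAX) or a list of child nodes.
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if not isinstance(node, list):          # terminal node: return its utility
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # prune the remaining siblings
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if beta <= alpha:
            break
    return value

# The classic two-ply example: MAX to move, three MIN nodes below.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, True))                # -> 3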
5. Quiescence
Quiescence is the idea of avoiding evaluation at volatile positions in a game, i.e., positions where
things are about to change drastically (like major captures in chess).
A quiescent state is one that is stable enough to be accurately evaluated.
Quiescence Search:
When a search reaches a leaf node at depth limit, if the position is not quiescent, we extend
the search selectively until a stable (quiet) position is reached.
It avoids "evaluation noise" caused by abrupt tactical changes just beyond the search limit.
Benefit: Reduces the horizon effect by extending the search only in "dangerous" positions.
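The idea can be sketched in Python as follows (an editorial sketch, not code from the original answer); evaluate, tactical_moves, legal_moves, and apply are hypothetical placeholders that a concrete game implementation would supply.

import math

# Hypothetical placeholders; a real game supplies its own versions.
def evaluate(state):        return 0        # static evaluation of a position
def tactical_moves(state):  return []       # only "noisy" moves, e.g. captures
def legal_moves(state):     return []       # all legal moves
def apply(state, move):     return state    # successor function

def quiescence(state, alpha, beta):
    # Stand pat: assume the side to move can at least secure the static value.
    stand_pat = evaluate(state)
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    # Extend the search only along "noisy" moves until the position quiets down.
    for move in tactical_moves(state):
        score = -quiescence(apply(state, move), -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

def search(state, depth, alpha=-math.inf, beta=math.inf):
    if depth == 0:
        # Instead of evaluating a possibly volatile frontier position directly,
        # hand it to quiescence search, which reduces the horizon effect.
        return quiescence(state, alpha, beta)
    moves = legal_moves(state)
    if not moves:                            # terminal position
        return evaluate(state)
    for move in moves:
        score = -search(apply(state, move), depth - 1, -beta, -alpha)
        if score >= beta:
            return beta                      # fail-hard beta cutoff
        alpha = max(alpha, score)
    return alpha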
6. Continuous Physical State Spaces
Games like tennis, pool, and croquet operate in continuous physical state spaces, unlike chess or tic-tac-toe, which are discrete and symbolic.
Continuous State Space: Standard minimax assumes finite, discrete moves. These games
have infinite possible shots or moves.
Real-Time Constraints: Minimax is computationally expensive and too slow for real-time
physical actions.
Physics-Based Modeling: Need to simulate ball trajectories, collisions, etc., which is beyond
symbolic reasoning.
Reinforcement Learning: Learning and control-based approaches that handle continuous actions are better suited than symbolic game-tree search.
So, standard symbolic AI game-playing doesn’t work well; more robotics-oriented, sensor-driven,
and learning-based approaches are used in such games.