
LAB FILE

Artificial Intelligence (ARM - 252)


B.Tech - Artificial Intelligence & Data Science

Submitted By:                                  Submitted To:
Name: Divyanshu Yadav                          Mr. Chiranjit Pal
Batch: AIDS B1
Roll No.: 60219071924

Experiment 1: Breadth-First Search (BFS) for Graph Traversal

Objective:
To implement Breadth-First Search (BFS) for traversing or searching a tree or graph data structure.

Code:
from collections import deque

def bfs(graph, start):
    visited = set()
    queue = deque([start])
    while queue:
        vertex = queue.popleft()
        if vertex not in visited:
            print(vertex, end=" ")
            visited.add(vertex)
            # enqueue unvisited neighbours in their listed order
            queue.extend(n for n in graph[vertex] if n not in visited)

graph = {'A': ['B', 'C'],
         'B': ['D', 'E'],
         'C': ['F'],
         'D': [],
         'E': ['F'],
         'F': []}
bfs(graph, 'A')

Output:
A B C D E F

Inference:
BFS visits nodes in layers starting from the root node. It explores all neighbors at the
present depth before moving to the next level.
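
As a small supplement (not part of the original experiment), the layer-by-layer behaviour can be made explicit by collecting each level before moving to the next. The sketch below reuses the same example graph; the helper name is illustrative.

def bfs_levels(graph, start):
    visited = {start}
    level = [start]
    depth = 0
    while level:
        print(f"Level {depth}: {level}")
        next_level = []
        for vertex in level:
            for neighbor in graph[vertex]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    next_level.append(neighbor)
        level = next_level
        depth += 1

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'D': [], 'E': ['F'], 'F': []}
bfs_levels(graph, 'A')
# Level 0: ['A']
# Level 1: ['B', 'C']
# Level 2: ['D', 'E', 'F']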

Experiment 2: Depth-First Search (DFS) for Graph Traversal

Objective:
To implement Depth-First Search (DFS) for traversing a graph.

Code:
def dfs(graph, start, visited=None):
    if visited is None:
        visited = set()
    visited.add(start)
    print(start, end=" ")
    for neighbor in graph[start]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)

graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}

dfs(graph, 'A')

Output:
A B D E F C

Inference:
DFS explores as far as possible along each branch before backtracking, which is useful in
pathfinding and topological sorting.
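
Since DFS underlies topological sorting, a minimal DFS-based sketch on the same example graph is shown below (illustrative only; it assumes the graph is a DAG). Each node is appended only after all of its descendants, so the reversed post-order is a valid topological order.

def topological_sort(graph):
    visited = set()
    order = []

    def visit(node):
        visited.add(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                visit(neighbor)
        order.append(node)  # appended only after all descendants are finished

    for node in graph:
        if node not in visited:
            visit(node)
    return order[::-1]  # reversed post-order

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'D': [], 'E': ['F'], 'F': []}
print(topological_sort(graph))  # ['A', 'C', 'B', 'E', 'F', 'D']
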
Experiment 3: Solve a Problem using Best-First Search Strategy

Objective:
To implement Best-First Search, a greedy search method that uses a priority queue ordered by a heuristic estimate of the distance to the goal.

Code:
import heapq

def best_first_search(graph, start, goal, heuristic):
    visited = set()
    priority_queue = [(heuristic[start], start)]
    while priority_queue:
        _, current = heapq.heappop(priority_queue)
        if current == goal:
            print(f"Reached goal: {goal}")
            return
        if current not in visited:
            print(current, end=" ")
            visited.add(current)
            for neighbor in graph[current]:
                if neighbor not in visited:
                    heapq.heappush(priority_queue, (heuristic[neighbor], neighbor))

graph = {'A': ['B', 'C'],
         'B': ['D', 'E'],
         'C': ['F'],
         'D': [], 'E': ['F'], 'F': []}
heuristic = {'A': 5, 'B': 3, 'C': 4, 'D': 6, 'E': 2, 'F': 0}
best_first_search(graph, 'A', 'F', heuristic)

Output:
A B E Reached goal: F

Inference:
Best-First Search uses a heuristic to prioritize exploration, leading to faster results in
certain problem spaces, but it is not guaranteed to find the shortest path.
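
The non-optimality can be demonstrated on a small hypothetical graph (all names, weights and heuristic values below are made up for illustration): the node with the lowest heuristic leads to an expensive edge, so the greedy search commits to a costlier route.

import heapq

graph = {'A': [('B', 1), ('C', 5)], 'B': [('G', 10)], 'C': [('G', 1)], 'G': []}
heuristic = {'A': 6, 'B': 1, 'C': 5, 'G': 0}

def greedy_best_first(start, goal):
    # frontier ordered by h(n) only; the accumulated cost g is carried just for reporting
    frontier = [(heuristic[start], 0, start, [start])]
    visited = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in visited:
            continue
        visited.add(node)
        for neighbor, cost in graph[node]:
            if neighbor not in visited:
                heapq.heappush(frontier, (heuristic[neighbor], g + cost, neighbor, path + [neighbor]))
    return None, None

print(greedy_best_first('A', 'G'))  # (['A', 'B', 'G'], 11) -- the cheaper path A -> C -> G (cost 6) is missed
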
Experiment 4: Implement A* (A-Star) Search Algorithm

Objective:
To implement the A* Search Algorithm, combining the cost to reach a node (g) with a heuristic estimate of the remaining cost to the goal (h).

Code:
import heapq

def a_star_search(graph, start, goal, h):
    open_list = [(h[start], 0, start, [start])]
    visited = set()
    while open_list:
        f, g, current, path = heapq.heappop(open_list)
        if current == goal:
            print("Path:", " -> ".join(path))
            return
        visited.add(current)
        for neighbor, cost in graph[current]:
            if neighbor not in visited:
                total_cost = g + cost
                heapq.heappush(open_list,
                               (total_cost + h[neighbor], total_cost, neighbor, path + [neighbor]))

graph = {'A': [('B', 1), ('C', 3)],
         'B': [('D', 3), ('E', 1)],
         'C': [('F', 5)],
         'D': [], 'E': [('F', 2)], 'F': []}
heuristic = {'A': 5, 'B': 4, 'C': 3, 'D': 6, 'E': 2, 'F': 0}
a_star_search(graph, 'A', 'F', heuristic)

Output:
Path: A -> B -> E -> F

Inference:
A* Search efficiently finds the optimal path using both path cost and heuristic, balancing
accuracy and performance.
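
A* is guaranteed to return an optimal path only when the heuristic is admissible, i.e. it never overestimates the true remaining cost. As an optional check (not part of the original experiment), the sketch below computes the true cost-to-goal of every node with Dijkstra's algorithm on the reversed graph and flags overestimates; on the example graph above it reports slight overestimates at 'A' and 'B', although A* still returned the optimal path in this instance.

import heapq

graph = {'A': [('B', 1), ('C', 3)], 'B': [('D', 3), ('E', 1)], 'C': [('F', 5)],
         'D': [], 'E': [('F', 2)], 'F': []}
heuristic = {'A': 5, 'B': 4, 'C': 3, 'D': 6, 'E': 2, 'F': 0}

def true_costs_to(goal):
    # Dijkstra on the reversed graph gives the cheapest cost from every node to the goal
    reverse = {n: [] for n in graph}
    for u, edges in graph.items():
        for v, w in edges:
            reverse[v].append((u, w))
    dist = {n: float('inf') for n in graph}
    dist[goal] = 0
    heap = [(0, goal)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue
        for prev, w in reverse[node]:
            if d + w < dist[prev]:
                dist[prev] = d + w
                heapq.heappush(heap, (d + w, prev))
    return dist

dist = true_costs_to('F')
for node in graph:
    if heuristic[node] > dist[node]:
        print(f"h({node}) = {heuristic[node]} overestimates the true cost {dist[node]}")
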
Experiment 5: Implement AO* (And-Or) Search Algorithm

Objective:
To implement the AO* algorithm for problem solving in graphs with AND/OR conditions.

Code:
def ao_star(node, graph, heuristic, solution):
    print(f"Expanding Node: {node}")
    # leaf nodes have no child groups; their cost is just their heuristic value
    if node not in graph or not graph[node]:
        return heuristic[node]
    cost = float('inf')
    for children in graph[node]:
        # AND group: every child in the group must be solved, so their costs are summed
        new_cost = sum(ao_star(c, graph, heuristic, solution) for c in children)
        if new_cost < cost:
            cost = new_cost
            solution[node] = children
    return heuristic[node] + cost

graph = {
    'A': [['B', 'C'], ['D']],
    'B': [['E']],
    'C': [['F']],
    'D': [['G']],
    'E': [], 'F': [], 'G': []
}

heuristic = {
    'A': 2, 'B': 1, 'C': 1, 'D': 3, 'E': 0, 'F': 0, 'G': 0
}

solution = {}
ao_star('A', graph, heuristic, solution)
print("Solution Graph:", solution)

Output:
Expanding Node: A
Expanding Node: B
Expanding Node: E
Expanding Node: C
Expanding Node: F
Expanding Node: D
Expanding Node: G
Solution Graph: {'B': ['E'], 'C': ['F'], 'A': ['B', 'C'], 'D': ['G']}
Inference:
AO* is suited for planning in environments where multiple subgoals must be satisfied (AND
conditions), offering structured solutions.
Experiment 6: Solve the 8-Puzzle Problem using Informed Search (A* Search)

Objective:
To solve the 8-puzzle problem using A* Search with the Manhattan Distance heuristic.

Code:
import heapq

goal_state = [[1, 2, 3], [4, 5, 6], [7, 8, 0]]

def heuristic(state):
    h = 0
    for i in range(3):
        for j in range(3):
            value = state[i][j]
            if value != 0:
                goal_x = (value - 1) // 3
                goal_y = (value - 1) % 3
                h += abs(i - goal_x) + abs(j - goal_y)
    return h

def find_zero(state):
    for i in range(3):
        for j in range(3):
            if state[i][j] == 0:
                return i, j

def successors(state):
    moves = []
    x, y = find_zero(state)
    directions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for dx, dy in directions:
        nx, ny = x + dx, y + dy
        if 0 <= nx < 3 and 0 <= ny < 3:
            new_state = [row[:] for row in state]
            new_state[x][y], new_state[nx][ny] = new_state[nx][ny], new_state[x][y]
            moves.append(new_state)
    return moves

def a_star(start):
    heap = [(heuristic(start), 0, start, [])]
    visited = set()
    while heap:
        est, cost, current, path = heapq.heappop(heap)
        if current == goal_state:
            for step in path + [current]:
                print(step)
            return
        visited.add(str(current))
        for child in successors(current):
            if str(child) not in visited:
                heapq.heappush(heap, (cost + 1 + heuristic(child), cost + 1, child, path + [current]))

start_state = [[1, 2, 3], [4, 0, 6], [7, 5, 8]]
a_star(start_state)

Output:
[[1, 2, 3], [4, 0, 6], [7, 5, 8]]
[[1, 2, 3], [4, 5, 6], [7, 0, 8]]
[[1, 2, 3], [4, 5, 6], [7, 8, 0]]

Inference:
A* search finds the shortest path efficiently using a heuristic to guide the exploration.
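
Manhattan distance is not the only admissible heuristic for the 8-puzzle. A simpler but weaker alternative counts the number of misplaced tiles; the hedged sketch below shows it for comparison (function name is illustrative). Because every misplaced tile has a Manhattan distance of at least 1, the Manhattan heuristic dominates this one, so A* guided by it usually expands fewer states.

goal_state = [[1, 2, 3], [4, 5, 6], [7, 8, 0]]

def misplaced_tiles(state):
    # number of non-blank tiles that are not in their goal position
    count = 0
    for i in range(3):
        for j in range(3):
            if state[i][j] != 0 and state[i][j] != goal_state[i][j]:
                count += 1
    return count

print(misplaced_tiles([[1, 2, 3], [4, 0, 6], [7, 5, 8]]))  # 2
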
Experiment 7: Solve the 8-Puzzle Problem using Uninformed Search (BFS, DFS)

Objective:
To solve the 8-puzzle problem using BFS and DFS without heuristics.

Code:
from collections import deque
import copy

goal_state = [[1, 2, 3], [4, 5, 6], [7, 8, 0]]

def find_blank(state):
    for i in range(3):
        for j in range(3):
            if state[i][j] == 0:
                return i, j

def get_neighbors(state):
    x, y = find_blank(state)
    directions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    neighbors = []
    for dx, dy in directions:
        nx, ny = x + dx, y + dy
        if 0 <= nx < 3 and 0 <= ny < 3:
            new_state = copy.deepcopy(state)
            new_state[x][y], new_state[nx][ny] = new_state[nx][ny], new_state[x][y]
            neighbors.append(new_state)
    return neighbors

def is_goal(state):
    return state == goal_state

def print_state(state):
    for row in state:
        print(row)
    print()

# BFS
def bfs(initial_state):
    visited = set()  # serialized states, for fast membership tests
    queue = deque([(initial_state, [])])
    while queue:
        current_state, path = queue.popleft()
        if str(current_state) in visited:
            continue
        visited.add(str(current_state))

        if is_goal(current_state):
            return path + [current_state]

        for neighbor in get_neighbors(current_state):
            queue.append((neighbor, path + [current_state]))
    return None

# DFS
def dfs(initial_state):
    visited = set()
    stack = [(initial_state, [])]
    while stack:
        current_state, path = stack.pop()
        if str(current_state) in visited:
            continue
        visited.add(str(current_state))

        if is_goal(current_state):
            return path + [current_state]

        for neighbor in get_neighbors(current_state):
            stack.append((neighbor, path + [current_state]))
    return None

initial = [[1, 2, 3], [4, 0, 6], [7, 5, 8]]

print("Solving with BFS:")
bfs_result = bfs(initial)
if bfs_result:
    for step in bfs_result:
        print_state(step)

print("Solving with DFS:")
dfs_result = dfs(initial)
if dfs_result:
    for step in dfs_result:
        print_state(step)
Output:
BFS prints the three board states from the start state to the goal. DFS, which plunges down deep branches first, returns a much longer sequence of states (if it finishes in a reasonable time at all).
Inference:

BFS guarantees the shortest solution path, as it explores all nodes at the present depth
before moving deeper. However, it uses more memory.

DFS may find a solution faster in some cases but can get stuck in deep paths or go into
infinite loops without proper checks; a depth-limited variant is sketched below.

The 8-puzzle problem is well suited to uninformed search algorithms, but they become
inefficient as the state space grows.
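
To contain the deep-path risk noted above, DFS can be depth-limited: any branch longer than a fixed number of moves is abandoned. This is a minimal sketch reusing goal_state, get_neighbors and print_state from the code above; the limit of 5 is arbitrary. For this start state it recovers the two-move solution, and iterative deepening (retrying with a growing limit) would remove the need to guess the limit.

def depth_limited_dfs(state, limit, path=None):
    if path is None:
        path = []
    if state == goal_state:
        return path + [state]
    if limit == 0:
        return None
    for neighbor in get_neighbors(state):
        if neighbor not in path:  # avoid cycling back along the current path
            result = depth_limited_dfs(neighbor, limit - 1, path + [state])
            if result:
                return result
    return None

solution = depth_limited_dfs([[1, 2, 3], [4, 0, 6], [7, 5, 8]], limit=5)
if solution:
    for step in solution:
        print_state(step)
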
Experiment 8: Compare Informed and Uninformed Search Techniques

Objective:
To compare the efficiency and accuracy of A* (informed) vs BFS and DFS (uninformed).
Comparison Criteria:
Metric            A* Search   BFS                  DFS
Path Optimality   Yes         Yes                  Not guaranteed
Time Efficiency   High        Medium               Low
Memory Usage      Medium      High                 Low
Goal Reachable    Always      Always (if exists)   Sometimes fails

Inference:
Informed search methods like A* are more efficient than uninformed methods like BFS and
DFS on complex problems, and they remain optimal as long as the heuristic never
overestimates the remaining cost.
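
One way to make the comparison concrete is to count how many states each algorithm expands on the same 8-puzzle instance. The sketch below is illustrative only: it assumes goal_state and get_neighbors from Experiment 7 and the Manhattan-distance heuristic function from Experiment 6 are available in the same session. On easy instances the gap is small, but it grows quickly as the start state moves further from the goal.

import heapq
from collections import deque

def count_bfs(start):
    queue, seen, expanded = deque([start]), {str(start)}, 0
    while queue:
        state = queue.popleft()
        expanded += 1
        if state == goal_state:
            return expanded
        for neighbor in get_neighbors(state):
            if str(neighbor) not in seen:
                seen.add(str(neighbor))
                queue.append(neighbor)

def count_a_star(start):
    heap, seen, expanded = [(heuristic(start), 0, start)], set(), 0
    while heap:
        f, g, state = heapq.heappop(heap)
        if str(state) in seen:
            continue
        seen.add(str(state))
        expanded += 1
        if state == goal_state:
            return expanded
        for neighbor in get_neighbors(state):
            if str(neighbor) not in seen:
                heapq.heappush(heap, (g + 1 + heuristic(neighbor), g + 1, neighbor))

start = [[1, 2, 3], [4, 0, 6], [7, 5, 8]]
print("States expanded by BFS:", count_bfs(start))
print("States expanded by A*: ", count_a_star(start))
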
Experiment 9: Heuristic Function in Informed Search Strategies

Objective:

To demonstrate how heuristic functions are used in informed search strategies like A*
Search and Greedy Best-First Search, helping guide the search process toward the goal more
efficiently.

Code:
from queue import PriorityQueue

graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('D', 5), ('E', 12)],
    'C': [('F', 7)],
    'D': [],
    'E': [('G', 3)],
    'F': [('G', 2)],
    'G': []
}

heuristics = {
    'A': 7,
    'B': 6,
    'C': 5,
    'D': 4,
    'E': 2,
    'F': 1,
    'G': 0
}

def a_star_search(start, goal):
    pq = PriorityQueue()
    pq.put((0 + heuristics[start], 0, start, [start]))

    while not pq.empty():
        f, cost, current_node, path = pq.get()

        if current_node == goal:
            print("Path found:", path)
            print("Total cost:", cost)
            return
        for neighbor, weight in graph[current_node]:
            g = cost + weight
            h = heuristics[neighbor]
            pq.put((g + h, g, neighbor, path + [neighbor]))

    print("No path found")

a_star_search('A', 'G')

Output:

Path found: ['A', 'C', 'F', 'G']

Total cost: 13

Inference:

The A* search algorithm successfully uses both the actual cost (g) and the heuristic estimate
(h) to determine the most promising path. The chosen path A → C → F → G reflects the
shortest-cost route from start to goal based on both actual path cost and heuristic guidance.
This demonstrates the effectiveness of informed search strategies over uninformed ones
like BFS or DFS, especially in large or complex problem spaces.
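
For contrast, Greedy Best-First Search orders the frontier by h(n) alone and ignores the accumulated cost g(n). The hedged sketch below reuses the graph and heuristics dictionaries defined above, carrying the cost only for reporting; on this particular graph it happens to return the same path, but in general it can settle for a costlier route than A*.

from queue import PriorityQueue

def greedy_best_first_search(start, goal):
    pq = PriorityQueue()
    pq.put((heuristics[start], 0, start, [start]))
    visited = set()

    while not pq.empty():
        h, cost, current_node, path = pq.get()

        if current_node == goal:
            print("Path found:", path)
            print("Total cost:", cost)
            return
        if current_node in visited:
            continue
        visited.add(current_node)
        for neighbor, weight in graph[current_node]:
            if neighbor not in visited:
                pq.put((heuristics[neighbor], cost + weight, neighbor, path + [neighbor]))

    print("No path found")

greedy_best_first_search('A', 'G')  # Path found: ['A', 'C', 'F', 'G'], Total cost: 13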

Experiment 10: Chess using Minimax Algorithm

Objective:

To implement a chess engine's move search using the Minimax algorithm with alpha-beta pruning.

Code:

def minimax(board, depth, is_maximising, alpha, beta, verbose=False):
    # base condition to break the recursion
    if depth == 0:
        return evaluate_board(board), None

    best_score = -float('inf') if is_maximising else float('inf')
    best_move = None

    # move generation and board helpers (evaluate_board, get_all_valid_moves,
    # get_all_valid_moves_as_ordered, make_move, undo_move) are defined elsewhere in the engine
    all_moves = get_all_valid_moves_as_ordered(
        board, get_all_valid_moves(board, is_maximising, verbose=verbose))

    # iterate over each branch / child of the current node
    for move in all_moves:
        sr, sc, er, ec = move
        move_obj = make_move(board, sr, sc, er, ec)

        # just to monitor which moves are being evaluated
        if verbose:
            global branch
            branch += 1
            # print(move_obj)

        score, _ = minimax(board, depth - 1, not is_maximising, alpha, beta, verbose=verbose)
        undo_move(board, move_obj)

        # keep the branch that gives the best value for the max/min player
        if is_maximising:
            if score > best_score:
                best_score = score
                best_move = move_obj
            alpha = max(alpha, best_score)
        else:
            if score < best_score:
                best_score = score
                best_move = move_obj
            beta = min(beta, best_score)

        # prune the remaining branches once alpha >= beta
        if alpha >= beta:
            if verbose:
                global pruned
                pruned += 1
                # print("pruned!")
            break

    # return the best value available at the current node to its parent node
    return (best_score, best_move)
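
The routine above depends on several engine helpers (evaluate_board, get_all_valid_moves, get_all_valid_moves_as_ordered, make_move, undo_move) that are not included in this printout. Purely as an illustration of the kind of evaluation the search relies on, a minimal material-count version might look like the sketch below; it assumes a board represented as an 8x8 list of single-character piece codes, uppercase for the maximising side and '.' for empty squares, which may differ from the real engine's representation.

PIECE_VALUES = {'p': 1, 'n': 3, 'b': 3, 'r': 5, 'q': 9, 'k': 0}

def evaluate_board(board):
    # positive scores favour the maximising (uppercase) side
    score = 0
    for row in board:
        for piece in row:
            if piece == '.':
                continue
            value = PIECE_VALUES[piece.lower()]
            score += value if piece.isupper() else -value
    return score
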
Output:
