# Lecture 3: Solving Problems by Searching
## Why Search Matters
Route planning, puzzles, scheduling, games
Same abstract problem: initial state → goal state via actions
Theory: completeness, optimality, complexity
Practice: state space size, heuristic design
## Learning Objectives
Formulate problems as search problems
Understand uninformed search: BFS, DFS, UCS, iterative deepening
Understand informed search: greedy best-first, A*
Design and analyze heuristic functions
## Problem-Solving Agents
Goal: Reach a goal state
Formulation: Initial state, actions, transition model, goal test, path cost
Solution: Sequence of actions from initial to goal state
Optimal solution: Lowest path cost
## Example: Romania

States: Cities
Actions: Drive between adjacent cities
Goal: Bucharest
Path cost: Sum of edge costs (distances)
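As a concrete sketch, a western fragment of the AIMA Romania map can be encoded as a weighted adjacency structure (distances in km, taken from the AIMA map; only a subset of roads is shown):

```python
# Each undirected road is listed once, then mirrored into ROMANIA.
EDGES = {
    ("Arad", "Zerind"): 75, ("Arad", "Sibiu"): 140, ("Arad", "Timisoara"): 118,
    ("Sibiu", "Fagaras"): 99, ("Sibiu", "RimnicuVilcea"): 80,
    ("RimnicuVilcea", "Pitesti"): 97,
    ("Fagaras", "Bucharest"): 211, ("Pitesti", "Bucharest"): 101,
}

ROMANIA = {}  # city -> list of (neighbor, distance) pairs
for (a, b), d in EDGES.items():
    ROMANIA.setdefault(a, []).append((b, d))
    ROMANIA.setdefault(b, []).append((a, d))
```

This dict-of-successor-lists form is exactly what the transition model of the formulation above needs: states are the keys, actions are the outgoing edges, and path cost sums the edge weights.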
## Search Trees and Graphs

Search tree: Expand states, generate successors
State space graph: States + edges (may have cycles)
Explored set: States already expanded
Frontier: States to expand (open list)
## Measuring Performance
Completeness: Does it find a solution if one exists?
Optimality: Does it find the optimal solution?
Time complexity: Number of nodes expanded
Space complexity: Maximum nodes in memory
## Breadth-First Search (BFS)
Expand shallowest node first
Frontier = FIFO queue
Complete: Yes (if branching factor finite)
Optimal: Yes (if all step costs are equal)
Time: O(b^d)
Space: O(b^d)
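A minimal sketch, assuming states are hashable and `neighbors(n)` returns the successor states of `n`:

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search; returns a path with the fewest edges, or None."""
    if start == goal:
        return [start]
    frontier = deque([start])      # FIFO queue
    parent = {start: None}         # doubles as the explored/reached set
    while frontier:
        node = frontier.popleft()  # shallowest node first
        for succ in neighbors(node):
            if succ not in parent:
                parent[succ] = node
                if succ == goal:   # early goal test (safe with unit costs)
                    path = [succ]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                frontier.append(succ)
    return None
```

Testing the goal at generation time (rather than expansion) saves one full layer of expansion; that shortcut is only valid because BFS's optimality assumes all steps cost the same.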
## Uniform-Cost Search (UCS)
Expand lowest-cost node first
Frontier = priority queue (by g(n))
Complete: Yes (if every step cost ≥ ε > 0)
Optimal: Yes (for non-negative step costs)
Equivalent to Dijkstra's algorithm, stopping when the goal is selected for expansion
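A sketch using a binary heap keyed on g(n), assuming `neighbors(n)` yields `(successor, step_cost)` pairs:

```python
import heapq

def ucs(start, goal, neighbors):
    """Uniform-cost search; returns (cost, path) of a cheapest path, or None."""
    frontier = [(0, start, [start])]   # priority queue ordered by g(n)
    best_g = {}                        # cheapest cost at which a node was expanded
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:               # goal test at EXPANSION => optimal
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                   # stale queue entry for an expanded node
        best_g[node] = g
        for succ, cost in neighbors(node):
            heapq.heappush(frontier, (g + cost, succ, path + [succ]))
    return None
```

Unlike the BFS sketch, the goal test happens when the node is popped, not when it is generated; a goal generated via an expensive edge may still be reached more cheaply later.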
## Depth-First Search (DFS)
Expand deepest node first
Frontier = LIFO stack
Complete: No (may descend an infinite path)
Optimal: No
Time: O(b^m)
Space: O(bm) — linear!
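An iterative sketch with an explored set. Note the trade-off: the O(bm) space bound holds for the backtracking tree-search variant; the explored set here gives cycle safety at the price of more memory.

```python
def dfs(start, goal, neighbors):
    """Depth-first search; returns some path to goal (not necessarily
    shortest), or None."""
    stack = [(start, [start])]         # LIFO stack of (node, path-so-far)
    explored = set()
    while stack:
        node, path = stack.pop()       # deepest (most recently pushed) first
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for succ in neighbors(node):
            if succ not in explored:
                stack.append((succ, path + [succ]))
    return None
```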
## Depth-Limited Search
DFS with depth limit ℓ
Complete: No (if ℓ < d)
Optimal: No
Avoids infinite paths
## Iterative Deepening Search
Run depth-limited search with ℓ = 0, 1, 2, ...
Complete: Yes
Optimal: Yes (if all step costs are equal)
Time: O(b^d) — same as BFS
Space: O(bd) — like DFS!
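A sketch combining the two previous slides: a recursive depth-limited search, re-run with an increasing limit (the `max_depth` cutoff is a practical safeguard, not part of the textbook algorithm):

```python
def depth_limited(node, goal, neighbors, limit):
    """Recursive depth-limited search; returns a path or None.
    No explored set: cycles are cut off by the depth limit."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for succ in neighbors(node):
        result = depth_limited(succ, goal, neighbors, limit - 1)
        if result is not None:
            return [node] + result
    return None

def ids(start, goal, neighbors, max_depth=50):
    """Iterative deepening: depth-limited search with limit 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal, neighbors, limit)
        if result is not None:
            return result
    return None
```

Because each restart redoes the shallow layers, the total work is only a constant factor more than the final pass, which is why the time bound matches BFS while the recursion stack keeps space at O(bd).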
## Bidirectional Search
Search forward from start and backward from goal
Meet in the middle
Time: O(b^(d/2)) — much faster!
Space: O(b^(d/2))
Requires an explicitly known goal state and actions whose predecessors are computable
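A sketch of bidirectional BFS for unit step costs, assuming a symmetric neighbor relation (so searching backward from the goal uses the same successor function). It grows one layer at a time from each side and returns the length of a shortest path:

```python
def bidirectional_bfs(start, goal, neighbors):
    """Returns the number of edges on a shortest start-goal path, or None."""
    if start == goal:
        return 0
    dist_f, dist_b = {start: 0}, {goal: 0}   # distances seen from each side
    front, back = [start], [goal]
    while front and back:
        if len(front) > len(back):           # expand the smaller frontier
            front, back = back, front
            dist_f, dist_b = dist_b, dist_f
        best, nxt = None, []
        for node in front:
            for succ in neighbors(node):
                if succ not in dist_f:
                    dist_f[succ] = dist_f[node] + 1
                    if succ in dist_b:       # the frontiers have met
                        cand = dist_f[succ] + dist_b[succ]
                        best = cand if best is None else min(best, cand)
                    nxt.append(succ)
        if best is not None:                 # finish the layer, then stop
            return best
        front = nxt
    return None
```

Finishing the whole layer before returning matters: the first meeting found inside a layer is not always on a shortest path, but the minimum over that layer's meetings is.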
## Informed Search: Heuristics
Heuristic h(n): Estimated cost from n to goal
Admissible: h(n) ≤ actual cost (never overestimate)
Consistent: h(n) ≤ c(n,a,n’) + h(n’) for successor n’
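The consistency condition can be checked edge-by-edge on a small explicit graph. A toy sketch, where `graph` maps each node to `(successor, step_cost)` pairs and `h` is a dict of heuristic values:

```python
def is_consistent(graph, h):
    """True iff h(n) <= c(n, n') + h(n') holds for every edge (n, n')."""
    return all(h[n] <= cost + h[s]
               for n, succs in graph.items()
               for s, cost in succs)
```

Consistency implies admissibility (given h = 0 at the goal), but not the converse: a heuristic can undershoot every true cost-to-goal yet drop too sharply across a single edge.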
## Greedy Best-First Search
Expand node with lowest h(n)
Not complete (can get stuck in loops)
Not optimal
Fast when heuristic is good
## A* Search

Expand node with lowest f(n) = g(n) + h(n)
Complete: Yes (if the branching factor is finite and step costs ≥ ε > 0)
Optimal: Yes (if h admissible)
Optimally efficient for given heuristic
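A graph-search sketch, assuming `neighbors(n)` yields `(successor, step_cost)` pairs and `h` is admissible. With a consistent `h` no node is ever re-expanded; with an admissible but inconsistent `h` the `best_g` check still allows re-expansion along cheaper paths, so the result stays optimal:

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* search; returns (cost, path) of an optimal path, or None."""
    frontier = [(h(start), 0, start, [start])]   # ordered by f = g + h
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if g > best_g.get(node, float("inf")):
            continue                             # stale frontier entry
        for succ, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), g2, succ, path + [succ]))
    return None
```

On the Romania example, with straight-line distance to Bucharest as `h`, this finds Arad → Sibiu → Rimnicu Vilcea → Pitesti → Bucharest at cost 418, bypassing the shorter-looking Fagaras route (cost 450).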
## A*: Optimality Proof
A* always expands the frontier node with the lowest f(n)
Suppose the goal were expanded with suboptimal cost g > C*. Some frontier node n′ on an optimal path would have f(n′) = g(n′) + h(n′) ≤ C* < g, so A* would have expanded n′ first: a contradiction
Admissibility (h never overestimates the remaining cost) is exactly what guarantees f(n′) ≤ C* along an optimal path
## Heuristic Design: Relaxed Problems

Relaxation: Remove constraints from problem
The optimal solution cost of the relaxed problem is an admissible heuristic for the original problem
Example: 8-puzzle — ignore “blocking” → Manhattan distance
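For the 8-puzzle, with states as 9-tuples in row-major order and 0 for the blank, the relaxed-problem heuristic is a few lines:

```python
def manhattan(state, goal):
    """Sum over tiles of horizontal + vertical distance to the goal square.
    Admissible: each real move slides one tile one square, so it reduces
    this sum by at most 1."""
    total = 0
    for tile in range(1, 9):                 # the blank is not counted
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```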
## Heuristic Design: Subproblems
Pattern databases: Store exact solution costs for subproblems
Landmarks: Key states, precompute distances
Learning: Learn heuristics from experience
## Dominance and Consistency
h₂ dominates h₁ if h₂(n) ≥ h₁(n) for all n
Dominating heuristic → fewer nodes expanded
Consistent heuristics are admissible
A* with consistent h never re-expands nodes
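Dominance can be spot-checked empirically. Here h₂ (Manhattan distance) is compared against h₁ (misplaced tiles) on random 8-puzzle configurations; both heuristics are defined inline so the snippet stands alone (random 9-permutations include unsolvable states, but dominance is a pointwise property, so that does not matter):

```python
import random

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def h1(state):
    """Misplaced tiles (blank excluded)."""
    return sum(1 for t in range(1, 9) if state.index(t) != GOAL.index(t))

def h2(state):
    """Total Manhattan distance of tiles from their goal squares."""
    return sum(abs(state.index(t) // 3 - GOAL.index(t) // 3) +
               abs(state.index(t) % 3 - GOAL.index(t) % 3)
               for t in range(1, 9))

# Every misplaced tile is at Manhattan distance >= 1, so h2 >= h1 pointwise.
random.seed(0)
samples = [tuple(random.sample(range(9), 9)) for _ in range(1000)]
assert all(h2(s) >= h1(s) for s in samples)
```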
## Summary
| Algorithm | Complete | Optimal | Time | Space |
|---|---|---|---|---|
| BFS | ✓ | ✓* | O(b^d) | O(b^d) |
| UCS | ✓ | ✓ | O(b^d) | O(b^d) |
| DFS | ✗ | ✗ | O(b^m) | O(bm) |
| IDS | ✓ | ✓* | O(b^d) | O(bd) |
| A* | ✓ | ✓ | O(b^d) | O(b^d) |
*When all step costs are equal
## References
Russell & Norvig, AIMA 4e, Ch. 3
Chapter PDF: chapters/chapter-03.pdf
aima-python: search4e.ipynb
Pseudocode: aima.cs.berkeley.edu/algorithms.pdf