Graph Algorithms for Coding Interviews: BFS, DFS, Dijkstra & More
Graph problems are among the most frequently tested and most feared topics in coding interviews. They appear at every major tech company, and for good reason: graphs model real-world systems like social networks, maps, dependency chains, and network routing. This guide gives you the theory, the code, and the patterns you need to solve graph problems confidently in any interview.

Table of Contents
- Graph Fundamentals
- Graph Representations
- Breadth-First Search (BFS)
- Depth-First Search (DFS)
- Topological Sort
- Shortest Path: Dijkstra's Algorithm
- Shortest Path: Bellman-Ford Algorithm
- Minimum Spanning Tree: Kruskal's Algorithm
- Minimum Spanning Tree: Prim's Algorithm
- Cycle Detection
- Union-Find (Disjoint Set Union)
- Common Interview Patterns
- Practice Strategy
- Conclusion
Graph Fundamentals
Before diving into algorithms, make sure you have a solid understanding of graph terminology.
A graph consists of a set of vertices (also called nodes) and a set of edges that connect pairs of vertices. Graphs can be:
- Directed or undirected: In a directed graph, edges have a direction (A to B is different from B to A). In an undirected graph, edges are bidirectional.
- Weighted or unweighted: In a weighted graph, each edge has a numerical cost or distance. In an unweighted graph, all edges are treated equally.
- Cyclic or acyclic: A cyclic graph contains at least one cycle (a path that starts and ends at the same vertex). A Directed Acyclic Graph (DAG) is a directed graph with no cycles and is especially important for dependency problems.
- Connected or disconnected: In an undirected graph, a connected graph has a path between every pair of vertices. A directed graph is strongly connected if there is a path from every vertex to every other vertex.
Understanding these properties helps you select the right algorithm for each problem.
Graph Representations
There are two primary ways to represent a graph in code. Choosing the right representation affects both the clarity and performance of your solution.
Adjacency List
The adjacency list represents each vertex as a key in a dictionary (or array), with the value being a list of its neighbors. This is the most common representation in interviews.
```python
# Unweighted directed graph
graph = {
    'A': ['B', 'C'],
    'B': ['D'],
    'C': ['D', 'E'],
    'D': ['E'],
    'E': []
}

# Weighted directed graph
weighted_graph = {
    'A': [('B', 4), ('C', 2)],
    'B': [('D', 3)],
    'C': [('D', 1), ('E', 5)],
    'D': [('E', 2)],
    'E': []
}

# Building from an edge list (common interview input format)
def build_adjacency_list(edges, directed=True):
    graph = {}
    for u, v in edges:
        if u not in graph:
            graph[u] = []
        if v not in graph:
            graph[v] = []
        graph[u].append(v)
        if not directed:
            graph[v].append(u)
    return graph
```

Space complexity: O(V + E). Checking whether an edge exists takes O(degree of vertex) time.
Adjacency Matrix
The adjacency matrix uses a 2D array where matrix[i][j] indicates whether an edge exists from vertex i to vertex j.
```python
# Unweighted graph with vertices 0-3
matrix = [
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0]
]

# Checking if an edge exists: O(1)
has_edge = matrix[0][1] == 1  # True: edge from 0 to 1
```

Space complexity: O(V^2). Edge lookup is O(1), but iterating over all neighbors of a vertex takes O(V).
Interview tip: Use an adjacency list unless the problem specifically benefits from O(1) edge lookups (rare in interviews). Most interviewers expect adjacency lists.
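If a problem hands you a matrix but you want the iteration ergonomics of a list, converting is a short O(V^2) pass. A minimal sketch (the helper name is just illustrative):

```python
def matrix_to_adjacency_list(matrix):
    # Each row index is a vertex; the nonzero columns in that row are its neighbors
    return {i: [j for j, w in enumerate(row) if w] for i, row in enumerate(matrix)}

matrix = [
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0]
]
graph = matrix_to_adjacency_list(matrix)
# graph: {0: [1, 2], 1: [3], 2: [3], 3: []}
```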
Breadth-First Search (BFS)
BFS explores a graph level by level, visiting all neighbors of the current vertex before moving to the next level. It uses a queue and is the go-to algorithm for finding the shortest path in an unweighted graph.
Implementation
```python
from collections import deque

def bfs(graph, start):
    visited = set()
    queue = deque([start])
    visited.add(start)
    result = []
    while queue:
        vertex = queue.popleft()
        result.append(vertex)
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return result
```

Shortest Path in Unweighted Graph
```python
def bfs_shortest_path(graph, start, target):
    if start == target:
        return [start]
    visited = set([start])
    queue = deque([(start, [start])])
    while queue:
        vertex, path = queue.popleft()
        for neighbor in graph[vertex]:
            if neighbor == target:
                return path + [neighbor]
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, path + [neighbor]))
    return []  # No path found
```

Multi-Source BFS
Some problems require starting BFS from multiple sources simultaneously. This is common in problems like "rotting oranges" or "walls and gates."
```python
def multi_source_bfs(grid, sources):
    rows, cols = len(grid), len(grid[0])
    queue = deque()
    visited = set()
    for r, c in sources:
        queue.append((r, c, 0))  # row, col, distance
        visited.add((r, c))
    directions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    while queue:
        r, c, dist = queue.popleft()
        for dr, dc in directions:
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in visited
                    and grid[nr][nc] != -1):  # -1 represents walls
                visited.add((nr, nc))
                grid[nr][nc] = dist + 1
                queue.append((nr, nc, dist + 1))
    return grid
```

Time complexity: O(V + E) for all BFS variants.
When to use BFS:
- Shortest path in unweighted graphs
- Level-order traversal of trees
- Finding all nodes within a given distance
- Problems involving "minimum number of steps"
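Several of these use cases need only distances, not full paths. A common variant of the BFS above (a sketch, not tied to any particular problem) processes the queue one level at a time and records each vertex's distance in edges:

```python
from collections import deque

def bfs_distances(graph, start):
    # Returns {vertex: number of edges on the shortest path from start}
    distance = {start: 0}
    queue = deque([start])
    while queue:
        for _ in range(len(queue)):  # everything currently queued is one level
            vertex = queue.popleft()
            for neighbor in graph[vertex]:
                if neighbor not in distance:
                    distance[neighbor] = distance[vertex] + 1
                    queue.append(neighbor)
    return distance

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'], 'D': ['E'], 'E': []}
# bfs_distances(graph, 'A') -> {'A': 0, 'B': 1, 'C': 1, 'D': 2, 'E': 2}
```

The inner `for _ in range(len(queue))` loop is not strictly required when you store distances per vertex, but the level-by-level shape is worth memorizing: it is exactly what problems like Rotting Oranges ("how many minutes?") ask for.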
Depth-First Search (DFS)
DFS explores as far as possible along each branch before backtracking. It uses a stack (or recursion, which implicitly uses the call stack). DFS is the foundation for many graph algorithms including cycle detection, topological sort, and finding connected components.
Recursive Implementation
```python
def dfs_recursive(graph, vertex, visited=None):
    if visited is None:
        visited = set()
    visited.add(vertex)
    result = [vertex]
    for neighbor in graph[vertex]:
        if neighbor not in visited:
            result.extend(dfs_recursive(graph, neighbor, visited))
    return result
```

Iterative Implementation
```python
def dfs_iterative(graph, start):
    visited = set()
    stack = [start]
    result = []
    while stack:
        vertex = stack.pop()
        if vertex not in visited:
            visited.add(vertex)
            result.append(vertex)
            # Add neighbors in reverse order for consistent traversal
            for neighbor in reversed(graph[vertex]):
                if neighbor not in visited:
                    stack.append(neighbor)
    return result
```

DFS on a Grid
Many interview problems represent the graph as a 2D grid. Here is a template for grid-based DFS:
```python
def dfs_grid(grid, row, col, visited):
    rows, cols = len(grid), len(grid[0])
    if (row < 0 or row >= rows or col < 0 or col >= cols
            or (row, col) in visited or grid[row][col] == 0):
        return 0
    visited.add((row, col))
    count = 1
    for dr, dc in [(0, 1), (0, -1), (1, 0), (-1, 0)]:
        count += dfs_grid(grid, row + dr, col + dc, visited)
    return count

# Example: Count islands (number of connected components of 1s)
def count_islands(grid):
    visited = set()
    islands = 0
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if grid[r][c] == 1 and (r, c) not in visited:
                dfs_grid(grid, r, c, visited)
                islands += 1
    return islands
```

Time complexity: O(V + E) for an adjacency list, O(V^2) for an adjacency matrix.
When to use DFS:
- Detecting cycles
- Topological sorting
- Finding connected components
- Path existence problems
- Exhaustive search (all paths, all combinations)
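The exhaustive-search bullet deserves its own template. Unlike the traversals above, enumerating all paths must allow a vertex to be revisited on a different path, so the visited state is undone on backtrack. A sketch (the `in path` membership check is O(path length), which is fine at interview sizes):

```python
def all_paths(graph, start, target):
    paths = []
    path = [start]

    def dfs(vertex):
        if vertex == target:
            paths.append(path[:])  # snapshot the current path
            return
        for neighbor in graph[vertex]:
            if neighbor not in path:  # avoid revisiting within this one path
                path.append(neighbor)
                dfs(neighbor)
                path.pop()  # backtrack: free the vertex for other paths

    dfs(start)
    return paths

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'], 'D': ['E'], 'E': []}
# all_paths(graph, 'A', 'E') -> [['A', 'B', 'D', 'E'], ['A', 'C', 'D', 'E'], ['A', 'C', 'E']]
```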
Topological Sort
Topological sort produces a linear ordering of vertices in a DAG such that for every directed edge (u, v), vertex u comes before v. This is essential for dependency resolution problems.
Kahn's Algorithm (BFS-based)
```python
from collections import deque

def topological_sort_kahn(graph, num_vertices):
    # Assumes every vertex appears as a key in graph (as build_adjacency_list guarantees)
    # Calculate in-degrees
    in_degree = {v: 0 for v in graph}
    for vertex in graph:
        for neighbor in graph[vertex]:
            in_degree[neighbor] += 1
    # Start with vertices that have no incoming edges
    queue = deque([v for v in in_degree if in_degree[v] == 0])
    result = []
    while queue:
        vertex = queue.popleft()
        result.append(vertex)
        for neighbor in graph[vertex]:
            in_degree[neighbor] -= 1
            if in_degree[neighbor] == 0:
                queue.append(neighbor)
    # If result contains all vertices, the graph has a valid topological order
    if len(result) == num_vertices:
        return result
    else:
        return []  # Graph has a cycle
```

DFS-based Topological Sort
```python
def topological_sort_dfs(graph):
    visited = set()
    stack = []

    def dfs(vertex):
        visited.add(vertex)
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                dfs(neighbor)
        stack.append(vertex)

    for vertex in graph:
        if vertex not in visited:
            dfs(vertex)
    return stack[::-1]  # Reverse the post-order
```

Common interview problems using topological sort:
- Course Schedule (can you finish all courses given prerequisites?)
- Build order for software packages
- Task scheduling with dependencies
- Alien dictionary (determine character ordering from a sorted list of words)
Time complexity: O(V + E).
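To make the pattern concrete, here is a sketch of Course Schedule using the same in-degree idea as Kahn's algorithm. It assumes prerequisites arrive as (course, prereq) pairs, the usual LeetCode input format:

```python
from collections import deque

def can_finish(num_courses, prerequisites):
    graph = {c: [] for c in range(num_courses)}
    in_degree = [0] * num_courses
    for course, prereq in prerequisites:
        graph[prereq].append(course)  # edge prereq -> course
        in_degree[course] += 1
    queue = deque([c for c in range(num_courses) if in_degree[c] == 0])
    taken = 0
    while queue:
        course = queue.popleft()
        taken += 1
        for nxt in graph[course]:
            in_degree[nxt] -= 1
            if in_degree[nxt] == 0:
                queue.append(nxt)
    return taken == num_courses  # False means a cycle blocks some course
```

Course Schedule II is the same loop with the dequeued courses collected into a list instead of merely counted.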
Shortest Path: Dijkstra's Algorithm
Dijkstra's algorithm finds the shortest path from a source vertex to all other vertices in a weighted graph with non-negative edge weights. It is one of the most important algorithms for interviews.
Implementation with Min-Heap
```python
import heapq

def dijkstra(graph, start):
    # graph: {vertex: [(neighbor, weight), ...]}
    distances = {vertex: float('inf') for vertex in graph}
    distances[start] = 0
    previous = {vertex: None for vertex in graph}
    min_heap = [(0, start)]  # (distance, vertex)
    while min_heap:
        current_dist, current_vertex = heapq.heappop(min_heap)
        # Skip if we have already found a shorter path
        if current_dist > distances[current_vertex]:
            continue
        for neighbor, weight in graph[current_vertex]:
            distance = current_dist + weight
            if distance < distances[neighbor]:
                distances[neighbor] = distance
                previous[neighbor] = current_vertex
                heapq.heappush(min_heap, (distance, neighbor))
    return distances, previous

def reconstruct_path(previous, start, target):
    path = []
    current = target
    while current is not None:
        path.append(current)
        current = previous[current]
    path.reverse()
    return path if path[0] == start else []
```

Example Usage
```python
graph = {
    'A': [('B', 4), ('C', 2)],
    'B': [('D', 3), ('C', 1)],
    'C': [('D', 1), ('E', 5)],
    'D': [('E', 2)],
    'E': []
}

distances, previous = dijkstra(graph, 'A')
# distances: {'A': 0, 'B': 4, 'C': 2, 'D': 3, 'E': 5}

path = reconstruct_path(previous, 'A', 'E')
# path: ['A', 'C', 'D', 'E']
```

Time complexity: O((V + E) log V) with a binary min-heap.
Key constraint: Dijkstra does not work with negative edge weights. Use Bellman-Ford instead.
Shortest Path: Bellman-Ford Algorithm
Bellman-Ford finds the shortest path from a source vertex to all other vertices and works with negative edge weights. It can also detect negative cycles.
Implementation
```python
def bellman_ford(vertices, edges, start):
    # edges: list of (u, v, weight)
    distances = {v: float('inf') for v in vertices}
    distances[start] = 0
    # Relax all edges V-1 times
    for _ in range(len(vertices) - 1):
        for u, v, weight in edges:
            if distances[u] != float('inf') and distances[u] + weight < distances[v]:
                distances[v] = distances[u] + weight
    # Check for negative cycles
    for u, v, weight in edges:
        if distances[u] != float('inf') and distances[u] + weight < distances[v]:
            raise ValueError("Graph contains a negative cycle")
    return distances
```

Time complexity: O(V * E), which is slower than Dijkstra but handles negative weights.
When to use Bellman-Ford over Dijkstra:
- The graph has negative edge weights
- You need to detect negative cycles
- The problem asks for shortest path with at most K stops (modify the outer loop to run K times)
Shortest Path with at Most K Stops
```python
def shortest_path_k_stops(vertices, edges, start, target, k):
    distances = {v: float('inf') for v in vertices}
    distances[start] = 0
    for _ in range(k + 1):
        # Use a copy to ensure we only use paths from the previous iteration
        temp = distances.copy()
        for u, v, weight in edges:
            if distances[u] != float('inf') and distances[u] + weight < temp[v]:
                temp[v] = distances[u] + weight
        distances = temp
    return distances[target] if distances[target] != float('inf') else -1
```

Minimum Spanning Tree: Kruskal's Algorithm
A Minimum Spanning Tree (MST) is a subset of edges that connects all vertices with the minimum total edge weight, forming a tree (no cycles). Kruskal's algorithm builds the MST by sorting all edges by weight and adding them greedily, using Union-Find to avoid cycles.
Implementation
```python
def kruskal(vertices, edges):
    # edges: list of (weight, u, v)
    edges.sort()  # Sort by weight
    parent = {v: v for v in vertices}
    rank = {v: 0 for v in vertices}

    def find(x):
        if parent[x] != x:
            parent[x] = find(parent[x])  # Path compression
        return parent[x]

    def union(x, y):
        root_x, root_y = find(x), find(y)
        if root_x == root_y:
            return False  # Already connected, adding would create a cycle
        if rank[root_x] < rank[root_y]:
            root_x, root_y = root_y, root_x
        parent[root_y] = root_x
        if rank[root_x] == rank[root_y]:
            rank[root_x] += 1
        return True

    mst = []
    total_weight = 0
    for weight, u, v in edges:
        if union(u, v):
            mst.append((u, v, weight))
            total_weight += weight
            if len(mst) == len(vertices) - 1:
                break  # MST is complete
    return mst, total_weight
```

Time complexity: O(E log E) due to sorting.
Minimum Spanning Tree: Prim's Algorithm
Prim's algorithm builds the MST by starting from any vertex and repeatedly adding the cheapest edge that connects a vertex in the MST to one outside it.
Implementation
```python
import heapq

def prim(graph, start):
    # graph: {vertex: [(neighbor, weight), ...]}
    mst = []
    total_weight = 0
    visited = set([start])
    # Push all edges from start into the heap
    edges = [(weight, start, neighbor) for neighbor, weight in graph[start]]
    heapq.heapify(edges)
    while edges and len(visited) < len(graph):
        weight, u, v = heapq.heappop(edges)
        if v in visited:
            continue
        visited.add(v)
        mst.append((u, v, weight))
        total_weight += weight
        for neighbor, edge_weight in graph[v]:
            if neighbor not in visited:
                heapq.heappush(edges, (edge_weight, v, neighbor))
    return mst, total_weight
```

Time complexity: O(E log V) with a binary heap.
Kruskal vs Prim:
- Kruskal works well with edge lists and sparse graphs
- Prim works well with adjacency lists and dense graphs
- In interviews, Kruskal is often easier to implement because Union-Find is a clean, reusable structure
Cycle Detection
Detecting cycles is a fundamental graph operation that appears in many interview problems.
Cycle Detection in Undirected Graphs (DFS)
```python
def has_cycle_undirected(graph):
    visited = set()

    def dfs(vertex, parent):
        visited.add(vertex)
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                if dfs(neighbor, vertex):
                    return True
            elif neighbor != parent:
                return True  # Found a cycle
        return False

    for vertex in graph:
        if vertex not in visited:
            if dfs(vertex, None):
                return True
    return False
```

Cycle Detection in Directed Graphs (Three-Color DFS)
```python
def has_cycle_directed(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def dfs(vertex):
        color[vertex] = GRAY  # Currently being processed
        for neighbor in graph[vertex]:
            if color[neighbor] == GRAY:
                return True  # Back edge found, cycle exists
            if color[neighbor] == WHITE and dfs(neighbor):
                return True
        color[vertex] = BLACK  # Fully processed
        return False

    for vertex in graph:
        if color[vertex] == WHITE:
            if dfs(vertex):
                return True
    return False
```

Time complexity: O(V + E) for both approaches.
Union-Find (Disjoint Set Union)
Union-Find is a data structure that efficiently tracks a collection of disjoint sets. It supports two operations: finding the root of a set and merging two sets. With path compression and union by rank, both operations run in near-constant time.
Complete Implementation
```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n
        self.count = n  # Number of connected components

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # Path compression
        return self.parent[x]

    def union(self, x, y):
        root_x, root_y = self.find(x), self.find(y)
        if root_x == root_y:
            return False  # Already in the same set
        # Union by rank
        if self.rank[root_x] < self.rank[root_y]:
            root_x, root_y = root_y, root_x
        self.parent[root_y] = root_x
        if self.rank[root_x] == self.rank[root_y]:
            self.rank[root_x] += 1
        self.count -= 1
        return True

    def connected(self, x, y):
        return self.find(x) == self.find(y)

    def get_count(self):
        return self.count
```

Example: Number of Connected Components
```python
def count_components(n, edges):
    uf = UnionFind(n)
    for u, v in edges:
        uf.union(u, v)
    return uf.get_count()
```

Time complexity: O(alpha(n)) per operation, where alpha is the inverse Ackermann function (effectively O(1) for all practical purposes).
Common Union-Find problems:
- Number of connected components
- Detecting cycles in undirected graphs
- Kruskal's MST algorithm
- Accounts merge
- Redundant connection
- Number of provinces
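As one concrete example, Redundant Connection asks for the edge whose addition first creates a cycle, and a union call returning False is exactly that signal. A minimal sketch with an inline parent array (0-indexed nodes; iterative path halving instead of the recursive compression above):

```python
def find_redundant_connection(n, edges):
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x

    for u, v in edges:
        root_u, root_v = find(u), find(v)
        if root_u == root_v:
            return (u, v)  # u and v were already connected: this edge closes a cycle
        parent[root_u] = root_v
    return None  # no cycle: the edge list already forms a forest
```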
Common Interview Patterns
Understanding these recurring patterns will help you recognize which algorithm to apply.
Pattern 1: Shortest Path in Unweighted Graph
Use BFS. If the graph is a grid, treat each cell as a node with edges to its four (or eight) neighbors.
Example problems: Word Ladder, Shortest Bridge, Open the Lock.
Pattern 2: Shortest Path in Weighted Graph
Use Dijkstra if all weights are non-negative. Use Bellman-Ford if there are negative weights or if you need to detect negative cycles.
Example problems: Network Delay Time, Cheapest Flights Within K Stops, Path with Maximum Probability.
Pattern 3: Dependency Ordering
Use topological sort. Check for cycles first (if a cycle exists, no valid ordering is possible).
Example problems: Course Schedule, Course Schedule II, Alien Dictionary.
Pattern 4: Connected Components
Use BFS, DFS, or Union-Find to identify connected components.
Example problems: Number of Islands, Number of Provinces, Accounts Merge.
Pattern 5: Cycle Detection
Use DFS with three-color marking for directed graphs. Use DFS with parent tracking or Union-Find for undirected graphs.
Example problems: Course Schedule (cycle in prerequisites), Redundant Connection, Graph Valid Tree.
Pattern 6: Minimum Cost to Connect
Use Kruskal or Prim for MST problems.
Example problems: Min Cost to Connect All Points, Optimize Water Distribution in a Village.
Pattern 7: Grid as Graph
Treat the grid as an implicit graph where each cell is a node and edges connect adjacent cells. Apply BFS or DFS depending on the problem.
Example problems: Number of Islands, Rotting Oranges, Flood Fill, Surrounded Regions, Pacific Atlantic Water Flow.
Pattern 8: Bipartite Check
Use BFS or DFS with two-coloring to check if a graph is bipartite.
```python
from collections import deque

def is_bipartite(graph):
    color = {}

    def bfs(start):
        queue = deque([start])
        color[start] = 0
        while queue:
            vertex = queue.popleft()
            for neighbor in graph[vertex]:
                if neighbor not in color:
                    color[neighbor] = 1 - color[vertex]
                    queue.append(neighbor)
                elif color[neighbor] == color[vertex]:
                    return False
        return True

    for vertex in graph:
        if vertex not in color:
            if not bfs(vertex):
                return False
    return True
```

Example problems: Is Graph Bipartite?, Possible Bipartition.
Practice Strategy
Graph problems can feel overwhelming because of the number of algorithms involved. Here is a structured approach to mastering them.
Week 1: Foundations
- Implement BFS and DFS from scratch without looking at references
- Solve five grid-based problems using both BFS and DFS
- Practice building adjacency lists from different input formats (edge list, adjacency matrix, grid)
Week 2: Topological Sort and Cycle Detection
- Implement both Kahn's algorithm and DFS-based topological sort
- Solve Course Schedule I and II
- Implement cycle detection for both directed and undirected graphs
Week 3: Shortest Path
- Implement Dijkstra's algorithm with a min-heap
- Implement Bellman-Ford
- Solve Network Delay Time, Cheapest Flights Within K Stops, and Path with Minimum Effort
Week 4: Union-Find and MST
- Implement Union-Find with path compression and union by rank
- Solve Number of Connected Components, Redundant Connection, and Accounts Merge
- Implement Kruskal's algorithm using your Union-Find implementation
Use Phantom Code to practice these problems in a realistic interview environment. The AI-powered feedback will help you identify gaps in your approach and optimize your solutions in real time.
Complexity Reference Table
| Algorithm | Time Complexity | Space Complexity | Key Constraint |
| ---------------- | ------------------ | ---------------- | ------------------------ |
| BFS | O(V + E) | O(V) | Unweighted shortest path |
| DFS | O(V + E) | O(V) | Path exploration, cycles |
| Topological Sort | O(V + E) | O(V) | DAG only |
| Dijkstra | O((V + E) log V) | O(V) | No negative weights |
| Bellman-Ford | O(V * E) | O(V) | Handles negative weights |
| Kruskal | O(E log E) | O(V) | MST, uses Union-Find |
| Prim | O(E log V) | O(V) | MST, uses min-heap |
| Union-Find | O(alpha(n)) per op | O(V) | Near-constant time |
Conclusion
Graph algorithms are a core part of the coding interview toolkit. The key to mastering them is not memorizing every algorithm but understanding when to apply each one. When you see a problem, ask yourself these questions:
- Is this a graph problem? Look for relationships between entities, grids, networks, or dependencies.
- What type of graph is it? Directed or undirected, weighted or unweighted, cyclic or acyclic.
- What am I looking for? Shortest path, connected components, ordering, cycles, or minimum cost.
The answers to these three questions will point you toward the right algorithm every time.
Practice implementing each algorithm from scratch until you can write them without reference. Then focus on recognizing patterns in problems so you can quickly map a new problem to a known algorithm. Phantom Code provides an excellent platform for practicing graph problems with AI-guided feedback, helping you build the pattern recognition skills that make the difference in a real interview.
Graphs are challenging, but they are also deeply rewarding to master. Every graph problem you solve strengthens your ability to model and reason about complex systems, which is exactly what top tech companies are looking for.