Graph problems are among the most frequently tested and most feared topics in coding interviews. They appear at every major tech company, and for good reason: graphs model real-world systems like social networks, maps, dependency chains, and network routing. This guide gives you the theory, the code, and the patterns you need to solve graph problems confidently in any interview.

Before diving into algorithms, make sure you have a solid understanding of graph terminology.
A graph consists of a set of vertices (also called nodes) and a set of edges that connect pairs of vertices. Graphs can be:

- **Directed or undirected**: edges either point one way (u → v) or connect both ways.
- **Weighted or unweighted**: edges may carry a cost such as distance or time.
- **Cyclic or acyclic**: the graph may or may not contain cycles.
- **Connected or disconnected**: every vertex may or may not be reachable from every other vertex.

Understanding these properties helps you select the right algorithm for each problem.
There are two primary ways to represent a graph in code. Choosing the right representation affects both the clarity and performance of your solution.
The adjacency list represents each vertex as a key in a dictionary (or array), with the value being a list of its neighbors. This is the most common representation in interviews.
```python
# Unweighted directed graph
graph = {
    'A': ['B', 'C'],
    'B': ['D'],
    'C': ['D', 'E'],
    'D': ['E'],
    'E': []
}

# Weighted directed graph
weighted_graph = {
    'A': [('B', 4), ('C', 2)],
    'B': [('D', 3)],
    'C': [('D', 1), ('E', 5)],
    'D': [('E', 2)],
    'E': []
}

# Building from an edge list (common interview input format)
def build_adjacency_list(edges, directed=True):
    graph = {}
    for u, v in edges:
        if u not in graph:
            graph[u] = []
        if v not in graph:
            graph[v] = []
        graph[u].append(v)
        if not directed:
            graph[v].append(u)
    return graph
```

Space complexity: O(V + E). Checking whether a specific edge exists takes O(degree of vertex).
The adjacency matrix uses a 2D array where matrix[i][j] indicates whether an edge exists from vertex i to vertex j.
```python
# Unweighted graph with vertices 0-3
matrix = [
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0]
]

# Checking if an edge exists: O(1)
has_edge = matrix[0][1] == 1  # True: edge from 0 to 1
```

Space complexity: O(V^2). Edge lookup is O(1), but iterating over all neighbors of a vertex takes O(V).
Interview tip: Use an adjacency list unless the problem specifically benefits from O(1) edge lookups (rare in interviews). Most interviewers expect adjacency lists.
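If a problem does call for frequent edge-existence checks, one middle ground (a sketch of my own, not from the guide above) is to store neighbors in sets instead of lists, keeping O(V + E) space while gaining O(1) average-case lookups. The `build_adjacency_sets` helper below is hypothetical:

```python
# Hypothetical helper: neighbor sets give O(1) average edge lookup
# while keeping O(V + E) space. Assumes an unweighted graph.
def build_adjacency_sets(edges, directed=True):
    graph = {}
    for u, v in edges:
        graph.setdefault(u, set()).add(v)
        graph.setdefault(v, set())  # ensure v appears even with no out-edges
        if not directed:
            graph[v].add(u)
    return graph

g = build_adjacency_sets([('A', 'B'), ('A', 'C'), ('B', 'C')])
print('B' in g['A'])  # True
print('A' in g['B'])  # False (directed)
```

The trade-off is losing neighbor ordering, which rarely matters in interviews.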
BFS explores a graph level by level, visiting all neighbors of the current vertex before moving to the next level. It uses a queue and is the go-to algorithm for finding the shortest path in an unweighted graph.
```python
from collections import deque

def bfs(graph, start):
    visited = set()
    queue = deque([start])
    visited.add(start)
    result = []
    while queue:
        vertex = queue.popleft()
        result.append(vertex)
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return result

def bfs_shortest_path(graph, start, target):
    if start == target:
        return [start]
    visited = set([start])
    queue = deque([(start, [start])])
    while queue:
        vertex, path = queue.popleft()
        for neighbor in graph[vertex]:
            if neighbor == target:
                return path + [neighbor]
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, path + [neighbor]))
    return []  # No path found
```

Some problems require starting BFS from multiple sources simultaneously. This is common in problems like "rotting oranges" or "walls and gates."
```python
def multi_source_bfs(grid, sources):
    rows, cols = len(grid), len(grid[0])
    queue = deque()
    visited = set()
    for r, c in sources:
        queue.append((r, c, 0))  # row, col, distance
        visited.add((r, c))
    directions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    while queue:
        r, c, dist = queue.popleft()
        for dr, dc in directions:
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in visited
                    and grid[nr][nc] != -1):  # -1 represents walls
                visited.add((nr, nc))
                grid[nr][nc] = dist + 1
                queue.append((nr, nc, dist + 1))
    return grid
```

Time complexity: O(V + E) for all BFS variants.
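As a quick sanity check on the multi-source template, here is a standalone run on a made-up 3x3 grid with two sources and a wall; each free cell ends up holding its distance to the nearest source:

```python
from collections import deque

def multi_source_bfs(grid, sources):
    # Same template as above, repeated here so the example runs standalone
    rows, cols = len(grid), len(grid[0])
    queue = deque()
    visited = set()
    for r, c in sources:
        queue.append((r, c, 0))  # row, col, distance
        visited.add((r, c))
    directions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    while queue:
        r, c, dist = queue.popleft()
        for dr, dc in directions:
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in visited
                    and grid[nr][nc] != -1):  # -1 represents walls
                visited.add((nr, nc))
                grid[nr][nc] = dist + 1
                queue.append((nr, nc, dist + 1))
    return grid

# Two sources in opposite corners, a wall (-1) in the middle
grid = [
    [0, 0, 0],
    [0, -1, 0],
    [0, 0, 0],
]
multi_source_bfs(grid, [(0, 0), (2, 2)])
print(grid)  # [[0, 1, 2], [1, -1, 1], [2, 1, 0]]
```

Note that both source cells keep distance 0 and the wall is never overwritten.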
When to use BFS:

- Shortest path in an unweighted graph
- Level-by-level (distance-by-layer) traversal
- Spreading processes from one or many sources (multi-source BFS)
DFS explores as far as possible along each branch before backtracking. It uses a stack (or recursion, which implicitly uses the call stack). DFS is the foundation for many graph algorithms including cycle detection, topological sort, and finding connected components.
```python
def dfs_recursive(graph, vertex, visited=None):
    if visited is None:
        visited = set()
    visited.add(vertex)
    result = [vertex]
    for neighbor in graph[vertex]:
        if neighbor not in visited:
            result.extend(dfs_recursive(graph, neighbor, visited))
    return result

def dfs_iterative(graph, start):
    visited = set()
    stack = [start]
    result = []
    while stack:
        vertex = stack.pop()
        if vertex not in visited:
            visited.add(vertex)
            result.append(vertex)
            # Add neighbors in reverse order for consistent traversal
            for neighbor in reversed(graph[vertex]):
                if neighbor not in visited:
                    stack.append(neighbor)
    return result
```

Many interview problems represent the graph as a 2D grid. Here is a template for grid-based DFS:
```python
def dfs_grid(grid, row, col, visited):
    rows, cols = len(grid), len(grid[0])
    if (row < 0 or row >= rows or col < 0 or col >= cols
            or (row, col) in visited or grid[row][col] == 0):
        return 0
    visited.add((row, col))
    count = 1
    for dr, dc in [(0, 1), (0, -1), (1, 0), (-1, 0)]:
        count += dfs_grid(grid, row + dr, col + dc, visited)
    return count

# Example: Count islands (number of connected components of 1s)
def count_islands(grid):
    visited = set()
    islands = 0
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if grid[r][c] == 1 and (r, c) not in visited:
                dfs_grid(grid, r, c, visited)
                islands += 1
    return islands
```

Time complexity: O(V + E) for adjacency list, O(V^2) for adjacency matrix.
When to use DFS:

- Exploring all paths, or exhaustive search with backtracking
- Detecting cycles
- Topological sorting
- Finding connected components
Topological sort produces a linear ordering of vertices in a DAG such that for every directed edge (u, v), vertex u comes before v. This is essential for dependency resolution problems.
```python
from collections import deque

def topological_sort_kahn(graph, num_vertices):
    # graph: {vertex: [neighbors]}; every vertex must appear as a key
    # Calculate in-degrees
    in_degree = {v: 0 for v in graph}
    for vertex in graph:
        for neighbor in graph[vertex]:
            in_degree[neighbor] = in_degree.get(neighbor, 0) + 1
    # Start with vertices that have no incoming edges
    queue = deque([v for v in in_degree if in_degree[v] == 0])
    result = []
    while queue:
        vertex = queue.popleft()
        result.append(vertex)
        for neighbor in graph[vertex]:
            in_degree[neighbor] -= 1
            if in_degree[neighbor] == 0:
                queue.append(neighbor)
    # If result contains all vertices, the graph has a valid topological order
    if len(result) == num_vertices:
        return result
    else:
        return []  # Graph has a cycle

def topological_sort_dfs(graph):
    visited = set()
    stack = []
    def dfs(vertex):
        visited.add(vertex)
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                dfs(neighbor)
        stack.append(vertex)
    for vertex in graph:
        if vertex not in visited:
            dfs(vertex)
    return stack[::-1]  # Reverse the post-order
```

Common interview problems using topological sort:

- Course Schedule (can all courses be finished?)
- Course Schedule II (return a valid course order)
- Alien Dictionary (derive letter order from sorted words)
Time complexity: O(V + E).
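To make the dependency pattern concrete, here is a sketch of Course Schedule solved with Kahn's algorithm; the course lists below are illustrative inputs of my own:

```python
from collections import deque

def can_finish(num_courses, prerequisites):
    # Build adjacency list with an edge prereq -> course
    graph = {i: [] for i in range(num_courses)}
    in_degree = [0] * num_courses
    for course, prereq in prerequisites:
        graph[prereq].append(course)
        in_degree[course] += 1
    # Start with courses that have no prerequisites
    queue = deque([i for i in range(num_courses) if in_degree[i] == 0])
    taken = 0
    while queue:
        course = queue.popleft()
        taken += 1
        for nxt in graph[course]:
            in_degree[nxt] -= 1
            if in_degree[nxt] == 0:
                queue.append(nxt)
    # Every course is taken iff the prerequisite graph has no cycle
    return taken == num_courses

print(can_finish(4, [[1, 0], [2, 1], [3, 2]]))  # True: 0 -> 1 -> 2 -> 3
print(can_finish(2, [[0, 1], [1, 0]]))          # False: cyclic prerequisites
```

Course Schedule II is the same loop, except you return the order in which courses are dequeued instead of a boolean.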
Dijkstra's algorithm finds the shortest path from a source vertex to all other vertices in a weighted graph with non-negative edge weights. It is one of the most important algorithms for interviews.
```python
import heapq

def dijkstra(graph, start):
    # graph: {vertex: [(neighbor, weight), ...]}
    distances = {vertex: float('inf') for vertex in graph}
    distances[start] = 0
    previous = {vertex: None for vertex in graph}
    min_heap = [(0, start)]  # (distance, vertex)
    while min_heap:
        current_dist, current_vertex = heapq.heappop(min_heap)
        # Skip if we have already found a shorter path
        if current_dist > distances[current_vertex]:
            continue
        for neighbor, weight in graph[current_vertex]:
            distance = current_dist + weight
            if distance < distances[neighbor]:
                distances[neighbor] = distance
                previous[neighbor] = current_vertex
                heapq.heappush(min_heap, (distance, neighbor))
    return distances, previous

def reconstruct_path(previous, start, target):
    path = []
    current = target
    while current is not None:
        path.append(current)
        current = previous[current]
    path.reverse()
    return path if path[0] == start else []
```

```python
graph = {
    'A': [('B', 4), ('C', 2)],
    'B': [('D', 3), ('C', 1)],
    'C': [('D', 1), ('E', 5)],
    'D': [('E', 2)],
    'E': []
}
distances, previous = dijkstra(graph, 'A')
# distances: {'A': 0, 'B': 4, 'C': 2, 'D': 3, 'E': 5}
path = reconstruct_path(previous, 'A', 'E')
# path: ['A', 'C', 'D', 'E']
```

Time complexity: O((V + E) log V) with a binary min-heap.
Key constraint: Dijkstra does not work with negative edge weights. Use Bellman-Ford instead.
Bellman-Ford finds the shortest path from a source vertex to all other vertices and works with negative edge weights. It can also detect negative cycles.
```python
def bellman_ford(vertices, edges, start):
    # edges: list of (u, v, weight)
    distances = {v: float('inf') for v in vertices}
    distances[start] = 0
    # Relax all edges V-1 times
    for _ in range(len(vertices) - 1):
        for u, v, weight in edges:
            if distances[u] != float('inf') and distances[u] + weight < distances[v]:
                distances[v] = distances[u] + weight
    # Check for negative cycles
    for u, v, weight in edges:
        if distances[u] != float('inf') and distances[u] + weight < distances[v]:
            raise ValueError("Graph contains a negative cycle")
    return distances
```

Time complexity: O(V * E), which is slower than Dijkstra but handles negative weights.
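A quick sanity check, on a toy graph of my own, showing Bellman-Ford handling a negative edge: the direct edge A → B costs 4, but the detour A → C → B costs -1.

```python
def bellman_ford(vertices, edges, start):
    # Same algorithm as above, repeated so the example runs standalone
    distances = {v: float('inf') for v in vertices}
    distances[start] = 0
    for _ in range(len(vertices) - 1):
        for u, v, weight in edges:
            if distances[u] != float('inf') and distances[u] + weight < distances[v]:
                distances[v] = distances[u] + weight
    for u, v, weight in edges:
        if distances[u] != float('inf') and distances[u] + weight < distances[v]:
            raise ValueError("Graph contains a negative cycle")
    return distances

edges = [('A', 'B', 4), ('A', 'C', 2), ('C', 'B', -3)]
print(bellman_ford(['A', 'B', 'C'], edges, 'A'))  # {'A': 0, 'B': -1, 'C': 2}
```

The relaxation rounds correctly propagate the -3 edge, giving B a final distance of -1.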
When to use Bellman-Ford over Dijkstra:

- The graph contains negative edge weights
- You need to detect negative cycles
- The number of edges in a path is explicitly bounded

A well-known variant of this pattern is Cheapest Flights Within K Stops, which caps the relaxation at k + 1 rounds so every path uses at most k + 1 edges:

```python
def shortest_path_k_stops(vertices, edges, start, target, k):
    distances = {v: float('inf') for v in vertices}
    distances[start] = 0
    for _ in range(k + 1):
        # Use a copy to ensure we only use paths from the previous iteration
        temp = distances.copy()
        for u, v, weight in edges:
            if distances[u] != float('inf') and distances[u] + weight < temp[v]:
                temp[v] = distances[u] + weight
        distances = temp
    return distances[target] if distances[target] != float('inf') else -1
```

A Minimum Spanning Tree (MST) is a subset of edges that connects all vertices with the minimum total edge weight, forming a tree (no cycles). Kruskal's algorithm builds the MST by sorting all edges by weight and adding them greedily, using Union-Find to avoid cycles.
```python
def kruskal(vertices, edges):
    # edges: list of (weight, u, v)
    edges.sort()  # Sort by weight
    parent = {v: v for v in vertices}
    rank = {v: 0 for v in vertices}

    def find(x):
        if parent[x] != x:
            parent[x] = find(parent[x])  # Path compression
        return parent[x]

    def union(x, y):
        root_x, root_y = find(x), find(y)
        if root_x == root_y:
            return False  # Already connected, adding would create a cycle
        if rank[root_x] < rank[root_y]:
            root_x, root_y = root_y, root_x
        parent[root_y] = root_x
        if rank[root_x] == rank[root_y]:
            rank[root_x] += 1
        return True

    mst = []
    total_weight = 0
    for weight, u, v in edges:
        if union(u, v):
            mst.append((u, v, weight))
            total_weight += weight
            if len(mst) == len(vertices) - 1:
                break  # MST is complete
    return mst, total_weight
```

Time complexity: O(E log E) due to sorting.
Prim's algorithm builds the MST by starting from any vertex and repeatedly adding the cheapest edge that connects a vertex in the MST to one outside it.
```python
import heapq

def prim(graph, start):
    # graph: {vertex: [(neighbor, weight), ...]}
    mst = []
    total_weight = 0
    visited = set([start])
    # Push all edges from start into the heap
    edges = [(weight, start, neighbor) for neighbor, weight in graph[start]]
    heapq.heapify(edges)
    while edges and len(visited) < len(graph):
        weight, u, v = heapq.heappop(edges)
        if v in visited:
            continue
        visited.add(v)
        mst.append((u, v, weight))
        total_weight += weight
        for neighbor, edge_weight in graph[v]:
            if neighbor not in visited:
                heapq.heappush(edges, (edge_weight, v, neighbor))
    return mst, total_weight
```

Time complexity: O(E log V) with a binary heap.
Kruskal vs Prim:

- Kruskal operates on an edge list and tends to be simpler for sparse graphs; it relies on Union-Find.
- Prim operates on an adjacency list and performs well on dense graphs; it relies on a min-heap.
- Both produce an MST with the same total weight, though the trees may differ when edge weights tie.
Detecting cycles is a fundamental graph operation that appears in many interview problems.
```python
def has_cycle_undirected(graph):
    visited = set()

    def dfs(vertex, parent):
        visited.add(vertex)
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                if dfs(neighbor, vertex):
                    return True
            elif neighbor != parent:
                return True  # Found a cycle
        return False

    for vertex in graph:
        if vertex not in visited:
            if dfs(vertex, None):
                return True
    return False

def has_cycle_directed(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def dfs(vertex):
        color[vertex] = GRAY  # Currently being processed
        for neighbor in graph[vertex]:
            if color[neighbor] == GRAY:
                return True  # Back edge found, cycle exists
            if color[neighbor] == WHITE and dfs(neighbor):
                return True
        color[vertex] = BLACK  # Fully processed
        return False

    for vertex in graph:
        if color[vertex] == WHITE:
            if dfs(vertex):
                return True
    return False
```

Time complexity: O(V + E) for both approaches.
Union-Find is a data structure that efficiently tracks a collection of disjoint sets. It supports two operations: finding the root of a set and merging two sets. With path compression and union by rank, both operations run in near-constant time.
```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n
        self.count = n  # Number of connected components

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # Path compression
        return self.parent[x]

    def union(self, x, y):
        root_x, root_y = self.find(x), self.find(y)
        if root_x == root_y:
            return False  # Already in the same set
        # Union by rank
        if self.rank[root_x] < self.rank[root_y]:
            root_x, root_y = root_y, root_x
        self.parent[root_y] = root_x
        if self.rank[root_x] == self.rank[root_y]:
            self.rank[root_x] += 1
        self.count -= 1
        return True

    def connected(self, x, y):
        return self.find(x) == self.find(y)

    def get_count(self):
        return self.count

def count_components(n, edges):
    uf = UnionFind(n)
    for u, v in edges:
        uf.union(u, v)
    return uf.get_count()
```

Time complexity: O(alpha(n)) per operation, where alpha is the inverse Ackermann function (effectively O(1) for all practical purposes).
Common Union-Find problems:

- Number of Provinces
- Redundant Connection
- Accounts Merge
- Graph Valid Tree
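To illustrate the pattern, here is a sketch of Redundant Connection: process edges in order, and the first edge whose endpoints are already in the same set closes a cycle. The edge lists used below are illustrative inputs of my own:

```python
def find_redundant_connection(edges):
    # Union-Find over vertices 1..n (n = number of edges for this problem)
    n = len(edges)
    parent = list(range(n + 1))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        root_u, root_v = find(u), find(v)
        if root_u == root_v:
            return [u, v]  # this edge closes a cycle
        parent[root_u] = root_v
    return []

print(find_redundant_connection([[1, 2], [1, 3], [2, 3]]))  # [2, 3]
```

This uses iterative path halving instead of the recursive path compression shown above; either keeps `find` near-constant time.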
Understanding these recurring patterns will help you recognize which algorithm to apply.
Use BFS. If the graph is a grid, treat each cell as a node with edges to its four (or eight) neighbors.
Example problems: Word Ladder, Shortest Bridge, Open the Lock.
Use Dijkstra if all weights are non-negative. Use Bellman-Ford if there are negative weights or if you need to detect negative cycles.
Example problems: Network Delay Time, Cheapest Flights Within K Stops, Path with Maximum Probability.
Use topological sort. Check for cycles first (if a cycle exists, no valid ordering is possible).
Example problems: Course Schedule, Course Schedule II, Alien Dictionary.
Use BFS, DFS, or Union-Find to identify connected components.
Example problems: Number of Islands, Number of Provinces, Accounts Merge.
Use DFS with three-color marking for directed graphs. Use DFS with parent tracking or Union-Find for undirected graphs.
Example problems: Course Schedule (cycle in prerequisites), Redundant Connection, Graph Valid Tree.
Use Kruskal or Prim for MST problems.
Example problems: Min Cost to Connect All Points, Optimize Water Distribution in a Village.
Treat the grid as an implicit graph where each cell is a node and edges connect adjacent cells. Apply BFS or DFS depending on the problem.
Example problems: Number of Islands, Rotting Oranges, Flood Fill, Surrounded Regions, Pacific Atlantic Water Flow.
Use BFS or DFS with two-coloring to check if a graph is bipartite.
```python
from collections import deque

def is_bipartite(graph):
    color = {}

    def bfs(start):
        queue = deque([start])
        color[start] = 0
        while queue:
            vertex = queue.popleft()
            for neighbor in graph[vertex]:
                if neighbor not in color:
                    color[neighbor] = 1 - color[vertex]
                    queue.append(neighbor)
                elif color[neighbor] == color[vertex]:
                    return False
        return True

    for vertex in graph:
        if vertex not in color:
            if not bfs(vertex):
                return False
    return True
```

Example problems: Is Graph Bipartite?, Possible Bipartition.
Graph problems can feel overwhelming because of the number of algorithms involved. A structured approach helps: master BFS and DFS first, then layer on topological sort, shortest paths, MST, and Union-Find, solving a few representative problems for each.
Use Phantom Code to practice these problems in a realistic interview environment. The AI-powered feedback will help you identify gaps in your approach and optimize your solutions in real time.
| Algorithm        | Time Complexity    | Space Complexity | Key Constraint           |
| ---------------- | ------------------ | ---------------- | ------------------------ |
| BFS              | O(V + E)           | O(V)             | Unweighted shortest path |
| DFS              | O(V + E)           | O(V)             | Path exploration, cycles |
| Topological Sort | O(V + E)           | O(V)             | DAG only                 |
| Dijkstra         | O((V + E) log V)   | O(V)             | No negative weights      |
| Bellman-Ford     | O(V * E)           | O(V)             | Handles negative weights |
| Kruskal          | O(E log E)         | O(V)             | MST, uses Union-Find     |
| Prim             | O(E log V)         | O(V)             | MST, uses min-heap       |
| Union-Find       | O(alpha(n)) per op | O(V)             | Near-constant time       |
Graph algorithms are a core part of the coding interview toolkit. The key to mastering them is not memorizing every algorithm but understanding when to apply each one. When you see a problem, ask yourself these questions:

1. Is the graph weighted or unweighted?
2. Is it directed, and could it contain cycles?
3. What are you asked to produce: a shortest path, an ordering, connected components, or a spanning tree?
The answers to these three questions will point you toward the right algorithm every time.
Practice implementing each algorithm from scratch until you can write them without reference. Then focus on recognizing patterns in problems so you can quickly map a new problem to a known algorithm. Phantom Code provides an excellent platform for practicing graph problems with AI-guided feedback, helping you build the pattern recognition skills that make the difference in a real interview.
Graphs are challenging, but they are also deeply rewarding to master. Every graph problem you solve strengthens your ability to model and reason about complex systems, which is exactly what top tech companies are looking for.