Topic 15: Advanced Graphs
Key Intuition: Advanced graph problems cover lexicographically smallest topological orders, Eulerian paths, MSTs, weighted shortest paths, critical connectivity, and graph augmentations.
Canonical Question Types #
Lexicographic Topological Sorting #
- Find the smallest topological order, handling ties by lexicographic order
Problems #
Alien Dictionary (269) (neetcode link)
This is a modified topo sort. The key part is understanding how to construct the adjacency graph: compare adjacent word pairs and derive an ordering judgement from the first differing characters. The interesting part here is just step 2.
```python
from collections import defaultdict, deque
from typing import List

class Solution:
    def alienOrder(self, words: List[str]) -> str:
        # Step 1: Initialize data structures
        # Graph: character -> set of characters that come after it (adjacency list)
        graph = defaultdict(set)
        # Indegree count of each node (character); seeded because not all characters appear in edges
        indegree = {c: 0 for word in words for c in word}

        # Step 2: Build graph edges (and indegree counts) from adjacent pairs
        for i in range(len(words) - 1):
            w1, w2 = words[i], words[i + 1]
            # Invalid edge case: w1 is longer and w2 is a prefix of w1 -> invalid
            if len(w1) > len(w2) and w1.startswith(w2):
                return ""
            # Find the first differing character
            for c1, c2 in zip(w1, w2):
                if c1 != c2:
                    # Add edge c1 -> c2 if not already added
                    if c2 not in graph[c1]:
                        graph[c1].add(c2)
                        indegree[c2] += 1
                    break  # only the first differing character per pair is relevant

        # Step 3: Topological sort (Kahn's algorithm)
        # Start with nodes that have zero indegree
        queue = deque([c for c in indegree if indegree[c] == 0])
        output = []
        while queue:
            c = queue.popleft()
            output.append(c)
            for nei in graph[c]:
                indegree[nei] -= 1
                if indegree[nei] == 0:
                    queue.append(nei)

        # Step 4: If output length differs from the unique-char count, a cycle was detected
        if len(output) != len(indegree):
            return ""
        return "".join(output)
```

TODO Course Schedule III (630) — Advanced scheduling with optimization, extending topological and greedy concepts
Eulerian Path & Circuit #
- Reconstruct a path covering all edges exactly once. We need to analyse the question and realise that self-loops can be tolerated and that the objective is to determine an ordering of the edges we wish to traverse. That's why it's an Eulerian path/circuit.
Problems #
Reconstruct Itinerary (332)
What are we traversing? It's the edges, because tickets represent edges and we're interested in the utilisation of tickets. We also know from the question that loops between nodes exist. This aligns with Eulerian path problems, where cycles or loops are allowed and common, and we must cover every edge even if the path revisits vertices multiple times along the way. Also be careful with the tie-breaking rules (the lexical order).
```python
from collections import defaultdict
from typing import List

class Solution:
    def findItinerary(self, tickets: List[List[str]]) -> List[str]:
        graph = defaultdict(list)
        for src, dest in tickets:
            graph[src].append(dest)
        # sort to abide by the lexicographical ordering:
        for src in graph:
            # TRICK: reverse sort lets us directly pop() destinations in the order we wish
            graph[src].sort(reverse=True)
        itinerary = []

        def dfs(airport):
            # while we have outgoing edges, entertain them all:
            while graph[airport]:
                # explore depth first!
                dfs(graph[airport].pop())
            # post-order, we can add it in now
            itinerary.append(airport)

        dfs('JFK')
        return itinerary[::-1]
```
Minimum Spanning Tree (MST) #
- Build MST with Kruskal or Prim algorithms
Problems #
Min Cost to Connect All Points (1584) ;; DONE
it’s all Manhattan distances so that’s nice. We also know that it’s a complete graph, so it’s going to be dense and we can’t employ the usual heap-based measures. We have to do the MST finding lazily: we pick the best midpoint (u), then try to relax every unvisited node (v) against it.
```python
from typing import List

class Solution:
    def minCostConnectPoints(self, points: List[List[int]]) -> int:
        n = len(points)
        visited = [False] * n
        dist = [float('inf')] * n
        dist[0] = 0  # start from point 0
        total_cost = 0

        for _ in range(n):
            u = -1
            # Determine u: pick the unvisited node with the smallest distance as the midpoint
            for i in range(n):
                is_first_or_is_closer = (u == -1 or dist[i] < dist[u])
                if not visited[i] and is_first_or_is_closer:
                    u = i
            visited[u] = True
            total_cost += dist[u]
            # Update distances to ALL other UNVISITED points (for all v)
            for v in range(n):
                if not visited[v]:
                    cost = abs(points[u][0] - points[v][0]) + abs(points[u][1] - points[v][1])
                    if cost < dist[v]:
                        dist[v] = cost
        return total_cost
```
Weighted Shortest Paths #
- Use Dijkstra’s or Bellman-Ford for weighted graph shortest paths
Problems #
Network Delay Time (743) ;; DONE
This is just a classic Dijkstra (because the edge weights are non-negative), single source: we want the longest of the shortest paths from this single source to every other node. It applies to network-related problem domains as well.
Note the use of the 1-idx shimming and slicing here.
```python
from collections import defaultdict
from typing import List
import heapq

class Solution:
    def networkDelayTime(self, times: List[List[int]], n: int, k: int) -> int:
        graph = defaultdict(list)
        for u, v, w in times:
            graph[u].append((v, w))
        # we're going to shim the 0th idx to handle the 1-indexing, then later ignore it
        dist = [float('inf')] * (n + 1)
        dist[k] = 0
        heap = [(0, k)]  # (current dist, node); min-pq by distance
        while heap:
            curr_dist, node = heapq.heappop(heap)
            if curr_dist > dist[node]:
                continue
            for nei, w in graph[node]:
                # if we can relax (better alternative route):
                if curr_dist + w < dist[nei]:
                    dist[nei] = curr_dist + w
                    heapq.heappush(heap, (dist[nei], nei))
        max_dist = max(dist[1:])  # slice accounts for the 1-idx shimming
        return max_dist if max_dist != float('inf') else -1
```

Cheapest Flights Within K Stops (787) ;; DONE
This problem is one where we are just given edges and need to find the k-hop cheapest flight from a single source to a single destination. We could do Bellman-Ford with K + 1 relaxation rounds, or a modified BFS (that carries the current stop count instead of rejecting nodes on first visit); either works.
```python
## M1: BFS Approach (more performant)
from collections import defaultdict, deque
from typing import List

class Solution:
    def findCheapestPrice(self, n: int, flights: List[List[int]], src: int, dst: int, k: int) -> int:
        graph = defaultdict(list)
        # Build the adjacency list: u -> (v, cost)
        for u, v, cost in flights:
            graph[u].append((v, cost))

        # Distance array to keep the minimum cost to reach each node
        dist = [float('inf')] * n
        dist[src] = 0
        q = deque()
        q.append((0, src, 0))  # (stops, current node, current cost)

        while q:
            stops, node, curr_cost = q.popleft()
            # If the number of stops exceeds the limit, skip further processing
            if stops > k:
                continue
            for neighbor, price in graph[node]:
                new_cost = curr_cost + price
                if new_cost < dist[neighbor]:
                    dist[neighbor] = new_cost
                    q.append((stops + 1, neighbor, new_cost))
        return -1 if dist[dst] == float('inf') else dist[dst]


## M2: Bellman-Ford for k + 1 relaxations with the snapshotting method
class Solution:
    def findCheapestPrice(self, n: int, flights: List[List[int]], src: int, dst: int, k: int) -> int:
        # single src to single dest
        dist = [float('inf')] * n
        dist[src] = 0
        # we relax the edges k + 1 times:
        for _ in range(k + 1):
            tmp = dist.copy()
            for u, v, w in flights:
                can_relax = dist[u] != float('inf') and dist[u] + w < tmp[v]
                if can_relax:
                    tmp[v] = dist[u] + w
            dist = tmp
        if dist[dst] == float('inf'):
            return -1
        return dist[dst]
```

TODO Path With Minimum Effort (1631) — Weighted shortest path with state expansions similar to Swim in Rising Water
TODO Network Flow & Critical Connections #
- Determine max flow, min cut, critical edges, or articulation points affecting connectivity
Problems #
Graph Enumeration & Counting #
- Count specific graph structures: paths, spanning trees, subgraphs, islands
Problems #
- TODO Count Sub Islands (1905) — Counting subgraph embeddings
The idea is two flood-fill passes: label each island in grid1 with a distinct id, collect each island of grid2 as a list of cells, then count the grid2 islands whose cells all carry the same grid1 island label (i.e. every cell sits on land of one and the same grid1 island).
```python
from collections import deque
from typing import List

class Solution:
    def countSubIslands(self, grid1: List[List[int]], grid2: List[List[int]]) -> int:
        """Two flood-fill passes: label grid1's islands, then check each grid2 island."""
        m, n = len(grid1), len(grid1[0])
        dirs = [(0, 1), (0, -1), (1, 0), (-1, 0)]

        # Pass 1: label each grid1 island with a distinct id.
        # Land is 1, so labels start at 2 and double as a visited marker.
        next_label = 2
        for r in range(m):
            for c in range(n):
                if grid1[r][c] == 1:
                    q = deque([(r, c)])
                    grid1[r][c] = next_label
                    while q:
                        cr, cc = q.popleft()
                        for dr, dc in dirs:
                            nr, nc = cr + dr, cc + dc
                            if 0 <= nr < m and 0 <= nc < n and grid1[nr][nc] == 1:
                                grid1[nr][nc] = next_label
                                q.append((nr, nc))
                    next_label += 1

        # Pass 2: collect each grid2 island as a list of its cells
        islands_two = []
        visited = [[False] * n for _ in range(m)]
        for r in range(m):
            for c in range(n):
                if grid2[r][c] == 1 and not visited[r][c]:
                    visited[r][c] = True
                    q = deque([(r, c)])
                    islands_two.append([(r, c)])
                    while q:
                        cr, cc = q.popleft()
                        for dr, dc in dirs:
                            nr, nc = cr + dr, cc + dc
                            if 0 <= nr < m and 0 <= nc < n and grid2[nr][nc] == 1 and not visited[nr][nc]:
                                visited[nr][nc] = True
                                q.append((nr, nc))
                                islands_two[-1].append((nr, nc))

        # An island in grid2 is a sub-island iff all its cells sit within the
        # same island of grid1 (a single label >= 2; water cells are 0):
        num_subislands = 0
        for island in islands_two:
            labels = {grid1[cr][cc] for cr, cc in island}
            if len(labels) == 1 and labels.pop() >= 2:
                num_subislands += 1
        return num_subislands
```

- TODO Number of Distinct Islands (694) (paid)
Graph Augmentation and Supergraphs #
- Create or augment super graphs by adding vertices or edges to model constraints or extended problems
- Typical use includes adding virtual nodes, modelling state spaces, or building layered graphs for dynamic programming on graphs
Problems #
Word Ladder (127)
This is a shortest path problem that feels like an edit-distance kind, but it’s a graph traversal. We need to be able to use wildcards, so we create a lookup table from pattern to words, built by replacing each character in a word with `*` and mapping those patterns to possible words. Doing so reduces the adjacency connection lookup from \(O(N^{2})\) to \(O(L N)\), where L is the word length. Also take note that we only want to reach each node (pattern) once in the overall procedure, so we silence it after we reach it.
```python
from collections import defaultdict, deque
from typing import List

class Solution:
    def ladderLength(self, beginWord: str, endWord: str, wordList: List[str]) -> int:
        if endWord not in wordList:
            return 0
        L = len(beginWord)
        # better mapping: wildcard pattern -> candidate words
        pattern_to_words = defaultdict(list)
        for word in wordList:
            for i in range(L):
                pattern = word[:i] + "*" + word[i + 1:]
                pattern_to_words[pattern].append(word)
        # now we run BFS:
        queue = deque([(beginWord, 1)])  # (word, level)
        visited = {beginWord}
        while queue:
            word, level = queue.popleft()
            for i in range(L):
                pattern = word[:i] + "*" + word[i + 1:]
                for nei in pattern_to_words[pattern]:
                    if nei == endWord:
                        return level + 1
                    if nei not in visited:
                        visited.add(nei)
                        queue.append((nei, level + 1))
                # NB: prevents repeat lookups; we silence this pattern so that
                # it is expanded only once per search
                pattern_to_words[pattern] = []
        return 0
```

In some ways, this question involves a graph expansion / modification as well.
Swim in Rising Water (778) ;; done, modelling state+height as graph expansion
There are a few ways to do this. For one, we can view this as a modification of the classic Dijkstra approach where we prioritise the lowest elevations first (since we can teleport as we wish); it turns Dijkstra’s into a “min-max” path problem. Else we could see it in a Kruskal-like approach, where the max height between two adjacent cells on the grid is the edge weight, and we process edges as in MST building until the first and last cells are connected. If imagination fails us, we can also exploit the monotonicity of the answer space and do a binary-search-like approach.
```python
# MODIFIED DIJKSTRA APPROACH: prioritise the lowest elevations to handle first
import heapq
from typing import List

class Solution:
    def swimInWater(self, grid: List[List[int]]) -> int:
        # start from the top-left, end at the bottom-right, fastest hops possible
        # prioritise the smallest elevation seen so far
        ROWS, COLS = len(grid), len(grid[0])
        DIRS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
        visited = [[False] * COLS for _ in range(ROWS)]
        # GOTCHA: we can't assume the start has elevation 0; read it from the grid
        heap = [(grid[0][0], (0, 0))]  # (elevation, coordinate)
        visited[0][0] = True
        while heap:
            t, (r, c) = heapq.heappop(heap)
            # first reach of the target
            if r == ROWS - 1 and c == COLS - 1:
                return t
            for dr, dc in DIRS:
                row, col = r + dr, c + dc
                if 0 <= row < ROWS and 0 <= col < COLS and not visited[row][col]:
                    # visit it:
                    visited[row][col] = True
                    # NOTE: if we only reach this cell later in time (e.g. it was
                    # behind an island), we must respect the current water level,
                    # hence the max:
                    new_time = max(t, grid[row][col])
                    heapq.heappush(heap, (new_time, (row, col)))
```

This runs in \(O(n^{2} \log n)\) time, because each heap op is \(O(\log n)\) and each of the \(n^{2}\) cells is considered once. Space use is just \(O(n^{2})\).

```python
# M2: KRUSKAL-LIKE APPROACH
class UnionFind:
    def __init__(self, n):
        # Initialize parent list where each node is its own parent (representative)
        self.parent = list(range(n))

    def find(self, x):
        # Find the root parent (representative) of x with path compression;
        # path compression flattens the tree by pointing nodes directly at the root
        if x != self.parent[x]:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        # Union the sets containing x and y.
        # Returns False if x and y already belong to the same set
        px, py = self.find(x), self.find(y)
        if px == py:
            return False  # already connected
        self.parent[px] = py  # attach one tree under the other
        return True

class Solution:
    def swimInWater(self, grid):
        n = len(grid)
        if n == 1:  # handle the trivial return case
            return 0

        # Helper to convert 2D grid coordinates (r, c) into a 1D index for Union-Find
        def idx(r, c):
            return r * n + c

        edges = []  # edges as (weight, node1, node2)
        # Generate edges between all pairs of adjacent cells (right and down neighbours);
        # weight of an edge = max elevation of the two cells
        for r in range(n):
            for c in range(n):
                if r + 1 < n:
                    weight = max(grid[r][c], grid[r + 1][c])
                    edges.append((weight, idx(r, c), idx(r + 1, c)))
                if c + 1 < n:
                    weight = max(grid[r][c], grid[r][c + 1])
                    edges.append((weight, idx(r, c), idx(r, c + 1)))

        # Sort all edges by ascending weight (height)
        edges.sort()
        uf = UnionFind(n * n)  # initialize union-find for all nodes

        # Process edges in order of increasing weight
        for height, u, v in edges:
            uf.union(u, v)
            # After each union, check if start (top-left) and end (bottom-right) are connected;
            # the earliest height at which they join is the required minimal water level
            if uf.find(0) == uf.find(n * n - 1):
                return height
```

```python
# M3: BINARY SEARCH APPROACH
from collections import deque

class Solution:
    def swimInWater(self, grid):
        n = len(grid)

        def can_reach(t):
            """
            Check if it's possible to swim from (0,0) to (n-1,n-1) when the water
            level is t (i.e. you can only enter cells with elevation <= t).
            Uses BFS for reachability.
            """
            # If the starting cell is higher than t, we cannot start swimming yet
            if grid[0][0] > t:
                return False
            visited = [[False] * n for _ in range(n)]
            queue = deque([(0, 0)])
            visited[0][0] = True
            # BFS to explore reachable cells under water level t
            while queue:
                r, c = queue.popleft()
                # Check if we've reached the target cell
                if r == n - 1 and c == n - 1:
                    return True
                # Explore 4-directional neighbours
                for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                    nr, nc = r + dr, c + dc
                    # Check boundaries and whether we can move into the neighbour cell
                    if (0 <= nr < n and 0 <= nc < n and
                            not visited[nr][nc] and grid[nr][nc] <= t):
                        visited[nr][nc] = True
                        queue.append((nr, nc))
            # BFS finished without reaching the end
            return False

        # The minimum time to wait is at least the max of start and end elevations
        left, right = max(grid[0][0], grid[-1][-1]), n * n - 1
        # Binary search for the minimum water level t that allows swimming through
        while left < right:
            mid = left + (right - left) // 2
            if can_reach(mid):
                right = mid   # reachable: try lower water levels
            else:
                left = mid + 1  # not reachable: raise the water level
        return left
```

Shortest Path in a Grid with Obstacles Elimination (1293) — Graph augmentation using state expansions
Minimum Cost to Make at Least One Valid Path in a Grid (1368) — Edge cost modifications and augmentations
Minimum Cost Path with Edge Reversals (3650)
See essay plan for this
Revised Essay Plan: Minimum Cost Path with Edge Reversals
Problem Context: We are given a directed, weighted graph with nodes labeled from `0` to `n-1` and edges `(u, v, w)` representing a directed edge from `u` to `v` with cost `w`. Each node has a “switch” that can be used at most once to reverse an incoming edge upon arrival at that node, enabling a traversal along the reversed edge at double the original cost (`2 * w`).
Intuition and Approach
This problem extends shortest path computation (like Dijkstra’s algorithm) with the added complexity of edge reversals that can only be used once per node, and only on arrival.
Key Ideas
Directed Graph and Reversals:
- The graph is directed and weighted.
- Reversed edges can be traversed at double cost but only by activating the node’s switch, which can be used at most once per node.
Modelling Reversals as Extra Edges:
- We build two graphs:
- `graph[u]` for normal (forward) edges `(u → v)`.
- `rev_graph[v]` for incoming edges `(u → v)`, reversed as `(v → u)` with cost doubled (`2 * w`).
- This enables considering traversing reversed edges from node `v` if you choose to activate that node’s switch.
Dijkstra Without Explicit State Tracking:
- Instead of modeling per-node switch usage explicitly, we merge these edges and run a standard Dijkstra algorithm over the combined graph (normal + reversed edges).
- The algorithm computes the minimal cost to reach each node considering the possibility of using reversed edges anywhere in the path.
- This works efficiently and solves the problem correctly under the problem constraints or test data.
Tradeoff:
- This approach does not explicitly enforce the “only once per node” switch usage constraint.
- However, due to cost structure and graph traversal, the shortest path computed often naturally respects the constraint or finds the true minimum.
- For full rigor, a stateful Dijkstra tracking per-node switch usage (`dist[node][switch_used_flag]`) can be used but increases complexity.
Simplification Benefits:
- The simplified approach is easier to implement, faster in practice, and suffices for large input constraints.
- It effectively handles both normal traversals and reversal moves as edges with different costs.
Summary of Solution Steps
- Build `graph` and `rev_graph` from input edges.
- Initialize distance array with infinities.
- Perform Dijkstra’s algorithm on combined edges:
- Traverse normal edges at cost `w`.
- Traverse reversed incoming edges at cost `2 * w`.
- Return minimal cost to reach node `n-1` or `-1` if unreachable.
```python
import heapq
from collections import defaultdict
from typing import List

class Solution:
    def minCost(self, n: int, edges: List[List[int]]) -> int:
        graph = defaultdict(list)
        rev_graph = defaultdict(list)  # just need to have this extension
        for u, v, w in edges:
            graph[u].append((v, w))
            rev_graph[v].append((u, w))  # store incoming edges

        dist = [float('inf')] * n
        dist[0] = 0
        heap = [(0, 0)]  # (cost, node)
        while heap:
            cost, u = heapq.heappop(heap)
            if u == n - 1:
                return cost
            if cost > dist[u]:  # stale entry, ignore
                continue
            # Normal outgoing edges
            for v, w in graph[u]:
                ncost = cost + w
                if ncost < dist[v]:
                    dist[v] = ncost
                    heapq.heappush(heap, (ncost, v))
            # Reversed incoming edges (switch at u), at double cost
            for v, w in rev_graph[u]:
                ncost = cost + 2 * w
                if ncost < dist[v]:
                    dist[v] = ncost
                    heapq.heappush(heap, (ncost, v))
        return -1
```
SOE #
- Incorrect lex order tie handling
- Lack of path compression in union-find
- PQ misuse in shortest path calculations
- Comprehension errors:
- e.g. mistaking “longest of the single-source shortest paths to every destination” for MST finding (as in “Network Delay Time (743)”).
Bellman Ford #
Here’s some gotchas related to bellman ford.
# of iterations associated with the number of edges #
When we run Bellman-Ford, we are relaxing edges. At the end of the \(k^{th}\) iteration, every shortest path that uses at most \(k\) edges has been computed correctly.
TRICK/GOTCHA: Use the Snapshotting Trick (Buffer/Copy the Distance Array) During Relaxation #
When relaxing in Bellman-Ford for a bounded number of edges (or stops), always use a separate temporary copy for updates within each iteration.
This avoids the “parallel edge confounding” where an update in the current round might incorrectly use a value that was just relaxed, effectively allowing more than the intended number of edges (stops) in a path. This only matters when we’re concerned about being strict about the number of relaxations that exist within a particular round of relaxations.
Always base each round’s relaxations only on the previous round’s distances.
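A minimal sketch of the trick, as a generic helper (the name `bounded_bellman_ford` is illustrative, not tied to a specific problem):

```python
def bounded_bellman_ford(n, edges, src, max_edges):
    """Shortest paths from src using at most `max_edges` edges per path.

    The tmp copy ("snapshot") ensures each round's relaxations only read the
    previous round's distances, keeping the edge bound strict.
    """
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    for _ in range(max_edges):
        tmp = dist.copy()  # snapshot: relax against tmp, read from dist
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < tmp[v]:
                tmp[v] = dist[u] + w
        dist = tmp
    return dist
```

With edges 0 → 1 → 2 (weight 1 each) and a direct edge 0 → 2 of weight 5, one round can only afford the direct edge, while two rounds find the cheaper two-hop path.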
Decision Flow #
- Dependency resolution with lex constraints? → Topo sort + priority structure
- Edge covering in graphs? → Eulerian path
- Weighted connectivity? → MST and DSU
- Weighted shortest path? → Dijkstra / Bellman-Ford
Summaries, Style, Tricks and Boilerplate #
Boilerplate #
Topological Sorting Algorithms #
- Classic DFS-Based Topological Sort (Directed Graphs)
- For every unvisited node, perform a DFS.
- On exit (i.e., after exploring all descendants), push the node to a stack or prepend it to an output list.
- The resulting list (reversed) is the topological order.
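The exit-time idea above can be sketched as follows (assuming the input is a DAG; the cycle check via colouring is omitted for brevity):

```python
from collections import defaultdict
from typing import List

def dfs_topological_sort(num_nodes: int, edges: List[List[int]]) -> List[int]:
    # Post-order DFS: a node is appended only after all its descendants,
    # so reversing the finish order yields a topological order.
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    visited = [False] * num_nodes
    order = []

    def dfs(u):
        visited[u] = True
        for v in adj[u]:
            if not visited[v]:
                dfs(v)
        order.append(u)  # push on exit

    for u in range(num_nodes):
        if not visited[u]:
            dfs(u)
    return order[::-1]
```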
- Kahn’s Algorithm (BFS with In-degree)
- Calculate the in-degree for each node.
- Start with all nodes of in-degree 0 (no prerequisites).
- While the queue is not empty:
- Pop a node, add it to the topo order.
- For each outgoing edge to neighbor:
- Decrement neighbor’s in-degree.
- If in-degree becomes 0, enqueue neighbor.
- If you cannot include all nodes, the graph contains a cycle.
```python
from collections import defaultdict, deque
from typing import List

def kahn_topological_sort(num_nodes: int, edges: List[List[int]]) -> List[int]:
    """
    Generic Kahn's Algorithm for Topological Sorting.

    Args:
        num_nodes: number of vertices (0 to num_nodes-1)
        edges: list of [u, v] meaning there is a directed edge u -> v

    Returns:
        topo_order: a list of vertices in topologically sorted order.
                    If there is a cycle, returns an empty list.
    """
    # Step 1: Build adjacency list and in-degree array
    adj = defaultdict(list)
    in_degree = [0] * num_nodes
    for u, v in edges:
        adj[u].append(v)
        in_degree[v] += 1

    # Step 2: Initialize queue with nodes having in-degree 0
    queue = deque([i for i in range(num_nodes) if in_degree[i] == 0])
    topo_order = []

    # Step 3: Process nodes with in-degree 0
    while queue:
        node = queue.popleft()
        topo_order.append(node)
        # Reduce in-degree for all neighbours
        for neighbor in adj[node]:
            in_degree[neighbor] -= 1
            if in_degree[neighbor] == 0:
                queue.append(neighbor)

    # Step 4: Check if a topological sort was possible (no cycle)
    if len(topo_order) == num_nodes:
        return topo_order  # valid topological order
    else:
        return []  # cycle detected: no valid topological ordering
- Modified BFS with Priority Queue
- Variant of Kahn’s algorithm.
- Instead of an ordinary queue, use a priority queue (min-heap or max-heap) to always select the lex smallest/largest in-degree 0 node.
- Useful when a specific order is required (e.g., lexicographically smallest topological order).
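A sketch of this variant: Kahn's algorithm with a min-heap in place of the plain queue, so ties among in-degree-0 nodes break towards the smallest label first (the function name is illustrative):

```python
import heapq
from collections import defaultdict
from typing import List

def lex_smallest_topo_order(num_nodes: int, edges: List[List[int]]) -> List[int]:
    # Same as Kahn's, but the frontier is a min-heap, so we always pop
    # the lexicographically smallest available node.
    adj = defaultdict(list)
    in_degree = [0] * num_nodes
    for u, v in edges:
        adj[u].append(v)
        in_degree[v] += 1
    heap = [i for i in range(num_nodes) if in_degree[i] == 0]
    heapq.heapify(heap)
    order = []
    while heap:
        node = heapq.heappop(heap)
        order.append(node)
        for nei in adj[node]:
            in_degree[nei] -= 1
            if in_degree[nei] == 0:
                heapq.heappush(heap, nei)
    return order if len(order) == num_nodes else []  # [] signals a cycle
```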
- Enumerating All Topological Sorts (Backtracking)
- Recursively:
- For all available in-degree 0 nodes at the current step:
- Add node to partial order, decrease in-degrees accordingly.
- Recurse.
- Backtrack.
- For all available in-degree 0 nodes at the current step:
- Generates all possible topological sorts.
- Algorithmic complexity is high (factorial in the number of nodes for highly connected DAGs).
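A sketch of the backtracking enumeration (exponential in general, fine for small DAGs; the function name is illustrative):

```python
from collections import defaultdict
from typing import List

def all_topological_sorts(num_nodes: int, edges: List[List[int]]) -> List[List[int]]:
    # Backtracking: at each step, try every currently available in-degree-0
    # node, recurse, then undo the choice.
    adj = defaultdict(list)
    in_degree = [0] * num_nodes
    for u, v in edges:
        adj[u].append(v)
        in_degree[v] += 1
    results, partial, used = [], [], [False] * num_nodes

    def backtrack():
        if len(partial) == num_nodes:
            results.append(partial.copy())
            return
        for u in range(num_nodes):
            if not used[u] and in_degree[u] == 0:
                # choose u
                used[u] = True
                partial.append(u)
                for v in adj[u]:
                    in_degree[v] -= 1
                backtrack()
                # undo the choice
                for v in adj[u]:
                    in_degree[v] += 1
                partial.pop()
                used[u] = False

    backtrack()
    return results
```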
- Matrix Methods (Academic)
- Maintain adjacency matrix.
- Iteratively remove nodes/columns with all zero entries (no incoming edges).
- The removal order gives a possible topological order.
- Level-based (Layered) Traversal
- Remove all in-degree 0 nodes as a “layer.”
- Repeat on the residual graph.
- Each layer can be processed in parallel.
- Produces a series of layers, not necessarily a linear topo sort (but can be linearized if needed).
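A sketch of the layered peeling (the function name `topo_layers` is just illustrative):

```python
from collections import defaultdict
from typing import List

def topo_layers(num_nodes: int, edges: List[List[int]]) -> List[List[int]]:
    # Repeatedly strip the current set of in-degree-0 nodes as one "layer";
    # concatenating the layers gives one valid linearisation.
    adj = defaultdict(list)
    in_degree = [0] * num_nodes
    for u, v in edges:
        adj[u].append(v)
        in_degree[v] += 1
    layer = [u for u in range(num_nodes) if in_degree[u] == 0]
    layers = []
    while layer:
        layers.append(layer)
        nxt = []
        for u in layer:
            for v in adj[u]:
                in_degree[v] -= 1
                if in_degree[v] == 0:
                    nxt.append(v)
        layer = nxt
    return layers
```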
- Summary Table: Topological Sort Algorithms
| Algorithm | Can Detect Cycles | Produces Topo Order | Notes |
|---|---|---|---|
| DFS with Rec Stack/Coloring | Yes | Yes | Canonical, easy to implement |
| Kahn’s Algorithm (BFS In-degree) | Yes | Yes | Canonical, popular in interviews |
| Modified Kahn’s with Priority Queue | Yes | Yes | For lex order or weighted DAGs |
| Enumerating All Topo Sorts | - | Yes (all orders) | Higher complexity, not often practical |
| Matrix Method | Yes | Sometimes | Theoretical/academic cases |
| Level-based Traversal | - | Layers | Good for parallel processing |
Pathfinding Algos #
| Graph Type / Case | Edge Type | Cyclic? | Best Algorithm | Complexity | Use-case |
|---|---|---|---|---|---|
| Unweighted | 0/1 (all same) | Any | BFS | O(V+E) | Shortest path length; trees, simple graphs |
| Weighted, non-neg | >=0 | Any | Dijkstra | O(E log V) | Road networks, time/cost with only positive weights |
| Weighted, negatives | Negatives allowed | No negative cycles | Bellman-Ford | O(VE) | Currency exchange, graphs with negative edges |
| DAG (acyclic, directed) | Any | Acyclic | Topo sort + relax | O(V+E) | Build sequence, job scheduling, dependency order |
| All-pairs | Any | Any (no negative cycles for path) | Floyd-Warshall | O(V^3) | Dense graphs, small n |
| Longest path in DAG | Any | Acyclic | Negate + DAG algo | O(V+E) | PERT/CPM, scheduling |
| Minimum Spanning Tree | Any (undirected) | Any (no neg cycles) | Prim’s/Kruskal’s | O(E log V) | Network design, not a shortest-path problem |
Bellman-Ford Algo
- can’t handle negative cycles, alright if negative edges exist.
```python
def bellman_ford(n, edges, src):
    """
    Bellman-Ford: works with negative edges, but no negative cycles.
    Returns (has_no_negative_cycle, distances, parents).
    """
    dist = [float('inf')] * n
    parent = [None] * n
    dist[src] = 0
    for _ in range(n - 1):
        updated = False  # allows an early exit once nothing updates any more
        for u, v, w in edges:
            if dist[u] != float('inf') and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                parent[v] = u
                updated = True
        if not updated:
            break
    # Cycle detection: if we can still relax once more (nth iteration),
    # then there's a negative cycle.
    for u, v, w in edges:
        if dist[u] != float('inf') and dist[u] + w < dist[v]:
            return (False, [], [])
    return (True, dist, parent)
```
Dijkstra’s Algorithm
Applies to general graphs with non-negative edges.
Relaxes edges in the “right order”; exploits the greedy property related to the triangle inequality.
```python
import heapq
from collections import defaultdict

def dijkstra(n, edges, src):
    """
    Dijkstra: shortest paths from src in a weighted graph with non-negative weights.
    Returns dist (shortest distances) and parent (for path reconstruction).
    """
    graph = defaultdict(list)
    for u, v, w in edges:
        graph[u].append((v, w))

    dist = [float('inf')] * n
    dist[src] = 0
    parent = [None] * n  # for parent path construction
    heap = [(0, src)]

    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # already found a shorter path
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                parent[v] = u
                heapq.heappush(heap, (dist[v], v))

    return dist, parent
```
Prim’s Algorithm for MST-finding
This is a greedy approach to finding the MST.
We exploit the cut property: for any cut of the vertices, the smallest edge crossing the cut belongs to the MST.
So it will run in \(O(E \log V)\) time.
```python
import heapq
from collections import defaultdict

def prim_mst(n, edges):
    """
    Prim's Algorithm to find the MST of a weighted undirected graph.

    Args:
        n (int): Number of vertices (0-based indexing).
        edges (List[Tuple[int, int, int]]): List of edges (u, v, weight).

    Returns:
        mst_edges (List[Tuple[int, int]]): Edges included in the MST.
        total_weight (int or float): Total weight of the MST.
    """
    # Build adjacency list from edges: graph[node] = list of (neighbor, weight)
    graph = defaultdict(list)
    for u, v, w in edges:
        graph[u].append((v, w))
        graph[v].append((u, w))  # include this since it's an undirected graph

    total_weight = 0       # total weight of MST
    mst_edges = []         # MST edges as (u, v)
    visited = [False] * n  # track visited/added vertices

    # Min-heap to pick the edge with the smallest weight.
    # Stores tuples: (weight, from_vertex, to_vertex)
    min_heap = [(0, -1, 0)]  # start from vertex 0; no parent, hence -1

    while min_heap:
        weight, u, v = heapq.heappop(min_heap)
        if visited[v]:
            continue
        visited[v] = True
        if u != -1:  # add the edge only if v is not the start node
            mst_edges.append((u, v))
            total_weight += weight
        # Add all edges from v whose destination is unvisited
        for to_neighbor, edge_weight in graph[v]:
            if not visited[to_neighbor]:
                heapq.heappush(min_heap, (edge_weight, v, to_neighbor))

    return mst_edges, total_weight


# Example usage
if __name__ == "__main__":
    n = 5
    edges = [
        (0, 1, 2), (0, 3, 6),
        (1, 2, 3), (1, 3, 8), (1, 4, 5),
        (2, 4, 7), (3, 4, 9),
    ]
    mst, total = prim_mst(n, edges)
    print("Edges in MST:")
    for u, v in mst:
        print(f"{u} - {v}")
    print("Total weight:", total)
```
Kruskal’s Algorithm
- we sort the edges by weight and consider them in ascending order
- if both endpoints are already in the same blue tree, we colour the edge red; else we colour that edge blue
- so this becomes a Union-Find operation, where we connect two nodes only if they are not already in the same blue tree
- runs in \(O(E \log V)\) time
```python
class DisjointSet:
    """
    Disjoint Set Union (Union-Find) with path compression and union by rank.
    Used to efficiently detect cycles while building the MST.
    """
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, u):
        if self.parent[u] != u:
            self.parent[u] = self.find(self.parent[u])  # path compression
        return self.parent[u]

    def union(self, u, v):
        root_u = self.find(u)
        root_v = self.find(v)
        if root_u == root_v:
            return False  # already in the same set; union would form a cycle
        if self.rank[root_u] < self.rank[root_v]:
            self.parent[root_u] = root_v
        elif self.rank[root_v] < self.rank[root_u]:
            self.parent[root_v] = root_u
        else:
            self.parent[root_v] = root_u
            self.rank[root_u] += 1
        return True


def kruskal_mst(n, edges):
    """
    Kruskal's algorithm to compute the MST of a weighted undirected graph.

    Args:
        n (int): Number of vertices.
        edges (List[Tuple[int, int, int]]): List of edges (u, v, weight).

    Returns:
        mst_edges (List[Tuple[int, int, int]]): Edges included in the MST.
        total_weight (int or float): Sum of weights in the MST.
    """
    # Sort edges by weight (non-decreasing order)
    edges = sorted(edges, key=lambda x: x[2])
    dsu = DisjointSet(n)
    mst_edges = []
    total_weight = 0

    for u, v, w in edges:
        if dsu.union(u, v):
            mst_edges.append((u, v, w))
            total_weight += w
            if len(mst_edges) == n - 1:  # MST complete
                break

    return mst_edges, total_weight


# Example usage:
if __name__ == "__main__":
    n = 6
    edges = [
        (0, 1, 4), (0, 2, 4),
        (1, 2, 2), (1, 0, 4),
        (2, 0, 4), (2, 1, 2), (2, 3, 3), (2, 5, 2), (2, 4, 4),
        (3, 2, 3), (3, 4, 3),
        (4, 2, 4), (4, 3, 3),
        (5, 2, 2), (5, 4, 3),
    ]
    mst, total = kruskal_mst(n, edges)
    print("Edges in MST:")
    for u, v, w in mst:
        print(f"{u} - {v} with weight {w}")
    print("Total weight:", total)
```
Topological Sort (and shortest/longest path in DAG)
- not every directed graph has a topological ordering (in particular, a graph with a cycle has a cyclic dependency and no valid order)
- we can do a post-order DFS (then reverse the output) to get a topo sort
- alternatively, Kahn's algorithm gives a topo sort via BFS on indegrees
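A minimal sketch of the post-order DFS approach; the three-state coloring used for cycle detection is my addition beyond the notes above:

```python
from collections import defaultdict

def topo_sort_dfs(n, edges):
    """Post-order DFS topological sort of nodes 0..n-1; returns None on a cycle."""
    graph = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
    # 0 = unvisited, 1 = on current DFS stack, 2 = finished
    state = [0] * n
    order = []

    def dfs(u):
        state[u] = 1
        for v in graph[u]:
            if state[v] == 1:                 # back edge found: cycle
                return False
            if state[v] == 0 and not dfs(v):
                return False
        state[u] = 2
        order.append(u)                       # post-order append
        return True

    for u in range(n):
        if state[u] == 0 and not dfs(u):
            return None                       # cyclic: no topological ordering
    return order[::-1]                        # reverse of post-order = topo order
```

Note the reversal at the end: post-order finishes sinks first, so the raw `order` list comes out backwards.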
Find single source shortest path in DAG
- find the topo sort then relax in the order of topo sort
```python
from collections import defaultdict, deque

def dag_shortest_path(n, edges, src):
    """
    Shortest paths in a weighted DAG from src using topological sort.
    """
    graph = defaultdict(list)
    indegree = [0] * n
    for u, v, w in edges:
        graph[u].append((v, w))
        indegree[v] += 1

    # Kahn's topo sort (BFS style)
    topo = []
    q = deque([u for u in range(n) if indegree[u] == 0])
    while q:
        u = q.popleft()
        topo.append(u)
        for v, _ in graph[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                q.append(v)

    dist = [float('inf')] * n
    parent = [None] * n
    dist[src] = 0
    for u in topo:
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                parent[v] = u
    return dist, parent

# For longest path in DAG: just negate all w before using the above.
```
- DAG longest path: negate the weights, find single-source shortest paths, then negate the distances back
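The negation trick in one self-contained sketch (re-inlining the topological relaxation so the example runs on its own; function name is my own):

```python
from collections import defaultdict, deque

def dag_longest_path(n, edges, src):
    """Longest path distances from src in a weighted DAG: negate weights,
    relax in topological order, then negate the results back."""
    graph = defaultdict(list)
    indegree = [0] * n
    for u, v, w in edges:
        graph[u].append((v, -w))   # negate each weight
        indegree[v] += 1

    # Kahn's topo sort
    topo = []
    q = deque(u for u in range(n) if indegree[u] == 0)
    while q:
        u = q.popleft()
        topo.append(u)
        for v, _ in graph[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                q.append(v)

    # Shortest-path relaxation on the negated weights
    dist = [float('inf')] * n
    dist[src] = 0
    for u in topo:
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w

    # Negate back; unreachable nodes become -inf
    return [-d if d != float('inf') else float('-inf') for d in dist]
```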
Floyd-Warshall: All pairs shortest path
Remember that cycle detection (on the adjacency-matrix output here) is done by checking the diagonal, because `dist[i][i]` is the distance from a node to itself and is supposed to be 0. If any diagonal entry is less than zero, there is a negative cycle somewhere.
```python
def floyd_warshall(n, edges):
    """
    Floyd-Warshall Algorithm: computes shortest paths between all pairs of vertices.

    Args:
        n (int): Number of vertices in the graph (0-based indexing).
        edges (List[Tuple[int, int, int]]): List of edges (u, v, w).

    Returns:
        dist (List[List[float]]): n x n matrix with shortest path distances.
            dist[i][j] = shortest distance from i to j, float('inf') if no path exists.
        has_negative_cycle (bool): True if a negative cycle is detected; False otherwise.
    """
    # Initialize distance matrix
    dist = [[float('inf')] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0  # Distance to self is zero

    # Set initial values based on edges
    for u, v, w in edges:
        dist[u][v] = w  # Directed edge from u to v with weight w

    # Floyd-Warshall main iteration
    for k in range(n):          # k: candidate for intermediate node
        for i in range(n):
            for j in range(n):
                known_dists = dist[i][k] != float('inf') and dist[k][j] != float('inf')
                # If both legs are known, attempt relaxation via k, the intermediate node
                if known_dists and dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]

    # Check for negative weight cycles: any negative diagonal entry
    # (they should all be 0 if there are no negative cycles)
    has_negative_cycle = any(dist[i][i] < 0 for i in range(n))
    return dist, has_negative_cycle


# Example usage:
if __name__ == "__main__":
    n = 4
    edges = [(0, 1, 5), (0, 3, 10), (1, 2, 3), (2, 3, 1)]
    dist_matrix, negative_cycle = floyd_warshall(n, edges)
    if negative_cycle:
        print("Graph contains a negative weight cycle.")
    else:
        print("Shortest distances between all pairs:")
        for row in dist_matrix:
            print(['INF' if x == float('inf') else x for x in row])
```
Eulerian Pathfinding via Hierholzer’s Algorithm
The key idea behind it:
- We build an adjacency list from source to destinations
- for each node, we sort the neighbours so that the lexicographical ordering will be respected (this helps in tie-breaking)
- then we DFS such that:
- we recursively visit neighbours, removing edges as we go
- we append a node to the itinerary only when it has no unused outgoing edges left; this makes it a post-order traversal (which also yields nodes in reverse order)
- reverse the collected order at the end. Because the DFS appends post-order, the itinerary we build comes out reversed!
a good example of this is in the problem “Reconstruct Itinerary (332)”:
```python
from collections import defaultdict
from typing import List

class Solution:
    def findItinerary(self, tickets: List[List[str]]) -> List[str]:
        graph = defaultdict(list)
        for src, dest in tickets:
            graph[src].append(dest)

        # Sort to abide by the lexicographical ordering:
        for src in graph:
            graph[src].sort(reverse=True)  # TRICK: this also lets us directly pop out based on the order we wish.

        itinerary = []

        def dfs(airport):
            # While we have outgoing edges, entertain them all:
            while graph[airport]:
                # Explore depth first, consuming the edge as we go!
                dfs(graph[airport].pop())
            # Post-order: no unused edges left, we can add it in now
            itinerary.append(airport)

        dfs('JFK')
        return itinerary[::-1]
```
Finding Shortest Path
Undirected Graphs
we can use BFS and track the parent of each node; walking up the parent pointers then gives the shortest-path tree
NOTE: DFS parent pointers also form a tree, but it is not the shortest-path tree
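The BFS-plus-parents idea can be sketched as follows (a minimal illustration; function and variable names are my own):

```python
from collections import deque

def bfs_shortest_paths(n, edges, src):
    """BFS from src on an unweighted undirected graph. parent[] encodes the
    shortest-path tree, so walking up parents recovers a shortest path."""
    graph = [[] for _ in range(n)]
    for u, v in edges:
        graph[u].append(v)
        graph[v].append(u)
    parent = [None] * n
    dist = [-1] * n                      # -1 = not yet reached
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if dist[v] == -1:            # first visit = fewest hops
                dist[v] = dist[u] + 1
                parent[v] = u
                q.append(v)
    return dist, parent

def path_to(parent, v):
    """Walk up the parent pointers to reconstruct the path src -> v."""
    path = []
    while v is not None:
        path.append(v)
        v = parent[v]
    return path[::-1]
```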
Directed Graphs
- plain BFS/DFS traversal alone won't cover all the paths efficiently, which is the main reason we can't just re-purpose them here (other than to visit all nodes/edges in the graph).
- also, BFS only finds the number of hops between 2 nodes, so once distances differ (encoded as weighted edges) BFS no longer gives shortest paths; we need something like Dijkstra or Bellman-Ford.
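Since BFS breaks down once edges carry weights, Dijkstra's algorithm is the usual replacement for non-negative weights. A minimal heap-based sketch (the lazy-deletion variant, where stale heap entries are skipped on pop):

```python
import heapq
from collections import defaultdict

def dijkstra(n, edges, src):
    """Single-source shortest paths on a directed graph with
    non-negative edge weights (u, v, w)."""
    graph = defaultdict(list)
    for u, v, w in edges:
        graph[u].append((v, w))
    dist = [float('inf')] * n
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                     # stale heap entry: skip (lazy deletion)
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist
```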
Range Interval #
Fenwick Tree
Keeping an array of prefix sums is great if our workload is lookup-heavy, but if we also need to update values while still answering range queries, then we need a Fenwick tree.
A Fenwick tree is also known as a Binary Indexed Tree (BIT).
```python
class FenwickTree:
    def __init__(self, n):
        self.n = n
        # fw[i] stores the partial sum of a range, i.e. the sum of values
        # from j + 1 to i, where j = i - (segment length of i)
        self.fw = [0] * (n + 1)

    def update(self, i, delta):
        """
        The delta is added to every partial sum that covers this slot,
        which is why we keep climbing to the ancestors (parents)
        to the right of the original index.
        """
        i += 1
        while i <= self.n:
            self.fw[i] += delta
            i += i & (-i)

    def query(self, i):
        """
        To query a prefix sum up to idx i (e.g. the prefix sum of the first
        7 elements, i.e. idx 6), first convert to 1-indexing, then sum over
        the partial sums of the covered segments.
        """
        i += 1
        s = 0
        while i > 0:
            s += self.fw[i]
            i -= i & (-i)
        return s

    def range_query(self, l, r):
        """
        For a range, take the prefix sum up to r and subtract the prefix
        sum up to l - 1 to get [l, r] inclusive.
        """
        return self.query(r) - (self.query(l - 1) if l > 0 else 0)
```

Conceptual Overview
- a Fenwick tree uses an array `fw` where each position stores the partial sum of a segment of the original array
- it uses the binary representation of indices to jump efficiently between cumulative blocks; the jumping works by stripping the least-significant bit: `segment_length = i & (-i)`
- the update operation efficiently adds a value to every segment that covers the index
- the query operation accumulates a prefix sum from the stored segments
Building the Fenwick tree
- notice that the tree is an implicit tree: nodes are consecutively numbered, and the parent-child relationship is determined by arithmetic on node indices
- the children are more granular than the parents; the parents have cumulative coverage
- `tree[i]` holds cumulative info for a segment of the original data array; the segment size is `lsb = i & (-i)`. Being able to identify the segment size is very important, since it also lets us find the parent of the current index.
- to find the parent, we move further to the right: `i = i + (i & (-i))`
- to find the child, we move further to the left: `i = i - (i & (-i))`
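The index arithmetic can be sanity-checked with two tiny helpers (illustrative only; they operate on 1-indexed tree positions):

```python
def query_path(i):
    """1-indexed positions visited when computing a prefix sum ending at i
    (repeatedly strip the least-significant bit)."""
    path = []
    while i > 0:
        path.append(i)
        i -= i & (-i)
    return path

def update_path(i, n):
    """1-indexed positions touched when updating slot i in a tree of size n
    (repeatedly add the least-significant bit)."""
    path = []
    while i <= n:
        path.append(i)
        i += i & (-i)
    return path

print(query_path(13))      # [13, 12, 8]: segments of length 1 + 4 + 8 cover prefix 1..13
print(update_path(5, 16))  # [5, 6, 8, 16]: every segment that covers slot 5
```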
basic API
- Querying the value of slot `i`
  - querying requires us to cover the slot itself plus the partial sums its children define. That's why we keep moving to the left until we are out of bounds.

```python
def query(self, i):
    """
    To query a prefix sum up to idx i (e.g. the prefix sum of the first
    7 elements, i.e. idx 6), first convert to 1-indexing, then sum over
    the partial sums of the covered segments.
    """
    i += 1
    s = 0
    while i > 0:
        s += self.fw[i]
        i -= i & (-i)
    return s
```
- Range query
  - use the inclusive left and right endpoints and subtract the two prefix sums.

```python
def range_query(self, l, r):
    """
    For a range, take the prefix sum up to r and subtract the prefix
    sum up to l - 1 to get [l, r] inclusive.
    """
    return self.query(r) - (self.query(l - 1) if l > 0 else 0)
```
- Updating slot `i`
  - updating requires us to update all the ancestors as well. That's why we keep going until we exceed the right-hand boundary.

```python
def update(self, i, delta):
    """
    The delta is added to every partial sum that covers this slot,
    which is why we keep climbing to the ancestors (parents)
    to the right of the original index.
    """
    i += 1
    while i <= self.n:
        self.fw[i] += delta
        i += i & (-i)
```
Use cases
Efficiently maintains prefix sums of a list of numbers.
Particularly great for cumulative frequency counts or prefix sums with updates.
All operations run in \(O(\log n)\) time:
Point updates: Update the value at a single index in the array.
Prefix sum queries: Retrieve the sum of elements in the prefix [0,i].
examples
Counting number of elements in a range satisfying a condition (popcount-depth problem).
Dynamic sum of elements in a range with updates.
Minimum/maximum queries over intervals (via modified Fenwick variants; the vanilla tree directly supports only invertible operations like sums).
Frequency counts with modifications (e.g., inverted indices).
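As a concrete usage sketch of the "dynamic range sums with updates" case (restating a compact version of the FenwickTree so the snippet is self-contained):

```python
class Fenwick:
    """Compact Fenwick tree with a 0-indexed public API over n slots."""
    def __init__(self, n):
        self.n = n
        self.fw = [0] * (n + 1)

    def update(self, i, delta):        # a[i] += delta
        i += 1
        while i <= self.n:
            self.fw[i] += delta
            i += i & (-i)

    def query(self, i):                # sum of a[0..i]
        i += 1
        s = 0
        while i > 0:
            s += self.fw[i]
            i -= i & (-i)
        return s

    def range_query(self, l, r):       # sum of a[l..r] inclusive
        return self.query(r) - (self.query(l - 1) if l > 0 else 0)


ft = Fenwick(8)
for i, v in enumerate([3, 1, 4, 1, 5, 9, 2, 6]):
    ft.update(i, v)
print(ft.range_query(2, 5))   # 19  (4 + 1 + 5 + 9)
ft.update(3, 10)              # point update: a[3] += 10
print(ft.range_query(2, 5))   # 29
```

Both the update and each range query cost \(O(\log n)\), versus \(O(n)\) rebuilds for a plain prefix-sum array under updates.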
Tricks #
suppose we are given tie-breaking rules (e.g. if there are multiple valid topo orders, return the lexicographically smallest), then we can sort the adjacency lists before using them.
e.g. see “Reconstruct Itinerary”
```python
# sort to abide by the lexicographical ordering:
for src in graph:
    graph[src].sort(reverse=True)  # TRICK: this also lets us directly pop out based on the order we wish.
```

Space-saving, lazy approaches:
- for the “min cost to connect all points” problem, we smell Prim's algorithm coming.
- however, we also know that we have to build the edges, and since the graph is complete, the edge list ends up being far too dense (\(O(n^2)\) edges).
- this means we have to find a way to "bind" (materialize) the candidate edges at the last place possible. That's how we figure out the lazy approach.
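A sketch of that lazy idea for Min Cost to Connect All Points (1584): rather than precomputing all \(O(n^2)\) edges, we generate a node's candidate edges only at the moment it joins the tree, and skip stale heap entries lazily. (Manhattan distance is assumed, per that problem's statement.)

```python
import heapq

def min_cost_connect_points(points):
    """Lazy Prim's MST over a complete graph of 2D points
    with Manhattan-distance edge weights."""
    n = len(points)
    if n <= 1:
        return 0
    visited = [False] * n
    heap = [(0, 0)]              # (cost to attach, point index); start from point 0
    total = 0
    remaining = n
    while remaining:
        cost, u = heapq.heappop(heap)
        if visited[u]:
            continue             # stale entry: u already joined the tree
        visited[u] = True
        total += cost
        remaining -= 1
        ux, uy = points[u]
        for v in range(n):       # generate u's edges lazily, only now
            if not visited[v]:
                d = abs(ux - points[v][0]) + abs(uy - points[v][1])
                heapq.heappush(heap, (d, v))
    return total
```

Note the heap can still hold \(O(n^2)\) entries in the worst case; the saving is that no explicit edge list is ever materialized up front.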
Analyse the input constraints to know what kind of answers are expected. Here's an initial v0 of this; I'll have to distill a proper rule of thumb soon.
For LeetCode and similar competitive programming questions, it's generally very good practice to start the problem-solving process by:
- carefully analyzing the input constraints (problem size \(n\), value ranges, etc.)
- using those constraints to narrow down suitable algorithms and data structures
- pattern-matching input sizes to typical time complexities:
  - small input sizes (e.g. \(n \leq 15\)) often suggest exponential or backtracking solutions
  - medium sizes (e.g. \(n \leq 500\)) may allow \(O(n^3)\) or DP algorithms
  - large sizes (e.g. \(n \geq 10^5\)) often require efficient \(O(n \log n)\) or linear-time algorithms
- for graphs, the number of vertices and edges often hints whether BFS, DFS, Dijkstra, Bellman-Ford, or Floyd-Warshall is feasible and expected
- clarify whether the problem states any unusual constraints or properties (directed, acyclic, weighted edges)

General rule of thumb:
- always read the input size constraints first
- map them to complexity vs. time-limit awareness
- use them to proactively discard impractical approaches

This saves time and prevents implementing inefficient solutions; it's a fundamental best practice for LeetCode and interview prep.[1][2][4][5]

Example: if \(n\) is up to 1000 and edges up to 100,000, expect \(O(n^2)\) algorithms to likely cause TLE, so think of graph algorithms with better complexity (adjacency lists with BFS/DFS instead of adjacency matrices). If \(n \leq 32\), bitmask DP might be applicable. For \(n \leq 10^5\), linear or near-linear solutions are needed. This helps you quickly converge on the right algorithm pattern for the problem.
[1](https://dev.to/dfs_with_memo/how-to-use-leetcode-effectively-2lgf)
[2](https://www.linkedin.com/pulse/how-solve-leetcode-problems-within-given-time-space-adithya-n-m-xecrc)
[3](https://stackoverflow.com/questions/73122516/what-are-the-constraints-in-leetcode-problem)
[4](https://www.reddit.com/r/leetcode/comments/v577ju/people_who_leetcoded_for_a_significant_amount_of/)
[5](https://leetcode.com/discuss/interview-question/1105473/solving-problems-with-aid-of-problem-constraints)
[6](https://leetcode.com/discuss/study-guide/2009997/how-to-get-started-with-dsa-and-practice-leetcode-efficiently)
[7](https://leetcode.com/discuss/study-guide/6067881/cp-special-learn-time-complexity-on-constraints)

Dimension-flattened index TRICK: 2D \(\implies\) 1D index flattening
we can flatten 2D indices into 1D for grids by doing this :
```python
idx = lambda r, c: r * n + c  # flatten 2D to 1D (n = number of columns)
```

this is wild.
this is useful in case where we don’t want to modify things for higher dimensions (e.g. for UnionFind algo)
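A tiny illustration (the grid shape and names here are mine); the flat index is exactly what you'd feed into a 1D Union-Find parent array:

```python
n_rows, n_cols = 3, 4                      # illustrative grid shape

idx = lambda r, c: r * n_cols + c          # flatten (r, c) -> single index
unflatten = lambda i: divmod(i, n_cols)    # and back: index -> (r, c)

print(idx(1, 2))       # 6
print(unflatten(6))    # (1, 2)

# e.g. a Union-Find over all grid cells needs only a flat parent array:
parent = list(range(n_rows * n_cols))
```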
TODO KIV #
TODO existing canonicals #
- Max flow min cut or network flow questions
Here’s an info dump of that canonical with some reference questions
Here are some LeetCode questions that fall under the canonical type of Max Flow / Min Cut / Network Flow problems:

1. Maximum Students Taking Exam: uses max flow algorithms to optimize seating arrangements in an exam setting. (https://leetcode.com/problems/maximum-students-taking-exam/discuss/988450/max-flow-ford-fulkerson-and-edmonds-karp-algorithm-solution-in-javascript)
2. Maximum Running Time of N Computers: utilizes max flow/min cut techniques to determine optimal battery usage over time. (https://leetcode.com/problems/maximum-running-time-of-n-computers/solutions/3385727/solution-based-on-max-flow-min-cut-beat-100-of-submissions-in-runtime/)
3. Miscellaneous graph problem discussions: collections of problems involving network flows, cuts, and related classical graph algorithms for interview prep. (https://leetcode.com/problems/graph-problems-list/, https://leetcode.com/problems/graph-problems/)
4. Graph algorithms for interviews: discusses core max flow/min cut algorithms like Ford-Fulkerson, Dinic, Edmonds-Karp, and their importance in interviews. (https://leetcode.com/discuss/explore/google/403885/should-i-know-how-to-implement-min-cut-max-flow-dijkstras-and-a-by-heart)
5. Max-Flow Min-Cut theorem and concepts (educational background/reference): https://en.wikipedia.org/wiki/Max-flow_min_cut_theorem

Summary:
- these problems typically use max flow algorithms (e.g. Ford-Fulkerson, Dinic, Edmonds-Karp)
- they are often about optimizing resources, flow, or connectivity in complex network structures, which fit the canonical Max Flow / Min Cut framework
- solving them involves understanding flow augmenting path algorithms and properties like max-flow equals min-cut
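For reference, the augmenting-path idea above can be sketched with a minimal Edmonds-Karp (a generic sketch of my own, assuming integer capacities and `s != t`, not tied to any specific problem):

```python
from collections import deque, defaultdict

def edmonds_karp(n, edges, s, t):
    """Max flow from s to t via shortest (BFS) augmenting paths.
    edges: list of (u, v, capacity) directed edges; assumes s != t."""
    cap = defaultdict(int)       # residual capacity keyed by (u, v)
    graph = defaultdict(list)    # adjacency including residual back-edges
    for u, v, c in edges:
        if v not in graph[u]:
            graph[u].append(v)
            graph[v].append(u)   # residual back-edge (capacity 0 initially)
        cap[(u, v)] += c

    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in graph[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow          # no augmenting path left: max flow reached

        # Find the bottleneck capacity along the path
        bottleneck = float('inf')
        v = t
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, cap[(u, v)])
            v = u

        # Push flow: decrease forward capacities, increase back-edges
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= bottleneck
            cap[(v, u)] += bottleneck
            v = u
        flow += bottleneck
```

By the max-flow min-cut theorem, the returned value also equals the capacity of a minimum s-t cut.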
- lexicographical topo sorting:: TODO Course Schedule III (630) — Advanced scheduling with optimization, extending topological and greedy concepts
- weighted shortest paths:: TODO Path With Minimum Effort (1631) — Weighted shortest path with state expansions similar to Swim in Rising Water
- graph counting:: TODO Number of Distinct Islands (694) (paid)
- graph enumeration
TODO Count Sub Islands (1905) — Counting subgraph embeddings (this is not right yet)