Algorithm Complexity
1. Computational Complexity Overview
Computational complexity quantifies the resources required by an algorithm—primarily time and space—as a function of input size. Complexity analysis provides theoretical bounds on algorithm performance independent of hardware or implementation details, enabling objective comparison across algorithms and prediction of scalability bottlenecks.
Why complexity matters: For large inputs, algorithmic efficiency dominates constant factors. An O(n log n) algorithm remains practical at n=10^9, while an O(n²) algorithm becomes prohibitively slow. Complexity analysis guides algorithm selection, identifies optimization opportunities, and characterizes problem difficulty.
Input size: Denoted n, representing relevant problem dimension (array length, graph vertices, string length, etc.). Some problems depend on multiple dimensions—for instance, matrix algorithms often reference both dimensions m and n.
2. Asymptotic Notation
Asymptotic analysis abstracts away constant factors and low-order terms to characterize growth rate as n → ∞.
Big O (Upper Bound)
Big O provides a worst-case upper bound. An algorithm is O(g(n)) if execution time grows at most proportionally to g(n). Example: Linear search is O(n) because no more than n comparisons occur.
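As a minimal sketch (the function name is illustrative), linear search makes the O(n) bound concrete: when the target is absent, every one of the n elements is compared exactly once.

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent.

    Worst case: target is missing, so all n elements are
    compared once -> O(n) comparisons.
    """
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1
```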
Big Omega (Lower Bound)
Big Omega provides a lower bound on growth rate: an algorithm is Ω(g(n)) if its running time grows at least proportionally to g(n). Example: any comparison-based sorting algorithm requires Ω(n log n) comparisons in the worst case.
Big Theta (Tight Bound)
Big Theta means the function grows precisely at rate g(n), up to constant factors. Merge sort has time complexity Θ(n log n) in all cases. Theta is the strongest statement.
Little o and Little ω
Little o and little ω represent strict inequalities: f(n) = o(g(n)) means f(n)/g(n) → 0 as n → ∞, and f(n) = ω(g(n)) means f(n)/g(n) → ∞. These are rarely used in algorithm analysis.
3. Common Time Complexities
| Notation | Name | Example Algorithm | Scalability at n=10⁶ |
|---|---|---|---|
| O(1) | Constant | Hash table lookup | Instant |
| O(log n) | Logarithmic | Binary search | ~20 operations |
| O(n) | Linear | Linear scan, sequential search | 1 million operations |
| O(n log n) | Linearithmic | Merge sort, quicksort (avg) | ~20 million operations |
| O(n²) | Quadratic | Bubble sort, insertion sort | 10¹² operations (slow) |
| O(n³) | Cubic | Matrix multiplication (naive) | 10¹⁸ operations (impractical) |
| O(2ⁿ) | Exponential | Brute-force subset enumeration | a ~300,000-digit count (impossible) |
| O(n!) | Factorial | Brute-force permutations | Infeasible |
O(1) — Constant: Execution time independent of input size. Direct array access, hash table operations with no collisions, fixed arithmetic.
O(log n) — Logarithmic: Halves input repeatedly. Binary search reduces sorted array of 10⁶ elements to answer in ~20 steps. Balanced tree operations inherit this efficiency.
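A minimal binary-search sketch shows where the logarithm comes from: each iteration halves the remaining candidate range, so at most ~log₂(n) iterations run.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    Each loop iteration halves [lo, hi], so at most ~log2(n)
    iterations occur -> O(log n).
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1      # discard the lower half
        else:
            hi = mid - 1      # discard the upper half
    return -1
```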
O(n) — Linear: Proportional to input size. Every element examined once. Linear search, single pass through array, simple iteration.
O(n log n) — Linearithmic: Optimal for comparison-based sorting. Merge sort (guaranteed), quicksort (average case), heapsort (guaranteed). Emerges from divide-and-conquer on n elements.
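A compact merge sort illustrates how divide-and-conquer yields O(n log n): the recurrence is T(n) = 2T(n/2) + O(n), since the merge step does linear work at each of the log n levels. This is a sketch, not an optimized implementation.

```python
def merge_sort(items):
    """Sort a list via divide-and-conquer: T(n) = 2T(n/2) + O(n)."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge step: O(n) work combining two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```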
O(n²) — Quadratic: Nested loops, each iterating n times. Bubble sort compares each element with every other. Practical for n < 10⁴ only.
O(2ⁿ) — Exponential: Doubles with each additional input element. Brute-force subset problems, certain recursive algorithms without memoization. Becomes intractable at n ~30.
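A sketch of brute-force subset enumeration makes the 2ⁿ growth visible: each element doubles the number of subsets, so the output itself has exponential size.

```python
def all_subsets(items):
    """Enumerate every subset of items.

    Each element doubles the subset count, so there are 2^n
    subsets and the total work is O(2^n).
    """
    subsets = [[]]
    for x in items:
        subsets += [s + [x] for s in subsets]
    return subsets
```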
O(n!) — Factorial: Brute-force all permutations. Traveling salesman by exhaustive search. Never acceptable for large inputs.
4. Space Complexity
Space complexity quantifies memory usage during algorithm execution.
Auxiliary space: Memory used beyond the input itself. Merge sort requires O(n) auxiliary arrays; quicksort sorts in place but still consumes an O(log n) recursion stack on average.
Total space: Input plus auxiliary. If input size dominates, total space ≈ input size.
Space-time tradeoffs: Increasing memory often decreases time. Hash tables trade space O(n) for O(1) lookup; memoization trades space O(n) for time O(n) instead of O(2ⁿ). Optimal tradeoff depends on constraints.
In-place algorithms: Auxiliary space O(1). Heapsort achieves O(n log n) time with O(1) extra memory; quicksort is also in-place apart from its recursion stack, which requires O(log n) space on average.
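A trivial but representative in-place operation, sketched below: reversing a list with two indices uses O(1) auxiliary space regardless of input size.

```python
def reverse_in_place(items):
    """Reverse a list in place using O(1) auxiliary space.

    Only two index variables are allocated; no copy of the
    input is made.
    """
    lo, hi = 0, len(items) - 1
    while lo < hi:
        items[lo], items[hi] = items[hi], items[lo]
        lo += 1
        hi -= 1
    return items
```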
5. Recurrence Relations and Master Theorem
Algorithm recursion produces recurrence relations describing T(n) in terms of smaller subproblems.
Master Theorem
For recurrences of the form T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1:
- Case 1: If f(n) = O(n^(log_b a − ε)) for some ε > 0, then T(n) = Θ(n^(log_b a))
- Case 2: If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n)
- Case 3: If f(n) = Ω(n^(log_b a + ε)) for some ε > 0, and a·f(n/b) ≤ c·f(n) for some c < 1 and all sufficiently large n, then T(n) = Θ(f(n))
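As a numerical sanity check of Case 2, the merge-sort recurrence T(n) = 2T(n/2) + n with T(1) = 1 solves exactly to n·log₂(n) + n for powers of two, matching the predicted Θ(n log n):

```python
def T(n):
    """Evaluate T(n) = 2*T(n//2) + n with T(1) = 1, for n a power of two."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# For n = 2^k the recurrence solves exactly to n*k + n = n*log2(n) + n,
# consistent with the Theta(n log n) bound from Case 2.
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * k + n
```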
Common Recurrences
| Recurrence | Solution | Example |
|---|---|---|
| T(n) = T(n-1) + O(1) | O(n) | Sequential iteration |
| T(n) = 2T(n/2) + O(n) | O(n log n) | Merge sort |
| T(n) = T(n/2) + O(n) | O(n) | Quickselect (average case) |
| T(n) = T(n-1) + O(n) | O(n²) | Selection sort |
| T(n) = nT(n-1) | O(n!) | Generating permutations |
6. Amortized Analysis
Amortized analysis computes average cost per operation over a sequence of operations, providing better bounds when some operations are cheap and occasional operations are expensive.
Aggregate analysis: Sum the total cost over n operations and divide by n. Example: n insertions into a doubling dynamic array trigger occasional O(n) resizes, yet the total copy cost remains O(n), so each insertion is amortized O(1).
Accounting method: Assign “credits” to each operation; expensive operations consume credits accumulated by cheap ones. For a doubling dynamic array, charge each insertion 3 units: 1 pays for the insertion itself, 2 are banked to pay for copying this element and one previously moved element at the next resize. Resizes are then fully prepaid, giving amortized O(1) per operation.
Potential method: Define a potential function φ tracking accumulated resources; amortized cost = actual cost + Δφ. For dynamic arrays: φ = 2n − m, where n is the size and m the capacity. Each insertion raises φ by 2; a resize when n = m copies n elements but drops φ from n to 0, cancelling the cost. Amortized O(1).
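The doubling dynamic array can be simulated directly (a sketch with an illustrative cost model: 1 unit per insertion, 1 unit per element copied during a resize). The total cost stays below 3n, confirming the amortized O(1) bound.

```python
def append_costs(num_inserts):
    """Simulate a doubling dynamic array; return the total cost.

    Cost model: each insertion costs 1 unit; a resize from a full
    array of size s copies s elements at 1 unit each.
    """
    capacity, size, cost = 1, 0, 0
    for _ in range(num_inserts):
        if size == capacity:
            cost += size       # copy all existing elements
            capacity *= 2      # double the capacity
        size += 1
        cost += 1              # the insertion itself
    return cost
```

For 1000 insertions the copies sum to 1 + 2 + 4 + … + 512 = 1023, so the total cost is 2023 — well under 3 × 1000, i.e. amortized under 3 units per insertion.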
Applications: Dynamic arrays, hash table resizing, union-find data structure, splay trees. Amortized bounds are tighter and more realistic than worst-case bounds for interactive systems.
7. Complexity Classes and Computability
P (Polynomial Time)
Problems solvable in O(n^k) time for some constant k on a deterministic Turing machine. Includes sorting, shortest path, maximum flow. Practical algorithms are typically in P.
NP (Nondeterministic Polynomial Time)
Problems verifiable in polynomial time. A proposed solution can be checked in O(n^k). Includes Satisfiability (SAT), Hamiltonian cycle, subset sum. All P problems are in NP.
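The asymmetry that defines NP can be sketched in a few lines: checking a proposed subset-sum certificate takes linear time, even though finding one may require exponential search. (The function and certificate format here are illustrative.)

```python
def verify_subset_sum(numbers, target, certificate):
    """Verify a subset-sum certificate in polynomial time.

    certificate is a list of indices into numbers. Verification
    is O(n), even though *finding* a valid subset may take
    exponential time in the worst case.
    """
    return (len(set(certificate)) == len(certificate)          # no repeats
            and all(0 <= i < len(numbers) for i in certificate)
            and sum(numbers[i] for i in certificate) == target)
```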
NP-Complete
Problems in NP to which all NP problems reduce in polynomial time. If any NP-complete problem is solved in polynomial time, P = NP. Examples: 3-SAT, traveling salesman decision, clique, vertex cover.
NP-Hard
Problems at least as hard as NP-complete; may not be in NP. Optimization versions (e.g., “find minimum TSP tour”) are NP-hard.
Polynomial Reduction
Problem A reduces to B if instances of A transform to B instances in polynomial time. If A reduces to B and B is in P, so is A. Reductions establish hardness hierarchy.
8. Algorithm Paradigms and Typical Complexities
| Paradigm | Approach | Typical Complexity | Example |
|---|---|---|---|
| Brute Force | Enumerate all possibilities | O(2ⁿ) or worse | Subset enumeration |
| Greedy | Local optimal choice | O(n log n) or O(n²) | Huffman coding, activity selection |
| Divide and Conquer | Split, solve, combine | O(n log n) | Merge sort, quicksort |
| Dynamic Programming | Memoized subproblems | O(n²) or O(n³) typical | Longest common subsequence, knapsack |
| Backtracking | Prune search tree | Exponential (with pruning) | N-queens, subset sum decision |
| Search (BFS/DFS) | Systematic exploration | O(V + E) on graphs | Connected components |
Divide and Conquer: Partition the input, recursively solve subproblems, combine results; the master theorem applies. Merge sort runs in O(n log n); Strassen matrix multiplication in O(n^2.81).
Dynamic Programming: Cache solutions to overlapping subproblems, build bottom-up. Fibonacci: naive O(2^n), memoized O(n). 0/1 knapsack O(nW) where W is capacity.
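The Fibonacci speedup mentioned above can be sketched with standard-library memoization: caching each subproblem collapses the O(2ⁿ) recursion tree to O(n) distinct calls.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Memoized Fibonacci: each n is computed once -> O(n) time,
    versus O(2^n) for the same recursion without the cache."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Without `@lru_cache`, `fib(50)` would be hopeless; with it, the call returns immediately.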
Greedy: Select the locally optimal choice at each step, hoping it yields a global optimum. Huffman coding O(n log n); minimum spanning tree (Kruskal) O(E log E). Fails when local optima don’t compose.
Backtracking: Build solution incrementally, backtrack when constraints violated. Search tree has exponential worst-case but pruning reduces average case. N-queens O(n!) without pruning.
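An N-queens counter, sketched below, shows the pruning idea: one queen is placed per row, and any column or diagonal conflict cuts off the entire subtree before recursing.

```python
def n_queens(n):
    """Count N-queens solutions by backtracking.

    One queen per row; a branch is pruned as soon as a column or
    diagonal conflict appears, long before a full placement.
    """
    def place(row, cols, diag1, diag2):
        if row == n:
            return 1                      # all n queens placed
        count = 0
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                  # conflict: prune this branch
            count += place(row + 1,
                           cols | {col},
                           diag1 | {row - col},
                           diag2 | {row + col})
        return count

    return place(0, set(), set(), set())
```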
Summary Table: Complexity at Scale
| Complexity | n=10 | n=100 | n=1000 | n=10⁶ |
|---|---|---|---|---|
| O(1) | 1 | 1 | 1 | 1 |
| O(log n) | 3 | 7 | 10 | 20 |
| O(n) | 10 | 100 | 1000 | 10⁶ |
| O(n log n) | 33 | 664 | 10,000 | 2×10⁷ |
| O(n²) | 100 | 10⁴ | 10⁶ | 10¹² |
| O(2ⁿ) | 1024 | 10³⁰ | impossible | impossible |
Cross-Links
- [[data-structures]] — Fundamental structures (lists, trees, graphs) with inherent complexity profiles
- [[sorting-algorithms]] — Paradigmatic case study of complexity analysis
- [[graph-algorithms]] — Breadth and depth-first search, shortest path (Dijkstra, Bellman-Ford)
- [[dynamic-programming]] — Memoization transforms exponential into polynomial
- [[cryptography]] — Security depends on computational hardness assumptions