Sorting Algorithms
"Sorting is the physics of search. Master the trade-offs between stability, memory, and raw speed."
Sorting is the physics of search. Once data is sorted, binary search, two-pointer, and sliding-window techniques become unlocked. Almost every data-processing pipeline in the real world begins with a sort. Interviewers test sorting to see if you understand trade-offs: stability vs. speed, memory vs. in-place, worst-case guarantees vs. average-case performance. Knowing when to use Merge Sort vs. Quick Sort vs. Counting Sort is the difference between an O(N log N) solution and a TLE.
📖 Before We Start — The Big Picture
Sorting is the problem of rearranging N items so they are in a defined order (ascending, descending, or by some key). There are fundamentally two families of sorting algorithms:
- Comparison-based sorts: These compare pairs of elements to decide their relative order. No comparison-based sort can beat O(N log N) comparisons in the worst case; this is a proven lower bound, not just an observation. Algorithms in this family: Bubble, Selection, Insertion, Merge, Quick, and Heap Sort.
- Non-comparison sorts: These exploit additional knowledge about the data (e.g., all values are integers in a known range). They can beat O(N log N). Examples: Counting Sort, Radix Sort, Bucket Sort.
A sort is called stable if equal elements maintain their original relative order. This matters when sorting objects by multiple keys (e.g., sort employees by salary, then by name — you first sort by name stably, then sort by salary stably).
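The two-pass trick above can be sketched in Java. `List.sort` is guaranteed stable, so sorting by the secondary key first and the primary key last produces the combined ordering. The `Employee` record and its fields are illustrative, not from the text.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class StableSortDemo {
    // Hypothetical record for illustration.
    record Employee(String name, int salary) {}

    // Multi-key sort via two stable passes: secondary key first, primary key last.
    static List<Employee> bySalaryThenName(List<Employee> staff) {
        List<Employee> sorted = new ArrayList<>(staff);
        sorted.sort(Comparator.comparing(Employee::name));      // secondary key
        sorted.sort(Comparator.comparingInt(Employee::salary)); // primary key; stability
                                                                // keeps the name order for ties
        return sorted;
    }
}
```

Employees tied on salary come out in alphabetical order, because the second (stable) pass never reorders equal salaries.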
🌐 Sorting Overview
Sorting algorithms broadly fall into two structural camps: simple nested-loop sorts with $O(N^2)$ worst-case bounds, and recursive Divide & Conquer sorts that achieve $O(N \log N)$.
- $O(N^2)$ Baseline: Bubble, Selection, and Insertion Sorts. Best for tiny datasets or nearly perfectly sorted data.
- $O(N \log N)$ Divide & Conquer: Merge Sort and Quick Sort. The industry standards for generalized scaling.
- $O(N)$ Non-Comparison: Counting Sort and Radix Sort. Require strong assumptions about the data (e.g., integers within a known, bounded range).
- $O(N \log N)$ In-Place: Heap Sort. Sorts within the array itself, with no recursion required.
🔬 Algorithms Deep Dive
O(N²) Baseline Sorts
Bubble Sort: Repeatedly swaps adjacent out-of-order pairs, bubbling the largest remaining value to the end on each pass. A common optimization breaks early if a pass performs no swaps.
public void bubbleSort(int[] arr) {
    for (int i = 0; i < arr.length; i++) {
        boolean swapped = false;
        for (int j = 1; j < arr.length - i; j++) {
            if (arr[j] < arr[j - 1]) {
                swap(arr, j, j - 1);
                swapped = true;
            }
        }
        if (!swapped) break; // already sorted
    }
}
Selection Sort: Repeatedly finds the minimum of the unsorted suffix and swaps it to the front. Performs at most $N - 1$ swaps, which is useful when writes are expensive.
public void selectionSort(int[] arr) {
    for (int i = 0; i < arr.length - 1; i++) {
        int minIdx = i;
        for (int j = i + 1; j < arr.length; j++) {
            if (arr[j] < arr[minIdx]) minIdx = j;
        }
        swap(arr, i, minIdx);
    }
}
Insertion Sort: Grows a sorted prefix by inserting each new element into its correct position. Highly adaptive: it runs in $O(N)$ on nearly sorted arrays, which is why hybrid sorts such as Timsort use it for small runs of a few dozen elements.
public void insertionSort(int[] arr) {
    for (int i = 1; i < arr.length; ++i) {
        int key = arr[i], j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}
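The snippets in this section call a `swap` helper that is not shown. A minimal version matching those calls:

```java
public class SortUtils {
    // Shared helper assumed by the sort snippets in this section:
    // exchange the elements at indices i and j.
    static void swap(int[] arr, int i, int j) {
        int tmp = arr[i];
        arr[i] = arr[j];
        arr[j] = tmp;
    }
}
```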
Merge Sort (Divide & Conquer)
Recursively splits the array down to single elements, then merges the halves back together on the way up, guaranteeing a stable $O(N \log N)$ sort.
Caveat - The $O(N)$ Space Cost: The merge phase requires an auxiliary buffer to combine the halves while preserving stability. Merge Sort is the gold standard for sorting objects and linked lists precisely because of this stability.
void sort(int[] arr, int l, int r) {
    if (l < r) {
        int m = l + (r - l) / 2; // avoids overflow of (l + r) / 2
        sort(arr, l, m);
        sort(arr, m + 1, r);
        merge(arr, l, m, r); // O(N) auxiliary merge
    }
}
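The `merge` step is not shown above. A minimal sketch of a stable merge, assuming the same `l`/`m`/`r` inclusive-index convention:

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Stable merge of arr[l..m] and arr[m+1..r] using O(N) auxiliary memory.
    static void merge(int[] arr, int l, int m, int r) {
        int[] tmp = new int[r - l + 1];
        int i = l, j = m + 1, k = 0;
        while (i <= m && j <= r) {
            // '<=' takes equal elements from the left half first, preserving stability.
            tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
        }
        while (i <= m) tmp[k++] = arr[i++]; // drain the left remainder
        while (j <= r) tmp[k++] = arr[j++]; // drain the right remainder
        System.arraycopy(tmp, 0, arr, l, tmp.length);
    }

    static void sort(int[] arr, int l, int r) {
        if (l < r) {
            int m = l + (r - l) / 2;
            sort(arr, l, m);
            sort(arr, m + 1, r);
            merge(arr, l, m, r);
        }
    }
}
```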
Quick Sort
An in-place Divide & Conquer sort: partition the array around a pivot so that smaller elements end up on its left and larger elements on its right, then recurse on each side.
The $O(N^2)$ Adversary Hazard: A fixed pivot choice (e.g., always the last element) degrades to $O(N^2)$ on already sorted or reverse-sorted input. Randomized pivot selection restores the expected $O(N \log N)$ bound.
public void quickSort(int[] arr, int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}
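The `partition` step is not shown above. A minimal sketch using the Lomuto scheme with the randomized pivot the text recommends:

```java
import java.util.concurrent.ThreadLocalRandom;

public class QuickSortDemo {
    // Lomuto partition with a randomized pivot to defend against adversarial input.
    static int partition(int[] arr, int low, int high) {
        int r = ThreadLocalRandom.current().nextInt(low, high + 1);
        swap(arr, r, high);              // move the random pivot to the end
        int pivot = arr[high];
        int i = low - 1;                 // boundary of the "less than pivot" region
        for (int j = low; j < high; j++) {
            if (arr[j] < pivot) swap(arr, ++i, j);
        }
        swap(arr, i + 1, high);          // place the pivot between the two regions
        return i + 1;
    }

    static void quickSort(int[] arr, int low, int high) {
        if (low < high) {
            int pi = partition(arr, low, high);
            quickSort(arr, low, pi - 1);
            quickSort(arr, pi + 1, high);
        }
    }

    static void swap(int[] arr, int i, int j) {
        int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
    }
}
```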
Non-Comparison Sorts & Heap
Counting and Radix Sort map values directly to index positions instead of comparing pairs; Heap Sort is comparison-based but relies on the binary-heap structure rather than divide and conquer.
Counting Sort: Tallies occurrences of each value into a count array indexed by the value itself. Memory use is proportional to the maximum value $K$, so it only pays off when $K$ is small relative to $N$.
public void countingSort(int[] arr) {
    // Assumes non-negative integers.
    int max = Arrays.stream(arr).max().getAsInt();
    int[] count = new int[max + 1];
    for (int num : arr) count[num]++;
    for (int i = 0, j = 0; i <= max; i++) {
        while (count[i]-- > 0) arr[j++] = i;
    }
}
Radix Sort: Applies a stable Counting Sort digit by digit (least significant first), avoiding Counting Sort's dependence on the maximum value. Total cost is $O(d \cdot (N + K))$, where $d$ is the number of digits and $K$ is the radix (10 for decimal).
public void radixSort(int[] arr) {
    if (arr.length == 0) return;
    int max = Arrays.stream(arr).max().getAsInt();
    for (int exp = 1; max / exp > 0; exp *= 10) {
        countSortByDigit(arr, exp);
    }
}
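The `countSortByDigit` helper is not shown above. A minimal sketch, assuming non-negative integers; the backwards walk is what makes each pass stable, which LSD radix sort depends on:

```java
import java.util.Arrays;

public class RadixSortDemo {
    // Stable counting sort keyed on the decimal digit at position 'exp'
    // (exp = 1 for ones, 10 for tens, ...).
    static void countSortByDigit(int[] arr, int exp) {
        int n = arr.length;
        int[] output = new int[n];
        int[] count = new int[10];
        for (int num : arr) count[(num / exp) % 10]++;
        // Prefix sums turn digit counts into final positions.
        for (int d = 1; d < 10; d++) count[d] += count[d - 1];
        // Walk backwards so equal digits keep their relative order (stability).
        for (int i = n - 1; i >= 0; i--) {
            int d = (arr[i] / exp) % 10;
            output[--count[d]] = arr[i];
        }
        System.arraycopy(output, 0, arr, 0, n);
    }

    static void radixSort(int[] arr) {
        if (arr.length == 0) return;
        int max = Arrays.stream(arr).max().getAsInt();
        for (int exp = 1; max / exp > 0; exp *= 10) {
            countSortByDigit(arr, exp);
        }
    }
}
```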
Heap Sort: Builds a max-heap in place, then repeatedly swaps the root to the end and re-heapifies the remainder. Guaranteed $O(N \log N)$ time with $O(1)$ extra space, but it is not stable and has poorer cache locality than Merge Sort.
public void heapSort(int[] arr) {
    int n = arr.length;
    // Build-heap phase
    for (int i = n / 2 - 1; i >= 0; i--) {
        heapify(arr, n, i);
    }
    // Extraction phase: move the max to the end, shrink the heap, re-heapify
    for (int i = n - 1; i > 0; i--) {
        swap(arr, 0, i);
        heapify(arr, i, 0);
    }
}
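The `heapify` helper is not shown above. A minimal sift-down sketch for a max-heap stored in the array itself:

```java
public class HeapSortDemo {
    // Sift-down: restore the max-heap property for the subtree rooted at i,
    // treating arr[0..n-1] as the heap.
    static void heapify(int[] arr, int n, int i) {
        int largest = i, left = 2 * i + 1, right = 2 * i + 2;
        if (left < n && arr[left] > arr[largest]) largest = left;
        if (right < n && arr[right] > arr[largest]) largest = right;
        if (largest != i) {
            int t = arr[i]; arr[i] = arr[largest]; arr[largest] = t;
            heapify(arr, n, largest); // continue sifting down
        }
    }

    static void heapSort(int[] arr) {
        int n = arr.length;
        for (int i = n / 2 - 1; i >= 0; i--) heapify(arr, n, i);   // build heap
        for (int i = n - 1; i > 0; i--) {                          // extract max repeatedly
            int t = arr[0]; arr[0] = arr[i]; arr[i] = t;
            heapify(arr, i, 0);
        }
    }
}
```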
🔄 Algorithm Variations
Cyclic Sort
When the input is constrained to values in the range $1$ to $N$, each element's correct slot can be computed directly as targetIdx = nums[i] - 1. Swapping every element into its slot sorts the array in $O(N)$ with no comparison-sort machinery.
int i = 0;
while (i < nums.length) {
    int correct = nums[i] - 1; // the index where nums[i] belongs
    if (nums[i] > 0 && nums[i] <= nums.length && nums[i] != nums[correct]) {
        swap(nums, i, correct);
    } else {
        i++;
    }
}
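A common payoff of the placement loop above: after it finishes, any slot holding the wrong value exposes a missing number. The sketch below assumes an input of length $N$ drawn from $1..N$ where one value may be duplicated (so another is missing); the problem choice is illustrative, not from the text.

```java
public class CyclicSortDemo {
    // After cyclic placement, every placeable value v sits at index v - 1.
    // The first index j with nums[j] != j + 1 means j + 1 never appeared.
    static int findMissing(int[] nums) {
        int i = 0;
        while (i < nums.length) {
            int correct = nums[i] - 1;
            if (nums[i] > 0 && nums[i] <= nums.length && nums[i] != nums[correct]) {
                int t = nums[i]; nums[i] = nums[correct]; nums[correct] = t;
            } else {
                i++;
            }
        }
        for (int j = 0; j < nums.length; j++) {
            if (nums[j] != j + 1) return j + 1; // missing value
        }
        return -1; // nothing missing
    }
}
```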
Dutch National Flag (3-Way Partitioning)
Useful when the array contains many duplicates of a few known values (e.g., only 0s, 1s, and 2s). Three pointers partition the array into three regions in a single $O(N)$ pass with no auxiliary memory.
int low = 0, mid = 0, high = nums.length - 1;
while (mid <= high) {
    if (nums[mid] == 0)      swap(nums, low++, mid++);
    else if (nums[mid] == 1) mid++;
    else                     swap(nums, mid, high--);
}
QuickSelect
A Quick Sort recursion tree cut in half. Instead of recursing into both sides of the pivot, recurse only into the side containing the Kth index. Average time is $O(N)$ (worst case $O(N^2)$ without randomized pivots), making it the standard tool for Kth-largest and median problems.
// Find the k-th largest: the target index is nums.length - k.
public int quickSelect(int[] nums, int left, int right, int targetK) {
    int pivotIdx = partition(nums, left, right);
    if (pivotIdx == targetK) return nums[targetK];
    else if (pivotIdx < targetK) return quickSelect(nums, pivotIdx + 1, right, targetK);
    else return quickSelect(nums, left, pivotIdx - 1, targetK);
}
⏱️ Complexity Analysis
| Algorithm | Time (Best) | Time (Avg) | Time (Worst) | Space | Stability |
|---|---|---|---|---|---|
| Insertion Sort | $O(N)$ | $O(N^2)$ | $O(N^2)$ | $O(1)$ | Stable |
| Merge Sort | $O(N \log N)$ | $O(N \log N)$ | $O(N \log N)$ | $O(N)$ | Stable |
| Quick Sort | $O(N \log N)$ | $O(N \log N)$ | $O(N^2)$ | $O(\log N)$ | Unstable |
| Heap Sort | $O(N \log N)$ | $O(N \log N)$ | $O(N \log N)$ | $O(1)$ | Unstable |
| Counting Sort | $O(N+K)$ | $O(N+K)$ | $O(N+K)$ | $O(K)$ | Stable |
🗺️ 2D Sorting Constraints
Interviewers frequently ask candidates to sort 2D arrays by a custom key (e.g., time intervals).
// Given intervals [[start, end], [start, end], ...]:
// sort by start time, breaking ties by end time.
Arrays.sort(intervals, (a, b) -> {
    if (a[0] != b[0]) {
        return Integer.compare(a[0], b[0]);
    } else {
        return Integer.compare(a[1], b[1]);
    }
});
💪 Practice Problems
📝 Topic 9 Assignment
Complete the assignment before moving to Topic 10. It includes Easy, Medium, and Hard level problems covering O(N²) sorts, O(N log N) partitioned sorts, Linear Sorting variants, and Cyclic Sort patterns.
📄 Open Assignment →
✅ Topic Completion Checklist
Check each item before advancing to Topic 10.
Don't just memorize code; master the trade-offs between the algorithms above. Up next: Binary Search, the definitive tool for navigating large sorted arrays efficiently.