🌍
Why does this topic matter?
Sorting is the physics of search. Once data is sorted, binary search, two-pointer, and sliding-window techniques become unlocked. Almost every data-processing pipeline in the real world begins with a sort. Interviewers test sorting to see if you understand trade-offs: stability vs. speed, memory vs. in-place, worst-case guarantees vs. average-case performance. Knowing when to use Merge Sort vs. Quick Sort vs. Counting Sort is the difference between an O(N log N) solution and a TLE.

📖 Before We Start — The Big Picture

Sorting is the problem of rearranging N items so they are in a defined order (ascending, descending, or by some key). There are fundamentally two families of sorting algorithms: comparison-based sorts (Bubble, Insertion, Merge, Quick, Heap), which cannot beat $O(N \log N)$ in the general case, and non-comparison sorts (Counting, Radix), which exploit constraints on the keys to reach linear time.

A sort is called stable if equal elements maintain their original relative order. This matters when sorting objects by multiple keys (e.g., sort employees by salary, then by name — you first sort by name stably, then sort by salary stably).

🔗
How it connects: Lecture 5 (Recursion) is the backbone of Merge Sort and Quick Sort — both use divide and conquer recursion. Lecture 10 (Searching) requires sorted input to work — Binary Search only works on sorted arrays. Many DSA problems say "sort first, then..." — this lecture teaches you why and how.

🌐 Sorting Overview

Sorting algorithms generally fall into two shapes: simple nested-loop designs ($O(N^2)$) and recursive Divide & Conquer designs ($O(N \log N)$).

  • $O(N^2)$ Baseline: Bubble, Selection, and Insertion Sorts. Best for tiny or nearly sorted datasets.
  • $O(N \log N)$ Divide & Conquer: Merge Sort and Quick Sort. The industry standards for general-purpose sorting at scale.
  • $O(N)$ Non-Comparison: Counting Sort and Radix Sort. Require strict constraints on the data (e.g., bounded integer keys).
  • $O(N \log N)$ Heap-Based: Heap Sort. Sorts in place using a binary heap, with no auxiliary arrays.
💡
Stable Sort property: if two elements share the same sorting key, their original relative order is preserved. Essential when sorting objects by multiple keys in sequence.

🔬 Algorithms Deep Dive

O(N²) Baseline Sorts

Bubble Sort: Repeatedly swaps adjacent out-of-order pairs, so each pass bubbles the largest remaining value to the end. Often optimized to break early when a pass makes no swaps.

☕ Java · Bubble Sort (Optimized)
public void bubbleSort(int[] arr) {
    for (int i = 0; i < arr.length; i++) {
        boolean swapped = false;
        for (int j = 1; j < arr.length - i; j++) {
            if (arr[j] < arr[j - 1]) {
                swap(arr, j, j - 1);
                swapped = true;
            }
        }
        if (!swapped) break;
    }
}
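The $O(N^2)$ snippets on this page call a swap helper that isn't listed. A minimal version for int arrays (the class name SortUtils is just for illustration) might look like:

```java
public class SortUtils {
    // Exchange two elements of an int array in place
    static void swap(int[] arr, int i, int j) {
        int tmp = arr[i];
        arr[i] = arr[j];
        arr[j] = tmp;
    }
}
```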

Selection Sort: Locates the minimum of the unsorted suffix and swaps it to the front. Performs at most $N - 1$ swaps, which is useful when writes are expensive.

☕ Java · Selection Sort
public void selectionSort(int[] arr) {
    for (int i = 0; i < arr.length - 1; i++) {
        int minIdx = i;
        for (int j = i + 1; j < arr.length; j++) {
            if (arr[j] < arr[minIdx]) minIdx = j;
        }
        swap(arr, i, minIdx);
    }
}

Insertion Sort: Grows a sorted prefix one element at a time. Highly adaptive (it runs in $O(N)$ on nearly sorted input), which is why hybrid sorts like Timsort fall back to it for small runs.

☕ Java · Insertion Sort
public void insertionSort(int[] arr) {
    for (int i = 1; i < arr.length; ++i) {
        int key = arr[i], j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}

Merge Sort (Divide & Conquer)

Recursively splits the array down to single elements, then merges the sorted halves back up, guaranteeing a stable $O(N \log N)$ sort.

Caveat, the $O(N)$ space cost: the merge phase allocates auxiliary memory to combine the halves while preserving stability. That stability is why Merge Sort is the default choice for sorting objects and linked lists.

☕ Java · Standard Merge (Div/Conq)
void sort(int[] arr, int l, int r) {
    if (l < r) {
        int m = l + (r - l) / 2;
        sort(arr, l, m);
        sort(arr, m + 1, r);
        merge(arr, l, m, r); // O(N) merge using auxiliary space
    }
}
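The skeleton above leans on a merge helper that isn't shown here. One common way to write it, copying both halves into temporary arrays (a sketch; the class name MergeSortDemo is just for illustration):

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Merge the sorted halves arr[l..m] and arr[m+1..r] back into arr
    static void merge(int[] arr, int l, int m, int r) {
        int[] left = Arrays.copyOfRange(arr, l, m + 1);
        int[] right = Arrays.copyOfRange(arr, m + 1, r + 1);
        int i = 0, j = 0, k = l;
        while (i < left.length && j < right.length) {
            // <= keeps the sort stable: on ties the left half wins
            arr[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        }
        while (i < left.length) arr[k++] = left[i++];
        while (j < right.length) arr[k++] = right[j++];
    }
}
```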

Quick Sort

An in-place algorithm that partitions the array around a pivot, with smaller elements to its left and larger elements to its right, then recursively sorts each side.

The $O(N^2)$ Adversary Hazard: a fixed pivot choice (e.g., always the last element) degrades to $O(N^2)$ on already sorted or reverse-sorted input. Randomized pivot selection keeps the expected cost at $O(N \log N)$.

☕ Java · Quick Sort Pattern
public void quickSort(int[] arr, int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}
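The partition helper isn't defined on this page. A common Lomuto-style sketch with a randomized pivot, to dodge the adversarial worst case (class name QuickSortDemo is just for illustration):

```java
import java.util.concurrent.ThreadLocalRandom;

public class QuickSortDemo {
    // Lomuto partition: place a randomly chosen pivot at its final sorted index
    static int partition(int[] arr, int low, int high) {
        int r = ThreadLocalRandom.current().nextInt(low, high + 1);
        swap(arr, r, high);               // move the random pivot to the end
        int pivot = arr[high], i = low;
        for (int j = low; j < high; j++) {
            if (arr[j] < pivot) swap(arr, i++, j);
        }
        swap(arr, i, high);               // drop the pivot into its slot
        return i;
    }

    static void quickSort(int[] arr, int low, int high) {
        if (low < high) {
            int pi = partition(arr, low, high);
            quickSort(arr, low, pi - 1);
            quickSort(arr, pi + 1, high);
        }
    }

    static void swap(int[] arr, int i, int j) {
        int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
    }
}
```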

Non-Comparison Sorts & Heap

Algorithms that map values directly to array indices, or exploit heap structure, instead of comparing pairs.

Counting Sort: Tallies the occurrences of each integer key in a count array, so memory grows with the maximum value $K$, not just $N$.

☕ Java · Counting Sort Baseline
public void countingSort(int[] arr) {
    int max = Arrays.stream(arr).max().getAsInt();
    int[] count = new int[max + 1];
    for (int num : arr) count[num]++;
    for (int i = 0, j = 0; i <= max; i++) {
        while (count[i]-- > 0) arr[j++] = i;
    }
}

Radix Sort: Applies a stable Counting Sort digit by digit, keeping each pass's count array small (base 10) regardless of the maximum value, for $O(d \cdot (N+K))$ overall, where $d$ is the number of digits.

☕ Java · Radix Sort Cascade
public void radixSort(int[] arr) {
    if (arr.length == 0) return;
    int max = Arrays.stream(arr).max().getAsInt();
    for (int exp = 1; max / exp > 0; exp *= 10) {
        countSortByDigit(arr, exp);
    }
}
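countSortByDigit isn't defined on this page. A stable per-digit counting pass for non-negative ints in base 10 might look like this (class name RadixDemo is just for illustration):

```java
public class RadixDemo {
    // Stable counting sort keyed on the digit selected by exp (1, 10, 100, ...)
    static void countSortByDigit(int[] arr, int exp) {
        int n = arr.length;
        int[] output = new int[n];
        int[] count = new int[10];
        for (int num : arr) count[(num / exp) % 10]++;
        for (int i = 1; i < 10; i++) count[i] += count[i - 1]; // prefix sums
        for (int i = n - 1; i >= 0; i--) {                     // backwards pass keeps it stable
            int d = (arr[i] / exp) % 10;
            output[--count[d]] = arr[i];
        }
        System.arraycopy(output, 0, arr, 0, n);
    }
}
```

Stability per pass is essential: each digit pass must preserve the ordering established by the lower-digit passes before it.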

Heap Sort: Builds a max-heap in place, then repeatedly swaps the root to the end and re-heapifies. Guaranteed $O(N \log N)$ time with $O(1)$ extra space, but unstable and less cache-friendly than Merge Sort's sequential access.

☕ Java · In-Place Heap Sort
public void heapSort(int[] arr) {
    int n = arr.length;
    // Build the max-heap, bottom-up
    for (int i = n / 2 - 1; i >= 0; i--) {
        heapify(arr, n, i);
    }
    // Repeatedly move the max to the end and shrink the heap
    for (int i = n - 1; i > 0; i--) {
        swap(arr, 0, i);
        heapify(arr, i, 0);
    }
}
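heapify isn't shown above. A standard sift-down sketch for a max-heap over arr[0..n), paired with the same extraction loop (class name HeapSortDemo is just for illustration):

```java
public class HeapSortDemo {
    // Sift arr[i] down until the subtree rooted at i satisfies the max-heap property
    static void heapify(int[] arr, int n, int i) {
        int largest = i, left = 2 * i + 1, right = 2 * i + 2;
        if (left < n && arr[left] > arr[largest]) largest = left;
        if (right < n && arr[right] > arr[largest]) largest = right;
        if (largest != i) {
            int t = arr[i]; arr[i] = arr[largest]; arr[largest] = t;
            heapify(arr, n, largest);
        }
    }

    static void heapSort(int[] arr) {
        int n = arr.length;
        for (int i = n / 2 - 1; i >= 0; i--) heapify(arr, n, i);
        for (int i = n - 1; i > 0; i--) {
            int t = arr[0]; arr[0] = arr[i]; arr[i] = t;
            heapify(arr, i, 0);
        }
    }
}
```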

🔄 Algorithm Variations

Cyclic Sort

When values are constrained to the range $1$ to $N$, each value's correct position is computable directly as targetIdx = nums[i] - 1, so you can place every element with swaps in $O(N)$ without any comparison-based sorting.

☕ Java · Cyclic Sorting Logic
int i = 0;
while (i < nums.length) {
    int correct = nums[i] - 1; // The index it SHOULD belong to
    if (nums[i] > 0 && nums[i] <= nums.length && nums[i] != nums[correct]) {
        swap(nums, i, correct);
    } else {
        i++;
    }
}
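The pattern above is typically followed by a linear scan for the first out-of-place index. A complete sketch in the spirit of the "First Missing Positive" problem below (class and method names are illustrative, not from the original):

```java
public class CyclicSortDemo {
    // Cyclic sort: put each value v in [1, n] at index v - 1, then scan for the gap
    static int firstMissingPositive(int[] nums) {
        int i = 0;
        while (i < nums.length) {
            int correct = nums[i] - 1; // where nums[i] belongs
            if (nums[i] > 0 && nums[i] <= nums.length && nums[i] != nums[correct]) {
                int t = nums[i]; nums[i] = nums[correct]; nums[correct] = t;
            } else {
                i++; // out of range, or already in place
            }
        }
        for (int k = 0; k < nums.length; k++) {
            if (nums[k] != k + 1) return k + 1;
        }
        return nums.length + 1; // array holds exactly 1..n
    }
}
```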

Dutch National Flag (3-Way Partitioning)

Useful when the array holds only a few distinct values (e.g., just 0s, 1s, and 2s). Three pointers partition the array in a single $O(N)$ pass, in place, with no auxiliary memory.

☕ Java · 3-Way Partition Bounds
int low = 0, mid = 0, high = nums.length - 1;
while (mid <= high) {
    if (nums[mid] == 0) swap(nums, low++, mid++);
    else if (nums[mid] == 1) mid++;
    else swap(nums, mid, high--);
}

QuickSelect

A Quick Sort recursion tree cut in half. Instead of sorting both sides of the pivot, you recurse only into the side containing the Kth index, giving average $O(N)$ time (worst case $O(N^2)$). Ideal for Kth-largest and median problems.

☕ Java · QuickSelect (Kth Largest)
// Find k-th largest implies sorted index targetK = nums.length - k
public int quickSelect(int[] nums, int left, int right, int targetK) {
    int pivotIdx = partition(nums, left, right);
    if (pivotIdx == targetK) return nums[targetK];
    else if (pivotIdx < targetK) return quickSelect(nums, pivotIdx + 1, right, targetK);
    else return quickSelect(nums, left, pivotIdx - 1, targetK);
}

⏱️ Complexity Analysis

| Algorithm | Time (Best) | Time (Avg / Worst) | Space | Stability |
|---|---|---|---|---|
| Insertion Sort | $O(N)$ | $O(N^2)$ / $O(N^2)$ | $O(1)$ | Stable |
| Merge Sort | $O(N \log N)$ | $O(N \log N)$ / $O(N \log N)$ | $O(N)$ | Stable |
| Quick Sort | $O(N \log N)$ | $O(N \log N)$ / $O(N^2)$ | $O(\log N)$ | Unstable |
| Heap Sort | $O(N \log N)$ | $O(N \log N)$ / $O(N \log N)$ | $O(1)$ | Unstable |
| Counting Sort | $O(N+K)$ | $O(N+K)$ / $O(N+K)$ | $O(K)$ | Stable |

🗺️ 2D Sorting Constraints

Interviewers regularly ask candidates to sort 2D arrays by a custom rule (e.g., time intervals by start, then end).

☕ Java · Interval Lambda Filtering
// Given coordinates [[start, end], [start, end]]
// Sort by Start Time; break ties by End Time
Arrays.sort(intervals, (a, b) -> {
    if (a[0] != b[0]) {
        return Integer.compare(a[0], b[0]);
    } else {
        return Integer.compare(a[1], b[1]);
    }
});
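The same two-key ordering can be written with comparator chaining, which scales more cleanly as tie-break keys accumulate. A sketch (the class and method names here are illustrative):

```java
import java.util.Arrays;
import java.util.Comparator;

public class IntervalSortDemo {
    // Primary key: start time; tie-break: end time
    static void sortIntervals(int[][] intervals) {
        Arrays.sort(intervals, Comparator
                .<int[]>comparingInt(a -> a[0])
                .thenComparingInt(a -> a[1]));
    }
}
```

For example, sortIntervals on {{5, 9}, {1, 4}, {1, 2}, {3, 6}} yields {{1, 2}, {1, 4}, {3, 6}, {5, 9}}.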

💪 Practice Problems

Problem 01 · In-Place Merge
Merge Sorted Array
Easy · Amazon, Facebook
Problem 02 · Three-Way Partition
Sort Colors (Dutch National Flag)
Medium · Microsoft, Adobe
Problem 03 · Cyclic Sort Secret
First Missing Positive
Hard · Google, Amazon · Cyclic Sort
Problem 04 · Optimization Variants
Kth Largest Element in an Array
Medium · Facebook · QuickSelect / Heap
Problem 05 · Radical Performance
Maximum Gap
Hard · Apple · Radix / Bucket Sort
Problem 06 · Bucket Sort Approach
Top K Frequent Elements
Medium · Amazon · Bucket Sort
Problem 07 · Greedy Reordering
Wiggle Sort
Medium · Google · Greedy Sort
Problem 08 · Sort + Min-Heap
Meeting Rooms II
Medium · Google, Amazon · Sort + Heap

📝 Topic 9 Assignment

📋
Assignment — 25 Problems
Complete the assignment before moving to Topic 10. It includes Easy, Medium, and Hard level problems covering O(N²) sorts, O(N log N) partitioned sorts, Linear Sorting variants, and Cyclic Sort patterns.

📄 Open Assignment →

✅ Topic Completion Checklist

Check each item before advancing to Topic 10.

I understand the mechanics of $O(N^2)$ sorts (Selection, Insertion, Bubble).
I can implement Merge Sort and Quick Sort (Partitioning) from scratch.
I recognize when to apply $O(N)$ special sorts like Counting and Radix Sort.
I know how to solve advanced interview problems using Custom Comparators (Largest Number, Wiggle Sort).
I have mastered the Cyclic Sort pattern for range problems.
🧠
You're ready for Topic 10: Binary Search
Don't just write code; master the boundaries of algorithmic execution. Binary Search is the definitive key to navigating large, structured arrays efficiently.