🌍
Why does this topic matter?
Linked Lists are the first true pointer-based data structure you will encounter. Mastering
them teaches you how to manipulate heap objects by reference — the foundational skill behind Trees,
Graphs, and advanced caches. Linked List problems are common at Google, Meta, and Uber because they
test pointer logic and memory-layout reasoning simultaneously. A solid Linked List foundation makes
Trees feel trivial.
📖 What is a Linked List?
An array stores elements in contiguous (side-by-side) memory cells. A Linked
List stores elements in scattered memory. Each element (called a
Node) is an independent heap object containing:
- The actual data (e.g. an int value)
- A pointer (reference) to the next Node in the chain
The last node's next field is null, marking the end. There is no random access
— to reach the Kth node you must traverse from the head, following next pointers one by
one. This is why indexing a Linked List is O(N), not O(1).
Array vs Linked List — Memory Model:
ARRAY ┌────┬────┬────┬────┬────┐
│ 10 │ 20 │ 30 │ 40 │ 50 │ ← contiguous, same block
└────┴────┴────┴────┴────┘
LINKED ┌──────────┐ ┌──────────┐ ┌──────────┐
LIST: │ val: 10 │────▶│ val: 20 │────▶│ val: 30 │──▶ null
│ next: ● │ │ next: ● │ │ next:null│
└──────────┘ └──────────┘ └──────────┘
[0x1A4 heap] [0x7B2 heap] [0x3F9 heap] ← scattered
💡
Array vs Linked List — Key Trade-offs:
✅ O(1) insertion/deletion at head — just redirect one pointer
✅ Dynamic size — no pre-allocation, grows as needed
❌ O(N) random access — must traverse from the head
❌ Extra memory per node for the next pointer
❌ Poor cache performance — nodes are scattered in memory
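The O(N) random-access cost above is visible directly in code: reaching index k requires k pointer hops from the head. A minimal sketch (the Node class and helper names here are illustrative, not from the lecture's ListNode):

```java
public class IndexDemo {
    static class Node {
        int val; Node next;
        Node(int v) { val = v; }
    }

    // Build a list 10 -> 20 -> 30 -> ... from the given values.
    static Node build(int... vals) {
        Node dummy = new Node(0), tail = dummy;
        for (int v : vals) { tail.next = new Node(v); tail = tail.next; }
        return dummy.next;
    }

    // get(k) must follow k next pointers: O(N), unlike array[k] which is O(1).
    static int get(Node head, int k) {
        Node curr = head;
        for (int i = 0; i < k; i++) curr = curr.next;
        return curr.val;
    }

    public static void main(String[] args) {
        Node head = build(10, 20, 30, 40, 50);
        System.out.println(get(head, 3)); // three hops from the head -> 40
    }
}
```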
🔑 How Pointer Manipulation Works
All Linked List operations — insert, delete, reverse — reduce to reassigning next (and
prev) pointers. The chain exists only because of these references. Lose a
reference and you permanently lose access to the rest of the list.
INSERT node C between A and B:
BEFORE: A ──▶ B ──▶ null
STEP 1: C.next = B // C now points to B ← DO THIS FIRST
STEP 2: A.next = C // A now points to C
AFTER: A ──▶ C ──▶ B ──▶ null ✓
⚠️ If you did A.next = C FIRST, you would lose the only reference to B — B and everything after it becomes unreachable!
DELETE node B from: A ──▶ B ──▶ C ──▶ null
STEP 1: A.next = B.next // Skip B entirely
AFTER: A ──▶ C ──▶ null ✓ (B is now unreferenced → GC)
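The two rewiring recipes above can be exercised directly. The node names A, B, C follow the diagrams; note that the insert does `C.next = B` before `A.next = C`:

```java
public class RewireDemo {
    static class Node {
        int val; Node next;
        Node(int v) { val = v; }
    }

    // Render the chain as "1->2->null" for easy inspection.
    static String render(Node head) {
        StringBuilder sb = new StringBuilder();
        for (Node n = head; n != null; n = n.next) sb.append(n.val).append("->");
        return sb.append("null").toString();
    }

    public static void main(String[] args) {
        Node a = new Node(1), b = new Node(2);
        a.next = b;                       // BEFORE: A -> B -> null

        Node c = new Node(3);
        c.next = b;                       // STEP 1: C points to B (do this FIRST)
        a.next = c;                       // STEP 2: A points to C
        System.out.println(render(a));    // 1->3->2->null

        a.next = a.next.next;             // DELETE C: A skips straight to B
        System.out.println(render(a));    // 1->2->null; C is unreachable -> GC
    }
}
```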
🔗
How it connects: Lecture 8 (Arrays) showed contiguous memory — this lecture shows
the contrast with non-contiguous memory. Lecture 5 (Recursion) is used in many Linked List
algorithms (reversal in k-groups, flattening multi-level lists). Lecture 2 (JVM Memory) explains why
every ListNode lives on the heap and why setting node = null makes it
eligible for garbage collection.
🏗️ Anatomy & Types
Recap: unlike Arrays, which store elements in contiguous memory, a Linked List stores elements in scattered memory. Each element (Node) contains data and a pointer (reference) to the next node.
1.1 — The ListNode blueprint
class ListNode {
int val;
ListNode next;
ListNode prev; // For Doubly Linked List only
ListNode(int x) {
this.val = x;
this.next = null;
}
}
1.2 — The Three Fundamental Types
- Singly Linked List: Forward only. Last node points to null.
- Doubly Linked List (DLL): Forward & Backward. Allows $O(1)$ deletion if a reference to the node is given.
- Circular: Last node points to the Head. Essential for buffers and round-robin scheduling.
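The circular variant's round-robin use fits in a few lines: because the last node's next points back at the head, iteration simply wraps around. This scheduler sketch is illustrative (the Node/schedule names are not from the lecture):

```java
public class RoundRobinDemo {
    static class Node {
        String task; Node next;
        Node(String t) { task = t; }
    }

    // Run `slices` time slices starting at head; the traversal never ends, it wraps.
    static String schedule(Node head, int slices) {
        StringBuilder order = new StringBuilder();
        Node curr = head;
        for (int i = 0; i < slices; i++) {
            order.append(curr.task);
            curr = curr.next;            // never null in a circular list
        }
        return order.toString();
    }

    public static void main(String[] args) {
        Node a = new Node("A"), b = new Node("B"), c = new Node("C");
        a.next = b; b.next = c;
        c.next = a;                      // the circular link back to the head
        System.out.println(schedule(a, 7)); // ABCABCA
    }
}
```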
🧠 Core Algorithms
2.1 — The Sentinel Strategy (Dummy Head)
A "Sentinel" node is a dummy node that points to the actual head. It eliminates special `if (head == null)` cases for insertion/deletion.
public ListNode removeElements(ListNode head, int val) {
ListNode dummy = new ListNode(0);
dummy.next = head;
ListNode curr = dummy;
while (curr.next != null) {
if (curr.next.val == val) curr.next = curr.next.next;
else curr = curr.next;
}
return dummy.next;
}
2.2 — In-place Reversal
The bread and butter of interview questions. The trick is to maintain three pointers: `prev`, `curr`, and `nextNode`.
public ListNode reverseList(ListNode head) {
ListNode prev = null;
ListNode curr = head;
while (curr != null) {
ListNode nextNode = curr.next; // Step 1: Save future
curr.next = prev; // Step 2: Reverse connection
prev = curr; // Step 3: Advance prev
curr = nextNode; // Step 4: Advance curr
}
return prev;
}
🔄 Advanced Variations
3.1 — Copy List with Random Pointers
💡
The Interweaving Pattern: Instead of using a HashMap for $O(N)$ extra space, interweave copy nodes between originals: `A -> A' -> B -> B'`. This allows random-pointer assignment in $O(1)$ extra space.
// 1. Interweave copies: curr.next = new Node(curr.val, curr.next)
// 2. Assign Randoms: if (curr.random != null) curr.next.random = curr.random.next
// 3. Detach: original.next = original.next.next
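The three comment steps expand into the following full method. The Node class with val/next/random mirrors the LeetCode 138 setup; note the null guard on curr.random in step 2:

```java
public class CopyRandomList {
    static class Node {
        int val; Node next, random;
        Node(int v) { val = v; }
    }

    static Node copyRandomList(Node head) {
        if (head == null) return null;

        // 1. Interweave copies: A -> A' -> B -> B' ...
        for (Node curr = head; curr != null; curr = curr.next.next) {
            Node copy = new Node(curr.val);
            copy.next = curr.next;
            curr.next = copy;
        }

        // 2. Assign randoms: the copy of curr.random sits right after it.
        for (Node curr = head; curr != null; curr = curr.next.next) {
            if (curr.random != null) curr.next.random = curr.random.next;
        }

        // 3. Detach: restore the original list and extract the copy chain.
        Node copyHead = head.next;
        for (Node curr = head; curr != null; curr = curr.next) {
            Node copy = curr.next;
            curr.next = copy.next;
            if (copy.next != null) copy.next = copy.next.next;
        }
        return copyHead;
    }
}
```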
3.2 — Flattening Multilevel DLL
Treat the `child` pointer like a "branch". Use a recursive approach or an iterative stack to attach children between `curr` and `curr.next`.
public Node flatten(Node head) {
if (head == null) return head;
Node curr = head;
while (curr != null) {
if (curr.child != null) {
Node next = curr.next;
Node childHead = flatten(curr.child);
// Link curr & child
curr.next = childHead;
childHead.prev = curr;
curr.child = null;
// Link child-tail & next
Node tail = childHead;
while (tail.next != null) tail = tail.next;
if (next != null) { tail.next = next; next.prev = tail; }
}
curr = curr.next;
}
return head;
}
⚡ Cache Design
4.1 — Least Recently Used (LRU) Cache
Requires $O(1)$ operations for both `get` and `put`. Achieved by combining a HashMap (for lookups) and a Doubly Linked List (for ordering).
class LRUCache {
Map<Integer, Node> map;
DoubleList list; // move accessed nodes to Head
public int get(int key) {
if (!map.containsKey(key)) return -1;
Node n = map.get(key);
list.moveToHead(n); // Mark as most recent
return n.val;
}
}
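As a runnable cross-check of the HashMap + DLL design: Java's own LinkedHashMap is internally exactly that combination (hash-table entries additionally threaded on a doubly linked list), and with accessOrder=true it reproduces the moveToHead behavior. Interviewers usually expect the hand-rolled DLL, so treat this as a verification shortcut, not the expected answer:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruSketch {
    private final int capacity;
    private final LinkedHashMap<Integer, Integer> map;

    public LruSketch(int capacity) {
        this.capacity = capacity;
        // accessOrder=true: every get()/put() moves the entry to the "recent" end
        // of the internal doubly linked list — the moveToHead step, for free.
        this.map = new LinkedHashMap<Integer, Integer>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > LruSketch.this.capacity;  // evict least-recent
            }
        };
    }

    public int get(int key) { return map.getOrDefault(key, -1); }

    public void put(int key, int value) { map.put(key, value); }
}
```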
4.2 — Least Frequently Used (LFU) Logic
LFU is the "hard mode" variation. Instead of one list, it uses multiple lists mapped by frequency. If total capacity is reached, evict the LRU item from the minFrequency list.
// 1. Use Frequency Map: freq -> DoublyLinkedList of nodes
// 2. Use Cache Map: key -> Node (for O(1) lookup)
// 3. Track minFrequency for O(1) eviction
updateFreq(node) {
    freqMap.get(node.freq).remove(node);
    if (node.freq == minFreq && freqMap.get(node.freq).isEmpty()) minFreq++;
    node.freq++;
    freqMap.get(node.freq).add(node);
}
⏱️ Complexity Analysis
| Operation | Singly List | Doubly List | Array (Standard) |
| --- | --- | --- | --- |
| Indexing | $O(N)$ | $O(N)$ | $O(1)$ |
| Prepend | $O(1)$ | $O(1)$ | $O(N)$ |
| Append | $O(1)^*$ | $O(1)$ | $O(1)$ avg |
| Deletion | $O(N)$ | $O(1)$ | $O(N)$ |

*Singly-list append is $O(1)$ only if a tail pointer is maintained.
💪 Practice Library
Problem
Reverse nodes of a linked list $k$ at a time and return the modified list. If the number
of nodes is not a multiple of $k$, the left-out nodes at the end should remain as they
are.
Full solution with complexity
public ListNode reverseKGroup(ListNode head, int k) {
ListNode curr = head;
int count = 0;
while (curr != null && count != k) { curr = curr.next; count++; }
if (count == k) {
curr = reverseKGroup(curr, k);
while (count-- > 0) {
ListNode tmp = head.next;
head.next = curr;
curr = head;
head = tmp;
}
head = curr;
}
return head;
}
Problem
Check if a linked list is a palindrome in $O(1)$ extra space. Requires finding the
middle, reversing the second half, and comparing.
Full solution with complexity
public boolean isPalindrome(ListNode head) {
    ListNode fast = head, slow = head;
    while (fast != null && fast.next != null) {
        fast = fast.next.next; slow = slow.next;
    }
    if (fast != null) slow = slow.next; // odd length: skip the middle node
    slow = reverse(slow); fast = head;
    while (slow != null) {
        if (fast.val != slow.val) return false;
        fast = fast.next; slow = slow.next;
    }
    return true;
}
private ListNode reverse(ListNode head) { // the in-place reversal from 2.2
    ListNode prev = null;
    while (head != null) {
        ListNode next = head.next;
        head.next = prev; prev = head; head = next;
    }
    return prev;
}
Problem
Remove the $n$-th node from the end of the list and return its head. Solve in one pass
using a dummy head and a constant gap.
Full solution with complexity
public ListNode removeNthFromEnd(ListNode head, int n) {
ListNode start = new ListNode(0);
ListNode slow = start, fast = start;
slow.next = head;
for (int i = 1; i <= n + 1; i++) fast = fast.next;
while (fast != null) {
slow = slow.next; fast = fast.next;
}
slow.next = slow.next.next;
return start.next;
}
Problem
Sort a linked list in $O(n \log n)$ time and $O(1)$ memory. Divide and conquer approach
is best.
Full solution with complexity
public ListNode sortList(ListNode head) {
    if (head == null || head.next == null) return head;
    ListNode prev = null, slow = head, fast = head;
    while (fast != null && fast.next != null) {
        prev = slow; slow = slow.next; fast = fast.next.next;
    }
    prev.next = null; // Cut the list into two halves
    ListNode l1 = sortList(head);
    ListNode l2 = sortList(slow);
    return merge(l1, l2);
}
private ListNode merge(ListNode l1, ListNode l2) {
    ListNode dummy = new ListNode(0), tail = dummy;
    while (l1 != null && l2 != null) {
        if (l1.val <= l2.val) { tail.next = l1; l1 = l1.next; }
        else { tail.next = l2; l2 = l2.next; }
        tail = tail.next;
    }
    tail.next = (l1 != null) ? l1 : l2;
    return dummy.next;
}
Time: $O(N \log N)$ · Space: $O(\log N)$ recursion stack
Problem
Merge $k$ sorted linked lists and return it as one sorted list. Use a PriorityQueue to
maintain the smallest current head.
Full solution with complexity
public ListNode mergeKLists(ListNode[] lists) {
PriorityQueue<ListNode> pq = new PriorityQueue<>((a, b) -> Integer.compare(a.val, b.val));
for (ListNode node : lists) if (node != null) pq.add(node);
ListNode dummy = new ListNode(0), tail = dummy;
while (!pq.isEmpty()) {
tail.next = pq.poll(); tail = tail.next;
if (tail.next != null) pq.add(tail.next);
}
return dummy.next;
}
Time: $O(N \log k)$ · Space: $O(k)$ for the heap
Problem
Design and implement a data structure for a Least Frequently Used (LFU) cache. It should
support get and put operations in $O(1)$ time complexity.
Full solution with complexity
class LFUCache {
Map<Integer, Node> cache;
Map<Integer, DoubleLinkedList> freqMap;
int size, capacity, minFreq;
public int get(int key) {
if (!cache.containsKey(key)) return -1;
Node node = cache.get(key);
updateFreq(node);
return node.val;
}
private void updateFreq(Node node) {
DoubleLinkedList oldList = freqMap.get(node.freq);
oldList.remove(node);
if (node.freq == minFreq && oldList.size == 0) minFreq++;
node.freq++;
freqMap.computeIfAbsent(node.freq, k -> new DoubleLinkedList()).add(node);
}
    // put omitted for brevity: a new key starts at freq 1 (reset minFreq = 1);
    // at capacity, first evict the LRU node from freqMap.get(minFreq).
}
Problem
Given head of a linked list, return true if it contains a cycle (a node
whose next points back to a previous node).
1 → 2 → 3 → 4
↑ ↓
└── 6 ← 5 ← cycle → true
1 → 2 → 3 → null ← no cycle → false
Think First
Floyd's Tortoise & Hare: slow moves 1 step,
fast moves 2 steps. In a cycle, fast laps slow and they meet. No cycle →
fast reaches null. Uses O(1) space vs O(N) for a HashSet approach.
▶ Solution + Proof Sketch
public boolean hasCycle(ListNode head) {
ListNode slow = head, fast = head;
while (fast != null && fast.next != null) {
slow = slow.next;
fast = fast.next.next;
if (slow == fast) return true;
}
return false;
}
// In a cycle of length L, fast gains 1 step/iter on slow.
// After ≤ L iterations, fast catches slow. ✓
// Bonus: Reset slow=head, advance both 1 step → they meet at cycle entry.
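The bonus step (reset slow to head, advance both one step at a time) is itself a classic follow-up question, finding the node where the cycle begins. A runnable version under the same ListNode assumptions:

```java
public class CycleEntry {
    static class ListNode {
        int val; ListNode next;
        ListNode(int v) { val = v; }
    }

    // Returns the node where the cycle begins, or null if the list is acyclic.
    static ListNode detectCycle(ListNode head) {
        ListNode slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
            if (slow == fast) {                 // met somewhere inside the cycle
                slow = head;                    // reset the tortoise
                while (slow != fast) {          // both now advance 1 step/iter
                    slow = slow.next;
                    fast = fast.next;
                }
                return slow;                    // meeting point = cycle entry
            }
        }
        return null;                            // fast fell off the end
    }
}
```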
Problem
Two linked lists represent integers in reverse digit order. Add them,
return the sum as a linked list.
l1=2→4→3 (342) l2=5→6→4 (465)
342 + 465 = 807 → Output: 7→0→8
Think First
Simulate grade-school addition: sum = l1.val + l2.val + carry, node value =
sum%10, new carry = sum/10. Treat exhausted list nodes as 0.
Add a final carry node if needed.
▶ Solution + Trace
public ListNode addTwoNumbers(ListNode l1, ListNode l2) {
ListNode dummy = new ListNode(0), curr = dummy;
int carry = 0;
while (l1 != null || l2 != null || carry != 0) {
int sum = carry;
if (l1 != null) { sum += l1.val; l1 = l1.next; }
if (l2 != null) { sum += l2.val; l2 = l2.next; }
carry = sum / 10;
curr.next = new ListNode(sum % 10);
curr = curr.next;
}
return dummy.next;
}
// 2+5=7,c=0→node(7) 4+6=10,c=1→node(0) 3+4+1=8,c=0→node(8)
// Result: 7→0→8 ✓
Time: $O(\max(M, N))$ · Space: $O(\max(M, N))$
📝 Topic Assignment
📋
Topic 11 Assignment — 28 Problems
Complete the assignment before moving to Stacks & Queues. Includes pointer rerouting, DLL logic, and Cache designs.
📄
Open Assignment →
✅ Topic Completion Checklist
Check each item before advancing to Topic 12.
✓ I can reverse a list iteratively in $O(N)$ time and $O(1)$ space
✓ I understand why Sentinels are used to avoid null-checks
✓ I can explain Floyd's Cycle detection logic (Tortoise-Hare)
✓ I know the 3-step Interweaving pattern for Copying Random Lists
✓ I understand the HashMap + DLL trade-off for $O(1)$ Cache design
✓ I've successfully completed at least 20 problems from the Topic 11 Assignment
🧠
You're ready for Topic 12: Stacks & Queues
Linked Lists were about pointers and memory. Stacks and Queues are about
Constraints.
Mastering these restricted data structures is key to Graph Algorithms (BFS/DFS).