Remove Duplicates from Sorted List

Data Structures
Easy
Amazon

Given the head of a sorted linked list, delete all duplicates such that each element appears only once. The resulting list should also be sorted.

Why Interviewers Ask This

Amazon interviewers ask this to evaluate your mastery of pointer manipulation and edge-case handling in linked lists. They specifically test whether you can traverse a sorted structure efficiently while modifying it in place, without allocating extra memory, reflecting their 'Frugality' leadership principle and their focus on resource-constrained optimization.

How to Answer This Question

1. Clarify the input: Confirm that the list is sorted in non-decreasing order, and handle the null and single-node cases immediately.
2. Propose an iterative solution: Explain that because the list is sorted, duplicates are adjacent, allowing a single pass with O(1) space complexity.
3. Define the logic: Track the current node and check its successor. If the values match, bypass the duplicate by updating the next pointer; otherwise, advance.
4. Validate constraints: Note that recursion is avoided because deep recursion risks stack overflow on long lists.
5. Walk through an example: Trace the algorithm on a concrete list such as [1, 1, 2] to demonstrate the pointer updates visually before writing code.
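The steps above can be sketched in Python. This assumes a minimal singly linked `ListNode` class, which is not given in the question text:

```python
class ListNode:
    """Minimal singly linked list node (assumed, not specified in the prompt)."""
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def delete_duplicates(head):
    # Step 1: handle the null and single-node edge cases up front.
    if head is None or head.next is None:
        return head

    # Steps 2-3: single pass; duplicates are adjacent because the list is sorted.
    current = head
    while current.next is not None:
        if current.val == current.next.val:
            # Bypass the duplicate. Do NOT advance here: the new
            # successor may also equal current.val (e.g. 1 -> 1 -> 1).
            current.next = current.next.next
        else:
            current = current.next
    return head
```

Note the asymmetry in the loop body: the pointer only advances when the neighboring values differ, which is what makes runs of three or more duplicates collapse correctly.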

Key Points to Cover

  • Explicitly stating O(n) time and O(1) space complexity upfront
  • Demonstrating understanding that sorting implies adjacent duplicates
  • Handling null or single-node edge cases before the main loop
  • Explaining pointer manipulation logic clearly without just coding blindly
  • Avoiding auxiliary data structures like HashSets to show optimization skills

Sample Answer

To solve this problem efficiently, I would first verify the base cases. If the head is null or the list contains only one node, we simply return the head, since no duplicates can exist.

Because the list is already sorted, any duplicates are adjacent, which lets us solve this in a single linear pass with constant space, aligning with Amazon's preference for optimized solutions. Starting at the head, I iterate through the list comparing each node's value with its successor's. If they are identical, we have found a duplicate. Instead of creating a new node or using a hash set, I bypass the duplicate by pointing the current node's next reference to the node after the duplicate, which removes it from the chain. If the values differ, I simply advance the pointer and continue the check.

For example, given the list 1 -> 1 -> 2, the pointer starts at the first 1 and sees that the next node is also 1, so I update the first node's next pointer to skip the second 1 and point directly to 2; the list becomes 1 -> 2. Moving forward, 1 and 2 differ, so I advance. The loop ends, and I return the modified head. The result remains sorted and contains only unique elements, with no extra memory overhead.
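The walkthrough above can be confirmed with a short trace. The snapshot helper and node class below are hypothetical scaffolding added for illustration; they record the list's contents after every loop iteration:

```python
class Node:
    """Hypothetical minimal list node, for tracing only."""
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

def dedupe_with_trace(head):
    """Deduplicate in place and return (head, per-iteration snapshots)."""
    def snapshot(node):
        vals = []
        while node is not None:
            vals.append(node.val)
            node = node.next
        return vals

    trace = []
    current = head
    while current is not None and current.next is not None:
        if current.val == current.next.val:
            current.next = current.next.next  # skip the duplicate, stay put
        else:
            current = current.next            # values differ: advance
        trace.append(snapshot(head))
    return head, trace

# Trace the example from the answer: 1 -> 1 -> 2.
head, steps = dedupe_with_trace(Node(1, Node(1, Node(2))))
# First iteration removes the duplicate 1; the second just advances past 1 -> 2.
```

Running this on 1 -> 1 -> 2 shows the list becoming [1, 2] after the first iteration and staying [1, 2] as the pointer advances, matching the spoken walkthrough.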

Common Mistakes to Avoid

  • Using a HashSet to track seen values, which violates the O(1) space constraint expected for sorted inputs
  • Failing to handle the edge case where the head is null or the list has only one element
  • Creating a new list instead of modifying the existing list in-place, wasting memory
  • Getting stuck in an infinite loop by not correctly advancing the pointer when skipping duplicates
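The pointer-advancement mistakes above are worth demonstrating concretely. The hypothetical buggy variant below advances unconditionally after bypassing a duplicate, so a run of three or more equal values leaves a duplicate behind:

```python
class Node:
    """Hypothetical minimal list node, for illustration only."""
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

def dedupe_buggy(head):
    # BUG: advances even after removing a duplicate, so the new
    # successor is never rechecked and runs like 1 -> 1 -> 1 break.
    current = head
    while current is not None and current.next is not None:
        if current.val == current.next.val:
            current.next = current.next.next
        current = current.next  # unconditional advance is the bug
    return head

def dedupe_correct(head):
    # Correct: only advance when the neighboring values differ.
    current = head
    while current is not None and current.next is not None:
        if current.val == current.next.val:
            current.next = current.next.next  # stay put; recheck new neighbor
        else:
            current = current.next
    return head
```

On the input 1 -> 1 -> 1 the buggy version returns 1 -> 1 while the correct version returns 1, which is an effective test case to mention in the interview.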
