Advanced Sorting

You’ve logged 5,000 workouts over five years. Your fitness app needs to display them sorted by calorie burn—highest first. Using bubble sort or insertion sort could leave users waiting for seconds. Using Quicksort? Milliseconds.

In Chapter 4, we explored insertion sort, bubble sort, and selection sort—all performing with a time complexity of O(n²). While valuable for understanding sorting fundamentals and useful for small datasets like this week’s workouts, these algorithms are seldom used in production code when dealing with larger collections like years of fitness history.

The Quicksort algorithm, however, has broad practical application. This commonly used algorithm appears in code libraries and production systems, and forms part of many standard library sorting implementations. Quicksort features an average-case time complexity of O(n log n) and applies a divide and conquer strategy that results in superior performance.

How Quicksort works

Quicksort applies a series of rules to swap pairs of values. Swaps are performed in place, so no additional structures are needed to process the data. In our implementation, special variables named wall and pivot will help manage the swapping process.

At the start of the sorting process, the algorithm compares each value in the collection against a comparison value called the pivot. The wall marks the boundary of values that have already been swapped or evaluated. Since we’ve just started the sorting process, the current and wall indices begin at the same position.

Making comparisons

The next step compares the current and pivot values. Since the current value (such as 7) is greater than the pivot (such as 4), no swap occurs; however, the current index advances to the next position.

In this step, the current value (such as 2) is again compared with the pivot. Since 2 is less than 4, the value at the wall index is swapped with the current value. Once the swap completes, the wall advances to the next index position.

Conquering the divide

Comparing and swapping continues until the current index reaches the pivot. At that point, the pivot is swapped with the value at the wall index. Notably, once the pivot is swapped into place, it’s considered sorted: even though most values remain unsorted at this phase, all values less than 4 are now positioned to its left, and all values greater than 4 are positioned to its right.

The initial pivot value of 4 has been used to show how Quicksort can divide a collection into relatively equal segments. This process is called partitioning. To sort the remaining values, each value left or right of the initial pivot is also treated as a pivot and the process is repeated.
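
To make the partition outcome concrete before we write the full algorithm, here’s a standalone sketch that produces the same arrangement using Swift’s filter. Unlike the in-place swaps, filtering preserves relative order, but it shows the key invariant:

let values = [7, 2, 1, 6, 8, 5, 3, 4]
let pivot = 4
let smaller = values.filter { $0 < pivot }   // [2, 1, 3]
let larger  = values.filter { $0 > pivot }   // [7, 6, 8, 5]

// Everything less than the pivot sits to its left, everything greater to its right
print(smaller + [pivot] + larger)            // [2, 1, 3, 4, 7, 6, 8, 5]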

Divide and Conquer: The process of deconstructing a single, complex problem into a series of subproblems. In computer science, divide and conquer strategies can be found in searching and sorting algorithms, graph algorithms, and dynamic programming.

Implementing Quicksort in Swift

In Swift, the complete algorithm is expressed as two functions. The main quickSort function manages the overall execution, specifically the selection of each pivot value. Similar to our other sorting algorithms, the quickSort implementation is written as an Array extension. Unlike those algorithms, however, it relies on a nested function that applies recursion as its main control structure:

extension Array where Element: Comparable {

    // Sort array using quicksort with in-place partitioning
    mutating func quickSort() -> Array<Element> {

        // Recursively partition array segments around pivot values
        func qSort(start startIndex: Int, _ pivot: Int) {

            if startIndex < pivot {
                let iPivot = qPartition(start: startIndex, pivot)
                qSort(start: startIndex, iPivot - 1)
                qSort(start: iPivot + 1, pivot)
            }
        }

        qSort(start: 0, self.endIndex - 1)
        return self

    }
}

// execute sort
var sequence: Array<Int> = [7, 2, 1, 6, 8, 5, 3, 4]
let results = sequence.quickSort()
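
Once both functions are in place, running the sort on the sample sequence yields the fully sorted array:

print(results)   // [1, 2, 3, 4, 5, 6, 7, 8]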

Finally, the qPartition method, declared in the same Array extension, moves a selected pivot value to its correct index position:

// Partition array segment around pivot, returning final pivot position
mutating func qPartition(start startIndex: Int, _ pivot: Int) -> Int {

    var wallIndex: Int = startIndex

    // compare each value in the range with the pivot
    for currentIndex in wallIndex..<pivot {

        if self[currentIndex] <= self[pivot] {
            if wallIndex != currentIndex {
                self.swapAt(currentIndex, wallIndex)
            }

            // advance wall
            wallIndex += 1
        }
    }

    // move pivot to its final position
    if wallIndex != pivot {
        self.swapAt(wallIndex, pivot)
    }

    return wallIndex
}
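
To see partitioning in action, here is a trace of the first pass over our sample sequence, assuming qPartition is declared alongside quickSort in the same Array extension as shown above:

// Partition [7, 2, 1, 6, 8, 5, 3, 4] around the pivot 4 (last element):
//   current 7  > 4 → no swap
//   current 2 <= 4 → swap with wall → [2, 7, 1, 6, 8, 5, 3, 4], wall = 1
//   current 1 <= 4 → swap with wall → [2, 1, 7, 6, 8, 5, 3, 4], wall = 2
//   current 6, 8, 5 > 4 → no swaps
//   current 3 <= 4 → swap with wall → [2, 1, 3, 6, 8, 5, 7, 4], wall = 3
//   finally, swap pivot and wall    → [2, 1, 3, 4, 8, 5, 6, 7], return 3
var sample: Array<Int> = [7, 2, 1, 6, 8, 5, 3, 4]
let pivotIndex = sample.qPartition(start: 0, sample.endIndex - 1)
print(pivotIndex, sample)   // 3 [2, 1, 3, 4, 8, 5, 6, 7]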

Performance characteristics

Quicksort achieves O(n log n) time complexity in its average case through the power of partitioning. Each partitioning operation divides the array into two segments, ideally of roughly equal size. This creates a recursion tree with log n levels, where each level halves the problem size. At each level, the algorithm examines every element once during partitioning, resulting in n operations per level. The total complexity is therefore log n levels multiplied by n operations per level, yielding O(n log n).

However, Quicksort can degrade to O(n²) in its worst case when the pivot selection consistently creates unbalanced partitions. This occurs when the array is already sorted or contains all identical values, causing each partition to contain only one element. Modern implementations use techniques like random pivot selection or median-of-three to mitigate this risk.

The space complexity of Quicksort is O(log n) on average due to the recursion stack. Each recursive call stores information about the current partition being processed. In a balanced partition scenario, the maximum depth of recursion is log n, which determines the space required; in the unbalanced worst case, the recursion depth (and therefore the space) can grow to O(n).
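
The logarithmic depth is easy to see with a quick sketch: repeatedly halving a 10,000-element problem reaches a single element in about 13 steps:

// Count how many times n can be halved before reaching a single element
func halvingDepth(_ n: Int) -> Int {
    n <= 1 ? 0 : 1 + halvingDepth(n / 2)
}

print(halvingDepth(10_000))   // 13, matching log₂(10,000) ≈ 13.3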

Visualizing the performance difference

To understand why O(n log n) is dramatically better than O(n²), consider sorting 10,000 workout records (years of accumulated training data):

  • Basic sorting (O(n²)): ~100,000,000 operations (several seconds of lag)
  • Quicksort (O(n log n)): ~140,000 operations (instant)

This difference of over 700× explains why Quicksort dominates production code. The performance gap widens further as data grows—for 100,000 heart rate readings (a month of continuous monitoring), the difference grows to roughly 6,000×. This is why the Health app can instantly sort years of data without freezing.
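
These figures come straight from the formulas; a quick sketch reproduces them:

import Foundation

// Compare n² against n·log₂(n) for a given input size
func operationCounts(_ n: Double) -> (quadratic: Double, linearithmic: Double) {
    (n * n, n * log2(n))
}

let (quadratic, linearithmic) = operationCounts(10_000)
print(quadratic)                  // 100000000.0
print(linearithmic)               // ≈ 132,877
print(quadratic / linearithmic)   // ≈ 753, the 700×+ gap described above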

Best, average, and worst cases

Best case: O(n log n) occurs when each pivot divides the array into perfectly balanced halves. This produces the shallowest recursion tree possible.

Average case: O(n log n) still holds even with imperfect pivots, as long as partitions remain reasonably balanced. Random data typically produces this behavior.

Worst case: O(n²) happens when pivots consistently create maximally unbalanced partitions (e.g., always choosing the smallest or largest element as pivot). Sorted or reverse-sorted arrays trigger this behavior with naive pivot selection.
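
The worst case is easy to provoke using the qPartition method from our Array extension: on an already-sorted array, the last-element pivot leaves every other value in a single segment:

var sortedInput = Array(1...8)
let position = sortedInput.qPartition(start: 0, sortedInput.endIndex - 1)
print(position)   // 7: the pivot stays put, leaving a 7-element segment and an empty one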

Pivot selection strategies

The choice of pivot significantly impacts performance:

Last element (our implementation): Simple but vulnerable to worst-case behavior on sorted data.

Random element: Avoids worst-case patterns in pre-sorted data, achieving O(n log n) expected performance.

Median-of-three: Selects the median of first, middle, and last elements. Balances simplicity with improved pivot quality.

True median: Guarantees balanced partitions but requires extra computation to find, potentially negating the benefits.
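
As a sketch of how median-of-three could plug into our implementation, the helper below (medianOfThreePivot, a name of our choosing) moves the median of three samples to the last position, where qPartition expects to find the pivot:

extension Array where Element: Comparable {

    // Order the first, middle, and last elements, then move the median
    // to the end so it becomes the pivot for the next partition pass
    mutating func medianOfThreePivot(start: Int, _ end: Int) {
        let middle = start + (end - start) / 2
        if self[middle] < self[start] { swapAt(middle, start) }
        if self[end] < self[start] { swapAt(end, start) }
        if self[end] < self[middle] { swapAt(end, middle) }
        swapAt(middle, end)
    }
}

Calling this helper before qPartition inside qSort would make sorted and reverse-sorted inputs behave like the average case.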

Practical applications

Quicksort excels in situations where memory is limited, since it uses minimal additional space through its in-place partitioning strategy. On iOS devices where memory management matters, this in-place sorting is particularly valuable. When a fitness app sorts thousands of workouts by duration, Quicksort modifies the array directly rather than creating copies, conserving precious memory.

The algorithm typically runs faster than other O(n log n) algorithms in practice when average-case performance matters more than worst-case guarantees. Workout data, step counts, and calorie burns are essentially random—they have no particular pattern. This randomness produces the balanced partitions that allow Quicksort to achieve its optimal performance.

Many programming languages use Quicksort or hybrid algorithms based on it. For years, Swift’s standard library used Introsort, which begins with Quicksort and switches to heapsort if recursion depth exceeds a threshold; this approach guarantees O(n log n) worst-case performance while maintaining Quicksort’s excellent average-case speed. (Since Swift 5, the standard library’s sort methods use a stable, adaptive merge sort instead, though Introsort-style hybrids remain common in other languages such as C++.) The widespread adoption of Quicksort-based algorithms in production systems—from the Health app to Strava to AllTrails—demonstrates their practical value despite the theoretical possibility of worst-case behavior.

Divide and conquer in action

Quicksort exemplifies the divide and conquer strategy by breaking the sorting problem into smaller subproblems. The partitioning phase divides the array around a pivot, creating two segments where all elements in the left segment are smaller than the pivot, and all elements in the right segment are larger. Each segment then becomes its own sorting problem, solved recursively using the same strategy.

Unlike some divide and conquer algorithms that require significant work to combine solutions, Quicksort’s combination phase is trivial. Once both segments are sorted, the entire array is sorted with no additional work needed. This efficient combination is possible because the partitioning phase does the heavy lifting of positioning elements relative to the pivot.

The divide and conquer approach appears throughout computer science, from binary search to the graph algorithms we’ll explore later in this book. Understanding how Quicksort applies this strategy provides a foundation for recognizing the pattern in other contexts. The key insight is that many complex problems become tractable when broken into smaller instances of the same problem.

We’ll explore recursion more deeply in Chapter 6, where we’ll see how functions that call themselves enable elegant solutions to complex problems. Quicksort serves as an excellent introduction to recursive thinking, demonstrating how a few lines of code can express a powerful algorithm through self-reference.

When to use Quicksort

Quicksort achieves O(n log n) average time complexity with excellent practical performance:

Characteristic        Value
Average case          O(n log n)
Worst case            O(n²)
Space complexity      O(log n)
In-place              Yes
Stable                No

Quicksort is ideal when:

  • Memory is limited (in-place sorting)
  • Average-case performance is sufficient
  • Stability (preserving order of equal elements) is not required
  • Working with random or unpredictable data
  • General-purpose sorting for large datasets

Hybrid approaches like Introsort start with Quicksort and switch to heapsort if recursion depth becomes excessive, combining Quicksort’s speed with guaranteed O(n log n) worst-case performance.
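
To make the hybrid idea concrete, here’s a minimal sketch of the depth-limit mechanism. Real Introsort falls back to heapsort; for brevity this sketch falls back to the standard library’s sort, and it reuses the chapter’s qPartition method:

import Foundation

extension Array where Element: Comparable {

    // Quicksort with a depth budget of roughly 2·log₂(n); when the
    // budget runs out, fall back to a guaranteed O(n log n) sort
    mutating func introSortSketch() {
        let limit = 2 * Int(log2(Double(Swift.max(count, 2))))
        introSort(0, endIndex - 1, depth: limit)
    }

    private mutating func introSort(_ low: Int, _ high: Int, depth: Int) {
        guard low < high else { return }
        if depth == 0 {
            self[low...high].sort()   // stand-in for heapsort on this segment
            return
        }
        let pivotIndex = qPartition(start: low, high)
        introSort(low, pivotIndex - 1, depth: depth - 1)
        introSort(pivotIndex + 1, high, depth: depth - 1)
    }
}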

Building algorithmic intuition

Quicksort demonstrates the power of divide-and-conquer thinking: breaking a problem into smaller subproblems is a pattern that appears throughout computer science, from binary search (Chapter 3) to graph algorithms. The in-place partitioning strategy shows how algorithms can optimize both space and time complexity simultaneously. Understanding Quicksort provides context for recursion (Chapter 6) and performance trade-offs, illustrating why real-world algorithms often balance average-case performance against worst-case guarantees.