Binary Heap

Why explore a data structure on a website about concurrency?
Because timers and schedulers rely on one to be efficient.

In this article:

  1. Why timers/schedulers require a binary heap.
  2. How they work internally.
  3. How to implement a working one from absolute scratch.

The working thread of a scheduler (i.e. a timer) has one responsibility: to run the tasks submitted to it.
To do that though, that thread needs to find the next task due to run. What if millions of tasks have been submitted? If it needs to sort millions of tasks to find the next one, the working thread would spend more time sorting tasks than running them.

When we developed a Java Timer from scratch in the previous article, we required a data structure to hold the tasks to run. This data structure acts as a buffer between the client threads submitting (i.e. inserting) new tasks and the working thread responsible for removing tasks and running them.

Figure 1. The data structure holds the tasks, and is shared amongst several threads

Shamefully, we ended up using an existing java.util.PriorityQueue. There was too much to talk about as it was. Now we atone.
Originally, the main concern was thread safety. The data structure needs to handle multiple threads mutating it concurrently. And so, we overlooked what kind of data structure was feasible from an efficiency point of view.

Imagine a scenario where clients submit millions of tasks for execution. These tasks are not sorted: the order in which they are submitted is not the order in which they are to be executed:

Timer timer;
timer.schedule(<task-1>, 5.seconds)
timer.schedule(<task-2>, 6.seconds)
timer.schedule(<task-3>, 1.seconds)

// expected order of execution: [task-3, task-1, task-2]
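
For the JVM-inclined, here is the same scenario in runnable Java, substituting the JDK's ScheduledExecutorService for our hand-rolled Timer (a hypothetical stand-in; the API differs, but the behaviour illustrated is the same):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public final class OutOfOrderSubmission {
  public static void main(String[] args) throws InterruptedException {
    ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    timer.schedule(() -> System.out.println("task-1"), 5, TimeUnit.SECONDS);
    timer.schedule(() -> System.out.println("task-2"), 6, TimeUnit.SECONDS);
    timer.schedule(() -> System.out.println("task-3"), 1, TimeUnit.SECONDS);
    timer.shutdown(); // already-scheduled delayed tasks still run by default
    timer.awaitTermination(10, TimeUnit.SECONDS); // prints task-3, task-1, task-2
  }
}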

On the receiving end, the worker thread needs to correctly pick the task to execute next. Sorting through millions of elements each time means it would spend more time sorting than executing. This would inevitably lead to tasks executing past their due date because the worker thread was busy sorting.

Alternatively, you could move the responsibility of sorting to the calling thread, but that would mean the client calls would block for an unreasonable amount of time.

                 insert    removeMin    who pays
sorted “list”    O(n)      O(1)         client threads
unsorted list    O(1)      O(n)         worker thread

For reference, here is the time it takes to sort an array list in Java:

n            sort time (ms)
1,000        0.2
10,000       1
100,000      14
1,000,000    206
10,000,000   357
Time it takes to sort an array list of integers in Java as a function of its size. AMD Ryzen 9 3900X. Single thread.

As expected, sorting time grows roughly linearly with the number of elements. It becomes infeasible to back a scheduler with an array list when many tasks are expected.
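
A measurement along these lines can be reproduced with a crude harness like the one below. This is a hypothetical sketch, not the exact benchmark behind the table above; trustworthy numbers would require a proper harness such as JMH:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public final class SortTiming {
  public static void main(String[] args) {
    Random random = new Random(42);
    for (int n : new int[] {1_000, 10_000, 100_000, 1_000_000, 10_000_000}) {
      List<Integer> list = new ArrayList<>(n);
      for (int i = 0; i < n; i++) list.add(random.nextInt());
      long start = System.nanoTime();
      Collections.sort(list); // O(n log n) comparison sort (TimSort)
      System.out.printf("n=%,d -> %d ms%n", n, (System.nanoTime() - start) / 1_000_000);
    }
  }
}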

Overview

With the scheduler diagram in mind, there are only two important operations the data structure has to perform efficiently:

  1. Inserting an element (any element)
  2. Removing the least element

All other operations are irrelevant for schedulers. Namely, it is irrelevant:

  1. How fast we can sort all the elements.
    Critically, notice this is different from finding the least element.
  2. How fast we can determine whether a particular element exists.
  3. How fast we can find the last element inserted.

Important

Schedulers perform only 2 operations on the underlying data structure. They insert new elements and remove the least element. Therefore, they can sacrifice performance on all other operations.

Given the above, two data structures stand out: Binary Heaps and Binary Search Trees (BSTs):

                Binary Heap            Binary Search Tree
insert average  O(1) 🔥                O(log₂ n)
insert worst    O(log₂ n)              O(log₂ n)
findMin         O(1) 🔥                O(log₂ n)
findAny         O(n) ☠️               O(log₂ n)
removeMin       O(log₂ n)              O(log₂ n)
comments        always balanced 🔥     requires explicit balancing

Both do a good job of removing the least element in O(log₂ n). Additionally, Binary Search Trees are good at fast lookup and removal of any element. But schedulers are not interested in finding just any element. They only need the next one. That is why the binary heap wins: it provides constant-time insertions on average.
Additionally, binary heaps are simpler than BSTs, as the former are always balanced trees. Vanilla BSTs can degenerate, losing their performance guarantees. They require balancing algorithms, which introduce overhead and add complexity.

Fundamentals

Binary heaps come in 2 flavours. The figure below illustrates a minHeap, whereby the least element is at the top. There is a maxHeap version which maintains the greatest element at the top. They are otherwise identical.

This visual representation is extremely useful. Most arguments are explained and understood by analysing it. Even scientific papers on the topic use them. So have a good look:

Visual representation of a Binary Heap (minHeap in this case). There are 18 elements, which dictates a height of 4, with the last level incomplete but filled left to right.

Note

The diagram exemplifies a binary heap whose elements are integers. This is for simplicity. A heap can contain any type of element that can be compared. For schedulers, the elements will be tuples of a task to run (in the JVM world, a Runnable instance) and a timestamp for when that task should run. In that case, the comparison of the tuples is determined solely by the timestamp.
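
As an illustration, a scheduler's element could look like the hypothetical record below, where the comparator handed to the heap looks only at the timestamp:

import java.util.Comparator;

// Hypothetical scheduler element: a task plus the instant it is due.
// The heap orders elements solely by dueAtMillis; the Runnable payload
// plays no part in the comparison.
record ScheduledTask(Runnable task, long dueAtMillis) {
  static final Comparator<ScheduledTask> BY_DUE_DATE =
      Comparator.comparingLong(ScheduledTask::dueAtMillis);
}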

There are only two properties for a binary heap:

  1. The value of each node must be less than or equal to the values of both its children.
  2. All nodes have both children, except possibly the nodes on the bottom level, in which case that level must be filled left to right.

Corollary of the 1st property

The least element is always the first (i.e. at the top). As we will see, that is why we can find the minimum so fast.

Corollary of the 2nd property

The binary heap is always balanced. This is unlike Binary Search Trees, which can degenerate and lose all their advantages.

Key observations

From the 2nd property, it follows that the number of elements of a tree deterministically determines both 1) its height, and 2) how many elements exist on the last level. The reverse is also true: the height and the number of elements on the last level uniquely determine the total number of elements.

The most important observation is that the total number of elements within a binary heap grows exponentially with its height.
Because every node on a particular level has 2 children, the number of elements doubles with each level: if some level has p elements, then the next has 2·p, the one after that 2·2·p, and so on.
Hopefully this is enough to convince you that the number of elements on each level is given by 2^h, where h is the height of that level, and where the first level is h=0.

height   elements
0        2^0  = 1
1        2^1  = 2
2        2^2  = 4
10       2^10 = 1024
15       2^15 = 32,768
20       2^20 = 1,048,576
30       2^30 = 1,073,741,824
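
Summing the levels gives the same picture from the other direction. A full tree of height h has

  n = 2^0 + 2^1 + … + 2^h = 2^(h+1) − 1

elements in total (the same geometric series we use in the Appendix), which, inverted, means the height grows only logarithmically with n: h = log₂(n + 1) − 1.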

Important

The number of elements in a tree grows exponentially with its height.

The brilliance of the algorithm
The core idea of the data structure is to keep the insert/deleteMin operations growing linearly with the height of the tree, which means they grow logarithmically with the number of elements.

Operations

As stressed before, the 2 operations binary heaps are good at, which coincide with the only 2 operations schedulers are interested in, are:

  1. Insertion of some arbitrary element.
  2. Removal of the least element.

This is achieved if the operations scale linearly with the height of the tree, which means scaling logarithmically with the number of elements.

Insert

In order to respect the second property, inserting a new element implies the creation of a new node/element on the next available (i.e. leftmost) “slot” of the last level. Or, if the last level is already full, on the leftmost slot of a new level.

At the same time, in order to respect the first property, the above does not mean the new element will occupy that slot. We might have to re-shuffle elements.

By way of example, in the binary heap above, the next slot is the one to the right of element 76, which would be the right child of element 60.
However, if the element to be inserted is 1, then it cannot stay there, as that would violate the 1st property given that 1 < 60. We have to re-organize the positions of the elements so that the 1st property holds.

This can be achieved by comparing the child with its parent, and if necessary, swapping their position. The process – called percolation – is repeated until the 1st property holds across the entire tree.

End result of inserting element 1 into the previous binary heap.

Critically, notice that the number of comparisons done is only 4, which is equal to the height of the tree.

Algorithm
  1. New element is initially appended to the end of the tree.
  2. Compare new element with its parent, and then
    1. If element lower, swap position with parent and repeat step 2 from current position.
    2. If element greater or equal, stop.

In the “worst case” scenario, the new element inserted is a new minimum, and therefore we must “pop it up” from the lowest level to the very first level. At each level change we must make one comparison operation. Therefore, in the worst case, we must make as many operations as there are levels. Because the number of levels equals log₂ N, the number of operations grows exponentially more slowly than the total number of elements in the tree.

Where is the magic? The magic lies in the fact that every time we jump up one level, we are ignoring half of the elements!

O(1) – on average

(skip to DeleteMin if not interested)

It gets better.
We need to make log₂ N operations in the worst case. In the best case, we need only one – the comparison with the parent. What about on average? Mathematics tells us that on average we need to make only ~2 operations. That is, the expected number of operations per insert is constant.

We won’t go into the formal explanation for it. The intuitive argument goes like this: every extra level adds twice as many elements as the level before it. Furthermore, the number of elements on the last level is given by 2^h, and, as we shall prove below, the number of all elements above that level is given by 2^h − 1. In other words, at least half the elements of the binary heap are at the very bottom!

For example, our binary heap has 8 elements on level h=3, which is greater than the 7 elements above that level.

The argument is then: if you insert an element at random, there is a 50% probability it “lands” on the last level, 25% that it “lands” on level h−1, 12.5% on level h−2, 6.25% on level h−3, and so forth. So, on average, it is very likely the random element stays in the bottom levels, and therefore does not need to be propagated up the tree.
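
A back-of-the-envelope version of that argument (a heuristic, not the rigorous proof): if an insert settles at the bottom with probability 1/2 after 1 comparison, one level up with probability 1/4 after 2 comparisons, and so on, the expected number of comparisons is

  E ≈ 1·(1/2) + 2·(1/4) + 3·(1/8) + … = 2

a constant, independent of the number of elements.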

The proof of this is not trivial. You can search on the internet for scientific articles deriving this rigorously.

DeleteMin

We don’t want to delete just any element. We want to delete the least element, the minimum, which we know is at the top of the heap.

Deletion is similar to insertion, but it goes from the top of the tree to the bottom.
Again, to respect the 2nd property, even though we delete the first element, that position (i.e. “slot”) of the tree remains; only its content changes. The “slot” which ceases to exist is that of the last element of the heap.

End result of removing the least element (2) from the previous binary heap.

Algorithm
  1. The first element of the heap is removed.
  2. The last element of the heap is moved into the now vacant 1st “slot”.
  3. To respect the 1st property, the new first element is compared with its children.
    1. If either child is lower, swap positions with the lowest child and repeat step 3 from the current position.
    2. If both children's values are higher or equal, stop.

Notice again the core idea: in the worst case, after moving the last element to the top, this element will sink all the way down to the bottom of the tree. So the number of comparisons you have to make depends on the height of the tree, not on the total number of elements.

Implementation

In the original article we already focused on concurrency. Therefore, to keep things simple, let us focus on the core implementation of the minHeap without the complexities of thread safety.

The most common approach is to use an array. Recall that arrays are conceptually contiguous memory locations whereby each element can be accessed by index in constant time.

The challenge is how to translate the tree structure, and the abstract insert/deleteMin algorithms onto an array.

Binary Heap
import java.util.Comparator;
import java.util.NoSuchElementException;

// For conciseness, we overlook important points, such as: 1) checking for nulls, 
// 2) exception handling, and 3) growing/shrinking the backing array as required.
public final class BinaryHeap<E> {
  private static final int DEFAULT_CAPACITY = 1024;
  private final Comparator<? super E> comparator;
  private E[] queue; // zero based
  private int numberElem;
  public BinaryHeap(Comparator<? super E> comparator) {
    this.comparator = comparator;
    this.numberElem = 0;
    queue = (E[]) new Object[DEFAULT_CAPACITY];
  }

  public E peekMin() {
    return queue[0];
  }

  private boolean lower(int i, int j) {
    return comparator.compare(queue[i], queue[j]) < 0;
  }

  private void swap(int i, int j) {
    E savedI = queue[i];
    queue[i] = queue[j];
    queue[j] = savedI;
  }

  private int indexChildL(int index) {
    return 2 * index + 1;
  }

  private int indexChildR(int index) {
    return 2 * index + 2;
  }

  private int indexParent(int index) {
    return (int) Math.ceil(index * 1.0 / 2) - 1;
  }
}

There are many ways you could index the elements into the array. One that works is to do it breadth-first. Elements lower on the tree have higher indexes. Within each level, elements to the left come earlier on the array. For our example binary tree, you would have the following array [2, 5, 6, 40, 7, 8, 90, 45, 60, 12, 14, 75, 10, 91, 95, 99, 50, 76].

In this case, the first element of the array is the minimum and can be accessed with queue[0]. The last element of the tree is also the last element of the array.
This arrangement feels more natural, but others would also be possible.

The critical thing is of course to be able to implement the algorithms underlying the insert and deleteMin operations. For that we need to be able to:

  • Determine what is the first element of the heap. ✅
    From earlier, the index is 0.
  • Determine what is the last element of the heap. ✅
    From earlier, the index is numberElem - 1.
  • Determine the index of the parent of a given element. 🤔
    Required for the insert algorithm, whereby one iterates up the height of the tree, swapping elements as needed.
  • Determine the index of the children of a given element. 🤔
    Required for the deleteMin algorithm, whereby one iterates down the height of the tree, swapping elements as needed.

Most online resources, as well as authoritative books, avoid explaining the derivation and simply state the following:

Given an element at index i of the binary heap, the following holds:

  indexParent(i) = ⌈ i / 2 ⌉ − 1
  indexChildLeft(i) = 2·i + 1
  indexChildRight(i) = 2·i + 2

where the odd looking, square-bracket-like symbol ⌈ ⌉ in the first expression is the mathematical notation for the ceiling function Math.ceil(<some-double>).
We will take this as the truth. However, we can deduce these expressions on our own. Have a look at the Appendix if you are curious; there we explain the derivation step by step.
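
As a quick sanity check of these expressions, consider the breadth-first array from earlier (a throwaway snippet; the values are just labels):

// The example heap from earlier, laid out breadth-first.
int[] heap = {2, 5, 6, 40, 7, 8, 90, 45, 60, 12, 14, 75, 10, 91, 95, 99, 50, 76};
int i = 11;                                // element 75
int parent = (int) Math.ceil(i / 2.0) - 1; // = 5
int left = 2 * i + 1;                      // = 23, beyond the array: 75 has no children
System.out.println(heap[parent]);          // prints 8, the parent of 75 in the tree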

The algorithm is simple. The code snippets don’t require much explanation, aside from highlighting that this is not thread-safe.
Importantly, even if this code is only ever used from a single thread, you should attempt to make the elements immutable. You must not mutate the elements of the binary heap in a way that violates the heap invariants.

Insert implementation
public final class BinaryHeap<E> {
    // constructor and parameters hidden
    // utility methods

  public void insert(E elem) {
    queue[numberElem++] = elem; // append to the first free slot, i.e. the end of the tree
    percolateUp();
  }

  private void percolateUp() {
    int iC = numberElem - 1; // index of the newly appended child

    while (iC > 0) {
      var iP = indexParent(iC); // index of its parent
      if (!lower(iC, iP)) break; // stop once the child is greater than or equal to its parent
      swap(iC, iP);
      iC = iP;
    }
  }
}
DeleteMin implementation
public final class BinaryHeap<E> {
    // constructor and parameters hidden
    // utility methods

  public E deleteMin() {
    if (numberElem == 0) throw new NoSuchElementException("No elements");
    E elemToReturn = queue[0];
    queue[0] = queue[numberElem - 1]; // move the last element into the vacant top slot
    queue[--numberElem] = null;       // clear the old slot so the element can be garbage collected
    percolateDown();
    return elemToReturn;
  }

  private void percolateDown() {
    int iP = 0; // index of parent, starting at the top

    while (indexChildL(iP) <= numberElem - 1) { // while there is at least a left child
      var iL = indexChildL(iP);
      var iR = indexChildR(iP);
      int iC; // index of the lowest child
      if (iR <= numberElem - 1 && lower(iR, iL)) iC = iR;
      else iC = iL;

      if (!lower(iC, iP)) break; // stop once neither child is lower than the parent
      swap(iC, iP);
      iP = iC;
    }
  }
}
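
Putting the pieces together, usage looks like this (assuming the snippets above are assembled into one class):

import java.util.Comparator;

public final class BinaryHeapDemo {
  public static void main(String[] args) {
    BinaryHeap<Integer> heap = new BinaryHeap<>(Comparator.naturalOrder());
    heap.insert(5);
    heap.insert(1);
    heap.insert(3);
    System.out.println(heap.deleteMin()); // 1
    System.out.println(heap.deleteMin()); // 3
    System.out.println(heap.deleteMin()); // 5
  }
}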

The benchmark results further down – which you can compare with the array list sorting times above – show that the time it takes to deleteMin grows much more slowly (logarithmically) than the number of elements.
We also show the results of Java's PriorityQueue, as that was the data structure used in the original article about schedulers, and because that structure is also implemented with a binary heap:

/**
 * Priority queue represented as a balanced binary heap: the two
 * children of queue[n] are queue[2*n+1] and queue[2*(n+1)].  The
 * priority queue is ordered by comparator, or by the elements'
 * natural ordering, if comparator is null: For each node n in the
 * heap and each descendant d of n, n <= d.  The element with the
 * lowest value is in queue[0], assuming the queue is nonempty.
 */
transient Object[] queue;

n            java.util.PriorityQueue   our BinaryHeap
1,000        3.7                       3.7
10,000       3.3                       3.1
100,000      3.2                       3.3
1,000,000    12.1                      8.6
10,000,000   15.8                      16.2
DeleteMin operation versus number of elements. Units in microseconds.

Appendix

Let’s temporarily introduce a coordinate system, whereby the position of an element on the binary heap is determined by:

  1. The height h of its level (starting at 0)
  2. Its index/position p within that level (starting at 1)

So, from above, the last element – 76 – has coordinates (4, 3).

Index

Given an element with coordinates (h, p), its index in the array is the sum of 1) the number of elements on all the levels above h, and 2) p, minus 3) the value 1, since the array is 0-indexed.
Since we established the number of elements at height h is 2^h, the number of all elements above height h is:

  2^0 + 2^1 + … + 2^(h−1)

From mathematics, the following equality on finite geometric series can be proven:

  2^0 + 2^1 + … + 2^(h−1) = 2^h − 1

which leads to:

  index(h, p) = (2^h − 1) + p − 1 = 2^h + p − 2

For example, element 75, with coordinates (3, 5), has index 2^3 + 5 − 2 = 11.


Children coordinates

For the element i at position p within its level, we can say that p − 1 is the number of elements that come before it on that level. Because every such element has two children, and because the children of element i must come after those children, it follows that the left child of i has position 2·(p − 1) + 1 = 2p − 1 within its level. The right child is that plus 1, i.e. 2p.

For example, the element with value 7 is at position 2 within its level. Therefore, its left and right children are at positions 2·(2 − 1) + 1 = 3 and 2·(2 − 1) + 2 = 4 within their level.

At the same time, it follows from the definition that the height of the children of an element is one greater than the height of that element: h_child = h + 1.

Linking this with the earlier expression for the index of the array, we have:

  indexChildLeft = 2^(h+1) + (2p − 1) − 2 = 2^(h+1) + 2p − 3

Which, substituting i = 2^h + p − 2, can be simplified to:

  indexChildLeft(i) = 2·i + 1
  indexChildRight(i) = 2·i + 2

Parent coordinates

The parent will have height h_parent = h − 1. The coordinate p_parent is harder to derive, but the reasoning is very similar to the child case above.

If the current element i is at position p, then it is the p-th element on that level. It, and the p − 1 elements before it, are each one of the two children of the elements on the level above, and the parent of i must be the last of all these parents. This should intuitively convince you that p_parent = p/2.
More precisely, it is p_parent = Math.ceil(p / 2.0), to account for when i is a left child and p/2 would not be a whole number.

Linking this with the earlier expression for the index of the array, we have:

  indexParent = 2^(h−1) + ⌈ p / 2 ⌉ − 2

where the odd looking, square-bracket-like symbol ⌈ ⌉ is the mathematical notation for the ceiling function Math.ceil(<some-double>).

This can be simplified, by substituting p = i − 2^h + 2, into:

  indexParent(i) = ⌈ i / 2 ⌉ − 1
For example, the element with value 75 has coordinates (3, 5) and index 11. Its parent is the node with the value 8, which is at index Math.ceil(11 / 2.0) − 1 = 5.
