Functional Data Structures: Skew Heaps, Amortization and Lazy Evaluation

Skew Heaps: Amortization and Lazy Evaluation

Next, we turn to priority queues, or heaps, supporting the following primitives (one possible set of Haskell signatures is sketched just after the list):

• empty: a constant representing the empty heap.

• insert(x,h): insert the element x into the heap h and return the new heap.

• findMin(h): return the minimum element of h.

• deleteMin(h): delete the minimum element of h and return the new heap.

• merge(h1,h2): combine the heaps h1 and h2 into a single heap and return the new heap.
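
In Haskell, these primitives might be given signatures along the following lines (a sketch only; the SkewHeap type and these exact names are assumptions, taken up again in the implementation sketch later in this section):

```haskell
-- Possible signatures for the heap primitives (illustrative sketch;
-- the SkewHeap type itself is defined later in this section).
empty     :: SkewHeap a
insert    :: Ord a => a -> SkewHeap a -> SkewHeap a
findMin   :: SkewHeap a -> a                          -- heap must be non-empty
deleteMin :: Ord a => SkewHeap a -> SkewHeap a        -- heap must be non-empty
merge     :: Ord a => SkewHeap a -> SkewHeap a -> SkewHeap a
```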


Many of the standard heap data structures can easily be adapted to a functional setting, including binomial queues [7, 15] and leftist heaps [18, 24]. In this section, we describe a simple, yet interesting, design known as skew heaps [32]. (Non-persistent skew heaps are described in detail in Chapter 6.)

A skew heap is a heap-ordered binary tree. Each node contains a single element, and the nodes are ordered such that the element at each node is no larger than the elements at the node’s children. Because of this ordering, the minimum element in a tree is always at the root. Therefore, the findMin operation simply returns the element at the root. The insert and deleteMin operations are defined in terms of merge: insert creates a new node and merges it with the existing heap, and deleteMin discards the root and merges its children.

The interesting operation is merge. Assuming both heaps are non-empty, merge compares their roots. The smaller root (that is, the root with the smaller element) becomes the new overall root and its children are swapped. Then the larger root is merged with the new left child of the smaller root (which used to be the right child). The net effect of a merge is to interleave the rightmost paths of the two trees in sorted order, swapping the children of nodes along the way.

This process is illustrated in Figure 40.8.

Notice how the nodes on the rightmost paths of the arguments end up on the leftmost path of the result. A Haskell implementation of skew heaps incorporating path copying is shown in Figure 40.9. A naive Java implementation is shown in Figure 40.10.
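
Since Figure 40.9 is not reproduced here, the following is a minimal sketch of what such a path-copying Haskell implementation might look like (the names and layout are assumptions, not necessarily those of the original figure):

```haskell
-- A heap-ordered binary tree: each node's element is no larger than
-- the elements of its children.
data SkewHeap a = Empty | Node a (SkewHeap a) (SkewHeap a)

empty :: SkewHeap a
empty = Empty

-- The smaller root becomes the new root with its children swapped;
-- the other heap is merged with what used to be the right child.
merge :: Ord a => SkewHeap a -> SkewHeap a -> SkewHeap a
merge Empty h = h
merge h Empty = h
merge h1@(Node x l1 r1) h2@(Node y l2 r2)
  | x <= y    = Node x (merge h2 r1) l1
  | otherwise = Node y (merge h1 r2) l2

-- insert and deleteMin are defined in terms of merge.
insert :: Ord a => a -> SkewHeap a -> SkewHeap a
insert x = merge (Node x Empty Empty)

findMin :: SkewHeap a -> a
findMin (Node x _ _) = x
findMin Empty        = error "findMin: empty heap"

deleteMin :: Ord a => SkewHeap a -> SkewHeap a
deleteMin (Node _ l r) = merge l r
deleteMin Empty        = error "deleteMin: empty heap"
```

Because the language is purely functional, path copying comes for free: merge allocates new nodes along the merged path, and the argument heaps remain intact.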

Skew heaps are not balanced, and individual operations can take linear time in the worst case. For example, Figure 40.11 shows an unbalanced skew heap generated by inserting the elements

5, 6, 4, 6, 3, 6, 2, 6, 1, 6


into an initially empty heap. Inserting a new element such as 7 into this unbalanced skew heap would take linear time. However, in spite of the fact that any one operation can be inefficient, the way that children are regularly swapped keeps the operations efficient in the amortized sense: insert, deleteMin, and merge run in logarithmic amortized time [32].
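
Using the sketch above, the degenerate heap can be reproduced directly (illustrative; the helper names are assumptions):

```haskell
-- Build the unbalanced heap of Figure 40.11 by folding the inserts
-- over the empty heap.
unbalanced :: SkewHeap Int
unbalanced = foldl (flip insert) empty [5, 6, 4, 6, 3, 6, 2, 6, 1, 6]

-- Under strict evaluation, this insert would walk the entire left
-- spine of the unbalanced heap, taking linear time.
slow :: SkewHeap Int
slow = insert 7 unbalanced
```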

Or, at least, those would be the bounds for non-persistent skew heaps. When we analyze skew heaps in a persistent setting, we receive a nasty shock. Making an amortized data structure such as skew heaps persistent using path copying breaks its amortized bounds! In the case of skew heaps, naively incorporating path copying causes the logarithmic amortized bounds to degrade to the linear worst-case bounds.

Consider, for example, the unbalanced skew heap s in Figure 40.11, for which insert takes linear time for large elements. The result of the insert is a new skew heap s′. Performing another insert on s′ would actually be quite efficient, but because these structures are persistent, we are free to ignore s′ and perform the next insert on the old skew heap s instead. This insert again takes linear time. We can continue performing operations on the old skew heap as often as we want. The average cost per operation over a sequence of such operations is linear, which means that the amortized cost per operation is now linear, rather than logarithmic. Simple experiments on the Java implementation from Figure 40.10 confirm this analysis.
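
The adversarial access pattern is easy to write down; a hypothetical driver (this experiment function is an assumption, not from the original text) might look like this:

```haskell
-- Perform k independent inserts into the same old heap s, forcing
-- each result. Under strict path copying, each call re-walks s's
-- long left spine, so the total cost is k times linear.
experiment :: Int -> SkewHeap Int -> [Int]
experiment k s = [ findMin (insert 7 s) | _ <- [1 .. k] ]

-- e.g. print (sum (experiment 1000 unbalanced))
```

As the next paragraph explains, running this pattern in Haskell does not actually exhibit the linear behavior that the strict analysis predicts.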


However, if we repeat those experiments on the Haskell implementation from Figure 40.9, we do not observe linear behavior. Instead, the operations appear to retain their logarithmic amortized bounds, even under persistent usage. This pleasant result is a consequence of a fortuitous interaction between path copying and a property of the Haskell language called lazy evaluation. (Many other functional programming languages also support lazy evaluation.)

Under lazy evaluation, operations such as merge are not actually executed until their results are needed. Instead, a new kind of node that we might call a pending merge is automatically created. The pending merge lies dormant until some other operation, such as findMin, needs to know the result. Then, and only then, is the pending merge executed. The node representing the pending merge is overwritten with the result so that it cannot be executed twice.
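
In Haskell this deferral is automatic, and it can be observed by instrumenting merge with Debug.Trace (a sketch; mergeT and its companions are illustrative names, reusing the SkewHeap type from the earlier sketch):

```haskell
import Debug.Trace (trace)

-- merge instrumented to announce (on stderr) each pending merge as
-- it is actually executed.
mergeT :: Ord a => SkewHeap a -> SkewHeap a -> SkewHeap a
mergeT Empty h = h
mergeT h Empty = h
mergeT h1@(Node x l1 r1) h2@(Node y l2 r2) =
  trace "executing a pending merge" $
    if x <= y then Node x (mergeT h2 r1) l1
              else Node y (mergeT h1 r2) l2

insertT :: Ord a => a -> SkewHeap a -> SkewHeap a
insertT x = mergeT (Node x Empty Empty)

deleteMinT :: Ord a => SkewHeap a -> SkewHeap a
deleteMinT (Node _ l r) = mergeT l r
deleteMinT Empty        = error "deleteMinT: empty heap"

-- No trace output appears while the heap is built; the messages show
-- up only when findMin demands the result.
demoDefer :: IO ()
demoDefer = do
  let h = foldl (flip insertT) Empty [3, 1, 2]
  putStrLn "heap built; no merges have executed yet"
  print (findMin h)   -- the pending merges execute only now
```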

Although Java does not directly support lazy evaluation, it is easy to simulate, as shown in Figure 40.12. A minor difference between the lazy Java implementation and the Haskell implementation is that the Java implementation avoids creating pending merge nodes when one of the arguments is null.

A crucial aspect of lazy evaluation is that a pending computation, once triggered, is executed only far enough to produce the part of the result that is needed. The remaining parts of the computation may be delayed further by creating new pending nodes. In the case of the merge operation, this means that when a pending merge is executed, the two roots are compared and the children of the smaller root are swapped as normal, but the recursive merge of the larger root with the former right child of the smaller root is not performed. Instead, a new pending merge is created and installed as the left child of the new root node. This process is illustrated in Figure 40.13, with the pending merges drawn as diamonds.

Figure 40.14 illustrates the propagation of pending merges through a sequence of operations. First, the initial tree is built via a series of inserts. Then findMin executes those pending merges to find the value of the root. Next, deleteMin deletes the root and creates a new pending merge of the two children. Finally, findMin again executes the pending merges to find the new value of the root.
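
Since Figure 40.14 is not reproduced here, the same sequence of operations can be replayed with the instrumented definitions above (the element values are assumptions):

```haskell
replaySequence :: IO ()
replaySequence = do
  let h0 = foldl (flip insertT) Empty [4, 2, 3]
  putStrLn "after inserts: all merges still pending"
  print (findMin h0)        -- first findMin executes the pending merges
  let h1 = deleteMinT h0    -- creates a new pending merge of the two children
  putStrLn "after deleteMin: the new merge is still pending"
  print (findMin h1)        -- second findMin executes it
```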

Notice that pending nodes and lazy evaluation affect when the various steps of a merge are carried out, but that they do not affect the end results of those steps. After all the pending merges have been executed, the final tree is identical to the one produced by skew heaps without lazy evaluation.

Strictly speaking, the nodes of a lazy skew heap can no longer be called immutable. In particular, when a pending merge is executed, the node representing the pending merge is updated with the result so that it cannot be executed twice. Functional languages typically allow this kind of mutation, known as memoization, because it is invisible to the user, except in terms of efficiency. Suppose that memoization were not performed. Then pending merges might be executed multiple times. However, every time a given pending merge was executed, it would produce the same result. Therefore, memoization is an optimization that makes lazy evaluation run faster, but that has no effect on the output of any computation.
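
Memoization, too, can be observed with the instrumented merge above (a sketch; behavior as seen in GHCi):

```haskell
memoDemo :: IO ()
memoDemo = do
  let h = mergeT (Node 1 Empty Empty) (Node 2 Empty Empty)
  print (findMin h)   -- the trace message appears: the pending merge executes
  print (findMin h)   -- no trace message: the thunk was overwritten with its result
```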

40.4.1 Analysis of Lazy Skew Heaps

Next, we prove that merge on lazy skew heaps runs in logarithmic amortized time. We use the banker’s method, associating a certain number of credits with each pending merge.

We begin with a few definitions. The logical view of a tree is what the tree would look like if all of its pending merges were executed. The logical left spine of a tree is the leftmost path from the root in the logical view. Similarly, the logical right spine of a tree is the rightmost path from the root in the logical view. A pending node is called left-heavy if its left subtree is at least as large as its right subtree in the logical view, or right-heavy if its right subtree is larger than its left subtree in the logical view. The successor of a pending node is the new pending node that is created as the left child of the result when the existing node is executed.

Now, we charge one credit to execute a single pending merge. This credit pays for the comparison, the allocation of the successor node, and all necessary pointer manipulations, but it does not pay for executing the child nodes if they happen to be pending merges as well. When a right-heavy pending node is executed, it spends one of its own credits. When a left-heavy pending node is executed, the credit must be supplied either by its parent node, if it has one, or by the heap operation that originated the request, if the node is a root. This adds a single credit to the costs of the insert and deleteMin operations. After a pending node is executed, it passes any remaining credits to its successor node.

When we create a pending node, we must provide it with all the credits it will ever need. This includes

• one credit for itself, if it is right-heavy,

• one credit for each child that is a left-heavy pending node, and

• any credits needed by its successors.

Notice that a pending node and its successors all lie on the logical left spine of the resulting tree. Similarly, the physical children of a pending node and its successors all lie on the logical right spines of the argument trees to the original merge. Therefore, the number of credits that we must create during a merge is bounded by the number of right-heavy nodes in the logical left spine of the resulting tree plus the number of left-heavy nodes in the logical right spines of the argument trees.

It is easy to see that the number of right-heavy nodes in a logical left spine is at most logarithmic in the size of the logical view: the left subtree of a right-heavy node contains fewer than half of the nodes below it, so each right-heavy node passed while descending the left spine at least halves the size of the remaining subtree. Similarly, the number of left-heavy nodes in a logical right spine is at most logarithmic in the size of the logical view. The total number of credits created by merge is therefore bounded by the sum of three logarithmic values (one for the left spine of the result and one for the right spine of each argument), and thus is logarithmic itself.
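
In symbols (a sketch of the halving argument, writing $|t|$ for the number of nodes of $t$ in the logical view):

```latex
\[
  v \text{ right-heavy} \;\Longrightarrow\; |\mathit{left}(v)| < \tfrac{|v|}{2},
\]
so after passing $k$ right-heavy nodes while descending the logical left
spine of a tree $t$, the remaining subtree has size less than $|t|/2^{k}$,
and hence $k \le \log_2 |t|$. The credits created by $\mathit{merge}(t_1,t_2)$
are therefore at most
\[
  \log_2\bigl(|t_1|+|t_2|\bigr) + \log_2 |t_1| + \log_2 |t_2|
  \;=\; O\bigl(\log\bigl(|t_1|+|t_2|\bigr)\bigr).
\]
```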
