Binomial, Fibonacci, and Pairing Heaps: Fibonacci Heaps
Fibonacci Heaps
Fibonacci heaps were specifically designed to efficiently support decrease-key operations. For this purpose, the binomial heap can be regarded as a natural starting point. Why? Consider the class of priority queue data structures that are implemented as forests of heap-ordered trees, as will be the case for Fibonacci heaps. One way to immediately execute a decrease-key operation, remaining within the framework of heap-ordered forests, is to simply change the key of the specified data item and sever its link to its parent, inserting the severed subtree as a new tree in the forest. Figure 7.8 illustrates the process. (Observe that the link to the parent only needs to be cut if the new key value is smaller than the key in the parent node, violating heap order.) Fibonacci heaps accomplish this without degrading the asymptotic efficiency with which other priority queue operations can be supported. Observe that to accommodate node cuts, the list of children of a node needs to be doubly linked. Hence the nodes of a Fibonacci heap require two sibling pointers.

FIGURE 7.7: Structure associated with a binomial heap node. Figure (b) illustrates the positioning of pointers among a node and its three children.
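As a purely illustrative sketch (not taken from the text; all field names are invented for this example), the per-node structure just described might be declared as follows:

    class FibNode:
        """One node of a Fibonacci heap (sketch; field names are invented)."""
        def __init__(self, key):
            self.key = key
            self.parent = None    # pointer to the parent node
            self.child = None     # pointer to one child (head of the child list)
            self.left = None      # left sibling in the doubly linked child list
            self.right = None     # right sibling in the doubly linked child list
            self.rank = 0         # number of children
            self.marked = False   # set once the node has had a child cut away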
Fibonacci heaps support findmin, insertion, meld, and decrease-key operations in constant amortized time, and deletion operations in O(log n) amortized time. For many applications, the distinction between worst-case and amortized times is of little significance. A Fibonacci heap consists of a forest of heap-ordered trees. As we shall see, Fibonacci heaps differ from binomial heaps in that there may be many trees in a forest of the same rank, and there is no constraint on the ordering of the trees in the forest list. The heap also includes a pointer to the tree root containing the minimum item, referred to as the min-pointer, which facilitates findmin operations. Figure 7.9 provides an example of a Fibonacci heap and illustrates certain structural aspects.
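To make this structure concrete, here is a minimal sketch (again with invented class and method names, and building on the FibNode fields above) of the forest-plus-min-pointer representation. The root list is kept as a plain Python list for brevity; a real implementation would keep the roots in a linked list.

    class FibHeap:
        def __init__(self):
            self.roots = []       # the forest: tree roots, in no particular order
            self.min_root = None  # the min-pointer
            self.size = 0         # number of items in the heap

        def findmin(self):
            return None if self.min_root is None else self.min_root.key

        def insert(self, key):
            node = FibNode(key)              # a new one-node tree ...
            self.roots.append(node)          # ... lazily added to the forest
            if self.min_root is None or key < self.min_root.key:
                self.min_root = node         # update the min-pointer
            self.size += 1
            return node

With this representation, findmin and insertion clearly take constant time, and the handle returned by insert is what a later decrease-key or deletion of that item would be given.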
The impact of severing subtrees is clearly incompatible with the pristine structure of the binomial tree that is the hallmark of the binomial heap. Nevertheless, the tree structures that can appear in the Fibonacci heap data structure must sufficiently approximate binomial trees in order to satisfy the performance bounds we seek. The linking constraint imposed by binomial heaps, that trees being linked must have the same size, ensures that the number of children a node has (its rank) grows no faster than the logarithm of the size of the subtree rooted at the node. This rank versus subtree size relation is key to obtaining the O(log n) deletion time bound. Fibonacci heap manipulations are designed with this in mind.
FIGURE 7.8: Immediate decrease-key operation. The subtree severing (Figures (b) and (c)) is necessary only when k < j.

Fibonacci heaps utilize a protocol referred to as cascading cuts to enforce the required rank versus subtree size relation. Once a node ν has had two of its children removed as a result of cuts, ν’s contribution to the rank of its parent is considered suspect in terms of the rank versus subtree size relation. The cascading cut protocol requires that the link to ν’s parent be cut, with the subtree rooted at ν then being inserted into the forest as a new tree. If ν’s parent has, as a result, had a second child removed, then it in turn needs to be cut, and the cuts may thus cascade. Cascading cuts ensure that no non-root node has had more than one child removed subsequent to being linked to its parent.
FIGURE 7.9: A Fibonacci heap and associated structure.

We keep track of the removal of children by marking a node if one of its children has been cut. A marked node that has another child removed is then subject to being cut from its parent. When a marked node becomes linked as a child to another node, or when it gets cut from its parent, it gets unmarked. Figure 7.10 illustrates the protocol of cascading cuts. The node cuts induced under the cascading cuts protocol, in contrast with the primary cuts immediately triggered by decrease-key operations, are bounded in number by the number of primary cuts. (This follows from consideration of a potential function defined to be the total number of marked nodes.) Therefore, the burden imposed by cascading cuts can be viewed as effectively only doubling the number of cuts taking place in the absence of the protocol. One can therefore expect that the performance asymptotics are not degraded as a consequence of proliferating cuts. As with binomial heaps, two trees in a Fibonacci heap can only be linked if they have equal rank. With the cascading cuts protocol in place, we claim that the required rank versus subtree size relation holds, a matter which we address next.
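Before turning to that analysis, here is a sketch of decrease-key with cascading cuts, continuing the invented FibNode/FibHeap representation from the earlier sketches (the helper names below are likewise invented):

    def _remove_from_child_list(parent, node):
        # Unlink node from parent's doubly linked list of children.
        if parent.child is node:
            parent.child = node.right
        if node.left is not None:
            node.left.right = node.right
        if node.right is not None:
            node.right.left = node.left
        node.left = node.right = None

    def _cut(heap, node):
        # Sever node from its parent and insert its subtree into the forest.
        parent = node.parent
        _remove_from_child_list(parent, node)
        parent.rank -= 1
        node.parent = None
        node.marked = False                 # tree roots are never marked
        heap.roots.append(node)

    def _cascading_cut(heap, node):
        parent = node.parent
        if parent is None:                  # node is a root: nothing to do
            return
        if not node.marked:
            node.marked = True              # first child lost since linking: mark
        else:
            _cut(heap, node)                # second child lost: cut it as well ...
            _cascading_cut(heap, parent)    # ... and the cuts may cascade upward

    def decrease_key(heap, node, new_key):
        assert new_key <= node.key
        node.key = new_key
        parent = node.parent
        if parent is not None and node.key < parent.key:   # heap order violated
            _cut(heap, node)
            _cascading_cut(heap, parent)
        if node.key < heap.min_root.key:
            heap.min_root = node            # keep the min-pointer current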
FIGURE 7.10: Illustration of cascading cuts. In (b) the dashed lines reflect cuts that have taken place, two nodes marked in (a) get unmarked, and a third node gets marked.

Let’s consider how small the subtree rooted at a node ν having rank k can be. Let ω be the mth child of ν from the right. At the time it was linked to ν, ν had at least m − 1 other children (those currently to the right of ω were certainly present). Therefore ω had rank at least m − 1 when it was linked to ν. Under the cascading cuts protocol, the rank of ω could have decreased by at most one after its linking to ν; otherwise it would have been removed as a child. Therefore, the current rank of ω is at least m − 2. We minimize the size of the subtree rooted at ν by minimizing the sizes (and ranks) of the subtrees rooted at ν’s children. Now let Fj denote the minimum possible size of the subtree rooted at a node of rank j, so that the size of the subtree rooted at ν is Fk. We conclude that (for k ≥ 2)

Fk = Fk−2 + Fk−3 + · · · + F0 + 1 + 1,

where the final term, 1, reflects the contribution of ν to the subtree size, and the preceding term, 1, the contribution of ν’s rightmost child. Clearly, F0 = 1 and F1 = 2. See Figure 7.11 for an illustration of this construction. Based on the preceding recurrence, it is readily shown that Fk is given by the (k + 2)th Fibonacci number (whence the name “Fibonacci heap”). Moreover, since the Fibonacci numbers grow exponentially fast, we conclude that the rank of a node is indeed bounded by the logarithm of the size of the subtree rooted at the node.
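As a quick sanity check (not part of the text), the recurrence can be evaluated directly and compared with the Fibonacci sequence 1, 1, 2, 3, 5, 8, ...:

    def min_subtree_sizes(n):
        # F[k] = minimum size of a subtree rooted at a rank-k node:
        # F[0] = 1, F[1] = 2, and F[k] = F[k-2] + ... + F[0] + 1 + 1 for k >= 2.
        F = [1, 2]
        for k in range(2, n):
            F.append(sum(F[:k - 1]) + 2)
        return F

    print(min_subtree_sizes(8))   # [1, 2, 3, 5, 8, 13, 21, 34]

Each value Fk is indeed the (k + 2)th Fibonacci number, and in particular grows exponentially in k.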
We proceed next to describe how the various operations are performed.
Since we are not seeking worst-case bounds, there are economies to be exploited that could also be applied to obtain a variant of binomial heaps. (In the absence of cuts, the individual trees generated by Fibonacci heap manipulations would all be binomial trees.) In particular, we shall adopt a lazy approach to melding operations: the respective forests are simply combined by concatenating their tree lists and retaining the appropriate min-pointer. This requires only constant time.
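A sketch of this lazy meld, continuing the invented FibHeap representation above (with a linked root list, the concatenation below would be a constant-time pointer splice rather than a Python list extension):

    def meld(h1, h2):
        """Destructively meld heap h2 into heap h1 and return h1 (a sketch)."""
        h1.roots.extend(h2.roots)             # concatenate the two forest lists
        if h2.min_root is not None and (
                h1.min_root is None or h2.min_root.key < h1.min_root.key):
            h1.min_root = h2.min_root         # retain the appropriate min-pointer
        h1.size += h2.size
        return h1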
In contrast with binomial heaps, an item is deleted from a Fibonacci heap by deleting the node that originally contains it. This is accomplished by (a) cutting the link to the node’s parent (as in decrease-key) if the node is not a tree root, and (b) appending the list of children of the node to the forest. Now if the deleted node happens to be referenced by the min-pointer, considerable work is required to restore the min-pointer – the work previously deferred by the lazy approach to the operations. In the course of searching among the roots
of the forest to discover the new minimum key, we also link trees together in a consolidation process.
Consolidation processes the trees in the forest, linking them in pairs until there are no longer two trees having the same rank, and then places the remaining trees in a new forest list (naturally extending the melding process employed by binomial heaps). This can be accomplished in time proportional to the number of trees in the forest plus the maximum possible node rank. Let max-rank denote the maximum possible node rank. (The preceding discussion implies that max-rank = O(log heap-size).) Consolidation is initialized by setting up an array A of trees (initially empty) indexed by the range [0,max-rank]. A non-empty position A[d] of A contains a tree of rank d. The trees of the forest are then processed using the array A as follows. To process a tree T of rank d, we insert T into A[d] if this position of A is empty, completing the processing of T. However, if A[d] already contains a tree U, then T and U are linked together to form a tree W, and the processing continues as before, but with W in place of T, until eventually an empty location of A is accessed, completing the processing associated with T. After all of the trees have been processed in this manner, the array A is scanned, placing each of its stored trees in a new forest. Apart from the final scanning step, the total time necessary to consolidate a forest is proportional to its number of trees, since the total number of tree pairings that can take place is bounded by this number (each pairing reduces by one the total number of trees present). The time required for the final scanning step is proportional to max-rank = O(log heap-size).
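The following sketch, continuing the invented representation used above, shows the consolidation pass together with deletion of the minimum node (the interesting case of deletion). The array A is indexed by rank; max-rank is replaced here by a simple safe upper bound.

    def _link(smaller, larger):
        # Link two equal-rank roots: the root with the larger key becomes a child.
        larger.parent = smaller
        larger.marked = False
        larger.left = None
        larger.right = smaller.child
        if smaller.child is not None:
            smaller.child.left = larger
        smaller.child = larger
        smaller.rank += 1
        return smaller

    def _consolidate(heap):
        max_rank = 2 * heap.size.bit_length() + 2      # safe upper bound on ranks
        A = [None] * (max_rank + 1)                    # A[d] holds a tree of rank d
        for tree in heap.roots:                        # process every tree
            while A[tree.rank] is not None:
                other = A[tree.rank]                   # two trees of equal rank:
                A[tree.rank] = None
                if other.key < tree.key:
                    tree, other = other, tree
                tree = _link(tree, other)              # link them and keep going
            A[tree.rank] = tree
        heap.roots = [t for t in A if t is not None]   # final scan of A
        heap.min_root = min(heap.roots, key=lambda t: t.key, default=None)

    def delete_min(heap):
        assert heap.min_root is not None               # heap must be non-empty
        node = heap.min_root                           # node referenced by min-pointer
        child = node.child
        while child is not None:                       # append children to the forest
            nxt = child.right
            child.parent = None
            child.left = child.right = None
            child.marked = False
            heap.roots.append(child)
            child = nxt
        heap.roots.remove(node)                        # O(1) with a linked root list
        heap.size -= 1
        _consolidate(heap)                             # restores the min-pointer
        return node.key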
The amortized timing analysis of Fibonacci heaps considers a potential function defined as the total number of trees in the forests of the various heaps being maintained. Ignoring consolidation, each operation takes constant actual time, apart from an amount of time proportional to the number of subtree cuts due to cascading (which, as noted above, is only constant in amortized terms). These cuts also contribute to the potential. The children of a deleted node increase the potential by O(log heap-size). Deletion of a minimum heap node additionally incurs the cost of consolidation. However, consolidation reduces our potential, so that the amortized time it requires is only O(log heap-size). We conclude therefore that all non-deletion operations require constant amortized time, and deletion requires O(log n) amortized time.
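Spelling out the consolidation accounting under the potential just described (the total number of trees), with t trees in the forest before consolidation and c the implicit constant in the actual work:

    amortized cost = actual cost + c · (change in potential)
                   ≤ c · (t + max-rank) + c · (max-rank − t)
                   = 2c · max-rank
                   = O(log heap-size),

since at most one tree of each rank remains after consolidation; the insertions and cuts that created those t trees have already paid one unit of potential apiece.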
An interesting and unresolved issue concerns the protocol of cascading cuts. How would the performance of Fibonacci heaps be affected by the absence of this protocol?
Detailed code for manipulating Fibonacci heaps can be found in Knuth [9].