Concurrent Data Structures: Priority Queues

Priority Queues

A concurrent priority queue is a data structure linearizable to a sequential priority queue that provides insert and delete-min operations with the usual priority queue semantics. (See Part II of this handbook for more on sequential priority queues.)

Heap-Based Priority Queues

Many of the concurrent priority queue constructions in the literature are linearizable versions of the heap structures described earlier in this book. Again, the basic idea is to use fine-grained locking of the individual heap nodes to allow threads accessing different parts of the data structure to do so in parallel where possible. A key issue in designing such concurrent heaps is that traditionally insert operations proceed from the bottom up and delete-min operations from the top down, which creates potential for deadlock. Biswas and Brown [17] present such a lock-based heap algorithm assuming specialized “cleanup” threads to overcome deadlocks. Rao and Kumar [115] suggest overcoming the drawbacks of [17] with an algorithm in which both insert and delete-min operations proceed from the top down. Ayani [11] improves on their algorithm by suggesting a way to have consecutive insertions be performed on opposite sides of the heap. Jones [67] suggests a scheme similar to [115] based on a skew heap.
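The fine-grained-locking idea can be illustrated with a minimal sketch: an array-based binary min-heap with one lock per slot, where delete-min sifts down holding locks hand-over-hand so that threads working in disjoint subtrees could proceed in parallel. This is an illustration only, not any of the published algorithms; all class and method names are ours, the lock-ordering discipline is simplified, and the correctness argument for a truly concurrent version is far more delicate than this sketch suggests.

```python
import threading

class FineGrainedHeap:
    """Sketch: array-based binary min-heap with one lock per slot.

    Locks are always acquired parent-before-child, hinting at how a
    consistent ordering avoids the insert/delete-min deadlock discussed
    above. A real algorithm needs much more machinery."""

    def __init__(self, capacity=64):
        self.items = [None] * capacity
        self.locks = [threading.Lock() for _ in range(capacity)]
        self.size = 0
        self.size_lock = threading.Lock()  # briefly guards the heap size

    def insert(self, key):
        with self.size_lock:               # claim a slot, then release
            i = self.size
            self.size += 1
        self.items[i] = key
        while i > 0:                       # sift up, parent lock first
            parent = (i - 1) // 2
            with self.locks[parent], self.locks[i]:
                if self.items[parent] > self.items[i]:
                    self.items[parent], self.items[i] = \
                        self.items[i], self.items[parent]
                else:
                    return
            i = parent

    def delete_min(self):
        with self.size_lock:
            if self.size == 0:
                raise IndexError("empty")
            self.size -= 1
            last = self.size
        top = self.items[0]
        self.items[0] = self.items[last]   # move last element to the root
        self.items[last] = None
        i = 0
        while True:                        # sift down, hand-over-hand
            smallest = i
            with self.locks[i]:
                for c in (2 * i + 1, 2 * i + 2):
                    if (c < len(self.items) and self.items[c] is not None
                            and self.items[c] < self.items[smallest]):
                        smallest = c
                if smallest == i:
                    return top
                self.items[i], self.items[smallest] = \
                    self.items[smallest], self.items[i]
            i = smallest
```

Run single-threaded, the sketch behaves like an ordinary binary heap; the per-slot locks only matter once multiple threads operate on disjoint subtrees.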

Hunt et al. [60] present a heap-based algorithm that overcomes many of the limitations of the above schemes, especially the need to acquire multiple locks along the traversal path in the heap. It proceeds by briefly locking a variable holding the size of the heap together with either the first or last element of the heap. In order to increase parallelism, insertions traverse the heap bottom-up while deletions proceed top-down, without introducing deadlocks. Insertions also employ a left-right technique as in [11] to allow them to access opposite sides of the heap and thus minimize interference.
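One way to realize the left-right technique is a bit-reversed counter: consecutive insertions are directed to bit-reversed positions within the bottom level of the heap, so their root-to-leaf paths diverge near the root and rarely contend. The helper below is a simplified illustration of that idea (function names are ours), not the full bookkeeping of [60].

```python
def bit_reverse(x, bits):
    """Reverse the low `bits` bits of x."""
    r = 0
    for _ in range(bits):
        r = (r << 1) | (x & 1)
        x >>= 1
    return r

def insertion_slot(n, level_bits):
    """Slot within a heap level of 2**level_bits nodes for the n-th
    insertion. Bit-reversal sends consecutive insertions to opposite
    halves of the heap, so their paths to the root diverge early."""
    return bit_reverse(n, level_bits)
```

For a level of 8 slots, the first eight insertions land at positions 0, 4, 2, 6, 1, 5, 3, 7: each consecutive pair falls in opposite halves of the heap.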

On a different note, Huang and Weihl [59] show a concurrent priority queue based on a concurrent version of Fibonacci Heaps [34].

Nonblocking linearizable heap-based priority queue algorithms have been proposed by Herlihy [50], Barnes [12], and Israeli and Rappoport [64]. Sundell and Tsigas [132] present a lock-free priority queue based on a lock-free version of Pugh’s concurrent skiplist [111].
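The core trick in such nonblocking designs is *logical deletion*: delete-min claims the first node by atomically marking it deleted (a compare-and-swap), and only then unlinks it. The sketch below illustrates that idea on the bottom level of a skiplist, reduced to a sorted linked list. Python has no hardware CAS, so a tiny per-node lock stands in for it; this is an illustration of logical deletion, not the Sundell–Tsigas algorithm, and all names are ours.

```python
import threading

class Node:
    def __init__(self, key):
        self.key = key
        self.next = None
        self.deleted = False              # set once to claim the node
        self._lock = threading.Lock()     # stand-in for hardware CAS

    def try_mark_deleted(self):
        """Sketch of CAS(deleted, False, True): at most one caller wins."""
        with self._lock:
            if self.deleted:
                return False
            self.deleted = True
            return True

class SketchPriorityQueue:
    """Sorted-list sketch of lock-free delete-min via logical deletion."""

    def __init__(self):
        self.head = Node(float("-inf"))   # sentinel

    def insert(self, key):
        # Simplified sorted insertion; the real algorithm CASes the
        # next-pointers at every skiplist level.
        prev = self.head
        while prev.next is not None and prev.next.key < key:
            prev = prev.next
        node = Node(key)
        node.next = prev.next
        prev.next = node

    def delete_min(self):
        # Scan from the head; the first node we manage to mark is ours.
        # Already-marked nodes are skipped and unlinked lazily.
        curr = self.head.next
        while curr is not None:
            if curr.try_mark_deleted():
                self.head.next = curr.next   # physical unlink (simplified)
                return curr.key
            curr = curr.next
        return None
```

The separation of marking (the linearization point) from physical unlinking is what lets other threads make progress past a half-removed node.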

Tree-Based Priority Pools

Huang and Weihl [59] and Johnson [65] describe concurrent priority pools: priority queues with relaxed semantics that do not guarantee linearizability of the delete-min operations. Their designs are both based on a modified concurrent B+-tree implementation. Johnson introduces a “delete bin” that accumulates values to be deleted and thus reduces the load when performing concurrent delete-min operations. Shavit and Zemach [129] show a similar pool based on Pugh’s concurrent skiplist [111] with an added “delete bin” mechanism based on [65]. Typically, the weaker pool semantics allows for increased concurrency. In [129] they further show that if the size of the set of allowable keys is bounded (as is often the case in operating systems) a priority pool based on a binary tree of combining funnel nodes can scale to hundreds (as opposed to tens) of processors.
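The delete-bin idea can be sketched as follows: delete-min refills a small bin with the k smallest keys in one short critical section on the main structure, and subsequent calls drain the bin without touching it. A key inserted after a refill may be smaller than keys already sitting in the bin, so a delete-min may return a non-minimum: exactly the relaxed pool semantics traded for concurrency. The sketch below uses an ordinary binary heap in place of the concurrent B+-tree of [65], and all names are illustrative.

```python
import heapq
import threading

class PriorityPool:
    """Sketch of a priority pool with a delete bin in the style of [65]."""

    def __init__(self, bin_capacity=4):
        self.heap = []                 # stand-in for the concurrent B+-tree
        self.bin = []                  # pending small keys, served in order
        self.bin_capacity = bin_capacity
        self.lock = threading.Lock()

    def insert(self, key):
        with self.lock:
            heapq.heappush(self.heap, key)

    def delete_min(self):
        with self.lock:
            if not self.bin:
                # One bulk extraction instead of many contended ones.
                for _ in range(min(self.bin_capacity, len(self.heap))):
                    self.bin.append(heapq.heappop(self.heap))
            return self.bin.pop(0) if self.bin else None
```

Run single-threaded, every key is returned exactly once; under concurrency the bin reduces contention on the main structure at the cost of delete-min linearizability.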
