This section describes performance tests and their results. In the following, g++, msvc++, and local (the build used for generating this documentation) stand for three different builds:
The following table shows the complexities of the different underlying data structures in terms of orders of growth. It is interesting to note that this table implies something about the constants of the operations as well (see Amortized push and pop operations).
| | push | pop | modify | erase | join |
| std::priority_queue | Θ(n) worst, Θ(log(n)) amortized | Θ(log(n)) worst | Θ(n log(n)) worst [std note 1] | Θ(n log(n)) [std note 2] | Θ(n log(n)) [std note 1] |
| priority_queue with Tag = pairing_heap_tag | O(1) | Θ(n) worst, Θ(log(n)) amortized | Θ(n) worst, Θ(log(n)) amortized | Θ(n) worst, Θ(log(n)) amortized | O(1) |
| priority_queue with Tag = binary_heap_tag | Θ(n) worst, Θ(log(n)) amortized | Θ(n) worst, Θ(log(n)) amortized | Θ(n) | Θ(n) | Θ(n) |
| priority_queue with Tag = binomial_heap_tag | Θ(log(n)) worst, O(1) amortized | Θ(log(n)) | Θ(log(n)) | Θ(log(n)) | Θ(log(n)) |
| priority_queue with Tag = rc_binomial_heap_tag | O(1) | Θ(log(n)) | Θ(log(n)) | Θ(log(n)) | Θ(log(n)) |
| priority_queue with Tag = thin_heap_tag | O(1) | Θ(n) worst, Θ(log(n)) amortized | Θ(log(n)) worst, O(1) amortized, or Θ(log(n)) amortized [thin_heap_note] | Θ(n) worst, Θ(log(n)) amortized | Θ(n) |
[std note 1] This is not a property of the algorithm, but rather due to the fact that the STL's priority queue implementation does not support iterators (and consequently the ability to access a specific value inside it). If the priority queue is adapting an std::vector, then it is still possible to reduce this to Θ(n) by adapting over the STL's adapter and using the fact that top returns a reference to the first value; if, however, it is adapting an std::deque, then this is impossible.
[std note 2] As with [std note 1], this is not a property of the algorithm, but rather of the STL's implementation. Again, if the priority queue is adapting an std::vector, then it is possible to reduce this to Θ(n), but with a very high constant (one must call std::make_heap, which is an expensive linear operation); if the priority queue is adapting an std::deque, then this is impossible.
[thin_heap_note] A thin heap always has Θ(log(n)) worst-case modify time, but the amortized time depends on the nature of the operation: I) if the operation increases the key (in the sense of the priority queue's comparison functor), then the amortized time is O(1), but II) if it decreases it, then the amortized time is the same as the worst-case time. Note that for most algorithms, I) is important and II) is not.
In many cases, a priority queue is needed primarily for sequences of push and pop operations. All of the underlying data structures have the same amortized logarithmic complexity, but they differ in terms of constants.
The table above shows that the different data structures are "constrained" in some respects. In general, if a data structure has better worst-case complexity than another for some operation, it pays for this with larger constants in its amortized performance. Thus, for example, a redundant-counter binomial heap (priority_queue with Tag = rc_binomial_heap_tag) has better worst-case push complexity than a binomial heap (priority_queue with Tag = binomial_heap_tag), and so its amortized push performance is slower in terms of constants.
As the table shows, the "least constrained" underlying data structures are binary heaps and pairing heaps. Consequently, it is not surprising that they perform best in terms of amortized constants.
In some graph algorithms, a decrease-key operation is required [clrs2001]; this operation is identical to modify if a value is increased (in the sense of the priority queue's comparison functor). The table above and Priority Queue Text modify Timing Test - I show that a thin heap (priority_queue with Tag = thin_heap_tag) outperforms a pairing heap (priority_queue with Tag = pairing_heap_tag), but the rest of the tests show otherwise.
This makes it difficult to decide which implementation to use in this case. Dijkstra's shortest-path algorithm, for example, requires Θ(n) push and pop operations (in the number of vertices), but up to O(n^2) modify operations, of which in practice only Θ(n) may actually occur. It is difficult to find an a priori characterization of graphs in which the actual number of modify operations will dwarf the number of push and pop operations.