Cuttings: Applications
Cuttings have numerous uses in computational geometry. We mention just a handful: point location, Hopcroft’s problem, convex hulls, Voronoi diagrams, and range searching. In many cases, cuttings allow us to derandomize existing probabilistic solutions, i.e., to remove any need for random bits and thus produce deterministic algorithms. Many other applications are described in the survey [2].
How do we preprocess n hyperplanes in R^d so that, given a query point q, we can quickly find the face of the arrangement formed by the hyperplanes that contains the point? For an answer, simply set ε = 1/n in Theorem 25.1, and use the nesting structure of C1, C2, etc., to locate q in Ck. Note that this can be done in constant time once we know the location in Ck−1.
THEOREM 25.5 Point location among n hyperplanes can be done in O(log n) query time, using O(n^d) preprocessing.
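The descent through the hierarchy can be sketched as follows, assuming the cutting levels C1, C2, ... have already been built. Here each cell stores the cells of the next level that refine it; since each cell of Ck−1 is cut by only a constant number of hyperplanes, it has O(1) children, so the descent takes O(log n) steps. The `Cell` class, the triangle representation, and the two-dimensional setting are illustrative assumptions, not part of the construction above.

```python
class Cell:
    def __init__(self, triangle, children=None):
        self.triangle = triangle          # ((x1,y1), (x2,y2), (x3,y3))
        self.children = children or []    # refining cells of the next level

    def contains(self, q):
        # q lies in the closed triangle iff the three signed areas
        # relative to its edges do not disagree in sign.
        (ax, ay), (bx, by), (cx, cy) = self.triangle
        qx, qy = q
        s1 = (bx - ax) * (qy - ay) - (by - ay) * (qx - ax)
        s2 = (cx - bx) * (qy - by) - (cy - by) * (qx - bx)
        s3 = (ax - cx) * (qy - cy) - (ay - cy) * (qx - cx)
        return (s1 >= 0 and s2 >= 0 and s3 >= 0) or \
               (s1 <= 0 and s2 <= 0 and s3 <= 0)

def locate(top_cells, q):
    """Descend the cutting hierarchy; return the finest cell containing q."""
    cells, cell = top_cells, None
    while cells:
        cell = next(c for c in cells if c.contains(q))  # O(1) candidates
        cells = cell.children
    return cell
```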
Observe that if we only wish to determine whether the point q lies on one of the hyperplanes, it is possible to cut down the storage requirement a little. To do that, we use an ε-cutting for ε = (log n)/n. The cells associated with the bottom of the hierarchy are each cut by O(log n) hyperplanes, which we can therefore check one by one. This reduces the storage to O(n^d/(log n)^{d−1}).
Given n points and n lines in R^2, is there any incidence between points and lines? This is Hopcroft’s problem. It is self-dual; therefore dualizing it won’t help. A classical arrangement of n lines due to Erdős has the property that its n highest-degree vertices are each incident to Ω(n^{1/3}) edges. By picking these n lines as input to Hopcroft’s problem and positioning the n points in the near vicinity of these high-degree vertices, we get a sense (not a proof) that solving the problem should require checking each point against the Ω(n^{1/3}) lines incident to its nearby vertex. This leads to an Ω(n^{4/3}) running time, which under some realistic (though restrictive) conditions can be made into a rigorous lower bound [13]. At the very least, this line of reasoning suggests that beating Ω(n^{4/3}) is unlikely to be easy. This bound has almost been achieved by an algorithm of Matoušek [20] with, at its heart, a highly intricate and subtle use of cuttings.
THEOREM 25.6 Deciding whether n points and n lines in the plane are free of any incidence can be done in n^{4/3} 2^{O(log* n)} time.
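The cutting-based algorithm behind this theorem is far too intricate to sketch here; for contrast, the following is the naive O(n^2) incidence test that it beats. Lines are represented as triples (a, b, c) meaning ax + by + c = 0, with integer or rational coordinates so that the incidence predicate is exact.

```python
def has_incidence(points, lines):
    """Return True iff some point lies on some line (brute force, O(n^2))."""
    return any(a * x + b * y + c == 0
               for (x, y) in points
               for (a, b, c) in lines)
```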
Convex Hulls and Voronoi Diagrams
Cuttings play a key role in computing convex hulls in higher dimensions. Given n points in R^d, their convex hull is a bounded convex polytope with O(n^{⌊d/2⌋}) vertices. Of course, it may have far fewer of them: e.g., d + 1, if n − d − 1 points lie strictly inside the convex hull of the d + 1 others. It is notoriously difficult to design output-sensitive algorithms, the term designating algorithms whose running time is a function of both input and output sizes. In the “worst case” approach our goal is a simpler one: to design an optimal convex hull algorithm that runs in O(n log n + n^{⌊d/2⌋}) time. (The extra term n log n is unavoidable because sorting is easily embedded as a convex hull problem.)
Computing the convex hull of n points is equivalent by duality to computing the intersection of n halfspaces. A naive approach to this problem is to insert the halfspaces one after another while maintaining the intersection of previously inserted halfspaces incrementally. This can be done without difficulty if we maintain a canonical triangulation of the current intersection polyhedron and update a bipartite graph indicating which hyperplane intersects which cell of the triangulation. A surprising fact, first proven by Clarkson and Shor [11], is that if the halfspaces are inserted in random order, then the expected running time of the algorithm can be made optimal. By using an elaborate mix of ε-nets, ε-approximations, and ε-cuttings, Chazelle [5] showed how to compute the intersection deterministically in optimal time; his algorithm was subsequently simplified by Brönnimann, Chazelle, and Matoušek [3]; a complete description is also given in the book [6]. This implies the two theorems below.
THEOREM 25.7 The polyhedron formed by the intersection of n halfspaces in R^d can be computed in O(n log n + n^{⌊d/2⌋}) time.
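A minimal two-dimensional sketch of the incremental approach: maintain the intersection polygon and clip it by each halfplane in turn, inserting the halfplanes in random order as in Clarkson and Shor’s analysis. (The optimal algorithm maintains a canonical triangulation and a conflict graph; the plain polygon clipping shown here is a simplification for illustration.) A halfplane (a, b, c) denotes the set ax + by ≤ c, and the intersection is seeded with a large bounding box so that it stays bounded.

```python
import random

def clip(poly, hp):
    """Sutherland-Hodgman clip of a convex polygon by ax + by <= c."""
    a, b, c = hp
    inside = lambda p: a * p[0] + b * p[1] <= c
    out = []
    for i, p in enumerate(poly):
        q = poly[(i + 1) % len(poly)]
        if inside(p):
            out.append(p)
        if inside(p) != inside(q):
            # edge pq crosses the boundary line ax + by = c
            t = (c - a * p[0] - b * p[1]) / (a * (q[0] - p[0]) + b * (q[1] - p[1]))
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

def halfplane_intersection(halfplanes, big=1e6, seed=0):
    """Intersect halfplanes incrementally, in random insertion order."""
    poly = [(-big, -big), (big, -big), (big, big), (-big, big)]
    hps = list(halfplanes)
    random.Random(seed).shuffle(hps)   # random insertion order
    for hp in hps:
        poly = clip(poly, hp)
        if not poly:
            return []                  # empty intersection
    return poly
```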
Not only does this result give us an optimal deterministic solution for convex hulls, but it also solves the Voronoi diagram problem. Indeed, recall [12, 29] that the Voronoi diagram of n points in R^d can be “read off” from the facial structure of the convex hull of a lift of the n points into R^{d+1}.
THEOREM 25.8 The convex hull of a set of n points in R^d can be computed deterministically in O(n log n + n^{⌊d/2⌋}) time. By duality, the Voronoi diagram (or Delaunay triangulation) of a set of n points in E^d can be computed deterministically in O(n log n + n^{⌈d/2⌉}) time.
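The lifting map just invoked can be illustrated by brute force in the plane: lift each point (x, y) to (x, y, x² + y²) on the paraboloid; a triple of points forms a Delaunay triangle exactly when the plane through its lifted copies supports the convex hull from below, i.e., every other lifted point lies strictly above it. This O(n^4) check is for intuition only; the theorem computes the hull in optimal time.

```python
from itertools import combinations

def delaunay_triangles(pts):
    """Delaunay triangles of planar points via the paraboloid lifting map."""
    lifted = [(x, y, x * x + y * y) for (x, y) in pts]
    tris = []
    for i, j, k in combinations(range(len(pts)), 3):
        p, q, r = lifted[i], lifted[j], lifted[k]
        u = [q[t] - p[t] for t in range(3)]
        v = [r[t] - p[t] for t in range(3)]
        # normal of the plane through the three lifted points
        n = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
        if n[2] == 0:
            continue                    # vertical plane: collinear input points
        if n[2] > 0:                    # orient the normal downward
            n = (-n[0], -n[1], -n[2])
        # lower-hull facet iff all other lifted points lie strictly above
        if all(sum(n[t] * (lifted[m][t] - p[t]) for t in range(3)) < 0
               for m in range(len(pts)) if m not in (i, j, k)):
            tris.append((i, j, k))
    return tris
```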
Range Searching
Simplex range searching refers to the problem of preprocessing a set P of n points in R^d so that, given a query (closed) simplex σ, the size of P ∩ σ can be quickly evaluated. Variants of the problem include reporting the points of P ∩ σ explicitly or, assuming that each point p has been assigned a weight, computing the sum of the weights of the points in P ∩ σ. The most powerful data structure for solving simplex range searching, the simplicial partition, vividly illustrates the power of ε-cuttings. A collection {(Pi, Ri)} is called a simplicial partition if
• the collection {Pi} forms a partition of P ; and
• each Ri is a relatively open simplex that contains Pi.
The simplices Ri can be of any dimension and, in fact, need not even be disjoint; furthermore, the Pi’s need not be equal to P ∩ Ri. A hyperplane is said to cut Ri if it intersects, but does not contain, Ri. The cutting number of the simplicial partition refers to the maximum number of Ri’s that can be cut by a single hyperplane. Matoušek [19] designed an optimal construction, which happens to be crucially based on ε-cuttings.
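These definitions can be made concrete with a small planar sketch: a simplicial partition is stored as a list of (Pi, Ri) pairs, with each Ri a (relatively open) triangle given by its vertices, and a line (a, b, c), meaning ax + by + c = 0, cuts Ri exactly when Ri has vertices strictly on both sides. The representation below is an illustrative assumption.

```python
def cuts(line, triangle):
    """Does the line ax + by + c = 0 cut the relatively open triangle?
    It does iff the vertices lie strictly on both sides of the line
    (all-zero signs would mean the line contains a degenerate simplex)."""
    a, b, c = line
    signs = [a * x + b * y + c for (x, y) in triangle]
    return min(signs) < 0 < max(signs)

def cut_count(partition, line):
    """Number of regions R_i of the simplicial partition cut by the line."""
    return sum(cuts(line, R) for (_, R) in partition)
```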
LEMMA 25.4 Given a set P of n points in R^d (d > 1), for any integer 1 < r ≤ n/2, there exists a simplicial partition of cutting number O(r^{1−1/d}) such that n/r ≤ |Pi| < 2n/r for each (Pi, Ri) in the partition.
To understand the usefulness of simplicial partitions for range searching, one needs to learn about partition trees. A partition tree for P is a tree T whose root is associated with the point set P . The set P is partitioned into subsets P1,..., Pm, with each Pi associated with a distinct child vi of the root. To each vi corresponds a convex open set Ri, called the region of vi, that contains Pi. The regions Ri are not necessarily disjoint. If |Pi| > 1, the subtree rooted at vi is defined recursively with respect to Pi.
Armed with a partition tree, it is a simple matter to handle range search queries. In preprocessing, at each node we store the sum of the weights of the points associated with the corresponding region. To answer a query σ, we visit all the children vi of the root and check whether σ intersects the region Ri of vi: (i) if the answer is yes, but σ does not completely enclose the region Ri of vi, then we visit vi and recurse; (ii) if the answer is yes, but σ completely encloses Ri, we add to our current weight count the sum of the weights within Pi, which happens to be stored at vi; (iii) if the answer is no, then we do not recurse at vi.
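The query procedure just described can be sketched as follows, assuming the tree has already been built. Each node stores its region, the total weight of its points, and its children; a `classify` predicate reports whether the query is disjoint from the region, completely encloses it, or crosses it. For brevity, regions and queries are axis-aligned boxes here, stand-ins for the simplices and regions of the text; the class and function names are illustrative assumptions.

```python
class Node:
    def __init__(self, region, weight, children=(), points=()):
        self.region = region          # ((xmin, ymin), (xmax, ymax))
        self.weight = weight          # total weight of points below this node
        self.children = list(children)
        self.points = list(points)    # (x, y, w) triples, stored at leaves

def classify(query, region):
    (qx0, qy0), (qx1, qy1) = query
    (rx0, ry0), (rx1, ry1) = region
    if qx1 < rx0 or rx1 < qx0 or qy1 < ry0 or ry1 < qy0:
        return 'disjoint'
    if qx0 <= rx0 and rx1 <= qx1 and qy0 <= ry0 and ry1 <= qy1:
        return 'contains'
    return 'crosses'

def range_weight(node, query):
    kind = classify(query, node.region)
    if kind == 'disjoint':
        return 0                                  # case (iii): prune
    if kind == 'contains':
        return node.weight                        # case (ii): use stored sum
    if not node.children:                         # leaf: check points directly
        (qx0, qy0), (qx1, qy1) = query
        return sum(w for (x, y, w) in node.points
                   if qx0 <= x <= qx1 and qy0 <= y <= qy1)
    return sum(range_weight(c, query) for c in node.children)  # case (i)
```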
It is straightforward to see that Lemma 25.4 can be used to construct partition trees. It remains for us to choose the branching factor. If we choose a large enough constant r, we end up with a partition tree that lets us answer simplex range search queries in O(n^{1−1/d+ε}) time for any fixed ε > 0, using only O(n) storage. A more complex argument by Matoušek [19] removes the ε term from the exponent.
With superlinear storage, various space-time tradeoffs can be achieved. For example, as shown by Chazelle, Sharir, and Welzl [9], simplex range searching with respect to n points in R^d can be done in O(n^{1+ε}/m^{1/d}) query time, using a data structure of size m, for any n ≤ m ≤ n^d. Matoušek [20] slightly improved the query time to O(n (log(m/n))^{d+1}/m^{1/d}), for m/n large enough. These bounds are essentially optimal under highly general computational models [6].