Cuttings: Introduction and The Cutting Construction

Introduction

For divide-and-conquer purposes, it is often desirable to organize a set S of n numbers into a sorted list, or perhaps to partition it into two equal-sized groups with no element in one group exceeding any element in the other. More generally, we might wish to break up S into k groups of size roughly n/k, again with a total ordering among the distinct groups. In the first case we sort; in the second we compute the median; in the third we compute quantiles. This is all well known and classical. Is it possible to generalize these ideas to higher dimension? Surprisingly, the answer is yes. A geometric construction, known as an ε-cutting, provides a space partitioning technique that extends the classical notion of selection to any finite dimension. It is a powerful, versatile data structure with countless applications in computational geometry.

Let H be a set of n hyperplanes in R^d. Our goal is to divide up R^d into simplices, none of which is cut by too many of the n hyperplanes. By necessity, of course, some of the simplices need to be unbounded. We choose a parameter ε > 0 to specify the coarseness of the subdivision. A set C of closed full-dimensional simplices is called an ε-cutting for H (Fig. 25.1) if:

(i) the union of the simplices is R^d, and their interiors are mutually disjoint;

(ii) the interior of any simplex is intersected by at most εn hyperplanes of H.
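To make the definition concrete, here is a minimal Python sketch (not part of the original text) that checks condition (ii) in the planar case, where hyperplanes are lines and simplices are triangles. The function names are ours, and we assume general position (no line passes through a triangle vertex).

```python
def line_cuts_triangle(line, tri):
    """True if the line a*x + b*y = c intersects the open triangle tri.
    Assuming general position, this happens exactly when the triangle's
    vertices lie strictly on both sides of the line."""
    a, b, c = line
    sides = [a * x + b * y - c for (x, y) in tri]
    return min(sides) < 0 < max(sides)

def is_valid_cutting(lines, triangles, eps):
    """Check condition (ii): each triangle's interior meets <= eps*n lines.
    (Condition (i), that the triangles tile the plane, is not checked.)"""
    n = len(lines)
    return all(sum(line_cuts_triangle(l, t) for l in lines) <= eps * n
               for t in triangles)

# Toy usage: two lines, one triangle crossed by only one of them.
lines = [(1.0, 0.0, 0.5), (0.0, 1.0, 5.0)]      # x = 0.5 and y = 5
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(is_valid_cutting(lines, [tri], eps=0.5))  # True: 1 <= 0.5 * 2
```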

Historically, the idea of using sparsely intersected simplices for divide and conquer goes back to Clarkson [10] and Haussler and Welzl [15], among others. The definition of an ε-cutting given above is essentially due to Matoušek [18]. Efficient but suboptimal constructions were given by Agarwal [1, 2] for the two-dimensional case and Matoušek [17, 18, 21] for arbitrary dimension. The optimal ε-cutting construction cited in the theorem below, due to Chazelle [4], is a simplification of an earlier design by Chazelle and Friedman [7].

THEOREM 25.1 Given a set H of n hyperplanes in R^d, for any 0 < ε < 1, there exists an ε-cutting for H of size O(ε^-d), which is optimal. The cutting, together with the list of hyperplanes intersecting the interior of each simplex, can be found deterministically in O(nε^{1-d}) time.

The Cutting Construction

This section explains the main ideas behind the proof of Theorem 25.1. We begin with a quick overview of geometric sampling theory. For a comprehensive treatment of the subject, see [6, 23].

Geometric Sampling

A set system is a pair Σ = (X, R), where X is a set and R is a collection of subsets of X. In our applications, X ⊆ R^d and each R ∈ R is of the form X ∩ f(K), where K is a fixed region of R^d and f is any member of a fixed group F of transformations. For example, we might consider n points in the plane, together with the subsets lying inside any triangle congruent to a fixed triangle.

Given Y ⊆ X, we define the set system "induced by Y" to be (Y, R|Y), with R|Y = { Y ∩ R | R ∈ R }. The VC-dimension (named for Vapnik and Chervonenkis [28]) of Σ is defined as the maximum size of any Y such that R|Y = 2^Y. For example, the VC-dimension of the infinite geometric set system formed by points in the plane and halfplanes is 3. The shatter function π_R(m) of the set system Σ = (X, R) is the maximum number of subsets in the set system (Y, R|Y) induced by any Y ⊆ X of size m. If π_R(m) is bounded by cm^d, for some constants c, d > 0, then the set system is said to have a shatter function exponent of at most d. It was shown in [26–28] that, if the shatter function exponent is O(1), then so is the VC-dimension. Conversely, if the VC-dimension is d ≥ 1 then, for any m ≥ d, π_R(m) < (em/d)^d.
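The halfplane example can be checked by brute force on small point sets. The sketch below is ours, not from the text: it approximates "is this subset realizable by a halfplane?" with a linear-separability test over a fine grid of directions (sufficient for non-degenerate inputs), and confirms that 3 points in general position are shattered while a 4-point set with one point inside the hull is not.

```python
from itertools import combinations
from math import cos, sin, pi

def halfplane_realizable(subset, points, steps=3600):
    """Approximate test: can some halfplane contain exactly `subset`?
    Equivalent to linear separability of `subset` vs. the rest; we scan
    a fine grid of directions looking for a strict separating threshold."""
    inside = list(subset)
    outside = [p for p in points if p not in subset]
    if not inside or not outside:
        return True                      # empty set and full set always work
    for i in range(steps):
        w = (cos(2 * pi * i / steps), sin(2 * pi * i / steps))
        hi = max(w[0] * x + w[1] * y for (x, y) in inside)
        lo = min(w[0] * x + w[1] * y for (x, y) in outside)
        if hi < lo:                      # a threshold fits strictly between
            return True
    return False

def shattered(points):
    """True if every subset of `points` is realizable by a halfplane."""
    return all(halfplane_realizable(set(s), points)
               for r in range(len(points) + 1)
               for s in combinations(points, r))

print(shattered([(0, 0), (1, 0), (0, 1)]))              # True: VC-dim >= 3
print(shattered([(0, 0), (1, 0), (0, 1), (0.3, 0.3)]))  # False: no halfplane
# can keep the three outer points while excluding the interior point
```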

We now introduce two fundamental notions: ε-nets and ε-approximations. For any 0 < ε < 1, a set N ⊆ X is called an ε-net for a finite set system (X, R) if N ∩ R ≠ ∅ for any R ∈ R with |R|/|X| > ε. A finer (but more costly) sampling mechanism is provided by an ε-approximation for (X, R), which is a set A ⊆ X such that, for any R ∈ R,

| |R ∩ A|/|A| − |R|/|X| | ≤ ε.

Note that any ε-approximation is automatically an ε-net.
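Both definitions translate directly into code. Below is a small self-contained checker (our own sketch) for finite set systems, with ranges represented as Python sets.

```python
def is_eps_net(N, X, R, eps):
    """N hits every range in R whose relative size exceeds eps."""
    return all(set(N) & set(r) for r in R if len(r) / len(X) > eps)

def is_eps_approx(A, X, R, eps):
    """A estimates the relative size of every range in R within eps."""
    return all(abs(len(set(A) & set(r)) / len(A) - len(r) / len(X)) <= eps
               for r in R)

# Toy usage: X = {0..9}, ranges are the prefixes {0..k}.
X = list(range(10))
R = [set(range(k + 1)) for k in range(10)]
print(is_eps_net([4, 9], X, R, eps=0.4))           # True: hits every big prefix
print(is_eps_approx([1, 3, 5, 7, 9], X, R, eps=0.2))  # True
```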

Some simple structural facts about nets and approximations:

LEMMA 25.1 If X1, X2 are disjoint subsets of X of the same size, and A1, A2 are same-size ε-approximations for the subsystems induced by X1, X2 (respectively), then A1 ∪ A2 is an ε-approximation for the subsystem induced by X1 ∪ X2.

LEMMA 25.2 If A is an ε-approximation for (X, R), then any ε′-approximation (resp. ε′-net) for (A, R|A) is also an (ε + ε′)-approximation (resp. -net) for (X, R).
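The approximation half of Lemma 25.2 follows from a one-line triangle inequality (our elaboration, using that A′ ⊆ A): if A′ is an ε′-approximation for (A, R|A), then for any R ∈ R,

\[
\left|\frac{|R\cap A'|}{|A'|}-\frac{|R|}{|X|}\right|
\;\le\;
\left|\frac{|R\cap A'|}{|A'|}-\frac{|R\cap A|}{|A|}\right|
+\left|\frac{|R\cap A|}{|A|}-\frac{|R|}{|X|}\right|
\;\le\;\varepsilon'+\varepsilon .
\]

The first term is at most ε′ because R ∩ A′ = (R ∩ A) ∩ A′ and A′ approximates (A, R|A); the second is at most ε because A approximates (X, R).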

In the absence of any restrictive assumption on the set system, it is natural to expect the sample size to depend on both the desired accuracy and the size of the set system itself.

THEOREM 25.2 Given a set system (X, R), where |X| = n and |R| = m, for any 1/n ≤ ε < 1, it is possible to find, in time O(nm), an ε-net for (X, R) of size O(ε^-1 log m) and an ε-approximation for (X, R) of size O(ε^-2 log m).
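One standard way to achieve an ε-net bound of this type is a greedy hitting-set argument: repeatedly pick the element that hits the most still-unhit "heavy" ranges. The sketch below is our illustration of that idea; the theorem's actual algorithm may differ in detail.

```python
def greedy_eps_net(X, R, eps):
    """Greedily build an eps-net: hit every range of relative size > eps.
    Each pass picks the element lying in the most unhit heavy ranges; by
    averaging, some element lies in an eps-fraction of them, so the count
    of unhit ranges shrinks geometrically, giving O(eps^-1 log m) picks."""
    heavy = [set(r) for r in R if len(r) / len(X) > eps]
    net = []
    while heavy:
        best = max(X, key=lambda x: sum(x in r for r in heavy))
        net.append(best)
        heavy = [r for r in heavy if best not in r]
    return net

# Toy usage: intervals of length 30 over {0..99}, all heavy for eps = 0.25.
X = list(range(100))
R = [set(range(a, a + 30)) for a in range(0, 70, 5)]
print(greedy_eps_net(X, R, eps=0.25))
```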

If we assume bounded VC-dimension, everything changes. In fact, the key result in geometric sampling theory is that, for any given level of accuracy, the sample size need not depend on the size of the set system.

In practice, geometric set systems often are "accessible" via an oracle function that takes any Y ⊆ X as input and returns the list of sets in R|Y (each set represented explicitly). We assume that the time to do that is O(|Y|^{d+1}), which is linear in the maximum possible size of the oracle's output, where d is the shatter function exponent. For example, in the case of points and disks in the plane, we have d = 3, and so this assumes that, given n points, we can enumerate all subsets enclosed by a disk in time O(n^4). To do this, enumerate all k-tuples of points (k ≤ 3) and, for each tuple, find which points lie inside the smallest disk enclosing the k points. The main result below is stated in terms of the shatter function exponent d, but the same results hold if d denotes the VC-dimension.
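The disk oracle just described can be written down directly. Below is a compact sketch (ours): for every tuple of at most 3 points we build its smallest enclosing disk, then collect the points that disk contains, yielding the disk-realizable subsets.

```python
from itertools import combinations
from math import dist

def min_disk(pts):
    """Smallest enclosing disk of 1, 2, or 3 points: (center, radius)."""
    if len(pts) == 1:
        return pts[0], 0.0
    if len(pts) == 2:
        (ax, ay), (bx, by) = pts
        return ((ax + bx) / 2, (ay + by) / 2), dist(pts[0], pts[1]) / 2
    # 3 points: try each side's diametral disk, else the circumcircle.
    for pair in combinations(pts, 2):
        c, r = min_disk(list(pair))
        third = next(p for p in pts if p not in pair)
        if dist(c, third) <= r + 1e-12:
            return c, r
    (ax, ay), (bx, by), (cx, cy) = pts
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), dist((ux, uy), (ax, ay))

def disk_subsets(points):
    """All subsets of `points` realizable by a disk (plus the empty set)."""
    out = {frozenset()}
    for k in (1, 2, 3):
        for tup in combinations(points, k):
            c, r = min_disk(list(tup))
            out.add(frozenset(p for p in points if dist(c, p) <= r + 1e-12))
    return out

pts = [(0, 0), (2, 0), (1, 2), (3, 3)]
print(len(disk_subsets(pts)))   # number of disk-realizable subsets
```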

THEOREM 25.3 Given a set system (X, R) of shatter function exponent d, for any 0 < ε < 1, an ε-net for (X, R) of size O(dε^-1 log dε^-1) and an ε-approximation for (X, R) of size O(dε^-2 log dε^-1) can be computed deterministically in time O(d)^{3d} (ε^-2 log dε^-1)^d |X|.

Vapnik and Chervonenkis [28] described a probabilistic construction of ε-approximations in bounded VC-dimension. The deterministic construction stated above is due to Chazelle and Matoušek [8], and builds on earlier work [7, 17, 18, 21]. Haussler and Welzl [15] proved the upper bound on the size of ε-nets. The running time for computing an ε-net was improved to O(d)^{3d} (ε^-1 log dε^-1)^d |X| by Brönnimann, Chazelle, and Matoušek [3], using the concept of a sensitive ε-approximation. Komlós, Pach, and Woeginger [16] showed that, for any fixed d, the bound of O(ε^-1 log ε^-1) for ε-nets is optimal in the worst case (see also [25]).

The situation is different with ε-approximations: if d > 1 is the VC-dimension, then there exists an ε-approximation for (X, R) of size O(ε^{-2+2/(d+1)}) [22, 24].

An important application of ε-approximations is for estimating how many vertices in an arrangement of hyperplanes in R^d lie within a given convex region. Let Σ = (H, R) be the set system formed by a set H of hyperplanes in R^d, where each R ∈ R is the subset of H intersected by an arbitrary line segment. Let σ be a convex body (not necessarily full-dimensional). In the arrangement formed by H within the affine span of σ, let V(H, σ) be the set of vertices that lie inside σ. The following was proven in [3, 4].

THEOREM 25.4 Given a set H of hyperplanes in R^d in general position, let A be an ε-approximation for Σ = (H, R). Given any convex body σ of dimension k ≤ d,

\[
\left|\;\frac{|V(H,\sigma)|}{|H|^k}-\frac{|V(A,\sigma)|}{|A|^k}\;\right| \;=\; O(\varepsilon).
\]
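In the plane (k = d = 2), this says that counting the pairwise intersections of the approximation inside σ, scaled by (|H|/|A|)^2, estimates the number of arrangement vertices inside σ up to an O(ε)|H|^2 error. The quick experiment below is our own, using random lines, a disk for σ, and a random sample as a stand-in for an ε-approximation.

```python
import random
from itertools import combinations

def vertices_in_disk(lines, center, radius):
    """Count pairwise intersections of lines (a, b, c): a*x + b*y = c
    that fall inside the disk sigma = (center, radius)."""
    count = 0
    for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                      # parallel lines: no vertex
        x = (c1 * b2 - c2 * b1) / det     # Cramer's rule
        y = (a1 * c2 - a2 * c1) / det
        if (x - center[0])**2 + (y - center[1])**2 <= radius**2:
            count += 1
    return count

random.seed(1)
H = [(random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
     for _ in range(400)]
A = random.sample(H, 60)                  # random sample as a stand-in
sigma = ((0.0, 0.0), 0.5)                 # for an eps-approximation

exact = vertices_in_disk(H, *sigma)
estimate = vertices_in_disk(A, *sigma) * (len(H) / len(A)) ** 2
print(exact, round(estimate))             # the two counts should be close
```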

Optimal Cuttings

For convenience of exposition, we may assume that the set H of n hyperplanes in R^d is in general position. Let A(H) denote the arrangement formed by H. Obviously, no simplex of an ε-cutting can enclose more than O((εn)^d) vertices. Since A(H) itself has exactly (n choose d) vertices, we should expect to need at least on the order of ε^-d simplices. But this is precisely the upper bound claimed in Theorem 25.1, which therefore is asymptotically tight.
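The counting behind this can be made explicit (our elaboration): a simplex whose interior meets at most εn hyperplanes contains at most (εn choose d) vertices of A(H) in its interior, so any ε-cutting must use at least

\[
\binom{n}{d} \Big/ \binom{\varepsilon n}{d} \;=\; \Omega\!\left(\varepsilon^{-d}\right)
\]

simplices to cover all the vertices of the arrangement.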

Our starting point is an ε-net N for H, where the underlying set system (X, R) is formed by a set X of hyperplanes and the collection R of subsets obtained by intersecting X with all possible open d-simplices. Its VC-dimension is bounded, and so by Theorem 25.3 an ε-net N of size O(ε^-1 log ε^-1) can be found in nε^-O(1) time.

We need a systematic way to triangulate the arrangement formed by the ε-net. We build a canonical triangulation of A(N) by induction on the dimension d (Fig. 25.2). The case d = 1 is trivial, so we assume that d > 1.

1. Rank the vertices of A(N) by the lexicographic order of their coordinate sequences.

2. By induction, form a canonical triangulation of the (d − 1)-dimensional arrangement made by each hyperplane with respect to the n − 1 others.

3. For each cell (i.e., full-dimensional face) σ of A(N), lift toward its lowest-ranked vertex v each k-simplex (k = 0, ..., d − 2) on the triangulated boundary of σ that does not lie in a (d − 1)-face of A(N) that is incident to v.
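In the plane (d = 2), the recipe amounts to: rank vertices lexicographically, then fan-triangulate every cell from its lowest-ranked vertex. The sketch below (our illustration) performs that last step for a single convex cell, given its boundary vertices in counterclockwise order; cells of a line arrangement are always convex, so this suffices.

```python
def fan_triangulate_cell(boundary):
    """Triangulate one convex cell of a planar arrangement by lifting
    every boundary edge toward the cell's lexicographically lowest
    vertex v -- the d = 2 case of the canonical triangulation. Edges
    incident to v are skipped, mirroring step 3.
    `boundary`: list of (x, y) vertices in counterclockwise order."""
    v = min(boundary)                       # lexicographic minimum
    triangles = []
    m = len(boundary)
    for j in range(m):
        p, q = boundary[j], boundary[(j + 1) % m]
        if v in (p, q):                     # edge incident to v: skip
            continue
        triangles.append((v, p, q))
    return triangles

# Toy usage: a convex pentagonal cell.
cell = [(0, 0), (3, 0), (4, 2), (2, 4), (-1, 2)]
print(fan_triangulate_cell(cell))           # 3 triangles fanned from (-1, 2)
```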

It is not hard to see that the combinatorial complexity (i.e., number of all faces of all dimensions) of the canonical triangulation of A(N) is asymptotically the same as that of A(N), which is O(ε^-1 log ε^-1)^d. Therefore, the closures of its cells constitute an ε-cutting for H of size O(ε^-1 log ε^-1)^d, which is good but not perfect. For optimality we must remove the log factor.

Assume that we have at our disposal an optimal method for building an ε0-cutting of size O(ε0^-d), for some suitably small constant ε0. To bootstrap this into an optimal ε-cutting construction for any ε, we might proceed as follows. Beginning with a constant-size cutting, we progressively refine it by producing several generations of finer and finer cuttings, C1, C2, etc., where Ck is an ε0^k-cutting for H of size O(ε0^-dk). Specifically, assume that we have recursively computed the cutting Ck for H. For each σ ∈ Ck, we have the incidence list Hσ of the hyperplanes intersecting the interior of σ. To compute the next-generation cutting Ck+1, consider refining each σ in turn as follows:

1. Construct an ε0-cutting for Hσ, using the algorithm whose existence is assumed.

[Omitted figures: the remaining refinement steps; the definition of full and sparse simplices, with the conditions (i) and (ii) that the refinement must satisfy; and Lemma 25.3, which bounds the total size of the incidence lists of generation k by O(nε0^{(1−d)k}).]

By Theorem 25.4, applied with a constant-size approximation for Hσ, we can estimate v(H, σ) in constant time with an error of at most c0|Hσ|^d, which for our purposes here is inconsequential.

How do we go about refining σ, and how costly is it? If σ is a full simplex, then by Theorem 25.3, we can compute the required ε0-net in O(|Hσ|) time. Within the same amount of time, we can also find the new set of simplices in σ, together with all of their incidence lists.

The refinement of a sparse simplex σ is a little more involved. We begin with a randomized construction, from which we then remove all the randomness. We compute Ho by choosing a random sample from Aσ of size c1ε0^-1 log ε0^-1, for some constant c1 large enough (independent of ε0). It can be shown that, with probability at least 2/3, the sample forms an (ε0/2)-net for Aσ. By Lemma 25.2, Ho is a (c0/2 + ε0/2)-net for Hσ; therefore, we ensure that (ii) holds with probability at least 2/3. A slightly more complex analysis shows that (i) also holds with probability at least 2/3; therefore, (i) and (ii) are both true with probability at least 1/3. We derandomize the construction in a trivial manner by trying out all possible samples, which takes constant time since Aσ is of constant size; therefore, the running time for refining σ is O(|Hσ|).
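Because Aσ has constant size, "trying out all possible samples" is literally an exhaustive search over subsets of a fixed size. The schematic below is our own, reusing the is_eps_net checker from the earlier sketch.

```python
from itertools import combinations

def deterministic_net(A_sigma, R_restricted, eps0, sample_size):
    """Exhaustively search the constant-size ground set A_sigma for a
    subset of the prescribed size that is an (eps0/2)-net -- the trivial
    derandomization: since |A_sigma| = O(1), this takes constant time."""
    for cand in combinations(A_sigma, sample_size):
        if is_eps_net(cand, A_sigma, R_restricted, eps0 / 2):
            return list(cand)
    return None  # cannot happen if a random sample succeeds w.p. >= 2/3
```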

Putting everything together, we see that refining any simplex takes time proportional to the total size of the incidence lists produced. By Lemma 25.3, the time needed for building generation k + 1 is O(nε0^{(1−d)(k+1)}). The construction goes on until we reach the first generation such that ε0^k ≤ ε. Since the per-generation costs grow geometrically, the total time is dominated by the last generation, which is O(nε^{1−d}). This establishes Theorem 25.1.
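Explicitly (our elaboration), if K = ⌈log ε^-1 / log ε0^-1⌉ is the index of the last generation, the total construction time is the geometric sum

\[
\sum_{k=1}^{K} O\!\left(n\,\varepsilon_0^{(1-d)k}\right)
\;=\; O\!\left(n\,\varepsilon_0^{(1-d)K}\right)
\;=\; O\!\left(n\,\varepsilon^{1-d}\right),
\]

since the ratio ε0^{1−d} exceeds 1 for d > 1; this matches the running time claimed in Theorem 25.1.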

From the proof above it is not difficult to derive a rough estimate on the constant factor in the O(ε^-d) bound on the size of an ε-cutting. A thorough investigation into the smallest possible constant was undertaken by Har-Peled [14] for the two-dimensional case.
