*Funded by the Deutsche Forschungsgemeinschaft (German Research Foundation) within the research training group "Methods for Discrete Structures" (GRK 1408).*

The number of variables n forms a natural parameter for satisfiability. In the case of d-CNF formulas, n is effectively polynomially related to the size of the input, which makes the existence of kernels of polynomial size trivial. The quest for a small kernel is a relaxation of the quest for sparsification in polynomial time. Eliminating duplicate clauses yields a kernel of bitlength O(n^d log n). Does satisfiability of n-variable d-CNF formulas have kernels of size O(n^(d−ε))?
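The trivial kernelization by duplicate elimination can be sketched in a few lines (an illustration only; representing clauses as frozensets of signed literals is our own convention):

```python
def dedup_kernel(clauses):
    """Trivial kernel for d-CNF satisfiability: drop duplicate clauses.
    At most O(n^d) distinct d-clauses remain, each encodable in
    O(d log n) bits, giving total bitlength O(n^d log n)."""
    # A clause is a frozenset of signed literals, e.g. {1, -2, 3}.
    distinct = set(frozenset(clause) for clause in clauses)
    return sorted(distinct, key=sorted)

# A 3-CNF with one clause repeated (in a different literal order)
# shrinks to two distinct clauses.
formula = [(1, -2, 3), (-2, 1, 3), (2, 3, -1)]
print(len(dedup_kernel(formula)))  # 2
```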

Lossy Compression. Harnik and Naor [HN06] introduced a notion of compression with the goal of succinctly storing instances of computational problems for resolution in the future, where there may be more time and more computational power available. The compressed version need not be an instance of the original problem, and the original instance need not be recoverable from the compressed version. The only requirement is that the solution be preserved. In the case of decision problems this simply means the yes/no answer. In analogy to image compression one can think of the Harnik-Naor notion of compression as a “lossy compression”, where the only aspect of the scenery that is guaranteed not to be lost is the solution to the problem.

Harnik and Naor applied their notion to languages in NP and showed its relevance to problems in cryptography when the compression is measured as a function of the bitlength of the underlying witnesses. In the case of satisfiability the latter coincides with the number of variables of the formula. This way lossy compression becomes a relaxation of the notion of kernelization – we now want a polynomial-time mapping reduction to any problem, rather than to the original problem, such that the reduced instances have small bitlength as a function of n. For d-CNF formulas bitlength O(n^d) is trivially achievable – simply map to the characteristic vector that for each possible d-clause on n variables indicates whether it is present in the given formula. Can we lossily compress to instances of bitlength O(n^(d−ε))?
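The characteristic-vector compression just described can be sketched as follows (the enumeration order of the candidate clauses is our own choice):

```python
from itertools import combinations, product

def characteristic_vector(n, d, clauses):
    """Lossy compression of an n-variable d-CNF into O(n^d) bits:
    one bit per possible d-clause, set iff the clause occurs in the
    formula.  For constant d there are C(n, d) * 2^d = O(n^d) candidates."""
    present = set(frozenset(clause) for clause in clauses)
    bits = []
    # Enumerate all d-clauses: choose d variables, then a sign for each.
    for variables in combinations(range(1, n + 1), d):
        for signs in product((1, -1), repeat=d):
            candidate = frozenset(s * v for s, v in zip(signs, variables))
            bits.append(1 if candidate in present else 0)
    return bits

vec = characteristic_vector(4, 3, [(1, -2, 3)])
print(len(vec), sum(vec))  # 32 1 -- C(4,3) * 2^3 bits, exactly one set
```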

Probabilistically Checkable Proofs. A somewhat different question deals with the size of probabilistically checkable proofs (PCPs). A PCP for a language L is a randomized proof system in which the verifier only needs to read a constant number of bits of the proof in order to verify that a given input x belongs to L. Completeness requires that for every input x in L there exists a proof which the verifier accepts with probability one. Soundness requires that for any input x outside of L no proof can be accepted with probability above some constant threshold less than one. For satisfiability of Boolean formulas, Dinur [Din07] constructed PCPs of bitlength O(s · poly log s), where s denotes the size of the formula. For d-CNF formulas on n variables, Dinur's construction yields PCPs of bitlength O(n^d · poly log n). On the other hand, standard proofs only contain n bits. Do n-variable d-CNF formulas have PCPs of bitlength O(n^(d−ε))?


**1. Introduction**

**Our Results for Satisfiability.** We give evidence that the answer to all four of the above questions is negative: if any answer is positive, then coNP is in NP/poly. The latter is considered unlikely as it means the existence of a nonuniform polynomial-time proof system for tautologies, or equivalently, that coNP has polynomial-size nondeterministic circuits, and implies that the polynomial-time hierarchy collapses to its third level [Yap83].

We obtain those statements as corollaries to a more general result, in which we consider the following communication process to decide a language L.

**Definition 1.1 (Oracle Communication Protocol).** An oracle communication protocol for a language L is a communication protocol between two players. The first player is given the input x and has to run in time polynomial in the length of the input; the second player is computationally unbounded but is not given any part of x. At the end of the protocol the first player should be able to decide whether x ∈ L. The cost of the protocol is the number of bits of communication from the first player to the second player.

We often refer to the second player as the oracle. Note that the bits sent by the oracle do not contribute towards the cost. By default the players in an oracle communication protocol are deterministic, but one can consider variants in which one or both players are randomized, nondeterministic, etc.
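As a toy illustration of Definition 1.1, the following one-round protocol for satisfiability simply ships the whole formula to the oracle; the encoding and the cost accounting are our own choices:

```python
import json
from itertools import product

def oracle(message):
    # The computationally unbounded second player: decode the formula
    # and decide satisfiability by brute force over all assignments.
    n, clauses = json.loads(message)
    return any(all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
                   for clause in clauses)
               for assignment in product((False, True), repeat=n))

def trivial_protocol(n, clauses):
    # Player one ships the whole formula; only these bits are charged.
    message = json.dumps([n, clauses])
    cost = 8 * len(message)        # bits sent by the first player
    answer = oracle(message)       # the oracle's reply is free
    return answer, cost

sat, cost = trivial_protocol(3, [[1, -2], [2, 3], [-1, -3]])
print(sat)  # True
```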

Satisfiability of n-variable d-CNF formulas has a trivial protocol of cost O(n^d). The following result implies that there is no protocol of cost O(n^(d−ε)) unless the polynomial-time hierarchy collapses. In fact, the result even holds when the first player is conondeterministic, i.e., when the first player can have multiple valid moves to choose from in any given step, possibly leading to different conclusions about the satisfiability of a given input formula ϕ, but such that (i) if ϕ is satisfiable then every valid execution comes to that conclusion, and (ii) if ϕ is not satisfiable then at least one valid execution comes to that conclusion.

**Theorem 1.1.**

Let d ≥ 3 be an integer and ε a positive real. If coNP ⊄ NP/poly, there is no protocol of cost O(n^(d−ε)) to decide whether an n-variable d-CNF formula is satisfiable, even when the first player is conondeterministic.

The proof of this theorem and its corollaries is in §3.3. The corollaries about sparsification, kernelization, and lossy compression follow by considering deterministic single-round protocols in which the polynomial-time player acts as a mapping reduction, sends the reduced instance to the computationally unbounded player, and the latter answers this query as a membership oracle. The corollary about probabilistically checkable proofs follows by considering a similar single-round protocol in which the first player is conondeterministic. Note that Theorem 1.1 can handle more general reductions, in which multiple queries are made to the oracle over multiple rounds. The above corollaries can be strengthened correspondingly. In fact, Theorem 1.1 is morally even more general as it allows the oracle to play an active role that goes beyond answering queries from the polynomial-time player. We discuss this potential further in §3.6.


**1.1. Hardness of Sparsification**

**Our Results for Covering Problems.** By reducibility the lower bounds from Theorem 1.1 carry over to other parameterized NP-complete problems, where the tightness depends on how the reduction affects the parameterization. In fact, we derive Theorem 1.1 from a similar result for the vertex cover problem on d-uniform hypergraphs.

**Theorem 1.2.**

Let d ≥ 2 be an integer and ε a positive real. If coNP ⊄ NP/poly, there is no protocol of cost O(n^(d−ε)) to decide whether a d-uniform hypergraph on n vertices has a vertex cover of at most k vertices, even when the first player is conondeterministic.

We prove this theorem in §3.2. The cases of Theorem 1.2 with d ≥ 3 are equivalent to the corresponding cases of Theorem 1.1. Note, though, that Theorem 1.2 also holds for d = 2, i.e., for standard graphs.

Similar to Theorem 1.1, Theorem 1.2 can be interpreted in terms of (graph) sparsification, kernelization, lossy compression, and probabilistically checkable proofs. Regarding kernelization, Theorem 1.2 has an interesting implication for the vertex cover problem parameterized by the size of the vertex cover – one of the prime examples of a parameterized problem that is NP-hard but fixed-parameter tractable. Kernelizations for this problem have received considerable attention. For standard graphs S. Buss [BG93] came up with a kernelization avant la lettre. He observed that any vertex of degree larger than k must be contained in any vertex cover of size k, should it exist. This gives rise to a kernelization with O(k^2) vertices and O(k^2) edges. Subsequently, several researchers tried to reduce the size of the kernel. Various approaches based on matching, linear programming, and crown reductions (see [GN07] for a survey) led to kernels with O(k) vertices, one of which we review in §2.1, but the resulting kernels are all dense. It remains open to find kernels with O(k^(2−ε)) edges. Since k ≤ n, the case d = 2 of Theorem 1.2 implies that such kernels do not exist unless the polynomial-time hierarchy collapses.
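Buss's observation translates into a short kernelization routine (a sketch; the edge-set representation and the return convention are our own):

```python
def buss_kernel(edges, k):
    """Buss kernelization for Vertex Cover: a vertex of degree > k must be
    in every vertex cover of size <= k, so take it and decrement k.
    Returns (reduced_edges, remaining_k, forced_vertices), or None when
    no vertex cover of size k can exist."""
    edges = set(frozenset(e) for e in edges)
    forced = set()
    while True:
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        high = [v for v, d in degree.items() if d > k]
        if not high:
            break
        v = high[0]
        forced.add(v)                        # v is in every small cover
        edges = {e for e in edges if v not in e}
        k -= 1
        if k < 0:
            return None
    # Every remaining vertex has degree <= k, so a size-k cover can hit
    # at most k^2 edges; more edges than that means a 'no'-instance.
    if len(edges) > k * k:
        return None
    return edges, k, forced

# A star K_{1,5} with k = 2: the center is forced, the kernel is empty.
reduced, k_left, forced = buss_kernel([(0, i) for i in range(1, 6)], 2)
print(sorted(forced), len(reduced))  # [0] 0
```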

In fact, a similar result holds for a wide class of covering-type problems known as vertex deletion problems. For a fixed graph property Π, the corresponding vertex deletion problem asks whether removing at most k vertices from a given graph G can yield a graph that satisfies Π. A host of well-studied specific problems can be cast as the vertex deletion problem corresponding to some graph property Π that is inherited by subgraphs.

Examples besides the vertex cover problem include the feedback vertex set problem and the bounded-degree deletion problem (see §3.4 for the definitions of these problems and for more examples).

If only finitely many graphs satisfy Π or if all graphs satisfy Π, the vertex deletion problem is trivially decidable in polynomial time. For all other graph properties Π that are inherited by subgraphs, Lewis and Yannakakis [LY80] showed that the problem is NP-hard. They did so by constructing a mapping reduction from the vertex cover problem. By improving their reduction such that it preserves the size of the deletion set up to a constant factor, we obtain the following result.

**Theorem 1.3.**

Let Π be a graph property that is inherited by subgraphs and is satisfied by infinitely many but not all graphs. Let ε be a positive real. If coNP ⊄ NP/poly, there is no protocol of cost O(k^(2−ε)) for deciding whether a graph satisfying Π can be obtained from a given graph by removing at most k vertices, even when the first player is conondeterministic.

The proof is in §3.4. Theorem 1.3 implies that problems like feedback vertex set and bounded-degree deletion do not have kernels consisting of O(k^(2−ε)) edges unless the polynomial-time hierarchy collapses. For both problems the result is tight in the sense that kernels with O(k^2) edges exist. For feedback vertex set we argue that a recent kernelization by Thomassé [Tho09] does the job; for bounded-degree deletion, kernels with O(k^2) edges were known to exist [FGMN09].

**Our Results for Packing Problems.** The matching problem in d-uniform hypergraphs, d-Set Matching, is to decide whether a given hypergraph has a matching of size k, i.e., a set of k pairwise disjoint hyperedges.

Correspondingly, the Perfect d-Set Matching problem is to find a perfect matching, i.e., a matching with k = n/d where n is the number of vertices. Fellows et al. [FKN+08] show that d-Set Matching has kernels with O(k^d) hyperedges.
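The matching predicate itself is straightforward; for illustration only, a brute-force check (exponential in k, so nothing like the kernelizations discussed here) can be written as:

```python
from itertools import combinations

def has_matching(hyperedges, k):
    """Decide by exhaustive search whether the hypergraph contains k
    pairwise disjoint hyperedges, i.e., a matching of size k."""
    if k == 0:
        return True
    sets = [frozenset(e) for e in hyperedges]
    # k sets are pairwise disjoint iff their sizes add up to the size
    # of their union.
    return any(sum(len(s) for s in combo) == len(frozenset().union(*combo))
               for combo in combinations(sets, k))

edges = [(1, 2), (2, 3), (3, 1), (4, 5)]
print(has_matching(edges, 2), has_matching(edges, 3))  # True False
```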

**Theorem 1.4 ([FKN+08]).**

The problem d-Set Matching has kernels with O(k^d) hyperedges.

In Appendix B, we sketch a straightforward but instructive proof of this fact using the sunflower lemma of Erdős and Rado [ER60]. We use our lower bound technology for oracle communication protocols to prove that the kernel size above is asymptotically optimal under the hypothesis coNP ⊄ NP/poly.

**Theorem 1.5.**

Let d ≥ 3 be an integer and ε a positive real. If coNP ⊄ NP/poly, there is no protocol of cost O(k^(d−ε)) for Perfect d-Set Matching.

A particularly well-studied special case of set matching is when the sets are certain fixed subgraphs (e.g., triangles, cliques, stars, etc.) of a given graph. We use the terminology of Yuster [Yus07], who surveys graph-theoretical properties of such graph packing problems. Formally, an H-matching of size k in a graph G is a collection of k vertex-disjoint subgraphs of G that are isomorphic to H. The problem H-Matching is to find an H-matching of a given size in a given graph. Both problems are NP-complete whenever H contains a connected component with more than two vertices [KH78], and both are in P otherwise.

The kernelization properties of graph packing problems received a lot of attention in the literature (e.g., [Mos09, FHR+04, PS04, FR09, WNFC10, MPS04]). H-Matching can be expressed as a d-Set Matching instance with O(k^d) hyperedges (where d := |V(H)|), and therefore Theorem 1.4 implies a kernel of size O(k^d). In the particularly interesting special case when H is a clique K_d, we use a simple reduction to transfer the above theorem to obtain a lower bound for K_d-Matching.

**Theorem 1.6.**

Let d ≥ 4 be an integer and ε a positive real. If coNP ⊄ NP/poly, there is no protocol of cost O(k^(d−1−ε)) for K_d-Matching.


An upper bound of O(k^d) follows for K_d-Matching from Theorem 1.4. This does not quite match our conditional lower bound of O(k^(d−1−ε)), and it is an interesting open problem to make the bounds tight.

The H-Factor problem is the restriction of H-Matching to the case k = n/d, i.e., the goal is to find an H-matching that involves all vertices. Unlike the case of matching d-sets, where we had the same bounds for Perfect d-Set Matching and d-Set Matching, we cannot expect that the same bounds always hold for H-Matching and H-Factor. The reason is that for H-Factor there is a trivial O(k^2) upper bound on the kernel size for every graph H: an n-vertex instance has size O(n^2) and we have k = Θ(n) by the definition of H-Factor. We show that this bound is tight for every NP-hard H-Factor problem. Thus, we cannot reduce H-Factor to sparse instances.

**Theorem 1.7.**

Let H be a connected graph with d ≥ 3 vertices and ε a positive real. If coNP ⊄ NP/poly, there is no protocol of cost O(k^(2−ε)) for H-Factor.

Obviously, Theorem 1.7 gives a lower bound for the more general H-Matching problem. In particular, it proves the missing d = 3 case in Theorem 1.6.