Funded by the Deutsche Forschungsgemeinschaft as part of the research training group 'Methods for Discrete Structures' (GRK 1408).

**Communication Protocols**

In general, a two-player communication protocol is described by strategies that tell each of the players when and what to communicate to the other player and how to further behave as a function of the input and the communication history. In the specific case of our oracle communication protocols of Definition 1.1, there is an asymmetry between the two players. We model the first player as a polynomial-time Turing machine M and the second player as a function f. The machine M has a special oracle query tape, oracle query symbol, and oracle answer tape.

Whenever M writes the special oracle query symbol on the oracle query tape, in a single computation step the contents of the answer tape are replaced by f(q), where q represents the contents of the oracle query tape at that time. Note that the function f is independent of M's input x, which reflects the fact that the second player does not have direct access to the input. The oracle query tape is one-way and is never erased, which allows the strategy of the second player to depend on the entire communication history.
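As a toy illustration of this model, the following Python sketch simulates a protocol and accounts for its cost. The names `run_protocol`, `first_player`, and `ask` are hypothetical choices of our own, the cost is counted in characters rather than bits, and none of this is part of the formal definition.

```python
def run_protocol(first_player, f, x):
    """Toy simulation of an oracle communication protocol (a sketch).
    first_player(x, ask) plays the machine M; ask(q) writes the query
    string q on the one-way query tape.  The second player f sees only
    the history of queries, never the input x itself."""
    history = []   # the one-way oracle query tape, never erased
    cost = 0       # total number of characters written on it

    def ask(q):
        nonlocal cost
        history.append(q)
        cost += len(q)
        return f(tuple(history))   # f may depend on the full history

    return first_player(x, ask), cost

# A trivial protocol: M sends its whole input in one query; the second
# player accepts iff the last query contains an even number of 1s.
parity_player = lambda x, ask: ask(x) == "yes"
parity_oracle = lambda hist: "yes" if hist[-1].count("1") % 2 == 0 else "no"
```

For instance, `run_protocol(parity_player, parity_oracle, "1010")` returns `(True, 4)`: the machine accepts, and four characters were written on the query tape.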


We say that the oracle communication protocol decides a parameterized problem (L, k) if M with oracle f accepts an input x if and only if x ∈ L. The cost c(k) of the protocol is the maximal number of bits written on the oracle query tape over all inputs x with parameter k(x) = k.

By considering Turing machines other than the standard deterministic model for the first player, we obtain corresponding variants of oracle communication protocols. For example, we can let the first player be a polynomial-time co-nondeterministic Turing machine. The second player is always modeled as a function. Whenever there are multiple possible valid executions (as in the case of co-nondeterministic protocols), we define the cost as the maximum cost over all of them, i.e., we consider the worst case.

**2.1. Review: Kernelization of Vertex Cover**

Vertex cover on graphs is one of the most studied problems in parameterized complexity, and many different kernelization algorithms for vertex cover and its variants are known. In this section, we briefly review the fact that vertex cover has kernels with 2k vertices. We use the Gallai–Edmonds structure theorem and the crown reduction rule.

As mentioned in the introduction, a simple high-degree argument shows that vertex cover has kernels with O(k²) vertices. Nemhauser and Trotter [NT74] use the half-integrality of the linear programming relaxation of vertex cover to obtain a kernel with 2k vertices. Using purely combinatorial techniques, this kernel size is also achievable by applying the so-called crown reduction rule. Here the goal is to find a special structure in the graph – a crown – apply the reduction rule, and repeat this process until no crown can be found anymore. We review a simple analysis of this reduction rule that yields kernels with 3k vertices [FG06, Chapter 9.2]. We then show how to use the Gallai–Edmonds structure theorem to find a crown that leads to kernels with 2k vertices after a single application of the crown reduction rule.
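The high-degree argument itself is easy to implement. The following sketch (our own `high_degree_kernel`, with graphs given as edge lists; the rule is sometimes attributed to Buss) repeatedly puts any vertex of degree greater than k into the cover, and finally rejects if more than k² edges remain, since once every degree is at most k, a cover of size k can cover at most k² edges.

```python
from collections import defaultdict

def high_degree_kernel(edges, k):
    """High-degree rule (a sketch): every vertex of degree greater
    than k must be in any vertex cover of size at most k, so take it
    into the cover and decrement the budget."""
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed and k >= 0:
        changed = False
        deg = defaultdict(int)
        for e in edges:
            for v in e:
                deg[v] += 1
        for v, d in deg.items():
            if d > k:                 # v is in every small vertex cover
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:
        return None        # no vertex cover of size at most k exists
    return edges, k        # at most k^2 edges, hence O(k^2) vertices
```

For example, on a star with five leaves and k = 1 the center is taken immediately and the empty kernel remains, while a triangle with k = 1 is correctly rejected.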

The following lemma constructs a maximum matching M in a graph G and an independent set I.

**Lemma 2.1.**

Let S be any set of vertices of G. There is a maximum matching M of G and an independent set I such that N (I) ⊆ S and every vertex of N (I) is matched by M to a vertex of I. Furthermore, all singleton components of G − S are contained in V (M ) ∪ I, and M and I can be computed in polynomial time.

We obtain a crown with head N(I), and the crown reduction consists of picking all vertices of N(I) into the vertex cover and deleting N(I) ∪ I from the graph to obtain the graph G′ = G − (N(I) ∪ I). Then G′ has a vertex cover of size k − |N(I)| if and only if G has a vertex cover of size k. If S is the vertex set of a maximal matching of G, then G − S is an independent set, and each vertex of G′ is either in S or in V(M) \ S. If G has a vertex cover of size k, then S has at most 2k elements and V(M) \ S at most k. Thus, this choice of S yields a kernel G′ with at most 3k vertices. We now prove Lemma 2.1 using an alternating path argument.
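Such a maximal matching, and with it the set S, can be found greedily; `greedy_maximal_matching` is our own illustrative name. On yes-instances the matching has at most k edges, since each matching edge needs its own cover vertex, so |S| ≤ 2k.

```python
def greedy_maximal_matching(edges):
    """Greedily pick edges whose endpoints are both still free.  The
    result is a maximal (not necessarily maximum) matching; its vertex
    set is the separator S used in the 3k-vertex analysis."""
    matched = set()
    M = []
    for u, v in edges:
        if u not in matched and v not in matched:
            M.append((u, v))
            matched.update((u, v))
    return M, matched   # matched is S, of size 2|M| <= 2k on yes-instances
```

On the path 0–1–2–3, for example, the greedy pass picks the edges (0, 1) and (2, 3), so S contains all four vertices.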


Proof. Let M be a maximum matching of G that touches the maximum number of vertices of G − S that have degree zero in G − S. Let I be the set of vertices vℓ for which there is a path v0, s1, v1, ..., sℓ, vℓ in G with the following properties:

1. v0 is in G − S − V(M) and has degree zero in G − S, and

2. si is matched to vi by M, for all 1 ≤ i ≤ ℓ.

We show by induction on ℓ that all elements of I are in G − S and have degree zero in G − S. This implies that the neighborhood of I in G satisfies N(I) ⊆ S. In the base case ℓ = 0, this is clearly the case. For ℓ > 0, consider any alternating path v0, s1, v1, ..., sℓ, vℓ as above. By induction, the vertices vi for i < ℓ have degree zero in G − S, and thus si ∈ S for all i ∈ [ℓ]. Now M matches sℓ to the vertex vℓ in G. Assume for contradiction that vℓ is not a vertex of G − S that has degree zero in G − S. Since the vertex v0 is not touched by M, we could replace the edges {si, vi} of M with the edges {vi−1, si} and thereby improve M: the size of the matching does not change, but the number of vertices that have degree zero in G − S and that are touched by M increases. This contradicts the fact that we chose M so as to maximize that number, and thus vℓ has degree zero in G − S.

Furthermore, the choice of I guarantees that M matches the vertices of N(I) to vertices in I: a vertex s ∈ N(I) cannot be unmatched, since it would complete an augmenting path for M starting in the unmatched vertex v0; and if s is matched to a vertex w, then extending the alternating path through s by the edge {s, w} shows w ∈ I.
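The construction in this proof can be sketched as follows. Here `crown_from_matching`, the adjacency-dictionary representation, and `mate` are our own illustrative choices, and the code assumes that the maximum matching was chosen as in the proof, i.e., touching as many degree-zero vertices of G − S as possible.

```python
def crown_from_matching(adj, S, mate):
    """Alternating-path construction of the independent set I from
    Lemma 2.1 (a sketch).  adj maps each vertex to its neighbour set,
    S is the chosen vertex set, and mate gives the partner of every
    matched vertex under a maximum matching chosen as in the proof.
    Returns I and its neighbourhood N(I), which together form a crown."""
    # the v0's: unmatched vertices of G - S whose neighbours all lie in S
    queue = [v for v in adj
             if v not in S and adj[v] <= S and v not in mate]
    I = set(queue)
    while queue:
        v = queue.pop()
        for s in adj[v]:              # every such neighbour lies in S
            w = mate.get(s)
            if w is not None and w not in I:
                I.add(w)              # reachable by an alternating path
                queue.append(w)
    head = set().union(*(adj[v] for v in I)) if I else set()
    return I, head                    # head = N(I), a subset of S
```

On the path a–s–b with S = {s} and the matching {s, b}, the unmatched vertex a starts the search and the matched vertex b is reached through s, so the crown is I = {a, b} with head {s}.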

The Gallai–Edmonds structure theorem provides us with a different set S, which we feed into Lemma 2.1 to obtain kernels with 2k vertices. To state the theorem succinctly, some standard notions are needed. A matching of G is near-perfect if it covers all but exactly one vertex. A graph G is factor-critical if all subgraphs obtained by removing a single vertex have a perfect matching. The structure given by the Gallai–Edmonds theorem allows us to argue about the location of maximum matchings.

**Theorem 2.1 (Gallai–Edmonds).**

(i) Every graph G has a unique Gallai–Edmonds separator S, i.e., a vertex set satisfying the following two properties:

(a) All connected components of G − S have a perfect matching or are factor-critical.

(b) For every T ⊆ S we have |N(T)| ≥ |T|.

(ii) Every maximum matching of G contains a near-perfect or perfect matching of each component of G − S, and it matches all vertices of S to vertices in distinct components of G − S.

(iii) There is a polynomial-time algorithm for computing S.

A short proof of that theorem uses Hall’s Marriage Theorem [Kot00]. We now analyse the reduction suggested by Lemma 2.1 when S is the Gallai–Edmonds separator.

**Lemma 2.2.**

Let S be the Gallai–Edmonds separator of G, and let M and I be as in Lemma 2.1.

Let G′ = G − (N(I) ∪ I). Then every vertex cover of G′ uses at least |V(G′)|/2 vertices.


Proof. Let T be some vertex cover of G′. Let C be the set of factor-critical components of G′ − S that do not have a vertex matched to S by M. By Lemma 2.1, every singleton component of G − S is matched to S by M or contained in I; hence each component of C has at least three vertices. In each component B ∈ C with b vertices, T clearly uses some vertex v. The removal of that vertex leaves a component B − v with b − 1 vertices. Since B is factor-critical, B − v has a perfect matching, so T contains at least (b − 1)/2 vertices of B − v, and at least (b + 1)/2 ≥ b/2 vertices of B in total. Summing over all components B of C, this implies that T contains at least |V(C)|/2 vertices of C. Now let G′ − C be the graph obtained from G′ by removing its unmatched factor-critical components. We claim that M induces a perfect matching on G′ − C. From the claim, it follows immediately that T contains at least |V(G′ − C)|/2 vertices in G′ − C. Thus T contains at least |V(G′)|/2 vertices.

It remains to argue the claim. By Theorem 2.1(i), all components of G − S that have an odd number of vertices are factor-critical, and all components of G − S that have an even number of vertices have a perfect matching. Theorem 2.1(ii) implies that

• M contains no edges inside of S,

• M matches all vertices of S to odd components of G − S, and

• M contains a near-perfect or perfect matching for every component of G − S.

In particular, if we remove the set C of all odd components of G − S that are not matched to S by M, then M induces a perfect matching on G − C. Now we need to argue that this holds for G′ − C as well. If we delete I and N(I) from G − C, we obtain G′ − C. The only edges of M that intersect N(I) ∪ I are those that match a vertex of N(I) to a vertex of I, and both endpoints of such an edge are deleted. All other edges of M that are contained in G − C remain intact. The claim that M induces a perfect matching on G′ − C follows.

Corollary 2.1. Vertex Cover has kernels with at most 2k vertices.

Proof. Since M matches the vertices of N(I) to vertices of I, any vertex cover must use at least |N(I)| vertices to cover the edges incident to I. On the other hand, all edges incident to I are covered by N(I). Let G′ be the graph obtained from G by removing I ∪ N(I). Then G′ has a vertex cover of size at most k′ = k − |N(I)| if and only if G has a vertex cover of size at most k. Moreover, by Lemma 2.2, if G′ has a vertex cover of size at most k′, then |V(G′)| ≤ 2k′ ≤ 2k; if G′ has more than 2k′ vertices, we may instead output a trivial no-instance. Hence we obtain a kernel with at most 2k vertices.

It is sometimes stated in the literature that the above number of vertices in kernels for vertex cover is optimal, i.e., that its kernels cannot be guaranteed to have at most 1.99·k vertices unless certain complexity-theoretic assumptions related to the inapproximability of vertex cover fail. However, this is only known in the case that the kernel size bound does not depend on the parameter k. More precisely, it is assumed that kernelization algorithms are given an instance (G, k) as input, and the task is to output in polynomial time an equivalent instance (G′, k′) such that G′ has at most c · vc(G) vertices. Here, vc(G) is the minimum size of a vertex cover of G. It is easy to see that such restricted kernelization algorithms automatically approximate vc(G) up to a factor of c, so inapproximability results carry over. General kernelization algorithms only guarantee that the number of vertices in the output is at most c · k, and it is an open problem to show that a general kernelization with c < 2 is complexity-theoretically unlikely.


**2.2. Review: Polynomial Kernel Lower Bounds**

We review the world of polynomial kernel lower bounds using our terminology. For an NP-hard problem L, we regard the problem OR(L) as a parameterized problem with parameter s, where s = max_i |x_i| is the size of a largest given instance. We argue that this problem does not have polynomial kernels unless coNP ⊆ NP/poly, and that all polynomial kernel lower bounds given in the literature are by a suitable parameter-frugal ≤p_m-reduction from this problem (such reductions are often called "composition" or "polynomial parameter transformation" in the literature). For this to work out seamlessly, we use the more general notion of compression instead of kernelization.

**Lemma 2.3 (OR Incompressibility Lemma for Polynomial Lower Bounds).**

For any NP-hard language L, the problem OR(L) does not have a compression of size poly(s) unless coNP ⊆ NP/poly.

Proof. Let t : N → N0 be polynomially bounded and assume that OR(L) has a compression of size t(s). In particular, this means that the problem OR_t(L) has a compression of size t(s). By Lemma 1.1, this implies coNP ⊆ NP/poly.

It is apparent from the proof that, in the above lemma, we can conveniently use the variant of OR(L) in which all given instances are required to have the same size s.

We showcase the simplicity of our terminology by reproving the polynomial kernel lower bound for k-Path. In this problem, we are given a graph G and a number k, and we want to determine whether G contains a simple path of length k. The problem is fixed-parameter tractable and generalizes the NP-complete Hamiltonian path problem.

Bodlaender et al. [BDFH09] show that k-Path does not have kernels of size polynomial in k unless coNP ⊆ NP/poly. Their proof implicitly gives rise to the following reduction.

**Lemma 2.4.**

Let L be the Hamiltonian path problem. There is a ≤p_m-reduction from OR(L) to k-Path that maps tuples of instances of bitlength s each to instances of k-Path with parameter k ≤ s.

Proof. In the Hamiltonian path problem, we are given a graph G on n vertices and we want to decide whether it contains a simple path of length n. We assume that L is defined in such a way that graphs are encoded by the characteristic vector of their edge relation, i.e., by binary strings of length n². This technicality makes sure that two graphs with the same encoding length have the same number of vertices, and thus, the lengths of the paths we are looking for are equal.

For the reduction, we are given a t-tuple (x1, ..., xt) of instances of L. Without loss of generality, we assume that s = |x1| = · · · = |xt| for some s ∈ N. If s is not of the form n², then the xi do not encode graph instances and we return a trivial no-instance. Otherwise, these strings encode t graphs G1, ..., Gt on n vertices each. Let G = G1 ∪ · · · ∪ Gt be the disjoint union of these graphs. The reduction outputs (G, n), an instance of the k-Path problem with k = n ≤ s. For the correctness, we observe that some Gi has a Hamiltonian path if and only if G contains a simple path of length n.
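Under an adjacency-matrix encoding of this kind, the reduction is a few lines of code; `or_to_kpath` and the pair-labeled vertices are our own illustrative choices, not notation from the text.

```python
import math

def or_to_kpath(instances):
    """Sketch of the reduction of Lemma 2.4.  Each instance is an
    n*n-bit adjacency-matrix string of a graph on n vertices; the
    output is the edge set of the disjoint union together with the
    parameter k = n, or a trivial no-instance if the length is not
    a perfect square."""
    s = len(instances[0])
    n = math.isqrt(s)
    if n * n != s or any(len(x) != s for x in instances):
        return set(), 1                       # trivial no-instance
    edges = set()
    for i, x in enumerate(instances):         # vertex (i, v) lives in copy i
        for u in range(n):
            for v in range(u + 1, n):
                if x[u * n + v] == '1':
                    edges.add(frozenset({(i, u), (i, v)}))
    return edges, n
```

For two copies of a triangle (the 9-bit string "011101110"), the output graph has six edges and parameter k = 3, and a simple path on three vertices exists inside either copy.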

Corollary 2.2 ([BDFH09]). k-Path does not have kernels of size polynomial in k unless coNP ⊆ NP/poly.

Proof. If k-Path has polynomial kernels then, by the above reduction, OR(L) has a compression of size poly(s). By Lemma 2.3, this implies coNP ⊆ NP/poly.