Funded by the Deutsche Forschungsgemeinschaft within the Research Training Group 'Methods for Discrete Structures' (GRK 1408).

We point out that the construction in the proof of Lemma 4.1 guarantees that the cardinality of the set A is at least p^(1−O(1/√(log p))) rather than just p^(1−o(1)). By considering a thin annulus rather than a sphere for the set S, Elkin [Elk10, GW08] recently further improved the cardinality by a factor of the form log^c p for some positive constant c.

However, the analysis becomes more complicated, and Behrend's construction already suffices for our application.

B. Sunflower Kernelization for Set Matching

We sketch a modern proof of Theorem 1.4, that d-Set Matching has kernels with O(k^d) hyperedges.

Proof (Sketch). A sunflower with p petals is a set of p hyperedges whose pairwise intersections are all equal. By the sunflower lemma, any d-uniform hypergraph G with more than d! · r^d edges has a sunflower with r + 1 petals. We set r = dk and observe that, in any sunflower with r + 1 petals, we can arbitrarily choose an edge e of the sunflower and remove it from the graph. To see this, assume we have a matching M of G with k edges. If M does not contain e, then M is still a matching of size k in G − e. On the other hand, if M contains e, there must be a petal that does not intersect M since we have dk + 1 petals but M involves only dk vertices. Thus we can replace e in the matching by the edge that corresponds to that petal, and we obtain a matching of G − e that consists of k hyperedges. This establishes the completeness of the reduction. The soundness is clear since any matching of G − e is a matching of G.
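The argument above can be turned into an explicit procedure. The following is a minimal Python sketch (our own illustration, not the original's pseudocode), assuming the hyperedges are distinct and d-uniform; `find_sunflower` mirrors the greedy proof of the sunflower lemma, and the size bound in `kernelize` guarantees that the search succeeds.

```python
import math

def find_sunflower(sets, p):
    """Try to return p hyperedges forming a sunflower with p petals
    (all pairwise intersections equal); mirrors the greedy proof."""
    # Greedily pick a maximal pairwise-disjoint subfamily.
    disjoint, used = [], set()
    for s in sets:
        if used.isdisjoint(s):
            disjoint.append(s)
            used |= s
    if len(disjoint) >= p:
        return disjoint[:p]          # a sunflower with empty core
    # Otherwise some element is popular; recurse on its link.
    counts = {}
    for s in sets:
        for x in s:
            counts[x] = counts.get(x, 0) + 1
    if not counts:
        return None
    x = max(counts, key=counts.get)
    sub = find_sunflower([s - {x} for s in sets if x in s], p)
    return None if sub is None else [s | {x} for s in sub]

def kernelize(edges, d, k):
    """Shrink a d-uniform hypergraph to at most d! * (dk)^d hyperedges
    while preserving whether a matching with k hyperedges exists."""
    edges = [frozenset(e) for e in edges]
    while len(edges) > math.factorial(d) * (d * k) ** d:
        # The size bound guarantees a sunflower with dk + 1 petals exists.
        petals = find_sunflower(edges, d * k + 1)
        edges.remove(petals[0])      # deleting one petal edge is safe
    return edges
```

For instance, for the 2-uniform star with ten edges {0, i} and k = 1, the bound is 2! · (2 · 1)^2 = 8, so two edges are deleted and a maximum matching of size 1 survives.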

C. The Sparsification Lemma

Sparsification is the process of reducing the density of graphs, formulas, or other combinatorial objects, while some properties of the objects, such as the answer to a computational problem, are preserved. From an algorithmic perspective, efficient sparsification procedures can be used as kernelization algorithms to make input instances sparse and thus possibly simpler and smaller, such that only the core information about the input remains. From a complexity-theoretic point of view, sparsification is a tool to identify those instances of a problem that are computationally the hardest. If an NP-hard problem admits efficient sparsification, the hardest instances are sparse.

In the context of the exponential-time hypothesis, the sparsiﬁcation lemma provides a way to show that the hardest instances of d-Sat are sparse and thus the parameter n can be replaced with m in the statement of the exponential-time hypothesis. The following is the sparsiﬁcation lemma as formulated in [FG06, Lemma 16.17].

**Sparsification Lemma.** For all d ≥ 2 and k ≥ 1, there is a computable function f and an algorithm that, given a d-CNF formula γ with n variables, computes d-CNF formulas γ1, ..., γt and their disjunction β = γ1 ∨ · · · ∨ γt such that:

(1) β is equivalent to γ,
(2) t ≤ 2^(n/k), and
(3) each γi is a subformula of γ in which each variable occurs at most f(d, k) times.

Furthermore, β can be computed from γ and k in time t · poly(n).

We sketch below a small modification in the proof of the sparsification lemma that allows us to replace (1) with the condition

(1') sat(γ) = ⋃̇_i sat(γi),

where sat(ϕ) is the set of assignments that satisfy the formula ϕ and ⋃̇ denotes a disjoint union. In particular, (1') implies #Sat(γ) = Σ_i #Sat(γi), which means that the sparsification lemma can be used for the counting version of 3-Sat.

Proof (sketch). We adopt the terminology of [FG06, Proof of Lemma 16.17] and follow their construction precisely, except for a small change in the sparsification algorithm.

When the algorithm decides to branch for a CNF-formula γ and a flower α = {δ1, ..., δp}, the original algorithm branches on the two formulas γ_α^heart and γ_α^petals. In our modified algorithm, the petals branch additionally contains the unit clause {¬l} for every literal l of the heart.

This way, the satisfying assignments become disjoint: in each branching step, we guess whether the heart contains a literal set to true, or whether all literals in the heart are set to false and each petal contains a literal set to true.

Now we have that, for all CNF-formulas γ, all assignments σ to the variables of γ, and all flowers α of γ,

(i) σ satisfies γ if and only if σ satisfies γ_α^heart ∨ γ_α^petals, and
(ii) σ does not satisfy γ_α^heart or σ does not satisfy γ_α^petals.

By induction, we see that at the end of the algorithm,

(i) σ satisfies γ if and only if σ satisfies some γi, and
(ii) σ satisfies at most one γi.

This implies that sat(γ) = ⋃̇_{i∈[t]} sat(γi).

Notice that our new construction adds at most n clauses of size 1 to the formulas γi compared to the old one. Furthermore, our construction does not make t any larger because the REDUCE-step removes all clauses that properly contain {¬l}, and thus these unit clauses never appear in a flower.
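The disjointness mechanism can be checked on a toy instance: for any CNF formula γ and any set H of literals (playing the role of the heart), sat(γ) is the disjoint union of the assignments that additionally satisfy the clause ∨H and those that additionally satisfy the unit clauses {¬l} for l ∈ H. A brute-force Python sketch with a hypothetical clause encoding (ours, not the one from [FG06]):

```python
from itertools import product

def sat_set(clauses, n):
    """All assignments over variables 1..n satisfying a CNF formula.
    A literal is +i or -i; a clause is a set of literals."""
    return {bits for bits in product([False, True], repeat=n)
            if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)}

# gamma = (x1 or x2) and (not x1 or x3); heart H = {x1}
gamma = [{1, 2}, {-1, 3}]
H = {1}

heart_branch = gamma + [set(H)]              # some literal of H is true
petals_branch = gamma + [{-l} for l in H]    # every literal of H is false

s, s1, s2 = sat_set(gamma, 3), sat_set(heart_branch, 3), sat_set(petals_branch, 3)
assert s == s1 | s2      # no satisfying assignment is lost
assert not (s1 & s2)     # ... and none is counted twice
```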

Proof (of Theorem 1.8). For all integers d ≥ 3 and k ≥ 1, the sparsification lemma gives an oracle reduction from #d-Sat to #d-Sat that, on input a formula γ with n variables, only queries formulas with m = O(n) clauses, such that the reduction runs in time exp(O(n/k)). Now, if for every c > 0 there is an algorithm for #d-Sat that runs in time exp(c·m), we can combine this algorithm and the above oracle reduction to obtain an algorithm for #d-Sat that runs in time exp(O(n/k) + c·m) = exp(O(n/k) + c·O(n)).

Since this holds for all small c > 0 and large k, we have for every c′ > 0 an algorithm for #d-Sat running in time exp(c′·n). This proves that for all d ≥ 3, #d-Sat can be solved in variable-subexponential time if and only if it can be solved in clause-subexponential time.

To establish the equivalence between different d's, we transform an instance ϕ of #d-Sat into an instance ϕ′ of #3-Sat that has the same number of satisfying assignments.

The formula ϕ′ is constructed as in the standard width-reduction for d-CNF formulas, i.e., by introducing a constant number of new variables for every clause of ϕ. Thus, since the number of clauses of ϕ′ is O(m), any clause-subexponential algorithm for #3-Sat implies a clause-subexponential algorithm for #d-Sat.

D. Hardness of 3-Colouring and 3-Terminal MinCut

The purpose of this section is to show that the standard reductions from 3-Sat to the computational problems 3-Colouring, NAE-Sat, MaxCut, and 3-Terminal MinCut already preserve the number of solutions and increase the number of clauses or edges of the instances by at most a constant factor. This then implies that the corresponding counting problems require time exponential in the number of clauses or edges unless #ETH fails.

Theorem D.1. Deterministically computing the problems #NAE-Sat, #MaxCut, and #3-Terminal MinCut requires time exp(Ω(m)) unless #ETH fails.

In the following, we formally define the problems, sketch the standard NP-hardness reductions, and provide their analyses as needed to prove Theorem D.1.

Name #NAE-Sat
Input 3-CNF formula ϕ.
Output The number of truth assignments such that no clause {a, b, c} ∈ ϕ contains only literals with the same truth value.

Lemma D.1. There is a polynomial-time reduction from #3-Sat to #NAE-Sat that maps formulas with m clauses to formulas with O(m) clauses.

Proof. Let ϕ be a 3-CNF formula with n variables and m clauses. To construct the instance ϕ′ to NAE-Sat, we first replace every trivariate clause (a ∨ b ∨ c) with the clauses (x ∨ ¬a) ∧ (x ∨ ¬b) ∧ (¬x ∨ a ∨ b) ∧ (x ∨ c), where x is a fresh variable. The first three clauses force x to have the value of a ∨ b in a satisfying assignment. It can be checked that these clauses are satisfied exactly if the original clause was satisfied, and moreover that the trivariate clause (¬x ∨ a ∨ b) is never all-false or all-true.

In total, we increase the number of clauses four-fold.

Finally, introduce yet another fresh variable z and add this single variable (positively) to every mono- and bivariate clause. It is well-known that this modification turns the instance into an instance of NAE-Sat (see [Pap94, Theorem 3]). The resulting instance ϕ′ has 4m clauses and #NAE-Sat(ϕ′) = #3-Sat(ϕ).
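The crucial property of the gadget, that the auxiliary clauses force x to take the value of a ∨ b, can be verified exhaustively. We write the three forcing clauses as (x ∨ ¬a), (x ∨ ¬b), (¬x ∨ a ∨ b), which is our reading of the construction:

```python
from itertools import product

# The three forcing clauses hold exactly when x == (a or b).
for a, b, x in product([False, True], repeat=3):
    forcing = (x or not a) and (x or not b) and ((not x) or a or b)
    assert forcing == (x == (a or b))

# With x forced, the remaining clause (x or c) behaves like (a or b or c),
# and the trivariate clause (not x or a or b) is never all-true or all-false.
for a, b, c in product([False, True], repeat=3):
    x = a or b
    assert (x or c) == (a or b or c)
    lits = (not x, a, b)
    assert any(lits) and not all(lits)
```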

Name #MaxCut
Input Simple undirected graph G.
Output The number of maximum cuts.

A maximum cut is a set C ⊆ V(G) that maximizes the number |E(C, V \ C)| of edges of G that cross the cut.
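Since the counting problem is over cut sets C rather than partitions, a set C and its complement are counted separately. A brute-force Python sketch of this definition (our illustration only):

```python
from itertools import combinations

def count_max_cuts(vertices, edges):
    """Count the sets C of vertices maximizing the number of crossing edges."""
    best, count = -1, 0
    for r in range(len(vertices) + 1):
        for comb in combinations(vertices, r):
            C = set(comb)
            cut = sum(1 for u, v in edges if (u in C) != (v in C))
            if cut > best:
                best, count = cut, 1
            elif cut == best:
                count += 1
    return best, count

# Example: the triangle has maximum cut value 2, attained by the six
# nonempty proper subsets (C and its complement counted separately).
assert count_max_cuts([1, 2, 3], [(1, 2), (2, 3), (3, 1)]) == (2, 6)
```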

Lemma D.2. There is a polynomial-time reduction from #NAE-Sat to #MaxCut that maps formulas with m clauses to graphs with O(m) edges.

Proof. We follow the reduction in [Pap94, Theorem 9.5]. Given an instance to NAE-Sat with n variables and m clauses, the reduction produces an instance to MaxCut, a graph with 2n vertices and at most 3m + 3m = 6m edges. Furthermore, the numbers of solutions are equal.

Name #3-Terminal MinCut
Input Simple undirected graph G = (V, E) with three distinguished vertices ("terminals") t1, t2, t3 ∈ V.
Output The number of cuts of minimal size that separate t1 from t2, t2 from t3, and t3 from t1.
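Reading the cuts as edge sets, the definition can be checked by brute force on tiny instances (our illustration, exponential in the number of edges):

```python
from itertools import combinations

def count_min_3cuts(edges, terminals):
    """Count minimum-size edge sets whose removal pairwise separates
    the three terminals; returns (minimum size, number of such cuts)."""
    t1, t2, t3 = terminals

    def separated(remaining):
        adj = {}
        for u, v in remaining:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)

        def comp(s):  # connected component of s via flood fill
            seen, stack = {s}, [s]
            while stack:
                u = stack.pop()
                for w in adj.get(u, ()):
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            return seen

        c1 = comp(t1)
        return t2 not in c1 and t3 not in c1 and t3 not in comp(t2)

    m = len(edges)
    for size in range(m + 1):
        cuts = [S for S in combinations(range(m), size)
                if separated([e for i, e in enumerate(edges) if i not in S])]
        if cuts:
            return size, len(cuts)

# Example: a star with centre 0 and terminals 1, 2, 3 has minimum
# 3-terminal cut size 2, attained by the three ways of keeping one edge.
assert count_min_3cuts([(0, 1), (0, 2), (0, 3)], (1, 2, 3)) == (2, 3)
```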

Lemma D.3. There is a polynomial-time reduction from #MaxCut to #3-Terminal MinCut that maps graphs with m edges to graphs with O(m) edges.

Proof. We follow the reduction in [DJP+94]. So let G = (V, E) be a graph with n vertices and m edges. It is made explicit in [DJP+94] that the construction builds a graph F with n′ = 3 + n + 4m = O(m) vertices. For the number of edges, every uv ∈ E results in a gadget graph C with 18 edges, so the number of edges in F is 18m = O(m).

The construction is such that the number of minimum 3-terminal cuts of F equals the number of maximum cuts of G.

Proof (of Theorem D.1). Assume one of the problems can be solved in time exp(cm) for every c > 0. Then #3-Sat can be solved by first applying the applicable reductions of the preceding lemmas and then invoking the assumed algorithm. This gives for every c > 0 an algorithm for #3-Sat that runs in time exp(O(cm)), which implies that #ETH fails.

Hardness of Colouring and Other Individual Points on the Chromatic Line

Theorem 1.11 (ii) cannot be handled by the proof of Proposition 5.3 because thickenings do not produce enough points for the interpolation. Instead, we use Linial's reduction [Lin86] along this line. For this, we need to observe that #3-Colouring is hard under #ETH.

Name #3-Colouring
Input Simple undirected graph G.
Output The number of proper vertex-colourings with three colours.

Lemma D.4. There is a polynomial-time reduction from #NAE-Sat to #3-Colouring that maps formulas with m clauses to graphs with O(m) edges.

Proof. The hardness of 3-Colouring under ETH is already observed in [IPZ01], but without mentioning that it even holds if we use m instead of n to measure the size of the instance. For the counting variant, observe that the graph G′ that is constructed in [Pap94, Theorem 9.8] from an NAE-Sat instance ϕ with n variables and m clauses has n′ = 1 + 2n + 3m vertices and m′ = 3n + 6m edges. Furthermore, the number of proper 3-colourings of G′ is equal to #NAE-Sat(ϕ).

The chromatic polynomial χ(G; q) of G is the polynomial in q with the property that, for all c ∈ N, the evaluation χ(G; c) is the number of proper c-colourings of the vertices of G. We write χ(q) for the function G ↦ χ(G; q). The Tutte polynomial specializes to the chromatic polynomial for y = 0:

χ(G; q) = (−1)^(n(G)−k(G)) · q^(k(G)) · T(G; 1 − q, 0).    (D.1)
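As a concrete sanity check of (D.1), one can compare both sides against a brute-force count of proper colourings, here for the triangle, assuming the standard fact T(K3; x, y) = x² + x + y:

```python
from itertools import product

def count_colourings(n, edges, q):
    """Brute-force number of proper q-colourings of a graph on vertices 0..n-1."""
    return sum(all(col[u] != col[v] for u, v in edges)
               for col in product(range(q), repeat=n))

# Triangle: n(G) = 3, one component (k(G) = 1), T(K3; x, y) = x^2 + x + y.
edges = [(0, 1), (1, 2), (2, 0)]
for q in range(1, 6):
    tutte = (1 - q) ** 2 + (1 - q) + 0        # T(K3; 1 - q, 0)
    rhs = (-1) ** (3 - 1) * q ** 1 * tutte    # right-hand side of (D.1)
    assert count_colourings(3, edges, q) == rhs
```

For q = 3 both sides give 6, matching χ(K3; q) = q(q − 1)(q − 2).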

We now prove Theorem 1.11 (ii).

Proposition D.1. Let x ∈ {−2, −3, ...}. Then Tutte01(x, 0) requires time exp(Ω(m)) under #ETH.

Proof. Set q = 1 − x. We use (D.1) to see that evaluating the chromatic polynomial χ(q) is equivalent to evaluating Tutte(x, 0) if q ≠ 0. Since χ(3) is the number of 3-colourings, the case q = 3 requires time exp(Ω(m)) under #ETH, even for simple graphs (cf. App. D). For i ∈ {1, 2, ...} and all real r, Linial's identity is

χ(G + K_i; r) = r · (r − 1) · · · (r − i + 1) · χ(G; r − i),    (D.2)

where G + K_i denotes the join of G with a clique on i fresh vertices. For the fixed integer q ≥ 3, querying χ(q) on the graph G + K_{q−3}, which has O(m) edges, thus yields χ(G; 3). It follows that χ(q) requires time exp(Ω(m)) under #ETH, even for simple graphs.

Proposition D.2. Let x ∈ Q \ {1, 0, −1, −2, −3, ...}. Then Tutte01(x, 0) requires time exp(Ω(n)) under #ETH.

Proof. Set q = 1 − x. We show that Tutte01(x, 0) requires time exp(Ω(n)) under #ETH. Indeed, with access to χ(q), we can compute χ(G; q − i) for all i = 0, ..., n, noting that all prefactors in (D.2) are nonzero. From these n + 1 values, we interpolate to get the coefficients of the polynomial r ↦ χ(G; r), which in turn allows us to evaluate χ(G; 3). In this case, the size of the oracle queries depends nonlinearly on the size of G, in particular m(G + K_n) ∼ n². However, the number of vertices is n(G + K_i) ≤ 2n = O(m(G)). Thus, since χ(3) requires time exp(Ω(n)) under #ETH, this also holds for χ(q), even for simple graphs.
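The interpolation step can be sketched generically: given values of a degree-n polynomial at n + 1 distinct points, Lagrange interpolation recovers its value anywhere. A minimal Python sketch (our illustration, with exact rational arithmetic; the toy polynomial below stands in for χ(G; ·)):

```python
from fractions import Fraction

def lagrange_eval(points, r):
    """Evaluate at r the unique polynomial through the given (x, y) points."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(r - xj, xi - xj)
        total += term
    return total

# Toy example: chi(K3; r) = r(r-1)(r-2) has degree n = 3; sample it at the
# four points q - i for a non-integer q, then interpolate back to r = 3.
q = Fraction(5, 2)
chi = lambda r: r * (r - 1) * (r - 2)
points = [(q - i, chi(q - i)) for i in range(4)]
assert lagrange_eval(points, 3) == 6
```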

The only points on the x-axis not covered here are x ∈ {1, 0, −1}. Two of these admit polynomial-time algorithms, so we expect no hardness result. By Theorem 1.11 (iii), evaluating the Tutte polynomial at the point (1, 0) requires time exp(Ω(m/log² m)) under #ETH.

Bibliography

[AB09] Arora, Sanjeev; Barak, Boaz: Computational Complexity: A Modern Approach. Cambridge University Press, New York, NY, USA, 2009. ISBN 0521424267, 9780521424264.

[AFKS00] Alon, Noga; Fischer, Eldar; Krivelevich, Michael; Szegedy, Mario: Efficient testing of large graphs. In: Combinatorica, volume 20(4):pp. 451–476, 2000. doi:10.1109/SFFCS.1999.814642.

[Agr06] Agrawal, Manindra: Determinant versus permanent. In: Proceedings of the 25th International Congress of Mathematicians, ICM 2006, volume 3, pp. 985–997, 2006.

[AKKR08] Alon, Noga; Kaufman, Tali; Krivelevich, Michael; Ron, Dana: Testing triangle-freeness in general graphs. In: SIAM Journal on Discrete Mathematics, volume 22(2):pp. 786–819, 2008. doi:10.1145/1109557.1109589.

[Alo02] Alon, Noga: Testing subgraphs in large graphs. In: Random structures and algorithms, volume 21(3–4):pp. 359–370, 2002. doi:10.1002/rsa.10056.

[AM07] Achlioptas, Dimitris; Moore, Christopher: Random k-SAT: two moments suﬃce to cross a sharp threshold. In: SIAM Journal on Computing, volume 36(3):pp. 740–762, 2007. doi:10.1137/S0097539703434231.

[AP04] Achlioptas, Dimitris; Peres, Yuval: The threshold for random k-SAT is 2k log 2 − O(k). In: Journal of the American Mathematical Society, volume 17(4):pp. 947–973, 2004. doi:10.1090/S0894-0347-04-00464-3.

[AS04] Alon, Noga; Shapira, Asaf: Testing subgraphs in directed graphs. In: Journal of Computer and System Sciences, volume 69(3):pp. 354–382, 2004. doi:10.1016/j.jcss.2004.04.008.

[AS05] Alon, Noga; Shapira, Asaf: Linear equations, arithmetic progressions and hypergraph property testing. In: Theory of Computing, volume 1(1):pp. 177–216, 2005. doi:10.4086/toc.2005.v001a009.