Track A: Accepted Papers (with abstracts)

The following contributions to Track A have been accepted for presentation at ICALP 2011. They are listed in no particular order.


Leslie Ann Goldberg and Mark Jerrum. A polynomial-time algorithm for estimating the partition function of the ferromagnetic Ising model on a regular matroid
Abstract: We investigate the computational difficulty of approximating the partition function of the ferromagnetic Ising model on a regular matroid. Jerrum and Sinclair have shown that there is a fully polynomial randomised approximation scheme (FPRAS) for the class of graphic matroids. On the other hand, the authors have previously shown, subject to a complexity-theoretic assumption, that there is no FPRAS for the class of binary matroids, which is a proper superset of the class of graphic matroids. In order to map out the region where approximation is feasible, we focus on the class of regular matroids, an important class of matroids which properly includes the class of graphic matroids, and is properly included in the class of binary matroids. Using Seymour's decomposition theorem, we give an FPRAS for the class of regular matroids.
Leah Epstein, Csanad Imreh, Asaf Levin and Judit Nagy-Gyorgy. On variants of file caching
Abstract: In the file caching problem, the input is a sequence of requests for files out of a slow memory. A file has two attributes, a retrieval cost and an integer size. It is required to maintain a cache of size k, bringing each requested file that is not present in the cache at the time of its request from the slow memory into the cache. This situation is called a miss and incurs a cost equal to the retrieval cost of the file. Well-known special cases include paging (all costs and sizes are equal to 1), the cost model, also known as weighted paging (all sizes are equal to 1), the fault model (all costs are equal to 1) and the bit model (the cost of a file is equal to its size). We study two online variants of the problem: caching with bypassing and caching with rejection. If bypassing is allowed, a miss for a file still results in an access to this file in the slow memory, but its subsequent insertion into the cache is optional. In the model with rejection, the algorithm is informed of a rejection penalty together with each request for a file. When a file which is not present in the cache is requested, the algorithm must either bring the file into the cache, paying the retrieval cost of the file, or reject the file, paying the rejection penalty of the request. The goal function is the sum of all amounts paid by the algorithm, that is, the sum of the total rejection penalty and the total retrieval cost. This problem generalizes both caching and caching with bypassing. We study the relation between these two variants and design deterministic and randomized algorithms for both problems. The competitive ratios of the randomized algorithms match the best known results for caching: these are O(\log k)-competitive algorithms for all important special cases, and an O(\log^2 k)-competitive algorithm for general caching. This improves the best known results with bypassing for paging, the bit model and the cost model, and gives the first algorithms of competitive ratio o(k) for the other cases. In the deterministic case, it is known that a (k+1)-competitive algorithm for caching with bypassing exists, and that this is best possible. In contrast, we present a lower bound of 2k+1 on the competitive ratio of any deterministic algorithm for the variant with rejection, which holds already for paging. Exploiting the relation between the two problems, we design a (2k+2)-competitive algorithm for caching with rejection. We also design a different (2k+1)-competitive algorithm for the latter problem, which is applicable to paging, the bit model and the cost model.
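To make the cost model above concrete, here is a minimal Python sketch (an illustration of the problem definition, not the paper's competitive algorithm) that simulates caching with rejection under a naive greedy rule: on a miss it rejects whenever the penalty is cheaper than the retrieval cost, and otherwise inserts the file, evicting least-recently-used files until it fits. All names and the greedy rule itself are assumptions made for the example.

```python
from collections import OrderedDict

def greedy_rejection_cost(requests, k):
    """Simulate the caching-with-rejection cost model.

    requests: list of (file_id, size, retrieval_cost, rejection_penalty).
    k: cache capacity. Returns the total cost paid.
    NOTE: the greedy reject-if-cheaper rule is illustrative only; it is
    not the (2k+2)-competitive algorithm from the paper.
    """
    cache = OrderedDict()          # file_id -> size, kept in LRU order
    used = 0
    total = 0.0
    for fid, size, retrieval, penalty in requests:
        if fid in cache:           # hit: free, just refresh LRU position
            cache.move_to_end(fid)
            continue
        if penalty <= retrieval or size > k:
            total += penalty       # reject: pay this request's penalty
            continue
        total += retrieval         # miss: bring the file in
        while used + size > k:     # evict LRU files until it fits
            _, evicted = cache.popitem(last=False)
            used -= evicted
        cache[fid] = size
        used += size
    return total

# Cache of size 4: 'a' is inserted, 'b' is inserted, second 'a' is a hit.
reqs = [('a', 2, 5.0, 9.0), ('b', 2, 1.0, 3.0), ('a', 2, 5.0, 9.0)]
print(greedy_rejection_cost(reqs, 4))  # 6.0
```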
Andrei Bulatov and Dániel Marx. Constraint satisfaction parameterized by solution size
Abstract: In the constraint satisfaction problem (CSP) corresponding to a constraint language (i.e., a set of relations) $\Gamma$, the goal is to find an assignment of values to variables so that a given set of constraints specified by relations from $\Gamma$ is satisfied. In this paper we study the fixed-parameter tractability of constraint satisfaction problems parameterized by the size of the solution in the following sense: one of the possible values, say 0, is ``free,'' and the number of variables allowed to take other, ``expensive,'' values is restricted. A size constraint requires that exactly $k$ variables take nonzero values. We also study a more refined version of this restriction: a global cardinality constraint prescribes how many variables have to be assigned each particular value. We study the parameterized complexity of these types of CSPs where the parameter is the required number $k$ of nonzero variables. As special cases, we can obtain natural and well-studied parameterized problems such as Independent Set, Vertex Cover, $d$-Hitting Set, Biclique, etc. In the case of constraint languages closed under substitution of constants, we give a complete characterization of the fixed-parameter tractable cases of CSPs with size constraints, and we show that all the remaining problems are W[1]-hard. For CSPs with cardinality constraints, we obtain a similar classification, but for some of the problems we are only able to show that they are Biclique-hard. The exact parameterized complexity of the Biclique problem is a notorious open problem, although it is believed to be W[1]-hard.
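Since the definition is compact, a brute-force sketch may help fix ideas. The following hypothetical Python snippet enumerates Boolean assignments with exactly k nonzero variables and checks them against relational constraints; the example encodes Independent Set, one of the special cases mentioned above. It illustrates the problem statement only, not the paper's fixed-parameter algorithms.

```python
from itertools import combinations

def csp_with_size_constraint(n, constraints, k):
    """Brute-force a Boolean CSP under the size constraint 'exactly k ones'.

    constraints: list of (scope, relation), where scope is a tuple of
    variable indices and relation is a set of allowed value tuples.
    Returns a satisfying 0/1 assignment with exactly k ones, or None.
    Exponential time: this only illustrates the definition.
    """
    for ones in combinations(range(n), k):
        x = [0] * n
        for v in ones:
            x[v] = 1
        if all(tuple(x[v] for v in scope) in rel for scope, rel in constraints):
            return x
    return None

# Independent Set as a size-constrained CSP: one binary constraint per
# edge forbids both endpoints from taking the "expensive" value 1.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
not_both_one = {(0, 0), (0, 1), (1, 0)}
print(csp_with_size_constraint(4, [(e, not_both_one) for e in edges], 2))
# -> [1, 0, 1, 0], an independent set of size 2 in the 4-cycle
```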
Fabian Kuhn and Monaldo Mastrolilli. Vertex Cover in Graphs with Locally Few Colors
Abstract: In 1986 Erd\H{o}s et al. defined the local chromatic number of a graph as the minimum number of colors that must appear within distance 1 of a vertex. For any fixed $\Delta\geq 2$, they presented graphs with arbitrarily large chromatic number that can be colored so that: (i) no vertex neighborhood contains more than $\Delta$ different colors (\emph{bounded local colorability}), and (ii) adjacent vertices from two color classes form an induced subgraph that is complete and bipartite (\emph{local completeness}). We investigate the weighted vertex cover problem in graphs when a locally bounded coloring is given as input. This generalizes in a very natural way the vertex cover problem in bounded degree graphs to a class of graphs with arbitrarily large chromatic number. Assuming the Unique Games Conjecture, we provide a tight characterization. More precisely, we prove that it is UG-hard to improve the approximation ratio of $2-2/(\Delta+1)$ if only the bounded local colorability condition (i), but not the local completeness condition (ii), holds for the given coloring. A matching upper bound is also provided. Conversely, when both properties (i) and (ii) hold, we present a randomized approximation algorithm with performance ratio $2-\Omega\left(\frac{\ln \ln \Delta}{\ln\Delta}\right)$. This matches (up to the constant factor in the lower order term) known inapproximability results for the special case of bounded degree graphs. Moreover, we show that when both properties (i) and (ii) hold, the obtained result finds a natural application in a classical scheduling problem, namely the precedence constrained single machine scheduling problem to minimize the weighted sum of completion times. In a series of recent papers it was established that this scheduling problem is a special case of the minimum weighted vertex cover problem in graphs $G$ of incomparable pairs defined in the dimension theory of partial orders. We show that $G$ satisfies properties (i) and (ii) where $\Delta-1$ is the maximum number of predecessors (or successors) of each job.
Sourav Chakraborty, David García-Soriano and Arie Matsliah. Efficient Sample Extractors for Juntas with Applications
Abstract: We develop a query-efficient sample extractor for juntas, that is, a probabilistic algorithm that can simulate random samples from the core of a $k$-junta $f:\{0,1\}^n \to \{0,1\}$ given oracle access to a function $f':\{0,1\}^n \to \{0,1\}$ that is only close to $f$. After a preprocessing step, which takes $\widetilde{O}(k)$ queries, generating each sample to the core of $f$ takes only one query to $f'$. We then plug in our sample extractor in the ``testing by implicit learning'' framework of Diakonikolas et al. \cite{til}, improving the query complexity of testers for various Boolean function classes. In particular, for some of the classes considered in \cite{til}, such as $s$-term DNF formulas, size-$s$ decision trees, size-$s$ Boolean formulas, $s$-sparse polynomials over $\mathbb{F}_2$, and size-$s$ branching programs, the query complexity is reduced from $\widetilde{O}(s^4/\epsilon ^2)$ to $\widetilde{O}(s/\epsilon ^2)$. This shows that using the new sample extractor, testing by implicit learning can lead to testers having better query complexity than those tailored to a specific problem, such as the tester of Parnas et al. \cite{prs} for the class of monotone $s$-term DNF formulas. In terms of techniques, we extend the tools used in \cite{fiso} for testing function isomorphism to juntas. Specifically, while the original analysis in \cite{fiso} allowed query-efficient noisy sampling from the core of any $k$-junta $f$, the one presented here allows similar sampling from the core of the {\em closest} $k$-junta to $f$, even if $f$ is not a $k$-junta but just close to being one. One of the observations leading to this extension is that the junta tester of Blais \cite{blais_juntas}, based on which the aforementioned sampling is achieved, enjoys a certain weak form of tolerance.
Christine Cheng, Eric McDermid and Ichiro Suzuki. Center stable matchings and centers of cover graphs of distributive lattices
Abstract: Let I be an instance of the stable marriage (SM) problem. In the late 1990s, Teo and Sethuraman discovered the existence of median stable matchings, which are stable matchings that match all participants to their (lower/upper) median stable partner. About a decade later, Cheng showed that not only are they locally-fair, but they are also globally-fair in the following sense: when G(I) is the cover graph of the distributive lattice of stable matchings, these stable matchings are also medians of G(I) -- i.e., their average distance to the other stable matchings is as small as possible. Unfortunately, finding a median stable matching of I is #P-hard. Inspired by the fairness properties of the median stable matchings, we study the center stable matchings, which are the centers of G(I) -- i.e., the stable matchings whose maximum distance to any stable matching is as small as possible. Here are our two main results. First, we show that a center stable matching of I can be computed in O(|I|^{2.5}) time. Thus, center stable matchings are the first type of globally-fair stable matchings we know of that can be computed efficiently. Second, we show that in spite of the first result, there are similarities between the set of median stable matchings and the set of center stable matchings of I. The former induces a hypercube in G(I) while the latter is the union of hypercubes of a fixed dimension in G(I). Furthermore, center stable matchings have a property that approximates the one found by Teo and Sethuraman for median stable matchings. Finally, we note that our results extend to other variants of SM whose solutions form a distributive lattice and whose rotation posets can be constructed efficiently.
Ashley Montanaro, Aram Harrow and Anthony Short. Limitations on quantum dimensionality reduction
Abstract: The Johnson-Lindenstrauss Lemma is a classic result which implies that any set of n real vectors can be compressed to O(log n) dimensions while only distorting pairwise Euclidean distances by a constant factor. Here we consider potential extensions of this result to the compression of quantum states. We show that, by contrast with the classical case, there does not exist any distribution over quantum channels that significantly reduces the dimension of quantum states while preserving the 2-norm distance with high probability. We discuss two tasks for which the 2-norm distance is indeed the correct figure of merit. In the case of the trace norm, we show that the dimension of low-rank mixed states can be reduced by up to a square root, but that essentially no dimensionality reduction is possible for highly mixed states.
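For contrast with the quantum impossibility result, the classical Johnson-Lindenstrauss compression it refers to is easy to demonstrate. The following numpy sketch (with an illustrative constant in the target dimension) projects points with a random Gaussian map and compares one pairwise distance before and after.

```python
import numpy as np

def jl_project(X, eps, seed=0):
    """Classical Johnson-Lindenstrauss: map n points in R^D down to
    d = O(log(n)/eps^2) dimensions via a random Gaussian matrix; with
    high probability all pairwise distances are preserved up to 1 +/- eps.
    The constant 8 below is illustrative, not optimized."""
    n, D = X.shape
    d = int(np.ceil(8 * np.log(n) / eps**2))
    R = np.random.default_rng(seed).normal(size=(D, d)) / np.sqrt(d)
    return X @ R

X = np.random.default_rng(1).normal(size=(50, 10_000))  # 50 points in R^10000
Y = jl_project(X, eps=0.2)
print(Y.shape)  # dimension drops to O(log n / eps^2)
print(np.linalg.norm(X[0] - X[1]), np.linalg.norm(Y[0] - Y[1]))  # close
```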
Jakob Nordstrom and Alexander Razborov. On Minimal Unsatisfiability and Time-Space Trade-offs for k-DNF Resolution
Abstract: A well-known theorem by Tarsi states that a minimally unsatisfiable CNF formula with m clauses can have at most m-1 variables, and this bound is exact. In the context of proving lower bounds on proof space in k-DNF resolution, [Ben-Sasson and Nordström 2009] extended the concept of minimal unsatisfiability to sets of k-DNF formulas and proved that a minimally unsatisfiable k-DNF set with m formulas can have at most O((mk)^(k+1)) variables. This result is far from tight, however, since they could only present explicit constructions of minimally unsatisfiable sets with Omega(mk^2) variables. In the current paper, we revisit this combinatorial problem and significantly improve the lower bound to Omega(m)^k, which almost matches the upper bound above. Furthermore, using similar ideas we show that the analysis of the technique in [Ben-Sasson and Nordström 2009] for proving time-space separations and trade-offs for k-DNF resolution is almost tight. This means that although it is possible, or even plausible, that stronger results than in [Ben-Sasson and Nordström 2009] should hold, a fundamentally different approach would be needed to obtain such results.
Shengyu Zhang. On the power of lower bound methods for one-way quantum communication complexity
Abstract: This paper studies the three known lower bound methods for one-way quantum communication complexity, namely the Partition Tree method by Nayak, the Trace Distance method by Aaronson, and the two-way quantum communication complexity. It is shown that all three methods can be exponentially weak for some total Boolean functions. In particular, for a large class of functions generated from Erdős-Rényi random graphs G(N,p) with p in some range of 1/poly(N), the two-way quantum communication complexity gives a linear lower bound while the other two methods give only constant lower bounds. This denies the possibility of using any of these known quantum lower bounds for proving the fundamental conjecture that the classical and quantum one-way communication complexities are polynomially related. En route, we also discover that the power of Nayak's method is exactly equal to the \emph{extended equivalence query complexity} in learning theory.
Markus Jalsenius and Raphael Clifford. Lower Bounds for Online Integer Multiplication and Convolution in the Cell-Probe Model
Abstract: We show time lower bounds for both online integer multiplication and convolution in the cell-probe model with word size w. For the multiplication problem, one pair of digits, each from one of two n-digit numbers that are to be multiplied, is given as input at step i. The online algorithm outputs a single new digit from the product of the numbers before step i+1. We give a lower bound of Omega((d/w) log n) time on average per output digit for this problem, where 2^d is the maximum value of a digit. In the convolution problem, we are given a fixed vector V of length n and we consider a stream in which numbers arrive one at a time. We output the inner product of V and the vector that consists of the last n numbers of the stream. We show an Omega((d/w) log n) lower bound for the time required per new number in the stream. All the bounds presented hold under randomisation and amortisation. Multiplication and convolution are central problems in the study of algorithms and also have a wide range of practical applications.
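The online convolution problem above is easy to state in code. This small Python sketch maintains the window of the last n stream values and recomputes the inner product naively in O(n) word operations per arrival, the trivial upper bound against which the paper's Omega((d/w) log n) per-output lower bound should be read. Aligning the newest value with V[0] is a convention chosen for the example.

```python
from collections import deque

class OnlineConvolution:
    """Maintain the last n stream values and report their inner product
    with a fixed length-n vector V after each arrival. This trivial
    method costs O(n) word operations per arrival; the paper shows an
    Omega((d/w) log n) amortized lower bound in the cell-probe model."""

    def __init__(self, V):
        self.V = V
        self.window = deque([0] * len(V), maxlen=len(V))

    def push(self, x):
        self.window.append(x)  # the oldest value falls out automatically
        # pair the newest arrival with V[0], the next-newest with V[1], ...
        return sum(v * s for v, s in zip(self.V, reversed(self.window)))

oc = OnlineConvolution([1, 2, 3])
for x in [5, 7, 11, 13]:
    print(oc.push(x))  # inner product over the current window
```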
Chien-Chung Huang and Telikepalli Kavitha. Popular Matchings in the Stable Marriage Problem
Abstract: The input is a bipartite graph G = (A U B, E) where each vertex u in A U B ranks its neighbors in a strict order of preference. This is the same as an instance of the stable marriage problem with incomplete lists. A matching M is said to be popular if there is no matching M' such that more vertices are better off in M' than in M. Any stable matching of G is popular; however, stable matchings are popular matchings of minimum cardinality. Here we consider the problem of computing a maximum cardinality popular matching. It has very recently been shown that when preference lists have ties, the problem of determining if a given instance admits a popular matching or not is NP-complete. When preference lists are strict, popular matchings always exist; however, the complexity of computing a maximum cardinality popular matching was unknown. In this paper we give a simple characterization of popular matchings when preference lists are strict and a sufficient condition for a maximum cardinality popular matching. We then show an O(mn) algorithm for computing a maximum cardinality popular matching, where m is the number of edges and n is the number of vertices in G.
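The popularity condition can be checked by brute force on tiny instances, which may help unpack the definition. The following hypothetical Python sketch enumerates all matchings and compares head-to-head votes; it illustrates why stable matchings (minimum cardinality) and larger popular matchings can coexist, and is of course nothing like the paper's O(mn) algorithm.

```python
from itertools import combinations

def matchings(edges):
    """Yield every matching (set of pairwise disjoint edges), incl. empty."""
    for r in range(len(edges) + 1):
        for combo in combinations(edges, r):
            touched = [v for e in combo for v in e]
            if len(touched) == len(set(touched)):
                yield set(combo)

def rank(pref, v, partner):
    """Position of partner in v's list; being unmatched ranks worst."""
    return pref[v].index(partner) if partner is not None else len(pref[v])

def partner_in(M, v):
    for a, b in M:
        if v == a: return b
        if v == b: return a
    return None

def is_popular(M, edges, pref):
    """M is popular if no matching M' makes more vertices better off
    than worse off (brute force; exponential in the number of edges)."""
    for Mp in matchings(edges):
        votes = 0
        for v in pref:
            old = rank(pref, v, partner_in(M, v))
            new = rank(pref, v, partner_in(Mp, v))
            votes += (new < old) - (new > old)
        if votes > 0:
            return False
    return True

pref = {'a1': ['b1', 'b2'], 'a2': ['b1'], 'b1': ['a1', 'a2'], 'b2': ['a1']}
edges = [('a1', 'b1'), ('a1', 'b2'), ('a2', 'b1')]
print(is_popular({('a1', 'b1')}, edges, pref))                # True: stable, size 1
print(is_popular({('a1', 'b2'), ('a2', 'b1')}, edges, pref))  # True: popular, size 2
```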
Jiawei Qian and David Williamson. An O(log n)-competitive algorithm for online constrained forest problems
Abstract: In the generalized Steiner tree problem, we find a minimum-cost set of edges to connect a given set of source-sink pairs.  In the online version of this problem, the source-sink pairs arrive over time.  Agrawal, Klein, and Ravi give a 2-approximation algorithm for the offline problem; Berman and Coulston give an O(log n)-competitive algorithm for the online problem. Goemans and Williamson subsequently generalized the offline algorithm of Agrawal et al. to handle a large class of problems they called constrained forest problems, and other problems, such as the prize-collecting Steiner tree problem.  In this paper, we show how to combine the ideas of Goemans and Williamson and those of Berman and Coulston to give an O(log n)-competitive algorithm for online constrained forest problems, including an online version of the prize-collecting Steiner tree problem.
Hung Ngo, Ely Porat and Atri Rudra. Efficiently Decodable Error-Correcting List Disjunct Matrices and Applications
Abstract: A $(d,\ell)$-list disjunct matrix is a non-adaptive group testing primitive which, given a set of $n$ items with at most $d$ ``defectives,'' outputs a super-set of the defectives containing at most $\ell-1$ non-defective items. The primitive has found many applications as a stand-alone object and as a building block in the construction of other combinatorial objects. This paper studies error-tolerant list disjunct matrices which can correct up to $e_0$ false positives and $e_1$ false negatives in sub-linear time. We then use list disjunct matrices to prove new results in three different applications. Our major contributions are as follows. First, we prove several (almost) matching lower and upper bounds for the optimal number of tests, including the fact that $\Theta(d\log(n/d) + e_0+de_1)$ is necessary and sufficient when $\ell=\Theta(d)$. Similar results are also derived for the disjunct matrix case (i.e., $\ell=1$). Second, we present two methods that convert error-tolerant list disjunct matrices in a \textit{black-box} manner into error-tolerant list disjunct matrices that are also {\em efficiently decodable}. The methods help us derive a family of (strongly) explicit constructions of list disjunct matrices which are either optimal or near optimal, and which are also efficiently decodable. Third, we show how to use error-correcting efficiently decodable list disjunct matrices in three different applications: (i) explicit constructions of $d$-disjunct matrices with $t = O(d^2\log n+rd)$ tests which are decodable in $\mathrm{poly}(t)$ time, where $r$ is the maximum number of test errors. This result is optimal for $r = \Omega(d\log n)$, and even for $r=0$ this result improves upon known results; (ii) (explicit) constructions of (near)-optimal, error-correcting, and efficiently decodable monotone encodings; and (iii) (explicit) constructions of (near)-optimal, error-correcting, and efficiently decodable multiple user tracing families.
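As background for the decoding results, the following Python sketch shows the standard naive decoder for (list-)disjunct matrices: an item remains a candidate defective only if every test containing it comes back positive. It runs in O(tn) time; the point of the paper is to obtain comparable guarantees in sub-linear (poly(t)) time in the presence of test errors, which this sketch does not attempt. The toy matrix is an assumption for the demo.

```python
import numpy as np

def naive_decode(M, outcomes):
    """Naive group-testing decoder: an item is a candidate defective iff
    every test containing it is positive. For a (d, l)-list disjunct M
    and error-free outcomes, the candidates contain all defectives plus
    at most l-1 false positives. Runs in O(t*n) time."""
    t, n = M.shape
    candidates = np.ones(n, dtype=bool)
    for i in range(t):
        if not outcomes[i]:                   # negative test i ...
            candidates &= ~M[i].astype(bool)  # ... clears all its items
    return np.flatnonzero(candidates)

# Tiny demo (rows = tests, columns = items).
M = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 1]])
defectives = {0}
outcomes = [bool(M[i, list(defectives)].any()) for i in range(M.shape[0])]
print(naive_decode(M, outcomes))  # [0]: exactly the defective item here
```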
Daniel Lokshtanov and Dániel Marx. Clustering with Local Restrictions
Abstract: We study a family of graph clustering problems where each cluster has to satisfy a certain local requirement. Formally, let $\mu$ be a function on the subsets of vertices of a graph $G$. In the $(\mu,p,q)$-Partition problem, the task is to find a partition of the vertices where each cluster $C$ satisfies the requirements that (1) at most $q$ edges leave $C$ and (2) $\mu(C)\le p$. Our first result shows that if $\mu$ is an {\em arbitrary} polynomial-time computable monotone function, then $(\mu,p,q)$-Partition can be solved in time $n^{O(q)}$, i.e., it is polynomial-time solvable {\em for every fixed $q$}. We study in detail three concrete functions $\mu$ (number of nonedges in the cluster, maximum degree of nonedges in the cluster, number of vertices in the cluster), which correspond to natural clustering problems. For these functions, we show that $(\mu,p,q)$-Partition can be solved in time $2^{O(p)}\cdot n^{O(1)}$ and in randomized time $2^{O(q)}\cdot n^{O(1)}$, i.e., the problem is fixed-parameter tractable parameterized by $p$ or by $q$.
Amin Coja-Oghlan and Angelica Pachon-Pinzon. The decimation process in random k-SAT
Abstract: Let F be a uniformly distributed random k-SAT formula with n variables and m clauses. Non-rigorous statistical mechanics ideas have inspired a message passing algorithm called Belief Propagation Guided Decimation for finding satisfying assignments of F. This algorithm can be viewed as an attempt at implementing a certain thought experiment that we call the Decimation Process. In this paper we identify a variety of phase transitions in the decimation process and link these phase transitions to the performance of the algorithm.
Hans L. Bodlaender, Bart M. P. Jansen and Stefan Kratsch. Preprocessing for Treewidth: A Combinatorial Analysis through Kernelization
Abstract: Using the framework of kernelization we study whether efficient preprocessing schemes for the Treewidth problem can give provable bounds on the size of the processed instances. Assuming the AND-distillation conjecture holds, the standard parameterization of Treewidth does not have a kernel of polynomial size, and thus instances (G, k) of the decision problem of Treewidth cannot be efficiently reduced to equivalent instances of size polynomial in k. In this paper, we consider different parameterizations of Treewidth. We show that Treewidth has a kernel with O(l^3) vertices, with l the size of a vertex cover, and a kernel with O(l^4) vertices, with l the size of a feedback vertex set. This implies that given an instance (G, k) of Treewidth we can efficiently reduce its size to O((l*)^4) vertices, where l* is the size of a minimum feedback vertex set in G. In contrast, we show that Treewidth parameterized by the vertex-deletion distance to a co-cluster graph and Weighted Treewidth parameterized by the size of a vertex cover do not have polynomial kernels unless NP \subseteq coNP/poly. Treewidth parameterized by the deletion distance to a cluster graph has no polynomial kernel unless the AND-distillation conjecture fails.
Bundit Laekhanukit. An improved approximation algorithm for the minimum-cost subset k-connectivity
Abstract: The minimum-cost subset $k$-connected subgraph problem is a cornerstone problem in the area of network design with vertex connectivity requirements. In this problem, we are given a graph $G=(V,E)$ with costs on edges and a set of terminals $T$. The goal is to find a minimum cost subgraph such that every pair of terminals is connected by $k$ openly (vertex) disjoint paths. In this paper, we present an approximation algorithm for the subset $k$-connected subgraph problem which improves on the previous best approximation guarantee of $O(k^2\log{k})$ by Nutov (FOCS 2009). Our approximation guarantee, $\alpha(|T|)$, depends upon the number of terminals: \[ \alpha(|T|) \ \ =\ \   \begin{cases} O(|T|^2) &\mbox{if } |T| < 2k\\ O(k \log^3 k) & \mbox{if } 2k\le |T| < k^2\\ O(k \log^2 k) & \mbox{if } |T| \ge k^2 \end{cases} \] So, when the number of terminals is {\em large enough}, the approximation guarantee improves dramatically. Moreover, we show that, given an approximation algorithm for $|T|=k$, we can obtain almost the same approximation guarantee for any instance with $|T|>k$. This suggests that the hardest instances of the problem are those with $|T|\approx k$.
Malte Beecken, Johannes Mittmann and Nitin Saxena. Algebraic Independence and Blackbox Identity Testing
Abstract: Algebraic independence is an advanced notion in commutative algebra that generalizes independence of linear polynomials to higher degree. Polynomials {f_1, ..., f_m} \subset \F[x_1, ..., x_n] are called algebraically independent if there is no non-zero polynomial F such that F(f_1, ..., f_m) = 0. The transcendence degree, trdeg{f_1, ..., f_m}, is the maximal number r of algebraically independent polynomials in the set. In this paper we design blackbox and efficient linear maps \phi that reduce the number of variables from n to r but maintain trdeg{\phi(f_i)}_i = r, assuming the f_i's are sparse and r is small. We apply these fundamental maps to solve several cases of blackbox identity testing: (1) Given a polynomial-degree circuit C and sparse polynomials f_1, ..., f_m with trdeg r, we can test the blackbox D := C(f_1, ..., f_m) for zeroness in poly(size(D))^r time. (2) Define a spsp_\delta(k,s,n) circuit C to be of the form \sum_{i=1}^k \prod_{j=1}^s f_{i,j}, where f_{i,j} are sparse n-variate polynomials of degree at most \delta. For k = 2 we give a poly(\delta sn)^{\delta^2} time blackbox identity test. (3) For a general depth-4 circuit we define a notion of rank. Assuming there is a rank bound R for minimal simple spsp_\delta(k,s,n) identities, we give a poly(\delta snR)^{Rk\delta^2} time blackbox identity test for spsp_\delta(k,s,n) circuits. This partially generalizes the state of the art of depth-3 to depth-4 circuits. The notion of trdeg works best with large or zero characteristic, but we also give versions of our results for arbitrary fields.
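A classical way to see transcendence degree computationally, in characteristic zero, is the Jacobian criterion: the trdeg of f_1, ..., f_m equals the rank of their Jacobian matrix over the rational function field. The sympy sketch below checks this on a toy example; it is a direct symbolic computation, not the paper's variable-reducing blackbox maps.

```python
import sympy as sp

def trdeg(polys, variables):
    """In characteristic zero, the transcendence degree of a set of
    polynomials equals the rank of their Jacobian matrix (the classical
    Jacobian criterion), so trdeg reduces to symbolic linear algebra.
    Illustration only; not the paper's construction."""
    J = sp.Matrix([[sp.diff(f, x) for x in variables] for f in polys])
    return J.rank()

x, y, z = sp.symbols('x y z')
# f3 = f1 + f2**2 is algebraically dependent on f1, f2: trdeg is 2, not 3.
f1, f2 = x + y, y * z
print(trdeg([f1, f2, f1 + f2**2], [x, y, z]))  # 2
```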
Naonori Kakimura and Kazuhisa Makino. Robust Independence Systems
Abstract: An independence system \cal{F} is one of the most fundamental combinatorial concepts, which includes a variety of objects in graphs and hypergraphs such as matchings, stable sets, and matroids. We discuss the robustness for independence systems, which is a natural generalization of the greedy property of matroids. For a real number \alpha > 0, a set X in \cal{F} is said to be \alpha-robust if for any k, it includes an \alpha-approximation of the maximum k-independent set, where a set Y in \cal{F} is called k-independent if the size |Y| is at most k. In this paper, we show that every independence system has a 1/\sqrt{\mu(\cal{F})}-robust independent set, where \mu(\cal{F}) denotes the exchangeability of \cal{F}. Our result contains a classical result for matroids and those of Hassin and Rubinstein [SIAM J. Disc. Math. 2002] for matchings and Fujita, Kobayashi, and Makino [ESA 2010] for matroid 2-intersections, and provides better bounds on the robustness for many independence systems such as b-matchings, hypergraph matchings, matroid p-intersections, and unions of vertex disjoint paths. Furthermore, we provide bounds on the robustness for nonlinear weight functions such as submodular and convex quadratic functions. We also extend our results to independence systems in the integral lattice with separable concave weight functions.
Marek Cygan, Marcin Pilipczuk, Michal Pilipczuk and Jakub Wojtaszczyk. Subset Feedback Vertex Set is Fixed Parameter Tractable
Abstract: The classical FEEDBACK VERTEX SET problem asks, for a given undirected graph G and an integer k, to find a set of at most k vertices that hits all the cycles in the graph G. FEEDBACK VERTEX SET has attracted a large amount of research in the parameterized setting, and subsequent kernelization and fixed-parameter algorithms have been a rich source of ideas in the field. In this paper we consider a more general and difficult version of the problem, named SUBSET FEEDBACK VERTEX SET (SUBSET-FVS for short), where an instance comes additionally with a set S of vertices, and we ask for a set of at most k vertices that hits all simple cycles passing through S. Because of its applications in circuit testing and genetic linkage analysis, SUBSET-FVS was studied from the approximation algorithms perspective by Even et al. [SICOMP'00, SIDMA'00]. The question whether the SUBSET-FVS problem is fixed parameter tractable was posed independently by K. Kawarabayashi and S. Saurabh in 2009. We answer this question affirmatively. We begin by showing that this problem is fixed-parameter tractable when parameterized by |S|. Next we present an algorithm which reduces the size of S to O(k^3) in 2^O(k log k) n^O(1) time using kernelization techniques such as the 2-Expansion Lemma, Menger's theorem and Gallai's theorem. These two facts allow us to give a 2^O(k log k) n^O(1) algorithm solving the SUBSET FEEDBACK VERTEX SET problem, proving that it is indeed fixed parameter tractable.
Eric Allender and Fengming Wang. On the power of algebraic branching programs of width two
Abstract: We show that there are families of polynomials having small depth-two arithmetic circuits that cannot be expressed by algebraic branching programs of width two.  This clarifies the complexity of the problem of computing the product of a sequence of two-by-two matrices, which arises in several settings.
Shi Li. A 1.488-approximation algorithm for the uncapacitated facility location problem
Abstract: We present a 1.488-approximation algorithm for the metric uncapacitated facility location (UFL) problem. The previous best algorithm was due to Byrka \cite{Byr07}. By linearly combining two algorithms, $A_1(\gamma_f)$ for $\gamma_f = 1.6774$ and the (1.11,1.78)-approximation algorithm $A_2$ proposed by Jain, Mahdian and Saberi \cite{JMS02}, Byrka gave a 1.5-approximation algorithm for the UFL problem. We show that if $\gamma_f$ is randomly selected from some distribution, the approximation ratio can be improved to 1.488.
Ken-Ichi Kawarabayashi, Philip Klein and Christian Sommer. Linear-Space Approximate Distance Oracles for Planar, Bounded-Genus, and Minor-Free Graphs
Abstract: A $(1+\epsilon)$-approximate distance oracle for a graph is a data structure that supports approximate point-to-point shortest-path-distance queries. The relevant measures for a distance-oracle construction are: space, query time, and preprocessing time. There are strong distance-oracle constructions known for planar graphs (Thorup) and, subsequently, minor-excluded graphs (Abraham and Gavoille). However, these require $\Omega(\epsilon^{-1} n \lg n)$ space for $n$-node graphs. We argue that a very low space requirement is essential. Since modern computer architectures involve hierarchical memory (caches, primary memory, secondary memory), a high memory requirement in effect may greatly increase the actual running time. Moreover, we would like data structures that can be deployed on small mobile devices, such as handhelds, which have relatively small primary memory. In this paper, for planar graphs, bounded-genus graphs, and minor-excluded graphs we give distance-oracle constructions that require only $O(n)$ space. The big $O$ hides only a fixed constant, independent of $\epsilon$ and independent of genus or size of an excluded minor. Our constructions have proportionately higher query times. For planar graphs, the query time is $O(\epsilon^{-2} \lg^2 n)$. The preprocessing times for our distance oracles are also better than those for the previously known constructions. For planar graphs, the preprocessing time is $O(n \lg^2 n)$. For bounded-genus graphs, there was previously no distance-oracle construction known other than the one implied by the minor-excluded construction, for which the constant is enormous and the preprocessing time is a high-degree polynomial. In our result, the query time is $O(\epsilon^{-2}(\lg n + g)^2)$ and the preprocessing time is $O(n(\lg n)(g^3+\lg n))$. For all these linear-space results, we can in fact ensure, for any $\delta>0$, that the space required is only $1+\delta$ times the space required just to represent the graph itself.
Velumailum Mohanaraj and Martin Dyer. Pairwise-interaction Games
Abstract: We study the complexity of computing Nash equilibria in games where players arranged as the vertices of a graph play a symmetric 2-player game against their neighbours. We call this a pairwise-interaction game. We analyse this game for n players with a fixed number of actions and show that (1) a mixed Nash equilibrium can be computed in constant time for any game, (2) a pure Nash equilibrium can be computed through Nash dynamics in polynomial time for games with symmetrisable payoff matrix, (3) determining whether a pure Nash equilibrium exists for zero-sum games is NP-complete, and (4) counting pure Nash equilibria is #P-complete even for 2-strategy games. In proving (3), we define a new defective graph colouring problem called Nash colouring, which is of independent interest, and prove that its decision version is NP-complete. Finally, we show that pairwise-interaction games form a proper subclass of the usual graphical games.
Tim Nonner. Clique Clustering yields a PTAS for max-Coloring Interval Graphs
Abstract: We are given an interval graph $G = (V,E)$ where each interval $I \in V$ has a weight $w_I \in \mathbb{R}^+$. The goal is to color the intervals $V$ with color classes $C_1, C_2, \ldots, C_k$ such that $\sum_{i=1}^k \max_{I \in C_i} w_I$ is minimized. This problem, called max-coloring interval graphs, contains the classical problem of coloring interval graphs as a special case, and it arises in many practical scenarios such as memory management. Pemmaraju, Raman, and Varadarajan showed that max-coloring interval graphs is NP-hard (SODA'04) and presented a 2-approximation algorithm. Closing a gap which has been open for years, we settle the approximation complexity of this problem by giving a polynomial-time approximation scheme (PTAS); that is, we show that there is a $(1+\epsilon)$-approximation algorithm for any $\epsilon > 0$. Besides using standard preprocessing techniques such as geometric rounding and shifting, our main building block is a general technique for trading the overlap structure of an interval graph for accuracy, which we call clique clustering.
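To pin down the objective, here is a small Python sketch (a hypothetical helper, not part of the paper) that evaluates the max-coloring cost of a given proper coloring of intervals: it verifies that no two overlapping intervals share a color class and sums the maximum weight per class.

```python
def max_coloring_cost(intervals, coloring):
    """Objective of max-coloring: intervals maps name -> (start, end,
    weight); coloring maps name -> color class. Checks the coloring is
    proper (no two overlapping intervals share a class) and returns the
    sum over classes of the maximum weight in each class."""
    by_color = {}
    for name, color in coloring.items():
        by_color.setdefault(color, []).append(name)
    for group in by_color.values():
        for i, a in enumerate(group):
            for b in group[i + 1:]:
                sa, ea, _ = intervals[a]
                sb, eb, _ = intervals[b]
                assert ea <= sb or eb <= sa, f"{a} and {b} overlap"
    return sum(max(intervals[n][2] for n in group)
               for group in by_color.values())

ivs = {'I1': (0, 4, 5.0), 'I2': (2, 6, 1.0), 'I3': (5, 8, 2.0)}
# I1 and I2 overlap and need distinct classes; I3 can join I1's class.
print(max_coloring_cost(ivs, {'I1': 0, 'I2': 1, 'I3': 0}))  # 5.0 + 1.0 = 6.0
```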
Hu Ding and Jinhui Xu. Solving the Chromatic Cone Clustering Problem via Minimum Spanning Sphere
Abstract: In this paper, we study the following {\em Chromatic Cone Clustering (CCC)} problem: Given $n$ point-sets, each containing $k$ points in the first quadrant of the $d$-dimensional space $R^{d}$, find $k$ cones apexed at the origin such that each cone contains at least one distinct point (i.e., different from other cones) from every point-set and the total size of the $k$ cones is minimized, where the size of a cone is the angle from any boundary ray to its center line. CCC is motivated by an important biological problem and finds applications in several other areas. For the CCC problem, we consider in this paper the cases where $k$ is either $2$ or some other constant. Our approach for solving the CCC problem relies on the solutions to the problem of computing a {\em Minimum Spanning Sphere (MinSS)} for point-sets. For the MinSS problem, we present two $(1+\epsilon)$-approximation algorithms based on core-sets and $\epsilon$-nets, respectively. With these algorithms, we then show that the CCC problem admits $(1+\epsilon)$-approximation solutions for both cases. Our results use several interesting geometric techniques in high dimensional space, and are the first solutions to these problems.
Markus Chimani and Petr Hlineny. A Tighter Insertion-based Approximation of the Crossing Number
Abstract: Let $G$ be a planar graph and $F$ a set of additional edges not yet in $G$. The {\em multiple edge insertion} problem (MEI) asks for a drawing of $G+F$ with the minimum number of pairwise edge crossings, such that the subdrawing of $G$ is plane. As an exact solution to MEI is NP-hard for general $F$, we present the first approximation algorithm for MEI which achieves an additive approximation factor (depending only on the size of $F$ and the maximum degree of $G$). Next to single edge insertion, our algorithm also seems to be the first directly implementable one in this realm. It is also known that an (even approximate) solution to the MEI problem would approximate the crossing number of the \emph{$F$-almost-planar graph} $G+F$, while computing the crossing number of $G+F$ exactly is NP-hard already when $|F|=1$. Hence our algorithm induces new, improved approximation bounds for the crossing number problem of $F$-almost-planar graphs, achieving a constant-factor approximation for the large class of such graphs of bounded degree and bounded size of $F$.
Hans-Joachim Boeckenhauer, Dennis Komm, Rastislav Kralovic and Richard Kralovic. On the Advice Complexity of the k-Server Problem
Abstract: Competitive analysis is the established tool for measuring the output quality of algorithms that work in an online environment. Recently, the model of advice complexity has been introduced as an alternative measurement which allows for a more fine-grained analysis of the hardness of online problems. In this model, one tries to measure the amount of information an online algorithm is lacking about the future parts of the input. This concept was investigated for a number of well-known online problems including the k-server problem. In this paper, we first extend the analysis of the k-server problem by giving both a lower bound on the advice needed to obtain an optimal solution, and upper bounds on algorithms for the general k-server problem on metric graphs and the special case of dealing with the Euclidean plane. In the general case, we improve the previously known results by an exponential factor; in the Euclidean case, we design an algorithm which achieves a constant competitive ratio for a very small (i.e., constant) number of advice bits per request. Furthermore, we investigate the relation between advice complexity and randomized online computations by showing how lower bounds on the advice complexity can be used for proving lower bounds for the competitive ratio of randomized online algorithms.
Stefan Mengel. Characterizing Arithmetic Circuit Classes by Constraint Satisfaction Problems
Abstract: We explore the expressivity of constraint satisfaction problems (CSPs) in the arithmetic circuit model. While CSPs are known to yield VNP-complete polynomials in the general case, we show that for different restrictions of the structure of the CSPs we get characterizations of different arithmetic circuit classes. In particular we give the first natural non-circuit characterization of VP, the class of polynomial families efficiently computable by arithmetic circuits.
Carsten Moldenhauer. Primal-Dual Approximation Algorithms for Node-Weighted Steiner Forest on Planar Graphs
Abstract: Node-Weighted Steiner Forest is the following problem: Given an undirected graph, a set of pairs of terminal vertices, and a weight function on the vertices, find a minimum weight set of vertices that includes and connects each pair of terminals. We consider the restriction to planar graphs, where the problem remains NP-complete. Demaine et al. [DHK09] showed that the generic primal-dual algorithm of Goemans and Williamson [GW97] is a 6-approximation on planar graphs. We (1) present a different analysis to prove an approximation factor of 3, (2) show that this bound is tight for the generic algorithm, and (3) show how the algorithm can be improved to yield a 9/4-approximation algorithm. We give a simple proof for the first result using contraction techniques and present an example for the lower bound. Then, we establish a connection to the feedback problems studied by Goemans and Williamson [GW98]. We show how our constructions can be combined with their proof techniques, yielding the third result and an alternative, more involved, way of deriving the first result. The third result induces an upper bound on the integrality gap of 9/4. Our analysis implies that improving this bound for Node-Weighted Steiner Forest via the primal-dual algorithm is essentially as difficult as improving the integrality gap for the feedback problems in [GW98].
Chandra Chekuri and Alina Ene. Submodular Cost Allocation Problem and Applications
Abstract: We study the Minimum Submodular Cost Allocation problem (MSCA). In this problem we are given a finite ground set $V$ and $k$ non-negative submodular set functions $f_1,\ldots,f_k$ on $V$. The objective is to partition $V$ into $k$ (possibly empty) sets $C_1, \cdots, C_k$ such that the sum $\sum_{i = 1}^k f_i(C_i)$ is minimized. Several well-studied problems such as the non-metric facility location problem, multiway cut in graphs and hypergraphs, and uniform metric labeling and its generalizations can be shown to be special cases of MSCA. In this paper we consider a convex-programming relaxation obtained via the Lov\'{a}sz extension for submodular functions. This allows us to understand several previous relaxations and rounding procedures in a unified fashion and also to develop new formulations and approximation algorithms for several problems. In particular, we give a $1.5$-approximation for the hypergraph multiway partition problem. We also give a $\min\{2(1-1/k), H_\Delta\}$-approximation for the hypergraph multiway cut problem when $\Delta$ is the maximum hyperedge degree. Both problems generalize the multiway cut problem in graphs, and the hypergraph cut problem is approximation equivalent to the node-weighted multiway cut problem in graphs.
Yuval Filmus, Toniann Pitassi and Rahul Santhanam. Exponential Lower Bounds for AC0-Frege Imply Superpolynomial Frege Lower Bounds
Abstract: We give a general transformation which turns polynomial-size Frege proofs into subexponential-size AC0-Frege proofs. This indicates that proving exponential lower bounds for AC0-Frege is hard, since it is a longstanding open problem to prove super-polynomial lower bounds for Frege. Our construction is optimal for tree-like proofs. As a consequence of our main result, we are able to shed some light on the question of weak automatizability for bounded-depth Frege systems. First, we present a simpler proof of the results of Bonet et al. showing that, under cryptographic assumptions, bounded-depth Frege proofs are not weakly automatizable. Secondly, we show that because our proof is more general, under the right cryptographic assumptions it could resolve the weak automatizability question for lower depth Frege systems.
Maurice Jansen and Rahul Santhanam. Permanent Does Not Have Succinct Polynomial Size Arithmetic Circuits of Constant Depth
Abstract: We show that over fields of characteristic zero there does not exist a polynomial $p(n)$ and a constant-free succinct arithmetic circuit family $\{\Phi_n\}$, where $\Phi_n$ has size at most $p(n)$ and depth $O(1)$, such that $\Phi_n$ computes the $n\times n$ permanent. A circuit family $\{\Phi_n\}$ is succinct if there exists a {\em nonuniform} Boolean circuit family $\{C_n\}$ with $O(\log n)$ many inputs and size $n^{o(1)}$ such that $C_n$ can correctly answer direct connection language queries about $\Phi_n$; succinctness is a relaxation of uniformity. To obtain this result we develop a novel technique that further strengthens the connection between black-box derandomization of polynomial identity testing and lower bounds for arithmetic circuits. From this we obtain the lower bound by explicitly constructing a hitting set against arithmetic circuits in the polynomial hierarchy. To the best of our knowledge, this is the first lower bound proof which takes advantage of (partial) uniformity without using diagonalization.
Vassilis Zikas and Martin Hirt. Player-Centric Byzantine Agreement
Abstract: Byzantine Agreement (BA) is one of the most important primitives in the area of secure distributed computation. It allows a set of players to agree on a value, even when some of the players are faulty. BA comes in two flavors: {\em Consensus} and {\em Broadcast}. Both primitives require that all non-faulty players agree on an output-value $y$ (consistency). However, in Consensus every player has an input, and it is required that if all players pre-agree on some $x$ then $y=x$, whereas in Broadcast only one player, the {\em sender}, has input and it is required that $y$ equals the sender's input, unless he is faulty (validity). Most of the existing feasibility results for BA are of an {\em all-or-nothing} fashion: in Broadcast they address the question whether or not there exists a protocol which allows {\em any} player to broadcast his input. Similarly, in Consensus the question is whether or not consensus can be reached which respects pre-agreement on the inputs of {\em all} correct players. In this work, we take a different approach motivated by the above observation, namely that Consensus and Broadcast are only two extreme sub-cases of a more general consistency primitive. In particular, we are interested in the question whether or not the validity condition of BA primitives can be satisfied {\em with respect to a specific subset $\mathcal{C}$ of the complete player set $\mathcal{P}$}. In this direction, we introduce the natural notion of {\em player-centric BA}, a class of BA primitives denoted $\mathsf{PCBA}=\{\mathsf{BA}_{\mathcal{C}}\}_{\mathcal{C}\subseteq\mathcal{P}}$ and parametrized by subsets $\mathcal{C}$ of the player set. For each primitive $\mathsf{BA}_{\mathcal{C}}\in\mathsf{PCBA}$ the validity condition is defined on the input(s) of the players in $\mathcal{C}$. Note that Broadcast (with sender $p$) and Consensus are the special cases of $\mathsf{PCBA}$ primitives with $\mathcal{C}=\{p\}$ and $\mathcal{C}=\mathcal{P}$, respectively. We study feasibility of $\mathsf{PCBA}$ for arbitrary sets $\mathcal{C}\subseteq\mathcal{P}$ in a setting where a general (aka non-threshold) mixed (active/passive) adversary is considered. Such an adversary is characterized by a so-called adversary structure, which is an enumeration of admissible adversary classes specifying the sets of players that can be actively and passively corrupted. We give a complete characterization of the adversaries tolerable for $\mathsf{PCBA}$ for all three security levels, i.e., perfect, statistical, and computational security. Note that, with the exception of perfect security, exact bounds for this model are not even known for the traditional notions of BA. Our results expose an asymmetry of Broadcast which has, so far, been neglected in the literature: there exist non-trivial adversaries which can be tolerated for Broadcast with sender some $p_i\in\mathcal{P}$ but {\em not} for every $p_j\in\mathcal{P}$ being the sender. Finally, we show how to extend the definition of $\mathsf{PCBA}$ to a setting where the adversary can actively, passively, and fail corrupt players simultaneously. For this setting, we give exact feasibility bounds for computationally secure $\mathsf{BA}_{\mathcal{P}}$ (aka Consensus), assuming a public-key infrastructure (PKI). This last result answers an open problem from ASIACRYPT 2008 concerning feasibility of computationally secure multi-party computation in this model.
Konstantin Makarychev and Maxim Sviridenko. Maximizing a Polynomial Subject to Assignment Constraints
Abstract: We study the q-adic assignment problem. We first give an $O(n^{(q-1)/2})$-approximation algorithm for the Koopmans-Beckmann version of the problem, improving upon the result of Barvinok. Then, we introduce a new family of instances satisfying ``tensor triangle inequalities'' and give a constant factor approximation algorithm for them. We show that many classical optimization problems can be modeled by q-adic assignment problems from this family. Finally, we give several integrality gap examples for the natural LP relaxations of the problem.
Danny Hermelin, Matthias Mnich, Erik Jan Van Leeuwen and Gerhard J. Woeginger. Domination when the Stars Are Out
Abstract: We algorithmize the recent structural characterization for claw-free graphs by Chudnovsky and Seymour. Building on this result, we show that Dominating Set on claw-free graphs is (i) fixed-parameter tractable and (ii) even possesses a polynomial kernel. To complement these results, we establish that Dominating Set is not fixed-parameter tractable on the slightly larger class of graphs that exclude $K_{1,4}$ as an induced subgraph. Our results provide a dichotomy for Dominating Set in $K_{1,\ell}$-free graphs and show that the problem is fixed-parameter tractable if and only if $\ell \le 3$. Finally, we show that our algorithmization can also be used to show that the related Connected Dominating Set problem is fixed-parameter tractable on claw-free graphs.
Ittai Abraham, Daniel Delling, Amos Fiat, Andrew Goldberg and Renato Werneck. VC-Dimension and Shortest Path Algorithms
Abstract: We explore the relationship between learning theory and graph algorithm design. In particular, we show that set systems induced by sets of vertices on shortest paths have VC-dimension at most two. This allows us to use a result from learning theory to improve time bounds on query algorithms for the point-to-point shortest path problem in networks of low highway dimension. We also refine the definitions of highway dimension and related concepts, making them more general and potentially more relevant to practice. In particular, we define highway dimension in terms of set systems induced by shortest paths, and give cardinality-based and average case definitions.
Isolde Adler, Stavros Kolliopoulos, Philipp Klaus Krause, Daniel Lokshtanov, Saket Saurabh and Dimitrios Thilikos. Tight Bounds for Linkages in Planar Graphs
Abstract: The {\sc Disjoint-Paths Problem} asks, given a graph $G$ and a set of pairs of terminals $(s_{1},t_{1}),\ldots,(s_{k},t_{k})$, whether there is a collection of $k$ vertex-disjoint paths linking $s_{i}$ and $t_{i}$, for $i=1,\ldots,k$. In their $f(k)\cdot n^{3}$ algorithm for this problem, Robertson and Seymour introduced the {\sl irrelevant vertex technique}, according to which in every instance of treewidth greater than $g(k)$ there is an ``irrelevant'' vertex whose removal creates an equivalent instance of the problem. This fact is based on the celebrated {\sl Unique Linkage Theorem}, whose -- very technical -- proof gives a function $g(k)$ that is responsible for an immense parametric dependence in the running time of the algorithm. In this paper we prove this result for planar graphs, achieving $g(k)=2^{O(k)}$. Our bound is radically better than the bounds known for general graphs. Moreover, our proof is new and self-contained, and it strongly exploits the combinatorial properties of planar graphs. We also prove that our result is optimal, in the sense that the bound on $g(k)$ cannot become better than exponential. Our results suggest that any algorithm for the {\sc Disjoint-Paths Problem} that runs in time better than $2^{2^{o(k)}}\cdot n^{O(1)}$ would probably require drastically different ideas from those in the irrelevant vertex technique.
Heng Guo, Pinyan Lu and Leslie Valiant. The Complexity of Symmetric Boolean Parity Holant Problems
Abstract: For certain subclasses of NP, $\oplus$P or \#P characterized by local constraints, it is known that if there exist any problems that are not polynomial time computable within that subclass, then those problems are NP-, $\oplus$P- or \#P-complete. Such dichotomy results have been proved for characterizations such as Constraint Satisfaction Problems, and directed and undirected Graph Homomorphism Problems, often with additional restrictions. Here we give a dichotomy result for the more expressive framework of Holant Problems. These additionally allow for the expression of matching problems, which have had pivotal roles in complexity theory. As our main result we prove the dichotomy theorem that, for the class $\oplus$P, every set of Boolean symmetric Holant signatures of any arities that is not polynomial time computable is $\oplus$P-complete. The result exploits some special properties of the class $\oplus$P and characterizes four distinct tractable subclasses within $\oplus$P. It leaves open the corresponding questions for NP, $\#$P and $\#_k$P for $k\neq 2$.
Sanjeev Arora and Rong Ge. New Algorithms for Learning in Presence of Errors
Abstract: We give new algorithms for a variety of randomly-generated instances of computational problems using a linearization technique that reduces the problem to solving a system of linear equations. These algorithms are derived in the context of learning with structured noise, a notion introduced in this paper. This notion is best illustrated with the learning parities with noise (LPN) problem -- well-studied in learning theory and cryptography. In the standard version, we have access to an oracle that, each time we press a button, returns a random vector $ \vec{a} \in \GF(2)^n$ together with a bit $b \in \GF(2)$ that was computed as $\vec{a}\cdot \vec{u} +\eta$, where $\vec{u}\in \GF(2)^n$ is a {\em secret} vector, and $\eta \in \GF(2)$ is a noise bit that is $1$ with some probability $p$. Say $p=1/3$. The goal is to recover $\vec{u}$. This task is conjectured to be intractable. In the structured noise setting we introduce a slight (?) variation of the model: upon pressing a button, we receive (say) $10$ random vectors $\vec{a_1}, \vec{a_2}, \ldots, \vec{a_{10}} \in \GF(2)^n$, and corresponding bits $b_1, b_2, \ldots, b_{10}$, of which at most $3$ are noisy. The oracle may arbitrarily decide which of the $10$ bits to make noisy. We exhibit a polynomial-time algorithm to recover the secret vector $\vec{u}$ given such an oracle. We think this structured noise model may be of independent interest in machine learning. We discuss generalizations of our result, including learning with more general noise patterns. We also give the first nontrivial algorithms for two problems, which we show fit in our structured noise framework. We give a slightly subexponential algorithm for the well-known learning with errors (LWE) problem over $\GF(q)$ introduced by Regev for cryptographic uses. Our algorithm works for the case when the Gaussian noise is small, which was an open problem. Our result also clarifies why existing hardness results fail at this particular noise rate. We also give polynomial-time algorithms for learning the MAJORITY OF PARITIES function of Applebaum et al. for certain parameter values. This function is a special case of Goldreich's pseudorandom generator.
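The structured-noise oracle described above is simple to mock up. The following numpy sketch (parameter names are ours) produces one "button press": ten random GF(2) vectors with inner-product bits, of which at most three are flipped. A real adversarial oracle could choose the flipped positions arbitrarily, while this mock chooses them at random.

```python
import numpy as np

rng = np.random.default_rng(0)

def structured_lpn_oracle(u, block=10, max_noisy=3):
    """One press of the button in the structured-noise LPN model:
    returns `block` random vectors a_i over GF(2) and bits
    b_i = <a_i, u> + eta_i, where at most `max_noisy` of the block's
    bits are flipped (here a random subset; the model allows the
    oracle to choose adversarially)."""
    n = len(u)
    A = rng.integers(0, 2, size=(block, n))
    b = (A @ u) % 2
    noisy = rng.choice(block, size=rng.integers(0, max_noisy + 1),
                       replace=False)
    b[noisy] ^= 1
    return A, b

u = rng.integers(0, 2, size=8)   # the secret vector
A, b = structured_lpn_oracle(u)
print(A.shape, b)                # 10 equations, at most 3 of them corrupted
```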
Arash Farzan and Shahin Kamali. Compact Navigation and Distance Oracles for Graphs with Small Treewidth
Abstract: Given an unlabeled, unweighted, and undirected graph with $n$ vertices and small (but not necessarily constant) treewidth $k$, we consider the problem of preprocessing the graph to build space-efficient encodings (oracles) to perform various queries efficiently. We assume the word RAM model where the size of a word is $\Omega(\log n)$ bits. The first oracle we present is the navigation oracle, which facilitates primitive navigation operations of adjacency, neighborhood, and degree queries. By way of an enumeration argument, which is of independent interest, we show the space requirement of the oracle is optimal to within lower order terms for all treewidths. The oracle supports the mentioned queries all in constant worst-case time. The second oracle we present is an exact distance oracle, which facilitates distance queries between any pair of vertices (i.e., an all-pair shortest-path oracle). The space requirement of this oracle is also optimal to within lower order terms. Moreover, the distance queries perform in $O(k^2\log^3 k)$ time. In particular, for the class of graphs of our interest, graphs of bounded treewidth (where $k$ is constant), the distances are reported in constant worst-case time.
Ashwinkumar Badanidiyuru Varadaraja. Buyback Problem - Approximate matroid intersection with cancellation costs
Abstract: In the buyback problem, an algorithm observes a sequence of bids and must decide whether to accept each bid at the moment it arrives, subject to some constraints on the set of accepted bids. Decisions to reject bids are irrevocable, whereas decisions to accept bids may be canceled at a cost that is a fixed fraction of the bid value. Prior to our work, deterministic and randomized algorithms were known when the constraint is a matroid constraint. We extend this and give a deterministic algorithm for the case when the constraint is an intersection of k matroid constraints. We further prove a matching lower bound on the competitive ratio for this problem and extend our results to arbitrary downward closed set systems. This problem has applications to banner advertisement, semi-streaming, routing, load balancing and other problems where preemption or cancellation of previous allocations is allowed.
Anand S, Naveen Garg and Nicole Megow. Meeting deadlines: How much speed suffices?
Abstract: We consider the online problem of scheduling real-time jobs with hard deadlines on $m$ parallel machines. Each job has a processing time and a deadline, and the objective is to schedule jobs so that they complete before their deadlines. It is known that even when the instance is feasible it may not be possible to meet all deadlines when jobs arrive online over time. We therefore consider the setting where the algorithm has available machines with speed $s>1$. We present a new online algorithm that finds a feasible schedule on machines of speed $e/(e-1)$ for any instance that is feasible on unit-speed machines. This improves on the previously best known result, which requires a speed of $2-2/(m+1)$. Our algorithm only uses the relative order of job deadlines and is oblivious of the actual deadline values. It was shown earlier that the minimum speed required for such algorithms is $e/(e-1)$, and thus our analysis is tight. We also show that our new algorithm outperforms two other well-known algorithms by giving the first lower bounds on their minimum speed requirements.
Daniel Reichman and Uriel Feige. Recoverable Values for Independent Sets
Abstract: The notion of {\em recoverable value} was advocated in the work of Feige, Immorlica, Mirrokni and Nazerzadeh [Approx 2009] as a measure of quality for approximation algorithms. There this concept was applied to facility location problems. In the current work we apply a similar framework to the maximum independent set problem (MIS). We say that an approximation algorithm has {\em recoverable value} $\rho$ if, for every graph $G$, it recovers an independent set of size at least $\max_I \sum_{v\in I} \min[1,\rho/(d(v) + 1)]$, where $d(v)$ is the degree of vertex $v$, and $I$ ranges over all independent sets in $G$. Hence, in a sense, from every vertex $v$ in the maximum independent set the algorithm recovers a value of at least $\rho/(d(v) + 1)$ towards the solution. This quality measure is most effective in graphs in which the maximum independent set is composed of low-degree vertices. It easily follows from known results that some simple algorithms for MIS ensure $\rho \ge 1$. We design a new randomized algorithm for MIS that ensures an expected recoverable value of at least $\rho \ge 15/7$. In addition, we show that approximating MIS in graphs with a given $k$-coloring within a ratio larger than $2/k$ is Unique Games-hard. This rules out a natural approach for obtaining $\rho \ge 2$.
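The benchmark in this definition can be computed by brute force on tiny graphs; a short exponential-time Python sketch, purely illustrative:

    # Brute-force computation of max_I sum_{v in I} min(1, rho/(d(v)+1))
    # over all independent sets I of a small graph.
    from itertools import combinations

    def benchmark(n, edges, rho):
        adj = {v: set() for v in range(n)}
        for u, v in edges:
            adj[u].add(v); adj[v].add(u)
        best = 0.0
        for r in range(n + 1):
            for subset in combinations(range(n), r):
                s = set(subset)
                if any(adj[u] & s for u in s):      # not an independent set
                    continue
                best = max(best, sum(min(1.0, rho / (len(adj[v]) + 1)) for v in s))
        return best

    # A 4-cycle: the benchmark is attained by either pair of opposite vertices.
    print(benchmark(4, [(0, 1), (1, 2), (2, 3), (3, 0)], rho=15/7))  # 10/7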
Piotr Berman, Arnab Bhattacharyya, Elena Grigorescu, Sofya Raskhodnikova, David Woodruff and Grigory Yaroslavtsev. Linear Programming and Combinatorial Bounds on Steiner Transitive-Closure Spanners
Abstract: Given a directed graph G = (V,E) and an integer k >= 1, a k-transitive-closure-spanner (k-TC-spanner) of G is a directed graph H = (V, E_H) that has (1) the same transitive closure as G and (2) diameter at most k.  In some applications, the shortcut paths added to the graph in order to obtain small diameter can use Steiner vertices, that is, vertices not in the original graph G. The resulting spanner is called a Steiner transitive-closure spanner (Steiner TC-spanner). Motivated by applications to property reconstruction and access control hierarchies, we concentrate on Steiner TC-spanners of directed acyclic graphs or, equivalently, partially ordered sets. In these applications, the goal is to find a sparsest Steiner k-TC-spanner of a poset G for a given k and G. The focus of this paper is the relationship between the dimension of a poset and the size of its sparsest Steiner TC-spanner. The dimension of a poset G is the smallest d such that G can be embedded into a d-dimensional directed hypergrid via an order-preserving embedding. We present a nearly tight lower bound on the size of Steiner 2-TC-spanners of d-dimensional directed hypergrids. It implies better lower bounds on the complexity of local reconstructors of monotone functions and functions with low Lipschitz constant. The proof of the lower bound constructs a dual solution to a linear programming relaxation of the Steiner 2-TC-spanner problem. We also show that one can efficiently construct a Steiner 2-TC-spanner, of size matching the lower bound, for any low-dimensional poset. Finally, we present a lower bound on the size of Steiner k-TC-spanners of d-dimensional posets that shows that the best-known construction, due to De Santis et al., cannot be improved significantly.
Laurent Bulteau, Guillaume Fertin and Irena Rusu. Sorting by Transpositions is Difficult
Abstract: In comparative genomics, a transposition is an operation that exchanges two consecutive sequences of genes in a genome. The transposition distance, that is, the minimum number of transpositions needed to transform one genome into another, is, according to numerous studies, a relevant evolutionary distance. The problem of computing this distance when genomes are represented by permutations, called the Sorting by Transpositions problem (SBT), was introduced by Bafna and Pevzner in 1995. It has naturally been the focus of a number of studies, but the computational complexity of this problem remained undetermined for 15 years. In this paper, we answer this long-standing open question by proving that the Sorting by Transpositions problem is NP-hard. As a corollary of our result, we also prove that the following problem is NP-hard: given a permutation $\pi$, is it possible to sort $\pi$ using $d_b(\pi)/3$ transpositions, where $d_b(\pi)$ is the number of breakpoints of $\pi$?
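A transposition exchanges two consecutive blocks, and a single transposition can remove at most three breakpoints, which is where the $d_b(\pi)/3$ bound in the corollary arises; a small Python sketch of both notions:

    def transpose(perm, i, j, k):
        """Exchange consecutive blocks perm[i:j] and perm[j:k], 0 <= i < j < k <= len(perm)."""
        return perm[:i] + perm[j:k] + perm[i:j] + perm[k:]

    def breakpoints(perm):
        """Adjacent pairs that are not consecutive values, framing the
        permutation by 0 and n+1 as is standard."""
        framed = [0] + list(perm) + [len(perm) + 1]
        return sum(1 for a, b in zip(framed, framed[1:]) if b != a + 1)

    p = [3, 1, 2]
    print(breakpoints(p))      # 3 breakpoints: (0,3), (3,1), (2,4)
    q = transpose(p, 0, 1, 3)  # move block [3] past [1, 2]
    print(q, breakpoints(q))   # [1, 2, 3] 0 -- one transposition removed all three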
Moran Feldman, Seffi Naor and Roy Schwartz. Nonmonotone Submodular Maximization via a Structural Continuous Greedy Algorithm
Abstract: Consider a situation in which one has a suboptimal solution $S$ to a maximization problem, one which constitutes only a weak approximation. Suppose that even though the value of $S$ is small compared to an optimal solution $OPT$, $S$ happens to be structurally similar to $OPT$. A natural question to ask in this scenario is whether there is a way of improving the value of $S$ based solely on this information. In this paper we introduce the \emph{Structural Continuous Greedy Algorithm}, which answers this question affirmatively in the context of the \textsc{Nonmonotone Submodular Maximization Problem}. Using this algorithm we are able to improve on the best approximation factor known for this problem. In the \textsc{Nonmonotone Submodular Maximization Problem} we are given a non-negative submodular function $f$, and the objective is to find a subset maximizing $f$. This is considered one of the basic submodular optimization problems, generalizing many well-known problems such as the Max Cut problem in undirected graphs. The previous best approximation factor for this problem was $0.41$, given by Gharan and Vondr\'{a}k. On the other hand, Feige et al. showed that no algorithm can give a $(0.5 + \epsilon)$-approximation for it. Our method yields an improved $0.42$-approximation for the problem.
Robert Elsaesser and Tobias Tscheuschner. Settling the complexity of local max-cut (almost) completely
Abstract: We consider the problem of finding a local optimum for the max-cut problem with the FLIP-neighborhood, in which exactly one node may change sides of the partition. Schäffer and Yannakakis (SICOMP, 1991) showed PLS-completeness of this problem on graphs with unbounded degree. On the other hand, Poljak (SICOMP, 1995) showed that in cubic graphs every FLIP local search takes $O(n^2)$ steps, where $n$ is the size of the graph. Due to the huge gap between degree three and unbounded degree, Ackermann, Röglin, and Vöcking (JACM, 2008) asked for the smallest $d$ such that on graphs with maximum degree $d$ the local max-cut problem with the FLIP-neighborhood is PLS-complete. In this paper, we prove that the computation of a local optimum on graphs with maximum degree five is PLS-complete. Thus, we solve the problem posed by Ackermann et al. almost completely, by showing that $d$ is either 4 or 5 (unless PLS is in P). On the other hand, we also prove that on graphs with degree $O(\log n)$ every FLIP local search has, with high probability, polynomial smoothed complexity. Roughly speaking, for any instance in which the edge weights are perturbed by (Gaussian) random noise with variance $\sigma^2$, every FLIP local search terminates in time polynomial in $n$ and $1/\sigma$, with probability $1-n^{-\Omega(1)}$. Putting both results together, we may conclude that although local max-cut is likely to be hard on graphs with bounded degree, it can be solved in polynomial time for slightly perturbed instances with high probability.
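For concreteness, here is the plain FLIP local search whose number of improving steps is the object of study (the generic procedure, written by us for illustration; it is not a contribution of the paper):

    # FLIP local search for max-cut: move one node across the partition
    # while that strictly increases the cut weight.
    def flip_local_search(n, weighted_edges):
        side = [0] * n                       # start with all nodes on one side
        w = {}
        for u, v, weight in weighted_edges:
            w[(u, v)] = w[(v, u)] = weight
        adj = {u: [v for (a, v) in w if a == u] for u in range(n)}

        def gain(u):
            # change in cut weight if u flips sides
            return sum(w[(u, v)] * (1 if side[v] == side[u] else -1) for v in adj[u])

        improved = True
        while improved:
            improved = False
            for u in range(n):
                if gain(u) > 0:
                    side[u] ^= 1
                    improved = True
        return side

    # A weighted 4-cycle; the search ends in a partition cutting every edge.
    print(flip_local_search(4, [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 2.0), (3, 0, 1.0)]))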
Magnus Bordewich and Ross J. Kang. Rapid mixing of subset Glauber dynamics on graphs of bounded tree-width
Abstract: Motivated by the `subgraphs world' view of the ferromagnetic Ising model, we develop a general approach to studying mixing times of Glauber dynamics based on subset expansion expressions for a class of graph polynomials.  With a canonical paths argument, we demonstrate that the chains defined within this framework mix rapidly upon graphs of bounded tree-width. This extends known results on rapid mixing for the Tutte polynomial, the adjacency-rank ($R_2$-)polynomial and the interlace polynomial.
Lei Huang and Toniann Pitassi. Automatizability and Simple Stochastic Games
Abstract: The complexity of simple stochastic games (SSGs) has been open since they were defined by Condon in 1992. Such a game is played by two players, Min and Max, on a graph consisting of max nodes, min nodes, and average nodes. The goal of Max is to reach the 1-sink, while the goal of Min is to avoid the 1-sink. On a max (min) node, Max (Min) chooses the outgoing edge, and on an average node the play follows each outgoing edge with equal probability. The complexity problem is to determine, given a graph, whether or not Max has a strategy that is guaranteed to reach the 1-sink with probability at least 1/2. Despite intensive effort, the complexity of this problem is still unresolved. In this paper, we establish a new connection between the complexity of SSGs and the complexity of an important problem in proof complexity: the proof search problem for low-depth Frege systems. We prove that if depth-3 Frege systems are weakly automatizable, then SSGs are solvable in polynomial time. Moreover, we identify a natural combinatorial principle, a version of the well-known Graph Ordering Principle (GOP), which we call the integer-valued GOP (IGOP). This principle states that for any graph $G$ with a nonnegative integer weight associated with each node, there exists a locally maximal vertex (a vertex whose weight is at least as large as those of its neighbors). We prove that if depth-2 Frege plus IGOP is weakly automatizable, then SSGs are in P.
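A standard way to approximate the value of such a game is value iteration over the three node types; a small Python sketch of the game semantics (not the paper's reduction):

    # Value iteration for an SSG: max nodes take the maximum of their
    # successors' values, min nodes the minimum, average nodes the average;
    # the 1-sink has value 1 and the 0-sink value 0.
    def ssg_values(kind, succ, iters=1000):
        """kind[v] in {'max','min','avg','one','zero'}; succ[v] lists v's out-neighbors."""
        val = {v: 1.0 if kind[v] == 'one' else 0.0 for v in kind}
        for _ in range(iters):
            for v in kind:
                if kind[v] == 'max':
                    val[v] = max(val[u] for u in succ[v])
                elif kind[v] == 'min':
                    val[v] = min(val[u] for u in succ[v])
                elif kind[v] == 'avg':
                    val[v] = sum(val[u] for u in succ[v]) / len(succ[v])
        return val

    kind = {'s': 'avg', 'a': 'max', 'one': 'one', 'zero': 'zero'}
    succ = {'s': ['a', 'zero'], 'a': ['one', 's']}
    print(ssg_values(kind, succ)['s'])   # 0.5: Max reaches the 1-sink with probability 1/2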
Stefan Canzar, Khaled Elbassioni, Gunnar W. Klau and Julián Mestre. On Tree-Constrained Matchings and Generalizations
Abstract: We consider the following \textsc{Tree-Constrained Bipartite Matching} problem: Given two rooted trees $T_1=(V_1,E_1)$, $T_2=(V_2,E_2)$ and a weight function $w: V_1\times V_2 \rightarrow \mathbb{R}_+$, find a maximum-weight matching $\mathcal{M}$ between nodes of the two trees, such that none of the matched nodes is an ancestor of another matched node in either of the trees. This generalization of the classical bipartite matching problem appears, for example, in the computational analysis of live cell video data. We show that the problem is $\mathcal{APX}$-hard and thus, unless $\mathcal{P} = \mathcal{NP}$, disprove a previous claim that it is solvable in polynomial time. Furthermore, we give a $2$-approximation algorithm based on a combination of the local ratio technique and a careful use of the structure of basic feasible solutions of a natural LP-relaxation, which we also show to have an integrality gap of $2-o(1)$. In the second part of the paper, we consider a natural generalization of the problem, where trees are replaced by partially ordered sets (posets). We show that the local ratio technique gives a $2k\rho$-approximation for the $k$-dimensional matching generalization of the problem, in which the maximum number of incomparable elements below (or above) any given element in each poset is bounded by $\rho$. We finally give an almost matching integrality gap example, and an inapproximability result showing that the dependence on $\rho$ is most likely unavoidable.
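A brute-force feasibility check for the ancestry constraint, with trees given by parent pointers (an illustrative helper of ours, not the approximation algorithm):

    # A matching is feasible iff no matched node is an ancestor of another
    # matched node in either tree. parent[root] is None.
    def ancestors(parent, v):
        out = set()
        while parent[v] is not None:
            v = parent[v]
            out.add(v)
        return out

    def feasible(matching, parent1, parent2):
        """matching is a list of (node_in_T1, node_in_T2) pairs."""
        for side, parent in ((0, parent1), (1, parent2)):
            matched = [pair[side] for pair in matching]
            for v in matched:
                if ancestors(parent, v) & set(matched):
                    return False
        return True

    # T1 and T2 are both a root 0 with children 1 and 2.
    parent1 = {0: None, 1: 0, 2: 0}
    parent2 = {0: None, 1: 0, 2: 0}
    print(feasible([(1, 1), (2, 2)], parent1, parent2))  # True: only siblings matched
    print(feasible([(0, 1), (1, 2)], parent1, parent2))  # False: 0 is an ancestor of 1 in T1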
Olaf Beyersdorff, Nicola Galesi, Massimo Lauria and Alexander Razborov. Parameterized Bounded-Depth Frege is Not Optimal
Abstract: A general framework for parameterized proof complexity was introduced by Dantchev, Martin, and Szeider [DMS07]. In that framework the parameterized version of any proof system is not fpt-bounded for some technical reasons, but we remark that this question becomes much more interesting if we restrict ourselves to those parameterized contradictions $(F,k)$ in which $F$ itself is a contradiction. We call such parameterized contradictions {\em strong}, and with one important exception (vertex cover) all interesting contradictions we are aware of are strong. It follows from the gap complexity theorem of [DMS07] that tree-like Parameterized Resolution is not fpt-bounded w.r.t. strong parameterized contradictions. The main result of this paper significantly improves upon this by showing that even the parameterized version of bounded-depth Frege is not fpt-bounded w.r.t. strong contradictions. More precisely, we prove that the pigeonhole principle requires proofs of size $n^{\Omega(k)}$ in bounded-depth Frege, and, as a special case, in dag-like Parameterized Resolution. This answers an open question posed in [DMS07]. In the opposite direction, we interpret a well-known FPT algorithm for vertex cover as a DPLL procedure for Parameterized Resolution. Its generalization leads to a proof search algorithm for Parameterized Resolution that in particular shows that tree-like Parameterized Resolution allows short refutations of all parameterized contradictions given as bounded-width CNFs.
Stephane Durocher, Meng He, Ian Munro, Patrick Nicholson and Matthew Skala. Range Majority in Constant Time and Linear Space
Abstract: Given an array $A$ of size $n$, we consider the problem of answering range majority queries: given a query range $[i..j]$ where $1 \le i \le j \le n$, return the majority element of the subarray $A[i..j]$ if it exists.  We describe a linear space data structure that answers range majority queries in constant time.  We further generalize this problem by defining range $\alpha$-majority queries: given a query range $[i..j]$, return all the elements in the subarray $A[i..j]$ with frequency greater than $\alpha(j-i+1)$.  We prove an upper bound on the number of $\alpha$-majorities that can exist in a subarray, assuming that query ranges are restricted to be larger than a given threshold.  Using this upper bound, we generalize our range majority data structure to answer range $\alpha$-majority queries in $O(\frac{1}{\alpha})$ time using $O(n \lg(1/\alpha + 1))$ space, for any fixed $\alpha \in (0, 1)$.  This result is interesting since other similar range query problems based on frequency have nearly logarithmic lower bounds on query time when restricted to linear space.
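To fix the query semantics, a naive linear-time baseline in Python (the paper's data structure answers the same queries in $O(1/\alpha)$ time after preprocessing):

    # All elements of A[i..j] (inclusive, 1-indexed as in the abstract)
    # occurring more than alpha * (j - i + 1) times.
    from collections import Counter

    def range_alpha_majority(A, i, j, alpha):
        counts = Counter(A[i - 1:j])
        threshold = alpha * (j - i + 1)
        return [x for x, c in counts.items() if c > threshold]

    A = [1, 2, 1, 1, 3, 1, 2, 2]
    print(range_alpha_majority(A, 1, 8, 0.5))   # []: no majority element
    print(range_alpha_majority(A, 1, 6, 0.5))   # [1]: 1 occurs 4 > 3 times
    print(range_alpha_majority(A, 1, 8, 0.25))  # [1, 2]: each occurs more than 2 times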
Endre Boros, Khaled Elbassioni, Mahmoud Fouz, Vladimir Gurvich, Kazuhisa Makino and Bodo Manthey. Stochastic Mean Payoff Games: Smoothed Analysis and Approximation Schemes
Abstract: We consider two-player zero-sum stochastic mean payoff games with perfect information, modeled by a digraph with black, white, and random vertices. These BWR-games are polynomially equivalent to the classical Gillette games, which include many well-known subclasses, such as cyclic games, simple stochastic games, stochastic parity games, and Markov decision processes. They can also be used to model parlor games such as chess or backgammon. It is a long-standing open question whether a polynomial algorithm exists that solves BWR-games. In fact, a pseudo-polynomial algorithm for these games with an arbitrary number of random nodes would already imply their polynomial solvability. Currently, only two classes are known to have such a pseudo-polynomial algorithm: BW-games (the case with no random nodes) and ergodic BWR-games (in which the game's value does not depend on the initial position) with a constant number of random nodes. In this paper, we show that the existence of a pseudo-polynomial algorithm for BWR-games with a constant number of random vertices implies smoothed polynomial complexity and the existence of absolute and relative polynomial-time approximation schemes. In particular, we obtain smoothed polynomial complexity and derive absolute and relative approximation schemes for BW-games and ergodic BWR-games (assuming a technical requirement about the probabilities at the random nodes).
Scott Aaronson and Andrew Drucker. Advice Coins for Classical and Quantum Computation
Abstract: We study the power of classical and quantum algorithms equipped with nonuniform advice, in the form of a coin whose bias encodes useful information. This question takes on particular importance in the quantum case, due to a surprising result that we prove: a quantum finite automaton with just two states can be sensitive to arbitrarily small changes in a coin's bias.  This contrasts with classical probabilistic finite automata, whose sensitivity to changes in a coin's bias is bounded by a classic 1970 result of Hellman and Cover. Despite this finding, we are able to bound the power of advice coins for space-bounded classical and quantum computation.  We define the classes BPPSPACE/coin and BQPSPACE/coin, of languages decidable by classical and quantum polynomial-space machines with advice coins.  Our main theorem is that both classes coincide with PSPACE/poly. Proving this result turns out to require substantial machinery.  We use an algorithm due to Neff for finding roots of polynomials in NC; a result from algebraic geometry that lower-bounds the separation of a polynomial's roots; and a result on fixed-points of superoperators due to Aaronson and Watrous, originally proved in the context of quantum computing with closed timelike curves.
Sze-Hang Chan, Tak-Wah Lam, Lap-Kei Lee, Chi-Man Liu and Hing-Fung Ting. Sleep Management on Multiple Machines for Energy and Flow Time
Abstract: In large data centers, determining the right number of working machines is often non-trivial, especially when the workload is unpredictable. Using too many machines wastes energy, while using too few hurts performance. Motivated by this concern, we extend the traditional study of online flow-time scheduling on multiple machines to take sleep management and energy into consideration. Specifically, we study online algorithms that can determine dynamically when and which subset of machines should wake up (or sleep), how jobs are dispatched among the machines, and how each machine schedules its jobs. To capture the tradeoff between flow time and energy, we consider schedules whose objective is to minimize the sum of flow time and energy. We study this new scheduling problem in two settings: the first assumes machines run at a fixed speed, and the second assumes the dynamic speed scaling model (i.e., each machine can scale its speed dynamically to further optimize its energy usage). The latter implies that the online scheduler also has to adjust the speed of each machine. For each speed model, we give an $O(1)$-competitive algorithm. Like previous work on flow time and energy, the analysis of our algorithms is based on potential functions. What is new is that sleep management allows the online and offline algorithms to use different subsets of machines at different times, and this pushes us to derive a more general potential analysis that can account for the differing match-ups of machines between the two algorithms.
Subhash Khot and Per Austrin. A Simple Deterministic Reduction for the Gap Minimum Distance of Code Problem
Abstract: We present a simple deterministic gap-preserving reduction from SAT to the Minimum Distance of Code Problem over GF(2). We also show how to extend the reduction to work over any finite field. Previously a randomized reduction was known due to Dumer, Micciancio, and Sudan [8], which was recently derandomized by Cheng and Wan [6, 7]. These reductions rely on highly non-trivial coding theoretic constructions whereas our reduction is elementary. As an additional feature, our reduction gives a constant factor hardness even for asymptotically good codes, i.e., having constant rate and relative distance. Previously it was not known how to achieve deterministic reductions for such codes.
Lance Fortnow and Rahul Santhanam. Robust Simulations and Significant Separations
Abstract: We define and study a new notion of ``robust simulations'' between complexity classes, which is intermediate between the traditional notions of infinitely-often and almost-everywhere simulation, as well as a corresponding notion of ``significant separations''. A language L has a robust simulation in a complexity class C if there is a language in C which agrees with L on arbitrarily large polynomial stretches of input lengths. There is a significant separation of L from C if there is no robust simulation of L in C. Robust simulation is a cleaner and more natural notion than infinitely-often simulation. We show that various implications in complexity theory, such as the collapse of PH if NP = P and the Karp-Lipton theorem, have analogues for robust simulations. We then use these results to prove that most known separations in complexity theory, such as hierarchy theorems, fixed-polynomial circuit lower bounds, time-space tradeoffs, and the recent theorem of Williams, can be strengthened to significant separations, though in each case an almost-everywhere separation is unknown. Proving our results requires several new ideas, including a completely different proof of the hierarchy theorem for nondeterministic polynomial time than the ones previously known.
Ryan O'Donnell, John Wright and Yuan Zhou. The Fourier Entropy-Influence Conjecture for certain classes of Boolean functions
Abstract: In 1996, Friedgut and Kalai made the ``Fourier Entropy-Influence Conjecture'': For every Boolean function $f : \{-1,1\}^n \to \{-1,1\}$ it holds that $H[\hat{f}^2] \le C \cdot I[f]$, where $H[\hat{f}^2]$ is the spectral entropy of $f$, $I[f]$ is the total influence of $f$, and $C$ is a universal constant. In this work we verify the conjecture for symmetric functions. More generally, we verify it for functions with symmetry group $S_{n_1} \times \cdots \times S_{n_d}$ where $d$ is constant. We also verify the conjecture for functions computable by read-once decision trees.
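Both quantities in the conjecture can be computed by brute force for small $n$; a Python sketch that checks the inequality on the 3-bit majority function (exponential-time, purely illustrative):

    # Fourier coefficients f-hat(S) = E_x[f(x) * prod_{i in S} x_i],
    # spectral entropy H = -sum f-hat(S)^2 log2 f-hat(S)^2,
    # total influence I = sum |S| * f-hat(S)^2.
    from itertools import combinations, product
    from math import log2, prod

    def fourier(f, n):
        points = list(product([-1, 1], repeat=n))
        subsets = [frozenset(s) for r in range(n + 1) for s in combinations(range(n), r)]
        return {S: sum(f(x) * prod(x[i] for i in S) for x in points) / len(points)
                for S in subsets}

    def spectral_entropy(coeffs):
        return -sum(c * c * log2(c * c) for c in coeffs.values() if c != 0)

    def total_influence(coeffs):
        return sum(len(S) * c * c for S, c in coeffs.items())

    maj3 = lambda x: 1 if sum(x) > 0 else -1   # majority of 3 bits
    co = fourier(maj3, 3)
    print(spectral_entropy(co), total_influence(co))  # 2.0 and 1.5: H/I = 4/3 here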
Frederic Magniez, Ashwin Nayak, Miklos Santha and David Xiao. Improved bounds for the randomized decision tree complexity of recursive majority
Abstract: We consider the randomized decision tree complexity of the recursive 3-majority function. For evaluating a height-$h$ formula, we prove a lower bound of $(1-2\delta)(5/2)^h$ on the $\delta$-two-sided-error randomized decision tree complexity, improving the lower bound of $(1-2\delta)(7/3)^h$ given by Jayram et al. (STOC '03). We also state a conjecture which would further improve the lower bound to $(1-2\delta) \cdot 2.54355^h$. In the other direction, we improve the upper bound by giving a new zero-error randomized decision tree algorithm that has complexity at most $(1.007) \cdot 2.64946^h$, improving on the previous best known algorithm, which achieved $(1.004) \cdot 2.65622^h$. Our lower bound follows from a better analysis of the base case of the recursion of Jayram et al. Our algorithm uses a novel ``interleaving'' of two recursive algorithms.
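The function in question, with the obvious short-circuiting evaluator that skips the third subtree when the first two agree (our own illustration; the paper's algorithm and bounds are far more refined):

    # Recursive 3-majority on 3^h input bits: at each level, take the
    # majority of the three subtrees' values.
    import random

    def rec_maj3(bits, h):
        if h == 0:
            return bits[0]
        third = len(bits) // 3
        children = [bits[:third], bits[third:2 * third], bits[2 * third:]]
        random.shuffle(children)                 # randomize evaluation order
        a = rec_maj3(children[0], h - 1)
        b = rec_maj3(children[1], h - 1)
        if a == b:                               # third child cannot change the majority
            return a
        return rec_maj3(children[2], h - 1)

    # The three subtrees evaluate to 1, 0, 1, so the root evaluates to 1.
    print(rec_maj3([1, 0, 1,  0, 0, 1,  1, 1, 0], 2))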
André Chailloux, Iordanis Kerenidis and Bill Rosgen. Quantum Commitments from Complexity Assumptions
Abstract: Bit commitment schemes are a cornerstone of modern cryptography. Since information-theoretic security is impossible in both the classical and the quantum regime, we need to look at computationally secure commitment schemes. In this paper, we study worst-case complexity assumptions that imply quantum bit-commitment schemes. First, we show that QSZK \not\subseteq QMA implies a computationally hiding and statistically binding auxiliary-input quantum commitment scheme. We then extend our result to show that the much weaker assumption QIP \not\subseteq QMA (which is weaker than PSPACE \not\subseteq PP) implies the existence of auxiliary-input commitment schemes with quantum advice. Finally, to strengthen the plausibility of the first assumption, we find a quantum oracle relative to which honest-verifier QSZK is not contained in QCMA, the class of languages that can be verified using a classical proof in quantum polynomial time.
Sebastian Faust, Krzysztof Pietrzak and Daniele Venturi. Tamper-Proof Circuits: How to Trade Leakage for Tamper-Resilience
Abstract: Tampering attacks are cryptanalytic attacks on the implementation of cryptographic algorithms (e.g., on smart cards), where an adversary introduces faults in the hope that the tampered device will reveal secret information. Inspired by the work of Ishai et al. [Eurocrypt'06], we propose a compiler that transforms any circuit into a new circuit with the same functionality, but which is resilient against a well-defined and powerful tampering adversary. More concretely, our transformed circuits remain secure even if the adversary can adaptively tamper with every wire in the circuit, as long as the tampering fails with some probability $\delta>0$. This additional requirement is motivated by practical tampering attacks, where it is often difficult to guarantee the success of a specific attack. Formally, we show that a $q$-query tampering attack against the transformed circuit can be ``simulated'' with only black-box access to the original circuit and $\log(q)$ bits of additional auxiliary information. Thus, if the implemented cryptographic scheme is secure against $\log(q)$ bits of leakage, then our implementation is tamper-proof in the above sense. Surprisingly, allowing for this small amount of information leakage --- and not insisting on perfect simulability as in the work of Ishai et al. --- allows for much more efficient compilers, which moreover do not require randomness during evaluation. As in earlier work, our compiler requires small, stateless and computation-independent tamper-proof gadgets. Thus, our result can be interpreted as reducing the problem of shielding arbitrarily complex computation to protecting simple components.
Eric Allender, Luke Friedman and William Gasarch. Limits on the Computational Power of Random Strings
Abstract: R, the set of Kolmogorov-random strings, is a central notion in the study of algorithmic information theory, and in recent years R has increasingly been studied in relation to computational complexity theory.  This work takes as its starting point three strange inclusions that have been proved since 2002: 1. NEXP is contained in the class of problems NP-Turing-reducible to R. 2. PSPACE is contained in the class of problems poly-time Turing-reducible to R. 3. BPP is contained in the class of problems poly-time truth-table-reducible to R. (These inclusions hold for both of the most widely-studied variants of Kolmogorov complexity: the plain complexity C(x) and the prefix-complexity K(x).  They also hold no matter which "universal" Turing machine is used in the definitions of the functions C and K.) These inclusions are "strange" since R is not even computable!  Thus it is not at all clear that these are meaningful upper bounds on the complexity of BPP, PSPACE, and NEXP, and indeed it is not at all clear that it is very interesting to consider efficient reductions to noncomputable sets such as R. Our main theorems are that, if we restrict attention to prefix complexity K and the corresponding set of random strings R_K, then the class of decidable problems that are in NP relative to R_K (no matter which universal machine is used to define K) lies in EXPSPACE, and the class of decidable problems that are poly-time truth-table reducible to R_K (no matter which universal machine is used to define K) lies in PSPACE. Thus we can "sandwich" PSPACE between the class of problems truth-table- and Turing-reducible to R_K, and the class of decidable problems that are in NP relative to R_K lies between NEXP and EXPSPACE.  The corresponding questions for plain Kolmogorov complexity C are wide open; no upper bounds are known at all for the class of decidable problems efficiently reducible to R_C. These results also provide the first quantitative limits on the applicability of uniform derandomization techniques.
Konstantinos Tsakalidis and Gerth Stølting Brodal. Dynamic Planar Range Maxima Queries
Abstract: We consider the dynamic two-dimensional maxima query problem. Let $P$ be a set of $n$ two-dimensional points. A point is \emph{maximal} if it is not dominated by any other point in $P$. We describe two data structures that support the reporting of the $t$ maximal points that dominate a given query point, and allow for insertions and deletions of points in $P$. In the pointer machine model we present a linear-space data structure with $O(\log n + t)$ worst-case query time and $O(\log n)$ worst-case update time. This is the first dynamic data structure for the planar maxima dominance query problem that achieves these bounds in the worst case. The update time is optimal for the achieved query time. The data structure also supports the more general query of reporting the maximal points among the points that lie in a given 3-sided orthogonal range unbounded from above. Queries take $O(\log n + t)$ worst-case time, where $t$ is the number of reported points. We can support 4-sided queries in $O(\log^2 n + t)$ worst-case time, and $O(\log^2 n)$ worst-case update time, using $O(n\log n)$ space. This allows us to improve the worst-case deletion time of the dynamic rectangular visibility query problem from $O(\log^3 n)$ to $O(\log^2 n)$. Our data structure can be adapted to the RAM model with word size $w$, where the coordinates of the points are integers in the range $U = \{0, \dots, 2^w-1\}$. We present a linear-space data structure for 3-sided range maxima queries with $O(\frac{\log n}{\log \log n} + t)$ worst-case query time and $O(\frac{\log n}{\log \log n})$ worst-case update time. This is the first data structure that achieves sublogarithmic worst-case complexity for all operations in the RAM model. The query time is optimal for the achieved update time.
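To fix the query semantics, a naive Python baseline for the dominance query (the structures in the paper answer it in $O(\log n + t)$ time with updates supported):

    # Report the maximal points of P among those that dominate the query
    # point q, i.e. are coordinate-wise >= q; assumes distinct points.
    def maxima_dominating(P, q):
        cand = [p for p in P if p[0] >= q[0] and p[1] >= q[1]]
        return [p for p in cand
                if not any(r != p and r[0] >= p[0] and r[1] >= p[1] for r in cand)]

    P = [(1, 5), (2, 3), (4, 4), (5, 1), (3, 2)]
    print(maxima_dominating(P, (2, 2)))   # [(4, 4)]: it dominates (2,3) and (3,2)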
Anna Adamaszek, Artur Czumaj, Andrzej Lingas and Jakub Onufry Wojtaszczyk. Approximation schemes for capacitated geometric network design
Abstract: We study a capacitated network design problem in a geometric setting. The input consists of an integral link capacity $k$ and two sets of points in the plane, sources and sinks, each source/sink having an associated integral demand (the amount of flow to be shipped from or to it). The capacitated geometric network design problem is to construct a minimum-length network $N$ that allows the requested flow to be routed from the sources to the sinks, such that each link in $N$ has capacity $k$; the flow is splittable and parallel links are allowed in $N$. The capacitated geometric network design problem generalizes, among others, the geometric Steiner tree problem, and as such it is NP-hard. We show that if the demands are polynomially bounded and the link capacity $k$ is not too large, the single-sink capacitated geometric network design problem admits a polynomial-time approximation scheme. If the capacity is arbitrarily large, then we design a quasi-polynomial-time approximation scheme for the capacitated geometric network design problem allowing an arbitrary number of sinks. Our results rely on deriving an upper bound on the number of vertices different from sources and sinks (the so-called Steiner vertices) in an optimal network. The bound is polynomial in the total demand of the sources.
Piotr Berman, Arnab Bhattacharyya, Konstantin Makarychev, Sofya Raskhodnikova and Grigory Yaroslavtsev. Improved Approximation for the Directed Spanner Problem
Abstract: We give a new $O(\sqrt{n}\log n)$-approximation algorithm for the problem of finding the sparsest spanner of a given {\em directed} graph $G$ on $n$ vertices. A spanner of a graph is a sparse subgraph that approximately preserves distances in the original graph. More precisely, given a graph $G=(V,E)$ with nonnegative edge lengths $d: E \rightarrow \mathbb{N}$ and a {\em stretch} $k\geq 1$, a subgraph $H = (V,E_H)$ is a {\em $k$-spanner} of $G$ if for every edge $(u,v) \in E$, the graph $H$ contains a path from $u$ to $v$ of length at most $k \cdot d(u,v)$. The previous best approximation ratio was $\tilde{O}(n^{2/3})$, due to Dinitz and Krauthgamer (STOC '11). We also present an improved algorithm for the important special case of directed 3-spanners with unit edge lengths. The approximation ratio of our algorithm is $\tilde{O}(n^{1/3})$, which almost matches the lower bound of Dinitz and Krauthgamer for the integrality gap of the linear program. The best previously known algorithms for this problem, due to Berman, Raskhodnikova and Ruan (FSTTCS '10) and Dinitz and Krauthgamer, had approximation ratio $\tilde{O}(\sqrt{n})$.
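A direct Python checker for the $k$-spanner condition, using Dijkstra on $H$ (a definition check of ours only, not the approximation algorithm):

    # H is a k-spanner of G iff for every edge (u,v) of G with length w,
    # H contains a u-v path of length at most k * w.
    import heapq

    def shortest_dist(H, n, s):
        """Dijkstra from s over a directed weighted adjacency dict H: u -> [(v, w), ...]."""
        dist = [float('inf')] * n
        dist[s] = 0
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in H.get(u, []):
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (d + w, v))
        return dist

    def is_k_spanner(G_edges, H, n, k):
        return all(shortest_dist(H, n, u)[v] <= k * w for u, v, w in G_edges)

    G = [(0, 1, 1), (1, 2, 1), (0, 2, 1)]
    H = {0: [(1, 1)], 1: [(2, 1)]}          # drop the edge (0, 2)
    print(is_k_spanner(G, H, 3, k=2))       # True: the path 0 -> 1 -> 2 has length 2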
Ryan Harkins and John Hitchcock. Exact Learning Algorithms, Betting Games, and Circuit Lower Bounds
Abstract: This paper extends and improves work of Fortnow and Klivans [FortnowKlivans09], who showed that if a circuit class $\mathcal{C}$ has an efficient learning algorithm in Angluin's model of exact learning via equivalence and membership queries [Angluin88], then we have the lower bound $\mathrm{EXP}^{\mathrm{NP}} \not\subseteq \mathcal{C}$. We use entirely different techniques involving betting games [BvMRSS01] to remove the $\mathrm{NP}$ oracle and improve the lower bound to $\mathrm{EXP} \not\subseteq \mathcal{C}$. This shows that it is even more difficult to design a learning algorithm for $\mathcal{C}$ than the results of Fortnow and Klivans indicated.
Andrew Drucker. A PCP Characterization of AM
Abstract: We introduce a 2-round stochastic constraint-satisfaction problem, and show that its approximation version is complete for (the promise version of) the complexity class AM. This gives a `PCP characterization' of AM analogous to the PCP Theorem for NP. Similar characterizations have been given for higher levels of the Polynomial Hierarchy, and for PSPACE; however, we suggest that the result for AM might be of particular significance for attempts to derandomize this class. To test this notion, we pose some hypotheses related to our stochastic CSPs that (in light of our result) would imply collapse results for AM. Unfortunately, the hypotheses may be over-strong, and we present evidence against them. In the process we show that, if some language in NP is hard-on-average against circuits of size $2^{\Omega(n)}$, then there exist `inapproximable-on-average' optimization problems of a particularly elegant form. All our proofs use a powerful form of PCPs known as Probabilistically Checkable Proofs of Proximity, and demonstrate their versatility. We also use known results on randomness-efficient soundness- and hardness-amplification. In particular, we make essential use of the Impagliazzo-Wigderson generator; our analysis relies on a recent Chernoff-type theorem for expander walks.