Accepted Papers (with abstracts)

The following contributions have been accepted for presentation at ICALP 2011. They are listed in no particular order.


Leslie Ann Goldberg and Mark Jerrum. A polynomial-time algorithm for estimating the partition function of the ferromagnetic Ising model on a regular matroid
Abstract: We investigate the computational difficulty of approximating the partition function of the ferromagnetic Ising model on a regular matroid. Jerrum and Sinclair have shown that there is a fully polynomial randomised approximation scheme (FPRAS) for the class of graphic matroids. On the other hand, the authors have previously shown, subject to a complexity-theoretic assumption, that there is no FPRAS for the class of binary matroids, which is a proper superset of the class of graphic matroids. In order to  map out the region where approximation is feasible, we focus on the class of regular matroids, an important class of matroids which properly includes the class of graphic matroids, and is properly included in the class of binary matroids. Using Seymour's decomposition theorem, we give an FPRAS for the class of regular matroids.
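For orientation, in the graphic case (the setting of Jerrum and Sinclair's FPRAS) the partition function in question has the following standard form; the regular-matroid version generalizes it via the Tutte polynomial. Textbook notation, not necessarily the paper's:

```latex
% Ferromagnetic Ising partition function on a graph G = (V, E)
% at inverse temperature \beta \ge 0 (the graphic-matroid case).
Z(G;\beta) \;=\; \sum_{\sigma : V \to \{-1,+1\}} \exp\Bigl(\beta \sum_{\{u,v\} \in E} \sigma_u \sigma_v\Bigr)
```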
T-H. Hubert Chan and Li Ning. Fast Convergence for Consensus in Dynamic Networks
Abstract: We study the convergence time required to achieve consensus in dynamic networks. In each time step, a node's value is updated to some weighted average of its neighbors' and its old values. We study the case when the underlying network is dynamic, and investigate different averaging models. Both our analysis and experiments show that dynamic networks exhibit fast convergence behavior, even under very mild connectivity assumptions.
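A minimal sketch of the kind of averaging dynamics described above, on a network whose edges are resampled every step; the 1/2 self-weight and the random neighbor sets are illustrative placeholders, not the paper's specific models:

```python
import random

def averaging_step(values, neighbors, self_weight=0.5):
    """One synchronous update: each node moves to a weighted average
    of its own value and its current neighbors' values."""
    new_values = []
    for i, v in enumerate(values):
        nbrs = neighbors[i]
        if not nbrs:
            new_values.append(v)
            continue
        nbr_avg = sum(values[j] for j in nbrs) / len(nbrs)
        new_values.append(self_weight * v + (1 - self_weight) * nbr_avg)
    return new_values

# Dynamic network: the neighbor sets change at every time step.
n = 8
values = [random.random() for _ in range(n)]
for step in range(50):
    neighbors = [[(i + 1) % n, random.randrange(n)] for i in range(n)]
    values = averaging_step(values, neighbors)
print(max(values) - min(values))  # shrinks toward 0 as consensus is approached
```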
Leah Epstein, Csanad Imreh, Asaf Levin and Judit Nagy-Gyorgy. On variants of file caching
Abstract: In the file caching problem, the input is a sequence of requests for files out of a slow memory. A file has two attributes, a retrieval cost and an integer size. It is required to maintain a cache of size k, bringing each file, which is not present in the cache at the time of request, from the slow memory into the cache. This situation is called a miss and incurs a cost equal to the retrieval cost of the file. Well-known special cases include paging (all costs and sizes are equal to 1), the cost model also known as weighted paging (all sizes are equal to 1), the fault model (all costs are equal to 1) and the bit model (the cost of a file is equal to its size). We study two online variants of the problem, caching with bypassing and caching with rejection. If bypassing is allowed, a miss for a file still results in an access to this file in the slow memory, but its subsequent insertion into the cache is optional. In the model with rejection, together with each request for a file, the algorithm is informed of the rejection penalty of the request. When a file which is not present in the cache is requested, the algorithm must either bring the file into the cache, paying the retrieval cost of the file, or reject the file, paying the rejection penalty of the request. The goal function is the sum of all amounts paid by the algorithm, that is, the sum of the total rejection penalty and the total retrieval cost. This problem generalizes both caching and caching with bypassing. We study the relation between these two variants and design deterministic and randomized algorithms for both problems. The competitive ratios of the randomized algorithms match the best known results for caching. These are O(\log k)-competitive algorithms for all important special cases, and an O(\log^2 k)-competitive algorithm for caching. This improves the best known result with bypassing for paging, the bit model and the cost model, and it is the first algorithm of competitive ratio o(k) for the other cases. In the deterministic case, it is known that a (k+1)-competitive algorithm for caching with bypassing exists, and this is best possible. In contrast, we present a lower bound of 2k+1 on the competitive ratio of any deterministic algorithm for the variant with rejection, which holds already for paging. Exploiting the relation between the two problems, we design a (2k+2)-competitive algorithm for caching with rejection. We also design a different (2k+1)-competitive algorithm for the latter problem, which is applicable to paging, the bit model and the cost model.
Andrei Bulatov and Dániel Marx. Constraint satisfaction parameterized by solution size
Abstract: In the constraint satisfaction problem (CSP) corresponding to a constraint language (i.e., a set of relations) $\Gamma$, the goal is to find an assignment of values to variables so that a given set of constraints specified by relations from $\Gamma$ is satisfied. In this paper we study the fixed-parameter tractability of constraint satisfaction problems parameterized by the size of the solution in the following sense: one of the possible values, say 0, is ``free,'' and the number of variables allowed to take other, ``expensive,'' values is restricted. A size constraint requires that exactly $k$ variables take nonzero values. We also study a more refined version of this restriction: a global cardinality constraint prescribes how many variables have to be assigned each particular value. We study the parameterized complexity of these types of CSPs where the parameter is the required number $k$ of nonzero variables. As special cases, we can obtain natural and well-studied parameterized problems such as Independent Set, Vertex Cover, $d$-Hitting Set, Biclique, etc. In the case of constraint languages closed under substitution of constants, we give a complete characterization of the fixed-parameter tractable cases of CSPs with size constraints, and we show that all the remaining problems are W[1]-hard. For CSPs with cardinality constraints, we obtain a similar classification, but for some of the problems we are only able to show that they are Biclique-hard. The exact parameterized complexity of the Biclique problem is a notorious open problem, although it is believed to be W[1]-hard.
Fabian Kuhn and Monaldo Mastrolilli. Vertex Cover in Graphs with Locally Few Colors
Abstract: In 1986 Erd\H{o}s et al. defined the local chromatic number of a graph as the minimum number of colors that must appear within distance 1 of a vertex. For any fixed $\Delta\geq 2$, they presented graphs with arbitrarily large chromatic number that can be colored so that: (i) no vertex neighborhood contains more than $\Delta$ different colors (\emph{bounded local colorability}), and (ii) adjacent vertices from two color classes form an induced subgraph that is complete and bipartite (\emph{local completeness}). We investigate the weighted vertex cover problem in graphs when a locally bounded coloring is given as input. This generalizes in a very natural vein the vertex cover problem in bounded degree graphs to a class of graphs with arbitrarily large chromatic number. Assuming the Unique Games Conjecture, we provide a tight characterization. More precisely, we prove that it is UG-hard to improve the approximation ratio of $2-2/(\Delta+1)$ if only the bounded local colorability, but not the local completeness condition holds for the given coloring. A matching upper bound is also provided. Vice versa, when both of the above properties (i) and (ii) hold, we present a randomized approximation algorithm with performance ratio of $2-\Omega(1)\cdot\frac{\ln \ln \Delta}{\ln\Delta}$. This matches (up to the constant factor in the lower order term) known inapproximability results for the special case of bounded degree graphs. Moreover, we show that when both of the above properties (i) and (ii) hold, the obtained result finds a natural application in a classical scheduling problem, namely the precedence constrained single machine scheduling problem to minimize the weighted sum of completion times. In a series of recent papers it was established that this scheduling problem is a special case of the minimum weighted vertex cover in graphs $G$ of incomparable pairs defined in the dimension theory of partial orders. We show that $G$ satisfies properties (i) and (ii) where $\Delta-1$ is the maximum number of predecessors (or successors) of each job.
Sourav Chakraborty, David García-Soriano and Arie Matsliah. Efficient Sample Extractors for Juntas with Applications
Abstract: We develop a query-efficient sample extractor for juntas, that is, a probabilistic algorithm that can simulate random samples from the core of a $k$-junta $f:\{0,1\}^n \to \{0,1\}$ given oracle access to a function $f':\{0,1\}^n \to \{0,1\}$ that is only close to $f$. After a preprocessing step, which takes $\widetilde{O}(k)$ queries, generating each sample to the core of $f$ takes only one query to $f'$. We then plug in our sample extractor in the ``testing by implicit learning'' framework of Diakonikolas et al. \cite{til}, improving the query complexity of testers for various Boolean function classes. In particular, for some of the classes considered in \cite{til}, such as $s$-term DNF formulas, size-$s$ decision trees, size-$s$ Boolean formulas, $s$-sparse polynomials over $\mathbb{F}_2$, and size-$s$ branching programs, the query complexity is reduced from $\widetilde{O}(s^4/\epsilon^2)$ to $\widetilde{O}(s/\epsilon^2)$. This shows that using the new sample extractor, testing by implicit learning can lead to testers having better query complexity than those tailored to a specific problem, such as the tester of Parnas et al. \cite{prs} for the class of monotone $s$-term DNF formulas. In terms of techniques, we extend the tools used in \cite{fiso} for testing function isomorphism to juntas. Specifically, while the original analysis in \cite{fiso} allowed query-efficient noisy sampling from the core of any $k$-junta $f$, the one presented here allows similar sampling from the core of the {\em closest} $k$-junta to $f$, even if $f$ is not a $k$-junta but just close to being one. One of the observations leading to this extension is that the junta tester of Blais \cite{blais_juntas}, based on which the aforementioned sampling is achieved, enjoys a certain weak form of tolerance.
Amin Karbasi, Stratis Ioannidis and Laurent Massoulie. Content Search Through Comparisons
Abstract: We study the problem of navigating through a database of similar objects using comparisons under heterogeneous demand, a problem strongly related to small-world network design. We show that, under heterogeneous demand, the small-world network design problem is NP-hard. Given the above negative result, we propose a novel mechanism for small-world network design and provide an upper bound on its performance under heterogeneous demand. The above mechanism has a natural equivalent in the context of content search through comparisons, again under heterogeneous demand; we use this to establish both upper and lower bounds on content-search through comparisons.
Christos Kapoutsis. Nondeterminism is Essential in Small 2FAs with Few Reversals
Abstract: On every n-long input, every two-way finite automaton (2FA) can reverse its head O(n) times before halting. A "2FA with few reversals" is an automaton where this number is only o(n). For every h, we exhibit a language that requires Ω(2^h) states on every deterministic 2FA with few reversals, but only h states on a nondeterministic 2FA with few reversals.
Christine Cheng, Eric Mcdermid and Ichiro Suzuki. Center stable matchings and centers of cover graphs of distributive lattices
Abstract: Let I be an instance of the stable marriage (SM) problem.  In the late 1990s, Teo and Sethuraman discovered the existence of median stable matchings, which are stable matchings that match all participants to  their (lower/upper) median stable partner.   About a decade later, Cheng showed that not only are they locally-fair, but they are also globally-fair in the following sense:   when G(I) is the cover graph of the distributive lattice of stable matchings, these stable matchings  are  also  medians of G(I) -- i.e., their average distance to the other stable matchings is as small as possible.   Unfortunately, finding a median stable matching of I is #P-hard.   Inspired by the fairness properties of the median stable matchings, we study the center stable matchings which are the centers of G(I) --  i.e., the stable matchings whose maximum distance to any stable matching is as small as possible.   Here are our two main results.  First,  we show that a center stable matching of I can be computed in O(|I|^{2.5}) time.   Thus,  center stable matchings are the first type of globally-fair stable matchings we know of that can be computed efficiently.    Second, we show that in spite of the first result, there are similarities between the set of median stable matchings and the set of center stable matchings of I.  The former induces a hypercube in G(I) while the latter is the union of hypercubes of a fixed dimension in G(I).   Furthermore,  center stable matchings have a property that approximates the one found by Teo and Sethuraman for median stable matchings.  Finally, we note that our results extend to other variants of SM whose solutions form a distributive lattice and whose rotation posets can be constructed efficiently.  
Ashley Montanaro, Aram Harrow and Anthony Short. Limitations on quantum dimensionality reduction
Abstract: The Johnson-Lindenstrauss Lemma is a classic result which implies that any set of n real vectors can be compressed to O(log n) dimensions while only distorting pairwise Euclidean distances by a constant factor. Here we consider potential extensions of this result to the compression of quantum states. We show that, by contrast with the classical case, there does not exist any distribution over quantum channels that significantly reduces the dimension of quantum states while preserving the 2-norm distance with high probability. We discuss two tasks for which the 2-norm distance is indeed the correct figure of merit. In the case of the trace norm, we show that the dimension of low-rank mixed states can be reduced by up to a square root, but that essentially no dimensionality reduction is possible for highly mixed states.
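The classical compression being contrasted here is easy to demonstrate: a random Gaussian map to O(log n) dimensions approximately preserves pairwise Euclidean distances. A standard (purely classical) sketch, with arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 1000, 60            # n vectors, original dim d, target dim k

X = rng.normal(size=(n, d))
R = rng.normal(size=(d, k)) / np.sqrt(k)   # random Gaussian projection
Y = X @ R

# Compare a few pairwise Euclidean distances before and after projection.
for (i, j) in [(0, 1), (2, 3), (4, 5)]:
    before = np.linalg.norm(X[i] - X[j])
    after = np.linalg.norm(Y[i] - Y[j])
    print(f"pair ({i},{j}): distortion {after / before:.3f}")  # roughly 1
```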
Bogdan Chlebus, Dariusz Kowalski, Andrzej Pelc and Mariusz Rokicki. Efficient Distributed Communication in Ad-hoc Radio Networks
Abstract: We present new distributed deterministic solutions to two communication problems in $n$-node ad-hoc radio networks: rumor gathering and multi-broadcast. In these problems, some or all nodes of the network initially contain input data called rumors, which have to be learned by other nodes. In rumor gathering, there are $k$ rumors initially distributed arbitrarily among the nodes, and the goal is to collect all the rumors at one node. Multi-broadcast is related to two fundamental communication problems: gossiping and routing. In gossiping, every node is initialized with a rumor and the goal is for all nodes to learn all rumors. In routing, $k$ rumors distributed arbitrarily among the nodes must be delivered each to its designated destination. The problem of multi-broadcast, considered in this work, is a generalization of gossiping and a strengthening of routing, and is defined as follows: some $k$ rumors are distributed arbitrarily among the nodes and the goal is for all the nodes to learn every rumor. Our rumor gathering algorithm works in $O((k+n)\log n)$ time and our multi-broadcast algorithm works in $O(k\log^3 n+n\log^4 n)$ time, for any $n$-node network and $k$ rumors (with arbitrary $k$), which is a substantial improvement over the best previously known deterministic solutions to these problems. As a consequence, we \emph{exponentially} decrease the gap between upper and lower bounds on the deterministic time complexity of four communication problems: rumor gathering, multi-broadcast, gossiping and routing, in the important case when every node initially has at most one rumor (this is the scenario for gossiping and for the usual formulation of routing). Indeed, for $k=O(n)$, our results simultaneously decrease the complexity gaps for these four problems from polynomial to polylogarithmic in the size of the graph. Moreover, our {\em deterministic} gathering algorithm applied for $k=O(n)$ rumors improves over the best previously known {\em randomized} algorithm of time $O(k\log n+ n\log^2 n)$.
Jakob Nordstrom and Alexander Razborov. On Minimal Unsatisfiability and Time-Space Trade-offs for k-DNF Resolution
Abstract: A well-known theorem by Tarsi states that a minimally unsatisfiable CNF formula with m clauses can have at most m-1 variables, and this bound is exact.  In the context of proving lower bounds on proof space in k-DNF resolution, [Ben-Sasson and Nordstrom 2009] extended the concept of minimal unsatisfiability to sets of k-DNF formulas and proved that a minimally unsatisfiable k-DNF set with m formulas can have at most O((mk)^(k+1)) variables.  This result is far from tight, however, since they could only present explicit constructions of minimally unsatisfiable sets with Omega(mk^2) variables. In the current paper, we revisit this combinatorial problem and significantly improve the lower bound to Omega(m)^k, which almost matches the upper bound above.  Furthermore, using similar ideas we show that the analysis of the technique in [Ben-Sasson and Nordstrom 2009] for proving time-space separations and trade-offs for k-DNF resolution is almost tight. This means that although it is possible, or even plausible, that stronger results than in [Ben-Sasson and Nordström 2009] should hold, a fundamentally different approach would be needed to obtain such results.
Shengyu Zhang. On the power of lower bound methods for one-way quantum communication complexity
Abstract: This paper studies the three known lower bound methods for one-way quantum communication complexity, namely the Partition Tree method by Nayak, the Trace Distance method by Aaronson, and the two-way quantum communication complexity. It is shown that all three methods can be exponentially weak for some total Boolean functions. In particular, for a large class of functions generated from Erdos-Renyi random graphs G(N,p) with p in some range of 1/poly(N), the two-way quantum communication complexity gives a linear lower bound while the other two methods give only constant lower bounds. This rules out the possibility of using any of these known quantum lower bounds for proving the fundamental conjecture that the classical and quantum one-way communication complexities are polynomially related. En route, we also discover that the power of Nayak's method is exactly equal to the \emph{extended equivalence query complexity} in learning theory.
Markus Jalsenius and Raphael Clifford. Lower Bounds for Online Integer Multiplication and Convolution in the Cell-Probe Model
Abstract: We show time lower bounds for both online integer multiplication and convolution in the cell-probe model with word size w. For the multiplication problem, one pair of digits, each from one of two n digit numbers that are to be multiplied, is given as input at step i. The online algorithm outputs a single new digit from the product of the numbers before step i+1. We give a lower bound of Omega(d/w log n) time on average per output digit for this problem where 2^d is the maximum value of a digit. In the convolution problem, we are given a fixed vector V of length n and we consider a stream in which numbers arrive one at a time. We output the inner product of V and the vector that consists of the last n numbers of the stream. We show an Omega(d/w log n) lower bound for the time required per new number in the stream. All the bounds presented hold under randomisation and amortisation. Multiplication and convolution are central problems in the study of algorithms which also have the widest range of practical applications.
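To fix the setting, the following naive solution to the convolution problem spends Θ(n) word operations per arriving number; the paper's result shows that a per-output cost of Ω((d/w) log n) is unavoidable for any cell-probe algorithm. The data layout is illustrative:

```python
from collections import deque

def online_inner_products(V, stream):
    """Naive online convolution: after each arrival, output the inner
    product of V with the last n numbers of the stream (zero-padded at
    the start). This takes Theta(n) word operations per arrival."""
    n = len(V)
    window = deque([0] * n, maxlen=n)
    for x in stream:
        window.append(x)          # drops the oldest entry
        yield sum(v * w for v, w in zip(V, window))

print(list(online_inner_products([1, 2, 3], [5, 1, 4, 2])))  # [15, 13, 19, 15]
```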
Shiri Chechik. Fault-Tolerant Compact Routing Schemes for General Graphs
Abstract: This paper considers compact fault-tolerant routing schemes for weighted general graphs, namely, routing schemes that avoid a set of failed (or {\em forbidden}) edges. We present a compact routing scheme capable of handling multiple edge failures. Assume a source node $s$ contains a message $M$ designated to a destination target $t$ and assume a set $F$ of edges crashes (unknown to $s$). Our scheme routes the message to $t$ (provided that $s$ and $t$ are still connected in $G \setminus F$) over a path whose length is proportional to the distance between $s$ and $t$ in $G \setminus F$, the number of failures in $F$ and some poly-log factor. The routing table required at a node $v$ is of size proportional to the degree of $v$ in $G$ and some poly-log factor. This improves on the previously known fault-tolerant compact routing scheme for general graphs, which was capable of overcoming at most 2 edge failures.
Chien-Chung Huang and Telikepalli Kavitha. Popular Matchings in the Stable Marriage Problem
Abstract: The input is a bipartite graph G = (A U B, E) where each vertex u in A U B ranks its neighbors in a strict order of preference. This is the same as an instance of the stable marriage problem with incomplete lists. A matching M is said to be popular if there is no matching M' such that more vertices are better off in M' than in M. Any stable matching of G is popular; however, a stable matching is only a minimum cardinality popular matching. Here we consider the problem of computing a maximum cardinality popular matching. It has very recently been shown that when preference lists have ties, the problem of determining whether a given instance admits a popular matching is NP-complete. When preference lists are strict, popular matchings always exist; however, the complexity of computing a maximum cardinality popular matching was unknown. In this paper we give a simple characterization of popular matchings when preference lists are strict and a sufficient condition for a maximum cardinality popular matching. We then show an O(mn) algorithm for computing a maximum cardinality popular matching, where m is the number of edges and n is the number of vertices in G.
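The popularity condition is operational: M' defeats M if strictly more vertices are better off in M' than in M, and M is popular if no M' defeats it. A hedged helper for tallying such votes (the data layout, and the convention that being matched beats being unmatched, are illustrative choices):

```python
def votes(prefs, M, M2):
    """Count vertices strictly better off in M2 than in M, and vice versa.
    prefs[u] is u's ranked neighbor list; M and M2 map each matched
    vertex to its partner (unmatched vertices are simply absent)."""
    def rank(u, v):
        # Being unmatched ranks below every listed neighbor.
        return prefs[u].index(v) if v is not None else len(prefs[u])
    better = worse = 0
    for u in prefs:
        r_new, r_old = rank(u, M2.get(u)), rank(u, M.get(u))
        if r_new < r_old:
            better += 1
        elif r_new > r_old:
            worse += 1
    return better, worse

# M2 defeats M iff better > worse; M is popular iff no M2 defeats it.
```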
Jiawei Qian and David Williamson. An O(log n)-competitive algorithm for online constrained forest problems
Abstract: In the generalized Steiner tree problem, we find a minimum-cost set of edges to connect a given set of source-sink pairs.  In the online version of this problem, the source-sink pairs arrive over time.  Agrawal, Klein, and Ravi give a 2-approximation algorithm for the offline problem; Berman and Coulston give an O(log n)-competitive algorithm for the online problem. Goemans and Williamson subsequently generalized the offline algorithm of Agrawal et al. to handle a large class of problems they called constrained forest problems, and other problems, such as the prize-collecting Steiner tree problem.  In this paper, we show how to combine the ideas of Goemans and Williamson and those of Berman and Coulston to give an O(log n)-competitive algorithm for online constrained forest problems, including an online version of the prize-collecting Steiner tree problem.
Hung Ngo, Ely Porat and Atri Rudra. Efficiently Decodable Error-Correcting List Disjunct Matrices and Applications
Abstract: A $(d,\ell)$-list disjunct matrix is a non-adaptive group testing primitive that, given a set of $n$ items with at most $d$ ``defectives,'' outputs a super-set of the defectives containing at most $\ell-1$ non-defective items. The primitive has found many applications as stand-alone objects and as building blocks in the construction of other combinatorial objects. This paper studies error-tolerant list disjunct matrices which can correct up to $e_0$ false positives and $e_1$ false negatives in sub-linear time. We then use list-disjunct matrices to prove new results in three different applications. Our major contributions are as follows. First, we prove several (almost)-matching lower and upper bounds for the optimal number of tests, including the fact that $\Theta(d\log(n/d) + e_0+de_1)$ is necessary and sufficient when $\ell=\Theta(d)$. Similar results are also derived for the disjunct matrix case (i.e., $\ell=1$). Second, we present two methods that convert error-tolerant list disjunct matrices in a \textit{black-box} manner into error-tolerant list disjunct matrices that are also {\em efficiently decodable}. The methods help us derive a family of (strongly) explicit constructions of list-disjunct matrices which are either optimal or near optimal, and which are also efficiently decodable. Third, we show how to use error-correcting efficiently decodable list-disjunct matrices in three different applications: (i) explicit constructions of $d$-disjunct matrices with $t = O(d^2\log n+rd)$ tests which are decodable in $\mathrm{poly}(t)$ time, where $r$ is the maximum number of test errors. This result is optimal for $r = \Omega(d\log n)$, and even for $r=0$ this result improves upon known results; (ii) (explicit) constructions of (near)-optimal, error-correcting, and efficiently decodable monotone encodings; and (iii) (explicit) constructions of (near)-optimal, error-correcting, and efficiently decodable multiple user tracing families.
Silvia Crafa and Francesco Ranzato. Probabilistic Bisimulation and Simulation Algorithms by Abstract Interpretation
Abstract: We show how probabilistic bisimulation equivalence and simulation preorder, namely the main behavioural relations on probabilistic nondeterministic processes, can be characterized by abstract interpretation. Indeed, both bisimulation and simulation can be obtained as domain completions of partitions and preorders, viewed as abstractions, w.r.t. a pair of concrete functions that encode a probabilistic LTS. As a consequence, this approach provides a general framework for designing algorithms for computing probabilistic bisimulation and simulation. Notably, (i) we show that the standard probabilistic bisimulation algorithm by Baier et al. can be viewed as an instance of such a framework and (ii) we design a new efficient probabilistic simulation algorithm that improves the state of the art.
Daniel Lokshtanov and Dániel Marx. Clustering with Local Restrictions
Abstract: We study a family of graph clustering problems where each cluster has to satisfy a certain local requirement. Formally, let $\mu$ be a function on the subsets of vertices of a graph $G$. In the $(\mu,p,q)$-Partition problem, the task is to find a partition of the vertices where each cluster $C$ satisfies the requirements that (1) at most $q$ edges leave $C$ and (2) $\mu(C)\le p$. Our first result shows that if $\mu$ is an {\em arbitrary} polynomial-time computable monotone function, then $(\mu,p,q)$-Partition can be solved in time $n^{O(q)}$, i.e., it is polynomial-time solvable {\em for every fixed $q$}. We study in detail three concrete functions $\mu$ (number of nonedges in the cluster, maximum degree of nonedges in the cluster, number of vertices in the cluster), which correspond to natural clustering problems. For these functions, we show that $(\mu,p,q)$-Partition can be solved in time $2^{O(p)}\cdot n^{O(1)}$ and in randomized time $2^{O(q)}\cdot n^{O(1)}$, i.e., the problem is fixed-parameter tractable parameterized by $p$ or by $q$.
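Checking the two local requirements for a given candidate partition is straightforward; a sketch using $\mu$ = number of nonedges, one of the three concrete functions studied (the adjacency-set representation is an assumption):

```python
import itertools

def is_valid_partition(adj, clusters, mu, p, q):
    """Check the two local requirements from the problem definition:
    each cluster C must have (1) at most q edges leaving it and
    (2) mu(C) <= p. adj is a symmetric adjacency set per vertex."""
    for C in clusters:
        Cset = set(C)
        leaving = sum(1 for u in C for v in adj[u] if v not in Cset)
        if leaving > q or mu(Cset, adj) > p:
            return False
    return True

def num_nonedges(C, adj):
    """One of the three concrete mu's: nonadjacent pairs inside C."""
    return sum(1 for u, v in itertools.combinations(C, 2) if v not in adj[u])
```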
Amin Coja-Oghlan and Angelica Pachon-Pinzon. The decimation process in random k-SAT
Abstract: Let F be a uniformly distributed random k-SAT formula with n variables and m clauses. Non-rigorous statistical mechanics ideas have inspired a message passing algorithm called Belief Propagation Guided Decimation for finding satisfying assignments of F. This algorithm can be viewed as an attempt at implementing a certain thought experiment that we call the Decimation Process. In this paper we identify a variety of phase transitions in the decimation process and link these phase transitions to the performance of the algorithm.
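For concreteness, the random model can be sampled as follows (one of several essentially equivalent conventions for "uniformly distributed"); the decimation process then fixes the variables of such a formula one at a time:

```python
import random

def random_ksat(n, m, k=3, rng=random):
    """Draw a k-SAT formula with m clauses over n variables: each clause
    picks k distinct variables and negates each independently with
    probability 1/2."""
    formula = []
    for _ in range(m):
        vars_ = rng.sample(range(1, n + 1), k)
        formula.append([v if rng.random() < 0.5 else -v for v in vars_])
    return formula

F = random_ksat(n=100, m=420, k=3)   # m/n ~ 4.2, near the satisfiability threshold
```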
Hans L. Bodlaender, Bart M. P. Jansen and Stefan Kratsch. Preprocessing for Treewidth: A Combinatorial Analysis through Kernelization
Abstract: Using the framework of kernelization we study whether efficient preprocessing schemes for the Treewidth problem can give provable bounds on the size of the processed instances. Assuming the AND-distillation conjecture to hold, the standard parameterization of Treewidth does not have a kernel of polynomial size and thus instances (G, k) of the decision problem of Treewidth cannot be efficiently reduced to equivalent instances of size polynomial in k. In this paper, we consider different parameterizations of Treewidth. We show that Treewidth has a kernel with O(l^3) vertices, with l the size of a vertex cover, and a kernel with O(l^4) vertices, with l the size of a feedback vertex set. This implies that given an instance (G, k) of treewidth we can efficiently reduce its size to O((l*)^4) vertices, where l* is the size of a minimum feedback vertex set in G. In contrast, we show that Treewidth parameterized by the vertex-deletion distance to a co-cluster graph and Weighted Treewidth parameterized by the size of a vertex cover do not have polynomial kernels unless NP \subseteq coNP/poly. Treewidth parameterized by the deletion distance to a cluster graph has no polynomial kernel unless the AND-distillation conjecture does not hold.
Stefan Kiefer, Andrzej Murawski, Joel Ouaknine, James Worrell and Lijun Zhang. On Stabilization in Herman's Algorithm
Abstract: Herman's algorithm is a synchronous randomized protocol for achieving self-stabilization in a token ring consisting of N processes. The interaction of tokens makes the dynamics of the protocol very difficult to analyze. In this paper we study the expected time to stabilization in terms of the initial configuration. It is straightforward that the algorithm achieves stabilization almost surely from any initial configuration, and it is known that the worst-case expected time to stabilization (with respect to the initial configuration) is Theta(N^2). Our first contribution is to give an upper bound of 0.64 N^2 on the expected stabilization time, improving on previous upper bounds and reducing the gap with the best existing lower bound.  We also introduce an asynchronous version of the protocol, showing a similar O(N^2) convergence bound in this case. Assuming that errors arise from the corruption of some number k of bits, where k is fixed independently of the size of the ring, we show that the expected time to stabilization is O(N). This reveals a hitherto unknown and highly desirable property of Herman's algorithm: it recovers quickly from bounded errors. We also show that if the initial configuration arises by resetting each bit independently and uniformly at random, then stabilization is significantly faster than in the worst case.
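The protocol itself is short enough to simulate. A sketch under the standard formulation (odd ring size N, one bit per process, token at i iff bits[i] equals its left neighbor's bit), useful for comparing empirical stabilization times against the bounds discussed:

```python
import random

def herman_steps(bits):
    """Run Herman's protocol on a ring until one token remains; return
    the number of synchronous rounds. Token holders flip a fair coin,
    everyone else copies the left neighbor. N must be odd."""
    N = len(bits)
    steps = 0
    def tokens(b):
        return sum(b[i] == b[i - 1] for i in range(N))
    while tokens(bits) > 1:
        bits = [random.randrange(2) if bits[i] == bits[i - 1] else bits[i - 1]
                for i in range(N)]
        steps += 1
    return steps

N = 15
trials = [herman_steps([random.randrange(2) for _ in range(N)]) for _ in range(2000)]
print(sum(trials) / len(trials))   # compare against the Theta(N^2) worst case
```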
Bundit Laekhanukit. An improved approximation algorithm for the minimum-cost subset k-connectivity
Abstract: The minimum-cost subset $k$-connected subgraph problem is a cornerstone problem in the area of network design with vertex connectivity requirements. In this problem, we are given a graph $G=(V,E)$ with costs on edges and a set of terminals $T$. The goal is to find a minimum-cost subgraph such that every pair of terminals is connected by $k$ openly (vertex) disjoint paths. In this paper, we present an approximation algorithm for the subset $k$-connected subgraph problem which improves on the previous best approximation guarantee of $O(k^2\log{k})$ by Nutov (FOCS 2009). Our approximation guarantee, $\alpha(|T|)$, depends upon the number of terminals: \[ \alpha(|T|) \ \ =\ \   \begin{cases} O(|T|^2) &\mbox{if } |T| < 2k\\ O(k \log^3 k) & \mbox{if } 2k\le |T| < k^2\\ O(k \log^2 k) & \mbox{if } |T| \ge k^2 \end{cases} \] Thus, when the number of terminals is {\em large enough}, the approximation guarantee improves dramatically. Moreover, we show that, given an approximation algorithm for $|T|=k$, we can obtain almost the same approximation guarantee for any instance with $|T|>k$. This suggests that the hardest instances of the problem are those with $|T|\approx k$.
Michael Goodrich and Michael Mitzenmacher. Privacy-Preserving Access of Outsourced Data via Oblivious RAM Simulation
Abstract: Suppose a client, Alice, has outsourced her data to an external storage provider, Bob, because he has capacity for her massive data set, of size $n$, whereas her private storage is much smaller---say, of size $O(n^{1/r})$, for some constant $r>1$. Alice trusts Bob to maintain her data, but she would like to keep its contents private. She can encrypt her data, of course, but she also wishes to keep her access patterns hidden from Bob as well. In this paper, we describe a scheme for this \emph{oblivious RAM simulation} problem with a small logarithmic or polylogarithmic amortized increase in her access times, while keeping the storage Alice purchases from Bob to be of size $O(n)$.   To achieve this, our algorithmic contributions include a parallel MapReduce cuckoo-hashing algorithm and an external-memory data-oblivious sorting algorithm.
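One of the mentioned building blocks, cuckoo hashing, is easy to sketch in its textbook sequential form; the paper's contribution is a parallel MapReduce variant, which this toy version does not attempt to reproduce:

```python
import random

class CuckooTable:
    """Minimal two-table cuckoo hashing: each key lives in one of two
    candidate positions; inserts evict and relocate along a cuckoo walk."""
    def __init__(self, size=11, max_kicks=50):
        self.t = [[None] * size, [None] * size]
        self.size, self.max_kicks = size, max_kicks
        self.seeds = (random.random(), random.random())

    def _pos(self, which, key):
        return hash((self.seeds[which], key)) % self.size

    def insert(self, key):
        for _ in range(self.max_kicks):
            for which in (0, 1):
                i = self._pos(which, key)
                if self.t[which][i] is None:
                    self.t[which][i] = key
                    return True
                self.t[which][i], key = key, self.t[which][i]  # evict and carry on
        return False  # a full implementation would rehash here

    def contains(self, key):
        return any(self.t[w][self._pos(w, key)] == key for w in (0, 1))

T = CuckooTable()
for key in ["alice", "bob", "carol"]:
    T.insert(key)
print(T.contains("bob"))   # True
```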
Malte Beecken, Johannes Mittmann and Nitin Saxena. Algebraic Independence and Blackbox Identity Testing
Abstract: Algebraic independence is an advanced notion in commutative algebra that generalizes independence of linear polynomials to higher degree. Polynomials {f_1, ..., f_m} \subset \mathbb{F}[x_1, ..., x_n] are called algebraically independent if there is no non-zero polynomial F such that F(f_1, ..., f_m) = 0. The transcendence degree, trdeg{f_1, ..., f_m}, is the maximal number r of algebraically independent polynomials in the set. In this paper we design blackbox and efficient linear maps \phi that reduce the number of variables from n to r but maintain trdeg{\phi(f_i)}_i = r, assuming the f_i's are sparse and r is small. We apply these fundamental maps to solve several cases of blackbox identity testing: (1) Given a polynomial-degree circuit C and sparse polynomials f_1, ..., f_m with trdeg r, we can test the blackbox D := C(f_1, ..., f_m) for zeroness in poly(size(D))^r time. (2) Define a spsp_\delta(k,s,n) circuit C to be of the form \sum_{i=1}^k \prod_{j=1}^s f_{i,j}, where f_{i,j} are sparse n-variate polynomials of degree at most \delta. For k = 2 we give a poly(\delta sn)^{\delta^2} time blackbox identity test. (3) For a general depth-4 circuit we define a notion of rank. Assuming there is a rank bound R for minimal simple spsp_\delta(k,s,n) identities, we give a poly(\delta snR)^{Rk\delta^2} time blackbox identity test for spsp_\delta(k,s,n) circuits. This partially generalizes the state of the art from depth-3 to depth-4 circuits. The notion of trdeg works best with large or zero characteristic, but we also give versions of our results for arbitrary fields.
Naonori Kakimura and Kazuhisa Makino. Robust Independence Systems
Abstract: An independence system \cal{F} is one of the most fundamental combinatorial concepts, which includes a variety of objects in graphs and hypergraphs such as matchings, stable sets, and matroids. We discuss the robustness for independence systems, which is a natural generalization of the greedy property of matroids. For a real number \alpha > 0, a set X in \cal{F} is said to be \alpha-robust if for any k, it includes an \alpha-approximation of the maximum k-independent set, where a set Y in \cal{F} is called k-independent if the size |Y| is at most k. In this paper, we show that every independence system has a 1/\sqrt{\mu(\cal{F})}-robust independent set, where \mu(\cal{F}) denotes the exchangeability of \cal{F}. Our result contains a classical result for matroids and those of Hassin and Rubinstein [SIAM J. Disc. Math. 2002] for matchings and Fujita, Kobayashi, and Makino [ESA 2010] for matroid 2-intersections, and provides better bounds on the robustness for many independence systems such as b-matchings, hypergraph matchings, matroid p-intersections, and unions of vertex disjoint paths. Furthermore, we provide bounds on the robustness for nonlinear weight functions such as submodular and convex quadratic functions. We also extend our results to independence systems in the integral lattice with separable concave weight functions.
Kohei Suenaga and Ichiro Hasuo. Programming with Infinitesimals: A While-Language for Hybrid System Modeling
Abstract: We add, to the common combination of a WHILE-language and a Hoare-style program logic, a constant dt that represents an infinitesimal (i.e. infinitely small) value. The outcome is a framework for modeling and verification of hybrid systems: hybrid systems exhibit both continuous and discrete dynamics and getting them right is a pressing challenge. We rigorously define the semantics of programs in the language of nonstandard analysis, on the basis of which the program logic is shown to be sound and relatively complete.
Tomas Brazdil, Stefan Kiefer, Antonin Kucera and Ivana Hutarova Varekova. Runtime Analysis of Probabilistic Programs with Unbounded Recursion
Abstract: We study the runtime of probabilistic programs with unbounded recursion. As the underlying formal model for such programs we use probabilistic pushdown automata (pPDA), which exactly correspond to recursive Markov chains. We show that every pPDA can be transformed into a stateless pPDA (called ``pBPA'') whose runtime and further properties are closely related to those of the original pPDA. This result substantially simplifies the analysis of runtime and other pPDA properties. We prove that for every pPDA the probability of performing a long run decreases exponentially in the length of the run if and only if the expected runtime of the pPDA is finite. If the expectation is infinite, then the probability decreases ``polynomially''. We show that these bounds are asymptotically tight. Our tail bounds on the runtime are generic, i.e., applicable to any probabilistic program with unbounded recursion. An intuitive interpretation is that in pPDA the runtime is exponentially unlikely to deviate from its expected value.
Marek Cygan, Marcin Pilipczuk, Michal Pilipczuk and Jakub Wojtaszczyk. Subset Feedback Vertex Set is Fixed Parameter Tractable
Abstract: The classical FEEDBACK VERTEX SET problem asks, for a given undirected graph G and an integer k, to find a set of at most k vertices that hits all the cycles in the graph G. FEEDBACK VERTEX SET has attracted a large amount of research in the parameterized setting, and subsequent kernelization and fixed-parameter algorithms have been a rich source of ideas in the field. In this paper we consider a more general and difficult version of the problem, named SUBSET FEEDBACK VERTEX SET (SUBSET-FVS for short), where an instance comes additionally with a set S of vertices, and we ask for a set of at most k vertices that hits all simple cycles passing through S. Because of its applications in circuit testing and genetic linkage analysis, SUBSET-FVS was studied from the approximation algorithms perspective by Even et al. [SICOMP'00, SIDMA'00]. The question whether the SUBSET-FVS problem is fixed-parameter tractable was posed independently by K. Kawarabayashi and S. Saurabh in 2009. We answer this question affirmatively. We begin by showing that this problem is fixed-parameter tractable when parameterized by |S|. Next we present an algorithm which reduces the size of S to O(k^3) in 2^O(k log k) n^O(1) time using kernelization techniques such as the 2-Expansion Lemma, Menger's theorem and Gallai's theorem. These two facts allow us to give a 2^O(k log k) n^O(1) time algorithm solving the SUBSET FEEDBACK VERTEX SET problem, proving that it is indeed fixed-parameter tractable.
Simone Bova, Hubie Chen and Matt Valeriote. Generic Expression Hardness Results for Primitive Positive Formula Comparison
Abstract: We study the expression complexity of two basic problems involving the comparison of primitive positive formulas: equivalence and containment. We give two generic hardness results for the studied problems, and discuss evidence that they are optimal and yield, for each of the problems, a complexity trichotomy.
Martin Hoefer. Local Matching Dynamics in Social Networks
Abstract: We study stable marriage and roommates problems in graphs with locality constraints. Each player is a node in a social network and has an incentive to match with other players. The value of a match is specified by an edge weight. Players explore possible matches only based on their current neighborhood. We study convergence of natural better response dynamics that converge to locally stable matchings -- matchings that allow no incentive to deviate with respect to their imposed information structure in the social network. For every starting state we construct in polynomial time a sequence of polynomially many better response moves to a locally stable matching. However, for a large class of oblivious dynamics including random and concurrent better response the convergence time turns out to be exponential. Perhaps surprisingly, convergence time becomes polynomial if we allow the players to have a small amount of random memory, even for many-to-many matchings and more general notions of neighborhood.
Eric Allender and Fengming Wang. On the power of algebraic branching programs of width two
Abstract: We show that there are families of polynomials having small depth-two arithmetic circuits that cannot be expressed by algebraic branching programs of width two.  This clarifies the complexity of the problem of computing the product of a sequence of two-by-two matrices, which arises in several settings.
Benoit Libert and Moti Yung. Adaptively Secure Non-Interactive Threshold Cryptosystems
Abstract: Threshold cryptography aims at enhancing the availability and security of encryption and signature schemes by splitting private keys into several (say $n$) shares (typically, each of size comparable to the original secret key).  In these schemes, a quorum ($t \leq n$) of servers needs to act upon a message to produce the result (decrypted value or signature), while corrupting less than $t$ servers maintains the security of the scheme. For about two decades starting from the mid 80's, extensive study was dedicated to this subject, which created a number of notable results. So far, most practical threshold signatures, where servers act non-interactively, were analyzed in the limited static corruption  model (where the adversary chooses which servers will be corrupted at the system's initialization stage). Existing threshold encryption schemes that withstand the strongest combination of adaptive corruptions (allowing the adversary to corrupt servers at any time based on his complete view), and chosen-ciphertext attacks (CCA) all require interaction (in the non-idealized model) and attempts to remedy this problem resulted only in relaxed schemes. To date (and for about 10 years), it was an open problem whether there are non-interactive threshold schemes providing the highest security (namely, CCA-secure encryption and CMA-secure signature) with scalable shares (i.e., shares are as short as the original key) and adaptive security. This paper answers this question affirmatively by presenting such efficient encryption and signature schemes within a unified algebraic framework.
Shi Li. A 1.488-approximation algorithm for the uncapacitated facility location problem
Abstract: We present a 1.488-approximation algorithm for the metric uncapacitated facility location (UFL) problem. The previous best algorithm was due to Byrka \cite{Byr07}. By linearly combining the algorithm $A1(\gamma_f)$ with $\gamma_f = 1.6774$ and the (1.11,1.78)-approximation algorithm $A2$ proposed by Jain, Mahdian and Saberi \cite{JMS02}, Byrka gave a 1.5-approximation algorithm for the UFL problem. We show that if $\gamma_f$ is randomly selected from some distribution, the approximation ratio can be improved to 1.488.
Nicole Megow, Kurt Mehlhorn and Pascal Schweitzer. Online Graph Exploration: New Results on Old and New Algorithms
Abstract: We study the problem of exploring an unknown undirected connected graph. Beginning in some start vertex, a searcher must visit each node of the graph by traversing edges. Upon visiting a vertex for the first time, the searcher learns all incident edges and their respective traversal costs. The goal is to find a tour of minimum total cost. Kalyanasundaram and Pruhs proposed a sophisticated generalization of a Depth First Search that is 16-competitive on planar graphs.  While the algorithm is feasible on arbitrary graphs, the question whether it has constant competitive ratio in general has remained open. Our main result is an involved lower bound construction that answers this question negatively. On the positive side, we prove that the algorithm has constant competitive ratio on any class of graphs with bounded genus.  Furthermore, we provide a constant competitive algorithm for general graphs with a bounded number of distinct weights.
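As a baseline for the online model described (edges and costs are revealed only upon first visiting a vertex), here is the naive nearest-neighbor strategy; it is not the Kalyanasundaram-Pruhs algorithm analyzed in the paper, and its cost accounting is simplified as noted in the comments:

```python
import heapq

def greedy_explore(start, incident):
    """Nearest-neighbor exploration baseline: repeatedly move to the
    cheapest known unvisited vertex. incident(v) reveals v's incident
    edges (u, w, cost) only when v is first visited, as in the online
    model above."""
    visited = {start}
    frontier = [(c, u) for (_, u, c) in incident(start)]
    heapq.heapify(frontier)
    tour_cost = 0
    while frontier:
        c, u = heapq.heappop(frontier)
        if u in visited:
            continue
        visited.add(u)
        tour_cost += c  # simplification: charges the edge cost only,
                        # not the cost of walking back to the frontier
        for (_, w, cw) in incident(u):
            if w not in visited:
                heapq.heappush(frontier, (cw, w))
    return visited, tour_cost
```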
Ken-Ichi Kawarabayashi, Philip Klein and Christian Sommer. Linear-Space Approximate Distance Oracles for Planar, Bounded-Genus, and Minor-Free Graphs
Abstract: A $(1+\epsilon)$-approximate distance oracle for a graph is a data structure that supports approximate point-to-point shortest-path-distance queries. The relevant measures for a distance-oracle construction are: space, query time, and preprocessing time. There are strong distance-oracle constructions known for planar graphs (Thorup) and, subsequently, minor-excluded graphs (Abraham and Gavoille). However, these require $\Omega(\epsilon^{-1} n \lg n)$ space for $n$-node graphs. We argue that a very low space requirement is essential. Since modern computer architectures involve hierarchical memory (caches, primary memory, secondary memory), a high memory requirement in effect may greatly increase the actual running time. Moreover, we would like data structures that can be deployed on small mobile devices, such as handhelds, which have relatively small primary memory. In this paper, for planar graphs, bounded-genus graphs, and minor-excluded graphs we give distance-oracle constructions that require only $O(n)$ space. The big $O$ hides only a fixed constant, independent of $\epsilon$ and independent of genus or size of an excluded minor. Our constructions have proportionately higher query times. For planar graphs, the query time is $O(\epsilon^{-2} \lg^2 n)$. The preprocessing times for our distance oracle are also better than those for the previously known constructions. For planar graphs, the preprocessing time is $O(n \lg^2 n)$. For bounded-genus graphs, there was previously no distance-oracle construction known other than the one implied by the minor-excluded construction, for which the constant is enormous and the preprocessing time is a high-degree polynomial. In our result, the query time is $O(\epsilon^{-2}(\lg n + g)^2)$ and the preprocessing time is $O(n(\lg n)(g^3+\lg n))$. For all these linear-space results, we can in fact ensure, for any $\delta>0$, that the space required is only $1+\delta$ times the space required just to represent the graph itself.
Velumailum Mohanaraj and Martin Dyer. Pairwise-interaction Games
Abstract: We study the complexity of computing Nash equilibria in games where players arranged as the vertices of a graph play a symmetric 2-player game against their neighbours. We call this a pairwise-interaction game. We analyse this game for n players with a fixed number of actions and show that (1) a mixed Nash equilibrium can be computed in constant time for any game, (2) a pure Nash equilibrium can be computed through Nash dynamics in polynomial time for games with symmetrisable payoff matrix, (3) determining whether a pure Nash equilibrium exists for zero-sum games is NP-complete, and (4) counting pure Nash equilibria is #P-complete even for 2-strategy games. In proving (3), we define a new defective graph colouring problem called Nash colouring, which is of independent interest, and prove that its decision version is NP-complete. Finally, we show that pairwise-interaction games form a proper subclass of the usual graphical games.
Tim Nonner. Clique Clustering yields a PTAS for max-Coloring Interval Graphs
Abstract: We are given an interval graph $G = (V,E)$ where each interval $I \in V$ has a weight $w_I \in \mathbb{R}^+$. The goal is to color the intervals $V$ with color classes $C_1, C_2, \ldots, C_k$ such that $\sum_{i=1}^k \max_{I \in C_i} w_I$ is minimized. This problem, called max-coloring interval graphs, contains the classical problem of coloring interval graphs as a special case, and it arises in many practical scenarios such as memory management. Pemmaraju, Raman, and Varadarajan showed that max-coloring interval graphs is NP-hard (SODA'04) and presented a 2-approximation algorithm. Closing a gap which has been open for years, we settle the approximation complexity of this problem by giving a polynomial-time approximation scheme (PTAS), that is, we show that there is a $(1+\epsilon)$-approximation algorithm for any $\epsilon > 0$. Besides using standard preprocessing techniques such as geometric rounding and shifting, our main building block is a general technique for trading the overlap structure of an interval graph for accuracy, which we call clique clustering.
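The objective is simple to evaluate for a candidate coloring: pay the heaviest interval of each color class. A small sketch with made-up weights and classes (proper-coloring validity is assumed, not checked):

```python
def max_coloring_cost(classes, weight):
    """Objective from the problem definition: sum over color classes
    of the heaviest interval in the class."""
    return sum(max(weight[I] for I in C) for C in classes)

# Intervals a..d with weights; two alternative colorings of the same graph:
weight = {"a": 5, "b": 4, "c": 3, "d": 1}
print(max_coloring_cost([{"a", "c"}, {"b", "d"}], weight))  # 5 + 4 = 9
print(max_coloring_cost([{"a", "b"}, {"c", "d"}], weight))  # 5 + 3 = 8, cheaper
```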
Markus Lohrey and Christian Mathissen. Isomorphism of regular trees and words
Abstract: The computational complexity of the isomorphism problem for regular trees, regular linear orders, and regular words is analyzed. A tree is regular if it is isomorphic to the prefix order on a regular language. In case regular languages are represented by NFAs (DFAs), the isomorphism problem for regular trees turns out to be EXPTIME-complete (resp. P-complete). In case the input automata are acyclic NFAs (acyclic DFAs), the corresponding trees are (succinctly represented) finite trees, and the isomorphism problem turns out to be PSPACE-complete (resp. P-complete). A linear order is regular if it is isomorphic to the lexicographic order on a regular language. A polynomial time algorithm for the isomorphism problem for regular linear orders (and even regular words, which generalize the latter) given by DFAs is presented. This solves an open problem by Esik and Bloom.
Chien-Chung Huang. Collusion in Atomic Splittable Routing Games
Abstract: We investigate how collusion affects the social cost in atomic splittable routing games. Suppose that players form coalitions and each coalition behaves as if it were a single player controlling all the flows of its participants. It may be tempting to conjecture that the social cost would be lower after collusion, since there would be more coordination among the players. We construct examples to show that this conjecture is not true. Even in very simple single-source-single-destination networks, the social cost of the post-collusion equilibrium can be higher than that of the pre-collusion equilibrium. This counter-intuitive phenomenon of collusion prompts us to ask the question: \emph{under what conditions would the social cost of the post-collusion equilibrium be  bounded by the social cost of the pre-collusion equilibrium?} We show that if (i) the network is ``well-designed'' (satisfying a natural condition), and  (ii) the delay functions are affine, then collusion is always beneficial for the social cost in the Nash equilibria. On the other hand, if either of the above conditions is unsatisfied, collusion can worsen the social cost. Our main technique is a novel flow-augmenting algorithm to build Nash equilibria. Our positive result for collusion is obtained by applying this algorithm simultaneously to two different flow value profiles of players and observing the difference in the derivatives of their social costs. Moreover, for a non-trivial subclass of selfish routing games, this algorithm finds the \emph{exact} Nash equilibrium in polynomial time.
Hu Ding and Jinhui Xu. Solving the Chromatic Cone Clustering Problem via Minimum Spanning Sphere
Abstract: In this paper, we study the following {\em Chromatic Cone Clustering (CCC)} problem: Given $n$ point-sets with each containing $k$ points in the first quadrant of the $d$-dimensional space $R^{d}$, find $k$ cones apexed at the origin such that each cone contains at least one distinct point (i.e., different from other cones) from every point-set and the total size of the $k$ cones is minimized, where the size of a cone is the angle from any boundary ray to its center line. CCC is motivated by an important biological problem and finds applications in several other areas. For the CCC problem, we consider in this paper the cases that $k$ is either $2$ or some other constant number. Our approaches for solving the CCC problem rely on solutions to the problem of computing a {\em Minimum Spanning Sphere (MinSS)} for point-sets. For the MinSS problem, we present two $(1+\epsilon)$-approximation algorithms, based on core-sets and $\epsilon$-nets respectively. With these algorithms, we then show that the CCC problem admits $(1+\epsilon)$-approximation solutions for both cases. Our results use several interesting geometric techniques in high dimensional space, and are the first solutions to these problems.
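The core-set idea for $(1+\epsilon)$-approximate enclosing balls goes back to Badoiu and Clarkson: repeatedly nudge the center toward the farthest point. A sketch for a single point set (the MinSS problem above, which must span one point from each of several point-sets, generalizes this setting):

```python
import numpy as np

def approx_meb_center(points, eps=0.1):
    """Badoiu-Clarkson core-set iteration for a (1+eps)-approximate
    minimum enclosing ball of one point set: O(1/eps^2) steps, each
    moving the center a shrinking fraction toward the farthest point."""
    c = points[0].astype(float).copy()
    iters = int(np.ceil(1.0 / eps**2))
    for i in range(1, iters + 1):
        far = points[np.argmax(np.linalg.norm(points - c, axis=1))]
        c += (far - c) / (i + 1)
    return c

pts = np.random.default_rng(1).normal(size=(100, 5))
c = approx_meb_center(pts)
print(np.max(np.linalg.norm(pts - c, axis=1)))  # ~ (1+eps) * optimal radius
```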
Markus Chimani and Petr Hlineny. A Tighter Insertion-based Approximation of the Crossing Number
Abstract: Let $G$ be a planar graph and $F$ a set of additional edges not yet in $G$. The {\em multiple edge insertion} problem (MEI) asks for a drawing of $G+F$ with the minimum number of pairwise edge crossings, such that the subdrawing of $G$ is plane. As an exact solution to MEI is NP-hard for general $F$, we present the first approximation algorithm for MEI which achieves an additive approximation factor (depending only on the size of $F$ and the maximum degree of $G$). Next to the single edge insertion, our algorithm also seems to be the first directly implementable one in this realm. It is also known that an (even approximate) solution to the MEI problem would approximate the crossing number of the \emph{$F$-almost-planar graph} $G+F$, while computing the crossing number of $G+F$ exactly is NP-hard already when $|F|=1$. Hence our algorithm induces new, improved approximation bounds for the crossing number problem of $F$-almost-planar graphs, achieving constant-factor approximation for the large class of such graphs of bounded degrees and bounded size of $F$.
Martin Delacourt. Rice's theorem for mu-limit sets of cellular automata
Abstract: Cellular automata are a parallel and synchronous computing model, made of infinitely many finite automata updating according to the same local rule. Rice's theorem states that any nontrivial property over computable functions is undecidable. It has been adapted by Kari to limit sets of cellular automata, that is the set of configurations that can be reached arbitrarily late. This paper proves a new Rice theorem for $\mu$-limit sets, which are sets of configurations often reached arbitrarily late.
Hans-Joachim Boeckenhauer, Dennis Komm, Rastislav Kralovic and Richard Kralovic. On the Advice Complexity of the k-Server Problem
Abstract: Competitive analysis is the established tool for measuring the output quality of algorithms that work in an online environment. Recently, the model of advice complexity has been introduced as an alternative measurement which allows for a more fine-grained analysis of the hardness of online problems. In this model, one tries to measure the amount of information an online algorithm is lacking about the future parts of the input. This concept was investigated for a number of well-known online problems including the k-server problem. In this paper, we first extend the analysis of the k-server problem by giving both a lower bound on the advice needed to obtain an optimal solution, and upper bounds on algorithms for the general k-server problem on metric graphs and the special case of dealing with the Euclidean plane. In the general case, we improve the previously known results by an exponential factor, in the Euclidean case we design an algorithm which achieves a constant competitive ratio for a very small (i.e., constant) number of advice bits per request. Furthermore, we investigate the relation between advice complexity and randomized online computations by showing how lower bounds on the advice complexity can be used for proving lower bounds for the competitive ratio of randomized online algorithms.
Stefan Mengel. Characterizing Arithmetic Circuit Classes by Constraint Satisfaction Problems
Abstract: We explore the expressivity of constraint satisfaction problems (CSPs) in the arithmetic circuit model. While CSPs are known to yield VNP-complete polynomials in the general case, we show that for different restrictions of the structure of the CSPs we get characterizations of different arithmetic circuit classes. In particular we give the first natural non-circuit characterization of VP, the class of polynomial families efficiently computable by arithmetic circuits.
Magnús Halldórsson and Pradipta Mitra. Nearly Optimal Bounds for Distributed Wireless Scheduling in the SINR Model
Abstract: We study the wireless scheduling problem in the physically realistic SINR model. More specifically: we are given a set of $n$ links, each a sender-receiver pair. We would like to schedule the links using the minimum number of slots, given the SINR model of interference among simultaneously transmitting links. In the basic problem, all senders transmit with the same uniform power. In this work, we provide a distributed $O(\log n)$-approximation for the scheduling problem, matching the best ratio known for centralized algorithms. This is based on an algorithm of Kesselheim and V\"ocking, improving their analysis by a logarithmic factor. We show this to be best possible for any such distributed algorithm. Our analysis also extends to linear power assignments, as well as to more general assignments, modulo assumptions about message acknowledgement mechanisms.
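For concreteness, in the standard physical model a link $\ell_v = (s_v, r_v)$ transmitting with uniform power $P$ is successful in a slot if and only if (in notation that is ours, not necessarily the paper's)
\[ \frac{P / d(s_v, r_v)^\alpha}{N + \sum_{w \neq v} P / d(s_w, r_v)^\alpha} \;\ge\; \beta, \]
where the sum ranges over the other links transmitting in the same slot, $\alpha$ is the path-loss exponent, $N$ the ambient noise, and $\beta$ the SINR threshold.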
Carsten Moldenhauer. Primal-Dual Approximation Algorithms for Node-Weighted Steiner Forest on Planar Graphs
Abstract: Node-Weighted Steiner Forest is the following problem: Given an undirected graph, a set of pairs of terminal vertices, and a weight function on the vertices, find a minimum weight set of vertices that includes and connects each pair of terminals. We consider the restriction to planar graphs, where the problem remains NP-complete. Demaine et al. [DHK09] showed that the generic primal-dual algorithm of Goemans and Williamson [GW97] is a 6-approximation on planar graphs. We (1) present a different analysis proving an approximation factor of 3, (2) show that this bound is tight for the generic algorithm, and (3) show how the algorithm can be improved to yield a 9/4-approximation. We give a simple proof for the first result using contraction techniques and present an example for the lower bound. Then, we establish a connection to the feedback problems studied by Goemans and Williamson [GW98]. We show how our constructions can be combined with their proof techniques, yielding the third result and an alternative, more involved, way of deriving the first result. The third result induces an upper bound on the integrality gap of 9/4. Our analysis implies that improving this bound for Node-Weighted Steiner Forest via the primal-dual algorithm is essentially as difficult as improving the integrality gap for the feedback problems in [GW98].
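For orientation, the primal-dual method operates on the standard covering relaxation of the problem; a sketch in our notation, assuming as usual that terminals themselves need not be paid for:
\[ \min \sum_{v \in V} w(v)\, x_v \quad \text{s.t.} \quad \sum_{v \in \Gamma(S)} x_v \ge 1 \ \text{ for every } S \subseteq V \text{ separating some terminal pair}, \qquad x \ge 0, \]
where $\Gamma(S)$ denotes the set of vertices outside $S$ with a neighbor in $S$. The dual moat-growing view of this LP is what the approximation factors above are measured against.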
David Hopkins, Andrzej Murawski and Luke Ong. A Fragment of ML Decidable by Visibly Pushdown Automata
Abstract: The simply-typed, call-by-value language, RML, may be viewed as a canonical restriction of Standard ML to ground-type references, augmented by a "bad variable" construct in the sense of Reynolds. By a short type, we mean a type of order at most 2 and arity at most 1. We consider the fragment of (finitary) RML, RML_OStr, consisting of terms-in-context such that (i) the term has a short type, and (ii) every argument type of the type of each free variable is short. RML_OStr is surprisingly expressive; it includes several instances of (in)equivalence in the literature that are challenging to prove using methods based on (state-based) logical relations. We show that it is decidable whether a given pair of RML_OStr terms-in-context is observationally equivalent. Using the fully abstract game semantics of RML, our algorithm reduces the problem to the language equivalence of visibly pushdown automata. When restricted to terms in canonical form, the problem is EXPTIME-complete.
Chandra Chekuri and Alina Ene. Submodular Cost Allocation Problem and Applications
Abstract: We study the Submodular Cost Allocation problem (MCSA). In this problem we are given a finite ground set $V$ and $k$ non-negative submodular set functions $f_1,\ldots,f_k$ on $V$. The objective is to partition $V$ into $k$ (possibly empty) sets $C_1, \cdots, C_k$ such that the sum $\sum_{i = 1}^k f_i(C_i)$ is minimized. Several well-studied problems such as the non-metric facility location problem, multiway-cut in graphs and hypergraphs, and uniform metric-labeling and its generalizations can be shown to be special cases of MCSA. In this paper we consider a convex-programming relaxation obtained via the Lov\'{a}sz-extension for submodular functions. This allows us to understand several previous relaxations and rounding procedures in a unified fashion and also develop new formulations and approximation algorithms for several problems. In particular, we give a $1.5$-approximation for the hypergraph multiway partition problem. We also give a $\min\{2(1-1/k), H_\Delta\}$-approximation for the hypergraph multiway cut problem when $\Delta$ is the maximum hyperedge degree. Both problems generalize the multiway cut problem in graphs, and the hypergraph cut problem is approximation equivalent to the node-weighted multiway cut problem in graphs.
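For reference, the Lov\'{a}sz extension of a submodular $f$ and the resulting relaxation (standard definitions, sketched in our notation) are
\[ \hat{f}(x) = \int_0^1 f(\{v \in V : x_v \ge \theta\})\, d\theta, \qquad \min \Big\{ \sum_{i=1}^k \hat{f}_i(x^i) \;:\; \sum_{i=1}^k x^i_v = 1 \ \forall v \in V,\ x \ge 0 \Big\}. \]
Rounding then amounts to a thresholding scheme that assigns each element $v$ to one of the $k$ parts based on the fractional vectors $x^i$.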
Kook Jin Ahn and Sudipto Guha. Linear Programming in the Semi-streaming Model with Application to the Maximum Matching Problem
Abstract: In this paper, we study linear programming based approaches to the maximum matching problem in the semi-streaming model. The semi-streaming model has gained attention as a model for processing massive graphs as the importance of such graphs has increased. This is a model where edges arrive in a stream in an adversarial order and we are allowed space proportional to the number of vertices in the graph. In recent years, there have been several new results in this semi-streaming model. However, broad techniques such as linear programming have not been adapted to it. We present several techniques to adapt and optimize linear programming based approaches in the semi-streaming model, with an application to the maximum matching problem. As a consequence, we (almost) improve all previous results on this problem, and also prove new results on interesting variants.
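The baseline object here is the standard fractional matching LP (a sketch in our notation; the paper's formulations may differ in details):
\[ \max \sum_{e \in E} x_e \quad \text{s.t.} \quad \sum_{e \ni v} x_e \le 1 \ \ \forall v \in V, \qquad x_e \ge 0, \]
the challenge being to maintain an approximate solution while the edges stream by within the space budget proportional to the number of vertices.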
Yuval Filmus, Toniann Pitassi and Rahul Santhanam. Exponential Lower Bounds for AC0-Frege Imply Superpolynomial Frege Lower Bounds
Abstract: We give a general transformation which turns polynomial-size Frege proofs into subexponential-size AC0-Frege proofs. This indicates that proving exponential lower bounds for AC0-Frege is hard, since it is a longstanding open problem to prove super-polynomial lower bounds for Frege. Our construction is optimal for tree-like proofs. As a consequence of our main result, we are able to shed some light on the question of weak automatizability for bounded-depth Frege systems. First, we present a simpler proof of the results of Bonet et al. showing that under cryptographic assumptions, bounded-depth Frege proofs are not weakly automatizable. Second, we show that because our proof is more general, under the right cryptographic assumptions it could resolve the weak automatizability question for lower-depth Frege systems.
Roberto Cominetti, Jose Correa and Omar Larre. Existence and uniqueness of equilibria  for flows over time
Abstract: Network flows that vary over time arise naturally when modeling rapidly evolving systems such as the Internet. In this paper, we continue the study of equilibria for flows over time in the deterministic queuing model proposed by Koch and Skutella. We give a constructive proof for the existence and uniqueness of equilibria for the case of a piecewise constant inflow rate, through a detailed analysis of the static flows obtained as derivatives of a dynamic equilibrium. We also provide a nonconstructive existence proof of equilibria when the inflow rate is a general function.
Maurice Jansen and Rahul Santhanam. Permanent Does Not Have Succinct Polynomial Size Arithmetic Circuits of Constant Depth
Abstract: We show that over fields of characteristic zero there does not exist a polynomial $p(n)$ and a constant-free succinct arithmetic circuit family $\{\Phi_n\}$, where $\Phi_n$ has size at most $p(n)$ and depth $O(1)$, such that $\Phi_n$ computes the $n\times n$ permanent. A circuit family $\{\Phi_n\}$ is succinct if there exists a {\em nonuniform} Boolean circuit family $\{C_n\}$ with $O(\log n)$ many inputs and size $n^{o(1)}$ such that $C_n$ can correctly answer direct connection language queries about $\Phi_n$. Succinctness is a relaxation of uniformity. To obtain this result we develop a novel technique that further strengthens the connection between black-box derandomization of polynomial identity testing and lower bounds for arithmetic circuits. From this we obtain the lower bound by explicitly constructing a hitting set against arithmetic circuits in the polynomial hierarchy. To the best of our knowledge, this is the first lower bound proof which takes advantage of (partial) uniformity without using diagonalization.
Vassilis Zikas and Martin Hirt. Player-Centric Byzantine Agreement
Abstract: Byzantine Agreement (BA) is one of the most important primitives in the area of secure distributed computation. It allows a set of players to agree on a value, even when some of the players are faulty. BA comes in two flavors: {\em Consensus} and {\em Broadcast}. Both primitives require that all non-faulty players agree on an output-value $y$ (consistency). However, in Consensus every player has an input, and it is required that if all players pre-agree on some $x$ then $y=x$, whereas in Broadcast only one player, the {\em sender}, has input and it is required that $y$ equals the sender's input, unless he is faulty (validity). Most of the existing feasibility results for BA are of an {\em all-or-nothing} nature: in Broadcast they address the question whether or not there exists a protocol which allows {\em any} player to broadcast his input. Similarly, in Consensus the question is whether or not consensus can be reached which respects pre-agreement on the inputs of {\em all} correct players. In this work, we take a different approach motivated by the above observation, namely that Consensus and Broadcast are only two extreme sub-cases of a more general consistency primitive. In particular, we are interested in the question whether or not the validity condition of BA primitives can be satisfied {\em with respect to a specific subset of the complete player set \PS}. In this direction, we introduce the natural notion of {\em player-centric BA}, which is a class of BA primitives, denoted as $\pcba=\{\pba{\CC}\}_{\CC\subseteq\PS}$, parametrized by subsets $\CC$ of the player set. For each primitive $\pba{\CC}\in\pcba$ the validity is defined on the input(s) of the players in $\CC$. Note that Broadcast (with sender $p$) and Consensus are special cases of \pcba primitives for $\CC=\{p\}$ and $\CC=\PS$, respectively. We study feasibility of $\pcba$ for arbitrary sets $\CC\subseteq\PS$ in a setting where a general (aka non-threshold) mixed (active/passive) adversary is considered. Such an adversary is characterized by a so-called adversary structure, which is an enumeration of admissible adversary classes specifying the sets of players that can be actively and passively corrupted. We give a complete characterization of adversaries tolerable for $\pcba$ for all three security levels, i.e., perfect, statistical, and computational security. Note that with the exception of perfect security, exact bounds for this model are not even known for the traditional notions of BA. Our results expose an asymmetry of Broadcast which has, so far, been neglected in the literature: there exist non-trivial adversaries which can be tolerated for Broadcast with some sender $p_i\in\PS$ but {\em not} for every $p_j\in\PS$ being the sender. Finally, we show how to extend the definition of \pcba to a setting where the adversary can actively, passively, and fail-corrupt players simultaneously. For this setting, we give exact feasibility bounds for computationally secure \pba{\PS} (aka Consensus), assuming a public-key infrastructure (PKI). This last result answers an open problem from ASIACRYPT~2008 concerning the feasibility of computationally secure multi-party computation in this model.
Konstantin Makarychev and Maxim Sviridenko. Maximizing a Polynomial Subject to Assignment Constraints
Abstract: We study the q-adic assignment problem. We first give an $O(n^{(q-1)/2})$-approximation algorithm for the Koopmans-Beckmann version of the problem, improving upon the result of Barvinok. Then, we introduce a new family of instances satisfying "tensor triangle inequalities" and give a constant factor approximation algorithm for them. We show that many classical optimization problems can be modeled by q-adic assignment problems from this family. Finally, we give several integrality gap examples for the natural LP relaxations of the problem.
Danny Hermelin, Matthias Mnich, Erik Jan Van Leeuwen and Gerhard J. Woeginger. Domination when the Stars Are Out
Abstract: We algorithmize the recent structural characterization for claw-free graphs by Chudnovsky and Seymour. Building on this result, we show that Dominating Set on claw-free graphs is (i) fixed-parameter tractable and (ii) even possesses a polynomial kernel. To complement these results, we establish that Dominating Set is not fixed-parameter tractable on the slightly larger class of graphs that exclude $K_{1,4}$ as an induced subgraph. Our results provide a dichotomy for Dominating Set in $K_{1,\ell}$-free graphs and show that the problem is fixed-parameter tractable if and only if $\ell \le 3$. Finally, we show that our algorithmization can also be used to show that the related Connected Dominating Set problem is fixed-parameter tractable on claw-free graphs.
Ittai Abraham, Daniel Delling, Amos Fiat, Andrew Goldberg and Renato Werneck. VC-Dimension and Shortest Path Algorithms
Abstract: We explore the relationship between learning theory and graph algorithm design. In particular, we show that set systems induced by sets of vertices on shortest paths have VC-dimension at most two. This allows us to use a result from learning theory to improve time bounds on query algorithms for the point-to-point shortest path problem in networks of low highway dimension. We also refine the definitions of highway dimension and related concepts, making them more general and potentially more relevant to practice. In particular, we define highway dimension in terms of set systems induced by shortest paths, and give cardinality-based and average case definitions.
Isolde Adler, Stavros Kolliopoulos, Philipp Klaus Krause, Daniel Lokshtanov, Saket Saurabh and Dimitrios Thilikos. Tight Bounds for Linkages in Planar Graphs
Abstract: The {\sc Disjoint-Paths Problem} asks, given a graph $G$ and a set of pairs of terminals $(s_{1},t_{1}),\ldots,(s_{k},t_{k})$, whether there is a collection of $k$ vertex-disjoint paths linking $s_{i}$ and $t_{i}$, for $i=1,\ldots,k$. In their $f(k)\cdot n^{3}$ algorithm for this problem, Robertson and Seymour introduced the {\sl irrelevant vertex technique}, according to which in every instance of treewidth greater than $g(k)$ there is an ``irrelevant'' vertex whose removal creates an equivalent instance of the problem. This fact is based on the celebrated {\sl Unique Linkage Theorem}, whose -- very technical -- proof gives a function $g(k)$ that is responsible for an immense parametric dependence in the running time of the algorithm. In this paper we prove this result for planar graphs, achieving $g(k)=2^{O(k)}$. Our bound is radically better than the bounds known for general graphs. Moreover, our proof is new and self-contained, and it strongly exploits the combinatorial properties of planar graphs. We also prove that our result is optimal, in the sense that the bound on $g(k)$ cannot become better than exponential. Our results suggest that any algorithm for the {\sc Disjoint-Paths Problem} that runs in time better than $2^{2^{o(k)}}\cdot n^{O(1)}$ would probably require drastically different ideas from those in the irrelevant vertex technique.
Olivier Carton, Thomas Colcombet and Gabriele Puppis. Regular Languages of Words Over Countable Linear Orderings
Abstract: We develop an algebraic model suitable for recognizing languages of words indexed by countable linear orderings. We prove that this notion of recognizability is effectively equivalent to definability in monadic second-order (MSO) logic. This reproves in particular the decidability of MSO logic over the rationals with order. Our proof also implies the first known collapse result for MSO logic over countable linear orderings.
Vince Barany, Balder Ten Cate and Luc Segoufin. Guarded negation
Abstract: We consider restrictions of first-order logic and of fixpoint logic in which all occurrences of negation are required to be guarded by an atomic predicate. In terms of expressive power, the logics in question, called GNFO and GNFP, extend the guarded fragment of first-order logic and guarded least fixpoint logic, respectively. They also extend the recently introduced unary negation fragments of first-order logic and of least fixpoint logic. We show that the satisfiability problem for GNFO and for GNFP is 2ExpTime-complete, both on arbitrary structures and on finite structures. We also study the complexity of the associated model checking problems. Finally, we show that GNFO and GNFP are not only computationally well behaved but also model-theoretically well behaved: we show that GNFO and GNFP have the tree-like model property and that GNFO has the finite model property, and we characterize the expressive power of GNFO in terms of invariance for an appropriate notion of bisimulation.
Heng Guo, Pinyan Lu and Leslie Valiant. The Complexity of Symmetric Boolean Parity Holant Problems
Abstract: For certain subclasses of NP, $\oplus$P or \#P characterized by local constraints, it is known that if there exist any problems that are not polynomial time computable within that subclass, then those problems are NP-, $\oplus$P- or \#P-complete. Such dichotomy results have been proved for characterizations such as Constraint Satisfaction Problems, and directed and undirected Graph Homomorphism Problems, often with additional restrictions. Here we give a dichotomy result for the more expressive framework of Holant Problems. These additionally allow for the expression of matching problems, which have had pivotal roles in complexity theory. As our main result we prove the dichotomy theorem that, for the class $\oplus$P, every set of Boolean symmetric Holant signatures of any arities that is not polynomial time computable is $\oplus$P-complete. The result exploits some special properties of the class $\oplus$P and characterizes four distinct tractable subclasses within $\oplus$P. It leaves open the corresponding questions for NP, $\#$P and $\#_k$P for $k\neq 2$.
Sanjeev Arora and Rong Ge. New Algorithms for Learning in Presence of Errors
Abstract: We give new algorithms for a variety of randomly-generated instances of computational problems using a linearization technique that reduces them to solving a system of linear equations. These algorithms are derived in the context of learning with structured noise, a notion introduced in this paper. This notion is best illustrated with the learning parities with noise (LPN) problem, well studied in learning theory and cryptography. In the standard version, we have access to an oracle that, each time we press a button, returns a random vector $ \vec{a} \in \GF(2)^n$ together with a bit $b \in \GF(2)$ that was computed as $\vec{a}\cdot \vec{u} +\eta$, where $\vec{u}\in \GF(2)^n$ is a {\em secret} vector, and $\eta \in \GF(2)$ is a noise bit that is $1$ with some probability $p$. Say $p=1/3$. The goal is to recover $\vec{u}$. This task is conjectured to be intractable. In the structured noise setting we introduce a slight (?) variation of the model: upon pressing a button, we receive (say) $10$ random vectors $\vec{a_1}, \vec{a_2}, \ldots, \vec{a_{10}} \in \GF(2)^n$, and corresponding bits $b_1, b_2, \ldots, b_{10}$, of which at most $3$ are noisy. The oracle may arbitrarily decide which of the $10$ bits to make noisy. We exhibit a polynomial-time algorithm to recover the secret vector $\vec{u}$ given such an oracle. We think this structured noise model may be of independent interest in machine learning. We discuss generalizations of our result, including learning with more general noise patterns. We also give the first nontrivial algorithms for two problems, which we show fit in our structured noise framework. We give a slightly subexponential algorithm for the well-known learning with errors (LWE) problem over $\GF(q)$ introduced by Regev for cryptographic uses. Our algorithm works for the case when the Gaussian noise is small, which was an open problem. Our result also clarifies why existing hardness results fail at this particular noise rate. We also give polynomial-time algorithms for learning the MAJORITY OF PARITIES function of Applebaum et al. for certain parameter values. This function is a special case of Goldreich's pseudorandom generator.
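As a minimal illustration of the structured-noise oracle described above (a simulation sketch; the function name and the random, rather than adversarial, choice of noisy positions are ours):

    import numpy as np

    rng = np.random.default_rng()

    def structured_lpn_oracle(u, m=10, t=3):
        """One button press: m random vectors a_i in GF(2)^n and bits
        b_i = <a_i, u> + eta_i, with at most t noisy bits. u is a 0/1
        numpy vector. A real adversary may pick the noisy positions
        arbitrarily; here they are random for illustration."""
        n = len(u)
        A = rng.integers(0, 2, size=(m, n))
        b = (A @ u) % 2
        noisy = rng.choice(m, size=t, replace=False)  # positions to corrupt
        b[noisy] ^= 1
        return A, b

The paper's contribution is a polynomial-time algorithm that recovers $\vec{u}$ from such samples via linearization.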
Arash Farzan and Shahin Kamali. Compact Navigation and Distance Oracles for Graphs with Small Treewidth
Abstract: Given an unlabeled, unweighted, and undirected graph with $n$ vertices and small (but not necessarily constant) treewidth $k$, we consider the problem of preprocessing the graph to build space-efficient encodings (oracles) that perform various queries efficiently. We assume the word RAM model, where the size of a word is $\Omega(\log n)$ bits. The first oracle we present is a navigation oracle which facilitates the primitive navigation operations of adjacency, neighborhood, and degree queries. By way of an enumeration argument, which is of independent interest, we show that the space requirement of the oracle is optimal to within lower-order terms for all treewidths. The oracle supports all the mentioned queries in constant worst-case time. The second oracle we present is an exact distance oracle which facilitates distance queries between any pair of vertices (i.e., an all-pairs shortest-path oracle). The space requirement of this oracle is also optimal to within lower-order terms. Moreover, distance queries are performed in $O(k^2\log^3 k)$ time. In particular, for the class of graphs of primary interest, graphs of bounded treewidth (where $k$ is constant), distances are reported in constant worst-case time.
Ashwinkumar Badanidiyuru Varadaraja. Buyback Problem - Approximate matroid intersection with  cancellation costs
Abstract: In the buyback problem, an algorithm observes a sequence of bids and must decide whether to accept each bid at the moment it arrives, subject to some constraints on the set of accepted bids. Decisions to reject bids are irrevocable, whereas decisions to accept bids may be canceled at a cost that is a fixed fraction of the bid value. Prior to our work, deterministic and randomized algorithms were known when the constraint is a matroid constraint. We extend this and give a deterministic algorithm for the case when the constraint is an intersection of k matroid constraints. We further prove a matching lower bound on the competitive ratio for this problem and extend our results to arbitrary downward closed set systems. This problem has applications to banner advertisement, semi-streaming, routing, load balancing and other problems where preemption or cancellation of previous allocations is allowed.
Anand S, Naveen Garg and Nicole Megow. Meeting deadlines: How much speed suffices?
Abstract: We consider the online problem of scheduling real-time jobs with hard deadlines on~$m$ parallel machines. Each job has a processing time and a deadline, and the objective is to schedule jobs so that they complete before the deadline. It is known that even when the instance is feasible it may not be possible to meet all deadlines when jobs arrive online over time. We therefore consider the setting when the algorithm has available machines with speed~$s>1$. We present a new online algorithm that finds a feasible schedule on machines of speed~$e/(e-1)$ for any instance that is feasible on unit speed machines. This improves on the previously best known result, which requires a speed of~$2-2/(m+1)$. Our algorithm only uses the relative order of job deadlines and is oblivious of the actual deadline values. It was shown earlier that the minimum speed required for such algorithms is~$e/(e-1)$, and thus, our analysis is tight. We also show that our new algorithm outperforms two other well-known algorithms by giving the first lower bounds on their minimum speed requirement.
Matthew Anderson, Dieter Van Melkebeek, Nicole Schweikardt and Luc Segoufin. Locality of queries definable in invariant first-order logic  with arbitrary built-in predicates
Abstract: We consider first-order formulas over relational structures which, in addition to the symbols in the relational schema, may use a binary relation interpreted as a linear order over the elements of the structures, and arbitrary numerical predicates (including addition and multiplication) interpreted over that linear order. We require that for each fixed interpretation of the initial relational schema, the validity of the formula is independent of the particular interpretation of the linear order and its associated numerical predicates. We refer to such formulas as Arb-invariant first-order. Our main result shows a Gaifman locality theorem: two tuples of a structure with n elements, having the same neighborhood up to distance (\log n)^{\omega(1)}, cannot be distinguished by Arb-invariant first-order formulas. When restricting attention to word structures, we can achieve the same quantitative strength for Hanf locality: Arb-invariant first-order formulas cannot distinguish between two words having the same neighborhoods up to distance (\log n)^{\omega(1)}. In both cases we show that our bounds are tight. Our proof exploits the close connection between Arb-invariant first-order formulas and the complexity class AC^0, and hinges on the tight lower bounds for parity on constant-depth circuits.
Daniel Reichman and Uriel Feige. Recoverable Values for Independent Sets
Abstract: The notion of {\em recoverable value} was advocated in work of Feige, Immorlica, Mirrokni and Nazerzadeh [Approx 2009] as a measure of quality for approximation algorithms. There this concept was applied to facility location problems. In the current work we apply a similar framework to the maximum independent set problem (MIS). We say that an approximation algorithm has {\em recoverable value} $\rho$, if for every graph it recovers an independent set of size at least $\max_I \sum_{v\in I} \min[1,\rho/(d(v) + 1)]$, where $d(v)$ is the degree of vertex $v$, and $I$ ranges over all independent sets in $G$. Hence, in a sense, from every vertex $v$ in the maximum independent set the algorithm recovers a value of at least $\rho/(d(v) + 1)$ towards the solution. This quality measure is most effective in graphs in which the maximum independent set is composed of low degree vertices. It easily follows from known results that some simple algorithms for MIS ensure $\rho \ge 1$. We design a new randomized algorithm for MIS that ensures an expected recoverable value of at least $\rho \ge 15/7$. In addition, we show that approximating MIS in graphs with a given $k$-coloring within a ratio larger than $2/k$ is unique games hard. This rules out a natural approach for obtaining $\rho \ge 2$.
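As a point of reference, the classical random-permutation greedy (not the paper's improved algorithm) already picks each vertex $v$ with probability at least $1/(d(v)+1)$, and hence ensures $\rho \ge 1$; a sketch:

    import random

    def random_greedy_mis(adj):
        """adj maps each vertex to the set of its neighbors. Process
        the vertices in random order, taking a vertex whenever none
        of its neighbors has been taken before. If v precedes all of
        N(v) in the order, an event of probability 1/(d(v)+1), then
        v is certainly taken."""
        order = list(adj)
        random.shuffle(order)
        independent = set()
        for v in order:
            if adj[v].isdisjoint(independent):
                independent.add(v)
        return independent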
Danny Hermelin, Avivit Levy, Oren Weimann and Raphael Yuster. Distance Oracles for Vertex-Labeled Graphs
Abstract: Given a graph G = (V,E) with non-negative edge-lengths whose vertices are assigned a label from L = {\lambda_1,...,\lambda_\ell}, we construct a compact distance oracle that answers queries of the form: "What is d(v,\lambda)?", where v is a vertex in the graph, \lambda a vertex-label, and d(v,\lambda) is the distance (length of the shortest path) between v and the closest vertex labeled \lambda in G. We give the first formalization of this natural problem and provide a hierarchy of approximate distance oracles that require subquadratic space and return a distance of constant stretch. We also extend our solution to dynamic oracles that can handle label changes in sublinear time.
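As a naive baseline clarifying the query semantics (our sketch, not the paper's data structure), d(v,\lambda) for all labels can be computed with one Dijkstra run from v, which is exactly the per-query cost such oracles are designed to avoid:

    import heapq

    def label_distances(graph, labels, v):
        """graph[u] is a list of (neighbor, edge_length) pairs and
        labels[u] is the label of u. Returns, for every label reachable
        from v, the distance to the closest vertex carrying it."""
        dist = {v: 0}
        heap = [(0, v)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for w, length in graph[u]:
                if d + length < dist.get(w, float("inf")):
                    dist[w] = d + length
                    heapq.heappush(heap, (d + length, w))
        best = {}
        for u, d in dist.items():
            best[labels[u]] = min(d, best.get(labels[u], float("inf")))
        return best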
Piotr Berman, Arnab Bhattacharyya, Elena Grigorescu, Sofya Raskhodnikova, David Woodruff and Grigory Yaroslavtsev. Linear Programming and Combinatorial Bounds on Steiner Transitive-Closure Spanners
Abstract: Given a directed graph G = (V,E) and an integer k >= 1, a k-transitive-closure-spanner (k-TC-spanner) of G is a directed graph H = (V, E_H) that has (1) the same transitive closure as G and (2) diameter at most k.  In some applications, the shortcut paths added to the graph in order to obtain small diameter can use Steiner vertices, that is, vertices not in the original graph G. The resulting spanner is called a Steiner transitive-closure spanner (Steiner TC-spanner). Motivated by applications to property reconstruction and access control hierarchies, we concentrate on Steiner TC-spanners of directed acyclic graphs or, equivalently, partially ordered sets. In these applications, the goal is to find a sparsest Steiner k-TC-spanner of a poset G for a given k and G. The focus of this paper is the relationship between the dimension of a poset and the size of its sparsest Steiner TC-spanner. The dimension of a poset G is the smallest d such that G can be embedded into a d-dimensional directed hypergrid via an order-preserving embedding. We present a nearly tight lower bound on the size of Steiner 2-TC-spanners of d-dimensional directed hypergrids. It implies better lower bounds on the complexity of local reconstructors of monotone functions and functions with low Lipschitz constant. The proof of the lower bound constructs a dual solution to a linear programming relaxation of the Steiner 2-TC-spanner problem. We also show that one can efficiently construct a Steiner 2-TC-spanner, of size matching the lower bound, for any low-dimensional poset. Finally, we present a lower bound on the size of Steiner k-TC-spanners of d-dimensional posets that shows that the best-known construction, due to De Santis et al., cannot be improved significantly.
Laurent Bulteau, Guillaume Fertin and Irena Rusu. Sorting by Transpositions is Difficult
Abstract: In comparative genomics, a transposition is an operation that exchanges two consecutive sequences of genes in a genome. The transposition distance, that is, the minimum number of transpositions needed to transform a genome into another, is, according to numerous studies, a relevant evolutionary distance. The problem of computing this distance when genomes are represented by permutations, called the Sorting by Transpositions problem (SBT), was introduced by Bafna and Pevzner in 1995. It has naturally been the focus of a number of studies, but the computational complexity of this problem has remained undetermined for 15 years. In this paper, we answer this long-standing open question by proving that the Sorting by Transpositions problem is NP-hard. As a corollary of our result, we also prove that the following problem is NP-hard: given a permutation \pi, is it possible to sort \pi using d_b(\pi)/3 transpositions, where d_b(\pi) is the number of breakpoints of \pi?
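To fix terminology (an illustrative sketch in our notation): a transposition exchanges two consecutive blocks of a permutation, and since one transposition can remove at most three breakpoints, d_b(\pi)/3 is the natural lower bound appearing above.

    def transpose(pi, i, j, k):
        """Exchange the consecutive blocks pi[i:j] and pi[j:k],
        for 0 <= i < j < k <= len(pi)."""
        return pi[:i] + pi[j:k] + pi[i:j] + pi[k:]

    def breakpoints(pi):
        """d_b(pi): adjacent pairs (a, b) with b != a + 1, in the
        extension of pi by 0 in front and n+1 at the end."""
        ext = [0] + list(pi) + [len(pi) + 1]
        return sum(1 for a, b in zip(ext, ext[1:]) if b - a != 1)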
Shin-Ya Katsumata. Relating Computational Effects by TT-Lifting
Abstract: We consider the problem of establishing a relationship between two monadic semantics of the lambda_c-calculus extended with algebraic operations. We show that two monadic semantics of any program are related if the relation includes value relations and is closed under the algebraic operations in the lambda_c-calculus.
Jim Laird, Giulio Manzonetto and Guy Mccusker. Constructing differential categories and deconstructing categories of games
Abstract: We present an abstract construction for building differential categories useful to model resource-sensitive calculi, and we apply it to categories of games. In one instance, we recover a category previously used to give a fully abstract model of a nondeterministic imperative language. The construction exposes the differential structure already present in this model. A second instance corresponds to a new Cartesian differential category of games. We give a model of Resource PCF in this category and show that it enjoys the finite definability property. Comparison with a relational semantics reveals that the latter also possesses this property and is fully abstract.
Radu Mardare, Luca Cardelli and Kim Larsen. Modular Markovian Logic
Abstract: We introduce Modular Markovian Logic (MML) for compositional continuous-time and continuous-space Markov processes. MML combines operators specific to stochastic logics with operators that reflect the modular structure of the semantics, similar to those used by spatial and separation logics. We present a complete Hilbert-style axiomatization for MML, prove the small model property and analyze the relation between the stochastic bisimulation and the logical equivalence relation induced by MML on models.
Moran Feldman, Seffi Naor and Roy Schwartz. Nonmonotone Submodular Maximization via a Structural Continuous Greedy Algorithm
Abstract: Consider a situation in which one has a suboptimal solution $S$ to a maximization problem which only constitutes a weak approximation to the problem. Suppose that even though the value of $S$ is small compared to an optimal solution $OPT$ to the problem, $S$ happens to be structurally similar to $OPT$. A natural question to ask in this scenario is whether there is a way of improving the value of $S$ based solely on this information. In this paper we introduce the \emph{Structural Continuous Greedy Algorithm} which answers this question affirmatively in the context of the \textsc{Nonmonotone Submodular Maximization Problem}. Using this algorithm we are able to improve on the best approximation factor known for this problem. In the \textsc{Nonmonotone Submodular Maximization Problem} we are given a non-negative submodular function $f$, and the objective is to find a subset maximizing $f$. This is considered one of the basic submodular optimization problems, generalizing many well known problems such as the Max Cut problem in undirected graphs. The current best approximation factor for this problem is $0.41$, given by Gharan and Vondr\'{a}k. On the other hand, Feige et al. showed that no algorithm can give a $(0.5 + \epsilon)$-approximation for it. Our method yields an improved $0.42$-approximation for the problem.
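For a concrete instance of the problem: the cut function of an undirected graph $G=(V,E)$ with nonnegative weights $w$,
\[ f(S) = \sum_{e \in \delta(S)} w_e, \]
is submodular, nonmonotone, and has $f(\emptyset) = f(V) = 0$, so maximizing it is exactly the Max Cut special case mentioned above.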
Robert Elsaesser and Tobias Tscheuschner. Settling the complexity of local max-cut (almost) completely
Abstract: We consider the problem of finding a local optimum for the max-cut problem with FLIP-neighborhood, in which exactly one node may change the partition. Schäffer and Yannakakis (SICOMP, 1991) showed PLS-completeness of this problem on graphs with unbounded degree. On the other side, Poljak (SICOMP, 1995) showed that in cubic graphs every FLIP local search takes O(n^2) steps, where n is the size of the graph. Due to the huge gap between degree three and unbounded degree, Ackermann, Röglin, and Vöcking (JACM, 2008) asked for the smallest d such that on graphs with maximum degree d the local max-cut problem with FLIP-neighborhood is PLS-complete. In this paper, we prove that the computation of a local optimum on graphs with maximum degree five is PLS-complete. Thus, we solve the problem posed by Ackermann et al. almost completely, by showing that d is either 4 or 5 (unless PLS is in P). In addition, we prove that on graphs with degree O(log n) every FLIP local search has, with high probability, polynomial smoothed complexity. Roughly speaking, for any instance in which the edge weights are perturbed by (Gaussian) random noise with variance σ^2, every FLIP local search terminates in time polynomial in n and 1/σ, with probability 1−n^{−Ω(1)}. Putting both results together, we may conclude that although local max-cut is likely to be hard on graphs with bounded degree, it can be solved in polynomial time for slightly perturbed instances with high probability.
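For concreteness, FLIP local search is the following simple procedure (a sketch in our notation; the point of the hardness result is that it can need exponentially many improving flips on high-degree graphs):

    def flip_local_search(n, edges, side):
        """edges is a list of (u, v, weight) triples; side[v] in {0, 1}
        gives the initial partition. Repeatedly move a single vertex
        across the cut while doing so increases the cut weight."""
        def gain(v):
            # Flipping v turns its uncut incident edges into cut ones
            # and vice versa.
            g = 0
            for u, w, wt in edges:
                if v in (u, w):
                    other = w if u == v else u
                    g += wt if side[other] == side[v] else -wt
            return g
        improved = True
        while improved:
            improved = False
            for v in range(n):
                if gain(v) > 0:
                    side[v] = 1 - side[v]
                    improved = True
        return side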
Georg Zetzsche. On the capabilities of grammars, automata, and transducers controlled by monoids
Abstract: During the last decades, classical models in language theory have been extended by control mechanisms defined by monoids. We study which monoids cause the extensions of context-free grammars, finite automata, or finite state transducers to exceed the capacity of the original model. Furthermore, we investigate when, in the extended automata model, the nondeterministic variant differs from the deterministic one in capacity. We show that all these conditions are in fact equivalent and present an algebraic characterization. In particular, this answers in the affirmative the open question of whether every language generated by a valence grammar over a finite monoid is context-free.
Nathalie Bertrand, Patricia Bouyer, Thomas Brihaye and Amélie Stainer. Emptiness and Universality Problems in Timed Automata with Positive Frequency
Abstract: The languages of infinite timed words accepted by timed automata are traditionally defined using Büchi-like conditions. These acceptance conditions focus on the set of locations visited infinitely often along a run, but completely ignore quantitative timing aspects. In this paper we propose a natural quantitative semantics for timed automata based on the so-called frequency, which measures the proportion of time spent in the accepting states. We study various properties of timed languages accepted with positive frequency, and in particular the emptiness and universality problems.
Magnus Bordewich and Ross J. Kang. Rapid mixing of subset Glauber dynamics on graphs of bounded tree-width
Abstract: Motivated by the `subgraphs world' view of the ferromagnetic Ising model, we develop a general approach to studying mixing times of Glauber dynamics based on subset expansion expressions for a class of graph polynomials.  With a canonical paths argument, we demonstrate that the chains defined within this framework mix rapidly upon graphs of bounded tree-width. This extends known results on rapid mixing for the Tutte polynomial, the adjacency-rank ($R_2$-)polynomial and the interlace polynomial.
Lei Huang and Toniann Pitassi. Automatizability and Simple Stochastic Games
Abstract: The complexity of simple stochastic games (SSGs) has been open since they were defined by Condon in 1992. Such a game is played by two players, Min and Max, on a graph consisting of max nodes, min nodes, and average nodes. The goal of Max is to reach the 1-sink, while the goal of Min is to avoid the 1-sink. On a max (min) node, Max (Min) chooses the outgoing edge, while on an average node each outgoing edge is taken with equal probability. The complexity problem is to determine, given a graph, whether or not Max has a strategy that is guaranteed to reach the 1-sink with probability at least 1/2. Despite intensive effort, the complexity of this problem is still unresolved. In this paper, we establish a new connection between the complexity of SSGs and the complexity of an important problem in proof complexity: the proof search problem for low-depth Frege systems. We prove that if depth-3 Frege systems are weakly automatizable, then SSGs are solvable in polynomial time. Moreover, we identify a natural combinatorial principle, which is a version of the well-known Graph Ordering Principle (GOP), that we call the integer-valued GOP (IGOP). This principle states that for any graph $G$ with nonnegative integer weights associated with each node, there exists a locally maximal vertex (a vertex whose weight is at least as large as that of each of its neighbors). We prove that if depth-2 Frege plus IGOP is weakly automatizable, then SSG is in P.
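To make the game concrete, here is the textbook value-iteration sketch (ours, for illustration): it converges to the optimal reaching probabilities, but it is not known to decide "value at least 1/2" in polynomial time, which is exactly the open problem.

    def ssg_values(succ, kind, rounds):
        """succ[v] lists the successors of node v; kind[v] is one of
        'max', 'min', 'avg', 'sink0', 'sink1'. Returns approximate
        probabilities of reaching the 1-sink under optimal play."""
        val = {v: 1.0 if kind[v] == 'sink1' else 0.0 for v in succ}
        for _ in range(rounds):
            for v in succ:
                if kind[v] == 'max':
                    val[v] = max(val[w] for w in succ[v])
                elif kind[v] == 'min':
                    val[v] = min(val[w] for w in succ[v])
                elif kind[v] == 'avg':
                    val[v] = sum(val[w] for w in succ[v]) / len(succ[v])
        return val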
Holger Hermanns, David N. Jansen, Flemming Nielson and Lijun Zhang. Automata-based CSL model checking
Abstract: For continuous-time Markov chains, the model-checking problem with respect to continuous-time stochastic logic (CSL) has been introduced and shown to be decidable by Aziz, Sanwal, Singhal and Brayton in 1996. The presented decision procedure, however, has exponential complexity. In this paper, we propose an effective approximation algorithm for full CSL. The key to our method is the notion of stratified CTMCs with respect to the CSL property to be checked. We present a measure-preservation theorem allowing us to reduce the problem to a transient analysis on stratified CTMCs. The corresponding probability can then be approximated in polynomial time (using uniformization). This makes the present work the centerpiece of a broadly applicable full CSL model checker. Recently, the decision algorithm by Aziz et al. was shown to work only for stratified CTMCs. As an additional contribution, our measure-preservation theorem can be used to ensure the decidability for general CTMCs.
Stefan Canzar, Khaled Elbassioni, Gunnar W. Klau and Julián Mestre. On Tree-Constrained Matchings and Generalizations
Abstract: We consider the following \textsc{Tree-Constrained Bipartite Matching} problem: Given two rooted trees $T_1=(V_1,E_1)$, $T_2=(V_2,E_2)$ and a weight function $w: V_1\times V_2 \mapsto \mathbb{R}_+$, find a maximum weight matching $\mathcal{M}$ between nodes of the two trees, such that none of the matched nodes is an ancestor of another matched node in either of the trees. This generalization of the classical bipartite matching problem appears, for example, in the computational analysis of live cell video data. We show that the problem is $\mathcal{APX}$-hard and thus, unless $\mathcal{P} = \mathcal{NP}$, disprove a previous claim that it is solvable in polynomial time. Furthermore, we give a $2$-approximation algorithm based on a combination of the local ratio technique and a careful use of the structure of basic feasible solutions of a natural LP-relaxation, which we also show to have an integrality gap of $2-o(1)$. In the second part of the paper, we consider a natural generalization of the problem, where trees are replaced by partially ordered sets (posets). We show that the local ratio technique gives a $2k\rho$-approximation for the $k$-dimensional matching generalization of the problem, in which the maximum number of incomparable elements below (or above) any given element in each poset is bounded by $\rho$. We finally give an almost matching integrality gap example, and an inapproximability result showing that the dependence on $\rho$ is most likely unavoidable.
Johannes Dams, Martin Hoefer and Thomas Kesselheim. Convergence Time of Power Control Dynamics
Abstract: We study two (classes of) distributed algorithms for power control in a general model of wireless networks. There are n wireless communication requests or links that experience interference and noise. To be successful a link must satisfy an SINR constraint. The goal is to find a set of powers such that all links are successful simultaneously. A classic algorithm for this problem is the fixed-point iteration due to Foschini and Miljanic [1992], for which we prove the first bounds on worst-case running times: after roughly O(n log n) rounds all SINR constraints are nearly satisfied. When we try to satisfy each constraint exactly, however, convergence time is infinite. For this case, we design a novel framework for power control using regret learning algorithms and iterative discretization. While the exact convergence time depends on a variety of parameters, we show that roughly a polynomial number of rounds suffices to make every link successful during at least a constant fraction of all previous rounds.
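A minimal sketch of the Foschini-Miljanic fixed-point iteration analyzed in the first part (notation ours): in each round, every link sets the power that would exactly meet its SINR target given the previous round's interference.

    import numpy as np

    def foschini_miljanic(G, noise, beta, rounds):
        """G[v][w] is the channel gain from the sender of link w to the
        receiver of link v; noise[v] is the ambient noise at receiver v;
        beta is the common SINR target."""
        p = np.ones(len(noise))
        for _ in range(rounds):
            # Power received at each link's receiver from all other links.
            interference = G @ p - np.diag(G) * p
            p = beta * (noise + interference) / np.diag(G)
        return p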
Michael Benedikt, Gabriele Puppis and Cristian Riveros. The cost of traveling between languages
Abstract: We show how to calculate the number of edits per character needed to convert a string in one regular language to a string in another language. Our algorithm makes use of a local determinization procedure  applicable to a subclass of distance automata. We then show how to  calculate the same property when the editing needs to be done in streaming fashion, by a finite state transducer, using a reduction to mean-payoff games. Finally, we determine when the optimal editor can be approximated by a streaming editor -- the construction here makes use of a combination of games and distance automata.
Alexey Gotsman and Hongseok Yang. Liveness-preserving atomicity abstraction
Abstract: Modern concurrent algorithms are usually encapsulated in libraries, and complex algorithms are often constructed using libraries of simpler ones. We present the first theorem that allows harnessing this structure to give compositional liveness proofs to concurrent algorithms and their clients. We show that, while proving a liveness property of a client using a concurrent library, we can soundly replace the library by another one related to the original library by a generalisation of a well-known notion of linearizability. We apply this result to show formally that lock-freedom, an often-used liveness property of non-blocking algorithms, is compositional for linearizable libraries, and provide an example illustrating our proof technique.
Olaf Beyersdorff, Nicola Galesi, Massimo Lauria and Alexander Razborov. Parameterized Bounded-Depth Frege is Not Optimal
Abstract: A general framework for parameterized proof complexity was introduced by Dantchev, Martin, and Szeider [DMS07]. In that framework the parameterized version of any proof system is not fpt-bounded for some technical reasons, but we remark that this question becomes much more interesting if we restrict ourselves to those parameterized contradictions $(F,k)$ in which $F$ itself is a contradiction. We call such parameterized contradictions {\em strong}, and with one important exception (vertex cover) all interesting contradictions we are aware of are strong. It follows from the gap complexity theorem of [DMS07] that tree-like Parameterized Resolution is not fpt-bounded w.r.t. strong parameterized contradictions. The main result of this paper significantly improves upon this by showing that even the parameterized version of bounded-depth Frege is not fpt-bounded w.r.t. strong contradictions. More precisely, we prove that the pigeonhole principle requires proofs of size $n^{\Omega(k)}$ in bounded-depth Frege, and, as a special case, in dag-like Parameterized Resolution. This answers an open question posed in [DMS07]. In the opposite direction, we interpret a well-known FPT algorithm for vertex cover as a DPLL procedure for Parameterized Resolution. Its generalization leads to a proof search algorithm for Parameterized Resolution that in particular shows that tree-like Parameterized Resolution allows short refutations of all parameterized contradictions given as bounded-width CNFs.
Stephane Durocher, Meng He, Ian Munro, Patrick Nicholson and Matthew Skala. Range Majority in Constant Time and Linear Space
Abstract: Given an array $A$ of size $n$, we consider the problem of answering range majority queries: given a query range $[i..j]$ where $1 \le i \le j \le n$, return the majority element of the subarray $A[i..j]$ if it exists.  We describe a linear space data structure that answers range majority queries in constant time.  We further generalize this problem by defining range $\alpha$-majority queries: given a query range $[i..j]$, return all the elements in the subarray $A[i..j]$ with frequency greater than $\alpha(j-i+1)$.  We prove an upper bound on the number of $\alpha$-majorities that can exist in a subarray, assuming that query ranges are restricted to be larger than a given threshold.  Using this upper bound, we generalize our range majority data structure to answer range $\alpha$-majority queries in $O(\frac{1}{\alpha})$ time using $O(n \lg(1/\alpha + 1))$ space, for any fixed $\alpha \in (0, 1)$.  This result is interesting since other similar range query problems based on frequency have nearly logarithmic lower bounds on query time when restricted to linear space.
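The query semantics, spelled out by a naive linear-time scan (our sketch; the paper's structure answers the same query in $O(1/\alpha)$ time):

    from collections import Counter

    def range_alpha_majorities(A, i, j, alpha):
        """Return all elements with frequency greater than
        alpha * (j - i + 1) in the subarray A[i..j]
        (0-based indices, inclusive)."""
        window = A[i:j + 1]
        return [x for x, c in Counter(window).items()
                if c > alpha * len(window)]

A range majority query is the special case alpha = 1/2, returning at most one element.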
Thomas Brihaye, Laurent Doyen, Gilles Geeraerts, Joël Ouaknine, Jean-François Raskin and James Worrell. On Reachability for Hybrid Automata over Bounded Time
Abstract: This paper investigates the time-bounded version of the reachability problem for hybrid automata. This problem asks whether a given hybrid automaton can reach a given target location within B time units, where B is a given bound. We prove that, unlike the classical reachability problem, the time-bounded version is decidable on rectangular hybrid automata if only nonnegative rates are allowed. This class is of practical interest as it subsumes, for example, the class of stopwatch automata. We also show that the problem becomes undecidable if either diagonal constraints or both negative and positive rates are allowed.
Endre Boros, Khaled Elbassioni, Mahmoud Fouz, Vladimir Gurvich, Kazuhisa Makino and Bodo Manthey. Stochastic Mean Payoff Games: Smoothed Analysis and Approximation Schemes
Abstract: We consider two-player zero-sum stochastic mean payoff games with perfect information modeled by a digraph with black, white, and random vertices. These BWR-games are polynomially equivalent to the classical Gillette games, which include many well-known subclasses, such as cyclic games, simple stochastic games, stochastic parity games, and Markov decision processes. They can also be used to model parlor games such as Chess or Backgammon. It is a long-standing open question whether a polynomial algorithm exists that solves BWR-games. In fact, a pseudo-polynomial algorithm for these games with an arbitrary number of random nodes would already imply their polynomial solvability. Currently, only two classes are known to have such a pseudo-polynomial algorithm: BW-games (the case with no random nodes) and ergodic BWR-games (in which the game's value does not depend on the initial position) with a constant number of random nodes. In this paper, we show that the existence of a pseudo-polynomial algorithm for BWR-games with a constant number of random vertices implies smoothed polynomial complexity and the existence of absolute and relative polynomial-time approximation schemes. In particular, we obtain smoothed polynomial complexity and derive absolute and relative approximation schemes for BW-games and ergodic BWR-games (assuming a technical requirement about the probabilities at the random nodes).
Scott Aaronson and Andrew Drucker. Advice Coins for Classical and Quantum Computation
Abstract: We study the power of classical and quantum algorithms equipped with nonuniform advice, in the form of a coin whose bias encodes useful information. This question takes on particular importance in the quantum case, due to a surprising result that we prove: a quantum finite automaton with just two states can be sensitive to arbitrarily small changes in a coin's bias.  This contrasts with classical probabilistic finite automata, whose sensitivity to changes in a coin's bias is bounded by a classic 1970 result of Hellman and Cover. Despite this finding, we are able to bound the power of advice coins for space-bounded classical and quantum computation.  We define the classes BPPSPACE/coin and BQPSPACE/coin, of languages decidable by classical and quantum polynomial-space machines with advice coins.  Our main theorem is that both classes coincide with PSPACE/poly. Proving this result turns out to require substantial machinery.  We use an algorithm due to Neff for finding roots of polynomials in NC; a result from algebraic geometry that lower-bounds the separation of a polynomial's roots; and a result on fixed-points of superoperators due to Aaronson and Watrous, originally proved in the context of quantum computing with closed timelike curves.
Sze-Hang Chan, Tak-Wah Lam, Lap-Kei Lee, Chi-Man Liu and Hing-Fung Ting. Sleep Management on Multiple Machines for Energy and Flow Time
Abstract: In large data centers, determining the right number of working machines is often non-trivial, especially when the workload is unpredictable. Using too many machines would waste energy, while using too few would affect the performance. Motivated by this concern, we extend the traditional study of online flow-time scheduling on multiple machines to take sleep management and energy into consideration. Specifically, we study online algorithms that can determine dynamically when and which subset of machines should wake up (or sleep), how jobs are dispatched among the machines, and how each machine schedules its jobs. To understand the tradeoff between flow time and energy, we consider schedules whose objective is to minimize the sum of flow time and energy. We study this new scheduling problem in two settings: the first one assumes machines running at a fixed speed, and the second one assumes the dynamic speed scaling model (i.e., each machine can scale its speed dynamically to further optimize its energy usage). The latter implies that the online scheduler also has to adjust the speed of each machine. For each speed model, we give an $O(1)$-competitive algorithm. As in previous work on flow time and energy, the analysis of our algorithms is based on potential functions. What is new is that the sleep management problem allows the online and offline algorithms to use different subsets of machines at different times, and this pushes us to derive a more general potential analysis that can consider different match-ups of machines between the two algorithms.
Sylvain Schmitz and Philippe Schnoebelen. Multiply-Recursive Upper Bounds with Higman’s Lemma
Abstract: We develop a new analysis for the length of controlled bad sequences in well-quasi-orderings based on Higman's Lemma. This leads to tight multiply-recursive upper bounds that readily apply to several verification algorithms for well-structured systems.
Subhash Khot and Per Austrin. A Simple Deterministic Reduction for the Gap Minimum Distance of Code Problem
Abstract: We present a simple deterministic gap-preserving reduction from SAT to the Minimum Distance of Code Problem over GF(2). We also show how to extend the reduction to work over any finite field. Previously a randomized reduction was known due to Dumer, Micciancio, and Sudan [8], which was recently derandomized by Cheng and Wan [6, 7]. These reductions rely on highly non-trivial coding theoretic constructions whereas our reduction is elementary. As an additional feature, our reduction gives a constant factor hardness even for asymptotically good codes, i.e., having constant rate and relative distance. Previously it was not known how to achieve deterministic reductions for such codes.
Lance Fortnow and Rahul Santhanam. Robust Simulations and Significant Separations
Abstract: We define and study a new notion of ``robust simulations'' between complexity classes which is intermediate between the traditional notions of infinitely-often and almost-everywhere, as well as a corresponding notion of ``significant separations''. A language L has a robust simulation in a complexity class C if there is a language in C which agrees with L on arbitrarily large polynomial stretches of input lengths. There is a significant separation of L from C if there is no robust simulation of L in C. The new notion of simulation is a cleaner and more natural notion of simulation than the infinitely-often notion. We show that various implications in complexity theory such as the collapse of PH if NP = P and the Karp-Lipton theorem have analogues for robust simulations. We then use these results to prove that most known separations in complexity theory, such as hierarchy theorems, fixed polynomial circuit lower bounds, time-space tradeoffs, and the recent theorem of Williams, can be strengthened to significant separations, though in each case, an almost everywhere separation is unknown. Proving our results requires several new ideas, including a completely different proof of the hierarchy theorem for non-deterministic polynomial time than the ones previously known.
Ryan O'Donnell, John Wright and Yuan Zhou. The Fourier Entropy-Influence Conjecture for certain classes of Boolean functions
Abstract: In 1996, Friedgut and Kalai made the "Fourier Entropy-Influence Conjecture": For every Boolean function f : {-1,1}^n -> {-1,1} it holds that H[\hat{f}^2] <= C I[f], where H[\hat{f}^2] is the spectral entropy of f, I[f] is the total influence of f, and C is a universal constant. In this work we verify the conjecture for symmetric functions. More generally, we verify it for functions with symmetry group $S_{n_1} \times \cdots \times S_{n_d}$ where $d$ is constant. We also verify the conjecture for functions computable by read-once decision trees.
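In the standard notation (Parseval gives $\sum_S \hat{f}(S)^2 = 1$, so $\hat{f}^2$ is a probability distribution over Fourier coefficients), the two quantities in the conjecture are
\[ H[\hat{f}^2] = \sum_{S \subseteq [n]} \hat{f}(S)^2 \log_2 \frac{1}{\hat{f}(S)^2}, \qquad I[f] = \sum_{S \subseteq [n]} |S|\, \hat{f}(S)^2. \]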
Frederic Magniez, Ashwin Nayak, Miklos Santha and David Xiao. Improved bounds for the randomized decision tree complexity of recursive majority
Abstract: We consider the randomized decision tree complexity of the recursive 3-majority function. First, for evaluating a height-$h$ formula, we prove a lower bound of $(1-2\delta)(5/2)^h$ on the $\delta$-two-sided-error randomized decision tree complexity, improving the $(1-2\delta)(7/3)^h$ lower bound of Jayram et al. (STOC '03); we also state a conjecture which would further improve the lower bound to $(1-2\delta) \cdot 2.54355^h$. Second, we improve the upper bound by giving a new zero-error randomized decision tree algorithm of complexity at most $(1.007) \cdot 2.64946^h$, improving on the previous best known algorithm, which achieved $(1.004) \cdot 2.65622^h$. Our lower bound follows from a better analysis of the base case of the recursion of Jayram et al. Our algorithm uses a novel ``interleaving'' of two recursive algorithms.
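The classic randomized upper-bound idea is easy to state: recursively evaluate two random subtrees and read the third only if the first two disagree. The sketch below implements that baseline evaluator (our toy code, with a query counter); the paper's improved algorithm interleaves two recursive procedures and is substantially more involved.

```python
# Baseline randomized evaluator for recursive 3-majority of height h:
# evaluate two random subtrees first; query the third only on disagreement.
import random

def rec_maj(leaves, h, counter):
    if h == 0:
        counter[0] += 1            # one leaf query
        return leaves[0]
    third = len(leaves) // 3
    subs = [leaves[i * third:(i + 1) * third] for i in range(3)]
    i, j, k = random.sample(range(3), 3)
    a = rec_maj(subs[i], h - 1, counter)
    b = rec_maj(subs[j], h - 1, counter)
    return a if a == b else rec_maj(subs[k], h - 1, counter)

h = 4
leaves = [random.randint(0, 1) for _ in range(3 ** h)]
counter = [0]
print(rec_maj(leaves, h, counter),
      "leaf queries:", counter[0])  # at most 3^h = 81, typically fewer
```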
Lorenzo Clemente. Buechi Automata can have Smaller Quotients
Abstract: We study novel simulation-like preorders for quotienting nondeterministic Buechi automata. We define fixed-word delayed simulation, a new preorder coarser than delayed simulation. We argue that fixed-word simulation is the coarsest forward simulation-like preorder that can be used for quotienting Buechi automata, thus improving our understanding of the limits of quotienting. We also show that computing fixed-word simulation is PSPACE-complete. On the practical side, we introduce proxy simulations, novel polynomial-time computable preorders that are sound for quotienting. In particular, delayed proxy simulation induces quotients that can be smaller, by an arbitrarily large factor, than those induced by direct backward simulation. We obtain proxy simulations from a theory of refinement transformers: a refinement transformer maps preorders non-decreasingly, preserving certain properties. We study under which general conditions refinement transformers are sound for quotienting.
Matthew Hennessy and Yuxin Deng. On the Semantics of Markov Automata
Abstract: Markov automata describe systems in terms of events which may be nondeterministic, may occur probabilistically, or may be subject to time delays. We define a novel notion of weak bisimulation for such systems and prove that it provides a sound and complete proof methodology for a natural extensional behavioural equivalence between such systems, a generalisation of reduction barbed congruence, the well-known touchstone equivalence for a large variety of process description languages.
André Chailloux, Iordanis Kerenidis and Bill Rosgen. Quantum Commitments from Complexity Assumptions
Abstract: Bit commitment schemes are at the basis of modern cryptography. Since information-theoretic security is impossible in both the classical and the quantum regime, we need to look at computationally secure commitment schemes. In this paper, we study worst-case complexity assumptions that imply quantum bit-commitment schemes. First, we show that QSZK \not\subseteq QMA implies a computationally hiding and statistically binding auxiliary-input quantum commitment scheme. We then extend our result to show that the much weaker assumption QIP \not\subseteq QMA (which is weaker than PSPACE \not\subseteq PP) implies the existence of auxiliary-input commitment schemes with quantum advice. Finally, to strengthen the plausibility of the first assumption, we find a quantum oracle relative to which honest-verifier QSZK is not contained in QCMA, the class of languages that can be verified using a classical proof in quantum polynomial time.
Sebastian Faust, Krzysztof Pietrzak and Daniele Venturi. Tamper-Proof Circuits: How to Trade Leakage for Tamper-Resilience
Abstract: Tampering attacks are cryptanalytic attacks on the implementation of cryptographic algorithms (e.g., smart cards), where an adversary introduces faults with the hope that the tampered device will reveal secret information. Inspired by the work of Ishai et al. [Eurocrypt'06], we propose a compiler that transforms any circuit into a new circuit with the same functionality, but which is resilient against a well-defined and powerful tampering adversary. More concretely, our transformed circuits remain secure even if the adversary can adaptively tamper with every wire in the circuit, as long as the tampering fails with some probability $\delta>0$. This additional requirement is motivated by practical tampering attacks, where it is often difficult to guarantee the success of a specific attack. Formally, we show that a $q$-query tampering attack against the transformed circuit can be ``simulated'' with only black-box access to the original circuit and $\log(q)$ bits of additional auxiliary information. Thus, if the implemented cryptographic scheme is secure against $\log(q)$ bits of leakage, then our implementation is tamper-proof in the above sense. Surprisingly, allowing for this small amount of information leakage --- and not insisting on perfect simulability as in the work of Ishai et al. --- allows for much more efficient compilers, which moreover do not require randomness during evaluation. Like earlier work, our compiler requires small, stateless and computation-independent tamper-proof gadgets. Thus, our result can be interpreted as reducing the problem of shielding arbitrarily complex computation to protecting simple components.
Sylvain Salvati and Igor Walukiewicz. Krivine machines and higher-order schemes
Abstract: We propose a new approach to analysing higher-order recursive schemes. Many results in the literature use automata models generalising pushdown automata, most notably higher-order pushdown automata with collapse (CPDA). Instead, we propose to use the Krivine machine model. Compared to CPDA, this model is closer to the lambda-calculus and nicely incorporates many invariants of computations, such as typing information. The usefulness of the proposed approach is demonstrated with new proofs of two central results in the field: the decidability of the local and global model checking problems for higher-order schemes with respect to the mu-calculus.
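For readers unfamiliar with the machine model, here is a minimal Krivine machine for the pure lambda-calculus with de Bruijn indices, reducing a term to weak head normal form. This bare core is our own illustration; the machines used for higher-order schemes enrich it with types and recursion.

```python
# A minimal Krivine machine: states are (term, environment, stack), where
# the environment and the stack hold closures (term, environment).
from dataclasses import dataclass

@dataclass
class Var: idx: int                 # de Bruijn index
@dataclass
class Lam: body: object
@dataclass
class App: fun: object; arg: object

def krivine(term):
    """Run the Krivine machine to weak head normal form."""
    env, stack = [], []
    while True:
        if isinstance(term, App):            # push the argument closure
            stack.append((term.arg, env))
            term = term.fun
        elif isinstance(term, Lam) and stack:  # pop a closure and bind it
            env = [stack.pop()] + env
            term = term.body
        elif isinstance(term, Var):          # enter the bound closure
            term, env = env[term.idx]
        else:
            return term, env                 # weak head normal form reached

# (\x. x) (\y. y)  reduces to  \y. y
identity = Lam(Var(0))
print(krivine(App(identity, identity)))
```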
Benjamin Doerr and Mahmoud Fouz. Asymptotically Optimal Randomized Rumor Spreading
Abstract: We propose a new protocol solving the fundamental problem of disseminating a piece of information to all members of a group of $n$ players. It builds upon the classical randomized rumor spreading protocol and several extensions. The main achievements are the following: our protocol spreads a rumor from one node to all other nodes in the asymptotically optimal time of $(1 + o(1)) \log_2 n$. The whole process can be implemented so that only $O(n f(n))$ calls are made, where $f(n)= \omega(1)$ can be arbitrary. In spite of these quantities being close to the theoretical optima, the protocol remains relatively robust against failures. We show that for \emph{random} node failures, our algorithm again comes arbitrarily close to the theoretical optima. The protocol can be extended to also deal with \emph{adversarial} node failures. The price for this additional robustness is only a constant-factor increase in the run-time, where the constant factor depends on the fraction of failing nodes the protocol is supposed to cope with; it can easily be implemented so that only $O(n)$ calls to properly working nodes are made. In contrast to the push-pull protocol of Karp et al. [FOCS 2000], our algorithm uses only push operations, i.e., only informed nodes take active actions in the network. To the best of our knowledge, this is the first randomized push algorithm that achieves an asymptotically optimal running time.
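For comparison, the classical push protocol referred to above takes only a few lines to simulate: in each round, every informed node calls one node chosen uniformly at random. The toy simulation below (parameters ours) exhibits the roughly $\log_2 n + \ln n$ round count that the new protocol improves to $(1+o(1))\log_2 n$.

```python
# Simulate classical push rumor spreading: every informed node calls one
# uniformly random node per round, until all n nodes are informed.
import random
from math import log, log2

def push_rounds(n):
    informed = {0}                 # node 0 starts with the rumor
    rounds = 0
    while len(informed) < n:
        informed |= {random.randrange(n) for _ in range(len(informed))}
        rounds += 1
    return rounds

n = 100_000
print(push_rounds(n), "vs log2(n) + ln(n) ~", round(log2(n) + log(n), 1))
```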
Eric Allender, Luke Friedman and William Gasarch. Limits on the Computational Power of Random Strings
Abstract: R, the set of Kolmogorov-random strings, is a central notion in the study of algorithmic information theory, and in recent years R has increasingly been studied in relation to computational complexity theory. This work takes as its starting point three strange inclusions that have been proved since 2002: 1. NEXP is contained in the class of problems NP-Turing-reducible to R. 2. PSPACE is contained in the class of problems poly-time Turing-reducible to R. 3. BPP is contained in the class of problems poly-time truth-table-reducible to R. (These inclusions hold for both of the most widely-studied variants of Kolmogorov complexity: the plain complexity C(x) and the prefix complexity K(x). They also hold no matter which "universal" Turing machine is used in the definitions of the functions C and K.) These inclusions are "strange" since R is not even computable! Thus it is not at all clear that they are meaningful upper bounds on the complexity of BPP, PSPACE, and NEXP, nor is it clear that efficient reductions to noncomputable sets such as R are interesting to consider at all. Our main theorems are that, if we restrict attention to prefix complexity K and the corresponding set of random strings R_K, then the class of decidable problems that are in NP relative to R_K (no matter which universal machine is used to define K) lies in EXPSPACE, and the class of decidable problems that are poly-time truth-table reducible to R_K (no matter which universal machine is used to define K) lies in PSPACE. Thus we can "sandwich" PSPACE between the classes of problems truth-table- and Turing-reducible to R_K, while the class of decidable problems that are in NP relative to R_K lies between NEXP and EXPSPACE. The corresponding questions for plain Kolmogorov complexity C are wide open; no upper bounds are known at all for the class of decidable problems efficiently reducible to R_C. These results also provide the first quantitative limits on the applicability of uniform derandomization techniques.
Andreas Cord-Landwehr, Bastian Degener, Barbara Kempkes, Friedhelm Meyer Auf Der Heide, Sven Kurras, Matthias Fischer, Martina Hüllmann, Alexander Klaas, Peter Kling, Marcus Märtens, Kamil Swierkot, Christoph Raupach, Daniel Warner, Christoph Weddemann and Daniel Wonisch. A new Approach for Analyzing Convergence Algorithms for Mobile Robots
Abstract: Given a set of $n$ mobile robots in the $d$-dimensional Euclidean space, the goal is to let them converge to a single point that is not predefined. The challenge is that the robots are limited in their capabilities. We consider the well-studied family of robotic capabilities where a robot can, upon activation, compute the positions of all other robots using an individual affine coordinate system. The robots are indistinguishable, oblivious and may have different affine coordinate systems, which may even change over time. A very general discrete time model assumes that robots are activated in arbitrary order. Further, the computation of a new target point may happen much earlier than the movement, so that the movement is based on outdated information about the other robots' positions. Time is measured as the number of rounds, where a round ends as soon as each robot has moved at least once. In [CP05], the Center of Gravity is considered as target function. This target function is invariant under the choice of the affine coordinate system. Convergence is proven, and the number of rounds needed for halving the diameter of the convex hull of the robots' positions is shown to be O(n^2) and Omega(n). In this paper, we present an easy-to-check property of target functions that guarantees convergence and yields upper time bounds. This property intuitively says that when a robot computes a new target point, this point lies significantly within the current axis-aligned minimal box containing all robots. (Note that this box is defined relative to a fixed coordinate system and cannot be computed by the robots.) This property holds, for example, for the above-mentioned target function, and improves the above O(n^2) bound to an asymptotically optimal O(n) upper bound. Our technique also yields a constant time bound for a target function that requires that all robots have the same coordinate system.
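A quick toy simulation conveys the center-of-gravity target function: one robot at a time is activated and jumps to the center of gravity of all current positions, and the spread of the configuration shrinks. This sketch (global coordinates, no outdated information) deliberately ignores the model features the paper's analysis handles.

```python
# Robots repeatedly move to the center of gravity of all positions;
# the bounding box of the configuration contracts toward a single point.
import random

def activate(positions, i):
    n, d = len(positions), len(positions[0])
    positions[i] = [sum(p[k] for p in positions) / n for k in range(d)]

def box_diameter(positions):
    d = len(positions[0])
    return max(max(p[k] for p in positions) - min(p[k] for p in positions)
               for k in range(d))

robots = [[random.uniform(0, 1) for _ in range(2)] for _ in range(20)]
for _ in range(500):
    activate(robots, random.randrange(len(robots)))
print(box_diameter(robots))   # close to 0: the robots have nearly converged
```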
Ahmed Bouajjani, Roland Meyer and Eike Moehlmann. Deciding robustness against total store ordering
Abstract: Sequential consistency (sc) is the classical model for shared memory concurrent programs. It corresponds to the interleaving semantics where the order of actions issued by a component is preserved. For performance reasons, modern processors adopt relaxed memory models that may execute actions out of order. We address the issue of deciding robustness of a program against the popular total store ordering (tso) memory model, i.e., we check whether the behaviour under tso coincides with the expected sc semantics. We prove that this problem is decidable and only PSPACE-complete. Surprisingly, we detect violations of robustness on pairs of sc computations, without consulting the tso model.
Konstantinos Kollias and Tim Roughgarden. Restoring Pure Equilibria to Weighted Congestion Games
Abstract: Congestion games model several interesting applications, including routing and network formation games, and also possess attractive theoretical properties, including the existence of pure Nash equilibria and the convergence of natural dynamics to them. Weighted variants of congestion games that share costs proportionally to players' weights do not generally have pure-strategy Nash equilibria. We propose a new way of assigning costs to players with weights in congestion games that recovers the important properties of the unweighted model. This method is derived from the Shapley value, and it always induces a game with a (weighted) potential function. For the special cases of weighted network cost-sharing and atomic selfish routing games (with Shapley value-based cost shares), we prove tight bounds on the price of stability and price of anarchy, respectively.
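The Shapley-value cost-sharing rule on a single resource can be illustrated directly: a player's share is its expected marginal cost increase over a uniformly random arrival order, so the shares always sum to the full cost. The quadratic cost function below is our toy choice, not one from the paper.

```python
# Shapley cost shares for weighted players on one resource with cost c(load).
# Enumerates all arrival orders, so exponential; fine for toy sizes.
from itertools import permutations

def shapley_shares(weights, cost):
    n = len(weights)
    shares = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        load = 0.0
        for i in order:                 # player i's marginal cost on arrival
            shares[i] += cost(load + weights[i]) - cost(load)
            load += weights[i]
    return [s / len(perms) for s in shares]

shares = shapley_shares([1.0, 2.0], cost=lambda x: x * x)
print(shares, sum(shares))   # [3.0, 6.0], summing to c(3) = 9
```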
Diana Fischer and Lukasz Kaiser. Model Checking the Quantitative mu-Calculus on Linear Hybrid Systems
Abstract: In this work, we consider the model-checking problem for a quantitative extension of the modal mu-calculus on a class of hybrid systems. Qualitative model checking has been proved decidable and implemented for several classes of systems, but this is not the case for quantitative questions, which arise naturally in this context. Recently, quantitative formalisms that subsume classical temporal logics and additionally allow one to measure interesting quantitative phenomena were introduced. We show how a powerful quantitative logic, the quantitative mu-calculus, can be model-checked with arbitrary precision on initialised linear hybrid systems. To this end, we develop new techniques for the discretisation of continuous state spaces based on a special class of strategies in model-checking games, and we show decidability of a class of counter-reset games that may be of independent interest.
Konstantinos Tsakalidis and Gerth Stølting Brodal. Dynamic Planar Range Maxima Queries
Abstract: We consider the dynamic two-dimensional maxima query problem. Let $P$ be a set of $n$ two-dimensional points. A point is \emph{maximal} if it is not dominated by any other point in $P$. We describe two data structures that support the reporting of the $t$ maximal points that dominate a given query point, and allow for insertions and deletions of points in $P$. In the pointer machine model we present a linear space data structure with $O(\log n + t)$ worst case query time and $O(\log n)$ worst case update time. This is the first dynamic data structure for the planar maxima dominance query problem that achieves these bounds in the worst case. The update time is optimal for the achieved query time. The data structure also supports the more general query of reporting the maximal points among the points that lie in a given 3-sided orthogonal range unbounded from above. Queries take $O(\log n + t)$ worst case time, where $t$ is the number of reported points. We can support 4-sided queries in $O(\log^2 n + t)$ worst case time, and $O(\log^2 n)$ worst case update time, using $O(n\log n)$ space. This allows us to improve the worst case deletion time of the dynamic rectangular visibility query problem from $O(\log^3 n)$ to $O(\log^2 n)$. Our data structure can be adapted to the RAM model with word size $w$, where the coordinates of the points are integers in the range $U = \{0, \dots, 2^w - 1\}$. We present a linear space data structure for 3-sided range maxima queries with $O(\frac{\log n}{\log \log n} + t)$ worst case query time and $O(\frac{\log n}{\log \log n})$ worst case update time. This is the first data structure that achieves sublogarithmic worst case complexity for all operations in the RAM model. The query time is optimal for the achieved update time.
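The query itself is easy to state statically: since domination is transitive, the maximal points of $P$ that dominate a query point $q$ are exactly the maxima of the subset of $P$ dominating $q$. The plane sweep below is a reference implementation of the query semantics only; the paper's contribution is supporting it dynamically within the stated worst-case bounds.

```python
# Report the maximal points of P that dominate q: sweep the dominating
# points by decreasing x and keep those exceeding every y seen so far.
def maxima_dominating(P, q):
    cand = sorted((p for p in P if p[0] >= q[0] and p[1] >= q[1]),
                  key=lambda p: (-p[0], -p[1]))
    result, best_y = [], float('-inf')
    for x, y in cand:
        if y > best_y:               # not dominated by any point to the right
            result.append((x, y))
            best_y = y
    return result

P = [(1, 5), (2, 4), (3, 6), (4, 2), (5, 3), (6, 1)]
print(maxima_dominating(P, (0, 0)))  # [(6, 1), (5, 3), (3, 6)]: the staircase
```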
Tomas Brazdil, Vaclav Brozek, Kousha Etessami and Antonin Kucera. Approximating the Termination Value of One-counter MDPs and Stochastic Games
Abstract: One-counter MDPs (OC-MDPs) and one-counter simple stochastic games (OC-SSGs) are 1-player and 2-player turn-based zero-sum stochastic games, respectively, played on the transition graph of classic one-counter automata (equivalently, pushdown automata with a 1-letter stack alphabet). A key objective for the analysis and verification of these games is the {\em termination} objective, where the players aim to maximize (respectively, minimize) the probability of hitting counter value $0$, starting at a given control state and a given counter value. Recently \cite{BBEKW10,BBE10}, we studied {\em qualitative} decision problems (``is the optimal termination value $= 1$?'') for OC-MDPs (and OC-SSGs) and showed them to be decidable in P-time (in NP$\cap$coNP, respectively). However, {\em quantitative} decision and approximation problems (``is the optimal termination value $\geq p$?'', or ``approximate the termination value within $\epsilon$'') are far more challenging. This is so in part because optimal strategies may not exist, and even when they do exist they can have a highly non-trivial structure. It thus remained open even whether any of these quantitative termination problems are computable. In this paper we show that all quantitative {\em approximation} problems for the termination value of OC-MDPs and OC-SSGs are computable. Specifically, given an OC-SSG and $\epsilon > 0$, we can compute a value $v$ that approximates the value of the OC-SSG termination game within additive error $\epsilon$, and furthermore we can compute $\epsilon$-optimal strategies for both players in the game. A key ingredient in our proofs is a subtle martingale, derived from solving certain LPs that we can associate with a maximizing OC-MDP. An application of Azuma's inequality to these martingales yields a computable bound for the ``wealth'' at which a ``rich person's strategy'' becomes $\epsilon$-optimal for OC-MDPs.
Anna Adamaszek, Artur Czumaj, Andrzej Lingas and Jakub Onufry Wojtaszczyk. Approximation schemes for capacitated geometric network design
Abstract: We study a capacitated network design problem in a geometric setting. We assume that the input consists of an integral link capacity k and two sets of points in the plane, sources and sinks, each source/sink having an associated integral demand (the amount of flow to be shipped from or to it). The capacitated geometric network design problem is to construct a minimum-length network N that allows the requested flow to be routed from the sources to the sinks, such that each link in N has capacity k; the flow is splittable and parallel links are allowed in N. The capacitated geometric network design problem generalizes, among others, the geometric Steiner tree problem, and as such it is NP-hard. We show that if the demands are polynomially bounded and the link capacity k is not too large, the single-sink capacitated geometric network design problem admits a polynomial-time approximation scheme. If the capacity is arbitrarily large, we design a quasi-polynomial-time approximation scheme for the capacitated geometric network design problem allowing for an arbitrary number of sinks. Our results rely on a derivation of an upper bound on the number of vertices different from sources and sinks (the so-called Steiner vertices) in an optimal network. The bound is polynomial in the total demand of the sources.
Piotr Berman, Arnab Bhattacharyya, Konstantin Makarychev, Sofya Raskhodnikova and Grigory Yaroslavtsev. Improved Approximation for the Directed Spanner Problem
Abstract: We give a new $O(\sqrt{n}\log n)$ approximation algorithm for the problem of finding the sparsest spanner of a given {\em directed} graph $G$ on $n$ vertices. A spanner of a graph is a sparse subgraph that approximately preserves distances in the original graph. More precisely, given a graph $G=(V,E)$ with nonnegative edge lengths $d: E\rightarrow{\mathbb N}$ and a {\em stretch} $k\geq 1$, a subgraph $H = (V,E_H)$ is a {\em $k$-spanner} of $G$ if for every edge $(u,v) \in E$, the graph $H$ contains a path from $u$ to $v$ of length at most $k \cdot d(u,v)$. The previous best approximation ratio was $\tilde{O}(n^{2/3})$, due to Dinitz and Krauthgamer (STOC '11). We also present an improved algorithm for the important special case of directed 3-spanners with unit edge lengths. The approximation ratio of our algorithm is $\tilde{O}(n^{1/3})$, which almost matches the lower bound of Dinitz and Krauthgamer for the integrality gap of the linear program. The best previously known algorithms for this problem, due to Berman, Raskhodnikova and Ruan (FSTTCS '10) and Dinitz and Krauthgamer, had approximation ratio $\tilde{O}(\sqrt{n})$.
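The $k$-spanner condition in the definition above can be checked directly with shortest-path computations in $H$: for each edge $(u,v)$ of $G$, verify that $H$ has a $u$-to-$v$ path of length at most $k \cdot d(u,v)$. The tiny directed instance below is ours; the paper's algorithms construct sparse spanners, they do not merely verify them.

```python
# Verify the k-spanner condition: every edge (u, v, w) of G must have a
# u -> v path of length <= k * w in the directed subgraph H (via Dijkstra).
import heapq

def dijkstra(adj, src):
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def is_k_spanner(G_edges, H_edges, k):
    adj = {}
    for u, v, w in H_edges:
        adj.setdefault(u, []).append((v, w))
    return all(dijkstra(adj, u).get(v, float('inf')) <= k * w
               for u, v, w in G_edges)

G = [(0, 1, 1), (1, 2, 1), (0, 2, 2)]
H = [(0, 1, 1), (1, 2, 1)]            # drops the edge (0, 2)
print(is_k_spanner(G, H, k=1))        # True: path 0 -> 1 -> 2 has length 2
```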
Ryan Harkins and John Hitchcock. Exact Learning Algorithms, Betting Games, and Circuit Lower Bounds
Abstract: This paper extends and improves work of Fortnow and Klivans \cite{FortnowKlivans09}, who showed that if a circuit class $\calC$ has an efficient learning algorithm in Angluin's model of exact learning via equivalence and membership queries \cite{Angluin88}, then we have the lower bound $\EXP^\NP \not\subseteq \calC$. We use entirely different techniques involving betting games \cite{BvMRSS01} to remove the $\NP$ oracle and improve the lower bound to $\EXP \not\subseteq \calC$. This shows that it is even more difficult to design a learning algorithm for $\calC$ than the results of Fortnow and Klivans indicated.
Andrew Drucker. A PCP Characterization of AM
Abstract: We introduce a 2-round stochastic constraint-satisfaction problem, and show that its approximation version is complete for (the promise version of) the complexity class AM. This gives a `PCP characterization' of AM analogous to the PCP Theorem for NP. Similar characterizations have been given for higher levels of the Polynomial Hierarchy, and for PSPACE; however, we suggest that the result for AM might be of particular significance for attempts to derandomize this class. To test this notion, we pose some hypotheses related to our stochastic CSPs that (in light of our result) would imply collapse results for AM. Unfortunately, the hypotheses may be over-strong, and we present evidence against them. In the process we show that, if some language in NP is hard-on-average against circuits of size $2^{\Omega(n)}$, then there exist `inapproximable-on-average' optimization problems of a particularly elegant form. All our proofs use a powerful form of PCPs known as Probabilistically Checkable Proofs of Proximity, and demonstrate their versatility. We also use known results on randomness-efficient soundness- and hardness-amplification. In particular, we make essential use of the Impagliazzo-Wigderson generator; our analysis relies on a recent Chernoff-type theorem for expander walks.
Franck Van Breugel and Xin Zhang. A Progress Measure for Explicit-State Probabilistic Model-Checkers
Abstract: Verification of the source code of a probabilistic system by means of an explicit-state model-checker is challenging. In most cases, the probabilistic model-checker will run out of memory due to the infamous state space explosion problem. We introduce the notion of a progress measure for such a model-checker. The progress measure returns a number in the interval [0, 1]. This number captures the amount of progress the model-checker has made towards verifying a particular linear-time property. The larger the number, the more progress the model-checker has made. We prove that the progress measure provides a lower bound on the measure of the set of execution paths that satisfy the property. We also show how to compute the progress measure for checking a particular class of linear-time properties, namely invariants. Keywords: probabilistic model-checking, progress measure, linear-time properties.