2017
Programming distributed applications free from communication deadlocks and race conditions is complex. Preserving these properties when applications are updated at runtime is even harder. We present a choreographic approach for programming updatable, distributed applications. We define a choreography language, called Dynamic Interaction-Oriented Choreography (DIOC), that allows the programmer to specify, from a global viewpoint, which parts of the application can be updated. At runtime, these parts may be replaced by new DIOC fragments from outside the application. DIOC programs are compiled, generating code for each participant in a process-level language called Dynamic Process-Oriented Choreography (DPOC). We prove that DPOC distributed applications generated from DIOC specifications are deadlock free and race free, and that these properties hold also after any runtime update. We instantiate the theoretical model above into a programming framework called Adaptable Interaction-Oriented Choreographies in Jolie (AIOCJ) that comprises an integrated development environment, a compiler from an extension of DIOCs to distributed Jolie programs, and a runtime environment to support their execution.
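To make the interaction-oriented/process-oriented split concrete, here is a minimal, hypothetical sketch of projecting a global description onto per-participant code; the `Interaction` record and `project` function are illustrative inventions and greatly simplify the actual DIOC-to-DPOC compilation (no updatable scopes, conditionals, or state).

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One global step: sender sends a message over an operation to a receiver."""
    sender: str
    operation: str
    receiver: str

def project(choreography, participant):
    """Project a global choreography onto the local actions of one participant.

    Illustrative only: the real compilation also handles updatable scopes,
    conditionals and state, which are omitted here.
    """
    local = []
    for step in choreography:
        if step.sender == participant:
            local.append(("send", step.operation, step.receiver))
        elif step.receiver == participant:
            local.append(("recv", step.operation, step.sender))
        # steps not involving this participant produce no local action
    return local

# A two-step global protocol and its per-participant projections.
chor = [Interaction("client", "quote", "seller"),
        Interaction("seller", "price", "client")]
print(project(chor, "client"))   # [('send', 'quote', 'seller'), ('recv', 'price', 'seller')]
print(project(chor, "seller"))   # [('recv', 'quote', 'client'), ('send', 'price', 'client')]
```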
The notion of Schnorr randomness refers to computable reals or computable functions. We propose a version of Schnorr randomness for subcomputable classes and characterize it in different ways: by Martin-Löf tests, martingales, or computable measure machines.
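For orientation, the classical notion being adapted can be stated as follows (one standard formulation; the subcomputable variants restrict the computational resources available to the tests):
\[
(U_n)_{n\in\mathbb{N}} \text{ is a Schnorr test if it is a uniformly c.e.\ sequence of open sets with } \mu(U_n)\le 2^{-n} \text{ and each } \mu(U_n) \text{ computable, uniformly in } n,
\]
\[
x \text{ is Schnorr random} \iff x \notin \bigcap_{n\in\mathbb{N}} U_n \text{ for every Schnorr test } (U_n)_{n\in\mathbb{N}}.
\]
Equivalently, $x$ is Schnorr random iff no computable martingale succeeds on $x$ at a computable rate, i.e. there is no computable martingale $d$ and computable, unbounded, non-decreasing $h$ with $d(x\upharpoonright n)\ge h(n)$ for infinitely many $n$.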
We study Hopf monoids in entropic semi-additive varieties with an emphasis on adjunctions related to the enveloping monoid functor and the primitive element functor. These investigations are based on the concept of the abelian core of a semi-additive variety and its monoidal structure in the case where the variety is entropic.
The rule labeling heuristic aims to establish confluence of (left-)linear term rewrite systems via decreasing diagrams. We present a formalization of a confluence criterion based on the interplay of relative termination and the rule labeling in the theorem prover Isabelle. Moreover, we report on the integration of this result into the certifier CeTA, facilitating the checking of confluence certificates based on decreasing diagrams. The power of the method is illustrated by an experimental evaluation on a (standard) collection of confluence problems.
We prove that all valid Herbrand equalities can be inter-procedurally inferred for programs in which the assignments taken into account are those whose right-hand sides depend on at most one variable. The analysis is based on procedure summaries representing the weakest pre-conditions for finitely many generic post-conditions with template variables. In order to arrive at effective representations for all occurring weakest pre-conditions, we show that almost all values possibly computed at run-time can be uniquely factorized into tree patterns and a ground term. Moreover, we introduce an approximate notion of subsumption which is effectively decidable and ensures that finite conjunctions of equalities cannot grow indefinitely. Based on these technical results, we realize an effective fixpoint iteration to infer all inter-procedurally valid Herbrand equalities for these programs. Finally, we show that an invariant candidate with a constant number of variables can be verified in polynomial time.
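A small example of the kind of fact being inferred (ours, for illustration): in the Herbrand interpretation operators are uninterpreted, so only structural equalities between terms hold. After the assignments
\[
x := a; \qquad y := f(x); \qquad z := f(a)
\]
the Herbrand equality $y = z$ is valid, since both variables hold the term $f(a)$. Note that each right-hand side depends on at most one variable, as required by the class of programs considered.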
In the semantics of programming languages one can view programs as state transformers or as predicate transformers. Recently the author has introduced state-and-effect triangles which capture this situation categorically, involving an adjunction between state- and predicate-transformers. The current paper exploits a classical result in category theory, part of Jon Beck's monadicity theorem, to systematically construct such a state-and-effect triangle from an adjunction. The power of this construction is illustrated by a range of examples, covering many of the monads occurring in program semantics, including (probabilistic) power domains.
In sharp contrast to classical proof complexity, we are currently short of lower bound techniques for QBF proof systems. In this paper we establish the feasible interpolation technique for all resolution-based QBF systems, whether modelling CDCL or expansion-based solving. This both provides the first general lower bound method for QBF proof systems and considerably extends the scope of classical feasible interpolation. We apply our technique to obtain new exponential lower bounds for all resolution-based QBF systems on a new class of QBF formulas based on the clique problem. Finally, we show how feasible interpolation relates to the recently established lower bound method based on strategy extraction.
Many privacy-type properties of security protocols can be modelled using trace equivalence properties in suitable process algebras. It has been shown that such properties can be decided for interesting classes of finite processes (i.e., without replication) by means of symbolic execution and constraint solving. However, this does not suffice to obtain practical tools. Current prototypes suffer from a classical combinatorial explosion problem caused by the exploration of many interleavings in the behaviour of processes. Mödersheim et al. have tackled this problem for reachability properties using partial order reduction techniques. We revisit their work, generalize it, and adapt it for equivalence checking. We obtain an optimisation in the form of a reduced symbolic semantics that eliminates redundant interleavings on the fly. The resulting partial order reduction technique has been integrated in a tool called APTE. We conducted extensive benchmarks showing dramatic improvements.
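To give a rough flavour of why partial order reduction helps (an illustration of the general idea, not APTE's reduced semantics): when two actions are independent, i.e. they commute, only one of their two adjacent orderings needs to be explored. The sketch below, with an invented shared-channel dependency relation, keeps only interleavings with no adjacent, out-of-order pair of independent actions; every Mazurkiewicz trace retains at least one surviving representative, because swapping such a pair strictly decreases the sequence of process ids lexicographically, though the surviving set is not necessarily minimal.

```python
from itertools import permutations

def independent(a, b):
    """Treat two actions as independent iff they belong to different processes
    and touch no common channel (illustrative dependency relation)."""
    return a["proc"] != b["proc"] and not (a["chans"] & b["chans"])

def canonical(trace):
    """Keep an interleaving only if no adjacent pair of independent actions is
    out of process-id order; many equivalent interleavings are pruned."""
    return all(not (independent(x, y) and x["proc"] > y["proc"])
               for x, y in zip(trace, trace[1:]))

acts = [{"proc": 1, "chans": {"c"}, "name": "out(c)"},
        {"proc": 2, "chans": {"d"}, "name": "out(d)"},
        {"proc": 3, "chans": {"c"}, "name": "in(c)"}]

explored = [t for t in permutations(acts) if canonical(list(t))]
print(len(list(permutations(acts))), "interleavings,", len(explored), "explored")
```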
We define a novel calculus that combines a call-by-name functional core with session-based communication primitives. We develop a typing discipline that guarantees both normalisation of expressions and progress of processes and that uncovers an unexpected interplay between evaluation and communication.
Disjoint union is a partial binary operation returning the union of two sets if they are disjoint and undefined otherwise. A disjoint-union partial algebra of sets is a collection of sets closed under disjoint unions, whenever they are defined. We provide a recursive first-order axiomatisation of the class of partial algebras isomorphic to a disjoint-union partial algebra of sets but prove that no finite axiomatisation exists. We do the same for other signatures including one or both of disjoint union and subset complement, another partial binary operation we define. Domain-disjoint union is a partial binary operation on partial functions, returning the union if the arguments have disjoint domains and undefined otherwise. For each signature including one or both of domain-disjoint union and subset complement and optionally including composition, we consider the class of partial algebras isomorphic to a collection of partial functions closed under the operations. Again the classes prove to be axiomatisable, but not finitely axiomatisable, in first-order logic. We define the notion of pairwise combinability. For each of the previously considered signatures, we examine the class isomorphic to a partial algebra of sets/partial functions under an isomorphism mapping arbitrary suprema of pairwise combinable sets to the corresponding disjoint unions. We prove that for each case the class is not closed under elementary equivalence. However, when intersection is added to any of […]
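To fix intuition, the two partial operations can be phrased directly on sets and on partial functions represented as dictionaries (a throwaway illustration using None for "undefined"; all names are ours, not the paper's formal setting):

```python
def disjoint_union(a: set, b: set):
    """Union of two sets if they are disjoint, undefined (None) otherwise."""
    return a | b if a.isdisjoint(b) else None

def domain_disjoint_union(f: dict, g: dict):
    """Union of two partial functions (as dicts) if their domains are disjoint,
    undefined (None) otherwise."""
    return {**f, **g} if f.keys().isdisjoint(g.keys()) else None

print(disjoint_union({1, 2}, {3}))                 # {1, 2, 3}
print(disjoint_union({1, 2}, {2, 3}))              # None (not disjoint)
print(domain_disjoint_union({1: 'a'}, {2: 'b'}))   # {1: 'a', 2: 'b'}
print(domain_disjoint_union({1: 'a'}, {1: 'b'}))   # None (domains overlap)
```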
We present several new model-theoretic applications of the fact that, under the assumption that there exists a proper class of almost strongly compact cardinals, the powerful image of any accessible functor is accessible. In particular, we generalize to the context of accessible categories the recent Hanf number computations of Baldwin and Boney, namely that in an abstract elementary class (AEC) if the joint embedding and amalgamation properties hold for models of size up to a sufficiently large cardinal, then they hold for models of arbitrary size. Moreover, we prove that, under the above-mentioned large cardinal assumption, every metric AEC is strongly d-tame, strengthening a result of Boney and Zambrano and pointing the way to further generalizations.
Subtyping in concurrency has been extensively studied since the early 1990s as one of the most interesting issues in type theory. The correctness of subtyping relations has usually been provided as soundness for type safety. The converse direction, completeness, has been largely ignored despite its usefulness for defining the largest subtyping relation ensuring type safety. This paper formalises preciseness (i.e., both soundness and completeness) of subtyping for mobile processes and studies it for the synchronous and the asynchronous session calculi. We first prove that the well-known session subtyping, the branching-selection subtyping, is sound and complete for the synchronous calculus. Next we show that in the asynchronous calculus this subtyping is incomplete for type-safety: that is, there exist session types T and S such that T can safely be considered as a subtype of S, but T < S is not derivable by the subtyping. We then propose an asynchronous subtyping system which is sound and complete for the asynchronous calculus. The method gives general guidance for designing rigorous channel-based subtypings respecting desired safety properties. Both the synchronous and the asynchronous calculus are first considered with linear channels only, and then they are extended with session initialisations and communications of expressions (including shared channels).
We propose a distance between continuous-time Markov chains (CTMCs) and study the problem of computing it by comparing three different algorithmic methodologies: iterative, linear program, and on-the-fly. In a work presented at FoSSaCS'12, Chen et al. characterized the bisimilarity distance of Desharnais et al. between discrete-time Markov chains as an optimal solution of a linear program that can be solved by using the ellipsoid method. Inspired by their result, we propose a novel linear program characterization to compute the distance in the continuous-time setting. Unlike previous proposals, ours has a number of constraints that is bounded by a polynomial in the size of the CTMC. This, in particular, proves that the distance we propose can be computed in polynomial time. Despite its theoretical importance, the proposed linear program characterization turns out to be inefficient in practice. Nevertheless, driven by the encouraging results of our previous work presented at TACAS'13, we propose an efficient on-the-fly algorithm which, unlike the other mentioned solutions, computes the distance between two given states avoiding an exhaustive exploration of the state space. This technique works by successively refining over-approximations of the target distances using a greedy strategy, which ensures that the state space is further explored only when the current approximations are improved. Tests performed on a consistent set of (pseudo)randomly generated […]
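For flavour, here is a toy version of the iterative methodology on a small labelled Markov chain in discrete time (our simplification: a discounted bisimilarity-style pseudometric, not the paper's CTMC distance, which also accounts for exit-rate differences). The pseudometric is refined by repeatedly applying a Kantorovich lifting, and each lifting step is a small linear program, solved here with scipy.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 3-state labelled Markov chain (discrete-time simplification).
labels = ["a", "a", "b"]
P = np.array([[0.5, 0.5, 0.0],
              [0.4, 0.4, 0.2],
              [0.0, 0.0, 1.0]])
discount = 0.9
n = len(labels)

def kantorovich(mu, nu, d):
    """Minimal coupling cost between distributions mu and nu w.r.t. cost matrix d."""
    c = d.flatten()
    A_eq, b_eq = [], []
    for i in range(n):                      # row marginals must equal mu
        row = np.zeros((n, n)); row[i, :] = 1
        A_eq.append(row.flatten()); b_eq.append(mu[i])
    for j in range(n):                      # column marginals must equal nu
        col = np.zeros((n, n)); col[:, j] = 1
        A_eq.append(col.flatten()); b_eq.append(nu[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
    return res.fun

d = np.zeros((n, n))
for _ in range(50):                         # Picard-style iteration toward the fixed point
    new = np.zeros((n, n))
    for s in range(n):
        for t in range(n):
            if labels[s] != labels[t]:
                new[s, t] = 1.0             # differently labelled states are maximally apart
            else:
                new[s, t] = discount * kantorovich(P[s], P[t], d)
    delta = np.max(np.abs(new - d))
    d = new
    if delta < 1e-6:
        break
print(np.round(d, 3))
```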
Generalizing standard monadic second-order logic for Kripke models, we introduce monadic second-order logic interpreted over coalgebras for an arbitrary set functor. We then consider invariance under behavioral equivalence of MSO-formulas. More specifically, we investigate whether the coalgebraic mu-calculus is the bisimulation-invariant fragment of the monadic second-order language for a given functor. Using automata-theoretic techniques and building on recent results by the third author, we show that in order to provide such a characterization result it suffices to find what we call an adequate uniform construction for the coalgebraic type functor. As direct applications of this result we obtain a partly new proof of the Janin-Walukiewicz Theorem for the modal mu-calculus, avoiding the use of syntactic normal forms, and bisimulation invariance results for the bag functor (graded modal logic) and all exponential polynomial functors (including the "game functor"). As a more involved application, involving additional non-trivial ideas, we also derive a characterization theorem for the monotone modal mu-calculus, with respect to a natural monadic second-order language for monotone neighborhood models.
We consider Markov decision processes (MDPs) with multiple limit-average (or mean-payoff) objectives. There exist two different views: (i) the expectation semantics, where the goal is to optimize the expected mean-payoff objective, and (ii) the satisfaction semantics, where the goal is to maximize the probability of runs such that the mean-payoff value stays above a given vector. We consider optimization with respect to both objectives at once, thus unifying the existing semantics. Precisely, the goal is to optimize the expectation while ensuring the satisfaction constraint. Our problem captures the notion of optimization with respect to strategies that are risk-averse (i.e., ensure a certain probabilistic guarantee). Our main results are as follows: First, we present algorithms for the decision problems, which are always polynomial in the size of the MDP. We also show that an approximation of the Pareto curve can be computed in time polynomial in the size of the MDP and the approximation factor, but exponential in the number of dimensions. Second, we present a complete characterization of the strategy complexity (in terms of memory bounds and randomization) required to solve our problem.
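Spelled out (our paraphrase; the symbols $\mathbf{exp}$, $\mathbf{sat}$, $\alpha$ and $\mathrm{mp}$ are ours), the decision problem asks whether, for an expectation threshold vector $\mathbf{exp}$, a satisfaction threshold vector $\mathbf{sat}$ and a probability bound $\alpha$, some strategy $\sigma$ achieves
\[
\mathbb{E}^{\sigma}\big[\mathrm{mp}\big] \;\ge\; \mathbf{exp}
\qquad\text{and}\qquad
\mathbb{P}^{\sigma}\big[\mathrm{mp} \ge \mathbf{sat}\big] \;\ge\; \alpha,
\]
where $\mathrm{mp}$ denotes the vector of limit-average rewards along a run and the inequalities are componentwise.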
The probability density function of a probability distribution is a fundamental concept in probability theory and a key ingredient in various widely used machine learning methods. However, the necessary framework for compiling probabilistic functional programs to density functions has only recently been developed. In this work, we present a density compiler for a probabilistic language with failure and both discrete and continuous distributions, and provide a proof of its soundness. The compiler greatly reduces the development effort of domain experts, which we demonstrate by solving inference problems from various scientific applications, such as modelling the global carbon cycle, using a standard Markov chain Monte Carlo framework.
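As a toy analogue of the compilation idea (ours, not the paper's compiler, and ignoring failure and observations): densities can be computed compositionally over the structure of a generative program, with independent pairs multiplying densities and a deterministic shift handled by change of variables. The mini-language and names below are invented for illustration.

```python
import math

# A tiny expression language of generative programs:
#   ("normal", mu, sigma)   draw from a Gaussian
#   ("shift", c, e)         add the constant c to the value of e
#   ("pair", e1, e2)        independent pair of two programs
def density(e):
    """Compile a program to its density function (toy compositional rules)."""
    tag = e[0]
    if tag == "normal":
        _, mu, sigma = e
        return lambda x: math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    if tag == "shift":
        _, c, body = e
        f = density(body)
        return lambda x: f(x - c)                 # change of variables for a unit-Jacobian shift
    if tag == "pair":
        _, e1, e2 = e
        f1, f2 = density(e1), density(e2)
        return lambda xy: f1(xy[0]) * f2(xy[1])   # independence: densities multiply
    raise ValueError(f"unknown construct {tag}")

p = ("pair", ("shift", 3.0, ("normal", 0.0, 1.0)), ("normal", 1.0, 2.0))
print(density(p)((3.0, 1.0)))   # density of the pair evaluated at its mode
```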