2013

Editors: Pedro R. D'Argenio, Davide Sangiorgi, Hernán Melgratti

This volume contains a selection of the top papers presented at the 24th Conference on Concurrency Theory (CONCUR 2013) held in Buenos Aires, Argentina, during August 27--30, 2013.

CONCUR is an annual conference that brings together researchers, developers, and students in order to advance the theory of concurrency and promote its applications. The main topics include:

- Basic models of concurrency such as abstract machines, domain-theoretic models, game-theoretic models, process algebras, graph transformation systems, and Petri nets;
- Logics for concurrency such as modal logics, probabilistic and stochastic logics, temporal logics, and resource logics;
- Models of specialized systems such as biology-inspired systems, circuits, hybrid systems, mobile and collaborative systems, multi-core processors, probabilistic systems, real-time systems, service-oriented computing, and synchronous systems;
- Verification and analysis techniques for concurrent systems such as abstract interpretation, atomicity checking, model checking, race detection, preorder and equivalence checking, run-time verification, state-space exploration, static analysis, synthesis, testing, theorem proving, and type systems;
- Related programming models such as distributed, component-based, object-oriented, and web services.

The 2013 edition received 115 full submissions, from which the Program Committee selected 34 papers for presentation at the conference; these papers also appeared in the conference proceedings.

Following the conference, we selected a few of the highest-rated papers and invited their authors to submit full versions to this special issue of Logical Methods in Computer Science.

The result is the volume you are now reading, to which not only the authors but also the reviewers contributed their hard work. We warmly thank all of them.

Pedro R. D'Argenio, Hernán Melgratti, Davide Sangiorgi

Guest Editors for the CONCUR 2013 Special Issue

We introduce Priority Channel Systems, a new class of channel systems where messages carry a numeric priority and where higher-priority messages can supersede lower-priority messages preceding them in the FIFO communication buffers. The decidability of safety and inevitability properties is shown via the introduction of a priority embedding, a well-quasi-ordering that has not previously been used in well-structured systems. We then show how Priority Channel Systems can compute fast-growing functions and prove that the aforementioned verification problems are $\mathbf{F}_{\varepsilon_{0}}$-complete.
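As a toy illustration of this superseding discipline (a minimal sketch under simplifying assumptions of our own, not the paper's formal operational semantics), a write of priority $p$ into the buffer may erase pending messages of strictly lower priority that precede it:

```python
from collections import deque

class PriorityBuffer:
    """Toy FIFO buffer in which a newly written message may supersede
    (erase) lower-priority messages that precede it.  Illustration only:
    in Priority Channel Systems superseding is part of the channel
    semantics and is a nondeterministic choice, not a write-time flag."""

    def __init__(self):
        self.buf = deque()                      # entries are (priority, payload)

    def write(self, priority, payload, supersede=True):
        if supersede:
            # keep only pending messages whose priority is at least as high
            self.buf = deque((p, m) for (p, m) in self.buf if p >= priority)
        self.buf.append((priority, payload))

    def read(self):
        return self.buf.popleft()               # ordinary FIFO read
```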

Probabilistic automata constitute a versatile and elegant model for concurrent probabilistic systems. They are equipped with a compositional theory supporting abstraction, enabled by weak probabilistic bisimulation serving as the reference notion for summarising the effect of abstraction. This paper considers probabilistic automata augmented with costs. It extends the notions of weak transitions in probabilistic automata in such a way that the costs incurred along a weak transition are captured. This gives rise to cost-preserving and cost-bounding variations of weak probabilistic bisimilarity, for which we establish compositionality properties with respect to parallel composition. Furthermore, polynomial-time decision algorithms are proposed that can be used effectively to compute reward-bounding abstractions of Markov decision processes in a compositional manner.
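As a rough indication of the shape such a relation may take (our paraphrase in schematic notation, not the paper's exact definition), a cost-bounding weak bisimulation $R$ could be expected to ask that every step of one state be matched by a weak combined transition of the other whose accumulated cost does not exceed it:

\[
s \mathrel{R} t \ \text{ and }\ s \xrightarrow{a} \mu
\quad\Longrightarrow\quad
\exists\, \nu.\ \ t \stackrel{a}{\Longrightarrow}_{\mathrm{c}} \nu,
\quad \mu \mathrel{\mathcal{L}(R)} \nu,
\quad \mathrm{cost}\bigl(t \stackrel{a}{\Longrightarrow}_{\mathrm{c}} \nu\bigr) \le \mathrm{cost}\bigl(s \xrightarrow{a} \mu\bigr),
\]

where $\mathcal{L}(R)$ denotes the lifting of $R$ to distributions; a cost-preserving variant would require equality instead of the bound.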

Linearizability of concurrent data structures is usually proved by monolithic simulation arguments relying on the identification of the so-called linearization points. Regrettably, such proofs, whether manual or automatic, are often complicated and scale poorly to advanced non-blocking concurrency patterns, such as helping and optimistic updates. In response, we propose a more modular way of checking linearizability of concurrent queue algorithms that does not involve identifying linearization points. We reduce the task of proving linearizability with respect to the queue specification to establishing four basic properties, each of which can be proved independently by simpler arguments. As a demonstration of our approach, we verify the Herlihy and Wing queue, an algorithm that is challenging to verify by a simulation proof.
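For concreteness, here is a simplified sketch of the Herlihy and Wing queue (our rendering, not the paper's code): the two atomic primitives are emulated with a lock and the original unbounded array is replaced by a fixed-capacity one. The point is that an enqueue consists of two separately visible steps, which is what makes fixed linearization points hard to pin down.

```python
import threading

class HerlihyWingQueue:
    """Simplified sketch of the Herlihy-Wing queue, for illustration only.
    The fetch-and-increment and swap primitives are emulated with a lock;
    a real implementation would use hardware atomics and an unbounded array."""

    def __init__(self, capacity=1024):
        self.items = [None] * capacity          # slots; None means empty
        self.back = 0                           # next free slot index
        self._lock = threading.Lock()

    def _fetch_and_increment(self):
        with self._lock:
            i = self.back
            self.back += 1
            return i

    def _swap(self, i, value):
        with self._lock:
            old, self.items[i] = self.items[i], value
            return old

    def enqueue(self, x):                       # assumes fewer than `capacity` enqueues
        i = self._fetch_and_increment()         # step 1: reserve a slot
        self.items[i] = x                       # step 2: publish the value

    def dequeue(self):
        while True:                             # retry until a value is found
            n = self.back
            for i in range(n):
                x = self._swap(i, None)
                if x is not None:
                    return x
```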

In the standard testing theory of De Nicola and Hennessy, one process is considered to be a refinement of another if every test guaranteed by the former is also guaranteed by the latter. In the domain of web services this has been recast, with processes viewed as servers and tests as clients. In this way the standard refinement preorder between servers is determined by their ability to satisfy clients. But in this setting there is also a natural refinement preorder between clients, determined by their ability to be satisfied by servers. In more general settings where there is no distinction between clients and servers, but all processes are peers, there is a further refinement preorder based on the mutual satisfaction of peers. We give a uniform account of these three preorders. In particular, we give two characterisations. The first is behavioural, in terms of traces and ready sets. The second, for finite processes, is equational.
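Schematically, and in our own notation rather than the paper's, writing $p \;\textsf{must}\; r$ for "server $p$ is guaranteed to satisfy client (test) $r$" and $p \;\textsf{mutual}\; q$ for the mutual satisfaction of two peers, the three preorders can be pictured as:

\[
\begin{aligned}
p \sqsubseteq_{\mathrm{svr}} q \;&\iff\; \forall r.\ \bigl(p \;\textsf{must}\; r \Rightarrow q \;\textsf{must}\; r\bigr)
&&\text{(servers, ordered by the clients they satisfy)}\\
r_1 \sqsubseteq_{\mathrm{clt}} r_2 \;&\iff\; \forall p.\ \bigl(p \;\textsf{must}\; r_1 \Rightarrow p \;\textsf{must}\; r_2\bigr)
&&\text{(clients, ordered by the servers that satisfy them)}\\
p_1 \sqsubseteq_{\mathrm{peer}} p_2 \;&\iff\; \forall q.\ \bigl(p_1 \;\textsf{mutual}\; q \Rightarrow p_2 \;\textsf{mutual}\; q\bigr)
&&\text{(peers, ordered by mutual satisfaction)}
\end{aligned}
\]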

We develop a new thermodynamic approach to stochastic graph rewriting. The ingredients are a finite set of reversible graph-rewriting rules called generating rules, a finite set $P$ of connected graphs called energy patterns, and an energy cost function. The idea is that the generators define the qualitative dynamics, by showing which transformations are possible, while the energy patterns and cost function specify the long-term probability $\pi$ of any reachable graph. Given the generators and energy patterns, we construct a finite set of rules which (i) has the same qualitative transition system as the generators; and (ii) when equipped with suitable rates, defines a continuous-time Markov chain of which $\pi$ is the unique fixed point. The construction relies on the use of site graphs and a technique of 'growth policy' for quantitative rule refinement which is of independent interest. This division of labour between the qualitative and long-term quantitative aspects of the dynamics leads to intuitive and concise descriptions of realistic models (see the examples in Sections 4 and 5). It also guarantees thermodynamic consistency (also known as detailed balance), which is otherwise known to be undecidable and which is important for some applications. Finally, it leads to parsimonious parameterizations of models, again an important point in some applications.
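One standard way to read these ingredients (our paraphrase, in schematic notation): each energy pattern $p \in P$ contributes its cost once per occurrence in a reachable graph $G$, which yields an energy, a Boltzmann-like long-term distribution, and the detailed-balance condition relating $\pi$ to the transition rates $q$ of the induced continuous-time Markov chain:

\[
E(G) \;=\; \sum_{p \in P} c_p \cdot |G|_p,
\qquad
\pi(G) \;\propto\; e^{-E(G)},
\qquad
\pi(G)\, q(G, G') \;=\; \pi(G')\, q(G', G),
\]

where $|G|_p$ counts the occurrences of pattern $p$ in $G$.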