Multiparty testing preorders

Variants of the must testing approach have been successfully applied in service oriented computing for analysing the compliance between (contracts exposed by) clients and servers or, more generally, between two peers. It has however been argued that multiparty scenarios call for more permissive notions of compliance because partners usually do not have full coordination capabilities. We propose two new testing preorders, which are obtained by restricting the set of potential observers. For the first preorder, called uncoordinated, we allow only sets of parallel observers that use different parts of the interface of a given service and have no possibility of intercommunication. For the second preorder, that we call individualistic, we instead rely on parallel observers that perceive as silent all the actions that are not in the interface of interest. We have that the uncoordinated preorder is coarser than the classical must testing preorder and finer than the individualistic one. We also provide a characterisation in terms of decorated traces for both preorders: the uncoordinated preorder is defined in terms of must-sets and Mazurkiewicz traces while the individualistic one is described in terms of classes of filtered traces that only contain designated visible actions and must-sets.


Introduction
A desired property of communication-centered systems is the graceful termination of the processes involved in a multiparty interaction, i.e., every possible interaction ends successfully, in the sense that there are neither messages waiting forever to be sent nor sent messages that are never received. The theories of session types [THK94, HVK98] and of contracts [CGP08, CGP09, BZ09, LP07] are commonly used to ensure such properties. The key idea behind both approaches is to associate with each process a type (or contract) that gives an abstract description of its external, visible behaviour and to use type checking to verify the correctness of behaviours.
Processes are often defined as sequential nondeterministic ccs processes [Mil89] describing the offered communications, and are built up from send and receive actions. These activities are abstractly represented either as output and input actions that take place over a set of channels or as internal τ actions. Basic actions can be composed sequentially (prefix operator ".") or as alternatives (nondeterministic choice "+"). Typically, the language for describing processes does not have any operator for parallel composition. It is assumed that all possible interleavings are made explicit in the description of the service, and communication is only used for modelling the interaction among different processes.
In client-server scenarios, i.e., in settings involving just two processes, variants of the must testing preorder have been used to compare alternative implementations of servers and clients [BH13]. Technically, two given processes p and q are related via the must preorder (p ⊑must q) if q satisfies all observers that are satisfied by p. Consequently, p and q are considered equivalent (p ≈must q) if they satisfy exactly the same observers. Standardly, an observer is a single (sequential) process that runs in parallel with the tested process; consequently, all interactions with the tested process are handled by a single, central process, i.e., the observer itself.
If one considers a multiparty setting, each process may concurrently interact with several other partners, and its interface is often (logically) partitioned by allowing each partner to communicate only through a dedicated part of the interface. Consider the following scenario involving three partners: an organisation (the broker) that sells goods produced by a different company (the producer) to a specific customer (the client). The behaviour of the broker can be described by the following process: B = req.order.inv.0. The broker accepts requests on channel req, then places an order with the producer via the message order and sends an invoice to the client via the message inv. A client may behave as the process C below, which first sends a request on channel req and then expects the invoice on channel inv, i.e., C = req.inv.0. A producer may be modelled by a process that simply accepts an order over channel order, i.e., P = order.0. In this scenario, the broker uses the channels req and inv to interact with the client, while it interacts with the producer over the channel order. Moreover, the client and the producer do not know each other and are completely independent. Hence, the order in which the messages order and inv are sent is completely irrelevant to them. In fact, they would be equally happy with a broker defined as follows: B′ = req.inv.order.0. Nevertheless, these two implementations are not considered must-equivalent. In such situations, the classical must testing preorder turns out to be unnecessarily discriminating.
The main goal of this paper is to introduce alternative, less discriminating preorders that take into account the distributed nature of observers and their possibly limited coordination and interaction capabilities. A first preorder, called uncoordinated must preorder, is obtained by assuming that a set of observers of a given process interact with it via fully disjoint sets of ports, i.e., they use different parts of its interface, have no possibility of intercommunication, and all of them terminate successfully in every possible interaction. It is however worth noting that these assumptions about the absence of communication among observers do not fully eliminate the possibility for observers of being mutually influenced, e.g., when one observer does not enable a communication on some ports. Due to this, it is possible to differentiate B from B′ above when one of the observers refuses to synchronise over a port, e.g., if the client does not enable the synchronisation over the channel inv. Consider a client C′ = req.0 that sends a request and terminates without accepting the invoice. While P and C′ always terminate their interaction with B, this is not the case when interacting with B′, because the communication over channel order is never enabled. Consequently, these two implementations of the broker are distinguished by the uncoordinated must preorder. The uncoordinated must preorder does, however, allow for some reordering of actions. For instance, the following two implementations of the broker are considered equivalent under the uncoordinated must preorder: B′′ = req.(order.inv.0 + order.0 + inv.0) and B′′′ = req.(inv.order.0 + order.0 + inv.0). We remark that the processes C and P are not able to distinguish B′′ from B′′′, because they both terminate when interacting with either of them. We also note that a client behaving as described by C′ will be satisfied by neither B′′ nor B′′′, because both may decide not to communicate over inv.
The second preorder, which we call individualistic must preorder, allows observers to take for granted the execution of those actions of the process that are not explicitly of interest to them (i.e., not in their alphabet). For instance, a client in the previous scenario assumes that the producer will always enable the communication over the channel order. Under the individualistic must preorder, the processes B and B′ turn out to be indistinguishable.
The preorders are, as usual, defined in terms of the outcomes of experiments by specific sets of observers. For defining the uncoordinated must preorder, we allow only sets of parallel observers that cannot intercommunicate and that challenge processes via disjoint parts of their interface. For defining the individualistic must preorder, we instead rely on parallel observers that, again, cannot intercommunicate but, in addition, perceive as silent all the actions that are not part of the interface of their interest. This prevents a specific observer from recovering information about the other involved observers. As expected, the uncoordinated preorder is coarser than the classical must testing preorder and finer than the individualistic one.
Just like for classical testing preorders, we provide a characterisation of both new preorders in terms of decorated traces, which avoids dealing with universal quantification over the set of observers whenever a specific relation between two processes has to be established. The alternative characterisations make it even more evident that our preorders permit action reordering. Indeed, the uncoordinated preorder is defined in terms of Mazurkiewicz traces [Maz95] while the individualistic one is described in terms of classes of traces quotiented via specific sets of visible actions. We remark that our preorders are different from those defined in [BZ08, Pad10, MYH09], which also permit action reordering by relying on buffered communication; additional details will be provided in Section 7.
Synopsis. The remainder of this paper is organised as follows. In Section 2 we recall the basics of the classical must testing approach. In Sections 3 and 4 we present the theory of the uncoordinated and individualistic must testing preorders and their characterisation in terms of traces. In Section 5 we show that the uncoordinated preorder is coarser than the must testing preorder but finer than the individualistic one. In Section 6 we describe a Prolog implementation of the uncoordinated and individualistic preorders for the finite fragment of our specification language and use it for analysing a scenario involving a replicated data store. Finally, we discuss some related work and future developments in Section 7.
This paper is a revised and extended version of [NM15]. We fix an incorrect characterisation of the uncoordinated preorder given in [NM15] and provide full proofs of previously published results. In addition, we give a prototype implementation in Prolog of the alternative characterisation of the proposed preorders for the fragment of the calculus with only finite processes, and we illustrate the usability of the proposed preorders by using them to reason about different implementations of components in a replicated data store (Section 6).

Processes and testing preorders
Let N be a countable set of action names, ranged over by a, b, .... As usual, we write co-names in N̄ as ā, b̄, ..., and assume that complementation is involutive, i.e., the complement of ā is a. We will use α, β to range over Act = N ∪ N̄. Moreover, we consider a distinguished internal action τ not in Act and use µ to range over Act ∪ {τ}. We fix the language for defining processes as the sequential fragment of ccs extended with a success operator, as specified by the following grammar.
p, q ::= 0 | 1 | µ.p | p + q | X | rec X.p

The process 0 stands for the terminated process, 1 for the process that reports success and then terminates, and µ.p for a process that executes µ and then continues as p. Alternative behaviours are specified by terms of the form p + q, while recursive ones are introduced by terms like rec X.p. We denote by P the set of all processes. We write n(p) for the set of names a ∈ N such that either a or ā occurs in p.
The operational semantics of processes is given in terms of a labelled transition system (lts) over processes (Definition 2.1). Multiparty applications, named configurations, are built by composing processes concurrently. Formally, configurations are given by the following grammar.
c, d, o ::= p | c ∥ d

We denote by O the set of all configurations. We sometimes write Π_{i∈0..n} p_i for the parallel composition p_0 ∥ · · · ∥ p_n. The operational semantics of configurations, which accounts for the communication between processes, is obtained by extending the lts in Definition 2.1 with the following rules. All rules are standard apart from the last one, which is not present in [NH84]: it states that the concurrent composition of processes can report success only when all processes in the composition do so.
We write c =s⇒ c′ when c evolves into c′ by performing the sequence of visible actions s ∈ Act*, possibly interleaved with τ steps. We write str(c) and init(c) to denote the sets of strings and enabled actions of c, defined as str(c) = {s ∈ Act* | c =s⇒} and init(c) = {λ ∈ Act | c =λ⇒}. As behavioural semantics, we consider the must-testing preorder [NH84], which is defined in terms of the computations of a process under test p and an observer o. A computation of p ∥ o is a sequence of τ transitions p ∥ o = p_0 ∥ o_0 −τ→ p_1 ∥ o_1 −τ→ · · ·. We say it is observer-successful if there exists j ≥ 0 such that o_j can report success, and observer-unsuccessful otherwise.
Definition 2.2 (must). We write p must o iff each maximal computation of p ∥ o is observer-successful.
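To make the definition above concrete, the following sketch checks p must o for the finite, recursion-free fragment. The encoding is our own illustrative convention, not the paper's notation: processes are nested tuples, a trailing "!" marks a co-name, "tau" is the internal action, and success is modelled by the action "ok".

```python
def co(a):
    """Complement of an action: a <-> a! (a trailing '!' marks a co-name)."""
    return a[:-1] if a.endswith("!") else a + "!"

def steps(p):
    """Immediate transitions of a finite sequential process."""
    kind = p[0]
    if kind == "0":
        return []
    if kind == "1":
        return [("ok", ("0",))]                  # report success, then stop
    if kind == "act":                            # ("act", mu, continuation)
        return [(p[1], p[2])]
    if kind == "+":                              # ("+", p1, p2)
        return steps(p[1]) + steps(p[2])

def tau_steps(p, o):
    """Internal moves of the configuration p || o: local taus plus synchronisations."""
    nxt = [(p2, o) for mu, p2 in steps(p) if mu == "tau"]
    nxt += [(p, o2) for mu, o2 in steps(o) if mu == "tau"]
    nxt += [(p2, o2)
            for mu, p2 in steps(p) if mu not in ("tau", "ok")
            for nu, o2 in steps(o) if nu == co(mu)]
    return nxt

def must(p, o):
    """p must o: every maximal computation of p || o is observer-successful."""
    if any(mu == "ok" for mu, _ in steps(o)):
        return True                              # the observer can report success here
    nxt = tau_steps(p, o)
    if not nxt:
        return False                             # maximal and observer-unsuccessful
    return all(must(p2, o2) for p2, o2 in nxt)
```

For instance, encoding the brokers of the Introduction and the single sequential observer that first sends req and then expects order and inv in that order, must distinguishes B = req.order.inv.0 from B′ = req.inv.order.0, matching the discussion above.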
The notion of passing a test (or satisfying an observer) represents the fact that an observer built up from the parallel composition of processes is able to report success in every possible interaction with the process under test. It is then natural to compare processes according to their capacity to satisfy observers.
The standard framework of [NH84] can be recovered by considering only observers without parallel composition.
Definition 2.3 (must preorder). p ⊑must q iff ∀r ∈ P : p must r implies q must r. We write p ≈must q when both p ⊑must q and q ⊑must p.
2.1. Semantic characterisation. The must testing preorder has been characterised in terms of (i) the sequences of actions that a process may perform, and (ii) the possible sets of actions that it may perform after executing a particular sequence of actions [NH84]. This characterisation relies on a few auxiliary predicates and functions, presented below. A process p diverges, written p ⇑, when it exhibits an infinite internal computation p −τ→ p_0 −τ→ p_1 −τ→ · · ·. We say p converges, written p ⇓, otherwise. For s ∈ Act*, the convergence predicate p ⇓ s is defined inductively by the following rules. The residuals of a process p (or a set of processes P) after the execution of s ∈ Act* are given by (p after s) = {p′ | p =s⇒ p′}. Moreover, P MUST L holds when every process in P can weakly perform some action in L, i.e., when for all p ∈ P there exists α ∈ L such that p =α⇒.

Definition 2.5. p ≪must q if for every s ∈ Act* and for all finite L ⊆ Act, if p ⇓ s then:
• q ⇓ s;
• (p after s) MUST L implies (q after s) MUST L.
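A small executable reading of these definitions may help. The tuple encoding below ("act"/"+"/"0" and the action name "tau") is our own illustrative convention; weak transitions are obtained by closing under τ-steps.

```python
def steps(p):
    """Immediate transitions of a finite sequential process (tuple-encoded)."""
    kind = p[0]
    if kind == "0":
        return []
    if kind == "act":                    # ("act", mu, continuation)
        return [(p[1], p[2])]
    if kind == "+":
        return steps(p[1]) + steps(p[2])

def tau_closure(ps):
    """All processes reachable from ps through tau steps only."""
    seen, todo = set(), list(ps)
    while todo:
        p = todo.pop()
        if p not in seen:
            seen.add(p)
            todo += [q for mu, q in steps(p) if mu == "tau"]
    return seen

def after(p, s):
    """(p after s): residuals of p after the weak execution of the string s."""
    cur = tau_closure({p})
    for a in s:
        cur = tau_closure({r for q in cur for mu, r in steps(q) if mu == a})
    return cur

def must_set(P, L):
    """P MUST L: every residual in P can weakly perform some action in L
    (vacuously true when P is empty)."""
    return all(any(mu in L for q in tau_closure({p}) for mu, _ in steps(q))
               for p in P)
```

With p = τ.a.0 + τ.b.0, (p after ε) MUST {a, b} holds while (p after ε) MUST {a} does not, which is the classical way must-sets capture internal choices.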

A testing preorder with uncoordinated observers
The must testing preorder is defined in terms of the tests that each process is able to pass. Remarkably, the classical setting can be formulated by considering only sequential tests (see the characterisation of minimal tests in [NH84]). Each sequential test is a unique, centralised process that handles all the interaction with the process under test and, therefore, has a complete view of the externally observable behaviour of the process. For this reason, we refer to the classical must testing preorder as a centralised preorder.
Multiparty interactions are generally structured in such a way that pairs of partners communicate through dedicated channels, for example, when relying on partner links in service oriented models or buffers in communicating machines [BBO12]. Conceptually, the interface (i.e., the set of channels) of a process is partitioned, and the process interacts with each partner by using only specific sets of channels in its interface. In addition, there are scenarios (such as the one discussed in Section 1) in which partners frequently do not know each other and cannot communicate directly. As a direct consequence, the partners of a process cannot establish causal dependencies among actions that take place over different parts of the interface. These constraints reduce the discriminating power of partners and call for equivalences that equate processes that cannot be distinguished by sets of independent sequential processes.
Example 3.1. Consider the classical scenario for planning a trip. A user U interacts with a broker B, which is responsible for booking flights provided by a service F and hotel rooms available at service H. The expected interaction can be described as follows: U makes a booking request by sending a message req to B (we will just describe the interaction and abstract away from data details such as trip destination, departure dates and duration). Depending on the request, B may internally decide to contact service F (for booking just a flight ticket), H (for booking rooms) or both. Service B uses channels reqF and reqH to contact F and H, respectively (for the sake of simplicity, we assume that any request to F and H will be granted). Then, the expected behaviour of B can be described with the following process. (As usual, τ actions and + are combined to model internal, non-deterministic choices in a process.)

This section is devoted to the definition and characterisation of a preorder, called uncoordinated must preorder, that is coarser than the classical must preorder and relates processes that cannot be distinguished by distributed contexts. The uncoordinated must preorder is obtained by restricting the set of observers to parallel processes that do not share channels. We will say that I = {I_i}_{i∈0..n} is an interface whenever I is a partition of Act and, for all α ∈ Act, α ∈ I_i implies ᾱ ∈ I_i. In the rest of this paper we will usually write only the relevant part of an interface. For instance, we will write {{a}, {b}} for any interface {I_0, I_1} such that a ∈ I_0 and b ∈ I_1. Then, the observers used by the uncoordinated must testing preorder are introduced by the following definition.
We say that o is an uncoordinated observer, and we omit the interface when no confusion arises. In our setting, which does not involve name mobility, the fact that I = {I_i} is a partition of Act and n(o_i) ⊆ I_i suffices to prevent direct communication among the processes of an uncoordinated observer. As a consequence, a distributed observer cannot impose a total order between actions that are controlled by different processes of the observer. Indeed, the executions of a distributed observer are the interleavings of the executions of all the processes {o_i}_{i∈0..n} (this property is formally stated in Section 3.1, Lemma 3.6). We remark that a configuration reports success (i.e., performs the success action) only when all processes in the composition report success; consequently, an uncoordinated observer reports success when all its components report success simultaneously. Our definition of success deviates from the original setup of [NH84]. If success were not synchronised, then, e.g., every process would pass the observer o = a.0 ∥ 1, because o would be able to report success immediately. This is not the case in our setting. In fact, each component of an uncoordinated observer accounts for the view that a particular partner has of the process under test, and we expect every component of the observer to be able to report success when a process passes a test.

Definition 3.3 (Uncoordinated must preorder ⊑ᴵunc). Let I = {I_i}_{i∈0..n} be an interface. We say that p ⊑ᴵunc q iff for all uncoordinated observers o over I, p must o implies q must o. We write p ≈ᴵunc q when both p ⊑ᴵunc q and q ⊑ᴵunc p.
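The synchronised-success rule can be illustrated with a small check, again under our illustrative tuple encoding ("1" is the success process, "ok" the success action): a composition reports success only when every component can do so at the same instant.

```python
def steps(p):
    """Immediate transitions of a finite sequential process (tuple-encoded)."""
    kind = p[0]
    if kind == "0":
        return []
    if kind == "1":
        return [("ok", ("0",))]          # the success process reports and stops
    if kind == "act":                    # ("act", mu, continuation)
        return [(p[1], p[2])]
    if kind == "+":
        return steps(p[1]) + steps(p[2])

def reports_success(components):
    """Synchronised success: all components must offer 'ok' simultaneously."""
    return all(any(mu == "ok" for mu, _ in steps(c)) for c in components)
```

The observer o = a.0 ∥ 1 discussed above cannot report success immediately, since its first component offers only a.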
Example 3.4. Consider the scenario presented in Example 3.1 and the following interface I = {{req}, {reqF}, {reqH}} for the process B, which thus interacts with each of the other partners by using a dedicated part of its interface. It can be shown that the three definitions for B in Example 3.1 are equivalent when considering the uncoordinated must testing preorder, i.e., B_0 ≈ᴵunc B_1 ≈ᴵunc B_2. The actual proof, which uses the (trace-based) alternative characterisation of the preorder, is deferred to Example 3.14.
3.1. Semantic characterisation. We now characterise the uncoordinated must testing preorder in terms of traces and must-sets. In order to do that, we shift from strings to Mazurkiewicz traces [Maz86]. A Mazurkiewicz trace is a set of strings, obtained by permuting independent symbols. Traces represent concurrent computations, in which commuting symbols stand for actions that execute independently of one another and non-commuting symbols are causally dependent actions. We start by summarising the basics of the theory of traces in [Maz86].
Let D ⊆ Act × Act be a finite equivalence relation, called the dependency relation, that relates the actions that cannot be commuted. Thus, if (α, β) ∈ D then α and β must appear in the same relative order in all the strings of a trace; actions not related by D are instead independent and can be freely swapped. In particular, actions that take place over channels in the same part of the interface are dependent and cannot commute; for instance, two subsequent communications over the channel req cannot be reordered.
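Under the interface-induced dependency relation, the equivalence class [s]_D can be computed by repeatedly swapping adjacent actions that fall in different parts of the interface. The sketch below uses encodings of our own: an interface is a list of sets of channel names, and a trailing "!" marks a co-name.

```python
def block(interface, a):
    """Index of the interface part owning action a (a trailing '!' marks a co-name)."""
    channel = a[:-1] if a.endswith("!") else a
    for i, part in enumerate(interface):
        if channel in part:
            return i
    return None          # unknown actions are treated as mutually dependent

def trace_class(s, interface):
    """[s]_D: all strings reachable from s by swapping adjacent independent actions."""
    seen, todo = {tuple(s)}, [tuple(s)]
    while todo:
        t = todo.pop()
        for i in range(len(t) - 1):
            if block(interface, t[i]) != block(interface, t[i + 1]):
                u = t[:i] + (t[i + 1], t[i]) + t[i + 2:]
                if u not in seen:
                    seen.add(u)
                    todo.append(u)
    return seen
```

For the broker interface, order and inv commute, while two actions over req do not.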
Now we are able to characterise the behaviour of an uncoordinated observer. We start by formally stating that an uncoordinated observer reaches the same configuration after executing any of the strings in the same equivalence class. This result is instrumental to the proof of the alternative characterisation of the uncoordinated preorder.
=⇒ o′. Since z_2 and z_3 are independent, they take place on different components of the observer, and o_1

Proof. All items follow from Lemma 3.6. We illustrate (2), by contradiction. Assume o ⇓ s and o ⇑ t. Then, there exist t_1, t_2 such that t = t_1 t_2, o =t_1⇒ o′ and o′ ⇑. Without loss of generality, we assume that o′ Then, take s_1 and s_2 such that s = s_1 s_2 and s_1 restricted to I_h equals t_1 restricted to I_h. By Levi's Lemma for traces [Maz95, Theorem 1], we have that

The alternative characterisation for the uncoordinated preorder follows along the lines of the one for the classical must testing preorder, when Mazurkiewicz traces are considered instead of strings of actions. For this reason, we extend the notions of transition relation, convergence and residuals to Mazurkiewicz traces.
We now focus on the definition of the transition relation, which is instrumental for the next definitions. We first note that, differently from the centralised must testing preorder, string inclusion may not hold for the uncoordinated preorders. For instance, take p = τ.a.0 + τ.b.0 and q = a.b.0 over the interface I = {{a}, {b}}. Note that p ⊑ᴵunc q, because any uncoordinated observer that passes p is happy regardless of whether a and b are executed. Moreover, such an observer would be unable to detect whether both a and b are executed, because those actions take place over different parts of the interface. Hence, str(q) ⊄ str(p), because ab ∈ str(q) but ab ∉ str(p), despite p ⊑ᴵunc q. Our notion of reduction w.r.t. a Mazurkiewicz trace accounts for this mismatch: e.g., p =[s]_D⇒ accounts for reductions in which s may be partially executed over some of the parts of the interface (but completed in at least one of them).
We write s <_D t iff t ∈ [ss′]_D for some s′ ≠ ε, i.e., t extends s (up to the swapping of independent actions), and s ≤_D t stands for s <_D t or s ≡_D t. Then, the set of maximal reductions of p within a trace [s]_D is defined as follows. Let D be the dependency relation induced by the interface I. We adopt the usual conventions for abbreviating notation when dealing with transition relations. The following three lemmata are instrumental to the proof of the correspondence theorem and characterise the relation between the Mazurkiewicz traces of related processes.
Lemma 3.10. Let I be an interface and D the dependency relation induced by I. If p ⊑ᴵunc q then for all s ∈ Act* we have that (1) p ⇓ [s]_D implies q ⇓ [s]_D, and (2) s ∈ str(q) implies that p

Since q ⇑ [s]_D, there exists tt′ ∈ [s]_D such that q =t⇒ q_0 and q_0 ⇑. By induction on the length of t it follows that there exists o such that o =t⇒ o′ and o′ = Π_{i∈0..n} o_i, where o_i = τ.1 or o_i = (τ.1 + a.(...)) for all i. Then, there exists a maximal (divergent) observer-unsuccessful computation q ∥ o −τ→ q_0 ∥ o −τ→ q_1 ∥ o −τ→ · · · −τ→ q_j ∥ o −τ→ · · ·. Consequently, q must o fails, which contradicts the assumption p ⊑ᴵunc q.
(2) By contradiction. Suppose that there exists s = a_1 ... a_m such that p ⇓ [s]_D, s ∈ str(q) and p Since s ∈ str(q), q =s⇒ q′. By (1) above, p ⇓ [s]_D implies q ⇓ [s]_D. Consequently, q′ ⇓. Then, there is a computation q =s⇒ q′ =⇒ q′′ −̸→. Moreover, we can build an unsuccessful computation o =s⇒ o′ = Π_{i∈0..n} o_i −̸→, where o_j = 0 for j ∈ 0, ..., n and b_{j_{k_j}} = a_n. By zipping the computations q =s⇒ q′′ and o =s⇒ o′, we obtain a maximal computation of q ∥ o that is observer-unsuccessful. Consequently, q must o fails. Since p ⊑ᴵunc q, q ⇓ [s]_D. By straightforward induction on the length of the reduction, we can show that q =t⇒ q′ implies n(q′) ⊆ n(q) for all t (i.e., the names n(q′) of q′ are included in the names of q). Moreover, init(q) ⊆ n(q) trivially holds.
Consequently, {init(q′) | q =[s]_D⇒ q′} ⊆ n(q). Since n(q) is finite, we can conclude that the set {init(q′) | q =[s]_D⇒ q′} is finite. Therefore, we can find an action a such that q =sa⇒ is impossible. Then (q after [s]_D) MUST {a} fails while (p after [s]_D) MUST {a} holds, which contradicts the hypothesis p ⊑ᴵunc q.
Theorem 3.13. ⊑ᴵunc = ≪ᴵunc.

Proof. (⊆) We actually prove that whenever p ≪ᴵunc q fails, p ⊑ᴵunc q fails as well. Let D be the dependency relation induced by I. Assume that there exist s = a_1 ... a_n, I_j ∈ I and L ⊆ I_j such that (1) p ⇓ [s]_D and q ⇑ [s]_D, or (2) s ∈ str(q) and ∀t ∈ [s]_D. t ∉ str(p), or (3) (p after [s]_D) MUST L holds and (q after [s]_D) MUST L fails. For each case we show that there exists an observer o such that p must o holds and q must o fails. For the first two cases, we take the observers as defined in the proof of Lemma 3.10. For the third one, we take o = Π_{i∈0..n} o_i with o_i defined as follows.

(⊇) We prove that p ≪ᴵunc q implies p ⊑ᴵunc q. The proof follows by showing that p ≪ᴵunc q and the failure of q must o imply the failure of p must o. Assume there exists an unsuccessful computation

Consider the following cases:
(1) The computation is finite, i.e., there exists n such that q_n ∥ o_n −̸τ→. By unzipping the computation we have q_0 =s⇒ q_n and o_0 =s⇒ o_n, which is unsuccessful, i.e., o_i cannot report success for any 0 ≤ i ≤ n. Moreover, q_n MUST init(o_n) fails. Hence (q after [s]_D) MUST init(o_n) fails.

(2) The computation is infinite. We consider two cases:
(a) There exists s ∈ str(q) and s ∈ str(o) such that q ⇑ s or o ⇑ s. We proceed by case analysis.
(i) q ⇑ [s]_D: Since p ≪ᴵunc q, p ⇑ [s]_D. Therefore, ∃t ∈ [s]_D such that p ⇑ t. By Corollary 3.7 (4), o =t⇒ unsuccessfully, and hence there is an unsuccessful computation of p ∥ o.
(ii) q ⇓ [s]_D (and o ⇑ s): By Lemma 3.12, ∃t ∈ [s]_D : t ∈ str(p). By Corollary 3.7 (2), o ⇑ t, and hence there is an unsuccessful computation of p ∥ o.
(b) ∀n. q_n ⇓ and o_n ⇓. For every n, take s ∈ Act* such that q =s⇒ q_n and q ⇓ s (this is possible because q ∥ o =⇒ q_n ∥ o_n is unsuccessful and ∀i ≤ n. q_i ⇓). By Lemma 3.10, q =s⇒ q_n and q ⇓ s implies either (i =⇒. Since q_n ⇓, there exists q_m such that q_n =⇒ q_m −→ and q_m −a→ q_{m+1}. Consequently, (q after [s]_D) MUST L for all L such that a ∈ L.
Since p ≪ᴵunc q, then for all L such that a ∈ L, we have (p after

Example 3.14. We take advantage of the alternative characterisation of the uncoordinated preorder to show that the three processes for the broker in Example 3.1 are equivalent when considering I = {{req}, {reqF}, {reqH}}. Actually, we will only consider B_0 ≈ᴵunc B_1, as the proofs for B_0 ≈ᴵunc B_2 and B_1 ≈ᴵunc B_2 are analogous. Firstly, we observe that B_0 ⇓ s and B_1 ⇓ s for any s, because B_0 and B_1 do not have infinite computations. The relation between must-sets is described in the two tables below. The first table shows the sets (B_0 after [s]_D) and L^I_{B_0,[s]_D}. Note that [s]_D in the first column will be represented by any string s′ ∈ [s]_D. Moreover, we write "−" in the three last columns whenever L^I_{B_0,[s]_D} does not exist. The second table does the same for B_1. In the tables, we let B′_0 stand for τ.reqF.0 + τ.reqH.0 + τ.reqH.reqF.0 and B′_1 stand for τ.reqF.0 + τ.reqH.0 + τ.reqF.reqH.0.
By inspecting the tables, we can check that, for any possible trace [s]_D and I ∈ I, the conditions of the alternative characterisation hold.

We now present two additional examples that help us understand the discriminating capability of the uncoordinated preorder and its differences with the classical must preorder.
The first of these examples shows that a process that does not communicate its internal choices over all parts of its interface is useless in a distributed context.
Example 3.15. Consider the process p = τ.a.0 + τ.b.0, intended to be used by two processes with the following interface: I = {{a}, {b}}. We show that this process is less useful than 0 in an uncoordinated context, i.e., τ.a.0 + τ.b.0 ⊑ᴵunc 0. It is immediate to see that p and 0 strongly converge for any s ∈ Act*; hence the minimal sets [s]_D presented in the tables below are sufficient for proving our claim.
Note that, differently from the classical must preorder, the uncoordinated preorder does not consider the must-set {a, b} to distinguish p from 0, because this set involves channels in different parts of the interface. The key point here is that each internal reduction of p is observed by just one part of the interface: the choice of branch a is observed by only one process and the choice of b is observed by the other one. Since uncoordinated observers do not intercommunicate, they can report success simultaneously only if they can do it independently of the interactions with the tested process, but such observers are exactly the ones that 0 can pass.
Like in the classical must preorder, we have that 0 ⊑ᴵunc τ.a.0 + τ.b.0 does not hold. This is witnessed by the observer o = (ā.0 + τ.1) ∥ 1, which is passed by 0 but not by τ.a.0 + τ.b.0.
The second example shows that the uncoordinated preorder falls somewhat short of the target, set in the introduction, of allowing processes to swap actions that are targeted to different partners. By inspecting the traces and must-sets in the two tables below, where we use p and q to denote a.b.0 + a.0 + b.0 and b.a.0 + a.0 + b.0, it is easy to see that the two processes are distinguished. Note that o = ā.1 ∥ 1 actually interacts with the process under test by using just one part of the interface and relies on the fact that the remaining part of the interface stays idle. Thanks to this ability, uncoordinated observers still have a limited power to track some dependencies among actions on different parts of the interface.
The preorder presented in the next section further limits the discriminating power of observers and allows us to equate the processes a.b.0 and b.a.0.

A testing preorder with individualistic observers
In this section we explore a notion of equivalence equating processes that can freely permute actions over different parts of their interfaces. As for the uncoordinated observers, the targeted scenario is that of a service with a partitioned interface interacting with two or more independent processes by using separate sets of ports. In addition, each component of an observer cannot exploit any knowledge about the design choices made by the other components, i.e., each of them has a local view of the behaviour of the process that ignores all actions controlled by the remaining components. Local views are characterised in terms of a projection operator defined as follows.
Definition 4.1 (Projection). Let V ⊆ N be a set of observable ports. We write p↾V for the process obtained by hiding all actions of p over channels that are not in V.

Let I = {I_i}_{i∈0..n} be an interface. We say that p ⊑ᴵind q iff for all uncoordinated observers o = Π_{i∈0..n} o_i and for all i ∈ 0..n, p↾I_i must o_i implies q↾I_i must o_i. Basically, two traces are equivalent up to I when they coincide after the removal of hidden actions. As for the distributed preorder, we extend the notions of reduction, convergence and residuals to equivalence classes of filtered traces.
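On traces, the projection operator simply erases the actions whose channel lies outside the observable part V; two traces are then filtered-equivalent for a part I of the interface when their projections on I coincide. A sketch under our own encoding (tuples of action names, a trailing "!" for co-names):

```python
def channel(a):
    """Channel name of an action (a trailing '!' marks a co-name)."""
    return a[:-1] if a.endswith("!") else a

def project(s, V):
    """Local view of a trace: keep only actions over channels in V."""
    return tuple(a for a in s if channel(a) in V)

def filtered_equiv(s, t, I):
    """s and t coincide after removing the actions hidden to the part I."""
    return project(s, I) == project(t, I)
```

For instance, the client of Section 1, observing only {req, inv}, cannot tell req order inv and req inv order apart.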
The following auxiliary result establishes properties relating reductions, hiding and filtered traces, which will be useful in the proof of the correspondence theorem.

Proof. The proof follows by induction on the length of s.
The alternative characterisation for the individualistic preorder is given in terms of filtered traces.

Definition 4.6. Let p ≪ᴵind q if for every I ∈ I, for every s ∈ I*, and for all finite L ⊆ Act, the conditions below hold.

We would like to draw attention to condition 2 above; it only considers must-sets that always include all the actions in Act \ I, to avoid the possibility of distinguishing reachable states because of actions that are not in I. Note that this condition could be formulated as follows: for all finite L ⊆ Act, (p after [[s]]_I) MUST L implies that there exists L′ such that (q after [[s]]_I) MUST L′ and L ∩ I = L′ ∩ I, which makes it evident that only the actions from the observable part of the interface are relevant. We adopted the first formulation because it resembles the original characterisation of the must preorder.
As for the classical must preorder, we implement ⊑I_ind in terms of ≪I_ind, which is defined by the predicate notleqind(P,Q,I,T,L), in which the additional parameter I stands for the interface. Its definition is below. In this case the variable CT in line 3 stands for the (relevant part of the) equivalence class of the trace T. Since the equivalence classes for the filtered case are all infinite and, hence, cannot be computed completely, the predicate filtered(T,PI,Ts,CT) simply generates the traces in the equivalence class of T that are also traces of at least one of the two processes under comparison (the residuals are empty for both processes in the remaining cases and, hence, irrelevant). The predicate filtered(T,PI,Ts,CT) takes a part of the interface PI and a set of traces Ts, and returns in CT the set of traces in Ts whose projection over PI coincides with the projection of T. Note that Ts in line 3 corresponds to the traces of either P or Q (line 2). The remaining difference concerns the generation of must-sets (line 4): each candidate must-set L contains a subset L1 of the part of the interface under analysis PI together with the set C of all actions in the interface I that are not in PI (C is computed by the predicate complement(I,PI,C), whose definition is omitted).
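The core of the predicate is a simple comprehension; the Python rendering below is ours (the names proj and filtered are not those of the prototype, and the Prolog version additionally threads the residuals): keep the traces in Ts whose projection over PI coincides with that of T.

```python
def proj(trace, visible):
    """Projection of a trace over the visible actions (s|I)."""
    return [a for a in trace if a in visible]

def filtered(t, pi, ts):
    """Relevant part of the equivalence class of t up-to pi: the traces
    in `ts` whose projection over `pi` coincides with that of `t`."""
    return [s for s in ts if proj(s, pi) == proj(t, pi)]

# Traces of the two processes under comparison (a toy example):
traces = [["a", "b"], ["b", "a"], ["a"], ["b", "b", "a"]]

# With PI = {b}, the a-actions are filtered out before comparing.
print(filtered(["a", "b"], {"b"}, traces))
# [['a', 'b'], ['b', 'a']]
```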
We can use this predicate to check, e.g., that a.b.0 ≈{{a},{b}}_ind b.a.0. We now illustrate the use of the introduced preorders and of our prototype implementation in a larger scenario.

6.2. A case study. Distributed, non-relational databases such as Dynamo [DHJ+07] and Cassandra [LM10] provide highly available storage by replicating data and relaxing consistency guarantees. Such databases store key-value pairs that can be accessed by using two operations: get, to retrieve the value associated with a key, and put, to store the value of a particular key. A client issuing an operation interacts with the closest server, which plays the role of a coordinator and mediates between the client and the replicas to complete the client request. Each client request is associated with a consistency level, which specifies the degree of consistency required over the data. For a put operation, the consistency level states the number of replicas that must be written before an acknowledgement is sent to the client. Similarly, the consistency level of a get operation specifies the number of replicas that must reply to the read request before the data is returned to the client. Cassandra provides several consistency levels; for instance, an operation may request to be performed over just ONE or TWO replicas, over the majority of the replicas (QUORUM), or over ALL the replicas. Depending on the consistency level required by the client, the coordinator chooses the replicas to contact.
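The choice of replicas is driven by a simple count. The sketch below is our own simplification (it is not part of the prototype, and it ignores Cassandra's datacenter-aware levels such as LOCAL_QUORUM); it maps the consistency levels above to the number of replicas that must answer before the coordinator replies to the client.

```python
def required_replicas(level, total):
    """Number of replicas that must answer, for the consistency
    levels mentioned above, given `total` replicas in the system."""
    if level == "ONE":
        return 1
    if level == "TWO":
        return 2
    if level == "QUORUM":
        return total // 2 + 1   # a majority of the replicas
    if level == "ALL":
        return total
    raise ValueError(f"unknown consistency level: {level}")

print(required_replicas("QUORUM", 3))  # 2
```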
We now describe the behaviour of a node acting as coordinator in a configuration that involves two additional replicas. Then we introduce alternative policies the coordinator might adopt when reacting to user requests and discuss their relationships.
For simplicity, we just illustrate the protocol for processing the operation get and abstract away from the values exchanged during the communication (the put operation is analogous). The actual protocol for handling a get is described below as a CCS process. As stated in Coord, after receiving the request get the coordinator internally decides to either:
• reply to the client with the error message err, e.g., when the available nodes are not enough to guarantee the requested consistency level; or
• return the requested information by using just local information (message ret); or
• retrieve information by contacting just one of the additional replicas, following the protocol defined by Query_i; or
• retrieve information from both replicas, following the protocol defined by Query_i,j.
The protocol followed by the coordinator when contacting replica i is modelled by process Query_i: the coordinator sends a read request over the channel read_i and awaits an answer on channel ret_i; however, it may internally decide not to wait for the answer from the replica and send an error to the client (e.g., if a timeout expires). When the coordinator receives the response from the replica, it may return the requested information to the client or signal an error (e.g., when the consistency level cannot be satisfied by the current state of the replicas).
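A plausible rendering of these processes, reconstructed from the description above (the paper's exact definitions may differ), is:

```
Coord    ≜ get.( τ.err.0 + τ.ret.0 + τ.Query_1 + τ.Query_2 + τ.Query_1,2 )
Query_i  ≜ read_i.( τ.err.0 + ret_i.( τ.ret.0 + τ.err.0 ) )
```

In full CCS the actions towards the client (err, ret) and towards the replicas (read_i) would be outputs; the output decoration is omitted here for readability.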
The protocol followed by the coordinator when contacting both replicas is modelled by process Query_i,j: while awaiting their responses, the coordinator may internally decide to reply to the client before or after receiving any of the two answers.
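A shape compatible with the prose, and with the definition of Coord_3 given later, is the following (the definition of Wait_i,j, in particular, is our guess):

```
Query_i,j ≜ read_i.read_j.( τ.err.0 + Wait_i,j + Wait_j,i )
Wait_i,j  ≜ ret_i.( τ.err.0 + ret_j.( τ.ret.0 + τ.err.0 ) )
```

The three summands of Query_i,j reflect the prose: an internal decision to signal an error before any answer arrives, or waiting for the answer of replica i first (Wait_i,j), or for that of replica j first (Wait_j,i).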
Any equation name def= proc above can be defined in Prolog by using the predicate proc(name, proc), as shown below. A possible implementation of Coord may only provide the part of the protocol that always contacts the two additional replicas, regardless of the information and consistency level requested by the client. Such an implementation, hereafter called Coord_1, can be described as follows, where Query_1,2 is as before. This defining equation is implemented in Prolog as follows:

proc(coord1, get * Query12) :- proc(query(1,2), Query12).
An alternative implementation of Coord may communicate an error to the client but still accept responses from the replicas after this interaction. This feature allows the coordinator to update its local state with information that can be used when answering future requests. Such an implementation, Coord_2, can be described as follows. Note that Coord_2 accepts responses from the replicas even after it has replied to the client. As for Coord_1, the definition of Coord_2 in Prolog is straightforward (and omitted here). When considering the classical must testing preorder, it holds that Coord_1 ⋢must Coord_2. However, as far as the behaviour of the client and the replicas is concerned, the additional behaviour of Coord_2 is harmless. In fact, we can prove that Coord_1 ⊑I_unc Coord_2 when I = {{get, ret, err}, {get, read_1, ret_1}, {get, read_2, ret_2}}. For convenience, when querying the program we add the following definition rule for the interface. The solutions of the corresponding query all show that Coord_2 is able to accept an answer from a replica even after signalling an error to the client, while Coord_1 is not. Consequently, a replica may distinguish the behaviours of the two implementations: when interacting with Coord_1, a replica i may discover that the coordinator has sent the message err to the client because the interaction over ret_i cannot take place.
We now consider a variant of Coord_2 that contacts the replicas in a different order, defined as follows:

Coord_3 def= get.read_2.read_1.( τ.err.ret_1.ret_2.0 + Wait_1,2 + Wait_2,1 )

where Wait_i,j is defined as before. The only difference between Coord_2 and Coord_3 is the order in which read_1 and read_2 are executed.
has among its solutions the following one, showing that Coord_2 ⋢I_unc Coord_3. The test associated with the above witness is built by preventing the interaction with replica 2 (i.e., the communication over read_2 is not enabled). However, if the interaction with the replicas is guaranteed, the two implementations should be deemed indistinguishable; indeed, Coord_2 and Coord_3 are equated by the individualistic preorder. We remark, however, that Coord_1 is still not equivalent to either Coord_2 or Coord_3. For instance, the pair S = [get, ~read(1)], L = [~ret(1)] witnesses the fact that Coord_3 ⋢I_ind Coord_1: while Coord_3 ensures that it will always receive the reply from replica 1 after sending the request read_1, this is not the case for Coord_1, which may refuse the communication with replica 1, e.g., after an internal timeout.

Conclusions and related work
In this paper we have explored different relaxations of the must testing preorder tailored to define new behavioural relations that, in the framework of Service Oriented Computing, are better suited to study compliance between contracts exposed by clients and servers interacting via synchronous binary communication primitives.
In particular, we have considered two different scenarios in which the contexts of a service are represented by processes with distributed control. The first variant, which we called the uncoordinated preorder, corresponds to multiparty contexts without runtime communication between peers, but where one peer can still block another by not performing an expected action. Indeed, the observations at the basis of our experiments are designed under the assumption that the users of a service interact only via dedicated ports, but might be influenced by the fact that other partners do not perform the expected actions. The second preorder we introduced is called the individualistic preorder. It accounts for partners that are completely independent from the behaviour of the others: from the viewpoint of a client, the actions of the other clients are considered unobservable.
We have shown that the discriminating power of the induced equivalences decreases as observers become weaker: the individualistic preorder for a given interface is coarser than the uncoordinated preorder for the same interface, which in turn is coarser than the classical testing preorder. As future work we plan to consider different "real life" scenarios, in order to assess the impact of the different assumptions underlying the new preorders and the identifications/orderings they induce. We also plan to perform further studies to get a fuller account, possibly via axiomatisations, of their discriminating power. In the near future, we will also consider the impact of our testing framework on calculi based on asynchronous interactions.
Several variants of the must testing preorder, of contract compliance and of sub-contract relations have been developed in the literature to deal with different aspects of service composition, such as buffered asynchronous communication [BZ08, Pad10, MYH09], fairness [Pad11] and peer-interaction [BH13]. We remark, however, that these approaches deal with aspects that are somehow orthogonal to the discriminating power of the distributed tests analysed in this work. Our preorders have some similarities with those relying on buffered communications, in that both aim at guaranteeing that actions performed by independent peers can be reordered; we rely, however, on synchronous communication and, hence, message reordering is not obtained by swapping buffered messages but by relying on more generous observers. As mentioned above, we have left the study of distributed tests under asynchronous communication as future work. Still, we would like to remark that even the uncoordinated and the individualistic preorders are different from those in [BZ08, Pad10, MYH09], which permit explicit action reordering. The paradigmatic example is the equivalence a.c + b.d ≈{{a,b},{c,d}}_ind a.d + b.c, which does not hold for any of the preorders with buffered communication: even in presence of buffered communication, the causality, e.g., between a and c is always observed.

We write p −λ→ q with λ ∈ Act ∪ {τ, ✓}, where ✓ signals the successful termination of an execution.

Definition 2.1 (Transition relation). The transition relation on processes, noted −λ→, is the least relation satisfying the following rules.
Intuitively, str(p, [s]_D) collects the maximal prefixes, according to the order <_D, of the strings in [s]_D that p can perform, where max_<D denotes the maximal elements of a set with respect to <_D. If t ∈ str(p, [s]_D), then t cannot be extended with any symbol a such that t.a is (equivalent to) a prefix of a string in [s]_D and p =t.a⇒. The following last ingredient allows us to ensure that the different maximal prefixes in str(p, [s]_D) actually execute, jointly, all the actions in [s]_D (although some prefixes may be partial). We recall that ↾ stands for the operation that projects a string over an alphabet. Let I be the interface that induces the dependency relation D; we write str(p, [s]_D)† if str(p, [s]_D) jointly-completes [s]_D, which is defined by

str(p, [s]_D)† ⟺ ∀I ∈ I. ∃t ∈ str(p, [s]_D). s↾I = t↾I

Example 3.8. Consider the processes p = τ.a.0 + τ.b.0 and q = a.b.0 and the interface I = {{a}, {b}}. Let D be the dependency relation induced by I. Then we have str(p) = {ε, a, b} and str(q) = {ε, a, b, a.b}. The restriction of <_D to the elements of str(q) is {(ε, a), (ε, b), (a, a.b), (b, a.b)}; note that b <_D a.b because b can be extended with a and b.a ∈ [a.b]_D. Then, str(p, [a.b]_D) = max_<D {ε, a, b} = {a, b} and str(q, [a.b]_D) = max_<D {ε, a, b, a.b} = {a.b}. Both sets jointly-complete [a.b]_D, i.e., str(p, [a.b]_D)† and str(q, [a.b]_D)† hold. On the contrary, if we consider r = a.0 we have str(r, [a.b]_D) = {a}, which does not jointly-complete [a.b]_D, because none of the strings in str(r, [a.b]_D) matches b.
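The example can be mechanised by a brute-force sketch (ours, sensible only for tiny alphabets): the Mazurkiewicz class of a string is obtained by closing under swaps of adjacent independent letters, and the jointly-completes condition compares projections part by part.

```python
def maz_class(s, independent):
    """Mazurkiewicz equivalence class of s: close {s} under swaps of
    adjacent independent letters (strings over one-letter actions)."""
    cls, frontier = {s}, [s]
    while frontier:
        t = frontier.pop()
        for i in range(len(t) - 1):
            if (t[i], t[i + 1]) in independent:
                u = t[:i] + t[i + 1] + t[i] + t[i + 2:]
                if u not in cls:
                    cls.add(u)
                    frontier.append(u)
    return cls

def jointly_completes(strs, s, interface):
    """str(p, [s]_D)†: every part I of the interface is matched, i.e.
    some maximal prefix in `strs` has the same projection over I as s."""
    proj = lambda t, I: "".join(a for a in t if a in I)
    return all(any(proj(s, I) == proj(t, I) for t in strs)
               for I in interface)

indep = {("a", "b"), ("b", "a")}       # induced by I = {{a}, {b}}
print(sorted(maz_class("ab", indep)))  # ['ab', 'ba']

# str(p,[a.b]_D) = {a, b} (precomputed by hand for p = tau.a.0 + tau.b.0)
print(jointly_completes({"a", "b"}, "ab", [{"a"}, {"b"}]))  # True
# str(r,[a.b]_D) = {a} for r = a.0: no prefix matches b
print(jointly_completes({"a"}, "ab", [{"a"}, {"b"}]))       # False
```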
As usual, we write p =[s]_D⇒ instead of "there exists p' such that p =[s]_D⇒ p'"; analogously, p ≠[s]_D⇒ stands for "there does not exist p' such that p =[s]_D⇒ p'". In the definition below, the condition L ⊆ I with I ∈ I captures the idea that each observation is relative to a specific part of the interface.

Definition 3.9. Let I be an interface and D the dependency relation induced by I. Then p ≪I_unc q if for every s ∈ Act*, for any part I ∈ I, and for all finite L ⊆ I, p and q satisfy the convergence and must-set conditions over the class [s]_D.

Assume I = {I_i}_{i∈0..n} and let D be the induced dependency relation. (1) By contradiction. Suppose there exists s = a_1 ... a_m such that p ⇓ [s]_D and q ⇑ [s]_D. Then take the observer o = Π_{i∈0..n} o_i with each o_i defined as follows:

o_i = τ.1 + b^i_1.(τ.1 + ... (τ.1 + b^i_{k_i}.τ.1) ...)   where s↾I_i = b^i_1 ... b^i_{k_i}

Suppose o =t'⇒ o' with o' ≠[t'a]_D⇒. Without loss of generality we assume a = a_n (otherwise, the definition of o can be changed accordingly). By construction, o =t'⇒ o' implies o' = Π_{i∈0..n} o'_i where, for any i, either (a) o'_i = τ.1 + b.(...) or (b) o'_i = 1. Case (a) is not possible, because we assume that o' cannot move. For case (b), we proceed by zipping the computations p =t'⇒ p' −↛ and o =t'⇒ o' −↛, which yields an observer-successful computation. Consequently, every maximal computation of p ∥ o is observer-successful and p must o, which is in contradiction with the assumption p ⊑I_unc q.

Lemma 3.11. If (p after [s]_D) MUST L for some finite L ⊆ Act, then p =[s]_D⇒. Proof. Otherwise (p after [s]_D) = ∅ and, by definition, ∅ does not satisfy MUST L for any finite L ⊆ Act.

Lemma 3.12. If p ⊑I_unc q, s ∈ str(q) and p ⇓ [s]_D, then p =[s]_D⇒. Proof. Assume p ≠[s]_D⇒. Then (p after [s]_D) = ∅ and, hence, (p after [s]_D) does not satisfy MUST L for any finite L ⊆ Act. Since p ⇓ [s]_D and p ⊑I_unc q, we distinguish two cases. (a) Case p ⇑ [s]_D, i.e., there exists t ∈ [s]_D with p ⇑ t. By Corollary 3.7 (4), o =s⇒ implies o =t⇒, which is also unsuccessful, and hence there is an unsuccessful computation of p ∥ o. (b) Case p ⇓ [s]_D. Note that s ∈ str(q). By Lemma 3.10 (2), there exists t ∈ [s]_D with t ∈ str(p); hence (p after [s]_D) ≠ ∅. Since p ⊑I_unc q, (q after [s]_D) not MUST init(o_n) implies (p after [s]_D) not MUST init(o_n). Therefore, there exists some p' ∈ (p after [s]_D) that does not satisfy MUST init(o_n). Hence, there exists t ∈ [s]_D with p =t⇒. By Corollary 3.7 (4), o =t⇒ is unsuccessful, and hence there is an unsuccessful computation of p ∥ o.
(p after [s]_D) MUST L. Moreover, o_n =⇒ o_m −a→ o_{m+1}. Therefore, there exists p_n ∈ (p after [s]_D) with p_n =a⇒ p_{m+1}. Hence, for all n there is an unsuccessful computation p ∥ o =⇒ p_n ∥ o_n =⇒ p_{m+1} ∥ o_{m+1}. In the following we write L^I_{p,[s]_D} for the smallest set such that (p after [s]_D) MUST L and L ⊆ I imply L^I_{p,[s]_D} ⊆ L.

Example 4.2. Let p = a.p_1 + b.p_2 be a process; note that p −a→ p_1 and p −b→ p_2. The projection of p over the channel a, i.e., p↾{a}, has the following two transitions: p↾{a} −a→ p_1↾{a} and p↾{a} −τ→ p_2↾{a}. The action of p over the visible channel a is reflected in the label of the transition of p↾{a}, while the action over the non-visible channel b becomes an internal action.

Definition 4.3 (Individualistic (must) preorder ⊑I_ind). Let I = {I_i}_{i∈0..n} be an interface. We say that p ⊑I_ind q iff for every uncoordinated observer o = Π_{i∈0..n} o_i and every i ∈ 0..n, p↾I_i must o_i implies q↾I_i must o_i.
Note that a.b.0 and b.a.0 can no longer be distinguished by the observer o = a.1 ∥ 1 used in the previous section to prove a.b.0 ⋢{{a},{b}}_unc b.a.0 (Example 3.16): indeed, a.b.0↾{a} must a.1, b.a.0↾{a} must a.1, a.b.0↾{b} must 1 and b.a.0↾{b} must 1. Later (Example 4.11) we will see that a.b.0 ≈{{a},{b}}_ind b.a.0.

4.1. Semantic characterisation. In this section we address the characterisation of the individualistic preorder in terms of traces. We start by introducing an equivalence relation over traces that ignores hidden actions.

Definition 4.4 (Filtered traces). Let I ⊆ Act. Two strings s, t ∈ Act* are equivalent up-to I, written s •≡_I t, if and only if s↾I = t↾I. We write [[s]]_I for the equivalence class of s.
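Equivalence up-to I is just equality of projections; an illustrative check (ours, with traces rendered as strings of one-letter actions):

```python
def proj(s, I):
    """s|I : keep only the actions of s that belong to I."""
    return "".join(a for a in s if a in I)

def equiv_upto(s, t, I):
    """s .=_I t iff the two traces coincide once the hidden actions
    (those outside I) are removed."""
    return proj(s, I) == proj(t, I)

# With I = {a} the b-actions are hidden, so ab and ba are identified:
print(equiv_upto("ab", "ba", {"a"}))       # True
# With I = {a, b} nothing is hidden and the order is observable:
print(equiv_upto("ab", "ba", {"a", "b"}))  # False
```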

Consequently, every maximal computation of p ∥ o is finite, i.e., p ∥ o =⇒ p' ∥ o' −↛. By induction on the length of t, it follows that o' is successful; consequently, every maximal computation of p ∥ o is observer-successful and p must o.
For each maximal computation of p ∥ o, we proceed by unzipping the computation to conclude that o =t⇒ and p =t⇒ for some t. Note that o ⇓ t' for all t' ∈ Act*, and that o =t⇒ implies that there exists t' such that t.t' ∈ [s]_D. Since p ⇓ [s]_D, we have p ⇓ t. Moreover, o =t⇒ o' −↛ implies o' = Π_{i∈0..n} 1.
Take a maximal computation p ∥ o =⇒ p' ∥ o'. By unzipping it, p =t⇒ p' and o =t⇒ o' for some t, which implies that there exists t' such that t.t' ∈ [s]_D. From p ⇓ [s]_D we have p ⇓ t and, consequently, p' −↛. Also o ⇓ t holds because o is finite; moreover, o' −↛. Hence, p ∥ o =⇒ p' ∥ o' is a finite maximal computation. By assumption, [s]_D ∩ str(p) = ∅ holds (otherwise, str(p, [s]_D)† would hold). Therefore t ∉ [s]_D, and t is a proper prefix of a string in [s]_D, i.e., there exist a and t' such that t.t'.a ∈ [s]_D. By Corollary 3.7 (4), o =r⇒ succeeds only when r ∈ [s]_D; hence the computation reaching o' is unsuccessful, contradicting (ii) p ⇓ [s]_D and p =[s]_D⇒.