Modelling MAC-Layer Communications in Wireless Systems

We present a timed process calculus for modelling wireless networks in which individual stations broadcast and receive messages; moreover the broadcasts are subject to collisions. Based on a reduction semantics for the calculus we define a contextual equivalence to compare the external behaviour of such wireless networks. Further, we construct an extensional LTS (labelled transition system) which models the activities of stations that can be directly observed by the external environment. Standard bisimulations in this LTS provide a sound proof method for proving systems contextually equivalent. We illustrate the usefulness of the proof methodology by a series of examples. Finally we show that this proof method is also complete, for a large class of systems.


Introduction
Wireless networks are becoming increasingly pervasive, with applications across many domains [42,1]. They are also becoming increasingly complex, with their behaviour depending on ever more sophisticated protocols. There are different levels of abstraction at which these can be defined and implemented, from the very basic level in which the communication primitives consist of sending and receiving electromagnetic signals, to the higher level where the basic primitives allow the initiation of connections between nodes in a wireless system and the exchange of data between them [52].
Assuring the correctness of the behaviour of a wireless network has always been difficult. Several approaches have been proposed to address this issue for networks described at a high level [38,33,17,16,49,27,7,10]; these typically allow the formal description of protocols at the network layer of the TCP/IP reference model [52]. However there are few frameworks in the literature which consider networks described at the MAC-Sublayer of the TCP/IP reference model [28,34,8,54]. This is the topic of the current paper. We propose a process calculus for describing and verifying wireless networks at the MAC-Sublayer of the TCP/IP reference model. This calculus, called the Calculus of Collision-prone Communicating Processes (CCCP), has been largely inspired by TCWS [34]; in particular CCCP inherits its communication features but simplifies considerably the syntax, the reduction semantics, the notion of observation, and as we will see the behavioural theory. In CCCP a wireless system is considered to be a collection of wireless stations which transmit and receive messages. The transmission of messages is broadcast, and it is time-consuming; the transmission of a message v can require several time slots (or instants). In addition, wireless stations in our calculus are sensitive to collisions; if two different stations are transmitting a value over a channel c at the same time slot then a collision occurs; as a result, the content of the messages originally being transmitted is lost.
More specifically, in CCCP a state of a wireless network (or simply network, or system) will be described by a configuration of the form Γ ⊲ W where W describes the code running at individual wireless stations and Γ represents the communication state of channels. At any given point of time there may be exposed communication channels, that is channels containing messages (or values) in transmission; this information will be recorded in Γ.
Such systems evolve by the broadcast of messages between stations, the passage of time, or some other internal activity, such as the occurrence of collisions and their consequences. One of the topics of the paper is to capture formally these complex evolutions, by defining a reduction semantics, whose judgements take the form Γ1 ⊲ W1 ⤳ Γ2 ⊲ W2. We show that the reduction semantics we propose satisfies some desirable time properties such as time determinism, maximal progress and patience [39,22,56].
However the main aim of the paper is to develop a behavioural theory of wireless networks with time-consuming communications. To this end we need a formal notion of when two such systems are indistinguishable from the point of view of users. Having a reduction semantics it is now straightforward to adapt a standard notion of contextual equivalence: Γ 1 ⊲ W 1 ≃ Γ 2 ⊲ W 2 . Intuitively this means that either system, Γ 1 ⊲ W 1 or Γ 2 ⊲ W 2 , can be replaced by the other in a larger system without changing the observable behaviour of the overall system. Formally, we use the approach of [23,45], often called reduction barbed congruence, rather than that of [35] 1 . The only parameter in the definition of our contextual equivalence is the choice of primitive observation or barb; our choice is natural for wireless systems: the ability to transmit on an idle (or unexposed) channel, that is a channel with no active transmissions.
As explained in papers such as [43,21], contextual equivalences are determined by so-called extensional actions, that is the set of minimal observable interactions which a system can have with its external environment. For CCCP determining these actions is non-trivial. Although values can be transmitted and received on channels, the presence of collisions means that these are not necessarily observable. In fact the important point is not the transmission of a value, but its successful delivery. Also, although the basic notion of observation on systems does not involve the recording of the passage of time, this has to be taken into account extensionally in order to gain a proper extensional account of systems.
The extensional semantics determines an LTS (labelled transition system) over configurations, which in turn gives rise to the standard notion of (weak) bisimulation equivalence between configurations. This gives a powerful co-inductive proof technique: to show that two systems are behaviourally equivalent it is sufficient to exhibit a witness bisimulation which contains them.
One result of this paper is that weak bisimulation in the extensional LTS is sound with respect to the touchstone contextual equivalence: if two systems are related by some bisimulation in the extensional LTS then they are contextually equivalent. In order to show the effectiveness of our bisimulation proof method we prove a number of non-obvious system equalities. However, the main contribution of the current paper is that completeness holds for a large class of networks, called wellformed. If two such networks are contextually equivalent then there is some bisimulation, based on our novel extensional actions, which contains them.
To the best of our knowledge, this is the first result of full abstraction for weak barbed congruence, for a calculus of wireless systems where communication is subject to collisions. Also, the only other result in the field of which we are aware is the one illustrated in [34]. Here a sound but not complete bisimulation based proof method is developed for (a different form of) reduction barbed congruence. In this paper, both soundness and completeness are achieved by simplifying the calculus and isolating novel extensional actions.
We end this introduction with an outline of the paper. In Section 2 we present the calculus CCCP. More precisely, Section 2.1 contains the syntax of our language; Section 2.2 introduces the intensional semantics; here the adjective intensional is used to stress the fact that the actions of this semantics correspond to those activities which can be performed by a network. Section 2.3 provides the reduction semantics, which models the intra-actions that can be performed by a network when isolated from the external environment; Section 2.4 defines our touchstone contextually-defined behavioural equivalence for comparing wireless networks.
In Section 3 we address the problem of defining the minimal observable activities of systems. These are defined as actions of an extensional semantics in Section 3.1, while in Section 3.2 we consider the bisimulation principle induced by such actions. Here the adjective extensional is used to stress the fact that the actions of such a semantics correspond to those activities which can be observed by the external environment of a network.
In Section 4 we present the main results of the paper. First we prove that our bisimulation proof technique is sound with respect to the contextual equivalence, Section 4.1. In Section 4.2 we prove that, for a large class of configurations, called well-formed, our proof technique is also complete.
The usefulness of our bisimulation proof technique is shown in Section 5, where we consider simple case studies which model common features of wireless networks at the MAC-Layer. Section 6 concludes the paper with a comparison with the related work.

The calculus
As already discussed, a wireless system will be represented in our calculus as a configuration of the form Γ ⊲ W, where W describes the code running at individual wireless stations and Γ is a channel environment containing the transmission information for channels. A possible evolution of a system is then given by a sequence of computation steps, where intuitively each step corresponds to either the passage of time, a broadcast from a station, or some unspecified internal computation; the code running at stations evolves as a computation proceeds, but so does the state of the underlying channel environment. In the following we will use the meta-variable C to range over configurations.

Formally, we assume a set of channels Ch, ranged over by c, d, · · · , and a set of values Val, which contains a set of data-variables, ranged over by x, y, · · · , and a special value err; this value will be used to denote faulty transmissions. The set of closed values, that is those not containing occurrences of variables, is ranged over by v, w, · · · . We also assume that every closed value v ∈ Val has an associated strictly positive integer δv, which denotes the number of time slots needed by a wireless station to transmit v. Finally, we assume a language of expressions Exp which can be built from values in Val; we also assume a function ⟦·⟧ for evaluating expressions with no occurrences of data-variables into closed values.

A channel environment is a mapping Γ : Ch → N × Val. In a configuration Γ ⊲ W where Γ(c) = (n, v) for some channel c, there is a wireless station which is currently transmitting the value v for the next n time slots. We will use some suggestive notation for channel environments: Γ ⊢t c : n in place of Γ(c) = (n, w) for some w, and Γ ⊢v c : w in place of Γ(c) = (n, w) for some n. If Γ ⊢t c : 0 we say that channel c is idle in Γ, and we denote it with Γ ⊢ c : idle. Otherwise we say that c is exposed in Γ, denoted by Γ ⊢ c : exp.
The channel environment Γ such that Γ ⊢ c : idle for every channel c is said to be stable. Often we will compare channel environments according to the number of time slots for which channels remain exposed; we say that Γ ≤ Γ′ if, for every channel c, Γ ⊢t c : n implies Γ′ ⊢t c : m for some m with n ≤ m.

The syntax for system terms W is given in Table 1, where P ranges over code for programming individual stations, which is also illustrated in Table 1. A system term W is a collection of individual threads running in parallel, with possibly some channels restricted. As we will see in Section 5, channel restriction can be used to model non-flat network topologies.
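The channel-environment predicates and the ordering ≤ introduced above can be rendered concretely in a short sketch. This is a toy encoding of our own, not part of the calculus: a channel environment Γ is modelled as a Python dict mapping channel names to (exposure time, value) pairs, with absent channels idle.

```python
# Toy encoding (ours, not the paper's): Γ as a dict from channels to
# (exposure time, value); channels missing from the dict are idle.

def lookup(gamma, c):
    return gamma.get(c, (0, None))

def is_idle(gamma, c):            # Γ ⊢ c : idle  iff exposure time is 0
    return lookup(gamma, c)[0] == 0

def is_exposed(gamma, c):         # Γ ⊢ c : exp
    return not is_idle(gamma, c)

def stable(gamma):                # every channel idle
    return all(t == 0 for (t, _) in gamma.values())

def leq(g1, g2):                  # Γ ≤ Γ′: pointwise on exposure times
    return all(lookup(g1, c)[0] <= lookup(g2, c)[0]
               for c in set(g1) | set(g2))

gamma = {"c": (2, "v0")}          # c exposed for 2 more slots; d idle
assert is_exposed(gamma, "c") and is_idle(gamma, "d")
assert leq({"c": (1, "v0")}, gamma) and not stable(gamma)
```

Note that under this encoding the stable environment is just the empty dict (or any dict whose exposure times are all 0).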
Each thread may be either an inactive piece of code P or an active code of the form c [x].P. This latter term represents a wireless station which is receiving a value from the channel c; when the value is eventually received the variable x will be replaced with the received value in the code P.
The syntax for station code is based on standard process calculus constructs. The main constructs are time-dependent reception from a channel, ⌊c?(x).P⌋Q, explicit time delay, σ.P, and broadcast along a channel, c!⟨e⟩.P; here the value being broadcast is the one obtained by evaluating e via the function ⟦·⟧, provided that e does not contain any occurrence of data-variables. Of the remaining standard constructs the most notable is matching, [b]P, Q, which branches to P or Q depending on the value of the Boolean expression b. Such Boolean expressions can be either equality tests of the form e1 = e2, or terms of the form exp(c), which will be used to check whether channel c is exposed, that is, whether it is being used for transmission.
In the construct fix X.P occurrences of the recursion variable X in P are bound; similarly, in the terms ⌊c?(x).P⌋Q and c[x].P the data-variable x is bound in P. This gives rise to the standard notions of free and bound variables, α-conversion and capture-avoiding substitution. In a configuration of the form Γ ⊲ W, we assume that W is closed, meaning that all its occurrences of both data-variables and process variables are bound. In general, we always assume that a system term W is closed, unless otherwise stated; when we need to consider system terms with free occurrences of process variables, we will explicitly say that they are open system terms. System terms, both open and closed, are identified up to α-conversion. We assume that all occurrences of recursion variables are guarded: they must occur within either a broadcast, an input residual, a timeout branch, a time delay prefix, or an execution branch of a matching construct. This ensures that recursive calls cannot be used to build up infinite loops within a time slot.

Example 2.1. Consider the configuration C1 = Γ ⊲ S1 | S2 | R1, where Γ is the stable channel environment. Further, we assume that δv0 = 2 and δv1 = 1. This configuration contains two sender stations, running the code S1 and S2 respectively, and a receiving station, running the code R1. In the first time slot, the station running the code S1 broadcasts the value v0 along channel c. The station running the code R1 starts receiving this value and will be busy receiving it for the next two time slots. In the first time slot the station running the code S2 is idle; it is only in the second time slot that this station broadcasts a value along channel c. At this point the receiving station will be exposed to two transmissions: the transmission of value v0, which is still in progress, and the transmission of value v1. As a result, a collision happens, and the value received by the receiver will in the end be the error value err.
The formal behaviour of the configuration C 1 will be explained in Example 2.17.
We use a number of notational conventions. ∏i∈I Wi means the parallel composition of all stations Wi, for i ∈ I; we identify ∏i∈I Wi with nil when I = ∅. We will omit trailing occurrences of nil, render νc :(n, v).W as νc.W when the values (n, v) are not relevant to the discussion, and use ν c̃.W as an abbreviation for a sequence c̃ of such restrictions. We write ⌊c?(x).P⌋ for ⌊c?(x).P⌋nil. Finally, we abbreviate the recursive process fix X.⌊c?(x).P⌋X to c?(x).P; as we will see, this is a persistent listener at channel c waiting for an incoming message.

Intensional semantics.
Our first goal is to formally define computation steps between configurations, Γ1 ⊲ W1 ⤳ Γ2 ⊲ W2. In order to do so, we first define the evolution of system terms with respect to a channel environment Γ via a set of SOS rules whose judgements take the form Γ ⊲ W1 −λ→ W2, where λ is an intensional action taking one of the following forms:
(1) c!v, denoting a station starting to broadcast value v along channel c;
(2) σ, denoting the passage of one time slot, or time instant;
(3) τ, denoting an internal action;
(4) c?v, denoting a station in the external environment starting to broadcast value v on channel c.
These actions λ will also have an effect on the channel environment, which we describe by means of a function updλ(·) : Env → Env, where Env is the set of channel environments.

Definition 2.2.
[Channel Environment update] Let Γ ∈ Env be an arbitrary channel environment and c ∈ Ch an arbitrary channel. Let tc and vc be the exposure time and the value transmitted along channel c in Γ, respectively; that is, Γ ⊢t c : tc and Γ ⊢v c : vc. For any intensional action λ, we let updλ(Γ) be the unique channel environment determined by the following definitions:

updσ(Γ)(c) = (max(tc − 1, 0), vc) for every channel c;
updc!v(Γ) = updc?v(Γ) = Γ[c → (δv, v)] if Γ ⊢ c : idle, and Γ[c → (max(tc, δv), err)] otherwise;
updτ(Γ) = Γ.

Let us describe the intuitive meaning of this definition. When time passes, the exposure time of each channel decreases by one time unit. The functions updc!v(Γ) and updc?v(Γ) model how collisions are handled in our calculus. When a station begins broadcasting a value v over an idle channel c, this channel becomes exposed for the amount of time required to transmit v, that is δv. If the channel is not idle a collision happens; as a consequence, the value that will be received by a receiving station, when all transmissions over channel c terminate, is the error value err, and the exposure time is adjusted accordingly. Finally, the definition of updτ(Γ) reflects the intuition that internal activities do not affect the exposure state of channels.
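The update functions of Definition 2.2 can be sketched concretely. This is a hedged toy encoding of our own (environments as dicts from channels to (exposure time, value) pairs, absent channels idle; `delta` stands for the transmission time δv), not the paper's formal definition.

```python
# Sketch of Definition 2.2 under a dict-based encoding of environments.

def upd_sigma(gamma):
    """Time passes: every exposure time decreases by one (down to 0)."""
    return {c: (max(t - 1, 0), v) for c, (t, v) in gamma.items()}

def upd_bcast(gamma, c, v, delta):
    """A broadcast of v on c, by a station (c!v) or the environment (c?v)."""
    t, _ = gamma.get(c, (0, None))
    out = dict(gamma)
    if t == 0:                           # c idle: expose it for δ_v slots
        out[c] = (delta(v), v)
    else:                                # collision: err, adjusted exposure
        out[c] = (max(t, delta(v)), "err")
    return out

def upd_tau(gamma):
    """Internal activity leaves the exposure state unchanged."""
    return gamma

delta = lambda v: {"v0": 2, "v1": 1}[v]  # the δ_v of Example 2.1
g = upd_bcast({}, "c", "v0", delta)      # idle channel: becomes (2, v0)
assert g["c"] == (2, "v0")
g = upd_sigma(g)                         # one slot passes: (1, v0)
g = upd_bcast(g, "c", "v1", delta)       # collision: (max(1, 1), err)
assert g["c"] == (1, "err")
```

The final assertion replays the collision of Example 2.1: since δv1 equals the remaining exposure time, the exposure of c is not extended, but its value becomes err.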
Let us turn our attention to the intensional semantics of system terms. For the sake of clarity, the inference rules for the evolution of system terms, Γ ⊲ W1 −λ→ W2, are split into four tables, each one focusing on a particular form of activity. Table 2 contains the rules governing transmission. Rule (Snd) models a non-blocking broadcast of a message along channel c. The value v sent by the process c!⟨e⟩.P is the one obtained by evaluating the expression e; note that here we are assuming that e is closed, hence we can evaluate it to a closed value via the function ⟦·⟧. A transmission can fire at any time, independently of the state of the network; the notation σ^δv represents the time delay operator σ iterated δv times, so when the process c!⟨v⟩.P broadcasts, it has to wait δv time units (the time required to transmit v) before the residual P is activated. On the other hand, reception of a message by a time-guarded listener ⌊c?(x).P⌋Q depends on the state of the channel environment. If the channel c is free then rule (Rcv) indicates that reception can start and the listener evolves into the active receiver c[x].P.
Rule (RcvIgn) states that if a system term W is not waiting for a message along a channel c, or if c is already exposed, then any broadcast along c is ignored by the configuration Γ ⊲ W.
Here rcv(Γ ⊲ W, c) is a predicate which evaluates to true exactly when channel c is not exposed in Γ and W contains among its parallel components at least one unguarded receiver of the form ⌊c?(x).P⌋Q which is actively awaiting a message. The remaining two rules in Table 2, (Sync) and (RcvPar), serve to synchronise parallel stations on the same transmission [20,39,40].
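The predicate rcv just described can be sketched over a toy term representation of our own devising (tuples standing for nil, parallel composition and time-guarded listeners); this is only an illustration of the intended meaning, not the paper's formal definition.

```python
# Sketch of rcv(Γ ⊲ W, c): true when c is not exposed in Γ and W has,
# among its top-level parallel components, a listener ⌊c?(x).P⌋Q on c.
# Terms: ("nil",), ("par", W1, W2), ("in", channel, x, P, Q).

def rcv(gamma, term, c):
    t, _ = gamma.get(c, (0, None))
    if t > 0:                     # c exposed: broadcasts on c are ignored
        return False
    return listening(term, c)

def listening(term, c):
    kind = term[0]
    if kind == "par":
        return listening(term[1], c) or listening(term[2], c)
    if kind == "in":              # a time-guarded listener on term[1]
        return term[1] == c
    return False                  # nil, delays, active receivers, ...

w = ("par", ("in", "c", "x", "P", "Q"), ("nil",))
assert rcv({}, w, "c")                      # c idle, listener present
assert not rcv({"c": (1, "v")}, w, "c")     # c exposed
assert not rcv({}, w, "d")                  # nobody listens on d
```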
this station starts transmitting the value v0 along channel c. Rule (RcvIgn) can be used to derive the transition Γ0 ⊲ ⌊d?(x).nil⌋(⌊c?(x).Q⌋) −c?v0→ ⌊d?(x).nil⌋(⌊c?(x).Q⌋), in which the broadcast of value v0 along channel c is ignored. On the other hand, Rule (RcvIgn) cannot be applied to the configuration Γ0 ⊲ ⌊c?(x).P⌋, since this station is waiting to receive a value on channel c; however we can derive the transition Γ0 ⊲ ⌊c?(x).P⌋ −c?v0→ c[x].P. We can put together the three transitions above using the rule (Sync), leading to a transition of the overall configuration.

Example 2.4. Suppose Γ is such that Γ ⊢ c : exp, say Γ ⊢t c : 1. Using the rules introduced so far we can derive the transition Γ ⊲ c!⟨v⟩ | ⌊c?(x).P⌋Q −c!v→ σ^δv | ⌊c?(x).P⌋Q (2.2), describing the unblocked sending of the value v along the channel c. This can be inferred from the judgement Γ ⊲ c!⟨v⟩ −c!v→ σ^δv, which can be inferred using Rule (Snd), and the judgement Γ ⊲ ⌊c?(x).P⌋Q −c?v→ ⌊c?(x).P⌋Q. This latter can be inferred using Rule (RcvIgn), because Γ ⊢ c : exp means that rcv(Γ ⊲ ⌊c?(x).P⌋Q, c) = false.
In the transition (2.2) above the receiver ⌊c?(x).P⌋Q ignores the transmission of v along c. One might have expected it to accept this value. However the channel is already exposed, Γ ⊢ c : exp, and thus the receiver cannot properly synchronise with the sender. We will see later, in Example 2.7, that a transmission error actually occurs.
The transitions for modelling the passage of time, Γ ⊲ W σ −−− → W ′ , are given in Table 3. Rules (TimeNil) and (Sleep) are straightforward. In rules (ActRcv) and (EndRcv) we see that the active receiver c [x].P continues to wait for the transmitted value to make its way through the network; when the allocated transmission time elapses the value is then delivered and the receiver evolves to { w / x }P. Finally, Rule (Timeout) implements the idea that ⌊c?(x).P⌋Q is a time-guarded receptor; when time passes it evolves into the alternative Q. However this only happens if the channel c is not exposed. What happens if it is exposed is explained in Table 4.
Example 2.5. Consider the configuration C1 = Γ1 ⊲ σ² | ⌊d?(x).nil⌋(⌊c?(x).Q⌋) | c[x].P, whose system term is the one derived in Example 2.3. We show how a σ-action can be derived for this configuration. First note that Γ1 ⊲ σ² −σ→ σ; this transition can be derived using Rule (Sleep). Since d is idle in Γ1, we can apply Rule (Timeout) to infer the transition Γ1 ⊲ ⌊d?(x).nil⌋(⌊c?(x).Q⌋) −σ→ ⌊c?(x).Q⌋: time passed before a value could be broadcast along channel d, causing a timeout in the station waiting to receive a value along d. Finally, since Γ1 ⊢t c : 2, we can use Rule (ActRcv) to infer the transition Γ1 ⊲ c[x].P −σ→ c[x].P. At this point we can use Rule (TimePar) twice (the rule is given in Table 5) to infer a σ-action performed by C1.

Table 4 is devoted to internal transitions Γ ⊲ W −τ→ W′. Let us first explain rule (RcvLate). Intuitively the process ⌊c?(x).P⌋Q is ready to start receiving a value on channel c. However if c is exposed this means that a transmission is already taking place. Since the process has therefore missed the start of the transmission it will receive an error value. Thus the rule (RcvLate) reflects the fact that in wireless systems a collision takes place if there is a misalignment between the transmission and the reception of a message. The remaining rules are straightforward. Note that in the matching construct we use a channel-environment-dependent evaluation function ⟦b⟧Γ for Boolean expressions (not to be confused with the function ⟦·⟧, used to evaluate closed expressions), because of the presence of the exposure predicate exp(c) in the Boolean language. Formally, ⟦e1 = e2⟧Γ = true if and only if ⟦e1⟧ = ⟦e2⟧, and ⟦exp(c)⟧Γ = true if and only if Γ ⊢ c : exp. We remark that checking for the exposure of a channel amounts to listening on the channel for a value. But in wireless systems it is not possible to both listen and transmit within the same time unit, as communication is half-duplex [42].
As a consequence, in our intensional semantics the execution of both branches of the rules (Then) and (Else) is delayed by one time unit. Example 2.6. Let Γ2 be a channel environment such that Γ2(c) = (1, v), and consider the configuration C2 = Γ2 ⊲ σ | ⌊c?(x).Q⌋ | c[x].P, where the active receiver c[x].P has been defined in Example 2.5.
Note that this configuration contains both a receiver process and an active receiver along the exposed channel c. We can think of the receiver ⌊c?(x).Q⌋ as a process which missed the synchronisation with a broadcast previously performed along channel c; as a consequence this process is doomed to receive an error value. This situation is modelled by Rule (RcvLate), which allows us to infer the transition Γ2 ⊲ ⌊c?(x).Q⌋ −τ→ c[x].{err/x}Q. As we will see, Rule (TauPar), which we introduce in Table 5, ensures that τ-actions are contextual; this means that the transition derived above can be lifted to a τ-transition of the entire configuration C2. Note also that we could have applied Rule (RcvLate) directly to the initial configuration C = Γ ⊲ c!⟨v⟩ | ⌊c?(x).P⌋Q, leading to the transition Γ ⊲ c!⟨v⟩ | ⌊c?(x).P⌋Q −τ→ c!⟨v⟩ | c[x].{err/x}P, again reflecting an error in transmission along the channel c due to the fact that it is already exposed. In fact we have the transition Γ ⊲ W | ⌊c?(x).P⌋Q −τ→ W | c[x].{err/x}P regardless of the form of W. This emphasises the fact that the inability of the receiver to correctly receive the value being transmitted is because the channel is already exposed, and not because another station is willing to broadcast along it.
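The environment-dependent Boolean evaluation used by the matching construct can also be sketched; as before, this is a minimal encoding of our own (Boolean expressions as tagged tuples), only meant to illustrate the two clauses of ⟦b⟧Γ.

```python
# Sketch of ⟦b⟧_Γ: equality tests between closed values, and the
# exposure test exp(c), which consults the channel environment Γ.

def eval_bool(b, gamma):
    if b[0] == "eq":                    # ("eq", v1, v2)
        return b[1] == b[2]
    if b[0] == "exp":                   # ("exp", c): does Γ ⊢ c : exp hold?
        t, _ = gamma.get(b[1], (0, None))
        return t > 0
    raise ValueError("unknown Boolean expression")

assert eval_bool(("eq", 1, 1), {})
assert eval_bool(("exp", "c"), {"c": (2, "v")})   # c exposed
assert not eval_bool(("exp", "c"), {})            # c idle
```

Only the exp(c) clause depends on Γ, which is precisely why matching is the one construct whose branches must be delayed by a time unit: evaluating exp(c) amounts to listening on c.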
Remark 2.8. The previous example together with Example 2.4 shows that there is a delicate interplay between the rules (RcvIgn) and (RcvLate), particularly when modelling the effect of an external broadcast on receivers in the presence of exposed channels. The overall goal of our intensional semantics is to ensure that it has certain natural properties, such as input-enabledness. This ensures that for any configuration Γ ⊲ W and any action c?v there exists some transition Γ ⊲ W −c?v→ W′. Here W′ records the effect an external broadcast of v along c has on the configuration; if the broadcast is actually ignored by all stations in the configuration then W′ will coincide with W. Input-enabledness also helps us to ensure that broadcasts are independent of their environment. For example, we require the configuration Γ ⊲ c!⟨v⟩ | W to be able to perform the broadcast of value v along channel c, regardless of the structure of W, even if c is exposed in Γ. Such a transition can only be inferred from Rule (Sync) if we match the output action along channel c performed by the configuration Γ ⊲ c!⟨v⟩ with an input action performed by Γ ⊲ W. Input-enabledness will ensure that the latter input action is always possible.
In Section 2.2 we will show that our intensional semantics in fact satisfies a number of natural properties, including input-enabledness; see Lemma 2.9. This would obviously not be true if, by omitting Rule (RcvIgn), we were to forbid inputs over exposed channels.
The final set of rules, in Table 5, are structural. Rule (TimePar) models how σ-actions are derived for collections of threads. Rules (TauPar), (Rec) and (Sum) are standard. Rule (SumTime) is necessary to ensure time determinism (see Proposition 2.10), while Rule (SumRcv) guarantees that only effective receptions can resolve a choice. Finally, Rules (ResI) and (ResV) show how restricted channels are handled. Intuitively, moves from the configuration Γ ⊲ νc :(n, v).W are inherited from the configuration Γ[c → (n, v)] ⊲ W; here the channel environment Γ[c → (n, v)] is the same as Γ except that c has (temporarily) associated with it the information (n, v). However, if this move mentions the restricted channel c then the inherited move is rendered as an internal action τ, via (ResI); moreover, the information associated with the restricted channel in the residual is updated using the function updc!v(·) previously defined. Rules (TauPar), (Sum) and (SumRcv) have symmetric counterparts.
In the remainder of this section we discuss some of the main properties enjoyed by the intensional semantics of Section 2.2. The contents of this part are purely technical and are needed only for the proofs of the results presented later in the paper; they may be safely skipped by the reader not interested in the details.
In broadcast process calculi the transmission of a value is usually modelled as a non-blocking action [40,34,10], meaning that all configurations should always be able to receive an arbitrary value along an arbitrary channel. This is a derived property of our calculus: Lemma 2.9. [Input enabledness] Let Γ ⊲ W be a configuration. Then for any channel c and value v there exists a system term W′ such that Γ ⊲ W −c?v→ W′. Proof. See the Appendix, Page 47.
Our model of time also conforms to a well-established approach in the literature; see for example [39,56]: the semantics enjoys time determinism, maximal progress and patience. Proof. By induction on the proof of the transition.

Another important property concerns the exposure state of channel environments. This property states that non-timed transitions are identified up to channel environments which share the same set of idle channels.

Proposition 2.12.
[Exposure Consistency] Let Γ1, Γ2 be two channel environments such that Γ1 ⊢ c : exp if and only if Γ2 ⊢ c : exp, for every channel c. Then for any system term W and action λ ≠ σ, Γ1 ⊲ W −λ→ W′ implies Γ2 ⊲ W −λ→ W′. Proof. By induction on the proof of the derivation Γ1 ⊲ W −λ→ W′. See the Appendix, Page 50 for details.
We end our discussion on the intensional semantics with a technical result on the interaction between stations in systems; this will be useful in later developments.
Proof. Details for (3) are given in the Appendix; see Page 51. The other three statements can be proved similarly.

Reduction semantics.
We are now in a position to formally define the individual computation steps for wireless systems, alluded to informally in (2.1) above.
The intuition here should be obvious: computation proceeds either by the transmission of values between stations, the passage of time, or internal activity; further, the exposure state of channels is updated according to the transition performed.
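The way a reduction step pairs an intensional transition with the channel-environment update can be sketched as follows. This is our own toy encoding, not the paper's Definition 2.14: the derivable intensional step is abstracted as a (label, residual) pair that some rule of Tables 2–5 would produce, and the sketch only performs the environment bookkeeping and the timed/instantaneous classification.

```python
# Sketch: a reduction step Γ1 ⊲ W1 ⤳ Γ2 ⊲ W2 applies upd_λ to Γ1 and
# classifies the step: σ-moves are timed, broadcasts and τ instantaneous.

def upd(gamma, label, delta):
    kind = label[0]
    if kind == "sigma":                       # time passes
        return {c: (max(t - 1, 0), v) for c, (t, v) in gamma.items()}
    if kind in ("snd", "rcv"):                # ("snd", c, v) / ("rcv", c, v)
        _, c, v = label
        t, _ = gamma.get(c, (0, None))
        out = dict(gamma)
        out[c] = (delta(v), v) if t == 0 else (max(t, delta(v)), "err")
        return out
    return gamma                              # τ: exposure state unchanged

def reduce_step(config, label, residual, delta):
    """Build the reduced configuration from a derivable intensional step."""
    gamma, _w = config
    kind = "timed" if label[0] == "sigma" else "instantaneous"
    return kind, (upd(gamma, label, delta), residual)

delta = lambda v: 2                           # assume δ_w = 2
kind, (g, w) = reduce_step(({}, "W1"), ("snd", "c", "w"), "W2", delta)
assert kind == "instantaneous" and g["c"] == (2, "w")
kind, (g, w) = reduce_step((g, "W2"), ("sigma",), "W3", delta)
assert kind == "timed" and g["c"] == (1, "w")
```

Here `"W1"`, `"W2"`, `"W3"` are opaque placeholders for system terms; deriving the (label, residual) pair itself is the job of the SOS rules, which the sketch deliberately leaves abstract.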
Sometimes it will be useful to distinguish between instantaneous reductions and timed reductions: instantaneous reductions, Γ1 ⊲ W1 ⤳i Γ2 ⊲ W2, are those derived via clauses (i) or (iii) above, while timed reductions are denoted with the symbol ⤳σ and coincide with reductions derived using clause (ii). We write Γ1 ⊲ W1 ⤳ Γ2 ⊲ W2 for an arbitrary reduction, either instantaneous or timed.

Example 2.15. Let Ci = Γi ⊲ Wi, for i = 0, 1, 2, be as defined in the examples mentioned above. Note that Γ1 = updc!v0(Γ0) and Γ2 = updσ(Γ1). We have already shown that Γ0 ⊲ W0 −c!v0→ W1; this transition, together with the equality Γ1 = updc!v0(Γ0), can be used to infer the reduction C0 ⤳i C1. A similar argument yields the timed reduction C1 ⤳σ C2.

Example 2.16.
[Time-consuming transmission] Consider a wireless system with two stations, that is a configuration C1 of the form Γ1 ⊲ P1 | Q1. Let us suppose P1 = c!⟨w⟩.R and Q1 = ⌊c?(x).S⌋T1, where Γ1 is a stable channel environment and δw = 2. Then C1 ⤳ C2, where C2 has the form Γ2 ⊲ P2 | Q2 with P2 = σ².R and Q2 = c[x].S. The move from P1 to P2 is via an application of the rule (Snd), that from Q1 to Q2 relies on (Rcv), and they are combined together using (Sync). The final step (2.3) results from (Transmission) in Definition 2.14.
The next step C2 ⤳ C3 = Γ3 ⊲ σ.R | Q2 is via (Time) in Definition 2.14; here the only change to the channel environment is that Γ3 ⊢t c : 1. The inference of the underlying transition uses the rules (Sleep), (ActRcv) and (TimePar).
The final move we consider, C3 ⤳ C4 = Γ1 ⊲ R | {w/x}S, is another instance of (Time). However here the delay action is inferred using (Sleep), (EndRcv) and (TimePar). Thus in three reduction steps the value w has been transmitted from the first station to the second one along the channel c, in two units of time.

Now suppose we change P1 to P′1 = σ.P1, obtaining thus the configuration C′1 = Γ1 ⊲ P′1 | Q1. A timed reduction leads to C′2 = Γ1 ⊲ P1 | T1; here an instance of the rule (Timeout) is used in the transition from Q1 to T1. In C′2 the station P1 is now ready to transmit on channel c, but the second station has stopped listening. The next step depends on the exact form of T1; if for example rcv(Γ1 ⊲ T1, c) is false, then by an application of rule (RcvIgn) we can derive the reduction C′2 ⤳i C′3 = Γ2 ⊲ σ².R | T1. Here the transmission of w along c started but nobody was listening. Finally, suppose T1 is a delayed listener on channel c, say σ.T2 where T2 is ⌊c?(y).S2⌋U2. Then we have the (Time) step C′3 ⤳ C′4 = Γ3 ⊲ σ.R | T2, and now the second station, T2, is ready to listen. However, as Γ3 ⊢ c : exp, station T2 is joining the transmission too late. To reflect this we can derive the (Internal) step C′4 ⤳i Γ3 ⊲ σ.R | c[y].{err/y}S2 using the rules (RcvLate) and (TauPar), among others. At the end of the transmission, in one more time step, the second station will therefore end up with an error in reception.
In the revised system C′1 = Γ1 ⊲ P′1 | Q1 the second station missed the delayed transmission from P′1. However we can change the code at the second station to accommodate this delay, by replacing Q1 with the persistent listener Q′1 = c?(x).S. We leave the reader to check that starting from the configuration Γ1 ⊲ P′1 | Q′1 the value w will be successfully transmitted between the stations in four reduction steps.
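The successful case of Example 2.16 can be replayed slot by slot under the toy channel-state encoding used in the earlier sketches. Only the environment is tracked here; the process terms, and the station names, are elided, so this is a bookkeeping illustration rather than an implementation of the calculus.

```python
# Replay of Example 2.16 (successful case): a sender broadcasts w on c
# with δ_w = 2; after two time slots the receiver can deliver the value.

def broadcast(gamma, c, v, dv):
    t, _ = gamma.get(c, (0, None))
    out = dict(gamma)
    out[c] = (dv, v) if t == 0 else (max(t, dv), "err")
    return out

def tick(gamma):
    return {c: (max(t - 1, 0), v) for c, (t, v) in gamma.items()}

gamma = {}                                  # Γ1, a stable environment
gamma = broadcast(gamma, "c", "w", 2)       # instantaneous step C1 -> C2
assert gamma["c"] == (2, "w")
gamma = tick(gamma)                         # timed step C2 -> C3
gamma = tick(gamma)                         # timed step C3 -> C4
assert gamma["c"] == (0, "w")               # c idle again: receiver gets w
```

The three reduction steps of the example are visible here as one broadcast followed by two ticks, taking two units of time in total.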

Example 2.17. [Collisions]
Let us now consider again the configuration C1 = Γ ⊲ S1 | S2 | R1 of Example 2.1. In this configuration the station S1 can perform a broadcast, leading to the reduction C1 ⤳i C2 = Γ1 ⊲ σ² | S2 | c[x].P, the derivation of which requires an instance of the rule (RcvIgn) for the component S2. A timed reduction then leads to C2 ⤳σ C3 = Γ2 ⊲ σ | c!⟨v1⟩ | c[x].P. In this configuration the second station is ready to broadcast value v1 along channel c. Since there is already a value being transmitted along this channel, we expect this second broadcast to cause a collision; further, since the amount of time required for transmitting value v1 is equal to the time needed to end the transmission of value v0, we expect that the broadcast performed by the second station does not affect the amount of time for which the channel c is exposed.
Formally this is reflected in the reduction performed by C 3 . Here the reduction of the system term uses three sub-inferences: the first and the third of these transitions can be derived using Rule (RcvIgn), while the second one can be derived using Rule (Bcast). Consequently Γ ′ 2 = upd c!v 1 (Γ 2 ), and since Γ 2 ⊢ c : exp we obtain Γ ′ 2 (c) = (1, err); this represents the fact that a collision has occurred, and thus the special value err will eventually be delivered on c.
At this point we can derive the reductions C ′ 3 σ C 4 = Γ ⊲ nil | nil | {err/x}P, meaning that the transmission along channel c terminates in one time instant, leading the receiving station to detect a collision. The reduction above can be obtained from the transitions in Table 3. Now, suppose we change the amount of time required to transmit value v 1 from 1 to 2, and consider again the configuration C 3 above. The transmission of value v 1 will again cause a collision; this time, however, the transmission of value v 1 is long enough to continue after that of value v 0 has finished; as a consequence, we expect that the time required for channel c to be released increases when the broadcast of v 1 happens.
In fact, in this case we have an instantaneous reduction from C 3 to a configuration C ′′ 3 . Now two time instants are needed for the transmission along channel c to end, leading to the sequence of timed reductions C ′′ 3 σ σ C 4 .
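The collision behaviour just described can be sketched concretely. The following Python fragment is our own illustrative encoding, not the formal definition from the calculus: a channel environment maps each channel to a pair (n, w) of remaining exposure slots and pending value, with (0, None) meaning idle; upd models the effect of a broadcast c!v of duration delta, and tick models the passage of one time slot.

```python
# Illustrative encoding (not the formal definition of CCCP): a channel
# environment maps channels to (n, w), where n is the number of remaining
# exposure slots and w the value that will eventually be delivered;
# (0, None) represents an idle channel.

def upd(env, c, v, delta):
    """Effect of a broadcast c!v whose transmission takes delta time slots."""
    n, _w = env.get(c, (0, None))
    if n == 0:
        # Channel idle: the transmission starts cleanly.
        return {**env, c: (delta, v)}
    # Channel exposed: a collision corrupts the value; the channel stays
    # exposed for the longer of the two competing transmissions.
    return {**env, c: (max(n, delta), "err")}

def tick(env):
    """One time slot passes: decrement every exposure counter; the pending
    value is delivered when the counter reaches zero."""
    return {c: (n - 1, w) if n > 1 else (0, None)
            for c, (n, w) in env.items()}
```

With env = {"c": (1, "v0")}, a colliding broadcast of duration 1 yields (1, "err"), matching Γ ′ 2 (c) = (1, err) above, while a colliding broadcast of duration 2 yields (2, "err"), matching the second scenario in which two further time slots are needed before c is released.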

Behavioural Equivalence.
In this section we propose a notion of timed behavioural equivalence for our wireless networks. Our touchstone system equality is reduction barbed congruence [23,46,35,25], a standard contextually defined process equivalence. Intuitively, two terms are reduction barbed congruent if they have the same basic observables, in all parallel contexts, under all possible computations. The formal definition relies on two crucial concepts, a reduction semantics to describe how systems evolve, which we have already defined, and a notion of basic observable which says what the environment can observe directly of a system. There is some choice as to what to take as a basic observation, or barb, of a wireless system. In standard process calculi this is usually taken to be the ability of the environment to receive a value along a channel. But the series of examples we have just seen demonstrates that this is problematic, in the presence of possible collisions and the passage of time. Instead we choose a more appropriate notion for wireless systems, one which is already present in our language for station code: channel exposure.

Definition 2.18. [Barbs]
We say the configuration Γ ⊲ W has a strong barb on c, written Γ ⊲ W ↓ c , if Γ ⊢ c : exp. We write Γ ⊲ W ⇓ c , a weak barb, if there exists a configuration C ′ such that Γ ⊲ W * C ′ and C ′ ↓ c . Note that we allow the passage of time in the definition of weak barb.
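In the same illustrative style (our own toy encoding, not the paper's formal syntax), a strong barb is simply an exposure check on the channel environment, while a weak barb is a reachability question over the reduction relation, here supplied as a successor function covering both instantaneous and timed reductions:

```python
def strong_barb(conf, c):
    """conf = (env, term); env is a tuple of (channel, remaining-slots) pairs.
    Gamma |> W has a strong barb on c exactly when Gamma |- c : exp."""
    env, _term = conf
    return dict(env).get(c, 0) > 0

def weak_barb(step, conf, c):
    """Weak barb: some configuration reachable from conf (via instantaneous or
    timed reductions alike, as supplied by step) has a strong barb on c."""
    seen, todo = set(), [conf]
    while todo:
        cur = todo.pop()
        if cur in seen:
            continue
        seen.add(cur)
        if strong_barb(cur, c):
            return True
        todo.extend(step(cur))
    return False
```

Configurations are encoded as hashable tuples so that the visited set works; this representation is an assumption of ours made purely for illustration.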
With these concepts we now have everything in place for a standard definition of contextual equivalence between systems.

Definition 2.20. [Reduction barbed congruence]
Reduction barbed congruence, written ≃, is the largest symmetric relation over configurations which is barb preserving, reduction-closed and contextual.
In the remainder of this section we explore via examples the implications of Definition 2.20. The notion of a fresh channel will be important; we say that c is fresh for the configuration Γ ⊲ W if it does not occur free in W and Γ ⊢ c : idle. Note that we can always pick a fresh channel for an arbitrary configuration.

Example 2.21.
Let us assume that Γ ⊢ c : idle. Then it is easy to see that Γ ⊲ c ! v 0 .P ̸≃ Γ ⊲ c ! v 1 .P (2.4), under the assumption that v 0 and v 1 are different values. For let T be the testing context ⌊c?(x).[x = v 0 ]eureka! ok , nil⌋ where eureka is fresh, and ok is some arbitrary value. Then Γ ⊲ c ! v 0 .P | T has a weak barb on eureka which is not the case for Γ ⊲ c ! v 1 .P | T . Since ≃ is contextual and barb preserving, the statement (2.4) above follows. However such tests will not distinguish between Γ ⊲ Q 1 and Γ ⊲ Q 2 . In both configurations Γ ⊲ Q 1 and Γ ⊲ Q 2 a collision will occur at channel c, and a receiving station, such as T , will receive the error value err at the end of the transmission. So there is reason to hope that Γ ⊲ Q 1 ≃ Γ ⊲ Q 2 . However we must wait for the proof techniques of the next section to establish this equivalence; see Example 3.5.
The above example suggests that transmitted values can be observed only at the end of a transmission; so if a collision happens, there is no possibility of determining the value that was originally broadcast. This point is emphasised further in the following example.

Example 2.22.
We already argued in Example 2.21 that these two configurations can be distinguished by the context ⌊c?(x).[x = v 0 ]eureka! ok , nil⌋. However, the two configurations above can be made indistinguishable if we add to each of them a parallel component that causes a collision on channel c. One could hope that there exists a context which is able to distinguish the two resulting configurations C 0 and C 1 . However, before the transmission of v 0 ends in C 0 , a second broadcast along channel c will fire, causing a collision; the same happens before the end of the transmission of value v 1 in C 1 . Further, the total amount of time for which channel c will be exposed is the same for both configurations, so one can argue that it is impossible to provide a context which is able to distinguish C 0 from C 1 . In order to prove this formally, we have to wait until the next section.
Collisions can also be used to merge two different transmissions on the same channel into a single corrupted transmission.
Example 2.23.
In Γ ⊲ W 0 a broadcast of value v 0 along channel c can fire; when the transmission of v 0 is finished, a second broadcast of value v 1 along the same channel can also fire. The behaviour of Γ ⊲ W 1 is similar, though the order of the two values to be broadcast is swapped. Note that it is possible to distinguish the two configurations Γ ⊲ W 0 and Γ ⊲ W 1 using the test we have already seen in the previous example.
However suppose now that we add a parallel component to both configurations which broadcasts another value along channel c before the transmission of value v 0 (v 1 ) has finished, and which terminates after the broadcast of value v 1 (v 0 ) has begun. In both resulting configurations a collision occurs; further, once the transmission of value v 0 has begun in the former configuration, channel c will remain exposed until the transmission of value v 1 has finished. A similar behaviour can be observed in the second configuration. This leads to the intuition that the two composed configurations are behaviourally equivalent.

A priori, reductions ignore the passage of time, and therefore one might suspect that reduction barbed congruence is impervious to the precise timing of activities. But the next example demonstrates that this is not the case.

Example 2.24.
[Observing the passage of time] Consider the two processes Q 1 = c! v 0 and Q 2 = σ.Q 1 , and again let us assume that Γ ⊢ c : idle. There is very little difference between the behaviours of Γ ⊲ Q 1 and Γ ⊲ Q 2 ; both will transmit (successfully) the value v 0 , although the latter is a little slower. However this slight difference can be observed; consider a test T which evaluates the exposure predicate exp(c). The configuration Γ ⊲ (Q 1 | T ) can start a transmission along channel c, after which the predicate exp(c) will be evaluated in the system term T . The resulting configuration is given by Γ ′ ⊲ σ δ v 0 | σ.eureka! ok ; at this point, it is not difficult to see that the configuration has a weak barb on eureka.
On the other hand, the unique reduction from C 2 = Γ ⊲ (Q 2 | T ) leads to the evaluation of the exposure predicate exp(c); since Γ ⊢ c : idle the only possibility is the resulting configuration C ′ 2 . Since eureka is a fresh channel, it is immediate that C ′ 2 has no weak barb on eureka, and hence neither does C 2 . For the test to work correctly it is essential that Γ ⊢ c : idle. Here we would like to point out that, using the proof methodology developed in Section 3.2, we are able to prove a more general statement for channel environments Γ ′ with Γ ′ ⊢ t c : n and n > δ v 0 . Behind this example is the general principle that reduction barbed congruence is sensitive to the passage of time; this is proved formally in Proposition 4.17 of Section 4.2.

Example 2.25.
As a final example we illustrate the use of channel restriction. Assume that v 1 and v 2 are values which can be compared via a (total) order relation. Consider a configuration whose station code is given by a receiver R; intuitively, R waits indefinitely for two values along the restricted channel c and broadcasts the largest on channel d. The use of channel restriction here shelters c from external interference. Assuming Γ ⊢ d : idle we will be able to show the expected equivalence.

Extensional Semantics
Proving that two configurations C 1 and C 2 are barbed congruent can be difficult, due to the contextuality constraint imposed in Definition 2.20. Therefore, we want to give a co-inductive characterisation of the contextual equivalence ≃ between configurations, in terms of a standard bisimulation equivalence over some extensional LTS. In this section we first present the extensional semantics, then we recall the standard definition of (weak) bisimulation over configurations. We show, by means of a number of examples, the usefulness of the actions introduced in the extensional semantics.
3.1. Extensional actions. The extensional semantics is designed by addressing the question: what actions can be detected by an external observer? Example 2.24 indicates that the passage of time is observable. The effect of inputs received from the external environment also has to be taken into account. In contrast, the discussion in Example 2.21 indicates that, due to the possibility of collisions, the treatment of transmissions is more subtle. It turns out that the transmission itself is not important; instead we must take into consideration the successful delivery of the transmitted value.
In Table 6 we give the rules defining the extensional actions, C α −→ C ′ , where α can take one of the forms: τ, an internal activity; σ, the passage of one unit of time; c?v, the reception of value v from the environment along channel c; γ(c, v), the delivery of value v at the end of a transmission along channel c; or ι(c), a predicate indicating that channel c is not exposed, and therefore ready to start a potentially successful transmission.
Remark 3.1. The rules provided in Table 6 guarantee that τ-extensional actions coincide with instantaneous reductions: whenever a configuration performs an instantaneous reduction it can perform a matching τ-extensional action, and the opposite implication can be proved analogously.
Similarly, it is easy to check that extensional σ-actions coincide with timed reductions.

3.2. Bisimulation equivalence.
The extensional actions of the previous section endow systems in CCCP with the structure of an LTS. Weak extensional actions in this LTS are defined as usual, and the formulation of bisimulations is facilitated by the notation C α̂ =⇒ C ′ , which is again standard: for α = τ this denotes C =⇒ C ′ , while for α ≠ τ it denotes C α =⇒ C ′ . We now have the standard definition of weak bisimulation equivalence in the resulting LTS, which for convenience we recall.

Definition 3.2.
Let R be a binary relation over configurations. We say that R is a bisimulation if, for every extensional action α, whenever C 1 R C 2 and C 1 α −→ C ′ 1 , there exists some C ′ 2 such that C 2 α̂ =⇒ C ′ 2 and C ′ 1 R C ′ 2 , and symmetrically.

Our goal is to demonstrate that this form of bisimulation provides a sound and useful proof method for showing behavioural equivalence between wireless systems described in CCCP; moreover for a large class of systems it will also turn out to be complete.
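For finite-state abstractions, Definition 3.2 can be turned into an executable check by pruning a candidate relation until the weak transfer property holds. The sketch below is a naive toy of ours (the extensional LTS of a CCCP configuration is in general infinite, and names such as weak_step are our own); transitions are given as (state, action, state) triples with "tau" playing the role of the internal action τ.

```python
def tau_closure(trans, s):
    """States reachable from s via zero or more tau-steps (C ==> C')."""
    seen, todo = {s}, [s]
    while todo:
        p = todo.pop()
        for (q, a, r) in trans:
            if q == p and a == "tau" and r not in seen:
                seen.add(r)
                todo.append(r)
    return seen

def weak_step(trans, s, a):
    """Targets of s =a=> : tau* a tau*, or just tau* when a == "tau"."""
    pre = tau_closure(trans, s)
    if a == "tau":
        return pre
    mid = {r for p in pre for (q, b, r) in trans if q == p and b == a}
    return {r2 for m in mid for r2 in tau_closure(trans, m)}

def bisimilar(states, trans, s1, s2):
    """Naive greatest fixpoint: start from the full relation and discard
    pairs violating the weak transfer property of Definition 3.2."""
    rel = {(p, q) for p in states for q in states}

    def ok(p, q):
        # Every strong step of p must be matched by a weak step of q.
        for (p0, a, p2) in trans:
            if p0 == p and not any((p2, q2) in rel
                                   for q2 in weak_step(trans, q, a)):
                return False
        return True

    changed = True
    while changed:
        changed = False
        for pair in list(rel):
            p, q = pair
            if pair in rel and not (ok(p, q) and ok(q, p)):
                rel.discard(pair)
                changed = True
    return (s1, s2) in rel
```

For instance, a state that performs a τ-step before an a-step is weakly bisimilar to one that performs the a-step directly, because the τ-step is matched by the target staying put.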
The next two examples show that the actions ι(c) and γ(c, v) are both necessary for soundness.
Example 3.3.
[On the rule (Idle)] If we were to drop the actions ι(c) from the extensional semantics then the extensional LTSs generated by these two configurations would be isomorphic; recall that a broadcast action in the intensional semantics always corresponds to a τ action in its extensional counterpart. Thus they would be related by the amended version of bisimulation equivalence.
However, we also have that Γ 1 ⊲ W 1 ̸≃ Γ 2 ⊲ W 2 . This can be proved by exhibiting a distinguishing context. To this end, consider the system T = [exp(c)]nil, eureka! ok . Then Γ 2 ⊲ W 2 | T has a weak barb on the channel eureka, which Γ 1 ⊲ W 1 | T obviously cannot match.

Example 3.4.
[On the rule (Deliver)] Consider the configuration Γ 2 ⊲ W 2 from the previous example, together with a further configuration Γ 2 ⊲ W 3 . If we were to drop the rule (Deliver) in the extensional semantics, thereby eliminating the actions γ(c, v), then it would be straightforward to exhibit a bisimulation containing the pair (Γ 2 ⊲ W 3 , Γ 2 ⊲ W 2 ). Thus again the amended version of bisimulation equivalence would be unsound.
The two examples above show that both rules (Idle) and (Deliver) are necessary to achieve the soundness of our bisimulation proof method for reduction barbed congruence.
In the remainder of this section we give a further series of examples, showing that bisimulations in our extensional LTS offer a viable proof technique for demonstrating behavioural equivalence for at least simple wireless systems.

Example 3.5.
[Transmission] Here we revisit Example 2.21. Let Γ be a stable channel environment, and consider the configurations C 0 and C 1 taken from the second part of Example 2.21.
Our aim is to show that C 0 ≈ C 1 , when δ v 0 = δ v 1 ; for convenience let us assume that δ v 0 = δ v 1 = 1. The idea here is to describe the required bisimulation by matching up system terms. To this end we define the following system terms: Then for any channel environment ∆ we have the following transitions in the extensional semantics: Here d ranges over arbitrary channel names, including c.
Then consider the following relation S . Using the above tabulation of actions one can now show that S is a bisimulation; for C S C ′ each possible action of C can be matched by C ′ performing exactly the same action, and vice-versa. Since (C 0 , C 1 ) ∈ S , it follows that C 0 ≈ C 1 .
Example 3.6.
[Equators] Let us consider the configurations C 0 , C 1 of Example 2.22. Recall that Γ is a stable channel environment and that h, ok are a positive integer and a value, respectively; without loss of generality we also make a simplifying assumption on the transmission delays for this example. For the sake of convenience we define the following system terms. Let us consider the relation S depicted in Table 7; note that (C 0 , C 1 ) ∈ S , so that in order to prove that C 0 ≈ C 1 it is sufficient to show that S is a bisimulation. Note that in the relation S the system terms W ok , V ok are always associated with a channel environment in which the channel c is exposed. In fact, if Λ were a channel environment such that Λ ⊢ c : idle, it would not be difficult to prove that Λ ⊲ W err ̸≈ Λ ⊲ V err ; this is because the values broadcast by these two configurations are different.
∆ arbitrary channel environment, w arbitrary value (possibly err) and k ≥ 0.
Let us list the main extensional actions from configurations using these system terms. Here ∆, Λ are two arbitrary channel environments, but Λ is subject to the constraint that Λ(c) = (k, w) for some value w and integer k ≥ 2. This last requirement ensures that (upd c!v 0 (Λ)) = (upd c!v 1 (Λ)). With the aid of this tabulation one can now show that S is indeed a bisimulation and therefore that C 0 ≈ C 1 .

Example 3.7.
[Merging] The last example we provide considers the merging of two transmissions into a single transmission, as suggested in Example 2.23. Let Γ be a stable channel environment and v 0 , v 1 be two values such that δ v 0 = 1, δ v 1 = 2. Also let ok be a value such that δ ok = 3. Consider the configurations C 0 and C 1 . Then C 0 ≈ C 1 . As in previous examples, this statement can be proved formally by exhibiting a bisimulation that contains the pair (C 0 , C 1 ); to this end, define the following system terms, and consider the relation S depicted in Table 8; note that C 0 S C 1 . Moreover, S is a weak bisimulation; in order to show this, we list the non-trivial transitions of both configurations C 0 , C 1 and their derivatives, which are needed to carry out the proof.

Full abstraction
In this section, we show that the co-inductive proof method based on the bisimulation of the previous section is sound with respect to the contextual equivalence of Section 2.4; this is the subject of Section 4.1. Moreover it is complete for a large class of systems. This class is isolated in Section 4.2.1, and the completeness result is then given in Section 4.2.2.

Soundness.
In this section we prove that (weak) bisimulation equivalence is contained in reduction barbed congruence. The main difficulty is in proving the contextuality of the bisimulation equivalence. But first some auxiliary results.
Proof. See the Appendix, Page 51.
Below we report a result on channel exposure for bisimilarity; a similar result for reduction barbed congruence will also be proved, in Proposition 4.13.
Proof. See the Appendix, Page 52.

In order to prove that weak bisimulation is sound with respect to reduction barbed congruence we need to show that ≈ is preserved by parallel composition.
Proof. Let the relation S over configurations be defined as follows: it is sufficient to show that S is a bisimulation in the extensional semantics. To do so, by symmetry, we need to show that an arbitrary extensional action of Γ 1 ⊲ W 1 | W can be matched by Γ 2 ⊲ W 2 | W via a corresponding weak extensional action. The action (4.1) can be inferred by any of the six rules in Table 6. We consider only one case, the most interesting one (Shh). So here α is τ, derived from a broadcast of some value v along some channel c, and Γ ′ 1 = upd c!v (Γ 1 ). This transition in turn can always be inferred by an application of the rule (Sync), or its symmetric counterpart, from Table 2. Here we only consider the former case; the proof for the second case is slightly different, though it uses the same proof strategies illustrated below. Note that Lemma 4.2 ensures that whenever Γ 1 ⊢ d : exp then also Γ 2 ⊢ d : exp, for any channel d; similarly, if Γ ′ 1 ⊢ d : exp then Γ ′ 2 ⊢ d : exp. That is, Γ 1 agrees with Γ 2 on the exposure state of each channel; the same applies to Γ ′ 1 and Γ ′ 2 .
Further, recall that Γ ′ 1 = upd c!v (Γ 1 ). Therefore we have that, for any channel d ≠ c, Γ ′ 1 ⊢ d : exp iff Γ 1 ⊢ d : exp; for channel c, we have that Γ ′ 1 ⊢ c : exp. That is, the exposure states of Γ 1 and Γ ′ 1 differ only in the entry at channel c, and only if that channel was idle in Γ 1 .
Since Γ 1 and Γ ′ 1 agree with Γ 2 and Γ ′ 2 , respectively, on the exposure state of each channel, it has also to be the case that the exposure states of Γ 2 and Γ ′ 2 differ only in the entry at channel c, and only when the latter is idle in Γ 2 ; formally, Γ ′ 2 ⊢ d : exp iff Γ 2 ⊢ d : exp when d ≠ c, and Γ ′ 2 ⊢ c : exp.
Next we show that the action can be matched by the required weak extensional action, which is exactly what we want to prove. There are two possible cases, according to whether Γ 1 ⊲ W is able to detect a value broadcast along channel c. To this end, we prove a stronger statement concerning sequences of transitions of length n; the proof of this statement is by induction on n.
(a) If n = 0 then there is nothing to prove. (b) Let n > 0. By inductive hypothesis we assume that the statement is true for n − 1. By and by hypothesis we get that Γ 0 ⊢ d : exp. Therefore we can apply the inductive hypothesis to obtain the sequence of transitions There are different ways in which this extensional transition could have been inferred: • if this transition has been obtained by an application of Rule (TauExt) of Table 6, • if the transition has been obtained by an application of Rule (Shh) of Table 6, then The latter can be converted into an extensional τ-transition (2) Suppose now that rcv(Γ 1 ⊲ W, c). By Lemma 2.9(2) the transition Also, in this case we have that Γ 1 ⊢ c : idle, which also gives Γ 2 ⊢ c : idle by Lemma 4.2. Since we have Γ 2 ⊢ c : exp, it has to be the case that we can unfold the weak transition on the exposure state of each channel, respectively. Now, in a way similar to the first case, we can prove that we have the following transitions: Finally, note that for any channel c, | W ′ , as we wanted to prove. We have built the sequence of transitions which is exactly the transition that we wanted to derive.
Proof. It suffices to prove that bisimilarity is reduction-closed, barb preserving and contextual.
Reduction Closure: Note that if C 1 C ′ 1 then we have two possible cases, according to whether the reduction is instantaneous or timed; in either case the matching weak transition of C 2 can, by Remark 3.1, be rewritten as a sequence of reductions, from which it follows that C 2 * C ′ 2 .

Barb Preservation: Let C 1 = Γ 1 ⊲ W 1 and C 2 = Γ 2 ⊲ W 2 . Suppose that C 1 ↓ c for some channel c; by definition we have that Γ 1 ⊢ c : exp. By Lemma 4.2 we also have that Γ 2 ⊢ c : exp. This ensures that C 2 ↓ c , and more generally C 2 ⇓ c .

Completeness.
Having proved soundness, it remains to check whether our bisimulation proof technique is also complete with respect to reduction barbed congruence; that is, whenever we have Γ 1 ⊲ W 1 ≃ Γ 2 ⊲ W 2 , then there exists a bisimulation that contains the pair (Γ 1 ⊲ W 1 , Γ 2 ⊲ W 2 ). Unfortunately, this is not true for arbitrary configurations, as shown by the following example. Note that, since any occurrence of channel d is restricted in both C 1 , C 2 , we cannot enable the passage of time for them via composition with a system term T ; that is, for any system term T , neither C 1 | T nor C 2 | T can perform a σ-step. Now it is not difficult to show that C 1 ≃ C 2 . At least informally, the only difference between these two configurations lies in the exposure state of channel c, and in the fact that C 1 can broadcast along channel c. Such a broadcast ensures that the strong barb at channel c, enabled in C 2 , can be matched by a weak barb of C 1 . On the other hand, the difference in the exposure state of channel c in C 1 , C 2 could be detected via a test T which contains an exposure check exp(c); however, this construct requires the passage of time in order to determine whether channel c is idle in C 1 | T (respectively, exposed in C 2 | T ). But, as we have already noticed, time is not allowed to pass in such configurations. Formally, to prove C 1 ≃ C 2 it suffices to show that a suitable relation containing the pair is barb-preserving, reduction closed and contextual.
Therefore we have shown that C 1 ≃ C 2 ; however, Γ 1 ⊢ c : idle while Γ 2 ⊢ c : exp, and therefore by Lemma 4.2 it must be that C 1 ̸≈ C 2 .

4.2.1.
Well-formed systems. The counterexample to completeness illustrated in Example 4.5 relies on the existence of configurations which do not let time pass. These can be built by placing an active receiver along an idle, restricted channel. However, such configurations are not interesting per se, as it is counter-intuitive to allow wireless stations to receive a value along a channel, when there is no value being transmitted.
It is natural to ask whether our proof methodology based on bisimulations becomes complete if we restrict our focus to a setting where active receivers along idle channels are explicitly forbidden. Configurations satisfying this restriction are called well-formed, and are defined as follows:

Definition 4.6. [Well-formedness] The set of well-formed configurations WNets is the least set such that
Γ ⊲ P ∈ WNets for all processes P. A configuration Γ ⊲ W is well-formed if it does not contain any receiving station along an idle channel. Note that the configurations from Example 4.5 are not well-formed. Clearly, well-formedness is preserved at runtime.
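In our toy encoding, well-formedness becomes a one-line executable check; the representation (exposure counters per channel, plus the set of channels carrying an active receiver) is our own assumption, not the paper's syntax.

```python
def well_formed(env, active_receivers):
    """env: channel -> remaining exposure slots (0 means idle);
    active_receivers: channels on which some station is actively receiving.
    A configuration is well-formed iff no station is actively receiving
    on an idle channel."""
    return all(env.get(c, 0) > 0 for c in active_receivers)
```

Under this reading, an active receiver on an exposed channel is fine, while an active receiver on an idle (possibly restricted) channel, as in Example 4.5, is rejected.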

Lemma 4.7.
Suppose C is well-formed and C C ′ . Then C ′ is also well-formed.
Proof. See the Appendix, Page 52.
The main property of well-formed systems is that they allow the passage of time, so long as all internal activity has ceased:

Proposition 4.8.
[Patience] Let C be a well-formed configuration for which there is no C ′ such that C i C ′ ; then C σ C ′′ , for some configuration C ′′ .
Proof. Details for the most important cases are given in the Appendix; see Page 52.
However, Patience alone does not preclude the possibility of exhibiting a configuration in which time never passes: it only ensures the passage of time when instantaneous reductions are no longer possible. It could still be the case that a configuration C enables an infinite sequence of instantaneous reductions, so that by maximal progress (Proposition 2.11) the passage of time would be forbidden. As we will prove presently, this phenomenon does not arise for CCCP configurations; recall in fact that, in recursive processes of the form fix X.P, we require all free occurrences of the process variable X in P to be guarded by a time-consuming construct. This limitation is sufficient to prevent the existence of configurations which do not allow time to pass; further, it is also necessary, as shown by the following example.

Example 4.9.
Suppose we remove the constraint in the syntax that process variables have to be guarded by time-consuming constructs in fixed point processes. Let W denote the code fix X.(τ.X).

Then we have an infinite sequence of internal actions
Indeed one can show that if Γ ⊲ W * C ′ then C ′ can always perform an instantaneous reduction; maximal progress then ensures that C ′ can never perform a σ-step.
Let us state precisely what we mean when we say that infinite sequences of instantaneous reductions are not allowed in our calculus. In practice, we give a slightly stronger definition, requiring that the number of instantaneous reductions that can be performed in sequence by a configuration C is bounded.

Definition 4.11. [Well-timed configurations]
A configuration C is well-timed [32] if there exists an upper bound k ∈ N such that whenever C ( i ) h C ′ for some h ≥ 0, then h ≤ k.
In contrast to well-formedness, which is a simple syntactic constraint, well-timedness requires the designer of the network to ensure that the code placed at the station nodes can never lead to divergent behaviour. One simple method for ensuring this is to use only recursive definitions fix X.P where X is weakly guarded in P; that is, every occurrence of X is within an input, output or time-delay prefix, or included within a branch of a matching construct. These are exactly the conditions that we placed on recursion variables when defining our calculus. Thus, we would expect every configuration in our calculus to be well-timed.

Proof. See the Appendix, Page 53.
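For a finite-state abstraction, well-timedness can likewise be checked by bounding chains of instantaneous reductions. The sketch below is our own helper (step yields the instantaneous successors of a configuration); it returns the length of the longest τ-chain, or None when a cycle makes unboundedly many instantaneous reductions possible, as happens for fix X.(τ.X).

```python
def max_instantaneous(step, c0):
    """Length of the longest chain of instantaneous reductions from c0,
    or None if a cycle is reachable (the configuration is not well-timed)."""
    best, on_path = {}, set()

    def go(c):
        if c in on_path:
            return None     # cycle: unbounded tau-chains, as in fix X.(tau.X)
        if c in best:
            return best[c]
        on_path.add(c)
        longest = 0
        for c2 in step(c):
            d = go(c2)
            if d is None:
                return None
            longest = max(longest, d + 1)
        on_path.discard(c)
        best[c] = longest
        return longest

    return go(c0)
```

A finite result witnesses the bound k of Definition 4.11 for the explored fragment; None corresponds to a configuration in which maximal progress would forbid time from ever passing.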
Next we prove a very useful result for well-formed, well-timed configurations; the proof emphasises the roles of well-formedness and well-timedness in the configurations being tested.
Proof. Let Γ 1 ⊲ W 1 ≃ Γ 2 ⊲ W 2 and suppose Γ 1 ⊢ c : idle for some channel c. Consider an appropriate testing code; then for some Γ ′ and W ′ , if we define C ′′ = upd eureka!ok (Γ ′ ) ⊲ W ′ and T ′ = σ.eureka! ok , it is easy to see that there exists a sequence of reductions of the following shape. Note that the existence of the sequence of reductions above relies on the fact that Γ 1 ⊲ W 1 is well-timed. The timed transition C | T ′ σ C ′ | eureka! ok in such a sequence is derived from the timed transitions performed by its components; if C were not able to perform a σ-transition, in fact, we would not have been able to derive the timed reduction for the overall configuration C | T ′ .
Here Γ ′ 2 is a channel environment such that Γ ′ 2 ⊢ c : idle. From Lemma 4.1 (recall that τ-extensional actions coincide with instantaneous reductions) we get the required Γ 2 ⊢ c : idle.
We remark once again that restricting our attention to well-formed configurations is crucial in order to ensure the validity of Proposition 4.13. In fact, in Example 4.5 we have already provided an example of two (ill-formed) configurations which are reduction barbed congruent, but which differ in the exposure state of a channel.
Another important property that we will need of well-formed configurations concerns the definition of reduction barbed congruence itself; the reduction closure property which we used to define ≃ can be strengthened by requiring instantaneous reductions to be matched by sequences of instantaneous reductions, and timed reductions to be matched by timed reductions, possibly preceded and followed by sequences of instantaneous ones. To prove this property we will need the following technical result, which will also be used later; it concerns tests T in which each channel occurring free in T occurs free in neither W 1 nor W 2 , and is idle in both Γ 1 and Γ 2 .

Proof. See the Appendix, Page 56, for an outline.
Proof. See the Appendix, Page 56.

Proving Completeness.
We are now in a position to prove that, for well-formed configurations, our proof methodology is also complete: given two well-formed configurations C 1 ≃ C 2 , there exists a bisimulation S such that C 1 S C 2 .
To prove completeness, we show that reduction barbed congruence is a bisimulation. That is, we need to show that for any extensional action α, if C 1 ≃ C 2 and C 1 α −→ C ′ 1 , then there exists C ′ 2 such that C 2 α̂ =⇒ C ′ 2 and C ′ 1 ≃ C ′ 2 . The special cases α = τ and α = σ follow as a direct consequence of Proposition 4.15; however, we state the results for the sake of consistency.

Proposition 4.17. [Preserving extensional σs]
Let us turn our attention to the remaining cases α ∈ {c?v, ι(c), γ(c, v)}. For each of them we define a distinguishing context T α ; these are defined so that, given a well-formed configuration C, C α =⇒ C ′ if and only if C | T α * C ′ | T ′ α , where T ′ α is uniquely determined by the action α. Intuitively, the latter corresponds to the first state reached by the testing component when it has detected that the configuration C has performed a weak α-action; the system T ′ α is called the successful state for the action α.
The tests T α are defined below; here we assume that eureka, fail are fresh channels, while δ ok = δ no = 1.
We also list their respective successful states T ′ α . As an example we consider in detail the behaviour of the testing context T γ(c,v) , which is designed to detect whether a configuration Γ ⊲ W has performed a weak γ(c, v)-action. Let us discuss informally how the testing context T γ(c,v) operates. The fresh channels eureka and fail play different roles: fail ensures that the reception along channel c has finished, while eureka guarantees that the received value is actually v.
We provide a possible evolution of the testing context T γ(c,v) when running in a channel environment Γ such that Γ(c) = (1, v), and then we discuss how it works.
Initially a configuration of the form Γ ⊲ W | T γ(c,v) has a weak barb at channel fail. Further, the testing component has an active receiver over channel c. The component T 1 compares the value received along channel c with v; this test can only succeed, and as a consequence we obtain a further instantaneous reduction Γ 1 ⊲ W ′ | T 1 i Γ ⊲ W ′ | T (in practice here we have Γ = Γ 1 ). At this point we have detected that the configuration Γ 1 ⊲ W 1 has performed the weak γ(c, v)-action, ending in Γ 1 ⊲ W ′ . The rest of the computation is already determined, at least for the part concerning the testing component T 1 , and leads Γ ⊲ W ′ | T to exhibit a barb on eureka; further, in this configuration it is no longer possible to exhibit a barb on fail.
To see why this is true, note that in Γ ⊲ W ′ | T the testing component T is waiting for time to pass before broadcasting value ok along a restricted channel d. Formally, we have a sequence of reductions in which W 3 = W 4 (note that the instantaneous reductions performed by the tested component do not affect the test at this point).
Finally, in Γ 4 ⊲ W 4 | T 4 the test checks whether the restricted channel d is exposed. As this channel is restricted in T 4 , the test can only succeed, leading to Γ 4 ⊲ W 4 | T 4 i Γ 5 ⊲ W 5 | T 5 , where Γ 5 = Γ 4 and W 5 = W 4 . At this point we can let time pass via a sequence of timed reductions; it is then trivial to see that the resulting configuration has a barb on eureka.
Note that in the computation of Γ ⊲ W | T γ(c,v) discussed above, there are two crucial checks that lead to enabling a barb over channel eureka:
• the received value is exactly v;
• a broadcast along the restricted channel d is performed after two time instants.
Since the broadcast along channel d is performed only one time instant after value v has been delivered, this check ensures that such a value has actually been delivered after one time instant.

Proposition 4.18. [Detecting Inputs] For any well-formed configuration Γ⊲W we have that Γ⊲W
Proof. See the Appendix, Page 57.

Proposition 4.19. [Detecting Exposure Checks] For any well-formed configuration
Proof. See the Appendix, Page 58.

Proposition 4.20. [Detecting Delivery of Values] For any well-formed configuration
Note that in Propositions 4.18, 4.19 and 4.20, we emphasized whether the reductions needed to reach the successful configuration Γ ⊲ W ′ | T α from Γ ⊲ W | T α are instantaneous or timed.
We have stated all the results needed to prove completeness.
Proof. It is sufficient to show that the relation is a bisimulation. To do so, suppose that Γ 1 ⊲ W 1 ≃ Γ 2 ⊲ W 2 , and consider an action α performed by Γ 1 ⊲ W 1 . If α = τ or α = σ, the result follows directly from Propositions 4.16 and 4.17, respectively. Now suppose that α = γ(c, v) for some channel c and value v.
By the contextuality of reduction barbed congruence, and by Proposition 4.15, note that Γ ′ 1 ⊢ eureka : idle (recall that we assumed that eureka is a fresh channel), so that by Proposition 4.13 we also have that Γ ′ 2 ⊲ Ŵ 2 ⇓ eureka and Γ ′ 2 ⊲ Ŵ 2 ⇓ fail . Now, by inspecting all the possible evolutions of the configuration, the required matching transition follows from Lemma 4.14. It remains to check the cases α = c?v and α = ι(c); these can be proved analogously to the previous case, using Propositions 4.18 and 4.19, respectively, in lieu of Proposition 4.20.

Applications
In this section we show how our calculus CCCP can be used to model different interesting behaviours which arise at the MAC sub-layer [26]. Consider first a system whose stations never broadcast on a free channel: intuitively, none of its behaviour should be observable. In CCCP this means that the system should be behaviourally equivalent to the empty system nil.
Formally, consider the configuration Γ ⊲ nil where Γ is an arbitrary channel environment. This configuration has non-trivial extensional behaviour. For example it is input enabled, and so can perform all extensional actions of the form c?v. It can also perform σ actions, indicating the passage of time. Now let W be arbitrary station code such that fsn(W) = ∅, that is, W cannot broadcast on any free channel. The configuration Γ ⊲ W has similar behaviour. Indeed, let S S be the relation pairing Γ ⊲ W with Γ ⊲ nil for all such W and all channel environments Γ. Then it is straightforward to show that S S is a bisimulation in the extensional LTS. Our soundness result therefore ensures that Γ ⊲ W ≃ Γ ⊲ nil whenever fsn(W) = ∅.
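As a rough illustration of why a silent station is indistinguishable from nil, the sketch below enumerates the extensional actions each side offers. The action encoding and the two sample channels are assumptions made for this example only; they are not the extensional LTS of the paper.

```python
# Toy enumeration of extensional actions (illustrative encoding only).

def extensional_actions(never_sends, channels=("c", "d")):
    """Actions offered to the environment: a configuration whose station code
    never broadcasts on free channels (never_sends=True) offers exactly the
    actions of nil, while code that may broadcast on free channel c adds an
    output action that a context can observe."""
    acts = {"sigma"}                              # patience: time can always pass
    acts |= {("input", ch) for ch in channels}    # input enabled on every channel
    if not never_sends:
        acts.add(("output", "c"))                 # only non-silent code adds barbs
    return acts
```

Since the silent side offers precisely the actions of nil in every state, the relation pairing the two configurations is (in this toy view) a bisimulation.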
Next we consider what happens when a channel becomes permanently exposed. This situation can be modelled by using two stations s 0 , s 1 which repeatedly send a value along channel c; each broadcast performed by s 1 takes place before the transmission of s 0 ends, and vice versa. In this case we say that the channel c is corrupted. Clearly, if a system transmits only on corrupted channels, then it cannot be detected. Let us see how this scenario is reflected in our behavioural theory.

Example 5.2. [Noise obfuscates transmissions]
Let v be a value such that δ v = 2 and let Snd(c) denote the code fix X.c! v .X, which continually broadcasts the value v along c. To model the two stations s 0 and s 1 discussed informally above we use the code Noise(c) = Snd(c) | σ.Snd(c).
Then consider a configuration Γ ⊲ W such that fsn(W) ⊆ {c}; that is, W does not transmit on free channels other than c. Then Γ ⊲ W | Noise(c) ≃ Γ ⊲ Noise(c). To prove this, it is sufficient to exhibit a bisimulation containing the pair of configurations (Γ ⊲ W | Noise(c), Γ ⊲ Noise(c)).
Using suitable abbreviations, let S S denote the set of pairs of configurations to be related. Then it is possible to check that S S is a weak bisimulation in the extensional LTS. At least intuitively, this is because in the extensional LTS all outputs fired along the obfuscated channel c correspond to internal actions; further, in the configurations included in S S , channel c is never released, so that neither ι(c)-actions nor γ(c, v)-actions can be performed by any configuration included in S S .
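A minimal discrete-time sketch (an illustration, not the reduction semantics) of why Noise(c) keeps channel c permanently exposed when δ v = 2: the second sender starts one slot after the first, so from the second slot onwards the two broadcasts overlap and the channel never falls idle. The slot encoding and the function name exposure are assumptions for this sketch.

```python
# Count, per time slot, how many stations are transmitting on channel c
# under Noise(c) = Snd(c) | sigma.Snd(c) with delta_v = 2 (illustrative).

def exposure(delta_v, horizon):
    """Returns a list: busy[t] = number of active transmitters on c in slot t."""
    busy = [0] * horizon
    for start in range(0, horizon, delta_v):        # first sender: slots 0, 2, 4, ...
        for t in range(start, min(start + delta_v, horizon)):
            busy[t] += 1
    for start in range(1, horizon, delta_v):        # second sender: slots 1, 3, 5, ...
        for t in range(start, min(start + delta_v, horizon)):
            busy[t] += 1
    return busy
```

Every slot has at least one active transmitter, and from slot 1 onwards the overlapping broadcasts collide, so an observer can never detect a third station's output on c.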
The Carrier Sense Multiple Access (CSMA) scheme [24] is a widely used MAC-layer protocol in which a device senses the channel (physical carrier sense) before transmitting. More precisely, if the channel is sensed free the sender starts transmitting immediately, that is, in the next instant of time 5 ; if the channel is busy, that is, some other station is transmitting, the device keeps listening to the channel until it becomes idle and then starts transmitting immediately. This strategy is called 1-persistent CSMA and can be easily expressed in our calculus. So, by definition, CSMA transmissions are delayed whenever the channel is busy.
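The 1-persistent strategy just described can be sketched as follows. The slot-based encoding and the helper name csma_start_slot are assumptions made for illustration; they stand in for the calculus process, not for any real CSMA implementation.

```python
# Sketch of 1-persistent CSMA: sense each slot, keep listening while busy,
# and begin transmitting in the slot after the channel is first sensed idle.

def csma_start_slot(busy_until):
    """busy_until: the first slot in which the channel is sensed idle
    (the channel is busy in slots 0 .. busy_until - 1). Returns the slot
    in which a 1-persistent sender begins its own transmission."""
    t = 0
    while t < busy_until:   # channel busy: keep listening, slot by slot
        t += 1
    return t + 1            # channel idle at slot t: transmit in the next instant
```

With an idle channel the sender transmits in slot 1; if the channel is busy for three slots, the transmission is delayed to slot 4, matching the informal description above.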
In the next example we prove a natural property of CSMA transmissions. 5 Recall that in wireless systems channels are half-duplex.
Intuitively, since Γ ⊢ t : n, the transmission of value v in Γ ⊲ c!! v .P can take place only after at least n instants of time. The same happens in Γ ⊲ σ k .c!! v .P. Formally, to prove (5.1) we need to exhibit a bisimulation S S which contains all pairs of the form (Γ ⊲ c!! v .P, Γ ⊲ σ k .c!! v .P), where Γ is such that Γ ⊢ t : n for some n > 0 satisfying k ≤ (n + 1). One possible S S takes the form R ∪ Id, where Id is the identity relation over configurations and R is a suitable relation.

In our calculus the network topology is assumed to be flat. However, we can exploit the presence of multiple channels to model networks with a more complicated topological structure. The idea is to associate a particular channel with a collection of stations which are in the same neighbourhood.

Example 5.4. [Network Topology] Suppose that we want to model a network with two stations s, r with the following features:
• the range of transmission of s is too short to reach external agents,
• the station r is in the range of transmission of s,
• the range of transmission of r is long enough to also reach external agents.
A graphical representation of the network we want to model is given as N 0 of Table 9. We can model this network topology by using a specific restricted channel, say d, for the local communication between stations s and r. In CCCP a wireless system running on N 0 would therefore take the following form:
• S represents the code running at station s; it can therefore only broadcast and receive along the restricted channel d (recall that we do not want station s to be able to communicate directly with the external environment),
• R represents the code running at station r; it can only receive values along the restricted channel d (since in N 0 station r can receive messages broadcast by station s, but not by the external environment), while it is free to broadcast on other channels (since station r is able to broadcast messages to the external environment).
As a specific example we could let S denote the single broadcast d! v , and R = fix X.⌊d?(x).c! x ⌋X. Then in the configuration C 0 the station s broadcasts the value v and station r acts as a forwarder; this behaviour is reminiscent of range repeaters in wireless terminology. Suppose now that we want to add a second station e to the above network topology, so that
• broadcasts from e can be detected by r; this can be accomplished by allowing the process used to model station e to broadcast along the restricted channel d,
• broadcasts from e cannot reach s, nor the external environment.
For this to be true, it is sufficient to require that the process which models the behaviour of station e can broadcast values only along the restricted channel d; further, in order to ensure that station e cannot detect values broadcast by s, we require that the process used to represent station e does not use receivers along channel d. The network topology we wish to model is depicted as N 1 in Table 9, and a wireless system running on this network takes the form C 1 , where E is the code running at station e. As an example we could take E to be the faulty code d! v + τ.nil.
Then in C 1 station r still acts as a forwarder for station s; however station e can nondeterministically decide whether to corrupt the transmission from node s to r, causing a collision.
Let us assume that the transmission time of the value v used in these networks satisfies δ v = δ err . Then we can show that both C 0 and C 1 are equivalent to simple specifications. Intuitively the reasons for these equivalences are obvious. The transmission along channel d is restricted in C 0 , so it cannot be observed by the external environment. The only activity which can be observed is the broadcast of value v along channel c, which takes place after δ v instants of time. For C 1 , a collision can happen along channel d, which is again restricted; the only activity that can be detected by the external environment is a transmission which takes place after δ v instants of time. Such a transmission will contain either the value v or an error message of length δ v .
The formal proof of these identities involves exhibiting two bisimulations, containing the relevant pairs of configurations. Here we exhibit a bisimulation for showing that C 1 ≃ Γ ⊲ τ.σ δ v .c! v + τ.σ δ v .c! err . For the sake of simplicity, let δ err = δ v = 1 and define suitable system terms; then it is easy to show that the resulting relation is a weak bisimulation.
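The observable behaviour claimed for C 1 can be pictured as a two-branch function: depending on whether station e chooses to jam, the environment eventually sees either v or err on channel c, after the same number of instants either way. The boolean encoding of e's nondeterministic choice and the string values are illustrative assumptions, taking δ v = δ err = 1 as in the text.

```python
# Sketch of the environment-visible trace of C1 (illustrative encoding):
# station e either stays silent, so r forwards v, or broadcasts on the
# restricted channel d concurrently with s, so the collision turns the
# forwarded payload into err.

def observable(e_jams, delta_v=1):
    """Returns the slot-by-slot observable trace: delta_v idle instants,
    then a single transmission on the free channel c."""
    payload = "err" if e_jams else "v"   # collision on d corrupts the value
    return [("idle",)] * delta_v + [("c", payload)]
```

Both branches have the same length and the same shape, which is why C 1 matches the specification τ.σ δ v .c! v + τ.σ δ v .c! err .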
The next example shows how the TDMA modulation technique [52] can be described in CCCP. Time Division Multiple Access (TDMA) is a type of time-division multiplexing where, instead of having one transmitter connected to one receiver, there are multiple transmitters. TDMA is used in digital 2G cellular systems such as the Global System for Mobile Communications (GSM). TDMA allows several users to share the same frequency channel by dividing the signal into different time slots. The users transmit in rapid succession, one after the other, each using its own time slot. This allows multiple stations to share the same transmission medium (e.g. a radio frequency channel) while using only a part of its channel capacity.
As a simple example let us describe how two messages v 0 and v 1 can be delivered in TDMA style; for simplicity, we assume δ v 0 = δ v 1 = 2. The main idea here is to split each of these values into two packets of length one and transmit the packets individually; the packets will then be concatenated together before being forwarded to the external environment. So let us assume values v 0 0 , v 1 0 , v 0 1 , v 1 1 , each of which requires one time instant to be transmitted, and a binary operator • for composing messages, such that composing the two packets of an arbitrary value v yields v itself; in this case we assume that δ err = 2.
More specifically, for this example we assume four different stations, s 0 , s 1 , r 0 , r 1 , running the code Ŝ 0 , Ŝ 1 , R 0 , R 1 , respectively. The intuitive behaviour of the resulting network is depicted in Table 10. Station s 0 wishes to broadcast value v 0 , while s 1 wishes to broadcast value v 1 . They both use the same (restricted) channel d to broadcast their respective values; however, both stations split the value to be broadcast into two packets. Value v 0 is split into v 0 0 and v 1 0 , while v 1 is split into v 0 1 and v 1 1 .

(Table 11: Forwarding two messages to the external environment.)

The two stations run a TDMA protocol with a time frame of length two. Station s 0 takes control of the first time frame, hence transmitting its two packets v 0 0 and v 1 0 in the first and the third time slot, respectively. Station s 1 takes control of the second time frame; hence the two packets v 0 1 and v 1 1 are broadcast in the second and fourth time slot, respectively. Stations r 0 and r 1 wait to collect the values broadcast along channel d. However, the former is interested only in packets sent in the first time frame, while the latter detects only values sent in the second time frame. At the end of their associated time frame the stations r 0 and r 1 have received two packets, which are concatenated together and then broadcast to the external environment along channel c. Note that station r 1 is a little slower than r 0 , for we have added a delay of two time units before broadcasting the concatenated values.
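The packet splitting and recomposition of this example can be sketched as follows. The helpers split and compose and the "#" tagging of packet halves are hypothetical stand-ins for the operator • of the example; the slot assignment mirrors the interleaved frames described above.

```python
# Sketch of TDMA-style delivery of two values over a shared channel
# (illustrative helpers, not the calculus encoding).

def split(v):
    """Split a value into two one-slot packets (tagged halves)."""
    return v + "#0", v + "#1"

def compose(a, b):
    """Stand-in for the operator '.': two matching halves, in order,
    rebuild the value; any other combination yields err."""
    if a.endswith("#0") and b.endswith("#1") and a[:-2] == b[:-2]:
        return a[:-2]
    return "err"

def tdma(v0, v1):
    """Slots 1 and 3 belong to s0's frame, slots 2 and 4 to s1's; r0 and r1
    collect their frame's packets and recompose the value for channel c."""
    s0a, s0b = split(v0)
    s1a, s1b = split(v1)
    slots = {1: s0a, 2: s1a, 3: s0b, 4: s1b}   # no slot is shared: no collision
    return compose(slots[1], slots[3]), compose(slots[2], slots[4])
```

Because the two frames interleave without ever sharing a slot, both values are recomposed intact; mismatched halves (as a collision would produce) yield err instead.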
As an alternative to TDMA, the two values v 0 , v 1 can also be delivered to the external environment by means of simple routing, along the lines suggested in Example 5.4. Here we consider a configuration C 1 . Intuitively, C 1 models three wireless stations s 0 , s 1 , r, running the code S 0 , S 1 , R, respectively, and connected as in Table 11. Station s 0 waits four instants of time, then it broadcasts value v 0 directly to the external environment via the free channel c. Similarly, after four instants of time the station s 1 broadcasts value v 1 to station r via the restricted channel d. Finally, r forwards the message to the external environment via the free channel c.
From the point of view of the external environment the configuration C 1 performs the following activities:
• it remains idle for the first four instants of time,
• it transmits value v 0 in the fifth and sixth time instants,
• it transmits value v 1 in the seventh and eighth time instants.
In this manner, at least informally, the observable behaviour of C 1 , which uses direct routing, is the same as that of C 0 , which uses TDMA.
Formally, we can prove that C 0 and C 1 are behaviourally equivalent (5.2). However, instead of proving this by giving a bisimulation containing this pair of configurations, we prove them individually bisimilar to a simpler specification. Let S 1 denote a configuration Γ ⊲ S 1 for suitable code S 1 . Then we can show that C 0 ≈ S 1 and C 1 ≈ S 1 , from which (5.2) follows by soundness. To show that C 0 ≈ S 1 , it is convenient to define suitable system terms and to check that the relation built from them, whose pairs range over arbitrary channel environments ∆, is a bisimulation. To show that C 1 ≈ S 1 one proceeds similarly, defining for any n ∈ N a family of terms and exhibiting a relation which contains the most relevant pairs needed for the proof.
Example 5.5. As a final example we can modify the behaviour of the two configurations C 0 and C 1 seen above by adding the possibility of a collision when delivering values v 0 , v 1 to the external environment. In the routing case, this is accomplished by requiring that both stations s 0 , s 1 can broadcast their value either directly to the external environment or to the forwarder node r, while in the TDMA case it is sufficient to allow both stations s 0 , s 1 to non-deterministically choose the time slot to be used to broadcast packets. To this end, consider the modified configurations C c 0 and C c 1 . It is not difficult to see informally that the observable behaviour of these two configurations is the same. Specifically,
• either value v 0 is broadcast in the fifth and sixth time slots and v 1 is broadcast in the seventh and eighth time slots, or
• value v 1 is broadcast in the fifth and sixth time slots, while value v 0 is broadcast in the seventh and eighth time slots, or
• a collision occurs in the fifth and sixth time slots, or
• a collision occurs in the seventh and eighth time slots.
This informal behaviour can be described by a term S 2 whose collision branches take the form τ.σ 4 .c! err and τ.σ 6 .c! err , and once more we can exhibit bisimulations to establish Γ ⊲ S 2 ≈ C c 0 and Γ ⊲ S 2 ≈ C c 1 . Then soundness again ensures that C c 0 ≃ C c 1 .

Conclusions and related work

In this paper we have given a behavioural theory of wireless systems at the MAC level. In our framework individual wireless stations broadcast information to their neighbours along virtual channels. These broadcasts take a certain amount of time to complete, and are subject to collisions. If a broadcast is successful a recipient may choose to ignore the information it contains, or may act on it, in turn generating further broadcasts. We believe that our reduction semantics, given in Section 2, captures much of the subtlety of intensional MAC-level behaviour of wireless systems.
Then based on this reduction semantics we defined a natural contextual equivalence between wireless systems which captures the intuitive idea that one system can be replaced by another in a larger network without affecting the observable behaviour of the original network. In the main result of the paper, we then gave a sound and complete characterisation of this behavioural equivalence in terms of extensional actions. This characterisation is important for two reasons. Firstly, it gives an understanding of which aspects of the intensional behaviour are important from the point of view of external users of these wireless systems. Secondly, it gives a powerful sound and complete coinductive proof method for demonstrating that two systems are behaviourally equivalent. We have also demonstrated the viability of this proof methodology by a series of examples.
Let us now examine some relevant related work. We start with the literature on process calculi for wireless systems. Nanz and Hankin [37] have introduced the first (untimed) calculus for Mobile Wireless Networks (CBS ♯ ), relying on a graph representation of node localities. The main goal of that paper is to present a framework for specification and security analysis of communication protocols for mobile wireless networks. Merro [33] has proposed an untimed process calculus for mobile ad-hoc networks with a labelled characterisation of reduction barbed congruence, while [17] contains a calculus called CMAN, also with mobile ad-hoc networks in mind. This latter paper also gives a characterisation of reduction barbed congruence, this time in terms of a contextual bisimulation. It also contains a formalisation of an attack on the cryptographic routing protocol ARAN. Kouzapas and Philippou [27] have developed a theory of confluence for a calculus of dynamic networks and they use their machinery to verify a leader-election algorithm for mobile ad hoc networks.
Singh, Ramakrishnan and Smolka [48] have proposed the ω-calculus, a conservative extension of the π-calculus. A key feature of the ω-calculus is the separation of a node's communication and computational behaviour from the description of its physical transmission range. The authors provide a labelled transition semantics and a bisimulation in open style. The ω-calculus is then used for modelling the AODV ad-hoc routing protocol. Another extension of the π-calculus for modelling mobile wireless systems may be found in [7]; the calculus is used to verify reachability properties of the ad-hoc routing protocol LUNAR. Fehnker et al. [13] have proposed a process algebra for wireless mesh networks that combines novel treatments of local broadcast, conditional unicast and data structures. In this framework, they also model the AODV routing protocol and (dis)prove crucial properties such as loop freedom and packet delivery. Vigo et al. [53] have proposed a calculus of broadcasting processes that enables reasoning about unsolicited messages and the absence of expected communication. Moreover, standard cryptographic mechanisms can be implemented in the calculus via term rewriting. The modelling framework is complemented by an executable specification of the semantics of the calculus in Maude.
All the calculi mentioned up to now, except for [37], represent topological changes of mobile networks in the syntax. In contrast, Ghassemi et al. [14] have proposed a process algebra called RBPT where topological changes to the connectivity graph are implicitly modelled in the operational semantics rather than in the syntax. They propose a notion of bisimulation for networks parametrised on a set of topological invariants that must be respected by equivalent networks. This work is then refined in [15], where the authors propose an equational theory for an extension of RBPT. Godskesen and Nanz [18] have proposed a simple timed calculus for wireless systems to express a wide range of mobility models.
A simple notion of time is also adopted in the calculus for wireless systems by Macedonio and Merro [31] to verify key management protocols for wireless sensor networks by applying semantics-based techniques. In [30] this notion of time is extended with probabilities; there, a probabilistic simulation theory is proposed to evaluate the performance of gossip protocols in the context of wireless sensor networks. Paper [50] also presents a probabilistic broadcast calculus for wireless networks where, unlike [30], nodes are mobile; due to mobility the connection probabilities may change. The authors examine the relation between a notion of weak bisimulation and a minor variant of PCTL*. Paper [10] investigates in detail the probabilistic behaviour of wireless networks. The paper presents a compositional theory based on a probabilistic generalisation of the well-known may-testing and must-testing preorders. Also, it provides an extensional semantics to define both simulation and deadlock simulation preorders for wireless networks. Gallina et al. [8] propose a process algebraic model targeted at the analysis of both connectivity and communication interference in ad hoc networks. The framework includes a probabilistic process calculus and a suite of analytical techniques based on a probabilistic observational congruence and an interference-sensitive preorder. In particular, the preorder makes it possible to evaluate the interference level of different, behaviourally equivalent, networks. They use their framework to analyse the Alternating Bit Protocol. Song and Godskesen [51] introduce a continuous-time stochastic broadcast calculus for mobile and wireless networks. The mobility between nodes in a network is modelled by a stochastic mobility function, which allows part of a network topology to change depending on an exponentially distributed delay and a network topology constraint.
They define a weak bisimulation congruence and apply their theory to a leader election protocol.
All the calculi mentioned up to now abstract away from the possibility of interference between broadcasts. Lanese and Sangiorgi [28] have instead proposed the CWS calculus, a lower-level untimed calculus to describe interferences in wireless systems. In their operational semantics there is a separation between the beginning and the ending of a broadcast, so there is some implicit representation of the passage of time. A more explicit timed generalisation of CWS is given in [34] to express MAC-layer protocols such as CSMA/CA; there the authors propose a bisimilarity which is proved to be sound but not complete with respect to a notion of reduction barbed congruence. We view the current paper as a simplification and generalisation of [34].
The research we have mentioned so far has been focused on formalising various aspects of ad-hoc networks. However, other than [18,34], these various calculi abstract away from time. Nevertheless there is an extensive literature on timed process algebras, which we now briefly review. From a purely syntactic point of view, the earliest proposals are extensions of the three main process algebras, ACP, CSP and CCS. For example, [2] presents a real-time extension of ACP, [44] contains a denotational model for a timed extension of CSP, while CCS is the starting point for [36]. In [2] and [44] time is real-valued and, at least semantically, associated directly with actions. The other major approach to representing time is to introduce a special action to model the passage of time, and to assume that all other actions are instantaneous. This approach is advocated in [19,5,36,39] and [55,56], although the basis for this approach may be found in [6]. The current paper shares many of the assumptions of the languages presented in these papers; in particular we have been influenced by [22], which contains a timed version of CCS enjoying time determinism, maximal progress and patience. All of the papers just mentioned assume that actions are instantaneous, and only the extension of ACP presented in [19] does not incorporate time determinism; maximal progress is less popular, however, and patience is even rarer.
From this early work on timed process calculi a flourishing literature has emerged. Here we briefly mention some highlights of this research. Prasad [41] has proposed a timed variant of his CBS [40], called TCBS. In TCBS a timeout can force a process wishing to speak to remain idle for a specific interval of time; this corresponds to having a priority. TCBS also assumes time determinism and maximal progress. Corradini et al. [11] deal with durational actions, proposing a framework relying on the notions of reduction and observability to naturally incorporate timing information in terms of process interaction. Our definition of timed reduction barbed congruence takes inspiration from theirs. Corradini and Pistore [12] have studied durational actions to describe and reason about the performance of systems. Actions have lower and upper time bounds, specifying their possible different durations. Their timed equivalence refines the untimed one. Baeten and Middelburg [3] consider a range of timed process algebras within a common framework, related by embeddings and conservative extension relations. These process algebras, ACP sat , ACP srt , ACP dat and ACP drt , allow the execution of two or more actions consecutively at the same point in time, separate the execution of actions from the passage of time, and consider actions to have no duration. The process algebra ACP sat is a real-time process algebra with absolute time, while ACP srt is a real-time process algebra with relative time. Similarly, ACP dat and ACP drt are discrete-time process algebras with absolute time and relative time, respectively. In these process algebras the focus is on unsuccessful termination or deadlock. In [4] Baeten and Reniers extend the framework of [3] to model successful termination for the relative-time case. Laneve and Zavattaro [29] have proposed a timed extension of the π-calculus where time proceeds asynchronously at the network level, while it is constrained by the local urgency at the process level.
They propose a timed bisimilarity whose discriminating power becomes weaker when local urgency is dropped.
• W = X. This case is vacuous, as it contains an unguarded free occurrence of a process variable.

There exists an open system term W ′ such that, for any process environment ρ for which Wρ is closed, W ′ ρ is also closed, and Γ ⊲ Wρ can perform the required transition. Proof. Note that if rcv(Γ ⊲ (Wρ), c) = false for some environment ρ, it suffices to choose W ′ = W. In fact, by Lemma A.2 we have that rcv(Γ ⊲ Wρ ′ , c) = false for any environment ρ ′ such that Wρ ′ is closed. By applying Rule (RcvIgn) we obtain the required transition from Γ ⊲ (Wρ ′ ). Therefore, suppose that W is such that rcv(Γ ⊲ (Wρ), c) = true for some process environment ρ (and, as a consequence of Lemma A.2, rcv(Γ ⊲ (Wρ ′ ), c) = true for any other process environment ρ ′ ). Note that in this case we have Γ ⊢ c : idle, and W cannot take the form c! e .P, τ.P, σ.P, [b]P, Q, nil or d [x].P. We check the remaining cases by performing an induction on W. In the following, ρ is an arbitrary process environment.
• Suppose that W = ⌊c?(x).P⌋Q for some processes P, Q. In this case rcv(Γ ⊲ (Wρ ′ ), c) = true for any other process environment ρ ′ , as a consequence of Lemma A.2, so that the choice of W ′ is independent of the process environment.
• Suppose that W = fix X.P. By the inductive hypothesis, there exists a process W ′′ with the required property, where ρ is an arbitrary process environment. We obtain the required transition for Γ ⊲ Pρ[X → fix X.P].

Proof of Lemma 2.9. Let Γ ⊲ W be a configuration. First note that W is a closed system term, hence Wρ = W for any process environment ρ. Given an arbitrary channel c and an arbitrary value v, the case where rcv(Γ ⊲ W, c) = true is slightly more complicated. In practice, we define a function #Rcv(·, c) which maps any system term into its number of active receivers along channel c, and we show that, whenever Γ ⊲ W c?v −−−− → W ′ , then #Rcv(W ′ , c) > #Rcv(W, c). As an immediate consequence, W ′ ≠ W. Formally, the function #Rcv(·, c) is defined inductively on the structure of system terms, for any process P and system terms W 1 , W 2 ; in the inductive cases the inequality #Rcv(W ′ 1 , c) > #Rcv(W 1 , c) follows from the inductive hypothesis.

• The last case to analyse is the one in which Rule (RcvPar) has been applied last in the proof. Since we are assuming that rcv(Γ ⊲ W 1 | W 2 , c) = true, either rcv(Γ ⊲ W 1 , c) = true or rcv(Γ ⊲ W 2 , c) = true. Without loss of generality, suppose that rcv(Γ ⊲ W 1 , c) = true. Note that in this case, if rcv(Γ ⊲ W 2 , c) = false then we know that W ′ 2 = W 2 , hence #Rcv(W ′ 2 , c) = #Rcv(W 2 , c). Otherwise, by the inductive hypothesis it follows that #Rcv(W ′ 2 , c) > #Rcv(W 2 , c). In any case, we obtain that #Rcv(W ′ 2 , c) ≥ #Rcv(W 2 , c). Also, by the inductive hypothesis we have that #Rcv(W ′ 1 , c) > #Rcv(W 1 , c). By these two statements, and the definition of #Rcv(·, c), the result follows.

(i) If W = P + Q for some processes P, Q then there exist two processes P ′ , Q ′ such that Γ ⊲ P and Γ ⊲ Q perform the matching timed transitions.

Proof. Both statements can be proved by induction on the structure of W. We only provide the details for (i), since the proof for (ii) is identical in style.
• First note that if W is a basic process, that is, it has either the form nil, c! e .P, [b]P, Q, ⌊c?(x).P⌋Q, τ.P, fix X.P or σ.P, then there is nothing to prove, as the assumption that W = P + Q for some processes P, Q is not valid;
• suppose then that W = P + Q for some processes P, Q, and that Γ ⊲ P + Q σ −−− → W ′ . By inspecting the rules of the intensional semantics, it is clear that the last rule applied in a proof of the transition above is (SumTime). Thus there exist processes P 1 , Q 1 , P ′ 1 , Q ′ 1 such that P + Q = P 1 + Q 1 . We need to show that there exist two processes P ′ , Q ′ such that Γ ⊲ P and Γ ⊲ Q perform the matching transitions. Note that the assumption P + Q = P 1 + Q 1 leads to three possible cases:
(1) there exists a process R such that P 1 = P + R, Q = R + Q 1 ; in this case we can apply the inductive hypothesis to the system term P 1 (note that P 1 is a smaller term than P + Q, as P + Q = P 1 + Q 1 ), and the transition of Γ ⊲ P 1 yields the result, as we wanted to prove,
(2) otherwise P = P 1 and Q = Q 1 ; this case is trivial, as it suffices to choose P ′ = P ′ 1 , Q ′ = Q ′ 1 ,
(3) the last possible case is that there exists a process R such that P = P 1 + R; the proof here is symmetrical to the first case, as now it is necessary to apply the inductive hypothesis to Q 1 rather than to P 1 ;
• the last remaining cases are those in which either W = νc.W 1 or W = W 1 | W 2 . Again, these cases invalidate the hypothesis that W is a non-deterministic choice of processes, hence there is nothing to prove.
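The counting function #Rcv(·, c) used in the proof of Lemma 2.9 above can be sketched as a structural recursion over system terms. The tuple encoding of terms below is an assumption made purely for illustration; it is not the paper's syntax.

```python
# Sketch of #Rcv(., c): the number of active receivers along channel c in a
# system term, by structural recursion (terms encoded as nested tuples).

def n_rcv(term, c):
    tag = term[0]
    if tag == "active_rcv":                 # c[x].P : an active receiver
        return 1 if term[1] == c else 0
    if tag == "par":                        # W1 | W2 : sum the two sides
        return n_rcv(term[1], c) + n_rcv(term[2], c)
    if tag == "res":                        # (nu d) W : count inside the scope
        return n_rcv(term[2], c)
    return 0                                # nil, guards, outputs: none active
```

For example, a parallel composition of one active receiver on c and one on d counts 1 for channel c; the proof then tracks how this count changes along c?v-transitions.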
Proof of Proposition 2.10. We proceed by induction on the proof of the derivation C σ −−− → W 1 .
• The last rule applied in the derivation is Rule (EndRcv). Then C = Γ ⊲ c[x].P for some channel c, process P and channel environment Γ for which Γ ⊢ t c : 1 and Γ ⊢ v c : w for some closed value w. Also W 1 = {w/x}P. Suppose now that C σ −−− → W 2 for some system term W 2 . By inspecting the rules of the intensional semantics we have that the only rule which could have been applied to infer this transition is again Rule (EndRcv). It follows that W 2 = W 1 = {w/x}P,
• the cases where the last rule applied in the proof of C σ −−− → W 1 is either (TimeNil), (Sleep), (ActRcv) or (Timeout) can be proved similarly to the previous one,
• if the last rule applied in the proof of C σ −−− → W 1 is (SumTime), then C = Γ ⊲ P + Q for some processes P, Q. By Lemma A.4(i) we also know that W 1 = P 1 + Q 1 for some P 1 , Q 1 ; but by the inductive hypothesis we have that P 1 = P 2 ,
• if Rule (Rec) has been applied last, then W = fix X.P for some process variable X and process P; further, Γ ⊲ {fix X.P/X}P λ −−− → W 1 . Suppose now that Γ ⊲ fix X.P σ −−− → W 2 for some W 2 ; then again the last rule applied has been (Rec), so that Γ ⊲ {fix X.P/X}P λ −−− → W 2 . Now, by the inductive hypothesis, we get that W 1 = W 2 ,
• the case where (ResV) is the last rule in the derivation C σ −−− → W 1 is similar in style to the previous one, and is therefore left to the reader,
• the last case is the one in which the last rule applied for deriving C σ −−− → W 1 is Rule (TimePar); the proof in this case is analogous to the one where C = Γ ⊲ P + Q, using Lemma A.4(ii) instead of A.4(i).
Proof of Proposition 2.11. By induction on the proof of the transition. We only supply the details for the most interesting cases.
• The last rule applied in the proof of the derivation C −σ→ W1 is Rule (Timeout). It follows that C = Γ ⊲ ⌊c?(x).P⌋Q for some Γ, channel c and processes P, Q such that Γ ⊢ c : idle. By inspecting the rules of the intensional semantics we note that no rule can be applied to obtain a transition of the form C −c!v→ W2, nor a transition of the form C −τ→ W2; for this last case, note in fact that a τ-action can be inferred for a configuration of the form Γ ⊲ ⌊c?(x).P⌋Q only via Rule (RcvLate), which however requires Γ ⊢ c : exp, contradicting our assumption that Γ ⊢ c : idle.
• The last rule applied in the proof of the transition C −σ→ W1 is Rule (SumTime). Then C = Γ ⊲ P + Q for some processes P, Q such that Γ ⊲ P −σ→ P1 and Γ ⊲ Q −σ→ Q1. We show, by contradiction, that Γ ⊲ P + Q can perform neither a transition of the form −τ→ nor one of the form −c!v→: by inspecting the rules of the intensional semantics, any such transition of Γ ⊲ P + Q has to be inferred from a transition of the same form performed by either Γ ⊲ P or Γ ⊲ Q, which is impossible by the inductive hypothesis.
• The last rule applied in the proof of Γ1 ⊲ W −λ→ W′ is Rule (Then). Here it is necessary to make a case analysis on the form of the boolean expression b; the most interesting case, and the only one which we analyse, is b = exp(c) for some channel c. Since ⟦b⟧Γ1 = true we have Γ1 ⊢ c : exp. By hypothesis it follows that Γ2 ⊢ c : exp, therefore ⟦b⟧Γ2 = true. Now we can apply Rule (Then) to infer Γ2 ⊲ W −λ→ W′.
• The last rule applied in the proof of Γ1 ⊲ W −λ→ W′ is Rule (Sync). It follows that λ = c!v for some channel c and value v, W = W1 | W2 and W′ = W′1 | W′2 for some W1, W2, W′1, W′2 such that Γ1 ⊲ W1 −c!v→ W′1 and Γ1 ⊲ W2 −c?v→ W′2. The inductive hypothesis gives Γ2 ⊲ W1 −c!v→ W′1 and Γ2 ⊲ W2 −c?v→ W′2, and an application of Rule (Sync) yields Γ2 ⊲ W −c!v→ W′.
• The reduction has been obtained via an application of Rule (Shh), applied to a transition of the form Γ ⊲ W −c!v→ W′; it follows that Γ′ = upd_{c!v}(Γ), from which we obtain that Γ ≤ Γ′. Now suppose that Γ ⊲ W =⇒ Γ′ ⊲ W′. By definition, there exists an integer n ≥ 0 such that Γ ⊲ W = Γ0 ⊲ W0 −τ→ Γ1 ⊲ W1 −τ→ · · · −τ→ Γn ⊲ Wn = Γ′ ⊲ W′. By applying the result proved above to each step in this sequence, we obtain Γ = Γ0 ≤ Γ1 ≤ · · · ≤ Γn = Γ′, hence Γ ≤ Γ′.
Proof of Lemma 4.7. We have to show that if C = Γ ⊲ W is well-formed and C −λ→ W′, then C′ = upd_λ(Γ) ⊲ W′ is also well-formed. We provide the details of the most interesting cases of a rule induction on the proof of the aforementioned transition.
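Both the (Sync) case above and the corresponding case in the proof of Lemma 4.7 rely on the shape of Rule (Sync), which synchronises a broadcasting component with a receiving one while keeping the broadcast observable. Reconstructed from its uses in this appendix (the official formulation is the one in the body of the paper), it can be rendered as:

```latex
\frac{\Gamma \rhd W_1 \xrightarrow{\;c!v\;} W_1' \qquad
      \Gamma \rhd W_2 \xrightarrow{\;c?v\;} W_2'}
     {\Gamma \rhd W_1 \mid W_2 \xrightarrow{\;c!v\;} W_1' \mid W_2'}
\;(\textsc{Sync})
```

Note that the conclusion retains the output label c!v, which is why the inductive step only needs the hypothesis on the two components under the same environment.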
• The last rule applied is Rule (Rcv); then λ = c?v for some channel c and closed value v. Since W′ contains an active receiver of the form c[x].P, it suffices to prove that upd_σ(Γ) ⊢ c : exp; but this is true since, by the definition of upd_σ(·), we have upd_σ(Γ) ⊢_t c : n − 1, and n − 1 > 0,
• the last rule applied is Rule (Sync). Then λ = c!v, W = W1 | W2 and W′ = W′1 | W′2 for some W1, W2, W′1, W′2 such that Γ ⊲ W1 −c!v→ W′1 and Γ ⊲ W2 −c?v→ W′2. By the inductive hypothesis the configurations C1 = upd_{c!v}(Γ) ⊲ W′1 and C2 = upd_{c?v}(Γ) ⊲ W′2 are well-formed, so by the third equation in Definition 4.6 we have that C′ ∈ Wnets.
Proof of Proposition 4.8. Let Γ ⊲ W be a well-formed configuration. We give the details of the most important cases of a structural induction performed on the system term W.
• W = c!⟨v⟩.P or W = τ.P; this case is vacuous, since by definition of instantaneous reductions we always have Γ ⊲ W ⟶_i,
• W = σ.P; this case is trivial, since by applying Rule (Sleep) we infer Γ ⊲ W −σ→ P, hence Γ ⊲ W ⟶_σ,
• W = fix X.P. Recall that in this case every occurrence of the process variable X in P is (time) guarded, so that we can apply the inductive hypothesis to the term {fix X.P/X}P. Now suppose that Γ ⊲ fix X.P cannot perform an instantaneous reduction. Then neither can Γ ⊲ {fix X.P/X}P, and by the inductive hypothesis Γ ⊲ {fix X.P/X}P ⟶_σ. Now it is easy to show that Γ ⊲ fix X.P ⟶_σ,
• W = P + Q. Suppose that Γ ⊲ P + Q cannot perform an instantaneous reduction; that is, neither Γ ⊲ P nor Γ ⊲ Q can. By the inductive hypothesis we have that Γ ⊲ P −σ→ P′ and Γ ⊲ Q −σ→ Q′ for some P′, Q′. It follows from Rule (SumTime) that Γ ⊲ P + Q −σ→ P′ + Q′, hence Γ ⊲ P + Q ⟶_σ upd_σ(Γ) ⊲ P′ + Q′.
Proposition A.6. For any channel environment Γ, (possibly open) process P and process environment ρ such that Pρ is closed, the configuration Γ ⊲ Pρ is well-timed.
Proof. We give the details of the most important cases of an induction performed on the structure of the term W. In the following we assume that ρ is a process environment such that Wρ is closed; recall that we are assuming that free occurrences of process variables in W are (time) guarded.
• W = ⌊c?(x).P⌋Q. Here the only instantaneous reduction available to Γ ⊲ (⌊c?(x).P⌋Q)ρ is the one inferred via Rule (RcvLate), after which an active receiver is reached, so that the lengths of its sequences of instantaneous reductions are bounded; it follows that Γ ⊲ (⌊c?(x).P⌋Q)ρ is well-timed,
• W = X for some process variable X; this case is vacuous, since it violates the assumption that free occurrences of process variables are (time) guarded in W,
• W = fix X.P for some process P. Let ρ′ be the environment defined as ρ[X ↦ (fix X.P)ρ].
• W = P + Q. Consider the configurations Γ ⊲ Pρ and Γ ⊲ Qρ; by the inductive hypothesis they are well-timed, meaning that there exists k_P ≥ 0 such that whenever Γ ⊲ Pρ ⟶_i^h Γ′ ⊲ P′ then h ≤ k_P; similarly, there exists k_Q ≥ 0 such that whenever Γ ⊲ Qρ ⟶_i^h Γ′ ⊲ Q′ for some h, then h ≤ k_Q. Choose k = max(k_P, k_Q). It is easy to show that whenever Γ ⊲ (P + Q)ρ ⟶_i^h Γ′ ⊲ W′ then either Γ ⊲ Pρ ⟶_i^h Γ′ ⊲ W′, in which case h ≤ k_P ≤ k, or Γ ⊲ Qρ ⟶_i^h Γ′ ⊲ W′, in which case h ≤ k_Q ≤ k. It follows that Γ ⊲ (P + Q)ρ is well-timed.
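The choice k = max(k_P, k_Q) for sums can be illustrated on a hypothetical mini-syntax (nil, τ- and σ-prefixing, sum; not the paper's calculus), where τ-prefixing is the only source of instantaneous steps: a compositional bound computed with max dominates the length of every actual sequence of instantaneous steps.

```python
# Compositional bound on sequences of instantaneous steps, mirroring the
# choice k = max(k_P, k_Q) for sums in the proof of Proposition A.6.
# Hypothetical mini-syntax; tau-prefixing is the only instantaneous step.
from dataclasses import dataclass

@dataclass(frozen=True)
class Nil: pass

@dataclass(frozen=True)
class Tau:               # tau.P: one instantaneous step, then P
    cont: object

@dataclass(frozen=True)
class Sigma:             # sigma.P: lets time pass, no instantaneous step
    cont: object

@dataclass(frozen=True)
class Sum:               # P + Q
    left: object
    right: object

def bound(p):
    """An upper bound k on the length of instantaneous step sequences from p."""
    if isinstance(p, (Nil, Sigma)):
        return 0                       # no instantaneous step possible
    if isinstance(p, Tau):
        return 1 + bound(p.cont)       # one tau, then whatever the continuation does
    if isinstance(p, Sum):
        return max(bound(p.left), bound(p.right))   # the k = max(k_P, k_Q) choice
    raise ValueError("unknown term")

def longest_i_run(p):
    """Actual longest sequence of instantaneous steps from p."""
    if isinstance(p, Tau):
        return 1 + longest_i_run(p.cont)
    if isinstance(p, Sum):             # an instantaneous step resolves the choice
        return max(longest_i_run(p.left), longest_i_run(p.right))
    return 0

p = Sum(Tau(Tau(Sigma(Nil()))), Tau(Nil()))
assert longest_i_run(p) <= bound(p)    # well-timedness: h <= k
```

Here max suffices because an instantaneous step of a sum discards the other summand; for parallel composition the proof of Proposition 4.12 below instead needs the sum of the component bounds.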
Proof of Proposition 4.12. We give the proof for a fragment of the language in which channel restriction is omitted. This limitation is needed only to avoid technical complications in the proof of the statement. In fact, when channel restriction is present, we need to introduce a structural congruence ≡ between system terms; the main property required of this relation is that it preserves transitions of configurations, meaning that whenever W1 ≡ W2 and Γ ⊲ W1 −λ→ W′1, then Γ ⊲ W2 −λ→ W′2 for some W′2 ≡ W′1. Also, the relation ≡ needs to be defined so that any system term W can be rewritten in the form ν c̃.(P1 | · · · | Pn). See Definition 9.1.2, page 174 of [9], for the definition of this structural congruence.
Let us focus on the case in which channel restriction is not present in our language. First note that the result holds for any well-formed configuration of the form Γ ⊲ P, where P is a closed process; in fact we have that Γ ⊲ P = Γ ⊲ Pρ for any process environment ρ, and the latter is well-timed by Proposition A.6.
Otherwise, we can rewrite Γ ⊲ W as Γ ⊲ (P1 | · · · | Pn) for some processes P1, · · · , Pn. Note that each configuration Γ ⊲ Pi is well-formed, hence well-timed; by definition there exists an integer k_{P_i} ≥ 0 such that, whenever Γ ⊲ Pi ⟶_i^h Γ′ ⊲ P′i, then h ≤ k_{P_i}. Now suppose that Γ ⊲ (P1 | · · · | Pn) ⟶_i^h Γ′ ⊲ (P′1 | · · · | P′n); we show that h ≤ Σ_{i=1}^n k_{P_i} by induction on h. The case h = 0 is trivial; suppose then that h > 0 and that the statement is valid for h − 1. In this case we can rewrite the reduction above as Γ ⊲ (P1 | · · · | Pn) ⟶_i Γ′′ ⊲ (P′′1 | · · · | P′′n) ⟶_i^{h−1} Γ′ ⊲ (P′1 | · · · | P′n), and distinguish two cases according to the first reduction:
(i) the first reduction is inferred from a τ-transition, and Γ′′ = Γ; in this case it is not difficult to note that there exists an index j, 1 ≤ j ≤ n, such that Γ ⊲ Pj −τ→ P′′j, and for any index i ≠ j, 1 ≤ i ≤ n, P′′i = Pi. In this case we have that k_{P′′_j} ≤ k_{P_j} − 1. Without loss of generality, let j = 1. Then, by the inductive hypothesis, h − 1 ≤ k_{P′′_1} + Σ_{i=2}^n k_{P_i} ≤ (k_{P_1} − 1) + Σ_{i=2}^n k_{P_i} = (Σ_{i=1}^n k_{P_i}) − 1. Hence h ≤ Σ_{i=1}^n k_{P_i}, as we wanted to prove;
(ii) otherwise Γ ⊲ (P1 | · · · | Pn) −c!v→ (P′′1 | · · · | P′′n), and Γ′′ = upd_{c!v}(Γ). In this case we can partition the set {1, · · · , n} into three sets {l}, I and J such that (a) Γ ⊲ Pl −c!v→ P′′l and P′′l = σ^{δv}.Q for some process Q, (b) for any i ∈ I, rcv(Γ ⊲ Pi, c) = true and P′′i = c[x].Qi for some process Qi, and (c) for any j ∈ J, rcv(Γ ⊲ Pj, c) = false and P′′j = Pj. Note that (a) implies that k_{P′′_l} = 0 and 1 ≤ k_{P_l}, (b) implies that k_{P′′_i} = 0 for any i ∈ I, and (c) implies that k_{P′′_j} = k_{P_j} for any j ∈ J. Without loss of generality, suppose that l = 1, I = {2, · · · , m} for some m ≤ n, and J = {m + 1, · · · , n}. In this case the inductive hypothesis gives h − 1 ≤ Σ_{i=1}^n k_{P′′_i} = Σ_{j=m+1}^n k_{P_j} ≤ (Σ_{i=1}^n k_{P_i}) − 1. Again the last inequation gives h ≤ Σ_{i=1}^n k_{P_i}.
Lemma A.7. Let us say that a system term T is behaviourally independent from W if each channel name appearing free in T does not appear free in W, and vice versa. Suppose that T is behaviourally independent from W; then whenever Γ ⊲ W | T ⟶_i C, either C = Γ′ ⊲ W′ | T for some Γ′, W′ such that Γ ⊲ W ⟶_i Γ′ ⊲ W′, or C = Γ′ ⊲ W | T′ for some Γ′, T′ such that Γ ⊲ T ⟶_i Γ′ ⊲ T′.
Proof. Suppose that T is a system term behaviourally independent from W, and that Γ ⊲ W | T ⟶_i C. By the definition of instantaneous reductions, there are two possibilities:
• the reduction has been inferred from a transition of the form Γ ⊲ W | T −λ→ W′ | T′, with λ = τ or λ = c!v, in which the action is performed by the component W. Note that a transition Γ ⊲ W | T −c!v→ W′ | T′ of this form can be inferred only if c appears free in W, which by assumption gives that c does not appear free in T; it follows that rcv(Γ ⊲ T, c) = false, and by Lemma 2.9 we obtain that T′ = T. By converting the intensional transition into a reduction (recalling that Γ′ = upd_{c!v}(Γ)), we obtain Γ ⊲ W | T ⟶_i Γ′ ⊲ W′ | T,
• the action is performed by the component T; this case can be handled symmetrically to the previous one, and leads to Γ ⊲ W | T ⟶_i Γ′ ⊲ W | T′.
Lemma A.8. Let Γ1 ⊲ W be a configuration, and let Γ2 be a channel environment such that, for any channel c appearing free in W, Γ2(c) = Γ1(c). Then if Γ1 ⊲ W ⟶ Γ′1 ⊲ W′, there exists a channel environment Γ′2 such that Γ2 ⊲ W ⟶ Γ′2 ⊲ W′, and Γ′1(c) = Γ′2(c) for any c appearing free in W.
Outline of the proof. The reduction Γ1 ⊲ W ⟶ Γ′1 ⊲ W′ can be converted into a transition of the form Γ1 ⊲ W −λ→ W′, where λ takes one of the forms τ, c!v or σ. Note here that if λ takes the form c!v, then c appears free in W. By performing an induction on the proof of the derivation of this transition we can infer a transition for the configuration Γ2 ⊲ W, namely Γ2 ⊲ W −λ→ W′. Also, by letting Γ′2 = upd_λ(Γ2), we obtain the reduction Γ2 ⊲ W ⟶ Γ′2 ⊲ W′. Now it remains to note that if c appears free in W then, by hypothesis, Γ1(c) = Γ2(c); hence Γ′1(c) = upd_λ(Γ1)(c) = upd_λ(Γ2)(c) = Γ′2(c).
Corollary A.9 (Independence of Computations). Let Γ ⊲ W be a configuration, and let T be a system term which only uses fresh channels. Then whenever Γ ⊲ W | T ⟶* Γ′′ ⊲ V it follows that V = W′ | T′ for some W′, T′ such that Γ ⊲ W ⟶* Γ′ ⊲ W′, where Γ′ is such that Γ′(c) = Γ′′(c) for any c appearing free in W.
Outline. By induction on the number k of reductions in a sequence Γ ⊲ W | T ⟶^k Γ′′ ⊲ V; in the inductive step it is necessary to distinguish whether the first reduction of the sequence is instantaneous or timed. In the first case, the result follows from Lemmas A.7 and A.8. In the second case, we need to recover the timed transitions of the individual components Γ ⊲ W and Γ ⊲ T, and then apply Lemma A.8; here note that any broadcast performed by T, together with the corresponding input action c?v, must happen before time elapses.
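Lemma A.8 is essentially a frame property: the transition inferred for a configuration inspects the channel environment only at channels appearing free in the term. The sketch below illustrates this on a drastically simplified hypothetical model (a term is a single timeout-receiver identified by its channel, and environments are plain dictionaries mapping channels to "idle" or "exp"); the two available behaviours mirror Rules (RcvLate) and (Timeout), which, as recalled in the proof of Proposition 2.11, fire when the channel is exposed and idle respectively.

```python
# Toy illustration of Lemma A.8: the transition of a term depends only on the
# environment entries at channels appearing free in the term.
# Hypothetical model, not the paper's calculus.

def free_channels(w):
    return {w}                     # our toy term mentions a single channel

def transition(env, w):
    """Label of the only transition of the toy receiver w under env."""
    if env[w] == "exp":
        return "tau"               # cf. Rule (RcvLate): channel exposed
    return "sigma"                 # cf. Rule (Timeout): channel idle

g1 = {"c": "exp", "d": "idle"}
g2 = {"c": "exp", "d": "exp"}      # agrees with g1 on channel c only
w = "c"
assert all(g1[c] == g2[c] for c in free_channels(w))
assert transition(g1, w) == transition(g2, w)   # same label, same derivative
```

The real proof performs a rule induction to establish the same invariant for every rule of the intensional semantics, since each side condition (such as Γ ⊢ c : exp) only mentions channels occurring in the term.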