Independence and concurrent separation logic

A compositional Petri net-based semantics is given to a simple language allowing pointer manipulation and parallelism. The model is then applied to give a notion of validity to the judgements made by concurrent separation logic that emphasizes the process-environment duality inherent in such rely-guarantee reasoning. Soundness of the rules of concurrent separation logic with respect to this definition of validity is shown. The independence information retained by the Petri net model is then exploited to characterize the independence of parallel processes enforced by the logic. This is shown to permit a refinement operation capable of changing the granularity of atomic actions.


Introduction
The foundational work of Hoare on parallel programming [Hoa72] identified the fact that attributing an interleaved semantics to parallel languages is problematic. Three areas of difficulty were isolated, quoted directly:
• That of defining a 'unit of action'.
• That of implementing the interleaving on genuinely parallel hardware.
• That of designing programs to control the fantastic number of combinations involved in arbitrary interleaving.
The significance of these problems increases with developments in hardware, such as multiple-core processors, that allow primitive machine actions to occur at the same time.
As Hoare went on to explain, a feature of concurrent systems in the physical world is that they are often spatially separated, operating on completely different resources and not interacting. When this is so, the systems are independent of each other, and therefore it is unnecessary to consider how they interact. This perspective can be extended by regarding computer processes as spatially separated if they operate on different memory locations. The problems above are resolved if the occurrence of non-independent parallel actions is prohibited except in rare cases where atomicity may be assumed, as might be enforced using the constructs proposed in [Dij68,Bri72].
Independence models for concurrency allow semantics to be given to parallel languages in a way that can tackle the problems associated with an interleaved semantics. The common core of independence models is that they record when actions are independent, and that independent actions can be run in either order or even concurrently with no consequence on their effect. This mitigates the increase in the state space since unnecessary interleavings of independent actions need not be considered (see e.g. [CGMP99] for applications to model checking). Independence models also permit easier notions of refinement which allow the assumed atomicity of actions to be changed.
It is surprising that, to our knowledge, there has been no comprehensive study of the semantics of programming languages inside an independence model. The first component of our work gives such a semantics in terms of a well-known independence model, namely Petri nets. Our model isolates the specification of the control flow of programs from their effect on the shared state. It indicates what appears to be a general method (an alternative to Plotkin's structural operational semantics) for giving a structural Petri net semantics to a variety of languages; see the Conclusion, Section 7.
The language that we consider is motivated by the emergence of concurrent separation logic [O'H07], the rules of which form a partial correctness judgement about the execution of pointer-manipulating concurrent programs. Reasoning about such programs has traditionally proved difficult due to the problem of variable aliasing. For instance, Owicki and Gries' system for proving properties of parallel programs that do not manipulate pointers [OG76] essentially requires that the programs operate on disjoint collections of variables, thereby allowing judgements to be composed. In the presence of pointers, the same syntactic condition cannot be imposed to yield a sound logic since distinct variables may point to the same memory location, thereby allowing arbitrary interaction between the processes. To give a specific example, Owicki and Gries' system would allow a judgement of the form {x → 0 ∧ y → 0} x := 1 ∥ y := 2 {x → 1 ∧ y → 2}, indicating that the result of assigning 1 to the program variable x concurrently with assigning 2 to y from a state where x and y both initially hold value 0 is a state where x holds value 1 and y holds value 2. The judgement is sound because the variables x and y are distinct. If pointers are introduced to the language, however, it is not sound to conclude that {x → 0 ∧ y → 0} [x] := 1 ∥ [y] := 2 {x → 1 ∧ y → 2}, which would indicate that assigning 1 to the location pointed to by x and 2 to the location pointed to by y yields a state in which x points to a location holding 1 and y points to a location holding 2, since x and y may both point to the same location.
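The aliasing problem just described can be seen concretely in a few lines of Python. The representation (a heap as a dictionary, variables holding location names) is purely illustrative and not part of the paper's formal development:

```python
# Toy illustration of the aliasing problem: the heap is a dict from
# locations to values, and program variables x, y hold locations.
heap = {"l1": 0}
x, y = "l1", "l1"        # x and y alias: they hold the same location

heap[x] = 1              # [x] := 1
heap[y] = 2              # [y] := 2 (one interleaving of the two actions)

# The naive postcondition "the location x points to holds 1" fails:
assert heap[x] == 2      # the second write clobbered the first
```

The same two writes performed on distinct locations would validate the naive postcondition; only aliasing breaks it.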
At the core of separation logic [Rey00,IO01], initially presented for non-concurrent programs, is the separating conjunction, ϕ * ψ, which asserts that the state in which processes execute may be split into two parts, one part satisfying ϕ and the other ψ. The separating conjunction was used by O'Hearn to adapt Owicki and Gries' system to provide a rule for parallel composition suitable for pointer-manipulating programs [O'H07].
As we shall see, the rule for parallel composition is informally understood by splitting the initial state into two parts, one owned by the first process and the other by the second. Ownership can be seen as a dynamic constraint on the interference to be assumed: parallel processes always own disjoint sets of locations and only ever act on locations that they own. As processes evolve, ownership of locations may be transferred using a system of invariants (an example is presented in Section 4). A consequence of this notion of ownership is that the rules discriminate between the parallel composition of processes and their interleaved expansion. For example, the logic does not allow the judgement {ℓ → 0} [ℓ] := 1 ∥ [ℓ] := 1 {ℓ → 1}, which informally means that the effect of two processes acting in parallel which both assign the value 1 to the memory location ℓ from a state in which ℓ holds 0 is to yield a state in which ℓ holds 1. However, if we adopt the usual rule for the nondeterministic sum of processes, the corresponding judgement is derivable for their interleaved expansion [ℓ] := 1; [ℓ] := 1. One would hope that the distinction that the logic makes between concurrent processes and their interleaved expansion is captured by the semantics; the Petri net model that we give does so directly.
The rules of concurrent separation logic contain a good deal of subtlety, and so lacked a completely formal account until the pioneering proof of their soundness due to Brookes [Bro07]. The proof that Brookes gives is based on a form of interleaved trace semantics. The presence of pointers within the model alongside the possibility that ownership of locations is transferred means, however, that the way in which processes are separated is absolutely non-trivial, which motivates strongly the study of the language within an independence model. We therefore give a proof of soundness using our net model and then characterize entirely semantically the independence of concurrent processes in Theorem 5.4.
It should be emphasized that the model that we present is different from Brookes' since it provides an explicit account of the intuitions behind ownership presented by O'Hearn. It involves taking the original semantics of the process and embellishing it to capture the semantics of the logic. The proof technique that we employ defines validity of assertions in a way that captures the rely-guarantee reasoning [Jon83] emanating from ownership in separation logic directly, and in a way that might be applied in other situations.
In [Rey04], Reynolds argues that the separation of parallel processes arising from the logic allows store actions that were assumed to be atomic, in fact, to be implemented as composite actions (seen as a change in their granularity) with no effect on the validity of the judgement. Independence models are suited to modeling situations where actions are not atomic, a perspective advocated by Lamport and Pratt [Pra86,Lam86]. We introduce a novel form of refinement, inspired by that of [vGG89], and show how this may be applied to address the issue of granularity using our characterization of the independence of processes arising from the logic.

Terms and states
Concurrent separation logic is a logic for programs that operate on a heap. A heap is a structure recording the values held by memory locations that allows the existence of pointers as well as providing primitives for the allocation and deallocation of memory locations. A heap can be seen as a finite partial function from a set of locations Loc to a set of values Val:

Heap ≝ Loc ⇀fin Val

We will use ℓ to range over elements of Loc and v to range over elements of Val. As stated, a heap location can point to another location, so we require that Loc ⊆ Val. We shall say that a location is current (or allocated) in a heap if the heap is defined at that location. The procedure of making a non-current location current is allocation, and the reverse procedure is called deallocation. If h is a heap and h(ℓ) = ℓ′, there is no implicit assumption that h(ℓ′) is defined. Consequently, heaps may contain dangling pointers.
In addition to operating on a heap, the programs that we shall consider shall make use of critical regions [Dij68] protected by resources. The mutual exclusion property that they provide is that no two parallel processes may be inside critical regions protected by the same resource. We will write Res for the set of resources and use r to range over its elements. Critical regions are straightforwardly implemented by recording, for each resource, whether the resource is available or unavailable. A process may enter a critical region protected by r only if r is available; otherwise it is blocked and may not resume execution until the resource becomes available. The process makes r unavailable upon entering the critical region and makes r available again when it leaves the critical region. The language also has a primitive, resource w do t od, which says that the variable w represents a resource local to t.
The syntax of the language that we will consider is presented in Figure 1. The symbol α is used to range over heap actions, which are actions on the heap that might change the values held at locations but do not affect the domain of definition of the heap. That is, they neither allocate nor deallocate locations. We reserve the symbol b for boolean guards, which are heap actions that may proceed without changing the heap if the boolean b holds.
Provision for allocation within our language is made via the alloc(ℓ) primitive for ℓ ∈ Loc, which makes a location current and sets ℓ to point at this location. For symmetry, dealloc(ℓ) makes the location pointed to by ℓ non-current if ℓ points to a current location. Writing a heap as the set of values that it holds for each allocated location, the effect of the command alloc(ℓ) on the heap {ℓ → 0} might be to form a heap {ℓ → ℓ ′ , ℓ ′ → 1} if the location ℓ ′ is chosen to be allocated and is assigned initial value 1. The effect of the command dealloc(ℓ) on the heap {ℓ → ℓ ′ , ℓ ′ → 1} would be to form the heap {ℓ → ℓ ′ }.
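The effect of alloc(ℓ) and dealloc(ℓ) on heaps-as-dictionaries can be sketched as follows, tracing the example from the text. The helper names and the particular nondeterministic choices (the fresh location l2, initial value 1) are ours:

```python
# Sketch of allocation and deallocation on a heap represented as a dict.
def alloc(heap, l, l_prime, v_prime):
    """Make the non-current location l_prime current with initial value
    v_prime, and set l to point at it (one nondeterministic outcome)."""
    assert l_prime not in heap          # l_prime must be non-current
    heap = dict(heap)
    heap[l_prime] = v_prime
    heap[l] = l_prime
    return heap

def dealloc(heap, l):
    """Make the location pointed to by l non-current, provided it is
    current; l is left as a dangling pointer."""
    heap = dict(heap)
    target = heap[l]
    assert target in heap               # must point at a current location
    del heap[target]
    return heap

h = {"l": 0}
h = alloc(h, "l", "l2", 1)              # {l: l2, l2: 1}
h = dealloc(h, "l")                     # {l: l2} -- l now dangles
```

Note that after deallocation the heap is no longer defined at l2, yet l still holds the value l2, exactly the dangling pointer permitted by the definition of heaps above.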
The guarded sum α.t + α ′ .t ′ is a process that executes as t if α takes place or as t ′ if α ′ takes place. We refer the reader to Section ? for a brief justification for disallowing non-guarded sums.
As mentioned earlier, critical regions are provided to control concurrency: the subprocess t inside with r do t od can only run when no other process is inside a critical region protected by r. The term resource w do t od has the resource variable w bound within t, asserting that a resource is to be chosen that is local to t and used for w. Consequently, in the process (resource w do with w do t₁ od od) ∥ (resource w do with w do t₂ od od) the sub-processes t₁ and t₂ may run concurrently since they must be protected by different resources, one local to the process on the left and the other local to the process on the right. To model this, we shall say that the construct resource w do t od binds the variable w within t, and the variable w is free in with w do t od. We write fv(t) for the free variables in t and say that a term is closed if it contains no free resource variables; we shall restrict attention to such terms. We write [r/w]t for the term obtained by substituting r for free occurrences of the variable w within t. As standard, we will identify terms up to the alpha-equivalence ≡ induced by renaming bound occurrences of variables. The notation res(t) is adopted to represent the resources occurring in t.
The semantics of the term resource w do t od will involve first picking a 'fresh' resource r and then running [r/w]t. It will therefore be necessary to record, during the execution of a term, which resources are current.

Terms:
t ::= α                      heap action
   | alloc(ℓ)                heap allocation
   | dealloc(ℓ)              heap disposal
   | t; t                    sequential composition
   | t ∥ t                   parallel composition
   | α.t + α.t               guarded sum
   | while b do t od         iteration
   | with r do t od          critical region
   | resource w do t od      critical region (local)

Figure 1: The syntax of terms
The way in which we shall formally model the state in which processes execute is motivated by the way in which we shall give the net semantics to closed terms. We begin by fixing four disjoint sets of conditions: D, comprising pairs (ℓ, v) of locations and values; L, comprising elements curr(ℓ) for locations ℓ; R, comprising the resources themselves; and N, comprising elements curr(r) for resources r. A state σ is defined to be a tuple (D, L, R, N) where D ⊆ D represents the values held by locations in the heap; L ⊆ L represents the set of current, or allocated, locations of the heap; R ⊆ R represents the set of available resources; and N ⊆ N represents the set of current resources. The sets D, L, R and N are disjoint, so no ambiguity arises from writing, for example, (ℓ, v) ∈ σ.
The interpretation of a state for the heap is that (ℓ, v) ∈ D if ℓ holds value v and that curr(ℓ) ∈ L if ℓ is current. For resources, r ∈ R if the resource r is available and curr(r) ∈ N if r is current. It is clear that only certain such tuples of subsets are sensible. In particular, the heap must be defined precisely on the set of current locations, and only current resources may be available. It is easy to see that the L component of any given consistent state may be inferred from the D component. It will, however, be useful to retain this information separately for when the net semantics is given. We shall call D ⊆ D a heap when it is a finite partial function from locations to values, and shall write ℓ → v for its elements rather than (ℓ, v). We shall frequently make use of the following definition of the domain of a heap D:

dom(D) ≝ {ℓ | ∃v. ℓ → v ∈ D}

Process models
The definition of state that we have adopted permits a net semantics to be defined. Before doing so, we shall define how heap actions are to be interpreted and then give a transition semantics to closed terms.
3.1. Actions. The earlier definition of state allows a very general form of heap action to be defined that forms a basis for both the transition and net semantics. We assume that we are given the semantics of each primitive action α as a set A_α of pairs of heaps:

A_α ⊆ Heap × Heap

We require that whenever (D₁, D₂) ∈ A_α, it is the case that D₁ and D₂ are (the graphs of) partial functions with the same domain.
The interpretation is that α can proceed in heap D if there is a pair (D₁, D₂) ∈ A_α such that D has the same value as D₁ wherever D₁ is defined. The resulting heap is formed by updating D to have the same value as D₂ wherever it is defined. It is significant that this definition allows us to infer precisely the set of locations upon which an action depends. The requirement on the domains of D₁ and D₂ ensures that actions preserve consistent markings (Lemma 3.25).
Example 3.1 (Assignment). For any two locations ℓ and ℓ′, let [ℓ] := [ℓ′] represent the action that copies the value held at location ℓ′ to location ℓ. Its semantics is as follows:

A_{[ℓ] := [ℓ′]} = {({ℓ → v, ℓ′ → v′}, {ℓ → v′, ℓ′ → v′}) | v, v′ ∈ Val}

Following the informal account above of the semantics of actions, the action depends on both locations: ℓ′ occurs in the domain of both heaps because, in the semantics, the value at ℓ′ is read and left unchanged.

Example 3.2 (Booleans). Boolean guards b are actions that wait until the boolean expression holds and may then take place; they do not update the state. A selection of literals may be defined. For example:

A_{[ℓ] = v} = {({ℓ → v}, {ℓ → v})}
A_{[ℓ] = [ℓ′]} = {({ℓ → v, ℓ′ → v}, {ℓ → v, ℓ′ → v}) | v ∈ Val}

The first gives the semantics of an action that proceeds only if ℓ holds value v and the second gives the semantics of an action that proceeds only if the locations ℓ and ℓ′ hold the same value.
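The interpretation of actions as sets of heap pairs can be made executable in a short sketch. The representation (heaps as dicts, A_α as a list of pairs, and the small value space) is our own illustration of the definition, not the paper's formalism:

```python
# A sketch of the action semantics: A_alpha is a set of pairs (D1, D2)
# of heaps with equal domains. An action can proceed in heap D if some
# D1 agrees with D wherever D1 is defined; the result updates D by D2.
def agrees(D, D1):
    return all(D.get(l) == v for l, v in D1.items())

def step(D, A):
    """All heaps reachable by performing an action with semantics A in D."""
    results = []
    for D1, D2 in A:
        if agrees(D, D1):
            new = dict(D)
            new.update(D2)
            results.append(new)
    return results

Val = range(3)   # a small value space, for illustration only

# Example 3.1: [l] := [l2] copies the value at l2 into l.
assign = [({"l": v, "l2": w}, {"l": w, "l2": w}) for v in Val for w in Val]

# Example 3.2: the guard [l] = 1 proceeds, without change, iff l holds 1.
guard = [({"l": 1}, {"l": 1})]

assert step({"l": 0, "l2": 2}, assign) == [{"l": 2, "l2": 2}]
assert step({"l": 0}, guard) == []      # the guard blocks
```

Note how the domain of each D1 records exactly the locations the action depends on, as remarked above.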
Since boolean actions shall not modify the heap, they shall possess the property that:

if (D₁, D₂) ∈ A_b then D₁ = D₂.

This is preserved by the operations defined below. For heaps D and D′, we use D ↑ D′ to mean that D and D′ are compatible as partial functions, and D ̸↑ D′ otherwise, i.e. if they disagree on the values assigned to a common location.
By insisting on minimality in the clause for ¬b, we form an action that is defined at as few locations as possible to refute all grounds for b.
3.2. Transition semantics. As an aid to understanding the net model, and in particular to give a model with respect to which we can prove its correspondence, a transition semantics for closed terms (terms such that fv(t) = ∅) is given in Figure 2. A formal relationship between the two semantics is presented in Theorem 3.27. The transition semantics is given by means of labelled transition relations of the forms t, σ −λ→ t′, σ′ and t, σ −λ→ σ′. As usual, the first form of transition indicates that t performs an action labelled λ in state σ to yield a resumption t′ and a state σ′. The second indicates that t in state σ performs an action labelled λ to terminate and yields a state σ′. Labels follow the grammar:

λ ::= α                      heap action
   | alloc(ℓ, v, ℓ′, v′)     allocation
   | dealloc(ℓ, ℓ′, v′)      deallocation
   | decl(r)                 resource declaration
   | end(r)                  end of resource declaration
   | acq(r)                  resource acquisition (critical region entry)
   | rel(r)                  resource release (critical region exit)
In the transition semantics, we write σ ⊕ σ′ for the union of the components of two states where they are disjoint, and impose the implicit side-condition that this is defined wherever it is used. For example, this implicit side-condition means, in the rule (Alloc), that for alloc(ℓ, v, ℓ′, v′) to occur we must have curr(ℓ′) ∉ σ, and hence that ℓ′ was initially non-current. Similarly, the rule (Res) can only be applied to derive a transition labelled decl(r) if the resource r was not initially current. The syntax of terms is extended temporarily to include rel r and end r, which are special terms used in the rules (Rel) and (End). These, respectively, are attached to the ends of terms protected by critical regions and the ends of terms in which a resource was declared.
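The operator ⊕ and its implicit side-condition can be sketched over states represented as sets of conditions (this encoding of conditions as tagged tuples is ours):

```python
# A sketch of the operator ⊕ on states: the union of two states, defined
# only when their components are disjoint. The disjointness requirement
# is exactly the implicit side-condition exploited by rules such as
# (Alloc).
def oplus(sigma, sigma_prime):
    assert not (sigma & sigma_prime), "states overlap: ⊕ undefined"
    return sigma | sigma_prime

sigma = {("l", "l2"), ("curr", "l")}       # l is current and points at l2
extended = oplus(sigma, {("curr", "l2")})  # ok: l2 was non-current in sigma
assert ("curr", "l2") in extended
# oplus(sigma, {("curr", "l")}) would fail: l is already current
```

Requiring curr(ℓ′) to be absent from σ before adjoining it is precisely how the rule (Alloc) guarantees that only non-current locations are allocated.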
For conciseness, we do not give an error semantics to situations in which non-current locations or resources are used; instead, the process will become stuck. We show in Section 4.3 that such situations are excluded by the logic.
3.3. Petri nets. Petri nets, introduced by Petri in his 1962 thesis [Pet62], are a well-known model for concurrent computation. It is beyond the scope of the current article to provide a full account of the many variants of Petri net and their associated theories; we instead refer the reader to [BRR87] for a good account. Roughly, a Petri net can be thought of as a transition system where, instead of a transition occurring from a single global state, an occurrence of an event is imagined to affect only the conditions in its neighbourhood. Petri nets allow a derived notion of independence of events; two events are independent if their neighbourhoods of conditions do not intersect.
We base our semantics on the following well-known variant of Petri net (cf. the 'basic' nets of [CW01] and [WN95]): a net is a tuple (B, E, •(−), (−)•, M₀). The set B comprises the conditions of the net, the set E consists of the events of the net, and M₀ is the subset of B of marked conditions (the initial marking). The maps •(−) : E → Pow(B) and (−)• : E → Pow(B) are the precondition and postcondition maps, respectively.
Petri nets have an appealing graphical representation, with:
• circles to represent conditions,
• bold lines to represent events,
• arrows from conditions to events to represent the precondition map,
• arrows from events to conditions to represent the postcondition map, and
• tokens (dots) inside conditions to represent the marking.
Action within nets is defined according to a token game which defines how the marking of the net changes according to the firing of events. An event e can fire if all its preconditions are marked and, following their un-marking, all the postconditions are not marked. That is, in marking M:

(1) •e ⊆ M, and
(2) (M ∖ •e) ∩ e• = ∅.

Such an event is said to have concession or to be enabled. The marking following the occurrence of e is obtained by removing the tokens from the preconditions of e and placing a token in every postcondition of e. We write M −e→ M′ to mean that e has concession in marking M and that M′ is the marking resulting from its occurrence. If constraint (2) does not hold but constraint (1) does, so the preconditions are all marked (have a token inside) but following removal of the tokens from the preconditions there is a token in some postcondition, there is said to be contact in the marking and the event cannot fire.

Consider the following example Petri net, with its transition system between markings derived according to the token game. The event e₁ is the only event with concession in the initial marking {a, g}. Its occurrence yields the marking obtained by un-marking its preconditions and marking its postconditions, namely {b, c, g}. In the marking {b, c, g}, contact prevents the occurrence of e₄ since its postcondition g is marked following removal of the token from its precondition c. However, in the marking {b, c, g} both event e₂ and event e₃ can occur. Note that the occurrence of e₂ in marking {b, c, g} does not affect the occurrence of e₃ and vice versa since the two events operate on completely disjoint sets of conditions.
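The token game above can be sketched directly, with an event represented as a pair (preconditions, postconditions) of sets. We replay the example net; the conditions private to e₂ and e₃ (here d and f) are our guess at the shape of the pictured net, which the extraction has lost:

```python
# A sketch of the token game for this variant of net.
def enabled(event, M):
    """Concession: constraints (1) and (2) both hold in marking M."""
    pre, post = event
    return pre <= M and not ((M - pre) & post)

def contact(event, M):
    """Constraint (1) holds but constraint (2) fails."""
    pre, post = event
    return pre <= M and bool((M - pre) & post)

def fire(event, M):
    """The marking resulting from the occurrence of an enabled event."""
    assert enabled(event, M)
    pre, post = event
    return (M - pre) | post

e1 = ({"a"}, {"b", "c"})
e2 = ({"b"}, {"d"})      # d and f are illustrative condition names
e3 = ({"c"}, {"f"})
e4 = ({"c"}, {"g"})

M0 = {"a", "g"}
M1 = fire(e1, M0)                        # yields {b, c, g}
assert contact(e4, M1)                   # g already marked: e4 cannot fire
assert enabled(e2, M1) and enabled(e3, M1)
```

Firing e₂ removes only the token on b and firing e₃ only the token on c, so each leaves the other enabled, matching the observation about their disjoint conditions.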
For any event e ∈ E, define the neighbourhood of e to be •e• ≝ •e ∪ e•. The standard notion of independence within this form of Petri net is to say that two events e₁ and e₂ are independent, written e₁ I e₂, if their neighbourhoods are disjoint. That is,

e₁ I e₂  iff  •e₁• ∩ •e₂• = ∅.

It is easy to see in general that the occurrences of independent events in a marking do not affect each other.

3.4. Overview of net semantics. Before giving the formal definition of the net semantics of closed terms, we shall illustrate by means of an example how our semantics shall be defined. First, we shall draw the semantics of an action toggle(ℓ, 0, 1) that toggles the value held at a location ℓ between 0 and 1.
Notice that in the above net there are conditions to represent the shared state in which processes execute, including for example the values held at locations (we have only drawn conditions that are actually used by the net). There are also conditions to represent the control point of the process. The net pictured on the left is in its initial marking of control conditions and the net on the right is in its terminal marking of control conditions, indicating successful completion of the process following the toggle of the value; the marking of the net initially had the state condition ℓ → 0 marked and finished with the condition ℓ → 1 marked. There is an event present in the net for each way that the action could take place: one event for toggling the value from 0 to 1 and another event for toggling the value from 1 to 0. Only the first event could occur in the initial marking of the net on the left, and no event can occur in the marking on the right since the control conditions are not appropriately marked. The parallel composition toggle(ℓ, 0, 1) ∥ toggle(ℓ, 0, 1) can be formed by taking two copies of the net for toggle(ℓ, 0, 1) and forcing them to operate on disjoint sets of control conditions. An example run of this net would involve first the top event changing the value of ℓ from 0 to 1 and then the bottom event changing ℓ back from 1 to 0. The resulting marking of control conditions would be equal to the terminal conditions of the net, so no event would have concession in this marking. The net representing the sequential composition (toggle(ℓ, 0, 1) ∥ toggle(ℓ, 0, 1)); (toggle(ℓ, 0, 1) ∥ toggle(ℓ, 0, 1)) is formed by a 'gluing' operation that joins the terminal conditions of one copy of the net for toggle(ℓ, 0, 1) ∥ toggle(ℓ, 0, 1) to the initial conditions of another copy. (In this example net, for clarity we do not show the state conditions.)

3.5. Net structure.
As outlined above, within the nets that we give for processes we distinguish two forms of condition, namely control conditions and state conditions. The markings of these sets of conditions determine the control point of the process and the state in which it is executing, respectively. When we give the net semantics, we will make use of the closure of the set of control conditions under various operations.
Definition 3.5 (Conditions). Define the set of control conditions C, ranged over by c, to be the least set such that:
• C contains distinguished elements i and t, standing for 'initial' and 'terminal', respectively.
• If c ∈ C then r:c ∈ C for all r ∈ Res and i:c ∈ C for all i ∈ {1, 2}, to distinguish processes working on different resources or arising from different subterms.
• If c, c′ ∈ C then (c, c′) ∈ C, to allow the 'gluing' operation above.
Define the set of state conditions S to be D ∪ L ∪ R ∪ N.
A state σ = (D, L, R, N ) corresponds to the marking D∪L∪R∪N of state conditions in the obvious way. Similarly, if C is a marking of control conditions and σ is a state, the pair (C, σ) corresponds to the marking C ∪ σ. We therefore use the notations interchangeably.
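The correspondence between a state and its marking of state conditions can be sketched as a simple encoding; the tags used to keep the four kinds of condition disjoint are ours:

```python
# A sketch of the correspondence between states (D, L, R, N) and markings
# of state conditions, with each kind of condition kept disjoint by a tag.
def marking_of_state(D, L, R, N):
    """The marking of state conditions corresponding to (D, L, R, N)."""
    return ({("val", l, v) for (l, v) in D}        # l → v
            | {("curr_loc", l) for l in L}         # curr(l)
            | {("avail", r) for r in R}            # r available
            | {("curr_res", r) for r in N})        # curr(r)

# A state with one current location holding 0 and one current, available
# resource:
M = marking_of_state(D={("l", 0)}, L={"l"}, R={"r"}, N={"r"})
assert ("val", "l", 0) in M and ("curr_loc", "l") in M
assert len(M) == 4
```

Because the four components land in disjoint parts of the marking, no information is lost in passing between the two views, which is what licenses using them interchangeably.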
The nets that we form shall be extensional in the sense that two events are equal if they have the same preconditions and the same postconditions. An event can therefore be regarded as a tuple e = (C, σ, C′, σ′) with preconditions •e ≝ C ∪ σ and postconditions e• ≝ C′ ∪ σ′. To obtain a concise notation for working with events, we write C e for the pre-control conditions of e:

C e ≝ •e ∩ C.

We likewise define notations e C , D e, L e etc., and call these the components of e by virtue of the fact that it is sufficient to define an event through the definition of its components. The pre-state conditions of e are S e = D e ∪ L e ∪ R e ∪ N e, and we define e S similarly.
Two markings of control conditions are of particular importance: those marked when the process starts executing and those marked when the process has terminated. We call these the initial control conditions I and terminal control conditions T , respectively. We shall call a net with a partition of its conditions into control and state with the subsets of control conditions I and T an embedded net. For an embedded net N , we write Ic(N ) for I and Tc(N ) for T , and we write Ev(N ) for its set of events. Observe that no initial marking of state conditions is specified.
The semantics of a closed term t shall be an embedded net, written N t . No confusion arises, so we shall write Ic(t) for Ic(N t ), and Tc(t) and Ev(t) for Tc(N t ) and Ev(N t ), respectively. The nets formed shall always have the same sets of control and state conditions; the difference shall arise in the events present in the nets. It would be a trivial matter to restrict to the conditions that are actually used.
As we give the semantics of closed terms, we will make use of several constructions on nets. For example, we wish the events of parallel processes to operate on disjoint sets of control conditions. This is done using a tagging operation on events. We define 1:e to be the event e changed so that

C(1:e) = 1:(C e)  and  (1:e)C = 1:(e C),

but otherwise unchanged in its action on state conditions. We define the notations 2:e and r:e where r ∈ Res similarly. The notations are extended pointwise to sets of events: for instance, 1:E ≝ {1:e | e ∈ E}. Another useful operation is what we call gluing two embedded nets together. For example, when forming the sequential composition of processes t₁; t₂, we want to enable the events of t₂ when t₁ has terminated. This is done by 'gluing' the two nets together at the terminal conditions of t₁ and the initial conditions of t₂, having made them disjoint on control conditions using tagging. Wherever a terminal condition c of Tc(t₁) occurs as a pre- or a postcondition of an event of t₁, every element of the set {1:c} × (2:Ic(t₂)) occurs in its place. Similarly, the events of t₂ use the set of conditions (1:Tc(t₁)) × {2:c′} instead of an initial condition c′ of Ic(t₂). A variety of control properties that the nets we form possess (Lemma 3.11), such as that all events have at least one pre-control condition, allows us to infer that it is impossible for an event of t₂ to occur before t₁ has terminated, and that thereafter it is impossible for t₁ to resume. An example follows shortly.
Assume a set P ⊆ C × C. Useful definitions to represent gluing are:

P ⊳ C ≝ {(c₁, c₂) | c₁ ∈ C and (c₁, c₂) ∈ P} ∪ {c₁ | c₁ ∈ C and ∄c₂.(c₁, c₂) ∈ P}
P ⊲ C ≝ {(c₁, c₂) | c₂ ∈ C and (c₁, c₂) ∈ P} ∪ {c₂ | c₂ ∈ C and ∄c₁.(c₁, c₂) ∈ P}

The first definition, P ⊳ C, indicates that an occurrence of c₁ in C is to be replaced by occurrences of (c₁, c₂) for every c₂ such that (c₁, c₂) occurs in P. The second definition, P ⊲ C, indicates that an occurrence of c₂ in C is to be replaced by occurrences of (c₁, c₂) for every c₁ such that (c₁, c₂) occurs in P.
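The two gluing operations transcribe directly into set comprehensions. The following sketch replays the example used later in the text, with C₁ = {a, b}, C₂ = {c, d} and P = C₁ × C₂:

```python
# The gluing operations P ⊳ C and P ⊲ C on sets of control conditions,
# transcribed from their definitions.
def glue_left(P, C):     # P ⊳ C
    return ({(c1, c2) for (c1, c2) in P if c1 in C}
            | {c for c in C if not any(c1 == c for (c1, _) in P)})

def glue_right(P, C):    # P ⊲ C
    return ({(c1, c2) for (c1, c2) in P if c2 in C}
            | {c for c in C if not any(c2 == c for (_, c2) in P)})

C1, C2 = {"a", "b"}, {"c", "d"}
P = {(c1, c2) for c1 in C1 for c2 in C2}

# Gluing identifies the two sides on the shared paired conditions:
assert glue_left(P, C1) == glue_right(P, C2) == P
# A condition not mentioned by P is left alone:
assert glue_left(P, {"x"}) == {"x"}
```

Note how every condition touched by P is replaced by pairs, while untouched conditions pass through unchanged, exactly as in the definitions above.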
The notation is extended to events to give an event P ⊳ e in the following way, recalling that gluing will only affect the control conditions used by an event and in particular not its state conditions:

C(P ⊳ e) = P ⊳ (C e)  and  (P ⊳ e)C = P ⊳ (e C),

with all other components unchanged. The notation P ⊲ e is defined similarly, and both notations are also extended to sets of events in the obvious pointwise manner. For any marking M = (C, σ), we will write P ⊳ M for (P ⊳ C, σ) and similarly write P ⊲ M for (P ⊲ C, σ).
To give an example, consider the gluings P ⊳ C 1 and P ⊲ C 2 where C 1 = {a, b} and C 2 = {c, d} are joined at P = C 1 × C 2 . Applying P ⊳ C 1 to the left net and P ⊲ C 2 to the right net below, this indicates how gluing is used to sequentially compose embedded nets: The operations of gluing and tagging affect only the control flow of events, not their effect on the marking of state conditions.
Proof. The first and fourth items are straightforward to prove. The remaining properties may be shown using a number of easily-demonstrated equations relating gluing and tagging, along with their counterparts for ⊲, for any subset of control conditions C.

3.6. Net semantics. The net semantics that we now give for closed terms is defined by induction on the size of terms, given in the obvious way. The reason why it is not given by induction on terms is that the semantics of resource w do t od is given according to the semantics of [r/w]t for all resources r.

⊲ Heap action: Let act(C,C′)(D₁, D₂) denote an event e with

C e = C,  e C = C′,  D e = D₁,  e D = D₂,

and all other components empty, i.e. L e = e L = R e = e R = N e = e N = ∅. For an action α, we define:

Ev(α) = {act({i},{t})(D₁, D₂) | (D₁, D₂) ∈ A_α}.

⊲ Allocation and deallocation: The command alloc(ℓ) activates, by making current and assigning an arbitrary value to, a non-current location and sets ℓ to point at it. For symmetry, dealloc(ℓ) deactivates the current location pointed to by ℓ. We begin by defining two further event notations. First, alloc(C,C′)(ℓ, v, ℓ′, v′) is the event e such that C e = C and e C = C′ and

D e = {ℓ → v},  e D = {ℓ → ℓ′, ℓ′ → v′},  e L = {curr(ℓ′)},

with otherwise empty components, which changes ℓ′ from being non-current to current, gives it value v′ and changes the value held at ℓ from v to ℓ′. If the condition curr(ℓ′) is marked before the event takes place, contact occurs, so the event has concession only if the location ℓ′ is not initially current. Second, dealloc(C,C′)(ℓ, ℓ′, v′) is the event e such that C e = C and e C = C′ and

D e = {ℓ → ℓ′, ℓ′ → v′},  e D = {ℓ → ℓ′},  L e = {curr(ℓ′)},

with otherwise empty components, which does the converse of allocation. The location ℓ is left with a dangling pointer to ℓ′. The semantics of allocation is given by:

Ev(alloc(ℓ)) = {alloc({i},{t})(ℓ, v, ℓ′, v′) | v, v′ ∈ Val and ℓ′ ∈ Loc}.

Note that there is an event present for every value that ℓ might initially hold, every location ℓ′ that might be chosen, and every value that ℓ′ might be assumed to take initially.
The semantics of disposal is given by:

Ev(dealloc(ℓ)) = {dealloc({i},{t})(ℓ, ℓ′, v′) | ℓ′ ∈ Loc and v′ ∈ Val}.

⊲ Sequential composition: The sequential composition of terms involves gluing the terminal marking of the net for t₁ to the initial marking of the net for t₂. The operation is therefore performed on the set P = (1:Tc(t₁)) × (2:Ic(t₂)):

Ic(t₁; t₂) = P ⊳ 1:Ic(t₁)    Tc(t₁; t₂) = P ⊲ 2:Tc(t₂)
Ev(t₁; t₂) = P ⊳ (1:Ev(t₁)) ∪ P ⊲ (2:Ev(t₂)).
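The combination of tagging and gluing that builds the net for t₁; t₂ can be sketched over a simplified event representation: events are (pre, post) pairs of control conditions only, state conditions are elided, and all encodings are ours rather than the paper's:

```python
# A sketch of sequential composition of embedded nets by tagging and
# gluing. An embedded net is (I, T, Ev) with events as (pre, post) pairs
# of control conditions; state conditions are elided for brevity.
def tag(i, C):
    return {(i, c) for c in C}

def glue_left(P, C):     # P ⊳ C
    return ({(c1, c2) for (c1, c2) in P if c1 in C}
            | {c for c in C if not any(c1 == c for (c1, _) in P)})

def glue_right(P, C):    # P ⊲ C
    return ({(c1, c2) for (c1, c2) in P if c2 in C}
            | {c for c in C if not any(c2 == c for (_, c2) in P)})

def seq(net1, net2):
    """Glue the terminal conditions of net1 to the initial conditions of
    net2, after tagging to make the two nets disjoint."""
    I1, T1, Ev1 = net1
    I2, T2, Ev2 = net2
    P = {(c1, c2) for c1 in tag(1, T1) for c2 in tag(2, I2)}
    ev1 = {(frozenset(glue_left(P, tag(1, pre))),
            frozenset(glue_left(P, tag(1, post)))) for (pre, post) in Ev1}
    ev2 = {(frozenset(glue_right(P, tag(2, pre))),
            frozenset(glue_right(P, tag(2, post)))) for (pre, post) in Ev2}
    return glue_left(P, tag(1, I1)), glue_right(P, tag(2, T2)), ev1 | ev2

# Two one-event nets from initial condition i to terminal condition t:
n = ({"i"}, {"t"}, {(frozenset({"i"}), frozenset({"t"}))})
I, T, Ev = seq(n, n)
assert I == {(1, "i")} and T == {(2, "t")}
# The first event now ends on the glued condition ((1,'t'), (2,'i')),
# which is precisely the precondition of the second event:
assert any(((1, "t"), (2, "i")) in post for (pre, post) in Ev)
```

Since the second net's events have the glued conditions as preconditions, they can only fire once the first net has terminated, as the control properties discussed above guarantee.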
The formation of the sequential composition on control conditions may be drawn schematically as: ⊲ Parallel composition: The control flow of the parallel composition of processes is autonomous; interaction occurs only through the state. We therefore force the events of the two processes to work on disjoint sets of control conditions by giving them different tags: Note that the semantics of parallel composition is associative and commutative only if we regard nets up to isomorphism on the control conditions. ⊲ Guarded sum: Let t be the term α 1 .t 1 + α 2 .t 2 . The sum is formed by prefixing the actions onto the tagged nets representing the terms and then gluing the sets of terminal conditions. Let P = (1:Tc(t 1 )) × (2:Tc(t 2 )). Define: The net may be pictured schematically as follows, in which we have drawn only one representative event for each of α 1 and α 2 , and have elided the effect of these events on state conditions. On a technical point, one may wonder why the syntax of the language requires that sums possess guards. This is seemingly curious since the category of safe Petri nets, which intuitively underlies a category of embedded nets, has a coproduct construction. However, as remarked in Section 5 of [Win87], there are cases where the coproduct of nets does not coincide with the usual interpretation of nondeterministic sum. In Section 3.3 of [Win86], this is explained as the occurrence net unfolding (the 'behaviour') of the coproduct of two nets not being equal to the coproduct of their respective unfoldings. To repeat an example given there, letting + represent coproduct in the category of safe nets, we have: Consequently, using this coproduct as a definition of general sum, the runs of the net representing α + (while true do α ′ ) would consist of some finite number of executions of α ′ followed, possibly, by one of α.
Quite clearly, this does not correspond to the normal understanding of nondeterminism presented in the transition semantics. The restriction of processes to only use guarded sums allows us to recover the standard interpretation of sums (hence allowing the standard structural operational rule for sums). As stated in [Win87,Win86], another alternative would be to ensure that no event has a postcondition inside the initial conditions of the net. This would necessitate a different semantics for while loops, possibly along the lines of [vGV87] which would unfold one iteration of any loop. ⊲ Iteration: To form the net for while b do t od we glue the initial and the terminal conditions of b.t together and then add events to exit the loop when ¬b holds. Let P = {i} × 1:Tc(t). Define: The loop can be visualized in the following way (in which we only present one event, e b , for the boolean b and one event, e ¬b , for the boolean ¬b): Observe that the event decl (C,C ′ ) (r) will avoid contact, and thus be able to occur, only if the resource r is initially non-current.
First consider resource w do t od. Its initial and terminal conditions are defined as: Its events are defined as: As such, the semantics of resource variable binding represents the nondeterministic choice of the resource to be used for the variable. Only one resource shall be chosen for the variable, and it will initially have been non-current owing to the contact constraint described above. Note that the semantics is invariant under α-equivalence ≡. Now consider the term with r do t od. Its semantics is, informally, to acquire the resource r, then to execute t, and finally to release the resource r: 3.7. Runs of nets. A well-known property of independence models is that they support a form of run of the net in which independent actions are not interleaved: given any sequence of events of the net between two markings, we can swap the consecutive occurrences of any two independent events to yield a run between the same two markings. As seen, for example, in [WN95], this allows us to form an equivalence class of runs between the same markings, generating a Mazurkiewicz trace. This yields a partially ordered multiset, or pomset, run [Pra86], in which the independence of event occurrences is captured through them being incomparable.
• ≤ is a partial order on X; The elements of X can be thought of via λ as occurrences of events. Where two occurrences are unrelated through the order ≤, they can be thought of as occurring concurrently. Their independence ensures that the effect of this is defined simply as any sequential occurrence of the events.
Definition 3.9. A sequence is a path π = (X, ≤, λ) in which ≤ is a total order on X. Let x 1 be the event occurrence least in X according to ≤; let x 2 be the least event occurrence strictly greater than x 1 ; and so on, all the way up to x n which is the greatest event occurrence according to ≤ for n equal to the size of X (assumed to be finite). The sequence π can be written as e 1 , . . . , e n , where λ(x i ) = e i for all 0 < i ≤ n. Say that a sequence π = e 1 , . . . , e n is from marking Note that the empty path is from marking M to marking M for any marking M . We shall say that a pomset path (X, ≤, λ) is from marking M to M ′ if there exists any extension of ≤ to a total order ≤ ′ such that (X, ≤ ′ , λ) is a sequence from M to M ′ . As discussed, it is a standard result that any other extension of ≤ to a total order also yields a path from M to M ′ .
In fact, when we consider concurrent separation logic, we will only need to consider paths that are sequences, so in the rest of this paper we shall restrict attention to them; all our results generalize straightforwardly to pomsets. From now on, we shall therefore use the terms 'sequence', 'path' and 'run' interchangeably. We have chosen to highlight pomset runs (for conciseness, we have not presented other forms of 'run' of a net, such as causal nets) simply to show that Petri nets possess a notion of run that is non-interleaved.
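The swapping of adjacent independent events described above can be made concrete. The following Python fragment is a minimal sketch (the name `trace_class` and the representation of the independence relation as a set of pairs are our own choices, not part of the paper's formalism): it closes a sequential run under swaps of adjacent independent events, yielding its Mazurkiewicz trace as the set of equivalent runs.

```python
def trace_class(run, independent):
    """Close a sequential run under swaps of adjacent independent events,
    yielding its Mazurkiewicz trace as a set of equivalent runs."""
    seen = {tuple(run)}
    frontier = [tuple(run)]
    while frontier:
        r = frontier.pop()
        for i in range(len(r) - 1):
            a, b = r[i], r[i + 1]
            if (a, b) in independent or (b, a) in independent:
                swapped = r[:i] + (b, a) + r[i + 2:]
                if swapped not in seen:
                    seen.add(swapped)
                    frontier.append(swapped)
    return seen
```

For instance, with events a and b independent but c dependent on both, the runs a,b,c and b,a,c belong to the same trace, while the position of c is fixed; the corresponding pomset leaves a and b incomparable and places c above both.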
Write () for the path comprising no events and write e for the path with just a single event e. We introduce the notation π : M − ։ * M ′ to mean that π is a path from marking M to marking M ′ , and write M − ։ * M ′ if there exists a path from marking M to marking M ′ . We shall also write π 1 · π 2 for the composition of sequential paths; clearly, M

3.8. Structural properties. Here we establish characterizations of the runs of the net N t according to the structure of t. The reader may wish to pass over these technical, but important, details and go directly to Section 3.9.
A complicating factor in characterizing the runs is that we cannot describe a priori the markings reachable in the net for t from an initial state simply from the markings reachable from the nets representing the subterms of t (allowing for the substitution of resources for resource names) running from suitable initial states; this property, as one would expect, fails for parallel composition. However, we can establish properties about the control flow of programs. Since such properties are insensitive to the interaction through shared state of parallel processes, they may be established inductively on (the size of) terms. For an event e and markings of control conditions C and C ′ , we write C e − ։ C C ′ if the event e has concession in the marking C when considering only its control conditions, and its occurrence would result in the marking of control conditions C ′ . We write σ e − ։ S σ ′ if the event e has concession on state conditions in the marking σ and its occurrence yields the marking of state conditions σ ′ .

Lemma 3.10. For any event e and markings C, C ′ of control conditions and σ, σ ′ of state conditions:

Following the above notation, we shall write π : C − ։ * C C ′ if the path π is from the control marking C to C ′ , defined in the obvious way. We shall say that a marking C ′ is control-reachable from C if C − ։ * C C ′ . We begin with some fairly straightforward properties about the initial and terminal markings and about the sets of pre- and postconditions of each event being nonempty. The first and second items of the lemma below could even be seen as part of the definition of embedded net, since nonemptiness is necessary for the constructions above to result in nets with the expected behaviours. Together with the final property, they can be used to show that no event has concession in the terminal marking of the net. The third property eases the definitions constructing N t .
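The token game underlying the notions of concession and contact can be sketched concretely. The following minimal Python fragment (the names `has_concession` and `fire` are ours) treats a marking as a set of conditions and an event as a pair of precondition and postcondition sets; concession requires all preconditions to be marked and, to avoid contact, no postcondition outside the preconditions to be marked already.

```python
def has_concession(marking, pre, post):
    """An event has concession when all its preconditions are marked and,
    to avoid contact, no postcondition outside the preconditions is marked."""
    return pre <= marking and not ((post - pre) & marking)

def fire(marking, pre, post):
    """Occurrence of the event: unmark preconditions, mark postconditions."""
    assert has_concession(marking, pre, post)
    return (marking - pre) | post
```

For example, an event with precondition {p} and postcondition {q} has concession in the marking {p} but not in {p, q}, where its occurrence would cause contact on q.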
The following property, that any event occurring from the initial marking of a net has a precondition in the set of initial conditions (and the corresponding property that any event into the terminal marking of the net has a postcondition inside the terminal conditions), follows immediately from the previous lemma. It will be used frequently; for instance, to show that in the net N t 1 ; t 2 if e 1 is an event from N t 1 and e 2 is an event from N t 2 and e 2 immediately follows e 1 in some sequential run, then there is a control condition that occurs in both the postconditions of e 1 and the preconditions of e 2 . This property is used in Theorem 5.4. Lemma 3.12. For any closed term t, event e and marking C of control conditions of N t : Another important technical property that the embedded nets formed possess is that the marking of control conditions is equal to the set of initial conditions if either only initial conditions are marked or if all initial conditions are marked, for any reachable marking, and the similar statement for the terminal conditions of the net.
Definition 3.13. Say that an embedded net N is clear if, for any marking of control conditions C that is control-reachable from Ic(N ): (1) if either C ⊆ Ic(N ) or Ic(N ) ⊆ C then C = Ic(N ), and (2) if either C ⊆ Tc(N ) or Tc(N ) ⊆ C then C = Tc(N ). This is used in the proofs characterizing the markings reachable in the net N t in terms of the markings reachable in the nets representing t's subterms (for instance, to show that any run to completion of the net N t 1 ; t 2 can be obtained as a run of the net N t 1 followed by a run of the net N t 2 , since when t 1 in N t 1 ; t 2 terminates, precisely the terminal control conditions of N t 1 will be marked).
Some care is necessary since the proof that, for any closed term t, the net N t is clear itself requires understanding of the markings reachable in the net N t . To resolve this apparent 'circularity', when proving the properties of the net N t required to show that the net is clear, we shall assume that the nets representing the subterms of t are clear. We shall then prove that any net N t is clear, allowing us to use elsewhere the properties relating runs of the net N t to the runs of the nets of subterms of t. In effect, we will be proving clearness and the structural properties simultaneously, by induction on the size of terms.
3.8.1. Sequential composition. The technique that we use to relate the runs of the net for a term t to the runs of the nets of its subterms is to establish a suitably strong invariant relating the markings arising before and after the occurrence of any event present in N t , and then perform an induction on the length of the sequence. For instance, for sequential composition, we prove:

Lemma 3.14. Let P = 1:Tc(t 1 ) × 2:Ic(t 2 ). Assume that N t 1 and N t 2 are clear (Definition 3.13), and consider the net N t 1 ; t 2 . For any event e ∈ Ev(t 1 ; t 2 ) and any markings of control conditions C 1 and C 2 : • Ic(t 1 ; t 2 ) = P ⊳ 1:Ic(t 1 ) and Tc(t 1 ; t 2 ) = P ⊲ 2:Tc(t 2 ). • P = P ⊳ 1:C 1 iff C 1 = Tc(t 1 ), and P = P ⊲ 2:C 2 iff C 2 = Ic(t 2 ).
Proof. The first item is simply a re-statement of part of the definition of N t 1 ; t 2 and the second item is easy to show. The remaining parts follow from an analysis of the events of the net.
Using this result, it can be shown that any state reached in N t 1 ; t 2 is reached either as a run of N t 1 or as a run of N t 1 to a terminal marking followed by a run of N t 2 .

Proof.
A straightforward induction on the length of π using Lemma 3.14.
The converse result, that runs of the nets N t 1 and N t 2 , with appropriate intermediate states, give rise to runs of the net N t 1 ; t 2 , can also be shown.
3.8.2. Parallel composition.
• For any markings C 1 , C 2 and C ′ of control conditions and any event e ∈ Ev(t 1 ∥ t 2 ), if 1:C 1 ∪ 2:C 2 e − ։ C C ′ in N t 1 ∥ t 2 then either:
− there exists e 1 ∈ Ev(t 1 ) such that e = 1:e 1 and there exists C ′ 1 such that C ′ = 1:C ′ 1 ∪ 2:C 2 and C 1 e 1 − ։ C C ′ 1 ; or
− there exists e 2 ∈ Ev(t 2 ) such that e = 2:e 2 and there exists C ′ 2 such that C ′ = 1:C 1 ∪ 2:C ′ 2 and C 2 e 2 − ։ C C ′ 2 .
Using the preceding lemma, the paths of the net N t 1 ∥ t 2 on control conditions can be characterized as follows: if π : Ic(t 1 ∥ t 2 ) − ։ * C C then any event e in π is either equal to 1:e 1 for some event e 1 ∈ Ev(t 1 ) or equal to 2:e 2 for some event e 2 ∈ Ev(t 2 ). Furthermore, there exist C 1 and C 2 such that C = 1:C 1 ∪ 2:C 2 , where π 1 is obtained by removing events equal to 2:e 2 for some e 2 from π, and π 2 is obtained by removing events equal to 1:e 1 for some e 1 from π.
Notably, there is no analogue of Lemma 3.16 involving the markings of state conditions for the parallel composition.
3.8.3. Iteration. The net N while b do t 0 od allows runs that start either with an event showing that the boolean b holds or with an event showing that b fails. If b fails, the net enters its terminal marking and no further action occurs. If the boolean b passes, a run of the net N t 0 occurs, followed by the net re-entering its initial control state. The following lemma captures this; it is proved by establishing an invariant in the same way as was done for the sequential composition, though for brevity we shall omit it.
Lemma 3.19. Let t ≡ while b do t 0 od and suppose that N t 0 is clear. Let P = {i} × 1:Tc(t 0 ), and recall that P = Ic(t). Assume that π is a path such that π : There exists a natural number n ≥ 0 and a (possibly empty if n = 0) collection of paths π 1 , . . . , π n and heaps D 1 , . . . , D n such that, for each path π i : Write e i for the event act (P,1: Either: • C = Ic(t) and π = e 1 · (P ⊲ 1:π 1 ) · . . . · e n · (P ⊲ 1:π n ); • C = P ⊳ 1:C ′ for some marking of control conditions C ′ and there exists a path π ′ and heap D ′ such that π = e 1 · (P ⊲ 1:π 1 ) · . . . e n · (P ⊲ 1:π n ) · act (P,1: and π ′ : ; or • C = Tc(t) and there exists a heap D ′ such that π = e 1 · (P ⊲ 1:π 1 ) · . . . e n · (P ⊲ 1:π n ) · act (P,Tc(t)) (D ′ , D ′ ) and act (Ic(¬b),Tc(¬b)) (D ′ , D ′ ) : The three possible cases for the control marking C above correspond to the net being in its initial control state (following some number of iterations), the net being in the body of the loop, and the net being in its terminal control state following exit of the loop.
3.8.4. Sums. The behaviour of the net N α 1 .t 1 + α 2 .t 2 can be characterized as either the occurrence of an event of the action α 1 followed by a run of t 1 or the occurrence of an event of the action α 2 followed by a run of t 2 . Note that P = P ⊳ 1:C 1 if, and only if, C 1 = Tc(t 1 ), and the similar property holds for t 2 .
3.8.5. Resource declaration. A consequence of the following result is that any complete run of the net for resource w do t 0 od consists first of an event that chooses a resource r to be used for w, then a run of [r/w]t 0 , and finally an event that records that r is no longer in use.
Lemma 3.21. Suppose that the net N [r/w]t 0 is clear for any resource r and let t ≡ resource w do t 0 od. If in the net N t we have π : Ic(t) − ։ * C C then either:
• C = Ic(t) and π = (),
• there exist r ∈ Res, C ′ and π 0 such that C = r:C ′ and π = decl ({i},r:Ic([r/w]t 0 )) (r) · (r:π 0 ) with π 0 : Ic([r/w]t 0 ) − ։ * C C ′ in N [r/w]t 0 , or
• C = Tc(t) and there exist r ∈ Res and π 0 such that π = decl ({i},r:Ic([r/w]t 0 )) (r) · (r:π 0 ) · end (r:Tc([r/w]t 0 ),{t}) (r) with π 0 : Ic([r/w]t 0 ) − ։ * C Tc([r/w]t 0 ) in N [r/w]t 0 .
Proof. By establishing an invariant on markings between the occurrences of single events, as in Lemma 3.14.
3.8.6. Critical regions. The net N with r do t 0 od starts by acquiring the resource r. If this action cannot proceed because the resource is unavailable, no event will occur. If the resource is available, the process behaves as t 0 , and then releases the resource r if t 0 terminates.
Lemma 3.22. Let t ≡ with r do t 0 od and suppose that the net N t 0 is clear. If in the net N t we have π : Ic(t) − ։ * C C then either: • C = Ic(t) and π = (), • C = r:C 0 for some marking of control conditions C 0 and π = acq (Ic(t),r:Ic(t 0 )) (r) · (r:π 0 ) for some path π 0 such that π 0 : Ic(t 0 ) − ։ * C C 0 in N t 0 , or • C = Tc(t) and π = acq (Ic(t),r:Ic(t 0 )) (r) · (r:π 0 ) · rel (r:Tc(t 0 ),Tc(t)) (r) for some path π 0 such that π 0 : Ic(t 0 ) − ։ * C Tc(t 0 ) in N t 0 .

3.8.7. Clearness. Now that we have established these control properties of the runs of processes, we can show that the clearness property of Definition 3.13 does indeed hold in the net N t for any term t.
Lemma 3.23. For any closed term t, the net N t is clear.
Proof. The property is proved by induction on the size of terms, using the above control properties.
3.8.8. Preservation of consistency. The final attribute that we aim towards is that any marking of state conditions σ reachable in N t from a consistent initial marking of state conditions σ 0 is itself consistent. The only challenge here will be showing that if r ∈ σ then curr(r) ∈ σ, which shall require some understanding of the nature of the critical regions present in our semantics; the other requirements for consistency are straightforwardly shown to be preserved through the occurrence of the events present in N t .
We shall first show that any release of a resource is dependent on the prior acquisition of that resource: for any sequence π and any resource, there exists an injection f that associates each occurrence of a release event with a prior occurrence of an acquisition event of that resource, such that between the two occurrences there are no other actions on that resource.
Lemma 3.24. Let π be a sequence of events, π = (e 1 , . . . , e n ). For any closed term t, resource r and marking of control conditions C such that π :

Moreover, if there exist markings of state conditions σ 0 , . . . , σ n and markings of control conditions then there exists an f satisfying the above constraints and such that, for all

Proof. The first property is shown, using the control properties of sequences established above, by induction on the size of terms. The second property arises since if e i = acq (C i ,C ′ i ) (r) and e j = acq (C j ,C ′ j ) (r) for i < j then there must exist k such that i < k < j and e k = rel (C k ,C ′ k ) (r), and the symmetric property holds for release events.
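The injection f of the lemma can be computed by a simple scan over the sequence. The sketch below is our own illustration (the name `match_releases` and the abstraction of events to (operation, resource) pairs are assumptions, not the paper's notation): each release is paired with the most recent unmatched acquisition of the same resource, and the assertions reflect the well-bracketing guaranteed by the lemma.

```python
def match_releases(path):
    """Compute the injection f: each rel(r) occurrence is mapped to the
    most recent unmatched acq(r) occurrence, so that no other acq(r) or
    rel(r) lies between the two.  Events are (operation, resource) pairs;
    operations other than "acq" and "rel" are ignored."""
    pending = {}  # resource -> index of its unmatched acquisition
    f = {}
    for i, (op, r) in enumerate(path):
        if op == "acq":
            assert r not in pending, "resource acquired while already held"
            pending[r] = i
        elif op == "rel":
            assert r in pending, "release without a prior acquisition"
            f[i] = pending.pop(r)
    return f
```

For the sequence acq(r), act, rel(r), acq(r), rel(r), the injection maps the release at position 2 to the acquisition at position 0 and the release at position 4 to the acquisition at position 3.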
We are now able to show that the nets formed preserve the consistency of the markings of state conditions.

Lemma 3.25 (Preservation of consistent markings). For any closed term t, if (Ic(t), σ 0 ) − ։ * (C, σ) in the net N t and the marking σ 0 of state conditions is consistent then σ is consistent.
Proof. It is straightforward to prove by induction on the size of the term t that the events present in the net N t are all of one of the following forms: It is readily shown that each form of event preserves the consistency of the marking of state conditions, apart from the requirement that if r ∈ σ then curr(r) ∈ σ. Suppose, for contradiction, that π ′ is a path such that π ′ : (Ic(t), σ 0 ) − ։ * (C, σ) and that r ∈ σ but curr(r) ∉ σ. Assume, furthermore, and without loss of generality, that any other marking of state conditions σ ′ along π ′ has the property that if r ∈ σ ′ then curr(r) ∈ σ ′ . It must be the case that π ′ = π · rel (D 1 ,D ′ 1 ) (r) for some D 1 , D ′ 1 and π. By Lemma 3.24, there exist D 2 , D ′ 2 , π 1 and π 2 such that π = π 1 · acq (D 2 ,D ′ 2 ) (r) · π 2 and no event in π 2 is an acq(r) or rel(r) event. Let π 1 : (Ic(t), σ 0 ) − ։ * (C 1 , σ 1 ). We must have r ∈ σ 1 , and by assumption curr(r) ∈ σ 1 . It can be seen that we must have curr(r) ∈ σ ′ and r ∉ σ ′ for all states σ ′ reached along acq (D 2 ,D ′ 2 ) (r) · π 2 from (C 1 , σ 1 ), since no end(r) event can have concession in such markings. Consequently, we must have curr(r) ∈ σ 2 for σ 2 obtained by following the path π : (Ic(t), σ 0 ) − ։ * (C 2 , σ 2 ), and therefore curr(r) ∈ σ, contradicting our assumption.
The structure of processes ensures that any resource initially current remains current through the execution of the net. The same property working backwards from the terminal marking of the net also holds.
Lemma 3.26. Let σ, σ ′ be consistent markings of state conditions. For any markings of control conditions C, C ′ :

Proof. We shall only show (1) since (2) is similar. An induction on the size of terms using the control properties above gives the following: • If there exists a sequence π such that π · end (C 1 ,C 2 ) (r) :

Assume that curr(r) ∈ σ. Without loss of generality, suppose that (C ′ , σ ′ ) is the earliest marking along π ′ from (Ic(t), σ) such that curr(r) ∉ σ ′ ; otherwise, we can take the initial segment of π ′ with this property. Examination of the events given by our semantics reveals that the last event in π ′ is an end (C 1 ,C 2 ) (r) event, since no other form of event removes curr(r) from the marking. Now, applying the result above tells us that there is an event decl (C ′ 1 ,C ′ 2 ) (r) in π ′ , and this must occur before end (C 1 ,C 2 ) (r). But the event decl (C ′ 1 ,C ′ 2 ) (r) can only occur in a marking σ 0 of state conditions such that curr(r) ∉ σ 0 , and this contradicts our assumption that σ ′ was the first marking of state conditions reachable along π ′ from (Ic(t), σ) with curr(r) ∉ σ ′ .

3.9. Correspondence of semantics. As we have progressed, the event notations introduced have corresponded to labels of the transition semantics. Write |e| for the label corresponding to event e. Before progressing to consider separation logic, we shall give a theorem that shows how the net and transition semantics correspond. It assumes a definition of open map bisimulation [JNW93,NW96] based on paths as pomsets, (N, M ) ∼ (N ′ , M ′ ), relating paths of net N from marking M to paths of N ′ from M ′ . The bisimulations that we form respect terminal markings and markings of state conditions.

Theorem 3.27 (Correspondence). Let t be a closed term and σ be a consistent state.
Write (t, σ) ∼ (t ′ , σ ′ ) iff there exists a label-preserving bisimulation (in the standard sense) between the transition systems for t from initial state σ and t ′ from σ ′ . From the preceding result, we obtain adequacy of our semantics:

Corollary 3.28 (Adequacy). Let t, t ′ be closed terms and σ, σ ′ be consistent states. If

The converse property with respect to ∼ fails. For instance, for any σ we have However, the definition of open bisimulation on the nets with pomsets as paths yields The reason why the property fails is that the transition system does not capture the independence of actions.

Separation logic
As discussed in the introduction, concurrent separation logic establishes partial correctness assertions about concurrent heap-manipulating programs: whenever a given program running from a heap satisfying a heap formula ϕ terminates, the resulting heap satisfies a heap formula ψ. The semantics of the heap logic arises as an instance of the logic of Bunched Implications [OP99]. At its core are the associated notions of heap composition and the separating conjunction. Two heaps may be composed if they are defined over disjoint sets of locations: A heap satisfies the separating conjunction ϕ 1 * ϕ 2 if it can be split into two parts, one satisfying ϕ 1 and the other ϕ 2 : The semantics of the other parts of the heap logic is of little significance when considering the semantics of the program logic. For completeness, however, it is defined by induction on the size of formulae in Figure 3, where the full syntax also appears. Unlike the heap logic presented in [Bro07], we do not allow arithmetic on memory locations; this is just to simplify the presentation, and such arithmetic could easily be added. Since we distinguish the types of locations and values, we use x loc as the logical variable for locations and x val as the logical variable for values. We adopt the usual binding precedences, and * binds more tightly than the standard logical connectives. We define the shorthand notation

We now present the intuition for the key judgement of concurrent separation logic, Γ ⊢ {ϕ} t {ψ}, where ϕ and ψ are formulae of the heap logic, and Γ is an environment of resource invariants, of the form r 1 : χ 1 , · · · , r n : χ n , associating invariants χ i with resources r i . (We refer the reader to [O'H07] for a fuller introduction.) Informally, the judgement means: in any run from a heap satisfying ϕ and the invariants Γ, the process t never accesses locations that it does not own, and if the process t terminates then it does so in a heap satisfying ψ and the invariants Γ.
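Heap composition and the separating conjunction can be illustrated directly. In the Python sketch below (the names `compose`, `splits` and `sat_sep` are our own; heaps are modelled as finite maps from locations to values, and formulae simply as predicates on heaps), composition is defined only on disjoint domains, and a heap satisfies ϕ 1 * ϕ 2 when some split of it satisfies the two conjuncts separately.

```python
from itertools import combinations

def compose(d1, d2):
    """Heap composition d1 · d2: defined only when the domains are disjoint."""
    if set(d1) & set(d2):
        return None  # composition undefined
    return {**d1, **d2}

def splits(heap):
    """All ways of splitting a heap into two disjoint subheaps."""
    locs = list(heap)
    for r in range(len(locs) + 1):
        for left in combinations(locs, r):
            d1 = {l: heap[l] for l in left}
            d2 = {l: heap[l] for l in locs if l not in left}
            yield d1, d2

def sat_sep(heap, p1, p2):
    """heap |= p1 * p2  iff some split D1 · D2 has D1 |= p1 and D2 |= p2."""
    return any(p1(d1) and p2(d2) for d1, d2 in splits(heap))
```

For example, the two-cell heap {ℓ → 0, m → 1} satisfies (ℓ → 0) * (m → 1), while the one-cell heap {ℓ → 0} fails (ℓ → 0) * (ℓ → 0), since the single cell cannot be split into two disjoint parts each containing ℓ.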
Central to this understanding is the notion of ownership, which we capture formally in Section 4.1. Initially the process t is considered to own that part of the heap which satisfies ϕ, and accordingly to own the locations in that subheap. As t runs, the locations it owns may change as it acquires and releases resources, and correspondingly so may the locations used in justifying the resource invariants.
Ownership plays a key role in making the judgements of concurrent separation logic compositional: a judgement Γ ⊢ {ϕ} t {ψ} should hold even if other (unknown) processes are to execute in the same heap. It is therefore necessary to make certain assumptions about the ways in which these other processes might interact with the process t. This is achieved through ownership, by assuming that each process owns, throughout its execution, a separate, though possibly changing, part of the heap; the part of the heap that each process owns must not be accessed by any other process; moreover a process must not access locations it does not own.
The rules of concurrent separation logic are presented in Figure 4 in the style of [Bro07]. The only significant difference between the two systems is that we omit the rules for auxiliary variables and for existential quantification. Both are omitted for simplicity since they are peripheral to the focus of our work.
As a first example, the rule for heap actions (L-Act) would allow the judgement since the process is initially assumed to own the location ℓ, because the part of the heap that the process initially owns satisfies ℓ → 0. The resulting part of the heap owned by the process then satisfies the postcondition of the action.

(Figure 3, giving the syntax of variables, location expressions and expressions, and the semantics of closed formulae of the heap logic, appears here.)

The judgement is not derivable, however: the part of the heap initially owned by the process satisfies empty, and therefore the process initially does not own the location ℓ. Assignment to ℓ violates the principle that the process may only act on locations that it owns, the so-called frame property.
An instance of the separating conjunction is seen in the rule for parallel composition, (L-Par): Informally, the rule is sound because the part of the initial heap that is owned by the process t 1 t 2 can be split into two parts, one part satisfying ϕ 1 owned by t 1 and the other satisfying ϕ 2 owned by t 2 ; as the processes execute, the subheaps that each is seen as owning remain disjoint from each other and end up separately satisfying ψ 1 and ψ 2 .
It is vital that the logic enforces the requirement that processes only act on locations that they own. If this requirement were not imposed, so that the judgement were derivable, then the rule for parallel composition could be applied with the other judgement above to conclude that

This flawed assertion would imply that whenever the process [ℓ] := 1 [ℓ] := 2 runs from a state satisfying ℓ → 0, the resulting state has ℓ → 1, which is obviously wrong.
The notion of ownership is subtle since the collection of locations that a process owns may change as the process evolves. As seen in the rule (L-Alloc), the intuitive reading is that after an allocation event has taken place the process owns the newly current location. Similarly, deallocation of a location leads to loss of ownership. For example, it is possible to make the judgement

If the new location were ℓ ′ , which initially held value v, this would mean that in the (fragment of the) resulting heap {ℓ → ℓ ′ * ℓ ′ → v}, the locations ℓ and ℓ ′ would be owned by the process. Consequently, an action [[ℓ]] := 0, which assigns 0 to the location pointed to by ℓ, resulting in the heap {ℓ → ℓ ′ , ℓ ′ → 0}, allows the judgement by (L-Act) since both locations would be owned by the process. The rule (L-Seq) can now be applied to obtain

indicating that the process has ownership of the location ℓ ′ , seen in the ability to write to ℓ ′ , once it has been allocated.
To allow the logic to make judgements beyond those applicable to the almost 'disjointly concurrent' programs outlined so far, further interaction is allowed through a system of invariants. The judgement environment Γ records a formula, called an invariant, for each resource in its domain; the domain contains all the resources occurring in the term. The intuition is that, whenever a resource r with an invariant χ is available, there is a part of the heap, unowned by any other process and protected by the resource, that satisfies χ. In such a situation, we shall say that the locations used to satisfy χ are 'owned' by the invariant for r. Processes may gain ownership of these locations, and thereby the right to access them, by entering a critical region protected by the resource. When the process leaves the critical region, the invariant must be restored and the ownership of the locations used to satisfy the invariant is relinquished. This is reflected in the rule (L-CR). As an example, we have the following derivation: The process initially owns the location ℓ ′ , and the location ℓ is protected by the resource r. We reason about the process inside the critical region running from a state with ownership of the locations governed by the invariant in addition to those that it owned before entering the critical region, since no other process can be operating on them; that is, we reason about [ℓ ′ ] := [ℓ] with locations ℓ and ℓ ′ owned by the process. However, when the process leaves the critical region, ownership of the locations used to satisfy the invariant is lost, indicated by the conclusion ℓ ′ → 0 in the judgement rather than ℓ ′ → 0 * ℓ → 0.
An invariant is required to be a precise heap logic formula.
Definition 4.1 (Precision). A heap logic formula χ is precise if for any heap D there is at most one subheap D 0 ⊆ D such that D 0 |= χ.
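On finite heaps, Definition 4.1 can be checked by brute force. The sketch below (the names `subheaps` and `precise_on` are ours; formulae are again modelled as predicates on heaps) counts the subheaps of a given heap that satisfy a formula, deeming the formula precise on that heap when there is at most one.

```python
from itertools import combinations

def subheaps(heap):
    """Enumerate all subheaps (restrictions of the heap to subsets
    of its locations)."""
    locs = list(heap)
    for r in range(len(locs) + 1):
        for sel in combinations(locs, r):
            yield {l: heap[l] for l in sel}

def precise_on(chi, heap):
    """chi is precise on `heap` if at most one subheap satisfies it."""
    return sum(1 for d in subheaps(heap) if chi(d)) <= 1
```

For instance, the exact formula ℓ → 0 is precise, while a formula satisfied by any heap whose domain contains ℓ is not: over {ℓ → 0, m → 1}, both {ℓ → 0} and the whole heap would satisfy it.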
We leave discussion of the rôle of precision to the conclusion, though it might be seen to be of use since it identifies uniquely the part of the heap that is owned by the invariant if the resource is available. Formally, Γ ranges over finite partial functions from resources to precise heap formulae. We write dom(Γ) for the set of resources on which Γ is defined, and write Γ, Γ ′ for the union of the two partial functions, defined only if dom(Γ) ∩ dom(Γ ′ ) = ∅. We write r : χ for the singleton environment taking resource r to χ, and we allow ourselves to write r : χ ∈ Γ if Γ(r) = χ.
The rules allow ownership of locations to be transferred through invariants. Consider the invariant χ defined as ℓ ′ → 0 ∨ (ℓ ′ → 1 * ℓ → 0). If the resource is available, the invariant is satisfied: it either protects the location ℓ ′ , which has value 0, or it protects location ℓ ′ , which has value 1, as well as location ℓ. A process can acquire ownership of ℓ across a critical region by changing the value of ℓ ′ from 1 to 0 and may leave ownership of ℓ inside the invariant by changing the value of ℓ ′ from 0 to 1.
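The ownership transfer permitted by this invariant can be explored concretely. In the following sketch (the names `chi` and `invariant_footprint` are our own, with the string "lp" standing for ℓ ′ ), we enumerate the subheaps of the part of the heap not owned by the process that could satisfy the invariant; this is the part that the invariant 'owns' when the resource is available.

```python
from itertools import combinations

def chi(d):
    """The invariant  l' -> 0  or  (l' -> 1 * l -> 0),  with 'lp' for l'."""
    return d == {"lp": 0} or d == {"lp": 1, "l": 0}

def invariant_footprint(heap, owned):
    """Subheaps of the part of `heap` not owned by the process
    that satisfy the invariant chi."""
    rest = {l: v for l, v in heap.items() if l not in owned}
    locs = list(rest)
    return [
        sub
        for r in range(len(locs) + 1)
        for sel in combinations(locs, r)
        for sub in [{l: rest[l] for l in sel}]
        if chi(sub)
    ]
```

If the process owns ℓ and the remainder of the heap has ℓ ′ → 0, the invariant is satisfied by exactly the subheap {ℓ ′ → 0}; if instead ℓ ′ → 1, the invariant cannot be satisfied disjointly from the process's ownership of ℓ, matching the observation made in the text.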
Assume, for example, that the process owns location ℓ. The only way in which the invariant χ can be satisfied disjointly from the locations that the process owns is for ℓ ′ to hold value 0. That is, we have which is implicitly used in the instance of the rule (L-Consequence) below. Consequently, as the process enters a critical region protected by r, it gains ownership of location ℓ ′ . If the process sets the value of ℓ ′ to 1, when the process leaves the critical region it must restore the invariant to the resource, and so relinquish ownership of both ℓ ′ and ℓ. This is seen in the derivation of the following judgement, in which we take Γ = r : χ.
It is also possible to acquire ownership of locations through an invariant. Let the action diverge have the same semantics as that of the boolean guard false, which is an action that can never occur, i.e. the process is stuck. We have the following derivation: The undischarged hypotheses at the top of the derivation are all proved by the rule (L-Act). Let t 0 denote the process ([ℓ ′ ] = 0.diverge) + ([ℓ ′ ] = 1.[ℓ ′ ] := 0). Observe that the process with r do t 0 od is considered to own no part of the initial heap. As the process enters the critical region, it is considered to take ownership of the part of the heap satisfying the invariant for r, viz χ. There are two ways in which χ might be satisfied: (1) It may be that the process gains ownership of the location ℓ ′ , which holds value 0. In this case, only the guard [ℓ ′ ] = 0 of t 0 can pass, so the process must evolve to diverge and therefore never terminates. The remainder of the derivation (that if the process t 0 terminates then the part of the heap that it owns satisfies ℓ → 0 * χ, and therefore that after leaving the critical region and losing ownership of the locations satisfying χ the process owns location ℓ) is then trivially sound. (2) The process might have taken control of the locations ℓ, holding value 0, and ℓ ′ , holding value 1. Inside the critical region, the process t 0 can be seen to change the value of ℓ ′ from 1 to 0. The only way that the invariant χ can then be satisfied is by the location ℓ ′ holding 0, so ownership of ℓ ′ is lost as the process leaves the critical region. Importantly, the process retains ownership of location ℓ.
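The transfer protocol in the two examples above can be mimicked operationally. In this hypothetical Python sketch (heaps are dictionaries; the names `sat_chi`, `acquire` and `release` are our own, not the paper's), a process acquires ownership of ℓ by flipping the value of ℓ′ inside the critical region, exactly as in case (2):

```python
def sat_chi(d):
    # chi = (l' -> 0) \/ (l' -> 1 * l -> 0), the invariant from the text
    return d == {"l'": 0} or d == {"l'": 1, "l": 0}

def acquire(inv_heap, proc_heap):
    # entering the critical region: the process takes ownership of the
    # locations protecting the invariant
    assert sat_chi(inv_heap)
    merged = dict(proc_heap)
    merged.update(inv_heap)
    return merged

def release(proc_heap, d0):
    # leaving the critical region: a subheap d0 satisfying chi is given back
    assert sat_chi(d0) and all(proc_heap.get(l) == v for l, v in d0.items())
    return {l: v for l, v in proc_heap.items() if l not in d0}

# Acquire ownership of l by flipping l' from 1 to 0 inside the region:
h = acquire({"l'": 1, "l": 0}, {})  # process now owns l' and l
h["l'"] = 0                         # write inside the critical region
h = release(h, {"l'": 0})           # only l' -> 0 can satisfy chi now
print(h)                            # {'l': 0} -- the process keeps l
```

This mirrors the derivation above: after the write, the only subheap of the owned heap satisfying χ is ℓ′ → 0, so only ℓ′ is surrendered on exit and the process retains ℓ.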
Using the derivations given above, we can give an example of ownership of ℓ, as exhibited by the right to write to ℓ, being transferred (we have annotated internal assertions arising from the proofs above inside the program): We also see that, in any terminating run of this process, it must be the case that the process on the left terminates strictly before the process on the right begins.
The final remark to be made on the rules of the logic is that (L-Res) allows invariants to be established for newly declared resources. We reason about the closed term [r/w]t, for an arbitrary 'fresh' resource r; it is sufficient to consider only one such resource, as shall be seen in Lemma 4.25. The resource r is known not to occur in the domain of Γ and hence does not occur in the term t thanks to the following lemma, proved straightforwardly by induction on the judgement.

4.1. Ownership model. We now progress to give a formal interpretation of the rules presented in the previous section. The key idea is that the judgement Γ ⊢ {ϕ} t {ψ} is robust against the operation of other 'external' processes (which have themselves been subject to a judgement in the logic) on the state, so that the rule for parallel composition is valid. From the account presented earlier, external processes may act on the heap providing they do not access the locations 'owned' by the process t, and they may act to acquire and release resources providing they respect the invariants in Γ. External processes may also make non-current resources current through the instantiation of a resource variable, and may later make such resources non-current again. The semantics of judgements must therefore keep a record of how each current location in the heap is owned: whether the process might access the location, whether it forms part of an invariant protected by a resource, or whether external processes might act on that location, along with a similar record for resources. The semantics will include interference events to represent such forms of action by external processes.
Capturing these requirements, we construct an interference net with respect to the environment Γ to represent the execution of suitable external processes proved against Γ. This involves creating ownership conditions ω proc (ℓ), ω inv (ℓ) and ω oth (ℓ) for each location ℓ. The intuition is that ω proc (ℓ) is marked if ℓ is owned by the process, ω inv (ℓ) if ℓ is used to satisfy the invariant for an available open resource, and ω oth (ℓ) is marked if ℓ is current but owned by another process.
To give an example, suppose that we have the judgement Γ ⊢ {k → 1} [k] := 0 {k → 0}. The proof can be composed with the judgement Γ ⊢ {ℓ → 0} [ℓ] := 1 {ℓ → 1} to obtain a judgement for the parallel composition of the two assignments. The first proof, that the assignment [k] := 0 changes the value at k from 1 to 0, must take into account the possibility that the values held at other locations may change. In particular, it must take into account the possibility that the value at ℓ (assumed not equal to k) changes from 0 to 1. We therefore reason about the net N [k] := 0 in the presence of the following interference event, which changes the value held at ℓ from 0 to 1: Notably, the above event requires that the location ℓ is owned by an external process, i.e. the condition ω oth (ℓ) is marked.
Since we do not know with which other judgements Γ ⊢ {k → 1} [k] := 0 {k → 0} may be composed, there are interference events present in the net for all the forms of interference permissible according to the notion of ownership. For instance, the interference event which changes the value of k from 0 to 1 is present in the net. However, the judgement asserts that k is owned by the process, so this interference event (and indeed any other interference event that affects k) will not be able to occur because the condition ω proc (k) will be marked, not ω oth (k).
As mentioned above, we introduce interference events to mimic the action of external processes on resources. The notion of ownership is therefore extended in this setting to resources, for example so that an external process cannot be allowed to release a resource held by the current process. It is important to make a distinction between resources in the domain of the environment Γ (called open resources) and those that are not (called closed resources): Open resources have invariants associated with them, so the ownership of the heap is affected by events that acquire or release them, as presented earlier in this section; this is not the case for closed resources. Closed resources are those resources made current to instantiate a local resource variable. They may either be used by the process being considered if it declared the resource, or be used by some external process if some external process declared the resource. We shall introduce conditions ω proc (r), ω inv (r) and ω oth (r) for each resource r. The condition ω proc (r) will be marked if either the resource is closed and was made current by the process or if the resource is open and is held by the process.
The condition ω inv (r) will be marked if r is open and available. The condition ω oth (r) will be marked if either the resource is closed and was made current by an external process or if the resource is both open and the external process holds it.
The set of ownership conditions is denoted W. We use W to range over markings of ownership conditions and introduce the notations W e and e W , as before, for the sets of pre-ownership conditions of e and post-ownership conditions of e, respectively. For a set of locations L, we define the notation ω proc (L) = {ω proc (ℓ) | ℓ ∈ L}, and define ω inv (L) and ω oth (L) similarly. Only certain markings of ownership conditions are consistent with a state σ: (1) σ is a consistent state in N t , (2) for each z ∈ Loc ∪ Res, at most one of {ω proc (z), ω inv (z), ω oth (z)} is marked, (3) for each z ∈ Loc ∪ Res, the condition curr(z) is in σ iff precisely one of {ω proc (z), ω inv (z), ω oth (z)} is in W , (4) if r ∈ dom(Γ) and r ∈ R then ω inv (r) ∈ W , (5) if r ∈ dom(Γ) and r ∉ R then either ω proc (r) ∈ W or ω oth (r) ∈ W , and (6) if curr(r) ∈ σ and r ∉ dom(Γ) then either ω proc (r) ∈ W or ω oth (r) ∈ W .
Requirements (2) and (3) assert that W is essentially a function from the set of current locations and resources to describe their ownership. Requirement (4) states that any available open resource is owned as an invariant: it can be accessed either by the process being considered or by an external process, and there is an invariant associated with r. Requirement (5) states that any unavailable open resource is either held by the process or by an external process. Requirement (6) asserts that any closed resource is owned either by the current process or by an external process. Table 1 defines a number of notations for events corresponding to the permitted interference described. To summarize, there will be interference events to represent the following kinds of action by external processes: • act(D 1 , D 2 ): Arbitrary action on the heap (excluding allocation or deallocation) owned by external processes. • alloc(ℓ, v, ℓ ′ , v ′ ): Allocation of a new location ℓ ′ by an external process, storing the result in the location ℓ. The location ℓ must initially have been owned by an external process. Ownership of the new location ℓ ′ is taken by the external process. • dealloc(ℓ, v, ℓ ′ , v ′ ): Disposal of the location ℓ ′ pointed to by ℓ. Both locations are initially owned by external processes, so ω oth (ℓ) and ω oth (ℓ ′ ) are preconditions to the event. • decl(r): Declaration of a resource r. The condition curr(r) is marked by the event, so the resource was not initially current. Ownership of r is taken by the external process, so ω oth (r) is in the postconditions of the event. • end(r): End of scope of a resource r, only permissible if the resource was initially declared by an external process and therefore ω oth (r) is marked. • acq(r): For a closed resource r, the external process may acquire the resource if it is not local to the process being considered and therefore ω oth (r) is marked. 
• rel(r): For a closed resource r, the external process may release the resource if it is not local to the process being considered and therefore ω oth (r) is marked. • acq(r, D 0 ): For an open resource r with an invariant χ in Γ, if D 0 |= χ and D 0 is part of the current heap then ownership of the locations in the domain of D 0 is changed from being protected by the resource to being owned by the external process, i.e. un-marking ω inv (ℓ) and marking ω oth (ℓ) for each location ℓ ∈ dom(D 0 ). The ownership of r also changes, from ω inv (r) being marked to ω oth (r) being marked. • rel(r, D 0 ): The corresponding release action.
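The consistency requirements (2)-(6) on ownership markings are easy to mechanize. In the Python sketch below (our own modelling, not the paper's), W is a map from current locations and resources to one of 'proc', 'inv' or 'oth', so requirement (2) holds by construction:

```python
def consistent(curr, R, W, dom_gamma, resources):
    """Check requirements (3)-(6) for an ownership marking W against a state
    with current locations/resources `curr` and available resources R."""
    if set(W) != set(curr):                      # (3): owned iff current
        return False
    for r in resources & curr:
        if r in dom_gamma:
            if r in R and W[r] != "inv":         # (4): available open resource
                return False
            if r not in R and W[r] == "inv":     # (5): held open resource
                return False
        elif W[r] == "inv":                      # (6): closed resources are
            return False                         # owned by proc or oth
    return True

# An available open resource r must be owned by its invariant:
print(consistent({"l", "r"}, {"r"}, {"l": "proc", "r": "inv"}, {"r"}, {"r"}))   # True
print(consistent({"l", "r"}, {"r"}, {"l": "proc", "r": "proc"}, {"r"}, {"r"}))  # False
```

Requirement (1), consistency of σ itself in N t, is taken as given here since the sketch does not model state conditions.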
Definition 4.4 (Interference net). The interference net for Γ has conditions S, the state conditions, and W, the ownership conditions. It has the following events: • act(D 1 , D 2 ) for all D 1 and D 2 forming partial functions with the same domain • alloc(ℓ, v, ℓ ′ , v ′ ) and dealloc(ℓ, v, ℓ ′ , v ′ ) for all locations ℓ and ℓ ′ and values v and v ′ • decl(r) and end(r) for all resources r • acq(r) and rel(r) for all closed resources r • acq(r, D 0 ) and rel(r, D 0 ) for all r ∈ dom(Γ) and D 0 such that D 0 |= χ, for χ the unique formula such that r : χ ∈ Γ We use the symbol u to range over interference events.
The interference events illustrate how the ownership of locations is dynamic and how this constrains the possible forms of interference. The rule for parallel composition requires that the behaviour of the process being reasoned about itself conforms to these constraints, allowing its action to be seen as interference when reasoning about the other process. This requirement may be captured by synchronizing the events of the process with those from the interference net in the following way: • The process event act (C,C ′ ) (D, D ′ ) synchronizes with act(D, D ′ ) • The process event alloc (C,C ′ ) (ℓ, v, ℓ ′ , v ′ ) synchronizes with alloc(ℓ, v, ℓ ′ , v ′ ), and similarly for dealloc • The process event decl (C,C ′ ) (r) synchronizes with decl(r) • The process event end (C,C ′ ) (r) synchronizes with end(r) • The process event acq (C,C ′ ) (r) synchronizes with acq(r) for any closed resource r, i.e. for any r ∉ dom(Γ) • The process event rel (C,C ′ ) (r) synchronizes with rel(r) for any closed resource r • If r is an open resource with r : χ ∈ Γ, the process event acq (C,C ′ ) (r) synchronizes with every acq(r, D 0 ) such that D 0 |= χ. Similarly, rel (C,C ′ ) (r) synchronizes with every rel(r, D 0 ) such that D 0 |= χ. Suppose that two events synchronize, e from the process and u from the interference net. The event u is the event that would fire in the net for the other parallel process to simulate the event e; it is its dual. Let e · u be the event formed by taking the union of the pre- and postconditions of e and u, other than using ω proc (ℓ) in place of ω oth (ℓ), and similarly ω proc (r) in place of ω oth (r).
Example 4.5 (Synchronization of heap actions). Define the following events: The event e is an event inside the process net, with pre-control conditions C and post-control conditions C ′ , that changes the value of ℓ from 0 to 1. It synchronizes with only one event, u, which performs the corresponding interference action. For the event u to occur, the condition ω oth (ℓ) must be marked, i.e. the location ℓ must be seen as owned by an 'external' process. The event formed by synchronizing e and u is e · u, which requires the location ℓ to be owned by the current process for it to occur.
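Example 4.5 can be replayed concretely. A hypothetical Python sketch (conditions are tagged tuples of our own devising) forms e · u by uniting pre- and postconditions and replacing each ω oth condition contributed by u with the corresponding ω proc condition:

```python
def swap_oth(conds):
    # replace each ownership condition ('oth', z) by ('proc', z)
    return {("proc", z) if kind == "oth" else (kind, z) for kind, z in conds}

def synchronise(e, u):
    # e.u: union of pre- and postconditions, with oth-ownership of u
    # turned into proc-ownership
    return {"pre": e["pre"] | swap_oth(u["pre"]),
            "post": e["post"] | swap_oth(u["post"])}

# e changes the value at l from 0 to 1 between control conditions C and C';
# u is the corresponding interference event, requiring oth-ownership of l.
e = {"pre": {("ctrl", "C"), ("heap", ("l", 0))},
     "post": {("ctrl", "C'"), ("heap", ("l", 1))}}
u = {"pre": {("heap", ("l", 0)), ("oth", "l")},
     "post": {("heap", ("l", 1)), ("oth", "l")}}

s = synchronise(e, u)
print(("proc", "l") in s["pre"])  # True: e.u needs the process to own l
```

As in the example, the interference event u demands ω oth (ℓ), while the synchronized event e · u instead demands ω proc (ℓ).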
Example 4.6 (Synchronization of critical regions). Define the following events, where the event e is an event inside the process net, with pre-control conditions C and post-control conditions C ′ , that acquires the open resource r.
Recall the invariant ℓ ′ → 0 ∨ (ℓ ′ → 1 * ℓ → 0) used above. There are two heaps, D 1 = {ℓ ′ → 0} and D 2 = {ℓ ′ → 1, ℓ → 0} that satisfy this formula. There are correspondingly two interference events u 1 and u 2 that synchronize with e: the event u 1 acquires the resource r and transfers the ownership of ℓ ′ and r to the external process from the invariant, whereas the event u 2 acquires the resource r and transfers ownership of ℓ, ℓ ′ and r to the external process from the invariant. The event u 1 requires that the heap initially has value 0 at ℓ ′ ; the event u 2 requires that the heap initially has value 1 at ℓ ′ and 0 at ℓ. The synchronized events e · u 1 and e · u 2 are similar, transferring ownership from the invariant to the process being considered.
The semantics of judgements made using the rules of concurrent separation logic will consider a net W t Γ with both interference events, to represent external processes running, and synchronized events, to represent the process t: the net contains every event e · u where e is an event of N t and u is an interference event such that e and u synchronize.
We shall continue to use the symbol e to refer to any kind of event in ownership nets, but shall reserve the symbol s for those events known in particular to be synchronized events.
A consequence of the precision of invariants is that at most one of the synchronized events corresponding to an event in N t may be enabled in any marking of the ownership net W t Γ .
Lemma 4.8. For any marking σ of state conditions, let (C, σ, W ) and (C ′ , σ, W ′ ) be consistent markings of the net W t Γ . For any event e in N t and any interference events u and u ′ in W t Γ , if e · u has concession in (C, σ, W ) and e · u ′ has concession in (C ′ , σ, W ′ ) then u = u ′ .
Proof. Straightforwardly seen to follow from precision by an analysis of the possible forms of the event e.
The occurrence of a synchronized event e · u in a marking (C, σ, W ) of the net W t Γ clearly gives rise to the occurrence of the event e in N t . The earlier results describing the behaviour of N t in terms of the behaviour of the nets representing its subterms can therefore be applied to the net W t Γ .
Proof. The events of W t Γ are, by definition, only interference events or synchronized events. If e is an interference event, C = C ′ because C e = ∅ and e C = ∅. For a synchronized event e 1 · u, observe that C (e 1 · u) = C e 1 and that (e 1 · u) C = e 1 C , and similarly for L e 1 , R e 1 , N e 1 , e 1 L , e 1 R and e 1 N . The only cases where either D (e 1 · u) = D e 1 or (e 1 · u) D = e 1 D are acquisition or release of an open resource, but in these cases D e 1 = ∅ = e 1 D and D (e 1 · u) = (e 1 · u) D . The result follows as a straightforward calculation.
The proof that consistent markings are preserved in the net W t Γ is similar to that of Lemma 3.25; the additional requirements on the marking of ownership conditions are readily seen to be preserved by both interference and synchronized events, so any marking reachable from a consistent marking is consistent.
The formulation of the ownership net permits a fundamental understanding of when a process acts in a way that cannot be seen as any form of interference; that is, when the process has violated its guarantees.
Definition 4.11 (Violating marking). Let (C, σ, W ) be a consistent marking of W t Γ . We say that (C, σ, W ) is violating if there exists an event e of N t that has concession in marking (C, σ) but there is no event u from the interference net that synchronizes with e such that e · u has concession in (C, σ, W ).
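Definition 4.11 can be phrased as a simple check, given the events of N t enabled at (C, σ) and the synchronized events enabled at (C, σ, W). A Python sketch under our own hypothetical event encoding:

```python
def is_violating(nt_enabled, w_enabled, syncs):
    """A consistent marking (C, sigma, W) is violating if some event e of N_t
    has concession in (C, sigma) but none of its synchronisations e.u has
    concession in (C, sigma, W).
      nt_enabled: events of N_t with concession in (C, sigma)
      w_enabled:  synchronised events with concession in (C, sigma, W)
      syncs:      map from e to the synchronised events e.u built from it"""
    return any(all(s not in w_enabled for s in syncs.get(e, []))
               for e in nt_enabled)

# A write to an unowned location: 'write_l' can fire in N_t, but its only
# synchronisation needs proc-ownership of l and so is not enabled.
print(is_violating({"write_l"}, set(), {"write_l": ["write_l.u"]}))          # True
print(is_violating({"write_l"}, {"write_l.u"}, {"write_l": ["write_l.u"]}))  # False
```

The two examples that follow in the text are instances of exactly this situation: an event of N t is enabled, but every synchronization of it is blocked by the ownership conditions.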
We shall give two examples of violating markings. The first shall be an example of action on an unowned location, and the second shows how release of an open resource will cause a violation if the invariant is not restored. We have ω oth (ℓ) ∈ W u and therefore ω proc (ℓ) ∈ W (e · u), so the event e · u does not have concession in the marking (C, σ, W ) which is therefore violating: the process acted on the unowned location ℓ ′ .
Example 4.13. Let r be an open resource with the invariant χ = ℓ ′ → 0 ∨ (ℓ ′ → 1 * ℓ → 0), and let (C, σ, W ) be a consistent marking of W t Γ,r : χ with {ℓ → 1, ℓ ′ → 1} ⊆ σ and ω proc (ℓ), ω proc (ℓ ′ ) ∈ W . Suppose further that the event e = rel (C 1 ,C 2 ) (r) has concession in (C, σ) in the net N t . The only two interference events in W t Γ,r : χ that synchronize with e are those corresponding to the two ways in which χ can be satisfied. The invariant is not satisfied in the heap component of σ, so the preconditions of the two events are not contained in the marking (C, σ, W ), which is therefore a violating marking: there was no part of the owned heap that satisfied the invariant, yet the resource was released.
If no violating marking is ever encountered, the behaviour of W t Γ encapsulates all that of N t .
Lemma 4.14. For any consistent marking (C, σ, W ) of the net W t Γ and any event e ∈ Ev(t), if (C, σ) e − ։ (C ′ , σ ′ ) in N t then either (C, σ, W ) is violating or there exists a marking of ownership conditions W ′ and an interference event u that synchronizes with e such that (C, σ, W ) e·u − ։ (C ′ , σ ′ , W ′ ) in W t Γ . Proof. Immediate from the definition of violating marking and a fact about the occurrence of synchronized events, holding for any e and u that synchronize and any state σ, which is easily proved by inspection of the forms that e · u may take.

4.2. Soundness and validity. The rule for parallel composition permits the view that the ownership of the heap is initially split between the two processes, so that what one process owns is seen as owned by an external process by the other.
If W 1 and W 2 form an ownership split of W , then fewer locations and resources are owned by the process in W 1 than in W , and similarly for W 2 . As one would expect, a process can act in the same way without causing a violation if it owns more, and more interference can occur if the process owns less. This is the essence of the frame property referred to earlier. • For any synchronized event s = e · u, if (C, σ, W 1 ) Following Brookes' lead, we are now able to prove the key lemma on which the proof of soundness rests. The effect of this lemma is that the terminal states of parallel processes may be determined simply by observing the terminal markings of the net of each parallel process running in isolation, provided we split the ownership of the initial state correctly. For convenience, the lemma is stated without intimating the particular event that takes place on the net transition relation.
(1) Suppose that the marking M is violating in W t 1 t 2 Γ . Without loss of generality, assume that this is because there exists an event 1:e 1 of N t 1 t 2 that has concession in marking (1:C 1 ∪ 2:C 2 , σ) but there is no interference event u such that 1:e 1 synchronizes with u and (1:e 1 ) · u has concession in M . Assume, for contradiction, that the marking M 1 is non-violating in W t 1 Γ . The event e 1 has concession in marking (C 1 , σ) of N t 1 by the first part of Lemma 3.6, so there must exist an interference event u 1 of W t 1 Γ such that e 1 · u 1 has concession in M 1 . The interference events of W t 1 Γ are precisely the interference events of W t 1 t 2 Γ and the tagging of control conditions has no effect on whether events may synchronize, so the event (1:e 1 ) · u 1 is in W t 1 t 2 Γ . From Lemmas 4.16 and 3.6, the event (1:e 1 ) · u 1 has concession in marking M , which is therefore not violating, a contradiction.
(2) It is a straightforward consequence of Lemma 4.16 that the second property holds if the transition (1:C 1 ∪ 2:C 2 , σ, W ) − ։ (1:C ′ 1 ∪ 2:C ′ 2 , σ ′ , W ′ ) is induced by the occurrence of an interference event. Suppose instead that it is induced by a synchronized event.
The ownership semantics described above has been carefully defined to take explicit account of the intuitions behind the rule for parallel composition, resulting in the short proof of the parallel decomposition lemma above. The remaining complexity in the proof of soundness lies in the rule for establishing an invariant associated with a resource: It is quite easy to see why this rule follows the intuitive semantics for judgements presented above: Any run of the net W resource w do t od Γ to a terminal marking from a state with the heap owned by the process initially satisfying ϕ * χ can be seen, in conjunction with Lemma 3.21, as consisting first of an event that declares a fresh resource r current, then a run of W [r/w]t Γ , followed by an event that makes r non-current. The run of W [r/w]t Γ from a state where the part of the heap that the process owns satisfies ϕ * χ is simulated by a run of W [r/w]t Γ,r : χ along which the locations satisfying χ are owned by the invariant χ in an environment where r is an open resource. In particular, the run obtained has no interference on the resource r or the locations that it protects, and r is available in the terminal state of the run. Assuming the validity of the judgement Γ, r : χ ⊢ {ϕ} [r/w]t {ψ}, the resulting state owned by the process is therefore seen to satisfy the formula ψ * χ. Similarly, a reachable marking in W resource w do t od Γ in which the process accesses a location or resource that it does not own would give rise to a reachable marking in W [r/w]t Γ,r : χ in which the process accesses an unowned location or resource. The more formal presentation of this intuition follows.
We shall begin by explicitly characterizing the runs of the net W resource w do t 0 od Γ . The result is again a little technical, as is the following lemma, Lemma 4.21; they are used in the proof of soundness of the rule (L-Res). The reader may wish to pass through this result and Lemma 4.21 and only take note of the following definitions of inv(Γ, R) and D ↾ W proc, D ↾ W inv and D ↾ W oth. • C = Ic(t) and π consists only of interference events, or • there exist r, C ′ , σ ′ , W ′ , π 0 and π 1 such that π 0 comprises only interference events, C = r:C ′ and π = π 0 · s r · (r:π 1 ) and π 0 · s r : ( and there exist r, σ ′ , σ ′′ , W ′ , W ′′ , π 0 , π 1 , π 2 such that π 0 and π 2 comprise only interference events, π = π 0 · s r · (r:π 1 ) · s ′ r · π 2 , and π 0 · s r : ( Proof. Readily seen to be a consequence of Lemmas 3.23, 3.21, 3.10 and 4.9.
It can be shown, as a consequence of the preceding lemma, that during the run of the net following the declaration event, the resource r chosen for w is owned by the process until it is made non-current at the end of the variable w's scope.
We write inv(Γ, R) for the formula χ 1 * . . . * χ n formed as the separating conjunction of the invariants of all the open resources available according to R; it is defined by induction on the size of the domain of Γ. Define the notations D ↾ W proc, D ↾ W inv and D ↾ W oth to represent the heap restricted to the locations owned by the process, by invariants and by other processes, respectively. In any state that we consider, we would expect D ↾ W inv |= inv(Γ, R). A marking of the net W t Γ can be converted to a marking of W t Γ,r : χ by, if r is available, regarding the locations satisfying the invariant χ as owned by the invariant rather than by the process.
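Satisfaction of inv(Γ, R) by the invariant-owned heap D ↾ W inv can be checked by searching for a partition of that heap into subheaps, one per available invariant. A brute-force Python sketch (heaps as dictionaries, invariants as predicates; all names are ours):

```python
from itertools import combinations

def subheaps(heap):
    locs = list(heap)
    for n in range(len(locs) + 1):
        for chosen in combinations(locs, n):
            yield {l: heap[l] for l in chosen}

def sat_sep_conj(heap, formulas):
    """heap |= chi_1 * ... * chi_n: the heap splits into disjoint subheaps,
    one satisfying each formula, with nothing left over."""
    if not formulas:
        return heap == {}
    chi, rest = formulas[0], formulas[1:]
    return any(chi(d0) and
               sat_sep_conj({l: v for l, v in heap.items() if l not in d0}, rest)
               for d0 in subheaps(heap))

def inv_sat(gamma, R, d_inv):
    # inv(Gamma, R): separating conjunction of the invariants of the
    # open resources available according to R
    return sat_sep_conj(d_inv, [gamma[r] for r in gamma if r in R])

gamma = {"r": lambda d: d == {"l'": 0}}
print(inv_sat(gamma, {"r"}, {"l'": 0}))  # True
print(inv_sat(gamma, {"r"}, {}))         # False: the invariant is unmet
```

By precision, the subheap chosen for each invariant is in fact unique, though the search above does not rely on this.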
Definition 4.20. Suppose that χ is a precise heap formula. Let M = (C, (D, L, R, N ), W ) be a consistent marking of W t Γ such that if r ∈ R then there exists a (necessarily unique) D 0 ⊆ D ↾ W proc such that D 0 |= χ. Define the projection of M into the net W t Γ,r : χ to be π χ r (M ). It is clear that if M is a consistent marking of W t Γ then π χ r (M ) is a consistent marking of W t Γ,r : χ . The key lemma representing the account above, that behaviour in the net where a resource is closed is simulated by the net where the resource is open, is now stated, though we shall not show its proof here. We shall say that a state σ with an ownership marking W satisfies the formula ϕ and the invariants in Γ if the heap restricted to the owned locations satisfies ϕ and the invariants are met for all the available resources. The rest of the heap, seen as owned by external processes, is unconstrained.
We now attach a notion of validity to judgements Γ ⊢ {ϕ} t {ψ}. It shall assert that no violating marking is ever reached and that whenever the process t runs to completion from a state where the part of the heap that it owns satisfies ϕ then the part of the resulting heap that it owns satisfies ψ.
It is useful to note that the occurrence of an interference event does not affect whether a marking satisfies ϕ in Γ or whether it is violating. Consequently, when considering validity it is unnecessary to account for runs of the net W t Γ that start or end with an interference event. In the rule (L-Res), which allows invariants to be established for resources, only one resource is considered for substitution for the variable. The following lemma shows that this is sufficient; the semantics of judgements is unaffected by the choice of resource.
Lemma 4.25. For any resources r, r ′ such that r, r ′ ∉ dom(Γ) and any term t with fv(t) ⊆ {w} and res(t) ⊆ dom(Γ), Proof. The net W [r/w]t Γ,r : χ is clearly isomorphic to W [r ′ /w]t Γ,r ′ : χ through interchanging the conditions associated with r and r ′ . The result follows from the definition of validity being insensitive to such permutations.
We are now in a position to prove the rules of concurrent separation logic sound. Only two important cases of the proof shall be presented here; full details will be available in the first author's PhD thesis. Proof. By rule induction on the judgement Γ ⊢ {ϕ} t {ψ}. Note that, due to Lemma 4.24, we need only consider runs of W t Γ that do not start or end with an interference event.
(L-Res): Let t ≡ resource w do t 0 od. Suppose that Γ ⊢ {ϕ * χ} t {ψ * χ} because Γ, r 0 : χ ⊢ {ϕ} [r 0 /w]t 0 {ψ} for some r 0 ∉ dom(Γ). Assume that the marking M = (Ic(t), σ, W ) satisfies ϕ * χ in Γ, and let M ′ = (C ′ , σ ′ , W ′ ) be reachable from M in W t Γ . According to Lemma 4.18, there are three cases to consider for the marking M ′ . • The first case has M = M ′ (we need not consider runs starting with an interference event according to Lemma 4.24). Since Ic(t) = Tc(t), all that we must show is that M is non-violating. Using Lemma 3.21, we can infer that the only events with concession in the marking (Ic(t), σ) of N t are equal to decl (Ic(t),r:Ic([r/w]t 0 )) (r) for some r ∈ Res such that curr(r) ∉ σ. The marking M is assumed to be consistent, so for each such r we have ω proc (r) ∉ W and hence the synchronized event decl (Ic(t),r:Ic([r/w]t 0 )) (r) · decl(r) has concession in M . The marking M cannot therefore be violating. • Secondly, there exist a resource r, markings σ 0 , W 0 , C 1 and a path π 1 such that C ′ = r:C 1 and s r : where s r = decl (Ic(t),r:Ic([r/w]t 0 )) (r)·decl(r). The marking (C ′ , σ ′ , W ′ ) cannot be a terminal marking of the net W t Γ , so all that we must show is that it is non-violating. We have r, curr(r) ∈ σ 0 and ω proc (r) ∈ W 0 since they are in the postconditions of s r . A simple induction on the length of π 1 using Lemmas 4.19 and 4.21 informs us that π χ r (C 1 , σ ′ , W ′ ) is reachable from π χ r (Ic([r/w]t 0 ), σ 0 , W 0 ) in W [r/w]t 0 Γ,r : χ . We have curr(r) ∉ σ because the event s r has concession in M , so r ∉ dom(Γ) because the marking M is consistent. Since res(t) ⊆ dom(Γ) by Lemma 4.2, we may use Lemma 4.25 in conjunction with the induction hypothesis to obtain Γ, r : χ |= {ϕ} [r/w]t 0 {ψ}. It is an easy calculation to show that π χ r (Ic([r/w]t 0 ), σ 0 , W 0 ) satisfies ϕ in Γ, r : χ, so the marking π χ r (C 1 , σ ′ , W ′ ) is non-violating.
By Lemma 4.21, the marking (C 1 , σ ′ , W ′ ) of W [r/w]t 0 Γ is therefore non-violating. According to Lemma 3.21, there are two possible ways in which the marking (C ′ , σ ′ , W ′ ) of W t Γ might be violating. Firstly, there might exist an event e of N [r/w]t 0 that has concession in the marking (C 1 , σ ′ ) but no interference event u that synchronizes with e such that e · u has concession in the marking (C 1 , σ ′ , W ′ ). We have shown, however, that this is not the case since the marking (C 1 , σ ′ , W ′ ) is non-violating. Alternatively, the event e ′ r = end (r:Tc([r/w]t 0 ),Tc(t)) (r) might have concession in the marking (C ′ , σ ′ ) of N t but the event s ′ r = e ′ r · end(r) might not have concession in (C ′ , σ ′ , W ′ ); that is, ω proc (r) ∉ W ′ . However, we have ω proc (r) ∈ W 0 , so by applying Lemma 4.19 along the path π 1 we obtain ω proc (r) ∈ W ′ . So the event s ′ r has concession in the marking, which is therefore not violating.
The following result connects the definition of validity to the execution of processes without interference or ownership. Proof. A consequence of soundness and Lemma 4.14.
4.3. Fault. It can be seen that the rules of concurrent separation logic ensure that processes, running from suitable initial states, only access current locations. The syntax of the language ensures that processes only access current resources and that they are never blocked when releasing a resource through it already being available. We shall now demonstrate that processes avoid such 'faults'. We say that an event e is control-enabled in a marking C of control conditions if there exists a marking C ′ such that C e − ։ C C ′ .
This definition also applies to markings (C, σ, W ) of W t Γ in the obvious way, by ignoring the marking of ownership conditions W and considering synchronized events e · u.
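Control-enabledness is the usual Petri-net concession condition restricted to control conditions. A small Python sketch under our own encoding (events carry sets `pre` and `post` of control conditions):

```python
def control_enabled(e, C):
    """e is control-enabled at control marking C: its pre-control conditions
    are all marked and its post-control conditions are free (apart from any
    that it also consumes)."""
    return e["pre"] <= C and not (e["post"] & (C - e["pre"]))

def fire(e, C):
    # the control marking reached by firing e
    assert control_enabled(e, C)
    return (C - e["pre"]) | e["post"]

e = {"pre": {"c0"}, "post": {"c1"}}
print(fire(e, {"c0"}))             # {'c1'}
print(control_enabled(e, {"c1"}))  # False
```

The second print shows an event whose pre-control condition is unmarked: it is not control-enabled, which is exactly the situation of a stuck process.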

Proof. By rule induction on the judgement Γ ⊢ {ϕ} t {ψ}.
A corollary of this result and Lemma 4.14 is that if ∅ ⊢ {ϕ} t {ψ} then no fault is reachable from an initial marking of N t if the heap initially satisfies ϕ.

Separation
As mentioned in the introduction, the logic discriminates between the parallel composition of processes and their interleaved expansion. In Brookes' trace semantics [Bro07], this was accounted for by making the notion of a race primitive within the semantics: when forming the parallel composition of processes, if two processes concurrently write to the same location, a special 'race' action occurs and the trace proceeds no further. Our approach when defining the semantics has been different; we do not regard a race as 'catastrophic' and have not embellished our semantics with special race states. Instead, we shall prove, using the semantics directly, that races do not occur for proved processes running from suitable initial states.
Generally, a race can be said to occur when two interacting heap actions occur concurrently. Recall that a heap action is represented in the net semantics by a set of events, with common pre- and post-control conditions, representing each way in which the action can affect the heap. According to the net model, two actions may be allowed to run concurrently if their events do not overlap on their pre- or post-control conditions. In such a situation, where C e 1 C ∩ C e 2 C = ∅, we shall say that e 1 and e 2 are control-independent. One way of capturing the race freedom of a process running from an initial state is to show that there is no reachable marking in the net where two control-independent events are control-enabled but access a common heap location, except for interaction through allocation. We, however, shall prove a result based on the behaviour of processes: that whenever two events are control-independent and can occur, then either they are independent or they lie within a prescribed class of actions.
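Control-independence and the race condition just described can be stated directly. A hypothetical Python sketch (events carry their control conditions and the set of heap locations they touch; the encoding is ours, not the paper's):

```python
def control_independent(e1, e2):
    # disjoint pre- and post-control conditions
    ctrl = lambda e: e["pre_ctrl"] | e["post_ctrl"]
    return not (ctrl(e1) & ctrl(e2))

def potential_race(e1, e2):
    # two control-independent heap actions touching a common location
    return control_independent(e1, e2) and bool(e1["locs"] & e2["locs"])

# Two parallel writes to l from distinct control points race:
e1 = {"pre_ctrl": {"c1"}, "post_ctrl": {"c1'"}, "locs": {"l"}}
e2 = {"pre_ctrl": {"c2"}, "post_ctrl": {"c2'"}, "locs": {"l"}}
print(potential_race(e1, e2))  # True
```

Events of the same sequential process share control conditions and so are never control-independent; only genuinely parallel events can race in this sense.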
(3) The symmetric statement for M.

The first part of the property above tells us how the enabled events of parallel processes conflict with each other in a state: the way in which one parallel process can prevent the other acting in a particular way on the global state. The second part dictates how the event occurrences of parallel processes causally depend on each other: the way in which the ability of one process to affect the global state in a particular way is dependent on events of the other process.
Importantly, whenever the two events s 1 and s 2 arise from heap actions, they neither conflict nor causally depend on each other. This is our net analogue of race freedom. Theorem 5.4 shows that processes proved by the logic are race free when running from suitable initial states. We shall make use of the following rather technical lemmas in the proof.
For a synchronized event s and an interference event u, define the separation property for s and u at M similarly, recalling that any synchronized event is trivially control-independent from any interference event because ᶜuᶜ = ∅ for any interference event u. It is always the case that a synchronized event and an interference event satisfy the separation property in any consistent marking.
Lemma 5.2. If M is a consistent marking of W t Γ and s is a synchronized event and u is an interference event then s and u satisfy the separation property in M .

Proof. A straightforward analysis of the many cases for s and u.
The following lemma relates independence from an interference event to independence from any corresponding synchronized event. Recall that we write e I e′ if e and e′ are independent.
Lemma 5.3. Let s be any synchronized event of W t Γ and u be an interference event of W t Γ. Suppose that M is a consistent marking in which they both have concession. If e₁ is an event of N t that synchronizes with u, and s I u, and s is control-independent from e₁, then s I (e₁ · u).
Proof. It is easy to see that the preconditions of e₁ · u are simply the preconditions of u along with the pre-control conditions of e₁, apart from replacing ω_oth(ℓ) with ω_proc(ℓ) and replacing ω_oth(r) with ω_proc(r). The postconditions of e₁ · u are similar.
Suppose, for contradiction, that ¬(s I (e₁ · u)). Since s I u and s is control-independent from e₁, it follows that there must exist z ∈ Loc ∪ Res such that ω_proc(z) ∈ •s• ∩ •(e₁ · u)•. From the definition of synchronization, we therefore have ω_oth(z) ∈ •u•. The proof is completed by analysis of the cases for how ω_proc(z) ∈ •s•; we shall show only one illustrative case, that where z is a location ℓ such that ω_proc(ℓ) ∈ •s but ω_proc(ℓ) ∉ s•.
In this case, the event s must either deallocate the location ℓ or release a resource r with r ∈ dom(Γ) such that ℓ forms part of the heap used to satisfy the invariant for r. As the event s has concession in M, we have ω_proc(ℓ) ∈ M. By assumption, u has concession in M and ω_oth(ℓ) ∈ •u•. We cannot have ω_oth(ℓ) ∈ •u since ω_proc(ℓ) ∈ M, so ω_oth(ℓ) ∈ u•. Therefore, the event u is an interference event that either allocates the location ℓ or acquires an open resource r such that ℓ is part of the heap that satisfies the invariant for r. If u is such an event that acquires r, it must be the case that ω_inv(ℓ) ∈ •u, so ω_inv(ℓ) ∈ M, contradicting the consistency of M given that ω_proc(ℓ) ∈ M. Consequently, u must in fact be an event that allocates the location ℓ, which requires that curr(ℓ) ∉ M. We then arrive at another contradiction: since M is consistent and ω_proc(ℓ) ∈ M, it must be the case that curr(ℓ) ∈ M.
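The relabelling used at the start of this proof, forming the composite e₁ · u from a process event and an interference event, can be sketched as follows, under a hypothetical (tag, name) encoding of conditions (the tags omega_proc, omega_oth and the example events are illustrative):

```python
# Sketch of the composite event e1 · u: the composite takes u's state
# conditions with omega_oth(·) relabelled to omega_proc(·), together with
# e1's pre- and post-control conditions.  The condition encoding as
# (tag, name) pairs is a hypothetical stand-in for the formal definition.
def relabel(conds):
    return {('omega_proc', z) if tag == 'omega_oth' else (tag, z)
            for tag, z in conds}

def synchronise(e1, u):
    (pre1, post1), (pre_u, post_u) = e1, u
    return (pre1 | relabel(pre_u), post1 | relabel(post_u))

e1 = ({('ctrl', 'p')}, {('ctrl', 'q')})           # a control step of the process
u = ({('omega_oth', 'l')}, {('omega_oth', 'l')})  # interference touching location l
pre, post = synchronise(e1, u)
```

After synchronisation the location l is held via omega_proc rather than omega_oth, matching the description of the preconditions of e₁ · u in the proof above.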
We may now show that the separation property does indeed hold for any two events s₁ and s₂ in W t Γ, for any term t and environment Γ such that Γ ⊢ {ϕ} t {ψ}, in any marking M = (C, σ, W) reachable from an initial marking of t that satisfies ϕ in Γ. The proof is most interesting in the case where t ≡ t₁ ∥ t₂ and s₁ is an event of t₁ and s₂ is an event of t₂. The case proceeds by establishing, as in Theorem 4.26, that there exists an ownership split W₁ and W₂ of W for which s₁ has concession in (C₁, σ, W₁), where C₁ is the marking of control conditions in C for t₁, and that there exist e₂ and u₂ such that s₂ = (2:e₂) · u₂ and u₂ also has concession in the marking (C₁, σ, W₁) of W t₁ Γ. By Lemma 5.2, the separation property therefore holds for s₁ and u₂ in the marking (C₁, σ, W₁). It follows that the separation property holds for s₁ and s₂ in M since, by Lemma 5.3, if the events s₁ and u₂ are independent then so are s₁ and s₂.
Let s₁ and s₂ be synchronized events in W t₁ ∥ t₂ Γ. If s₁ = (1:e₁) · u₁ and s₂ = (1:e₂) · u₂ for some e₁, e₂ ∈ Ev(t₁) and interference events u₁ and u₂ in W t₁ Γ, the result follows routinely from the induction hypothesis, and similarly if s₁ and s₂ both arise from events of N t₂. Suppose instead that there exist e₁ ∈ Ev(t₁), e₂ ∈ Ev(t₂) and interference events u₁ and u₂ such that s₁ = (1:e₁) · u₁ and s₂ = (2:e₂) · u₂.
Suppose first that in the net W t₁ ∥ t₂ Γ we have M –s₁↠ M′ –s₂↠ M″. A simple induction applying the parallel decomposition lemma (Lemma 4.17) along the path to M shows that there exist W₁ and W₂ that form an ownership split of W such that e₁ · u₁ has concession in (C₁, σ, W₁). By Lemma 5.2, the separation property holds for e₁ · u₁ and u₂ in (C₁, σ, W₁); consider how it might hold. If e₁ · u₁ deallocates a location that u₂ allocates, then s₁ deallocates a location that s₂ allocates, so the separation property holds for s₁ and s₂. The argument is similar for all the other cases where e₁ · u₁ and u₂ are not independent. Suppose instead that (e₁ · u₁) I u₂. The event u₂ has concession in the marking (C₁, σ, W₁) by virtue of the fact that the occurrence of independent events in a run can be interchanged (Proposition 3.4). Consider the marking (1:C₁ ∪ 2:C₂, σ, W₁) of W t₁ ∥ t₂ Γ; this is straightforwardly seen to be consistent. The event s₁ is readily seen using Lemma 3.6 to have concession in this marking, as does u₂. The event 2:e₂ is control-independent from 1:e₁, so by Lemma 5.3 we have s₁ I s₂, as required.

Now suppose that in the net W t₁ ∥ t₂ Γ both s₁ and s₂ have concession in M. A simple induction applying the parallel decomposition lemma (Lemma 4.17) along the path to M shows that there exist W₁ and W₂ that form an ownership split of W such that e₁ · u₁ has concession in (C₁, σ, W₁). By Lemma 5.2, the separation property holds for e₁ · u₁ and u₂ in (C₁, σ, W₁); consider how it might hold. If e₁ · u₁ allocates a location that u₂ also allocates, then s₁ allocates a location that s₂ allocates, so the separation property holds for s₁ and s₂. The argument is similar for all the other cases where e₁ · u₁ and u₂ are not independent. Suppose instead that (e₁ · u₁) I u₂. Consider the marking (1:C₁ ∪ 2:C₂, σ, W₁) of W t₁ ∥ t₂ Γ; this is readily seen to be consistent. The event s₁ has concession in this marking, as does u₂.
The event 2:e 2 is control-independent from 1:e 1 , so by Lemma 5.3 we have s 1 Is 2 , as required.
The remaining cases of the proof follow relatively straightforwardly by induction. The case for (L-Res) requires an observation along the lines of Lemma 4.25: that, for any term t with fv(t) ⊆ {w} and resources r, r′ ∈ dom(Γ), if the separation property holds for any two synchronized events of W [r/w]t Γ in any marking reachable from any initial marking satisfying ϕ in Γ, then it also holds for W [r′/w]t Γ.
The proof for the rule (L-Seq) follows straightforwardly by induction using Lemma 3.15 except in the second (and symmetric third) cases of the definition of the separation property, where there are reachable markings M, M′, M″ such that M –s₁↠ M′ –s₂↠ M″ and there exist events e₁ ∈ Ev(t₁) and e₂ ∈ Ev(t₂) and interference events u₁, u₂ such that s₁ = (P ⊳ 1:e₁) · u₁ and s₂ = (P ⊲ 2:e₂) · u₂ for P = 1:Tc(t₁) × 2:Ic(t₂). In this case, it follows from Lemma 3.15 and Lemma 3.12 that the events s₁ and s₂ are not control-independent.
The result can be applied, using Lemma 4.14 and the observation that (e₁ · u₁) I (e₂ · u₂) implies e₁ I e₂, to obtain a similar result for the net semantics of terms without ownership.
Corollary 5.5. Let t be a closed term. Suppose that ∅ ⊢ {ϕ} t {ψ} and that σ₀ = (D₀, L₀, ∅, ∅) is a state for which D₀ |= ϕ. If M is a marking reachable from (Ic(t), σ₀) in N t and e₁ and e₂ are control-independent events then:
• if M –e₁↠ M′ –e₂↠ M″ then either e₁ and e₂ are independent, or e₁ releases a resource or a location that e₂ correspondingly takes or allocates, or e₁ makes non-current a resource that e₂ makes current;
• if M –e₁↠ M₁ and M –e₂↠ M₂ then either e₁ and e₂ are independent, or e₁ and e₂ compete either to make current the same resource, to acquire the same resource, or to allocate the same location.

Incompleteness
The separation result highlights an important form of possible interaction between concurrent processes. Observe that, although there is neither conflict nor causal dependence arising from heap events (and hence the processes are race-free in the sense of Brookes), there may be interaction through the occurrence of allocation and deallocation events. One may therefore give judgements for parallel processes that interact without using critical regions. Suppose, for example, that we have a suitable heap. For any processes t₁ and t₂ such that t₁ does not deallocate ℓ₁, if we place the process in parallel with alloc(ℓ₂); while [ℓ₂] ≠ ℓ₁ do alloc(ℓ₂) od; t₂, the process t₂ only runs once t₁ has terminated, and possibly never, even if t₁ terminates. This arises from the fact that the loop in the second process will only exit when location ℓ₁ is allocated by the command alloc(ℓ₂); this can only occur once dealloc(ℓ₀) makes ℓ₁ non-current and therefore available for allocation by alloc(ℓ₂). Denote this process seq(t₁, t₂).

Refinement
As we remarked in the introduction, the atomicity assumed of primitive actions, also called their granularity, is of significance when considering parallel programs. For example, suppose that a concurrent program executes an assignment that increments the value held at a memory location. It may not be reasonable to assume that the assignment is executed atomically: the processor on which the process runs might have primitive actions for copying the values held in memory locations and for incrementing them, but not for copying and incrementing in one clock step. In [Rey04], Reynolds proposes a form of trace semantics that regards the occurrence of uncontrolled interference between concurrent processes as 'catastrophic'. The motivation behind this is the race freedom property arising from concurrent separation logic [Bro04]: in the semantics of a proved process running from a suitable initial state, no uncontrolled interference may occur. Reynolds' observation is that, in this situation, judgements may be made that are insensitive to atomicity. Within our net model we can provide a form of refinement, similar to that of [vGG89] but suited to processes executing in a shared environment, that begins to capture these ideas. Importantly, the property required to apply the refinement operation may be captured directly in terms of independence, with no changes to our semantics. We will relate the nets representing processes with different levels of atomicity by regarding them as alternative substitutions into a context. We will then give a condition on substitutions, guided by Theorem 5.4, under which any partial correctness assertion made for one of the nets also holds for the other.
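The granularity issue can be seen in a small experiment, a sketch under the assumption that an increment splits into a read micro-step and a write micro-step (the schedule encoding is hypothetical):

```python
from itertools import permutations

# Toy illustration of why granularity matters: two processes each perform
# x := x + 1, but non-atomically, as a read micro-step ('r', i) followed by
# a write micro-step ('w', i).  All names are illustrative.
def run(schedule):
    x, reg = 0, {}
    for op, i in schedule:
        if op == 'r':
            reg[i] = x          # copy the shared value into a register
        else:
            x = reg[i] + 1      # write back the incremented copy
    return x

steps = [('r', 0), ('w', 0), ('r', 1), ('w', 1)]
# every interleaving that keeps each process's read before its write
finals = {run(s) for s in permutations(steps)
          if s.index(('r', 0)) < s.index(('w', 0))
          and s.index(('r', 1)) < s.index(('w', 1))}
```

Enumerating the interleavings shows that the non-atomic version can lose an update: the set of final values is {1, 2}, whereas with atomic increments the only outcome is 2.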
The treatment of substitution requires some restrictions to be placed on the nets we consider. In the remainder of this section, and in Appendix A where we present its technical details, we require that all embedded nets satisfy the structural properties described in Lemma 3.11 and Definition 3.13. We may now construct the net representing the substitution of a net N for the hole in a context K. We shall assume that, as in the semantics for terms, the two nets are formed over the same sets of conditions. As the nets are extensional (we regard an event simply as its set of preconditions paired with its set of postconditions), all that we need to specify is the events of the net and its initial and terminal markings of control conditions.
To see the definition at work, consider the following example. We elide details of the action of events on state conditions, which is unaffected by the substitution operation.
Write N : σ ⇓ σ ′ if there exists a complete sequence from σ to σ ′ in N .
Using this definition, we can define the notion of complete trace equivalence ≃: N₁ ≃ N₂ iff, for all states σ and σ′, N₁ : σ ⇓ σ′ iff N₂ : σ ⇓ σ′. We wish to constrain K, N₁ and N₂ appropriately so that if N₁ ≃ N₂ then K[N₁] ≃ K[N₂].
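For small finite nets the relations ⇓ and ≃ can be explored by exhaustive search. The following sketch uses a simplified, flat presentation of nets (a triple of events, initial control conditions and terminal control conditions over one set of conditions); it is an illustration, not the formal definition:

```python
from collections import deque

# Toy rendering of N : sigma ⇓ sigma': some run from the marking Ic ∪ sigma
# reaches a marking containing Tc, and sigma' is what remains.  All names
# below are illustrative.
def converges(net, sigma):
    events, ic, tc = net
    start = frozenset(ic) | sigma
    seen, out, queue = {start}, set(), deque([start])
    while queue:
        m = queue.popleft()
        if tc <= m:
            out.add(m - tc)          # a possible final state sigma'
        for pre, post in events:
            if pre <= m:
                m2 = (m - pre) | post
                if m2 not in seen:
                    seen.add(m2)
                    queue.append(m2)
    return out

def trace_equiv(n1, n2, sigmas):
    # N1 ≃ N2 relative to the states supplied: identical ⇓ relations
    return all(converges(n1, s) == converges(n2, s) for s in sigmas)

# An atomic event versus its two-step refinement through condition 'm':
n1 = ([(frozenset({'i', 'a'}), frozenset({'t', 'b'}))], {'i'}, {'t'})
n2 = ([(frozenset({'i', 'a'}), frozenset({'m', 'b'})),
       (frozenset({'m'}), frozenset({'t'}))], {'i'}, {'t'})
```

Here n2 refines the single event of n1 into two steps; on the state {'a'} the two nets determine the same ⇓ relation and so are complete trace equivalent in this toy sense.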
Example 6.5. Write, in the obvious way, − for the action term that will be interpreted as forming the hole of a context, and define the nets of the example accordingly.

Return to the general case for a substitution K[N]. Intuitively, if the substituend N were an atomic event, it would start running only if the conditions P_i were marked and P_t were not. There are two distinct ways in which the context K can affect the execution of N. Firstly, it might affect the marking of conditions in P_i or P_t whilst N is running. Secondly, it might change the marking of state conditions in a way that affects the execution of N. An instance of the latter form of interference is seen in the preceding example. We now define a form of constrained substitution, guided by Theorem 5.4, so that N is not subject to these forms of interference.
Say that a control condition c of K[N] is internal to N if c = 2:c₂ where c₂ is a pre- or a postcondition of an event of N that is not in Ic(N) or Tc(N). Given a marking M of K[N], say that N is active if P_i ⊆ M or there exists an internal condition of N in M.

Theorem 6.7. If N₁ ≃ N₂ and K[N₁] and K[N₂] are non-interfering substitutions from state σ then, for any σ′, K[N₁] : σ ⇓ σ′ iff K[N₂] : σ ⇓ σ′.

Proof. Appendix A, Theorem A.12.
The refinement operation defined in this section allows us to change the granularity of heap actions by substituting the occurrence of an action in the original net with a net representing the actual implementation of the action, but only once it has been shown that the non-interference property holds both for the original net and for the net so formed. The operation might be key to proving Reynolds' observation that an occurrence of an action α in the term t can be replaced by a term with the same overall behaviour as α without affecting the validity of the judgement Γ ⊢ {ϕ} t {ψ}.

Related work and conclusions
The first component of this work provides an inductive definition of the semantics, as a net, of programs operating in a (shared) state. This is a relatively novel technique, though it has in the past been applied to give the semantics of SPL, a language for investigating security protocols [CW01]; our language involves a richer collection of constructs. Other independence models for terms include the Box calculus [BDH92] and the event structure and net semantics of CCS [Stu80, Win82, WN95] ([Stu80] was, to our knowledge, the first Petri net denotational semantics of CCS), though these model interaction as synchronized communication rather than as occurring through shared state. We hope that the novel Petri net semantics presented here and in [CW01] can be the start of systematic and comprehensive methods to attribute structural Petri net semantics to a full variety of programming languages, resulting in a Petri net companion to Plotkin's structural operational semantics (SOS) based on transition systems [Plo81]. Paralleling the (inductive) definitions of data and transitions of SOS would be (inductive) definitions of conditions and events of Petri nets.
The proof of soundness of separation logic here is led by Brookes' earlier work [Bro07]. There are a few minor differences in the syntax of processes, including that we allow the dynamic binding of resource variables. Another minor difference between the programming language and logic considered here and those introduced by O'Hearn and proved sound by Brookes is that we do not distinguish stack variables. These may be seen as locations to which other locations may not point and which are the only locations that terms can directly address. In Brookes' model, as in [O'H07], interference of parallel processes through stack variables is constrained by the use of a side condition on the rule rather than through the concept of ownership (the area of current research on 'permissions' [BCOP05, BCY05, Bro06] promises a uniform approach). In particular, the rule allows the concurrent reading of stack locations. Though we have chosen not to include stack variables in our model in order to highlight the concept of ownership, our model and proofs could easily be extended to deal with them. Concurrent reading of memory would come at the cost of a more sophisticated notion of independence that allowed independent events to access the same condition providing that neither affects the marking of that condition.
More notably, at the core of Brookes' work is a 'local enabling relation', which gives the semantics of programs over a restricted set of 'owned' locations. Our notion of validity involves maintaining a record of ownership and using this to constrain the occurrence of events in the interference net adjoined to the process. This allows the intuition of ownership in O'Hearn's introduction of concurrent separation logic [O'H07] to be seen directly as constraining interference. Though the relationship between our model and Brookes' is fairly obvious, we believe that our approach leads to a clearer parallel decomposition lemma, upon which the proof of soundness of the logic critically rests.
The most significant difference between our work and Brookes' is that the net model captures, as a primitive property, the independence of parallel processes enforced by the logic. We have used this property to define a straightforward, yet general, form of refinement suited to changing the atomicity of commands within the semantics of a term. This is in contrast to [Bro05], which gives a new semantics to race-free processes that abstracts entirely away from attaching any form of atomicity to the semantics of heap actions. As stated at the end of the previous section, we hope to show that the refinement operation can be applied to change the atomicity of any action occurring within any process running from a suitable initial state proved using the rules of concurrent separation logic.
Our characterization of 'separation' arising from the logic is much finer than that obtained from the existing proof of race freedom, for example showing that interaction between parallel processes may occur through allocation and deallocation. This is significant, as such interaction leads to examples of the incompleteness of concurrent separation logic.
There are a number of other areas for further research in addition to those mentioned above. One interesting consideration is the necessity (or otherwise) of precision in the proof of soundness of the logic. In forthcoming work, we hope to give a form of game semantics for the logic and a soundness proof without precision in the absence of Hoare's Law of Conjunction (L-Conjunction). Another area of interest is whether the symmetry present in our semantics for allocation and resource declaration might be exploited to obtain more compact nets to represent processes.

Appendix A. Refinement
A sequence of events π = (e₁, . . . , eₙ) considered from a marking M can be thought of equivalently as a sequence M –e₁↠ M₁ · · · –eₙ↠ Mₙ. To describe the structure of such sequences, we shall say that π from marking M is of form Π₁ · Π₂ if there exist π₁ and π₂ such that π = π₁ · π₂, where · denotes the obvious concatenation of sequences, and π₁ is of form Π₁ from marking M and π₂ is of form Π₂ from the marking obtained by following π₁ from M. Sequence π is of form Π* if it is the concatenation of a finite number of sequences, each of form Π.
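For finite sequences the form Π* can be checked mechanically; a small sketch, with `is_form` standing in for an arbitrary predicate recognising sequences of form Π:

```python
# A sequence has form Π* if it splits into finitely many consecutive
# blocks, each of form Π.  The empty split (zero blocks) is allowed.
# `is_form` is a hypothetical predicate for form Π.
def of_form_star(seq, is_form):
    if not seq:
        return True
    return any(is_form(seq[:k]) and of_form_star(seq[k:], is_form)
               for k in range(1, len(seq) + 1))
```

Trying every split point makes the check exhaustive rather than greedy, which matters when blocks of different lengths can match Π.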
Throughout this section, when we consider the substitution K[N], let P_i and P_t be defined as in Definition 6.2. Any reachable marking of conditions of the net can be partitioned into two sets: conditions that occur solely within K, and conditions that are either N-internal or in P_i or P_t. Formally, a condition c is a K-condition if c = 1:c₁ for some condition c₁ of K not in •[−]•. A condition c is an N-condition if either c ∈ P_i ∪ P_t or c = 2:c₂ for some condition c₂ of N not in Ic(N) ∪ Tc(N). Recall that we call 2:c₂ an N-internal condition. It is easy to see that, for any σ₀, from the marking (Ic(K[N]), σ₀) only K- or N-control conditions may be marked: if (C, σ) is a reachable marking of K[N], we have C = C_N ∪ C_K for some marking C_N of N-conditions and some marking C_K of K-conditions. We shall frequently use the notation (C_N, C_K) for a marking of control conditions, where C_N comprises only N-conditions and C_K comprises only K-conditions.
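The partition into K- and N-conditions can be sketched concretely; the condition names and the tagging below are hypothetical stand-ins for the formal constructions:

```python
# Hypothetical finite presentation of the partition just described: the
# conditions of K[N] are 1:c1 (K-side), 2:c2 (N-side), or glue pairs in
# P_i / P_t pairing the hole's pre/postconditions with Ic(N) / Tc(N).
pre_hole, post_hole = {'a1', 'a2'}, {'x1'}     # the hole's pre/postconditions
ic_n, tc_n = {'i1'}, {'t1'}                    # Ic(N) and Tc(N)
internal_n = {'m1', 'm2'}                      # other conditions of N

P_i = {(('1', a), ('2', i)) for a in pre_hole for i in ic_n}
P_t = {(('1', x), ('2', t)) for x in post_hole for t in tc_n}

def classify(c):
    # every reachable control condition is a K-condition or an N-condition
    if c in P_i or c in P_t:
        return 'N'
    if c[0] == '2' and c[1] in internal_n:
        return 'N'                             # N-internal condition
    if c[0] == '1' and c[1] not in pre_hole | post_hole:
        return 'K'
    raise ValueError('not a condition of K[N]')
```

The disjointness of the two classes is what allows a reachable marking C to be written uniquely as C_N ∪ C_K.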
Henceforth, when considering a substitution K[N], we shall refer to an event e as being an N-event if it is equal to (P_i ∪ P_t) ⊲ 2:e₂ for some e₂ in N. Otherwise, it is a K-event. Recall that a marking (C_N, C_K, σ) reachable from (Ic(K[N]), σ₀) is N-active if either there is an N-internal condition in C_N or if C_N = P_i. It is useful to further classify the markings of conditions in C_N according to whether they support the occurrence of N- or K-events on the conditions P_i and P_t.

A marking (C_N ∪ C_K, σ) of K[N] is an N-marking if, for all a, a′ ∈ •[−], x, x′ ∈ [−]•, i ∈ Ic(N) and t ∈ Tc(N):
• if (1:a, 2:i) ∈ C_N then (1:a′, 2:i) ∈ C_N, and
• if (1:x, 2:t) ∈ C_N then (1:x′, 2:t) ∈ C_N.

A marking (C_N ∪ C_K, σ) of K[N] is a K-marking if there is no N-internal condition marked and, furthermore, for all a ∈ •[−], x ∈ [−]•, i, i′ ∈ Ic(N) and t, t′ ∈ Tc(N):
• if (1:a, 2:i) ∈ C_N then (1:a, 2:i′) ∈ C_N, and
• if (1:x, 2:t) ∈ C_N then (1:x, 2:t′) ∈ C_N.

From a marking of control conditions (C_N, C_K), we can extract markings of control conditions for the nets N and K. We define ρ_N(C_N) to be the marking of N obtained from (C_N, C_K), which does not depend on the marking C_K of K-conditions, and ρ_K(C_N, C_K) to be the marking of K obtained from (C_N, C_K), which does depend on the marking of N-conditions (namely, the marking of N-conditions in P_i ∪ P_t).
For a marking C of the context K, we define θ_K(C) to be the corresponding marking of K[N]. For a marking C′ of the net N, we define θ_N(C′) to be the marking of N-conditions in the net K[N] corresponding to C′.

Definition A.3. Let K[N] be any substitution. For any marking C_N of N-conditions and C_K of K-conditions, and for any marking C of control conditions of the net K and marking C′ of control conditions of the net N, define the corresponding projections ρ and injections θ of markings. For an event e of K[N], define ρ_N(e) = e′ for the unique e′ such that e = (P_i ∪ P_t) ⊲ 2:e′. For an event e of N, define θ_N(e) = (P_i ∪ P_t) ⊲ 2:e. Define ρ_K(e) and θ_K(e) similarly, apart from having θ_K([−]) undefined.

Proof. Immediate from the definitions.
It is clear that ρ N and θ N form a bijection between N -events and Ev(N ). It is also clear that ρ K and θ K form a bijection between K-events and Ev(K) \ {[−]}. On markings, the situation is a little more intricate: Lemma A.5. Let K[N ] be a substitution. For any marking of control conditions C K ∪ C N of K[N ] that is a K-marking and any marking C of control conditions of K: θ K (ρ K (C K ∪ C N )) = C K ∪ C N and ρ K (θ K (C)) = C.
For any marking of control conditions C K ∪ C N of K[N ] that is an N -marking and any marking C of control conditions of N : θ N (ρ N (C N )) = C N and ρ N (θ N (C)) = C.
Proof. First, let C be any marking of control conditions of K. We shall show that ρ_K(θ_K(C)) = C. Let c be any control condition of the net K. Since K is an embedded net, by the restrictions imposed in Lemma 3.11 there are three distinct cases: c ∉ •[−] ∪ [−]•, c ∈ •[−] and c ∈ [−]•. The first case is straightforward since the operation of θ_K on such conditions is to add a '1:'-tag which is removed by ρ_K. Now consider c ∈ •[−]; the case for c ∈ [−]• is similar. By the definition of θ_K, since Ic(N) is nonempty (again by Lemma 3.11), we have, for all i ∈ Ic(N), that (1:c, 2:i) ∈ θ_K(C) iff c ∈ C; from the definition of ρ_K, it follows that c ∈ C iff c ∈ ρ_K(θ_K(C)).

Now suppose that (C_K, C_N) is a K-marking of the substitution K[N]. Let c be any condition of the net K[N]. There are three distinct possible cases: c ∉ P_i ∪ P_t, c ∈ P_i or c ∈ P_t. First, suppose that c ∉ P_i ∪ P_t:

c ∈ C_N ∪ C_K
  iff c ∈ C_K                                   (def. of K-marking)
  iff ∃c₁. (c₁ ∈ ρ_K(C_N, C_K) and c = 1:c₁)    (def. of ρ_K)
  iff c ∈ θ_K(ρ_K(C_N ∪ C_K))                   (def. of θ_K)

Now suppose that c ∈ P_i, so c = (1:a, 2:i) for some a ∈ •[−] and i ∈ Ic(N). We have a similar analysis if c ∈ P_t. Hence (C_K, C_N) = θ_K(ρ_K(C_K, C_N)). For any marking of control conditions C of the net N and any N-marking (C_K, C_N), θ_N(ρ_N(C_N)) = C_N and ρ_N(θ_N(C)) = C are shown similarly, this time with the first analysis considering conditions in Ic(N), in Tc(N), and in neither set.
(1) Let C and C ′ be markings of control conditions of K. If (C, σ) (2) Now let C and C ′ be markings of control conditions of N . If (C, σ) for any marking C K of K-conditions.
We are now able to characterize the runs of the net K[N ] when a non-interfering substitution is formed.
• Π 1 ranges over nonempty sequences π 1 of any events between N -markings, where no K-event uses any condition in P i or P t . If (C N ∪ C K , σ) and (C ′ N ∪ C ′ K , σ ′ ) are the initial and final markings of π 1 , respectively, then C N = P i and C ′ N = P t . The first event of π 1 is an N -event and the final event of π 1 is also an N -event.
Proof. We first show that any sequence π in K[N] from (Ic(K[N]), σ₀) is of the form Π₀ · (Π₁ · Π₀)* or Π₀ · (Π₁ · Π₀)* · Π′₁ by induction on the length of the sequence, where a sequence π₁ is of form Π′₁ if:
• it is a sequence of K- and N-events between N-markings where no K-event uses any condition in P_i or P_t, and
• if (C_N, C_K, σ) is the initial marking of π₁ then C_N = P_i, and the first event of π₁ is an N-event.
We shall simultaneously show that if π : (Ic(K[N]), σ₀) –↠* (C_N, C_K, σ) and (C_N, C_K, σ) is an N-marking then either it is N-active or C_N = P_t. Furthermore, if P_t ⊆ C_N then P_t = C_N. The base case for the induction is straightforward. Suppose that π : (Ic(K[N]), σ₀) –↠* M where M = (C_N, C_K, σ) and that e is an event such that M –e↠ M′. Let M′ = (C′_N, C′_K, σ′). We shall show that π · e from marking (Ic(K[N]), σ₀) is of the correct form and that M′ satisfies the required properties.
Suppose that M′ is an N-marking but C′_N ≠ P_t and M′ is not N-active. As M′ is not N-active, we must have C′_N ≠ P_i. From the induction hypothesis, there must exist a path π′ and markings C″_K and σ″ such that π′ : (P_i, C″_K, σ″) –↠* (C′_N, C′_K, σ′) and (P_i, C″_K, σ″) is reachable from (Ic(K[N]), σ₀). Furthermore, from (P_i, C″_K, σ″) the path π′ is between N-active markings. Since K[N] is a non-interfering substitution from state σ₀, it follows from the requirement that consecutive K- and N-events must be independent that there must exist paths π₁ and π₂ made exclusively of N- and K-events, respectively, such that π₁ · π₂ : (P_i, C″_K, σ″) –↠* (C′_N, C′_K, σ′). Since N-events do not affect the marking of K-conditions, and since K-events do not affect the marking of N-conditions along the path π₂ because K[N] is a non-interfering substitution from σ₀, there exists a state σ₁ such that π₁ : (P_i, C″_K, σ″) –↠* (C′_N, C″_K, σ₁). Since ρ_N(P_i) = Ic(N), a simple induction on the length of this sequence using Lemma A.6 shows that the marking (ρ_N(C′_N), σ₁) is reachable from (Ic(N), σ″) in N. Consider the ways in which the N-marking (C′_N, C′_K, σ′) may fail to be N-active. Firstly, if C′_N ⊊ P_i, it follows that ρ_N(C′_N) ⊊ Ic(N). Since (ρ_N(C′_N), σ₁) is reachable from (Ic(N), σ″), this contradicts the requirement of Definition 3.13. The proof is similar in the other cases, C′_N ⊊ P_t and P_t ⊊ C′_N, which may cause the marking to fail to be N-active without C′_N = P_t. To complete the proof, it suffices to show the following properties:
(1) K-events preserve K-markings: if M is a K-marking and e is a K-event and M –e↠ M′ then M′ is a K-marking.
(2) N-events preserve N-markings.
(4) The only markings that are both N- and K-markings are of the form (P_i, C_K, σ), (P_t, C_K, σ) or (P_i ∪ P_t, C_K, σ) for some C_K and σ.
(5) No N -event has concession in any reachable marking that is not N -active. Properties (1) and (2) are straightforward calculations using Lemmas A.4, A.5 and A.6. Property (3) follows immediately from Lemma A.1. Property (4) is obvious from the definitions of N -and K-markings. Property (5) is straightforward from the induction hypotheses and the fact that no event has concession in the terminal marking of N according to the requirements of Lemma 3.11.
Finally, to see that any complete run is of the form Π₀ · (Π₁ · Π₀)*, observe that the terminal marking of control conditions Tc(K[N]) is a K-marking. There are no C_K and σ such that the marking (P_i, C_K, σ) is terminal since then •[−] ∩ Tc(K) ≠ ∅, contradicting the requirement that K should be an embedded net satisfying the requirements of Lemma 3.11. Hence the terminal marking is not N-active.
Having now dealt with the control structure of contexts, we return to the idea that, given a net K[N 1 ] which is a non-interfering substitution from state σ, the events in any sequence may be reordered in a way that ensures that events of N 1 occur consecutively and form a "complete run" of the net N 1 . As N 1 ≃ N 2 , the net K[N 2 ] will therefore have a path between the same sets of state conditions.
To formalize this, let π be any sequential run of a non-interfering substitution K[N] from marking M. The set P_K[N](π, M) is defined to be the least set of sequences from marking M of K[N] that contains the sequence π and is closed under the operation of swapping consecutive independent events. It is easy to see that if π : M –↠* M′ and π′ ∈ P_K[N](π, M) then π′ : M –↠* M′. Define the order ≺ on P_K[N](π, M) as follows.

Definition A.9. Let π, π′ ∈ P_K[N](π₀, M). Define ≺ to be the transitive closure of ≺₁, where π ≺₁ π′ iff there exist sequences π₁ and π₂, an N-event e and a K-event e′ such that e I e′ and π = π₁ · e · e′ · π₂ and π′ = π₁ · e′ · e · π₂.
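For finite runs the closure P_K[N](π, M) can be computed directly; a sketch over opaque event labels, with `indep` a hypothetical symmetric independence predicate:

```python
# Sketch of P_K[N](pi, M): the least set of sequences containing pi and
# closed under swapping consecutive independent events.  Events are
# opaque labels; markings are omitted since swaps preserve the endpoints.
def perm_closure(pi, indep):
    seen, frontier = {tuple(pi)}, [tuple(pi)]
    while frontier:
        p = frontier.pop()
        for i in range(len(p) - 1):
            if indep(p[i], p[i + 1]):
                q = p[:i] + (p[i + 1], p[i]) + p[i + 2:]
                if q not in seen:
                    seen.add(q)
                    frontier.append(q)
    return seen

def indep(a, b):
    # hypothetical: only the K-event 'k' and the N-event 'n' commute
    return {a, b} == {'k', 'n'}

closure = perm_closure(['k', 'n', 'm'], indep)
```

The two sequences in the closure differ only by commuting the independent pair, which is exactly the kind of swap that generates the order ≺₁.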
It is clear that the order ≺ is well-founded since any path is, by definition, of finite length.
Definition A.10. Say that a sequence π of K[N ] from marking M is N -complete if M = (P i , C K , σ) for some C K and σ, every event of π is an N -event, and π :(P i , C K , σ) − ։ * (P t , C K , σ ′ ).
Lemma A.11. Any ≺-minimal element of P_K[N](π₀, M₀), for a complete path π₀ of K[N] from M₀, is of the form Π₀ · (Π_N · Π₀)*, where Π_N matches N-complete paths and Π₀ is as in Lemma A.8.
Proof. Suppose that π is a ≺-minimal element of P_K[N](π₀, M₀) but not of the form above. The sequence π is of the form given in Lemma A.8 because π is a complete path of K[N]. Consequently, there are π₁, π₂ and π₃ such that π = π₁ · π₂ · π₃ and π₂ = (e · e′) where e is a K-event and e′ is an N-event. Furthermore, the marking M₁ such that π₁ : M₀ –↠* M₁ is N-active. Now, from the definition of non-interfering substitution, the events e and e′ are independent. Hence the sequence π₁ · e′ · e · π₃ is in P_K[N](π₀, M₀) and lies below π in the order ≺, contradicting its minimality.
This gives us the ability to prove Theorem 6.7 by induction on paths of K[N 1 ].
Theorem A.12. If K[N₁] and K[N₂] are non-interfering substitutions from σ₀ and N₁ ≃ N₂ then, for all states σ, K[N₁] : σ₀ ⇓ σ iff K[N₂] : σ₀ ⇓ σ.

Proof. Suppose that π is a complete sequence of K[N₁] from σ₀ to σ′. We shall show that, for all π₁ ∈ P_K[N₁](π, (Ic(K[N₁]), σ₀)), if π₁ is a complete sequence from σ₀ to σ′ then there exists a complete sequence π₂ of K[N₂] from σ₀ to σ′. The proof shall proceed by induction on the well-founded order ≺. In particular π ∈ P_K[N₁](π, (Ic(K[N₁]), σ₀)), so, with the symmetric proof for the other direction, this will complete the proof of the required property.
Furthermore, for each i ≤ n, the sequence π₀ᵢ is of the form Π₀ defined in Lemma A.8, as is the sequence π₀; and, for each i ≤ n, the sequence π₁ᵢ is of the form Π_{N₁}, which matches N₁-complete paths. We shall show, by induction on n, that if π₁ is a sequence of this form in K[N₁] from (Ic(K[N₁]), σ₀) to the marking (C′₁, σ′) then there exists a path π₂ from (Ic(K[N₂]), σ₀) to (C′₂, σ′) for some C′₂ such that the markings of K-conditions correspond: ρ_K(C′₁) = ρ_K(C′₂).