Knowledge-Based Synthesis of Distributed Systems Using Event Structures

To produce a program guaranteed to satisfy a given specification one can synthesize it from a formal constructive proof that a computation satisfying that specification exists. This process is particularly effective if the specifications are written in a high-level language that makes it easy for designers to specify their goals. We consider a high-level specification language that results from adding knowledge to a fragment of Nuprl specifically tailored for specifying distributed protocols, called event theory. We then show how high-level knowledge-based programs can be synthesized from the knowledge-based specifications using a proof development system such as Nuprl. Methods of Halpern and Zuck then apply to convert these knowledge-based protocols to ordinary protocols. These methods can be expressed as heuristic transformation tactics in Nuprl.


1 Introduction
Errors in software are extremely costly and disruptive. NIST (the National Institute of Standards and Technology) estimates the cost of software errors to the US economy at $59.5 billion per year. One approach to minimizing errors is to synthesize programs from specifications. Synthesis methods have produced highly reliable moderate-sized programs in cases where the computing task can be precisely specified. One of the most elegant synthesis methods is so-called correct-by-construction program synthesis [Bates and Constable 1985; Constable et al. 1986]. Here programs are constructed from proofs that the specifications are satisfiable. That is, a constructive proof that a specification is satisfiable gives a program that satisfies the specification. This method has been successfully used by several research groups and companies to construct large complex sequential programs, but it has not yet been used to create substantial realistic distributed programs.
The Cornell Nuprl proof development system was among the first tools used to create correct-by-construction functional and sequential programs [Constable et al. 1986]. Nuprl has also been used extensively to optimize distributed protocols, and to specify them in the language of I/O automata [Bickford, Kreitz, Renesse, and Liu 2001]. Recent work by two of the authors [Bickford and Constable 2003] has resulted in the definition of a fragment of the higher-order logic used by Nuprl tailored to specifying distributed protocols, called event theory, and the extension of Nuprl methods to synthesize distributed protocols from specifications written in event theory [Bickford and Constable 2003]. However, as has long been recognized [Fagin, Halpern, Moses, and Vardi 1995], designers typically think of specifications at a high level, which often involves knowledge-based statements. For example, the goal of a program might be to guarantee that a certain process knows certain information. It has been argued that a useful way of capturing these high-level knowledge-based specifications is by using high-level knowledge-based programs [Fagin, Halpern, Moses, and Vardi 1995; Fagin, Halpern, Moses, and Vardi 1997]. Knowledge-based programs are an attempt to capture the intuition that what an agent does depends on what it knows. For example, a knowledge-based program may say that process 1 should stop sending a bit to process 2 once process 1 knows that process 2 knows the bit. Such knowledge-based programs and specifications have been given precise semantics by Fagin et al. [1995, 1997]. They have already met with some degree of success, having been used both to help in the design of new protocols and to clarify the understanding of existing protocols [Dwork and Moses 1990; Halpern and Zuck 1992; Stulp and Verbrugge 2002].
In this paper, we add knowledge operators to event theory, raising its level of abstraction, and show by example that knowledge-based programs can be synthesized from constructive proofs that specifications in event theory with knowledge operators are satisfiable. Our example uses the sequence-transmission problem, where a sender must transmit a sequence of bits to a receiver in such a way that the receiver eventually knows arbitrarily long prefixes of the sequence. Halpern and Zuck [1992] provide knowledge-based programs for the sequence-transmission problem, prove them correct, and show that many standard programs for the problem in the literature can be viewed as implementations of their high-level knowledge-based programs. Here we show that one of these knowledge-based programs can be synthesized from the specifications of the problem, expressed in event theory augmented by knowledge. We can then translate the arguments of Halpern and Zuck to Nuprl, to show that the knowledge-based program can be transformed to the standard programs in the literature. Engelhardt, van der Meyden, and Moses [1998, 2001] have also provided techniques for synthesizing knowledge-based programs from knowledge-based specifications, by successive refinement. We see their work as complementary to ours. Since our work is based on Nuprl, we are able to take advantage of the huge library of tactics provided by Nuprl to generate proofs. The expressive power of Nuprl also allows us to express all the high-level concepts of interest (both epistemic and temporal) easily. Engelhardt, van der Meyden, and Moses do not have a theorem-proving engine for their language. However, they do provide useful refinement rules that can easily be captured as tactics in Nuprl.
The paper is organized as follows. In the next section we give a brief overview of the Nuprl system, review event theory, discuss the type of programs we use (distributed message automata), and show how automata can be synthesized from a specification. In Section 3 we review epistemic logic, show how it can be translated into Nuprl, and show how knowledge-based automata can be captured in Nuprl. The sequence-transmission problem is analyzed in Section 4. We conclude with references to related work and a discussion of future research in Section 5.
2 Synthesizing programs from constructive proofs

2.1 Nuprl: a brief overview

Much current work on formal verification using theorem proving, including Nuprl, is based on type theory (see [Constable] for a recent overview). A type can be thought of as a set with structure that facilitates its use as a data type in computation; this structure also supports constructive reasoning. The set of types is closed under constructors such as × and →, so that if A and B are types, so are A × B and A → B, where, intuitively, A → B represents the computable functions from A into B.
Constructive type theory, on which Nuprl is based, was developed to provide a foundation for constructive mathematics. The key feature of constructive mathematics is that "there exists" is interpreted as "we can construct (a proof of)". Reasoning in the Nuprl type theory is intuitionistic [Brouwer 1923], in the sense that proving a certain fact is understood as constructing evidence for that fact. For example, a proof of the fact that "there exists x of type A" builds an object of type A, and a proof of the fact "for any object x of type A there exists an object y of type B such that the relation R(x, y) holds" builds a function f that associates with each object a of type A an object b of type B such that R(a, b) holds.
One consequence of this approach is that the principle of excluded middle does not apply: while in classical logic, ϕ ∨ ¬ϕ holds for all formulas ϕ, in constructive type theory, it holds exactly when we have evidence for either ϕ or ¬ϕ, and we can tell from this evidence which of ϕ and ¬ϕ it supports. A predicate Determinate is definable in Nuprl such that Determinate(ϕ) is true iff the principle of excluded middle holds for formula ϕ. (From here on, when we say that a formula is true, we mean that it is constructively true, that is, provable in Nuprl.)

In this paper, we focus on synthesizing programs from specifications. Thus we must formalize these notions in Nuprl. As a first step, we define a type Pgm in Nuprl and take programs to be objects of type Pgm. Once we have defined Pgm, we can define other types of interest.

Definition 2.1: A program semantics is a function S of type Pgm → Sem assigning to each program Pg of type Pgm a meaning of type Sem = 2^Sem′, where Sem′ is the type of executions; S(Pg) is the set of executions consistent with the program Pg under the semantics S. A specification is a predicate X on Sem′. A program Pg satisfies the specification X if X(e) holds for all e in S(Pg). A specification X is satisfiable if there exists a program that satisfies X.

As Definition 2.1 suggests, all objects in Nuprl are typed. To simplify our discussion, we typically suppress the type declarations. Definition 2.1 shows that the satisfiability of a specification is definable in Nuprl. The key point for the purposes of this paper is that from a constructive proof that X is satisfiable, we can extract a program that satisfies X.
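To make Definition 2.1 concrete, here is a toy instance of the framework in Python. The names (run_semantics, satisfies, no_crash) and the finite "execution" model are ours, purely for illustration; Nuprl itself works in constructive type theory, not Python.

```python
# Toy model of Definition 2.1: a semantics S maps each program to a set of
# executions, and a specification is a predicate on executions.
# Here a "program" is just a set of allowed actions, and its "executions"
# are all action sequences of length at most 2 over that set.

def run_semantics(pg):
    """S : Pgm -> Sem. Enumerate every execution of the toy program pg."""
    execs = [()]
    for e1 in pg:
        execs.append((e1,))
        for e2 in pg:
            execs.append((e1, e2))
    return execs

def satisfies(pg, spec):
    """Pg satisfies X iff X(e) holds for every execution e in S(Pg)."""
    return all(spec(e) for e in run_semantics(pg))

# Specification: no execution ever performs the action "crash".
no_crash = lambda ex: "crash" not in ex

print(satisfies({"send", "ack"}, no_crash))    # True
print(satisfies({"send", "crash"}, no_crash))  # False
```

The constructive content of the paper's approach is exactly what this sketch cannot show: in Nuprl, a proof that some program satisfies X yields the program itself as a witness.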
Constructive type logic is highly undecidable, so we cannot hope to construct a proof completely automatically. However, experience has shown that, by having a large library of lemmas and proof tactics, it is possible to "almost" automate quite a few proofs, so that with a few hints from the programmer, correctness can be proved. For this general constructive framework to be useful in practice, the parameters Pgm, Sem′, and S must be chosen so that (a) programs are concrete enough to be compiled, (b) specifications are naturally expressed as predicates over Sem′, and (c) there is a small set of rules for producing proofs of satisfiability.
To use this general framework for the synthesis of distributed, asynchronous algorithms, we choose the programs in Pgm to be distributed message automata. Message automata are closely related to I/O automata [Lynch and Tuttle 1989] and are roughly equivalent to UNITY programs [Chandy and Misra 1988] (but with message-passing rather than shared-variable communication). We describe distributed message automata in Section 2.3. As we shall see, they satisfy criterion (a).
The semantics of a program is the system, or set of runs, consistent with it. Typical specifications in the literature are predicates on runs. We can view a specification as a predicate on systems by saying that a system satisfies a specification exactly if all the runs in the system satisfy it. To satisfy criterion (b), we formalize runs as structures that we call event structures, much in the spirit of Lamport's [1978] model of events in distributed systems. Event structures are explained in more detail in the next section. We have shown [Bickford and Constable 2003] that, although satisfiability is undecidable, there is indeed a small set of rules from which we can prove satisfiability in many cases of interest; these rules are discussed in Section 2.3.

2.2 Event structures
Consider a set AG of processes or agents; associated with each agent i in AG is a set X i of local variables. Agent i's local state at a point in time is defined as the values of its local variables at that time. We assume that the sets of local variables of different agents are disjoint. Information is communicated by message passing. The set of links is Links. Sending a message on some link l ∈ Links is understood as enqueuing the message on l, while receiving a message corresponds to dequeuing the message. Communication is point-to-point: for each link l there is a unique agent source(l) that can send messages on l, and a unique agent dest(l) that can receive messages on l. For each agent i and link l with source(l) = i, we assume that msg(l) is a local variable in X i.
We assume that communication is asynchronous, so there is no global notion of time. Following Lamport [1978], changes to the local state of an agent are modeled as events. Intuitively, when an event "happens", an agent either sends a message, receives a message, or chooses some values (perhaps nondeterministically). As a result of receiving the message or the (nondeterministic) choice, some of the agent's local variables are changed.
Lamport's theory of events is the starting point of our formalism. To help in writing concrete and detailed specifications, we add more structure to events. Formally, an event is a tuple with three components. The first component of an event e is an agent i ∈ AG, intuitively the agent whose local state changes during event e. We denote i as agent(e). The second component of e is its kind, which is either a link l with dest(l) = i or a local action a, an element of some given set Act of local actions. The only actions in Act are those that set local variables to certain values. We denote this component as kind(e). We often write kind(e) = rcv(l) rather than kind(e) = l to emphasize the fact that e is a receive event; similarly, we write kind(e) = local(a) rather than kind(e) = a to emphasize the fact that a is a local action. The last component of e is its value v, a tuple of elements in some domain Val; we denote this component as val(e). If e is a receive event, then val(e) is the message received when e occurs; if e is a local event of kind a, then val(e) represents the tuple of values to which the variables are set by a. (For more details on the reasons that led to this formalism, see [Bickford and Constable 2005].)

Rather than having a special kind to model send events, we model the sending of a message on link l by changing the value of a local variable msg(l) that describes the message sent on l. A special value ⊥ indicates that no message is sent when the event occurs; changing msg(l) to a value other than ⊥ indicates that a message is sent on l. This way of modeling send events has proved to be convenient. One advantage is that we can model multicast: the event e of i broadcasting a message m to a group of agents just involves a local action that sets msg(l) to m for each link l from i to one of the agents in the group. Similarly, there may be an action in which agent i sends a message to some agents and simultaneously updates other local variables.
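As a sketch of this event formalism (our own encoding; the class name, field names, and the dictionary representation of post-states are illustrative, not Nuprl's):

```python
from dataclasses import dataclass
from typing import Tuple

# Sketch of the three-component events of Section 2.2. The kind is either
# ("rcv", l) for a receive on link l or ("local", a) for a local action a.

@dataclass(frozen=True)
class Event:
    agent: str
    kind: Tuple[str, str]   # ("rcv", l) or ("local", a)
    val: tuple              # tuple of values in Val

# Sending has no kind of its own: a send is a local action whose effect sets
# msg(l). A multicast of m from agent i on links l1 and l2 is ONE local
# event whose effect sets msg(l1) and msg(l2) simultaneously.
e = Event(agent="i", kind=("local", "bcast"), val=("m",))
effects = {("msg", "l1"): "m", ("msg", "l2"): "m"}  # part of "state after e"
print(e.kind)  # ('local', 'bcast')
```

Note how the multicast costs one event, not two: this is the advantage of modeling sends as effects on msg(l) rather than as a separate event kind.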
Following Lamport [1978], we model an execution of a distributed program as a sequence of events satisfying a number of natural properties. We call such a sequence an event structure. We take an event structure es to be a tuple consisting of a set E of events and a number of additional elements that we now describe. These elements include the functions dest, source, and msg referred to above, but there are others. For example, Lamport assumes that every receive event e has a corresponding (and unique) event where the message received at e was sent. To capture this in our setting, we assume that the description of the event structure es includes a function send whose domain is the receive events in es and whose range is the set of events in es; if kind(e) = rcv(l), we require that agent(send(e)) = source(l). Note that, since we allow multicasts, different receive events may have the same corresponding send event.
For each i ∈ AG, we assume that the set of events e in es associated with i is totally ordered. This means that, for each event e, we can identify the sequence of events (history) associated with agent i that preceded e. To formalize this, we assume that, for each agent i ∈ AG, the description of es includes a total order ≺ i on the events e in es such that agent(e) = i. Define a predicate first and a function pred such that first(e) holds exactly when e is the first event in the history associated with agent(e) in es; if first(e) does not hold, then pred(e) is the unique predecessor of e in es. Following Lamport [1978], we take ≺ to be the least transitive relation on events in es such that send(e) ≺ e if e is a receive event and e ≺ e′ if e ≺ i e′. We assume that ≺ is well-founded. We abbreviate (e′ ≺ e) ∨ (e = e′) as e′ ⪯ e, or e ⪰ e′. Note that ≺ i is defined only for events associated with agent i: we write e ≺ i e′ only if agent(e) = agent(e′) = i.
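For a finite event structure, ≺ can be computed as the transitive closure of the per-agent successor edges together with the send-to-receive edges; the following sketch (with hypothetical helper names of our own) illustrates this:

```python
# Sketch: compute Lamport's causal order for a finite event structure as
# the least transitive relation containing send(e) -> e and the per-agent
# orders. Well-foundedness is automatic for a finite acyclic structure.

def causal_order(agent_seqs, send):
    """agent_seqs: per-agent ordered lists of event ids;
    send: maps each receive event to its send event.
    Returns the set of pairs (a, b) with a strictly causally before b."""
    prec = set()
    for seq in agent_seqs.values():
        for a, b in zip(seq, seq[1:]):
            prec.add((a, b))               # immediate successor: a before b
    for rcv, snd in send.items():
        prec.add((snd, rcv))               # send(e) before e
    changed = True                         # naive transitive closure
    while changed:
        changed = False
        for (a, b) in list(prec):
            for (c, d) in list(prec):
                if b == c and (a, d) not in prec:
                    prec.add((a, d))
                    changed = True
    return prec

# Agent i performs s1 then s2; s1's message is received by j at r1.
prec = causal_order({"i": ["s1", "s2"], "j": ["r1"]}, {"r1": "s1"})
print(("s1", "r1") in prec)  # True: a send precedes its receive
print(("s2", "r1") in prec)  # False: s2 and r1 are concurrent
```

The second query shows the characteristic feature of the causal order: events not connected by local order or messages are simply unordered.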
The local state of an agent defines the values of all the variables associated with the agent. While it is possible that an event structure contains no events associated with a particular agent, for ease of exposition, we consider only event structures in which each agent has at least one local state, and denote the initial local state of agent i as initstate i. In event structures es where at least one event associated with a given agent i occurs, initstate i represents i's local state before the first event associated with i occurs in es. Formally, the local state of an agent i is a function s that maps X i and a special symbol val i to values. (The role of val i will be explained when we give the semantics of the logic.) If x ∈ X i, we write s(x) to denote the value of x in i's local state s; similarly, s(val i) is the value of val i in s. If agent(e) = i, we take state before e to be the local state of agent i before e; similarly, state after e denotes i's local state after event e occurs. The value (state after e)(x) is in general different from (state before e)(x). How it differs depends on the event e, and will be clarified in the semantics. We assume that (state after e)(val i) = val(e); that is, the value of the special symbol val i in a local state is just the value of the event that it follows. If x ∈ X i, we take x before e to be an abbreviation for (state before e)(x), that is, the value of x in the state before e occurs; similarly, x after e is an abbreviation for (state after e)(x).
Example 2.2: Suppose that Act contains send and send+inc(x), where x ∈ X i, and that Val contains the natural numbers. Let m and v be natural numbers. Then
• the event of agent i receiving message m on link l in the event structure es is modeled by the tuple e = (i, l, m), where agent(e) = i, kind(e) = rcv(l), and val(e) = m;
• the event of agent i sending message m on link l in es is represented by the tuple e = (i, send, m), where msg(l) after e = m;
• the event e of agent i sending m on link l and incrementing its local variable x by v in es is represented by the tuple e such that agent(e) = i, kind(e) = send+inc(x), and val(e) = (m, v), where msg(l) after e = m and x after e = x before e + v.

The components of an event structure must satisfy a number of conditions, including the following:
• if e has kind rcv(l), then the value of e is the message sent on l during event send(e), agent(e) = dest(l), and agent(send(e)) = source(l):
∀e ∈ es. ∀l. (kind(e) = rcv(l)) ⇒ (val(e) = msg(l) after send(e)) ∧ (agent(e) = dest(l)) ∧ (agent(send(e)) = source(l));
• for each agent i, events associated with i are totally ordered:
∀e ∈ es. ∀e′ ∈ es. (agent(e) = agent(e′) = i ⇒ e ≺ i e′ ∨ e′ ≺ i e ∨ e = e′);
• e is the first event associated with agent i if and only if there is no event associated with i that precedes e:
∀e ∈ es. ∀i. (agent(e) = i) ⇒ (first(e) ⇔ ∀e′ ∈ es. ¬(e′ ≺ i e));
• the initial local state of agent i is the state before the first event associated with i, if any:
∀i. ∀e ∈ es. (agent(e) = i ⇒ (first(e) ⇔ (state before e = initstate i)));
• the local variables of agent(e) do not change value between the predecessor of e and e:
∀e ∈ es. ∀i. (agent(e) = i ∧ ¬first(e)) ⇒ ∀x ∈ X i. (x after pred(e) = x before e);
• the causal order ≺ is well-founded:
∀P. (∀e. (∀e′ ≺ e. P(e′)) ⇒ P(e)) ⇒ (∀e. P(e)),
where P is an arbitrary predicate on events. (It is easy to see that this axiom is sound if ≺ is well-founded. On the other hand, if ≺ is not well-founded, then let P be a predicate that is false exactly of the events e such that there is an infinite descending sequence starting with e. In this case, the antecedent of the axiom holds, and the conclusion does not.)

In our proofs, we will need to argue that two events e and e′ are either causally related or they are not. It can be shown [Bickford and Constable 2003] that this can be proved in constructive logic iff the predicate first satisfies the principle of excluded middle. We enforce this by adding the following axiom to the characterization of event structures: ∀e ∈ es. Determinate(first(e)).
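On a finite event structure, conditions such as the receive-event condition and the per-agent total order are mechanically checkable. The sketch below uses our own simplified encoding (dictionaries for events, explicit send and msg_after maps), purely to illustrate what the conditions require:

```python
# Sketch: check two of the event-structure conditions on a finite structure.

def check_receive_axiom(events, send, msg_after):
    """Every receive event's value must equal msg(l) after its send event."""
    for e in events:
        if e["kind"][0] == "rcv":
            link = e["kind"][1]
            snd = send[e["id"]]                  # the matching send event
            if e["val"] != msg_after[(snd, link)]:
                return False
    return True

def check_total_order(agent_seqs, events):
    """Every event must occur in its agent's (totally ordered) history."""
    return all(e["id"] in agent_seqs[e["agent"]] for e in events)

events = [
    {"id": "s1", "agent": "i", "kind": ("local", "send"), "val": "m"},
    {"id": "r1", "agent": "j", "kind": ("rcv", "l"), "val": "m"},
]
send = {"r1": "s1"}
msg_after = {("s1", "l"): "m"}       # msg(l) after s1 is m
agent_seqs = {"i": ["s1"], "j": ["r1"]}
print(check_receive_axiom(events, send, msg_after))  # True
print(check_total_order(agent_seqs, events))         # True
```

Changing r1's val to anything other than "m" makes the first check fail, mirroring the first bullet above.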
The set of event structures is definable in Nuprl (see [Bickford and Constable 2003]). We use event structures to model executions of distributed systems. We show how this can be done in the next section.

2.3 Distributed message automata
As we said, the programs we consider are message automata. Roughly speaking, we can think of message automata as nondeterministic state machines, though certain differences exist. Each basic message automaton is associated with an agent i; a message automaton associated with i essentially says that, if certain preconditions hold, i can take certain local actions. (We view receive actions as being out of the control of the agent, so the only actions governed by message automata are local actions.) At each point in time, i nondeterministically decides which actions to perform, among those whose precondition is satisfied. We next describe the syntax and semantics of message automata.

Syntax
We consider a first-order language for tests in automata. Fix a set AG of agents, a set X i of local variables for each agent i in AG, and a set X* of variables that includes ∪ i∈AG X i (but may have other variables as well). The language also includes special constant symbols val i, one for each agent i ∈ AG, predicate symbols in some finite set P, and function symbols in some finite set F. Loosely speaking, val i is used to denote the value of an event associated with agent i; constant symbols other than val 1, ..., val n are just 0-ary function symbols in F. We allow quantification only over variables other than local variables; that is, over variables x ∉ ∪ i∈AG X i.
Message automata are built using a small set of basic programs, which may involve formulas in the language above. Fix a set Act of local actions and a set Links of links between agents in AG. There are five types of basic programs for agent i:
• @i initially ψ;
• @i if kind = k then x := t, where k ∈ Act ∪ Links and x ∈ X i;
• @i kind = local(a) only if ϕ;
• @i if necessarily ϕ then i.o. kind = local(a); and
• @i only L affects x, where L is a list of kinds in Act ∪ Links and x ∈ X i.
Note that all basic programs for agent i are prefixed by @i.
We can form more complicated programs from simpler programs by composition. We can compose automata associated with different agents. Thus, the set (type) Pgm of programs is the smallest set that includes the basic programs such that if Pg 1 and Pg 2 are programs, then so is Pg 1 ⊕ Pg 2.
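These five basic forms and ⊕-composition admit a direct syntactic encoding. The sketch below uses our own constructor names (Initially, Effect, and so on), which are not Nuprl's:

```python
from dataclasses import dataclass
from typing import List

# Sketch of the Pgm syntax: five basic program forms plus composition.

@dataclass(frozen=True)
class Initially:     # @i initially psi
    i: str; psi: str

@dataclass(frozen=True)
class Effect:        # @i if kind = k then x := t
    i: str; k: str; x: str; t: str

@dataclass(frozen=True)
class Precondition:  # @i kind = local(a) only if phi
    i: str; a: str; phi: str

@dataclass(frozen=True)
class Fairness:      # @i if necessarily phi then i.o. kind = local(a)
    i: str; phi: str; a: str

@dataclass(frozen=True)
class Frame:         # @i only L affects x
    i: str; L: tuple; x: str

def compose(pg1: List, pg2: List) -> List:
    """Pg1 (+) Pg2: a composed program is just the list of its basic parts.
    An event structure is consistent with it iff it is consistent with
    every part, i.e. S(Pg1 (+) Pg2) = S(Pg1) ∩ S(Pg2)."""
    return pg1 + pg2

counter = [Initially("i", "x = 0"), Frame("i", (("local", "inc"),), "x")]
ticker = [Effect("i", "inc", "x", "x + 1")]
print(len(compose(counter, ticker)))  # 3 basic programs
```

Because composition is just accumulation of constraints, the semantics of ⊕ as set intersection (see Section "Semantics" below) falls out for free.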

Semantics
We give semantics by associating with each program the set of event structures consistent with the program. Intuitively, a set of event structures is consistent with a distributed message automaton if each event structure in the set can be seen as an execution of the automaton. The semantics can be defined formally in Nuprl as a relation between a distributed program Pg and an event structure es. In this section, we define the consistency relation for programs and give the intuition behind these programs.
In classical logic, we give meaning to formulas using an interpretation. In the Nuprl setting, we are interested in constructive interpretations I, which can be characterized by a formula ϕ I. We can think of ϕ I as characterizing a domain Val I and the meaning of the function and predicate symbols. If I is an interpretation with domain Val I, an I-local state for i maps X i ∪ {val i} to Val I; an I-global state is a tuple of I-local states, one for each agent in AG. Thus, if s = (s 1, ..., s n) is an I-global state, then s i is i's local state in s. (Note that we previously used s to denote a local state, while here s denotes a global state. We will always make it clear whether we are referring to local or global states.)

For consistency with our later discussion of knowledge-based programs, we allow the meaning of some predicate and function symbols that appear in tests in programs to depend on the global state. We say that a function or predicate symbol is rigid if it does not depend on the global state. For example, if the domain is the natural numbers, we will want to treat +, ×, and < as rigid. However, having the meaning of a function or predicate depend on the global state is not quite as strange as it may seem. For example, we may want to talk about an array whose values are encoded in agent 1's variables x 1, x 2, and x 3. An array is just a function, so the interpretation of the function may change as the values of x 1, x 2, and x 3 change. For each nonrigid predicate symbol P and function symbol f in P ∪ F, we assume that there is a predicate symbol P+ and function symbol f+ whose arity is one more than that of P (resp., f); the extra argument is a global state. We then associate with every formula ϕ and term t that appears in a program a formula ϕ+ and term t+ in the language of Nuprl. We define ϕ+ by induction on the structure of ϕ. For example, for an atomic formula such as P(c), if P and c are rigid, then (P(c))+ is just P(c). If P and c are both nonrigid, then (P(c))+ is P+(c+(s), s), where s is a variable interpreted as a global state. We leave to the reader the straightforward task of defining ϕ+ and t+ for atomic formulas and terms. We then take (ϕ ∧ ψ)+ = ϕ+ ∧ ψ+, (¬ϕ)+ = ¬ϕ+, and (∀x ϕ)+ = ∀x ϕ+.
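The translation ϕ ↦ ϕ+ simply threads a global-state argument through the nonrigid symbols. The following sketch (our own miniature formula encoding, using nested tuples) illustrates the atomic case:

```python
# Sketch of the phi -> phi+ translation: nonrigid predicate and function
# symbols get an extra global-state argument s; rigid symbols and
# connectives are left alone. Formulas are nested tuples such as
# ("and", f1, f2), ("forall", "x", body), or an atom ("P", term, ...).

def plus(phi, nonrigid):
    op = phi[0]
    if op in ("and", "not"):
        return (op,) + tuple(plus(arg, nonrigid) for arg in phi[1:])
    if op == "forall":                       # ("forall", x, body)
        return ("forall", phi[1], plus(phi[2], nonrigid))
    # Atomic formula: a predicate symbol applied to terms.
    args = tuple(plus_term(t, nonrigid) for t in phi[1:])
    if op in nonrigid:
        return (op + "+",) + args + ("s",)   # P becomes P+(..., s)
    return (op,) + args

def plus_term(t, nonrigid):
    if isinstance(t, tuple):                 # function symbol applied to args
        args = tuple(plus_term(u, nonrigid) for u in t[1:])
        if t[0] in nonrigid:
            return (t[0] + "+",) + args + ("s",)
        return (t[0],) + args
    return t                                 # a variable: unchanged

print(plus(("P", ("c",)), {"P", "c"}))
# ('P+', ('c+', 's'), 's')  -- matches (P(c))+ = P+(c+(s), s)
print(plus(("P", ("c",)), set()))
# ('P', ('c',))             -- rigid symbols are untouched
```

The two outputs reproduce exactly the two atomic cases discussed above.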
An I-valuation V associates with each non-local variable (i.e., variable not in ∪ i∈AG X i) a value in Val I. Given an interpretation I, an I-global state s, and an I-valuation V, we take I V (ϕ)(s) to be an abbreviation for the formula (expressible in Nuprl) that says that ϕ I, together with the conjunction of atomic formulas of the form x = V(x) for all non-local variables x that appear in ϕ, x = s i(x) for variables x ∈ X i, i ∈ AG, that appear in ϕ, and a formula that says that the variable s is interpreted as the global state s, implies ϕ+. Thus, I V (ϕ)(s) holds if there is a constructive proof that the formula that characterizes I, together with the (atomic) formulas that describe V(x) and s, and a formula that says that s is represented by s, imply ϕ+. It is beyond the scope of this paper (and not necessary for what we do here) to discuss constructive proofs in Nuprl; details can be found in [Constable et al. 1986]. However, it is worth noting that, for a first-order formula ϕ, if I V (ϕ)(s) holds, then ϕ+ is true in state s with respect to the semantics of classical logic in I. The converse is not necessarily true. Roughly speaking, I V (ϕ)(s) holds if there is evidence for the truth of ϕ+ in state s (given valuation V). We may have evidence for neither ϕ+ nor ¬ϕ+. We also take I V (t)(s) to be the value v such that there is a constructive proof of I V (t = v)(s). Note that, just as we may have evidence for neither ϕ nor ¬ϕ in constructive logic, not all terms are computable in Nuprl, and I V (t)(s) may not be defined for all terms t and states s.
A formula ϕ is an i-formula in interpretation I if its meaning in I depends only on i's local state; that is, for all global states s and s′ such that s i = s′ i, I V (ϕ)(s) holds iff I V (ϕ)(s′) holds. It is easy to see that ϕ is an i-formula in all interpretations I if all the predicate and function symbols in ϕ are rigid, and ϕ does not mention variables in X j for j ≠ i and does not mention the constant symbol val j for j ≠ i. Intuitively, this is because if we have a constructive proof that ϕ holds in s with respect to valuation V, and ϕ is an i-formula, then all references to local states of agents other than i can be safely discarded from the argument to construct a proof for ϕ based solely on s i. If ϕ is an i-formula, then we sometimes abuse notation and write I V (ϕ)(s i) rather than I V (ϕ)(s). Note that the valuation V is not needed for interpreting formulas whose free variables are all local; in particular, V is not needed to interpret i-formulas. For the rest of this paper, if the valuation is not needed, we do not mention it, and simply write I(ϕ). Given a formula ϕ and term t, we can easily define Nuprl formulas i-formula(ϕ, I) and i-term(t, I) that are constructively provable if ϕ is an i-formula in I (resp., t is an i-term in I).
We define a predicate Consistent I on programs and event structures such that, intuitively, Consistent I(Pg, es) holds if the event structure es is consistent with program Pg, given interpretation I. We start with basic programs. The basic program @i initially ψ is an initialization program, which is intended to hold in an event structure es if ψ is an i-formula and i's initial local state satisfies ψ. Thus,

Consistent I(@i initially ψ, es) = def i-formula(ψ, I) ∧ I(ψ)(initstate i).

(This notation implicitly assumes that initstate i is as specified by es, according to Definition 2.1. For simplicity, we have opted for this notation instead of es.initstate i.)

We call a basic program of the form @i if kind = k then x := t an effect program. It says that, if t is an i-term, then the effect of an event e of kind k is to set x to t. We define

Consistent I(@i if kind = k then x := t, es) = def i-term(t, I) ∧ ∀e@i ∈ es. (kind(e) = k ⇒ (x after e) = I(t)(state before e)),

where we write ∀e@i ∈ es. ϕ as an abbreviation for ∀e ∈ es. agent(e) = i ⇒ ϕ. As above, the notation implicitly assumes that before and after are as specified by es. Again, this expression is an abbreviation for a formula expressible in Nuprl whose intended meaning should be clear; Consistent I(@i if kind = k then x := t, es) holds if there is a constructive proof of the formula.
We can use a program of this type to describe a message sent on a link l. For example,

@i if kind = a then msg(l) := f(val i)

says that, for all events e, f(v) is sent on link l if the kind of e is a, the local state of agent i before e is s i, and v = s i(val i).
The third type of program, @i kind = local(a) only if ϕ, is called a precondition program. It says that an event of kind a can occur only if the precondition ϕ (which must be an i-formula) is satisfied:

Consistent I(@i kind = local(a) only if ϕ, es) = def i-formula(ϕ, I) ∧ ∀e@i ∈ es. (kind(e) = local(a) ⇒ I(ϕ)(state before e)).

Note that we allow only conditions of the form kind(e) = local(a) here, not the more general condition of the form kind(e) = k allowed in effect programs. We do not allow conditions of the form kind(e) = rcv(l) because we assume that receive events are not under the control of the agent.
Standard formalizations of input-output automata (see [Lynch and Tuttle 1989]) typically assume that executions satisfy some fairness constraints. We assume here only a weak fairness constraint that is captured by the basic program @i if necessarily ϕ then i.o. kind = local(a), which we call a fairness program. Intuitively, it says that if ϕ holds from some point on, then an event with kind local(a) will eventually occur. For an event sequence with only finitely many states associated with i, we take ϕ to hold "from some point on" if ϕ holds at the last state. In particular, this means that the program cannot be consistent with an event sequence for which there are only finitely many events associated with i if ϕ holds of the last state associated with i. Define

Consistent I(@i if necessarily ϕ then i.o. kind = local(a), es) = def i-formula(ϕ, I) ∧ ∀e@i ∈ es. ((∀e′@i ∈ es. (e ⪯ i e′ ⇒ I(ϕ)(state after e′))) ⇒ ∃e′′@i ∈ es. (e ≺ i e′′ ∧ kind(e′′) = local(a))).

The last type of basic program, @i only L affects x, is called a frame program. It ensures that only events of kinds listed in L can cause changes in the value of variable x. The precise semantics depends on whether x has the form msg(l). If x does not have the form msg(l), then

Consistent I(@i only L affects x, es) = def ∀e@i ∈ es. ((x after e) ≠ (x before e) ⇒ (kind(e) ∈ L)).
If x has the form msg(l), then we must have source(l) = i. Recall that sending a message m on l is formalized by setting the value of msg(l) to m. We assume that messages are never null (i.e., m ≠ ⊥). No message is sent during event e if msg(l) after e = ⊥. If x has the form msg(l), then

Consistent I(@i only L affects msg(l), es) = def ∀e@i ∈ es. ((msg(l) after e ≠ ⊥) ⇒ (kind(e) ∈ L)).
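For a finite event structure, the consistency conditions for effect, precondition, and frame programs can be checked directly. The sketch below uses our own simplified encoding of one agent's trace (each event records its kind and its local states before and after):

```python
# Sketch: check Consistent for effect, precondition, and frame programs
# against a finite trace of one agent's events. Simplified encoding:
# each event is a dict with "kind", "before", "after" (local states as dicts).

def consistent_effect(trace, k, x, t):
    """@i if kind = k then x := t, with t evaluated on the pre-state."""
    return all(e["after"][x] == t(e["before"])
               for e in trace if e["kind"] == k)

def consistent_precondition(trace, a, phi):
    """@i kind = local(a) only if phi: phi must hold before every a-event."""
    return all(phi(e["before"])
               for e in trace if e["kind"] == ("local", a))

def consistent_frame(trace, L, x):
    """@i only L affects x: x may change only at events whose kind is in L."""
    return all(e["kind"] in L
               for e in trace if e["after"][x] != e["before"][x])

trace = [
    {"kind": ("local", "inc"), "before": {"x": 0}, "after": {"x": 1}},
    {"kind": ("local", "inc"), "before": {"x": 1}, "after": {"x": 2}},
]
print(consistent_effect(trace, ("local", "inc"), "x", lambda s: s["x"] + 1))  # True
print(consistent_precondition(trace, "inc", lambda s: s["x"] < 2))            # True
print(consistent_frame(trace, {("local", "inc")}, "x"))                       # True
```

Note what the sketch cannot check: the fairness program quantifies over what happens "from some point on", so on infinite executions it is a liveness condition, not a trace-by-trace check like the three above.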
Finally, an event structure es is said to be consistent with a distributed program Pg that is not basic if es is consistent with each of the basic programs that form Pg:

Consistent I(Pg 1 ⊕ Pg 2, es) = def Consistent I(Pg 1, es) ∧ Consistent I(Pg 2, es).

Definition 2.4: Given an interpretation I, the semantics of a program Pg is the set of event structures consistent with Pg under interpretation I. We denote by S I this semantics of programs: S I(Pg) = {es | Consistent I(Pg, es)}. We write Pg |≈ I X if Pg satisfies X with respect to interpretation I; that is, if X(es) is true for all es ∈ S I(Pg).
Note that S I(Pg 1 ⊕ Pg 2) = S I(Pg 1) ∩ S I(Pg 2). Since the Consistent I predicate is definable in Nuprl, we can formally reason in Nuprl about the semantics of programs.
A specification is a predicate on event structures. Since our main goal is to derive, from a proof that a specification X is satisfiable, a program that satisfies X, we want to rule out the trivial case where the derived program Pg has no executions, so that it vacuously satisfies the specification X.
Thus, a specification is realizable if there exists a consistent program that satisfies it, and, given an interpretation I, a program is realizable if there exists an event structure consistent with it (with respect to I). Since we reason constructively, this means that a program is realizable if we can construct an event structure consistent with the program. This requires not only constructing sequences of events, one for each agent, but all the other components of the event structure as specified in Definition 2.3, such as AG and Act.
All basic programs other than initialization and fairness programs are vacuously satisfied (with respect to every interpretation I) by the empty event structure es consisting of no events. The empty event structure is consistent with these basic programs because their semantics is defined in terms of a universal quantification over events associated with an agent. It is not hard to see that an initialization program @i initially ψ is consistent with respect to interpretation I if and only if ψ is satisfiable in I; i.e., there is some global state s such that I(ψ)(s i ) holds. For if es is an event structure with initstate i = s i , then clearly es realizes @i initially ψ.
Fair programs are realizable with respect to interpretations I where the precondition ϕ satisfies the principle of excluded middle (that is, ϕ I ⇒ Determinate(ϕ + ) is provable in Nuprl), although they are not necessarily realized by a finite event structure. To see this, note that if ϕ satisfies the principle of excluded middle in I, then either there is an I-local state s * i for agent i such that I(¬ϕ)(s * i ) holds, or I(ϕ)(s i ) holds for all I-local states s i for i. In the former case, consider an empty event structure es with domain Val I and initstate i = s * i ; it is easy to see that es is consistent with @i if necessarily ϕ then i.o. kind = local(a). Otherwise, let Act = {a}, and let es be an event structure where Act is the set of local actions, Val I is the set of values, the sequence of events associated with agent i in es is infinite, and all events associated with agent i have kind local(a). Again, it is easy to see that es is consistent with @i if necessarily ϕ then i.o. kind = local(a).
If ϕ does not satisfy the principle of excluded middle in I, then @i if necessarily ϕ then i.o. kind = local(a) may not be realizable with respect to I. This would be the case, for example, if neither I(ϕ)(s i ) nor I(¬ϕ)(s i ) holds for any local state s i .
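The finite-trace convention stated earlier (ϕ holds "from some point on" iff it holds at the last state) yields a simple decidable consistency check for fairness programs on finite prefixes. The sketch below is a toy approximation; the state representation is an assumption made for illustration.

```python
# Finite-trace approximation of @i if necessarily phi then i.o. kind = local(a)
# (a sketch; states are illustrative dictionaries).
def consistent_fair_finite(states, phi):
    """states: the finite sequence of i's local states, starting at initstate_i.

    By the convention for finite sequences, phi holds "from some point on"
    iff it holds at the last state, and then no finite trace can be
    consistent with the fairness program.
    """
    return not phi(states[-1])

phi = lambda s: s['pending']
print(consistent_fair_finite([{'pending': True}, {'pending': False}], phi))  # True
print(consistent_fair_finite([{'pending': False}, {'pending': True}], phi))  # False
```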
Note that two initialization programs may each be consistent although their composition is not. For example, if both ψ and ¬ψ are satisfiable i-formulas, then each of @i initially ψ and @i initially ¬ψ is consistent, although their composition is not. Nevertheless, all programs synthesized in this paper can be easily proven consistent. Bickford and Constable [2003] derived from the formal semantics of distributed message automata some Nuprl axioms that turn out to be useful for proving the satisfiability of a specification. We now present (a slight modification of) their axioms. The axioms have the form Pg |≈ I X, where Pg is a program and X is a specification, that is, a predicate on event structures; the axiom is sound if all event structures es consistent with program Pg under interpretation I satisfy the specification X. We write |≈ I to make clear that the program semantics is given with respect to an interpretation I. There is an axiom for each type of basic program other than frame programs, two axioms for frame programs (corresponding to the two cases in the semantic definition of frame programs), together with an axiom characterizing composition and a refinement axiom.
(Note that the right-hand side of |≈ is a specification; given an event structure es, it is true if i-formula(ψ, I) ∧ I(ψ)(initstate i ) holds in event structure es.)
Proof: This is immediate from Definitions 2.1 and 2.4, and the definition of Consistent I .

A general scheme for program synthesis
Recall that, given a specification ϕ and an interpretation I, the goal is to prove that ϕ is satisfiable with respect to I, that is, to show that ∃Pg. (Pg |≈ I ϕ) holds. We now provide a general scheme for doing this. Consider the following scheme, which we call GS:

1. Find specifications ϕ 1 , ϕ 2 , ..., ϕ n such that ∀es. (ϕ 1 (es) ∧ ϕ 2 (es) ∧ ... ∧ ϕ n (es) ⇒ ϕ(es)) is true under interpretation I.

2. Find programs Pg 1 , ..., Pg n such that Pg k |≈ I ϕ k for k = 1, ..., n.

3. Conclude that Pg 1 ⊕ ... ⊕ Pg n |≈ I ϕ.
Step 1 of GS is proved using the rules and axioms encoded in the Nuprl system; Step 2 is proved using the axioms given in Section 2.3.3. It is easy to see that GS is sound in the sense that, if we can show using GS that Pg satisfies ϕ, then Pg does indeed satisfy ϕ. We formalize this in the following proposition.
Proposition 2.7: Scheme GS is sound.
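The scheme and its soundness can be animated on a small finite model, with specifications as Python predicates and program semantics as sets of toy event structures. All names and the tuple encoding below are illustrative, not the paper's.

```python
# Independent toy run of scheme GS: event structures are tuples of event
# kinds; specifications are predicates on them (illustrative encoding).
universe = [(), ('a',), ('a', 'a'), ('a', 'b')]

phi  = lambda es: len(es) >= 1 and es[0] == 'a'      # target specification
phi1 = lambda es: all(k in ('a', 'b') for k in es)   # sub-specification 1
phi2 = lambda es: len(es) >= 1 and es[0] != 'b'      # sub-specification 2

# Step 1: check (phi1 ∧ phi2) ⇒ phi over the universe.
step1 = all(phi(es) for es in universe if phi1(es) and phi2(es))

# Step 2: programs Pg_k with Pg_k |≈ phi_k, modelled here directly by
# their semantics S_I(Pg_k).
S_pg1 = {es for es in universe if phi1(es)}
S_pg2 = {es for es in universe if phi2(es)}

# Step 3: the composition's semantics is the intersection, so every
# structure consistent with Pg1 ⊕ Pg2 satisfies phi.
S_comp = S_pg1 & S_pg2
print(step1 and all(phi(es) for es in S_comp))  # True: GS is sound here
```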

Example
As an example of a specification that we use later, consider the run-based specification Fair I (ϕ, t, l), where i ≠ j, l is a link with source(l) = i and dest(l) = j, ϕ is an i-formula, and t is an i-term. Fair I (ϕ, t, l) is a conjunction of a safety condition and a liveness condition. The safety condition asserts that if a message is received on link l, then it is the term t interpreted with respect to the local state of the sender, and that ϕ, evaluated with respect to the local state of the sender, holds. The liveness condition says that, if (there is a constructive proof that) condition ϕ is enabled from some point on in an infinite event sequence, then eventually a message sent on l is delivered. (Thus, the specification imposes a weak fairness requirement.) We define Fair I (ϕ, t, l) as follows:

Fair I (ϕ, t, l) = def λes. i-formula(ϕ, I) ∧ i-term(t, I) ∧
(∀e′ ∈ es. (kind(e′) = rcv(l) ⇒ I(ϕ)(state before send(e′)) ∧ val(e′) = I(t)(state before send(e′)))) ∧
((∃e@i ∈ es ∧ ∀e@i ∈ es. ∃e′ ⪰ i e. I(¬ϕ)(state after e′)) ∨
(¬(∃e@i ∈ es) ∧ I(¬ϕ)(initstate i )) ∨
(∃e@i ∈ es ∧ ∀e@i ∈ es. ∃e′ ∈ es. kind(e′) = rcv(l) ∧ send(e′) ⪰ i e)).
We are interested in this fairness specification only in settings where communication satisfies a (strong) fairness requirement: if infinitely often an agent sends a message on a link l, then infinitely often some message is delivered on l. We formalize this assumption with a specification FairSend(l) asserting exactly this condition. We explain below why we need communication to satisfy strong fairness rather than weak fairness (which would require only that if a message is sent infinitely often, then a message is eventually delivered).
For an arbitrary action a, let Fair-Pg(ϕ, t, l, a) be the program for agent i composed of four basic programs: a precondition program, an effect program, a frame program, and a fairness program. The first basic program says that i takes action a only if ϕ holds. The second basic program says that the effect of agent i taking action a is for t to be sent on link l; in other words, a is i's action of sending t to agent j. The third program ensures that only action a has the effect of sending a message to agent j. With this program, if agent j (the receiver) receives a message from agent i (the sender), then it must be the case that the value of the message is t and that ϕ was true with respect to i's local state when it sent the message to j. The last basic program ensures that if ϕ holds from some point on in an infinite event sequence, then eventually an event of kind local(a) occurs; thus, i must send the message t infinitely often. The fairness requirement on communication ensures that if an event of kind local(a) where i sends t occurs infinitely often, then t is received infinitely often.
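The safety part of Fair I (ϕ, t, l) that this program enforces can be checked on a concrete trace. In the sketch below, the trace representation (each receive event carries the sender's pre-send local state) is an assumption we make for illustration; it is not the paper's encoding.

```python
# Sketch: checking the safety conjunct of Fair_I(phi, t, l) on a finite trace.
def safe_rcv(events, phi, t, link):
    """Every rcv(l) event must carry val = t evaluated at the sender's
    pre-send state, and phi must hold at that state."""
    for e in events:
        if e['kind'] == ('rcv', link):
            pre = e['sender_pre']  # sender's local state before the send
            if not phi(pre) or e['val'] != t(pre):
                return False
    return True

phi = lambda s: s['ready']   # the precondition, as a state predicate
t   = lambda s: s['data']    # the i-term, as a state function
trace = [
    {'kind': ('rcv', 'l'), 'sender_pre': {'ready': True, 'data': 7}, 'val': 7},
    {'kind': ('local', 'a')},
]
print(safe_rcv(trace, phi, t, 'l'))  # True: the received value matches t
```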
Lemma 2.8: For all actions a, Fair-Pg(ϕ, t, l, a) satisfies λes. FairSend(l)(es) ⇒ Fair I (ϕ, t, l)(es) with respect to all interpretations I such that ϕ is an i-formula and t is an i-term in I.

Proof:
We present the key points of the proof here, omitting some details for ease of exposition. We follow the scheme GS. We assume that i-formula(ϕ, I) and i-term(t, I) both hold.
We want to find formulas ψ 1 (es), ..., ψ 4 (es) that follow from the four basic programs that make up Fair-Pg(ϕ, t, l, a) and together imply ϕ 1 (es) ∧ ϕ 2 (es) ∧ ϕ 3 (es). It will simplify matters to reason directly about the events where a message is sent on link l. We thus assume that, for all events e, agent i sends a message on link l during event e iff kind(e) = local(a); this assumption is expressed by the formula ψ 1 (es). It is easy to check that (ψ 1 (es) ∧ ψ 2 (es)) ⇒ ϕ 1 (es) is true, where ψ 2 (es) is ∀e@i ∈ es. (kind(e) = local(a)) ⇒ I(ϕ)(state before e).
Similarly, using the axiom of event structures given in Section 2.2 that says that the value of a receive event e on l is the value of msg(l) after send(e), it is easy to check that (ψ 1 (es) ∧ ψ 3 (es)) ⇒ ϕ 2 (es) is true, where ψ 3 (es) is ∀e@i ∈ es. (kind(e) = local(a)) ⇒ msg(l) after e = I(t)(state before e).
Step 2. The remaining formulas follow by Ax-sends (applied to the frame program @i only [a] affects msg(l)) and by Ax-fair. By the soundness of GS (Proposition 2.7), Fair-Pg(ϕ, t, l, a) satisfies λes. FairSend(l)(es) ⇒ Fair I (ϕ, t, l)(es) with respect to I.

Lemma 2.9: For all interpretations I such that ϕ is an i-formula and t is an i-term in I, if ϕ satisfies the principle of excluded middle with respect to I, then Fair-Pg(ϕ, t, l, a) is consistent with respect to I.
Proof: This argument is almost identical to the one showing that fair programs are realizable with respect to interpretations where the precondition satisfies the principle of excluded middle. Since ϕ satisfies the principle of excluded middle with respect to I, either there exists an I-local state s * i for agent i such that I(¬ϕ)(s * i ) holds, or I(ϕ)(s i ) holds for all I-local states s i for i. In the former case, let es be an empty event structure such that i, j ∈ AG, l ∈ Links, a ∈ Act, and initstate i = s * i . In the latter case, choose es with AG and Links as above, let Act = {a, b}, and let i and j alternate sending and receiving the message t on link l, where these events have kind a and b, respectively.
Corollary 2.10: For all interpretations I such that ϕ is an i-formula and t is an i-term in I, if ϕ satisfies the principle of excluded middle with respect to I, then the specification Fair I (ϕ, t, l) is realizable with respect to I.
Proof: This is immediate from Lemmas 2.8 and 2.9, and from the fact that the event structure constructed in Lemma 2.9 satisfies FairSend(l).
The notion of strong communication fairness is essential for the results above: Fair I (ϕ, t, l) may not be realizable if we assume that communication satisfies only a weak notion of fairness, which says that if a message is sent from some point on, then it is eventually received. This is so essentially because our programming language replaces standard "if condition then take action" programs with weaker variants that ensure only that, if a condition holds from some point on, then eventually some action is taken.
We now show that the composition of Fair-Pg(ϕ, t, l, a) and Fair-Pg(ϕ′, t′, l′, a′) for distinct links l and l′ satisfies the corresponding fairness specifications.
Proof: Suppose a ≠ a′. We again use scheme GS.
Step 2. By Lemma 2.8, Fair-Pg(ϕ, t, l, a) |≈ I ϕ 1 and Fair-Pg(ϕ′, t′, l′, a′) |≈ I ϕ 2 .

Finally, we can show that Fair-Pg(ϕ, t, l, a) ⊕ Fair-Pg(ϕ′, t′, l′, a′) is consistent, where l is a link from i to j, l′ is a link from i′ to j′, and l ≠ l′ (so that we may have i = i′ or j = j′, but not both), and thus the specification λes. (FairSend(l)(es) ∧ FairSend(l′)(es)) ⇒ (Fair I (ϕ, t, l)(es) ∧ Fair I (ϕ′, t′, l′)(es)) is realizable with respect to I, if both ϕ and ϕ′ satisfy the principle of excluded middle with respect to I.

Lemma 2.12: For all interpretations I such that ϕ is an i-formula, t is an i-term, ϕ′ is an i′-formula, and t′ is an i′-term in I, if both ϕ and ϕ′ satisfy the principle of excluded middle with respect to I, then, for all distinct actions a and a′ and all distinct links l and l′, Fair-Pg(ϕ, t, l, a) ⊕ Fair-Pg(ϕ′, t′, l′, a′) is consistent with respect to I.
Proof: If I(¬ϕ ∧ ¬ϕ′)(s) holds for some global state s, then let es be the empty event structure such that initstate i = s i and initstate i′ = s i′ . Clearly es is consistent with Fair-Pg(ϕ, t, l, a) ⊕ Fair-Pg(ϕ′, t′, l′, a′). Otherwise, let es be an event structure with domain Val I , i, j, i′, j′ ∈ AG, and l, l′ ∈ Links, consisting of an infinite sequence of states such that if I(ϕ) holds for infinitely many states, then i sends t on link l infinitely often; if I(ϕ′) holds for infinitely many states, then i′ sends t′ on link l′ infinitely often; if t is sent on l infinitely often, then j receives it on link l infinitely often; and if t′ is sent on l′ infinitely often, then j′ receives it on l′ infinitely often. It is straightforward to construct such an event structure es. Again, it should be clear that es is consistent with Fair-Pg(ϕ, t, l, a) ⊕ Fair-Pg(ϕ′, t′, l′, a′).

Adding knowledge to Nuprl
We now show how knowledge-based programs can be introduced into Nuprl.

Consistent cut semantics for knowledge
We want to extend basic programs to allow for tests that involve knowledge. For simplicity, we take AG = {1, 2, ..., n}. As before, we start with a finite set P ∪ F of predicates and functions, and close off under conjunction, negation, and quantification over non-local variables; but now, in addition, we also close off under application of the temporal operators □ and ♦, and the epistemic operators K i , i = 1, ..., n, one for each process i.
We again want to define a consistency relation in Nuprl for each program. To do that, we first need to review the semantics of knowledge. Typically, semantics for knowledge is given with respect to a pair (r, m) consisting of a run r and a time m, assumed to be the time on some external global clock (that none of the processes necessarily has access to) [Fagin, Halpern, Moses, and Vardi 1995]. In event structures, there is no external notion of time. Fortunately, Panangaden and Taylor [1992] give a variant of the standard definition with respect to what they call asynchronous runs, which are essentially identical to event structures. We just apply their definition in our framework.
The truth of formulas is defined relative to a pair (Sys, c), consisting of a system Sys (i.e., a set of event structures) and a consistent cut c of some event structure es ∈ Sys, where a consistent cut c in es is a set of events in es closed under the causality relation. Recall from Section 2.2 that this amounts to c satisfying the constraint that, if e′ is an event in c and e is an event in es that precedes e′ (i.e., e ≺ e′), then e is also in c. We write c ∈ Sys if c is a consistent cut in some event structure in Sys.
Traditionally, a knowledge formula K i ϕ is interpreted as true at a point (r, m) if ϕ is true regardless of i's uncertainty about the whole system at (r, m). Since we interpret formulas relative to a pair (Sys, c), we need to make precise i's uncertainty at such a pair. For the purposes of this paper, we assume that each agent keeps track of all the events that have occurred and involved him (which corresponds to the assumption that agents have perfect recall); we formalize this assumption below. Even in this setting, agents can be uncertain about what events have occurred in the system, and about their relative order. Consider, for example, the scenario in the left panel of Figure 1: agent i receives a message from agent j (event e 2 ), then sends a message to agent k (e 3 ), then receives a second message from agent j (e 6 ), and then performs an internal action (e 7 ). Agent i knows that send(e 2 ) occurred prior to e 2 and that send(e 6 ) occurred prior to e 6 . However, i considers it possible that, after receiving his message, agent k sent a message to j that was received by j before e 7 (see the right panel of Figure 1).
Figure 1: Two consistent cuts that cannot be distinguished by agent i.
In general, as argued by Panangaden and Taylor, agent i considers possible any consistent cut in which he has recorded the same sequence of events. To formalize this intuition, we define equivalence relations ∼ i , i = 1, ..., n, on consistent cuts by taking c ∼ i c′ if i's history is the same in c and c′. Given two consistent cuts c and c′, we say that c ⪯ c′ if, for each process i, process i's history in c is a prefix of process i's history in c′. Relative to (Sys, c), agent i considers possible any consistent cut c′ ∈ Sys such that c′ ∼ i c.
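These definitions are easy to animate. The sketch below, loosely based on the scenario of Figure 1, uses an explicit table of causal predecessors (an encoding invented here for illustration) and checks that two cuts differing only in a send that i has not observed are ∼ i -equivalent:

```python
# Toy encoding of consistent cuts and the ~_i relation.
prec = {'e2': ['e1'], 'e3': ['e2'], 'e6': ['e5'], 'e7': ['e6', 'e3']}
agent = {'e1': 'j', 'e2': 'i', 'e3': 'i', 'e5': 'j', 'e6': 'i', 'e7': 'i'}
order = ['e1', 'e2', 'e3', 'e5', 'e6', 'e7']  # one linearisation of causality

def is_cut(c):
    """A consistent cut is a set of events closed under causal predecessors."""
    return all(p in c for e in c for p in prec.get(e, []))

def history(c, i):
    """Agent i's record of its own events in cut c, in order."""
    return [e for e in order if e in c and agent[e] == i]

c1 = {'e1', 'e2', 'e3'}
c2 = {'e1', 'e2', 'e3', 'e5'}  # j has also sent its second message
print(is_cut(c1) and is_cut(c2))             # both are consistent cuts
print(history(c1, 'i') == history(c2, 'i'))  # c1 ~_i c2: i cannot tell them apart
```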
Since the semantics of knowledge given here implicitly assumes that agents have perfect recall, we restrict to event structures that also satisfy this assumption. So, for the remainder of this paper, we restrict to systems where local states encode histories; that is, we restrict to systems Sys such that, for all event structures es, es′ ∈ Sys, if e is an event in es, e′ is an event in es′, agent(e) = agent(e′) = i, and state before e = state before e′, then i has the same history in both es and es′. For simplicity, we guarantee this by assuming that each agent i has a local variable history i ∈ X i that encodes its history. Thus, we take initstate i (history i ) = ⊥ and, for all events e associated with agent i, we have (s after e)(history i ) = (s before e)(history i ) • e. It immediately follows that in two global states where i has the same local state, i must have the same history. Let System be the set of all such systems.
Recall that events associated with the same agent are totally ordered. This means that we can associate with every consistent cut c a global state s c : for each agent i, s c i is i's local state after the last event e i associated with i in c occurs. Since local states encode histories, it follows that if s c = s c′ , then c = c′. In the following, we assume that all global states in a system Sys have the form s c for some consistent cut c.
Nuprl is rich enough that epistemic and modal operators can be defined within Nuprl. Thus, to interpret formulas with epistemic operators and temporal operators, we just translate them to formulas that do not mention them. Since the truth of an epistemic formula depends not just on a global state, but on a pair (Sys, c), where the consistent cut c can be identified with a global state in some event structure in Sys, the translated formulas will need to include variables that, intuitively, range over systems and global states. To make this precise, we expand the language so that it includes rigid binary predicates CC and ⪯, a rigid binary function ls, and rigid constants s and Sys. Intuitively, s represents a global state, Sys represents a system, CC(x, y) holds if y is a consistent cut (i.e., global state) in system x, ls(x, i) is i's local state in global state x, and ⪯ represents the ordering on consistent cuts defined above.
For every formula ϕ that does not mention modal operators, we take ϕ t = ϕ. The translations of the epistemic and temporal operators are defined in terms of CC, ⪯, and ls; for example, (K i ϕ) t asserts that ϕ t holds at every consistent cut of Sys in which i has the same local state. Given an interpretation I, let I′ be the interpretation that extends I by adding to ϕ I formulas characterizing Sys, s, CC, ls, and ⪯ appropriately. That is, the formulas force Sys to represent a set of event structures, s to be a consistent cut in one of these event structures, and so on. These formulas are all expressible in Nuprl. More specifically, we restrict here to constructive systems, that is, systems that can be defined in Nuprl. A constructive system Sys can be characterized by a formula ϕ Sys in Nuprl; ϕ Sys has a free variable Sys ranging over systems such that ϕ Sys holds under interpretation I′ and valuation V iff V(Sys) = Sys. We now define a predicate I′ V (ϕ) on systems and global states by simply taking I′ V (ϕ)(Sys, s) to hold iff ϕ I′ , together with the conjunction of atomic formulas of the form x = V(x) for all non-local variables x that appear in ϕ, x = s i (x) for variables x ∈ X i , i ∈ AG, that appear in ϕ, s = s, and ϕ Sys , imply (ϕ t ) + (where, in going from ϕ t to (ϕ t ) + , we continue to use the constant s). Thus, we basically reduce a modal formula to a non-modal formula, and evaluate it in system Sys using I V .
Just as in the case of non-epistemic formulas, the valuation V is not needed to interpret formulas whose only free variables are in ∪ i∈AG X i . For such formulas, we typically write I′(ϕ)(Sys, s) instead of I′ V (ϕ)(Sys, s). We can also define i-formulas and i-terms, but now whether a formula is an i-formula or a term is an i-term depends not only on the interpretation, but also on the system. A formula ϕ is an i-formula in interpretation I′ and system Sys if, for all states s, s′ in Sys with s i = s′ i , we have I′(ϕ)(Sys, s) = I′(ϕ)(Sys, s′); i-terms are defined analogously. We write this as i-formula(ϕ, I, Sys) and i-term(t, I, Sys), respectively. If ϕ is an i-formula and t is an i-term in I and Sys for all systems Sys, then we simply write i-formula(ϕ, I) and i-term(t, I). For an i-formula, we often write I′ V (ϕ)(Sys, s i ) rather than I′ V (ϕ)(Sys, s). Note that a Boolean combination of epistemic formulas whose outermost knowledge operators are K i is guaranteed to be an i-formula in every interpretation, as is a formula that has no nonrigid functions or predicates and does not mention K j for j ≠ i. The former claim is immediate from the following lemma.
Lemma 3.1: For all formulas ϕ, systems Sys, and global states s and s′ in Sys, if s i = s′ i , then I′(K i ϕ)(Sys, s) = I′(K i ϕ)(Sys, s′).

Proof: Follows from the observation that if we have a proof in Nuprl that K i ϕ holds given I′, Sys, and s ∈ Sys, then we can rewrite the proof so that it mentions only s i rather than s. Thus, we actually have a proof that K i ϕ holds in all states s′ ∈ Sys such that s′ i = s i .
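Whether a predicate is an i-formula can be decided by brute force on a small finite state space; the following sketch (the two-agent state space is invented for illustration) checks exactly the condition in the definition above:

```python
# Sketch: deciding by enumeration whether a predicate on global states
# depends only on component i, i.e. is an "i-formula" (toy state space).
from itertools import product

STATES = list(product([0, 1], repeat=2))  # global state = (s_1, s_2)

def is_i_formula(pred, i):
    """True iff pred(s) == pred(t) whenever s and t agree on component i."""
    return all(pred(s) == pred(t)
               for s in STATES for t in STATES if s[i] == t[i])

print(is_i_formula(lambda s: s[0] == 1, 0))     # mentions only s_1
print(is_i_formula(lambda s: s[0] == s[1], 0))  # also depends on s_2
```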

Knowledge-based programs and specifications
In this section, we show how we can extend the notions of program and specification presented in Section 2 to knowledge-based programs and specifications. This allows us to employ the large body of tactics and libraries already developed in Nuprl to synthesize knowledge-based programs from knowledge-based specifications.

Syntax and semantics
Define knowledge-based message automata just as we defined message automata in Section 2.3, except that we now allow arbitrary epistemic formulas in tests.If we want to emphasize that the tests can involve knowledge, we talk about knowledge-based initialization, precondition, effect, and fairness programs.For the purposes of this paper, we take knowledge-based programs to be knowledge-based message automata.
We give semantics to knowledge-based programs by first associating with each knowledge-based program a function from systems to systems. Let (Pg kb ) t be the result of replacing every formula ϕ in Pg kb by ϕ t . Note that (Pg kb ) t is a standard program, with no modal formulas. Given an interpretation I and a system Sys, let I(Sys) be the result of adding to ϕ I the formula ϕ Sys . Now we can apply the semantics of Section 2.3.2 and get the system S I(Sys) ((Pg kb ) t ). In general, the system S I(Sys) ((Pg kb ) t ) will be different from the system Sys. A system Sys represents a knowledge-based program Pg kb (with respect to interpretation I) if it is a fixed point of this mapping, that is, if S I(Sys) ((Pg kb ) t ) = Sys. Following Fagin et al. [1995, 1997], we take the semantics of a knowledge-based program Pg kb to be the set of systems that represent it.

Definition 3.2:
A knowledge-based program semantics is a function associating with a knowledge-based program Pg kb and an interpretation I the systems that represent Pg kb with respect to I. As observed by Fagin et al. [1995, 1997], it is possible to construct knowledge-based programs that are represented by no systems, exactly one system, or more than one system. However, there exist conditions (which are often satisfied in practice) that guarantee that a knowledge-based program is represented by exactly one system. Note that, in particular, standard programs, when viewed as knowledge-based programs, are represented by a unique system; indeed, S kb I (Pg) = {S I (Pg)}. Thus, we can view S kb I as extending S I . A (standard) program Pg implements the knowledge-based program Pg kb with respect to interpretation I if S I (Pg) represents Pg kb with respect to I, that is, if S I(S I (Pg)) ((Pg kb ) t ) = S I (Pg). In other words, by interpreting the tests in Pg kb with respect to the system generated by Pg, we get back the program Pg.
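The possibility of zero, one, or several representing systems can be reproduced in a toy model in which "systems" are sets of two labelled runs and the agent's test is evaluated against the candidate system. The representation below is entirely illustrative, not the paper's construction.

```python
# Toy fixed-point semantics for knowledge-based programs: enumerate the
# candidate systems Sys and keep those with S_{I(Sys)}((Pg_kb)^t) = Sys.
from itertools import chain, combinations

RUNS = {'acts', 'idle'}

def subsets(s):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(sorted(s), n) for n in range(len(s) + 1))]

def represents(test):
    """Systems represented by the program 'act iff test(Sys) holds';
    the test plays the role of a knowledge condition evaluated on Sys."""
    fixed = []
    for sys in subsets(RUNS):
        generated = frozenset({'acts'}) if test(sys) else frozenset({'idle'})
        if generated == sys:
            fixed.append(sys)
    return fixed

# "act iff the agent knows it acts": two representing systems
print(len(represents(lambda sys: sys == frozenset({'acts'}))))  # 2
# "act iff the agent does NOT know it acts": no representing system
print(len(represents(lambda sys: sys != frozenset({'acts'}))))  # 0
```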

Knowledge-based specifications
Recall that a standard specification is a predicate on event structures. Following [Fagin, Halpern, Moses, and Vardi 1997], we take a knowledge-based specification (kb specification from now on) to be a predicate on systems. As for standard basic programs, it is not difficult to show that knowledge-based precondition, effect, and frame programs are trivially consistent: we simply take Sys to consist of only one event structure es with no events. A knowledge-based initialization program is realizable iff ϕ I ∧ ψ t is satisfiable. Finding sufficient conditions for fair knowledge-based programs to be realizable is nontrivial. We cannot directly translate the constructions sketched for the standard case to the knowledge-based case because, at each step in the construction (when an event structure has been only partially constructed), we would have to argue that a certain knowledge-based fact holds when interpreted with respect to an entire system and an entire event structure. However, in the next section, the knowledge-based programs used in the argument for the STP (which do include fairness requirements) are shown to be realizable.

Axioms
We now consider the extent to which we can generalize the axioms characterizing (standard) programs presented in Section 2.3 to knowledge-based programs.
Basic knowledge-based message automata other than knowledge-based precondition and fairness programs satisfy axioms analogous to their standard counterparts. The only difference is that we now view the specifications as predicates on systems, not on event structures. For example, the axiom corresponding to Ax-init is

Ax-initK : @i initially ψ |≈ I λSys. i-formula(ψ, I, Sys) ∧ ∀es ∈ Sys. I(ψ)(Sys, initstate i ).
(Note that here, just as in the definition of Ax-init, for simplicity, we write initstate i instead of es.initstate i . Since ψ is constrained to be an i-formula, it makes sense to talk about I(ψ)(Sys, initstate i ) instead of I(ψ)(Sys, s) for a global state s with s i = initstate i .) The knowledge-based analogues of axioms Ax-cause, Ax-affects, and Ax-sends are denoted Ax-causeK, Ax-affectsK, and Ax-sendsK, respectively, and are identical to the standard versions of these axioms. The knowledge-based counterparts of Ax-if and Ax-fair now involve epistemic preconditions, which are interpreted with respect to a system:

Ax-fairK : @i if necessarily ϕ then i.o. kind = local(a) |≈ I λSys. i-formula(ϕ, I, Sys) ∧ ∀es ∈ Sys. ((∃e@i ∈ es ∧ ∀e@i ∈ es. ∃e′ ⪰ i e. I(¬ϕ)(Sys, state after e′) ∨ kind(e′) = local(a)) ∨ (¬(∃e@i ∈ es) ∧ I(¬ϕ)(Sys, initstate i (es)))).
Proof: Since the proofs for all the axioms are similar in spirit, we prove only that Ax-ifK holds for all interpretations I. Fix an interpretation I. Let Pg kb be the program @i kind = local(a) only if ϕ, where ϕ is an i-formula. Let Y kb be the corresponding instance of Ax-ifK: λSys. i-formula(ϕ, I, Sys) ∧ ∀es ∈ Sys. ∀e@i ∈ es. (kind(e) = local(a)) ⇒ I(ϕ)(Sys, state before e).
By Definition 3.3, Pg kb |≈ I Y kb is true if and only if, for all systems Sys ∈ S kb I (Pg kb ), Y kb (Sys) holds. That is, for all systems Sys such that S I(Sys) ((Pg kb ) t ) = Sys, the following holds: ∀es ∈ Sys. i-formula(ϕ, I, Sys) ∧ ∀e@i ∈ es. (kind(e) = local(a)) ⇒ I(ϕ)(Sys, state before e).
Let Sys be a system such that S I(Sys) ((Pg kb ) t ) = Sys. By Definition 2.4, all event structures in Sys are consistent with the program (Pg kb ) t with respect to interpretation I(Sys). Recall that (Pg kb ) t is the (standard) program @i kind = local(a) only if ϕ t , where I(Sys)(ϕ t )(s) = I(ϕ)(Sys, s). We can thus apply axiom Ax-if and conclude that the following holds for all event structures es consistent with (Pg kb ) t with respect to I(Sys) (i.e., for all es ∈ Sys): i-formula(ϕ t , I(Sys)) ∧ ∀e@i ∈ es. (kind(e) = local(a)) ⇒ I(Sys)(ϕ t )(state before e).
The first conjunct says that, for all global states s and s′ in Sys, if s i = s′ i , then I(Sys)(ϕ t )(s) = I(Sys)(ϕ t )(s′), which is equivalent to saying that I(ϕ)(Sys, s) = I(ϕ)(Sys, s′), that is, i-formula(ϕ, I, Sys) holds. The second conjunct is equivalent to ∀e@i ∈ es. (kind(e) = local(a)) ⇒ I(ϕ)(Sys, state before e), by the definition of ϕ t and I(Sys). Thus, Y kb (Sys) holds under interpretation I.
The proof of Lemma 3.4 involves only unwinding the definition of satisfiability for knowledge-based specifications and the application of simple refinement rules, already implemented in Nuprl. In general, proofs of epistemic formulas will also involve reasoning in the logic of knowledge. Sound and complete axiomatizations of the (nonintuitionistic) first-order logic of knowledge are well known (see [Fagin, Halpern, Moses, and Vardi 1995] for an overview) and can be formalized in Nuprl in a straightforward way. This is encouraging, since it supports the hope that Nuprl's inference mechanism is powerful enough to deal with knowledge specifications without further essential additions.
Note that Ax-⊕K is not included in Lemma 3.4.That is because it does not always hold, as the following example shows.

Example 3.5:
Let Y kb i , i = 1, 2, be knowledge-based specifications involving the variables x i ∈ X i , and let I = ∅. Let Pg i , i = 1, 2, be the standard program for agent i such that S I (Pg i ) consists of all the event structures such that x i = i at all times. Since Pg i places no constraints on x 2−i , it is straightforward to prove that Pg i |≈ I Y kb 2−i , for i = 1, 2. On the other hand, S I (Pg 1 ⊕ Pg 2 ) consists of all the event structures where x i = i at all times, for i = 1, 2, so Pg 1 ⊕ Pg 2 |≈ I ¬Y kb 1 ∧ ¬Y kb 2 .

Example
Recall from Section 2.4 that the specification FairSend(l) ⇒ Fair I (ϕ, t, l) is satisfied by the program Fair-Pg(ϕ, t, l, a), for all actions a. We now consider a knowledge-based version of this specification.
If ϕ is an i-knowledge-based formula and t is an i-term in I, define Fair kb I (ϕ, t, l) = def λSys. ∀es ∈ Sys. Fair I(Sys) (ϕ t , t, l)(es). For example, Fair kb I (K i ϕ, t, l) says that every message received on l is given by the term t interpreted at the local state of the sender i, and that i must have known fact ϕ when it sent this message on l; furthermore, if from some point on i knows that ϕ holds, then eventually a message is received on l.
As in Section 2.4, we assume that message communication satisfies a strong fairness condition. The knowledge-based version of the condition FairSend(l) simply associates with each system Sys the specification FairSend(l); that is, FairSend kb (l) is just λSys. ∀es ∈ Sys. FairSend(l)(es).

Lemma 3.6: For all interpretations I such that ϕ is an i-formula and t is an i-term in I, and all actions a, we have that Fair-Pg(ϕ, t, l, a) |≈ I FairSend kb (l) ⇒ Fair kb I (ϕ, t, l).
The proof is similar in spirit to that of Lemma 3.4; by supplying a system Sys as an argument to the specification, we essentially reduce to the situation in Lemma 2.8. We leave the details to the reader.
We can also prove the following analogue of Lemma 2.11.
Lemma 3.7: For all interpretations I such that ϕ is an i-formula, ϕ′ is a j-formula, t is an i-term, and t′ is a j-term in I, all distinct links l and l′, and all distinct actions a and a′, we have that Fair-Pg(ϕ, t, l, a) ⊕ Fair-Pg(ϕ′, t′, l′, a′) |≈ I λSys. (FairSend kb (l)(Sys) ∧ FairSend kb (l′)(Sys)) ⇒ (Fair kb I (ϕ, t, l)(Sys) ∧ Fair kb I (ϕ′, t′, l′)(Sys)).

The sequence transmission problem (STP)
In this section, we give a more detailed example of how a program satisfying a knowledge-based specification X can be extracted from X using the Nuprl system. We do the extraction in two stages. In the first stage, we use Nuprl to prove that the specification is satisfiable. The proof proceeds by refinement: at each step, a rule or tactic (i.e., a sequence of rules invoked under a single name) is applied, and new subgoals are generated; when there are no more subgoals to be proved, the proof is complete. The proof is automated, in the sense that subgoals are generated by the system upon tactic invocation. From the proof, we can extract a knowledge-based program Pg kb that satisfies the specification. In the second stage, we find standard programs that implement Pg kb . This two-stage process has several advantages:

• A proof carried out to derive Pg kb does not rely on particular assumptions about how knowledge is gained. Thus, it is potentially more intuitive and elegant than a proof based on certain implementation assumptions.
• By definition, if Pg^kb satisfies a specification, then so do all its implementations.
• This methodology gives us a general technique for deriving standard programs that implement the knowledge-based program, by finding weaker (non-knowledge-based) predicates that imply the knowledge preconditions in Pg^kb.
We illustrate this methodology by applying it to one of the problems that has received considerable attention in the context of knowledge-based programming, the sequence transmission problem (STP).

Synthesizing a knowledge-based program for STP
The STP involves a sender S that has an input tape with a (possibly infinite) sequence X = X(0), X(1), . . . of bits, and wants to transmit X to a receiver R; R must write this sequence on an output tape Y. (Here we assume that X(n) is a bit only for simplicity; our analysis of the STP does not essentially change once we allow X(n) to be an element of an arbitrary constructive domain.) A solution to the STP must satisfy two conditions:
1. (safety) at all times, the sequence Y of bits written by R is a prefix of X, and
2. (liveness) every bit X(n) is eventually written by R on the output tape.
Halpern and Zuck [1992] give two knowledge-based programs that solve the STP, and show that a number of standard programs in the literature, such as Stenning's [1976] protocol, the alternating bit protocol [1969], and Aho, Ullman, and Yannakakis's algorithms [1982], are all particular instances of these programs.
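The two correctness conditions above can be stated directly as checks on the receiver's output tape. A minimal sketch (the helper names are ours; `X` is passed as a function from indices to bits):

```python
def prefix_safe(Y, X):
    """Safety: the output tape Y written so far is a prefix of X."""
    return all(Y[i] == X(i) for i in range(len(Y)))

def live_up_to(Y, X, n):
    """Bounded stand-in for liveness: bits X(0)..X(n-1) have been
    written (true liveness is a property of infinite runs)."""
    return len(Y) >= n and prefix_safe(Y, X)
```

These predicates are only finite approximations: liveness proper quantifies over all n and all runs.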
If messages cannot be lost, duplicated, reordered, or corrupted, then S could simply send the bits in X to R in order. However, we are interested in solutions to the STP in contexts where communication is not reliable. It is easy to see that if undetectable corruption is allowed, then the STP is not solvable. Neither is it solvable if all messages can be lost. Thus, following [Halpern and Zuck 1992], we assume (a) that all corruptions are detectable and (b) a strong fairness condition: for any given link l, if infinitely often a message is sent on l, then infinitely often some message is delivered on l. We formalize strong fairness by restricting to systems where FairSend(l) holds for all links l.
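The fairness condition can also be modelled operationally. The sketch below is a hypothetical channel model of our own, not the paper's formal FairSend(l): it drops messages at random but forces a delivery after a bounded number of consecutive drops, so infinitely many sends yield infinitely many deliveries.

```python
import random

class FairLossyChannel:
    """Lossy link satisfying a strong-fairness property: at least one
    of every `bound` consecutive sends is delivered."""

    def __init__(self, drop_prob=0.7, bound=5, seed=0):
        self.rng = random.Random(seed)
        self.drop_prob = drop_prob
        self.bound = bound
        self.dropped_in_a_row = 0
        self.queue = []  # delivered but not yet received, in order

    def send(self, msg):
        # Drop at random, but never `bound` times in a row.
        if self.dropped_in_a_row + 1 >= self.bound or self.rng.random() >= self.drop_prob:
            self.queue.append(msg)
            self.dropped_in_a_row = 0
        else:
            self.dropped_in_a_row += 1

    def recv(self):
        """Return the next delivered message, or None if none is pending."""
        return self.queue.pop(0) if self.queue else None
```

This particular model never reorders or corrupts messages; only loss is modelled, which is the case relevant to the derivation that follows.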
The safety and liveness conditions for the STP are run-based specifications. As argued by Fagin et al. [1997], it is often better to think in terms of knowledge-based specifications for this problem. The real goal of the STP is to get the receiver to know the bits. Writing K_R(X(n)) as an abbreviation for K_R(X(n) = 0) ∨ K_R(X(n) = 1), we really want to satisfy the knowledge-based specification ϕ^kb_stp, which requires that, for every n, eventually K_R(X(n)) holds. This is the specification we now synthesize.
Since we are assuming fairness, S can ensure that R learns the nth bit by sending it sufficiently often. Thus, S can ensure that R learns the nth bit if, infinitely often, either S sends X(n) or S knows that R knows X(n). (Note that once S knows that R knows X(n), S will continue to know this, since local states encode histories.) We can enforce this by using an appropriate instantiation of Fair^kb. Let c_S be a (nonrigid) constant that, intuitively, represents the smallest n such that S does not know that R knows X(n), if such an n exists. That is, we want the following formula to be true:

∃n.¬K_S K_R(X(n)) ⇒ (¬K_S K_R(X(c_S)) ∧ ∀m < c_S. K_S K_R(X(m))).

Let ϕ_S be the knowledge-based formula that holds at a consistent cut c if and only if there exists a smallest n such that, at c, S does not know that R knows X(n):

ϕ_S = ∃n.(¬K_S K_R(X(n)) ∧ ∀m < n. K_S K_R(X(m))).

Let t_S be the term ⟨c_S, X(c_S)⟩. Let l_SR denote the communication link from S to R. Now consider the knowledge-based specification Fair^kb_I(ϕ_S, t_S, l_SR). Fair^kb_I(ϕ_S, t_S, l_SR) holds in a system Sys if (1) whenever R receives a message from S, the message is a pair of the form ⟨n, X(n)⟩; (2) at the time S sent this message to R, S knew that R knew the first n elements of the sequence X, but S did not know whether R knew X(n); and (3) R is guaranteed either to eventually receive the message ⟨n, X(n)⟩ or to eventually know X(n).
How does the sender learn which bits the receiver knows? One possibility is for S to receive from R a request to send X(n). This can be taken by S as a signal that R knows all the preceding bits. We can ensure that S gets this information by again using an appropriate instantiation of Fair^kb. Define c_R to be a (nonrigid) constant that, intuitively, represents the smallest n such that R does not know X(n), if such an n exists. In other words, we want the following formula to be true:

∃n.¬K_R(X(n)) ⇒ (¬K_R(X(c_R)) ∧ ∀m < c_R. K_R(X(m))).

We take ϕ_R to be the knowledge-based formula which says that there exists a smallest n such that R does not know X(n) (or, equivalently, such that c_R = n holds), and we take t_R to be c_R. Finally, let l_RS denote the communication link from R to S. Fair^kb_I(ϕ_R, t_R, l_RS) implies that whenever S receives a message n from R, it is the case that, at the time R sent this message, R knew the first n elements of X, but not X(n). Note that, for all n, S is guaranteed to eventually receive the message n unless R eventually knows X(n).
We can now use the system to verify our informal claim that we have refined the initial specification ϕ^kb_stp. That is, the system can prove

Fair^kb_I(ϕ_S, t_S, l_SR) ∧ Fair^kb_I(ϕ_R, c_R, l_RS) ⇒ ϕ^kb_stp.

No new techniques are needed for this proof: we simply unwind the definitions of the semantics of knowledge formulas and of the fairness specifications, and proceed with a standard proof by induction on the smallest n such that R does not know X(n).
It follows from Lemma 3.7 that Fair^kb_I(ϕ_S, t_S, l_SR) ∧ Fair^kb_I(ϕ_R, c_R, l_RS) is satisfied by the combination of two simple knowledge-based programs, assuming that message communication on links l_SR and l_RS satisfies the strong fairness conditions FairSend^kb(l_SR) and FairSend^kb(l_RS). That is, for any two distinct actions a_S and a_R, the following is true:

Fair-Pg(ϕ_S, t_S, l_SR, a_S) ⊕ Fair-Pg(ϕ_R, c_R, l_RS, a_R) |≈_I (FairSend^kb(l_SR) ∧ FairSend^kb(l_RS)) ⇒ (Fair^kb_I(ϕ_S, t_S, l_SR) ∧ Fair^kb_I(ϕ_R, c_R, l_RS)).

As explained in Section 2.4, FairSend^kb(l_SR) ∧ FairSend^kb(l_RS) says that if infinitely often a message is sent on l_SR, then infinitely often a message is received on l_SR, and, similarly, if infinitely often a message is sent on l_RS, then infinitely often a message is received on l_RS; as mentioned at the beginning of this section, we restrict to systems where these conditions are met. Furthermore, it is not difficult to show that we can use simple initialization clauses to guarantee that the constraints on the interpretation of c_S and c_R are satisfied. From the definition of Fair-Pg in Section 3.3, it follows that Pg^kb_S(ϕ_S, t_S, l_SR, a_S) can be written as a composition of the corresponding clauses. Using the program notation of Fagin et al.
[1995], Pg^kb_S(ϕ_S, t_S, l_SR, a_S) is essentially semantically equivalent to the following collection of programs, one for each value n:

if ∀m < n. K_S K_R(X(m)) ∧ ¬K_S K_R(X(n)) do send_SR(⟨n, X(n)⟩).

In both of these programs, S takes the same action under the same circumstances, and with the same effects on its local state. That is, given a run r (i.e., a sequence of global states) consistent with the collection of knowledge-based programs, we can construct an event structure es consistent with Pg^kb_S(ϕ_S, t_S, l_SR, a_S) such that the sequence of local states of S in es, with stuttering eliminated, is the same as in r. The converse is also true. More precisely, in a run r consistent with the collection of knowledge-based programs, at each point in time, either S knows that R knows the value of X(n) for all n, or there exists a smallest n such that ¬K_S K_R(X(n)) holds. In the first case, S does nothing; in the second case, S sends ⟨n, X(n)⟩ on l_SR. Similarly, in an event structure es consistent with Pg^kb_S(ϕ_S, t_S, l_SR, a_S), if S knows that R knows X(n) for all n, then S does nothing; if not, then it is impossible for S to know that R knows the first n bits, but never know that R knows X(n), without S eventually taking an a_S action with value ⟨n, X(n)⟩. This means that for each run r consistent with the collection of knowledge-based programs, the event structure es in which S starts from the same initial state as in r and performs action a_S as soon as it is enabled has the same sequence of local states of S as r. For each event structure es consistent with Pg^kb_S(ϕ_S, t_S, l_SR, a_S), in the run r of global states in es with stuttering eliminated, S takes action a_S as soon as it is enabled; thus, r is consistent with the collection of knowledge-based programs.
Similarly, Pg^kb_R(ϕ_R, c_R, l_RS, a_R) is essentially semantically equivalent to the following collection of programs, one for each value n:

if ∀m < n. K_R(X(m)) ∧ ¬K_R(X(n)) do send_RS(n).

Thus, the derived program is essentially one of the knowledge-based programs considered by Halpern and Zuck [1992]. This is not surprising, since our derivation followed much the same reasoning as that of Halpern and Zuck. However, note that we did not first give a knowledge-based program and then verify that it satisfied the specification. Rather, we derived the knowledge-based programs for the sender and receiver from the proof that the specification was satisfiable. And, while Nuprl required "hints" as to what to prove, the key ingredients of the proof, namely, the specification Fair^kb_I(ϕ, t, l) and the proof that Fair-Pg(ϕ, t, l, a) realizes it, were already in the system, having been used in other contexts. This suggests that we may be able to apply similar techniques to derive programs satisfying other specifications in communication systems with only weak fairness guarantees.
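Read operationally, each collection of programs is a guarded loop: find the least n whose knowledge test fails, and act on it. A rough Python transcription, with the knowledge tests K_S K_R(X(n)) and K_R(X(n)) left abstract as predicates (all names here are ours; `limit` is a finite demo bound, whereas the real programs range over all n):

```python
def least_failing(knows, limit):
    """Least n < limit with knows(n) false, or None if all hold."""
    for n in range(limit):
        if not knows(n):
            return n
    return None

def sender_step(knows_SR, X, send_SR, limit):
    """One step of Pg_S: if some K_S K_R(X(n)) fails, send <n, X(n)>
    for the least such n; otherwise do nothing."""
    n = least_failing(knows_SR, limit)
    if n is not None:
        send_SR((n, X(n)))
    return n

def receiver_step(knows_R, send_RS, limit):
    """One step of Pg_R: request the least n for which K_R(X(n)) fails."""
    n = least_failing(knows_R, limit)
    if n is not None:
        send_RS(n)
    return n
```

How the abstract predicates are realized is exactly the question addressed in the next subsection.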

Synthesis of standard programs for STP
This takes care of the first stage of the synthesis process. We now want to find a standard program that implements the knowledge-based program. As discussed by Halpern and Zuck [1992], the exact standard program that we use depends on the underlying assumptions about the communication system.
Here we sketch an approach to finding such a standard program.
The first step is to identify the exact properties of knowledge that are needed for the proof. This can be done by inspecting the proof to see which properties of the knowledge operators K_S and K_R are used. The idea is then to replace formulas involving the knowledge operators by standard (non-epistemic) formulas that have the relevant properties.
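For instance, under Stenning-style assumptions, purely local, non-epistemic tests can play the role of the knowledge operators. The pairing below is an illustrative assumption of ours, not the paper's definition: R "knows" X(n) once it has written tape position n, and S "knows that R knows" X(n) once some request numbered above n has arrived (since R requests n only after learning bits 0..n-1).

```python
def standard_tests(received_max_request, output_len):
    """Build non-epistemic stand-ins for the two knowledge tests,
    phrased over local state only.

    received_max_request -- largest request number S has received (-1 if none)
    output_len           -- number of positions R has written on its tape
    """
    phi_S = lambda n: received_max_request > n   # stands in for K_S K_R(X(n))
    phi_R = lambda n: n < output_len             # stands in for K_R(X(n))
    return phi_S, phi_R
```

Each stand-in is sound (it implies the corresponding knowledge formula under the protocol's assumptions), which is what the proof requires.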
Suppose that φ^kb_S is a formula that mentions the function X, has a free variable m, and is guaranteed to be an S-formula in all interpretations I and systems Sys. (Recall that, as noted just before Lemma 3.1, there are simple syntactic conditions that guarantee that a formula is an i-formula for all I and Sys.) Roughly speaking, we can think of φ^kb_S as corresponding to K_S K_R(X(m)). Let ϕ^kb_S be an abbreviation of

∃n.(¬φ^kb_S[m/n] ∧ ∀k < n. φ^kb_S[m/k]).

Similarly, suppose that φ^kb_R is a formula that mentions X, has a free variable m, and is guaranteed to be an R-formula in all interpretations I; let ϕ^kb_R be an abbreviation of

∃n.(¬φ^kb_R[m/n] ∧ ∀k < n. φ^kb_R[m/k]).

Thus, ϕ^kb_S and ϕ^kb_R are the analogues of ϕ_S and ϕ_R in Section 4.1. While ϕ_S is a formula that says that there is a least n such that K_S K_R(X(n)) does not hold, ϕ^kb_S says that there is a least n such that φ^kb_S[m/n] does not hold. Similarly, while ϕ_R says that there is a least n such that K_R(X(n)) does not hold, ϕ^kb_R says that there is a least n such that φ^kb_R[m/n] does not hold. We also use constants c̃_S and c̃_R that are analogues of c_S and c_R: φ^kb_S plays the same role in the definition of c̃_S as K_S K_R(X(m)) played in the definition of c_S, and φ^kb_R plays the same role in the definition of c̃_R as K_R(X(m)) played in the definition of c_R. Thus, we take c̃_S to be a constant that represents the least n such that φ^kb_S[m/n] does not hold (that is, we want ∃n.¬φ^kb_S[m/n] ⇒ (¬φ^kb_S[m/c̃_S] ∧ ∀k < c̃_S. φ^kb_S[m/k]) to be true), and define t̃_S as the pair ⟨c̃_S, X(c̃_S)⟩. Similarly, we take c̃_R to be a constant that represents the least n such that φ^kb_R[m/n] does not hold (that is, we want ∃n.¬φ^kb_R[m/n] ⇒ (¬φ^kb_R[m/c̃_R] ∧ ∀k < c̃_R. φ^kb_R[m/k]) to be true). Let ϕ^kb_stp(φ^kb_R) be the specification that results by using φ^kb_R instead of K_R in ϕ^kb_stp. We prove the goal ϕ^kb_stp(φ^kb_R) by refinement: at each step, a rule (or tactic) of Nuprl is applied, and a number of subgoals (typically easier to prove) are generated; the rule gives a mechanism for constructing a proof of the goal from proofs of the subgoals. Some of the subgoals cannot be further refined in an obvious manner; this is the case, for example, for the simple conditions on φ^kb_S or φ^kb_R.

To explain the next condition, recall that φ^kb_R is meant to represent K_R(X(m)). With this interpretation, I(∀k ≤ n. φ^kb_R[m/k])(Sys, state before send(e_S)) says that R knows the first n bits before it sends a message to S. We would like it to be the case that, just as with the knowledge-based derivation, when S receives R's message, S knows that R knows the nth bit. Since we think of φ^kb_S as representing K_S K_R(X(m)), this amounts to a condition relating φ^kb_S and φ^kb_R. With this background, we can describe the last condition. Intuitively, it says that if n is the least value for which φ^kb_S fails when S sends a message to R, then φ^kb_R holds for n upon message delivery. We denote the conjunction of these conditions as ψ^kb(φ^kb_S, φ^kb_R).

In the extracted program, S and R maintain counters x_S and x_R, and a run proceeds as follows. R starts by sending message 0 to S; since communication is fair, eventually S receives this message, and x_S is set to 0; S starts sending ⟨0, X(0)⟩ to R; since communication is fair, eventually R receives this message, so R sets x_R to 1, stops sending message 0 to S, and starts sending message 1 to S. It is not difficult to show that, for all values n, there is a time when x_R is set to n, which triggers R to send message n to S. S eventually receives this message and starts sending ⟨n, X(n)⟩ to R. This, in turn, ensures that R eventually receives this message and thus learns X(n). Note that the program Pg(true, ⟨x_S, X(x_S)⟩, l_SR, a_S) ⊕ Pg(true, x_R, l_RS, a_R) is realizable. We have thus extracted a standard program that realizes the STP specification. In fact, the program turns out to be essentially equivalent to Stenning's [1976] protocol.
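The extracted protocol can be simulated end to end. The sketch below is our own rendering of the x_S / x_R counter scheme over randomly lossy links; all variable and helper names are ours, and random loss with a fixed seed stands in for the formal fairness condition:

```python
import random

def run_stp(X, n_bits, max_steps=20000, seed=0):
    """Stenning-style simulation: S resends <x_S, X(x_S)>, R resends the
    request x_R; each send is lost with probability 1/2. Returns R's tape Y."""
    rng = random.Random(seed)
    l_SR, l_RS = [], []      # in-flight messages on the two links
    x_S, x_R = 0, 0          # sender's and receiver's counters
    Y = []                   # receiver's output tape
    for _ in range(max_steps):
        if rng.random() < 0.5:                    # R's request survives the link
            l_RS.append(x_R)
        while l_RS:                               # S processes delivered requests
            x_S = max(x_S, l_RS.pop(0))
        if x_S < n_bits and rng.random() < 0.5:   # S's data message survives
            l_SR.append((x_S, X(x_S)))
        while l_SR:                               # R processes delivered data
            n, v = l_SR.pop(0)
            if n == x_R:                          # expected bit: write and advance
                Y.append(v)
                x_R += 1
        if x_R == n_bits:                         # R has learned every bit
            break
    return Y
```

Stale data messages (n ≠ x_R) are simply ignored, which is what keeps Y a prefix of X throughout the run.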
The key point here is that by replacing the knowledge tests with weaker predicates that imply them and do not explicitly mention knowledge, we can derive standard programs that implement the knowledge-based program. We believe that other standard implementations of the knowledge-based program can be derived in a similar way.

Conclusion and Future Work
We have shown that the mechanism for synthesizing programs from specifications in Nuprl can be extended to knowledge-based programs and specifications. Moreover, we have shown that axioms much in the spirit of those used for standard programs can be used to synthesize kb programs as well. We applied this methodology to the analysis of the sequence transmission problem and showed that the kb programs proposed by Halpern and Zuck for solving the STP can be synthesized in Nuprl. We also sketched an approach for deriving standard programs that implement the kb programs that solve the STP. A feature of our approach is that the extracted standard programs are close to the type of pseudocode in which designers write their programs, and can be translated into running code.
There has been work on synthesizing both standard programs and kb programs from kb specifications. In the case of synchronous systems with only one process, van der Meyden and Vardi [1998] provide a necessary and sufficient condition for a certain type of kb specification to be realizable, and show that, when it holds, a program can be extracted that satisfies the specification. Still assuming a synchronous setting, but this time allowing multiple agents, Engelhardt, van der Meyden, and Moses [1998, 2001] propose a refinement calculus in which one can start with an epistemic and temporal specification and use refinement rules that eventually lead to standard formulas. The refinement rules annotate formulas with preconditions and postconditions, which allow programs to be synthesized from the leaf formulas in a straightforward way. A search up the tree generated in the refinement process suffices to build a program that satisfies the specification. The extracted programs are objects of a programming language that allows concurrent and sequential execution, variable assignments, loops, and conditional statements.
We see our method for synthesizing programs from kb specifications as an alternative to this approach. As in the Engelhardt et al. approach, the programs extracted in Nuprl are close to realistic programming languages. Arguably, distributed I/O message automata are general enough to express most of the distributed programs of interest when communication is done by message passing. Our approach has the additional advantage of working in asynchronous settings.
A number of questions, both theoretical and applied, remain open. While synthesis of distributed programs from epistemic and temporal specifications is undecidable in general, recent results [van der Meyden and Wilke 2005] show that, under certain assumptions about the setting in which agents communicate, the problem is decidable. It would be worth understanding the extent to which these assumptions apply to our setting. Arguably, to prove a result of this type, we need a better understanding of how the properties of a number of kb programs relate to the properties of their composition; this would also allow us to prove stronger composition rules than the one presented in Section 3.2. As we said, we believe that the approach we sketched for extracting a standard program from the kb specification for the STP can be extended into a general methodology. As pointed out by Engelhardt et al., the key difficulty in extracting standard programs from abstract specifications lies in coming up with good standard tests to replace the abstract tests in a program. However, it is likely that, by reducing the complexity of the problem and focusing only on certain classes of kb specifications, "good" standard tests can be more easily identified. We plan to investigate heuristics for finding such tests and to implement them as tactics in Nuprl.

Definition 3.3: A knowledge-based specification is a predicate on System. A knowledge-based program Pg^kb satisfies a knowledge-based specification Y^kb with respect to I, written Pg^kb |≈_I Y^kb, if all the systems representing Pg^kb with respect to I satisfy Y^kb, that is, if the following formula holds: ∀Sys ∈ S^kb_I(Pg^kb). Y^kb(Sys). The knowledge-based specification Y^kb is realizable with respect to I if there exists a (standard) program Pg such that S_I(Pg) ≠ ∅ and Pg |≈_I Y^kb (i.e., Y^kb(S_I(Pg)) is true).

Definition 2.3: An event structure is a tuple es = ⟨AG, Links, source, dest, Act, {X_i}_{i∈AG}, Val, {initstate_i}_{i∈AG}, E, agent, send, first, {≺_i}_{i∈AG}, ≺⟩, where AG is a set of agents; Links is a set of links, with source : Links → AG and dest : Links → AG; Act is a set of actions; X_i is a set of variables for agent i ∈ AG such that, for all links l ∈ Links, msg(l) ∈ X_i if i = source(l); Val is a set of values; initstate_i is the initial local state of agent i ∈ AG; E is a set of events for agents AG, with kinds Kind = Links ∪ Act and domain Val; the functions agent, send, and first are defined as explained above; the ≺_i are local precedence relations; and ≺ is a causal order such that the following axioms, all expressible in Nuprl, are satisfied.

The new theorem states that, under suitable conditions on φ^kb_S and φ^kb_R, ϕ^kb_stp(φ^kb_R) is satisfiable if both Fair^kb_I(ϕ^kb_S, t̃_S, l_SR) and Fair^kb_I(ϕ^kb_R, c̃_R, l_RS) are satisfiable. We now explain the conditions placed on the predicates φ^kb_S and φ^kb_R. One condition is that φ^kb_R be stable, that is, once true, it stays true:

Stable(φ^kb_R) =def λSys. ∀es ∈ Sys. ∀e_R@R ∈ es. ∀n. I(φ^kb_R[m/n])(Sys, state before e_R) ⇒ I(φ^kb_R[m/n])(Sys, state after e_R).

Assuming Stable(φ^kb_R) allows us to prove ϕ^kb_R by induction on the least index n such that ¬φ^kb_R[m/n] holds. To allow us to carry out a case analysis on whether φ^kb_R holds, we also assume that φ^kb_R satisfies the principle of the excluded middle; that is, we assume that Determinate(φ^kb_R) =def Determinate(∀n.(φ^kb_R[m/n])^t). For similar reasons, we also restrict φ^kb_S to being stable and determinate; that is, we require that Stable(φ^kb_S) and Determinate(φ^kb_S) both hold. The third condition we impose establishes a connection between φ^kb_S and φ^kb_R, and ensures that, for all values n, if φ^kb_S[m/n] holds, then eventually φ^kb_R[m/n] will also hold:

Implies(φ^kb_S, φ^kb_R) =def λSys. ∀es ∈ Sys. ∀n. ∀e_S@S ∈ es. I(φ^kb_S[m/n])(Sys, state before e_S) ⇒ ∃e_R ≻ e_S@R ∈ es. I(φ^kb_R[m/n])(Sys, state after e_R).