Forward Analysis for WSTS, Part II: Complete WSTS

We describe a simple, conceptual forward analysis procedure for ∞-complete WSTS S. This computes the so-called clover of a state. When S is the completion of a WSTS X, the clover in S is a finite description of the downward closure of the reachability set of X. We show that such completions are ∞-complete exactly when X is an ω²-WSTS, a new robust class of WSTS. We show that our procedure terminates in more cases than the generalized Karp-Miller procedure on extensions of Petri nets and on lossy channel systems. We characterize the WSTS on which our procedure terminates as those that are clover-flattable. Finally, we apply this to well-structured counter systems.

The starting point of this paper and of its first part [FG09] is our desire to derive similar algorithms working forwards, namely algorithms computing the cover ↓Post*(↓s) of s.
While the cover allows one to decide coverability as well, by testing whether t ∈ ↓Post*(↓s), it can also be used to decide U-boundedness, i.e., to decide whether there are only finitely many states t in the upward-closed set U such that s (≥; →*) t. (U-boundedness generalizes the boundedness problem, which is the instance of U-boundedness where U is the entire set of states.) No backward algorithm can decide this. In fact, U-boundedness is undecidable in general, e.g., on lossy channel systems [CFP96]. So the reader should be warned that computing the cover is not possible for general WSTS. Despite this, the known forward algorithms are felt to be more efficient than backward procedures in general: e.g., for lossy channel systems, although the backward procedure always terminates, only a (necessarily non-terminating) forward procedure is implemented in the TReX tool [ABJ98]. Another argument in favor of forward procedures is the following: for depth-bounded processes, a fragment of the π-calculus, the backward algorithm of [AČJT00] is not applicable when the maximal depth of configurations is not known in advance because, in this case, the predecessor configurations are not effectively computable [WZH10]. But the Expand, Enlarge and Check forward algorithm of [GRvB07], which operates on complete WSTS, solves coverability even though the depth of the process is not known a priori [WZH10].
State of the Art. Karp and Miller [KM69] proposed an algorithm, for Petri nets, which computes a finite representation of the cover, i.e., of the downward closure of the reachability set of a Petri net. Finkel [Fin87, Fin90] introduced the framework of WSTS and generalized the Karp-Miller procedure to a class of WSTS. This was achieved by building a non-effective completion of the set of states, and by replacing ω-accelerations of increasing sequences of states (in Petri nets) by least upper bounds. In [EN98, Fin90] a variant of this generalization of the Karp-Miller procedure was studied; but no guarantee was given that the cover could be represented finitely. In fact, no effective finite representations of downward-closed sets were given in [Fin90]. Finkel [Fin93] modified the Karp-Miller algorithm to reduce the size of the intermediate computed trees. Geeraerts et al. [GRvB07] recently proposed a weaker acceleration, which avoids some possible under-approximations of [Fin93]. Emerson and Namjoshi [EN98] take into account the labeling of WSTS and consequently adapt the generalized Karp-Miller algorithm to model-checking. They assume the existence of a compatible dcpo, and generalize the Karp-Miller procedure to the case of broadcast protocols (which are equivalent to transfer Petri nets). However, termination is then not guaranteed [EFM99], and in fact neither is the existence of a finite representation of the cover. We solved the latter problem in [FG09].
Abdulla, Collomb-Annichini, Bouajjani and Jonsson proposed a forward procedure for lossy channel systems [ACABJ04], using downward-closed regular languages as symbolic representations. Ganty, Geeraerts, Raskin and Van Begin [GRvB06b, GRvB06a] proposed a forward procedure for solving the coverability problem for WSTS equipped with an effective adequate domain of limits, or equipped with a finite set D used as a parameter to tune the precision of an abstract domain. Both solutions ensure that every downward-closed set has a finite representation. Abdulla et al. [ACABJ04] applied this framework to Petri nets and lossy channel systems. Abdulla, Deneux, Mahata and Nylén proposed a symbolic framework for dealing with downward-closed sets for Timed Petri nets [ADMN04a].
Our Contribution. First, we define a complete WSTS as a WSTS S whose well-ordering is also a continuous dcpo (a dcpo is a directed complete partial ordering). This allows us to design a conceptual procedure Clover_S that looks for a finite representation of the downward closure of the reachability set, i.e., of the cover [Fin90]. We call such a finite representation a clover (for closure of cover). This clearly separates the fundamental ideas from the data structures used in implementing Karp-Miller-like algorithms. Our procedure also terminates in more cases than the well-known (generalized) Karp-Miller procedure [EN98, Fin90]. We establish the main properties of clovers in Section 3, and use them to prove Clover_S correct, notably, in Section 5.
Second, we characterize the complete WSTS on which Clover_S terminates. These are exactly the ones that have a (continuous) flattening with the same clover. This establishes a surprising relationship with the theory of flattening [BFLS05]. This result (Theorem 5.21), together with its corollary on covers, rather than clovers (Theorem 5.26), is the main achievement of this paper.
Third, building on our theory of completions [FG09], we characterize those WSTS whose completion is a complete WSTS in the sense above. They are exactly the ω²-WSTS, i.e., those whose state space is ω²-wqo (a wqo is a well quasi-ordering), as we show in Section 4. All naturally occurring WSTS are in fact ω²-WSTS. We shall also explain why this study is important: despite the fact that Clover_S cannot terminate on all inputs, the fact that S is an ω²-WSTS will ensure progress, i.e., that every opportunity of accelerating a loop will eventually be taken by Clover_S.
Finally, we apply our framework of complete WSTS to counter systems in Section 6. We show that affine counter systems may be completed into ∞-complete WSTS iff the domains of the monotonic affine functions are upward-closed.

Preliminaries
2.1. Posets, Dcpos. We borrow from theories of order, as used in model-checking [AČJT00, FS01], and also from domain theory [AJ94, GHK+03]. A quasi-ordering ≤ is a reflexive and transitive relation on a set X. It is a (partial) ordering iff it is antisymmetric.
A set X with a partial ordering ≤ is a poset (X, ≤), or just X when ≤ is clear. If X is merely quasi-ordered by ≤, then the quotient X/≡, where x ≡ y iff x ≤ y and y ≤ x, is ordered by the relation induced by ≤ on equivalence classes. So there is not much difference in dealing with quasi-orderings or partial orderings, and we shall essentially be concerned with the latter. The upward closure ↑E of a subset E of X is {x ∈ X | ∃y ∈ E, y ≤ x}; the downward closure ↓E is defined dually. A subset E of X is upward-closed if and only if E = ↑E. Downward-closed sets are defined similarly. A basis of a downward-closed (resp. upward-closed) set E is a subset A such that E = ↓A (resp. E = ↑A); E has a finite basis iff A can be chosen finite.
A quasi-ordering is well-founded iff it has no infinite strictly descending chain x_0 > x_1 > ... > x_i > ... An antichain is a set of pairwise incomparable elements. A quasi-ordering is well iff it is well-founded and has no infinite antichain; equivalently, from any infinite sequence x_0, x_1, ..., x_i, ..., one can extract an infinite ascending chain x_{i_0} ≤ x_{i_1} ≤ ... ≤ x_{i_k} ≤ ..., with i_0 < i_1 < ... < i_k < ... While wqo stands for well-quasi-ordered set, we abbreviate well posets as wpos.
An upper bound x ∈ X of E ⊆ X is such that y ≤ x for every y ∈ E. The least upper bound (lub) of a set E, if it exists, is written lub(E). An element x of E is maximal (resp. minimal) in E iff no element of E is strictly above (resp. strictly below) x. Write Max E (resp. Min E) for the set of maximal (resp. minimal) elements of E.
A directed subset of X is any non-empty subset D such that every pair of elements of D has an upper bound in D. Chains, i.e., totally ordered subsets, and one-element sets are examples of directed subsets. A dcpo is a poset in which every directed subset has a least upper bound. For any subset E of a dcpo X, let Lub(E) = {lub(D) | D directed subset of E}. Clearly, E ⊆ Lub(E); Lub(E) can be thought of as E plus all limits of elements of E.
The way-below relation ≪ on a dcpo X is defined by x ≪ y iff, for every directed subset D such that y ≤ lub(D), there is a z ∈ D such that x ≤ z. Note that x ≪ y implies x ≤ y, and that x′ ≤ x ≪ y ≤ y′ implies x′ ≪ y′. Write ↓↓E = {y ∈ X | ∃x ∈ E, y ≪ x}, and ↓↓x = ↓↓{x}. X is continuous iff, for every x ∈ X, ↓↓x is a directed subset, and has x as least upper bound.
When ≤ is a well partial ordering that also turns X into a dcpo, we say that X is a directed complete well order, or dcwo. We shall be particularly interested in continuous dcwos.
A subset U of a dcpo X is (Scott-)open iff U is upward-closed, and for any directed subset D of X such that lub(D) ∈ U, some element of D is already in U. A map between dcpos is continuous iff it preserves least upper bounds of directed subsets; preserving lubs of increasing chains only is all we require when we define accelerations, but general continuity is more natural in proofs. We won't discuss this any further: the two notions coincide when X is countable, which will always be the case for the state spaces X we are interested in, where states should be representable on a Turing machine, hence at most countably many.
The closed sets are the complements of open sets. Every closed set is downward-closed. On a dcpo, the closed subsets are the subsets B that are both downward-closed and inductive, i.e., such that Lub(B) = B. An inductive subset of X is none other than a sub-dcpo of X.
The closure cl(A) of A ⊆ X is the smallest closed set containing A. This should not be confused with the inductive closure Ind(A) of A, which is obtained as the smallest inductive subset B containing A. In general, ↓A ⊆ Lub(↓A) ⊆ Ind(↓A) ⊆ cl(A), and all inclusions can be strict. Consider X = N^k_ω, where k ∈ N, and N_ω denotes N with a new top element ω added, with m ≤ ω for every m ∈ N. Iterating Lub on the downward closure ↓A of a suitable subset A yields a strictly increasing chain of subsets; all of them are contained in Ind(↓A) = N^k_ω, which coincides with cl(A) here. It may also be the case that Ind(↓A) is strictly contained in cl(A): consider the set X of all pairs (i, m) with i ∈ {0, 1}, m ∈ N, plus a new element ω, ordered by (i, m) ≤ (j, n) iff i = j and m ≤ n, and (i, m) ≤ ω for all (i, m) ∈ X, and let A = {(0, m) | m ∈ N}. Then Ind(↓A) = A ∪ {ω}, but the latter is not even downward-closed, so it is strictly smaller than cl(A); in fact cl(A) is the whole of X.
All this nitpicking is irrelevant when X is a continuous dcpo and A is downward-closed in X. In this case, indeed, Lub(A) = Ind(A) = cl(A). This is well-known, see e.g. [FG09, Proposition 3.5], and will play an important role in our constructions. As a matter of fact, the equality Lub(A) = cl(A), in the particular case of continuous dcpos, is required for lub-accelerations to ever reach the closure of the set of states that are reachable in a transition system.

Well-Structured Transition Systems.
A transition system is a pair S = (S, →) of a set S, whose elements are called states, and a transition relation → ⊆ S × S. We write s → s′ for (s, s′) ∈ →. Let →* be the transitive and reflexive closure of the relation →. We write Post_S(s) = {s′ ∈ S | s → s′} for the set of immediate successors of the state s. The reachability set of a transition system S = (S, →) from an initial state s_0 is Post*_S(s_0) = {s ∈ S | s_0 →* s}. We shall be interested in effective transition systems. Intuitively, a transition system (S, →) is effective iff one can compute the set of successors Post_S(s) of any state s. We shall take this to imply that Post_S(s) is finite, and each of its elements computable, although one could imagine that Post_S(s) be described differently, say as a regular expression.
Formally, one needs to find a representation of the states s ∈ S. A representation map is any surjective map r : E → S from some subset E of N to S. If e ∈ E is such that r(e) = s, then one says that e is a code for the state s.
An effective transition system is a 4-tuple (S, →, r, post), where (S, →) is a transition system, r : E → S is a representation map, and post : E → Pfin(E) is a computable map such that, for every code e, r⟦post(e)⟧ = Post_S(r(e)). Here we write r⟦A⟧ for the image {r(a) | a ∈ A} of the set A by r, and Pfin(E) for the set of finite subsets of E. A computable map from E to Pfin(E) is by definition a partial recursive map post : N → Pfin(N) that is defined on all elements of E, and such that post(e) ∈ Pfin(E) for all e ∈ E.
For reasons of readability, we shall make an abuse of language, and say that the pair (S, →) is itself an effective transition system in this case, leaving the representation map r and the post function implicit.
An ordered transition system is a triple S = (S, →, ≤) where (S, →) is a transition system and ≤ is a partial ordering on S. We say that (S, →, ≤) is effective if (S, →) is effective and ≤ is decidable. This is again an abuse of language: formally, an effective ordered transition system is a 6-tuple (S, →, ≤, r, post, ⊑) where (S, →, ≤) is an ordered transition system, (S, →, r, post) is an effective transition system, and ⊑ is a decidable relation on E such that e ⊑ e′ iff r(e) ≤ r(e′). By decidable on E, we mean that ⊑ is a partial recursive map from N × N to the set of Booleans, which is defined on E × E at least.
We say that S = (S, →, ≤) is monotonic (resp. strictly monotonic) iff for all s, s′, s_1 ∈ S such that s → s′ and s_1 ≥ s (resp. s_1 > s), there exists an s_1′ ∈ S such that s_1 →* s_1′ and s_1′ ≥ s′ (resp. s_1′ > s′). S is strongly monotonic iff for all s, s′, s_1 ∈ S such that s → s′ and s_1 ≥ s, there exists an s_1′ ∈ S such that s_1 → s_1′ and s_1′ ≥ s′.
Finite representations of Post*_S(s), e.g., as Presburger formulae or finite automata, usually don't exist, even for monotonic transition systems (not even speaking of their being computable). However, the cover Cover_S(s) = ↓Post*_S(↓s) (= ↓Post*_S(s) when S is monotonic) will be much better behaved. Note that being able to compute the cover allows one to decide coverability: s (≥; →*; ≥) t iff t ∈ Cover_S(s). In most cases we shall encounter, it will also be decidable whether a finitely represented cover is finite, or whether it meets a given upward-closed set U in only finitely many points. Therefore boundedness (is Post*_S(s) finite?) and U-boundedness (is Post*_S(s) ∩ U finite?) will be decidable, too. An ordered transition system S = (S, →, ≤) is a Well Structured Transition System (WSTS) iff S is monotonic and (S, ≤) is a wpo. This is our object of study.
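To make the role of a finite representation concrete, here is a minimal sketch of ours (not from the paper): once the cover's closure is given as the downward closure of a finite set, coverability reduces to a finite number of comparisons. States live in the hypothetical space N²_ω, with ω encoded as float('inf'):

```python
OMEGA = float('inf')

def leq(s, t):
    """Componentwise ordering on N_omega^k."""
    return all(a <= b for a, b in zip(s, t))

def coverable(t, clover):
    """t is coverable iff t lies below some element of the finite set 'clover',
    since the closure of the cover is the downward closure of that set."""
    return any(leq(t, c) for c in clover)

clover = [(1, OMEGA), (3, 2)]        # a hypothetical finite representation
print(coverable((1, 1000), clover))  # True: (1, 1000) <= (1, omega)
print(coverable((2, 3), clover))     # False
```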
For strictly monotonic WSTS, it is also possible to decide the boundedness problem, with the help of the Finite Reachability Tree (FRT) [Fin90]. However, the place-boundedness problem (i.e., deciding whether a given place can contain an unbounded number of tokens) remains undecidable for transfer Petri nets [DFS98], which are strictly monotonic WSTS, while it is decidable for Petri nets. It is decided with the help of a richer structure than the FRT, the Karp-Miller tree. The set of labels of the Karp-Miller tree is a finite representation of the cover.
We will consider transition systems that are functional, i.e., defined by a finite set of transition functions. This is, as in [FG09], partly for reasons of simplicity; but our Clover_S procedure (Section 5), and already the technique of accelerating loops (Definition 3.3), depend on the considered transition system being functional.
Formally, a functional transition system (S, →_F) is a labeled transition system where the transition relation →_F is defined by a finite set F of partial functions f : S → S, in the sense that for all s, s′ ∈ S, s →_F s′ iff s′ = f(s) for some f ∈ F. If additionally a partial ordering ≤ is given, a map f : S → S is partial monotonic iff dom f is upward-closed and f(x) ≤ f(y) for all x, y ∈ dom f with x ≤ y. An ordered functional transition system is an ordered transition system S = (S, →_F, ≤) where F consists of partial monotonic functions. Such a system is always strongly monotonic. A functional WSTS is an ordered functional transition system where ≤ is a well-ordering.

A functional transition system (S, →_F) is effective if every f ∈ F is computable: given a state s and a function f, one can decide whether s ∈ dom f and, in this case, also compute f(s).
For example, every Petri net, every reset/transfer Petri net, and in fact every affine counter system (see Definition 6.2) is an effective, functional WSTS.
Lossy channel systems [ACABJ04] are not functional: any channel can lose a letter at any position, and although one may think of encoding this as a functional transition system defined by functions f_i for each i, where f_i would lose the letter at position i, this would require an unbounded number of functions. However, for the purpose of computing covers, lossy channel systems are equivalent [Sch01] ("equivalent" means that the decidability status of the usual properties is the same for both models) to functional-lossy channel systems, which are functional [FG09]. In the latter, there are functions send_a adding a fixed letter a to the back of a queue (i.e., dom(send_a) = Σ*, where Σ is the queue alphabet, and send_a(w) = wa), and functions recv_a reading a fixed letter a from the front of a queue, where reading is only defined when there is an a in the queue, and means removing all letters up to and including the first a from the queue (i.e., dom(recv_a) = {waw′ | w, w′ ∈ Σ*}, and recv_a(waw′) = w′ when a does not occur in w).
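As an illustration, the two families of functions can be sketched as follows for a single channel, with queues represented as Python strings (a simplification of ours; actual functional-lossy channel systems act on tuples of queues):

```python
def send(a):
    """send_a: defined on every word w (dom = Sigma*); appends a to the queue."""
    return lambda w: w + a

def recv(a):
    """recv_a: defined only when a occurs in w; removes every letter up to
    and including the first occurrence of a."""
    def f(w):
        i = w.find(a)
        if i < 0:
            raise ValueError("recv undefined: letter not in the queue")
        return w[i + 1:]
    return f

print(send('b')('ab'))   # 'abb'
print(recv('a')('cab'))  # 'b'
```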

Clovers of Complete WSTS
3.1. Complete WSTS and Their Clovers. All forward procedures for WSTS rest on completing the given WSTS to one that includes all limits. E.g., the state space of a Petri net is N^k, the set of all markings on k places, but the Karp-Miller algorithm works on N^k_ω, where N_ω is N plus a new top element ω, with the usual componentwise ordering. We have defined general completions of wpos, serving as state spaces, and have briefly described completions of (functional) WSTS in [FG09]. We temporarily abstract away from this, and consider complete WSTS directly.
Generalizing the notion of continuity to partial maps, we define: Definition 3.1 (Partial continuous map). A partial map f : X → Y between dcpos is partial continuous iff dom f is open and f preserves least upper bounds of directed subsets of dom f.

This is the special case of a more topological definition: in general, a partial continuous map f : X → Y is a partial map whose domain is open in X, and such that f⁻¹(U) is open (in X, or equivalently here, in dom f) for any open U of Y.
The composition of two partial continuous maps again yields a partial continuous map.
Definition 3.2 (Complete WSTS). A complete transition system is a functional transition system S = (S, →_F, ≤) where (S, ≤) is a continuous dcwo and every function in F is partial continuous.

A complete WSTS is a functional WSTS that is complete as a functional transition system.
The point of complete WSTS is that one can accelerate loops: Definition 3.3 (Lub-acceleration). Let (X, ≤) be a dcpo, f : X → X be partial continuous. The lub-acceleration f∞ : X → X is defined by: dom f∞ = dom f, and for any x ∈ dom f∞, f∞(x) = lub{f^n(x) | n ∈ N} if x ≤ f(x), and f∞(x) = f(x) otherwise. Note that if x ≤ f(x), then f(x) ∈ dom f (dom f is open, hence upward-closed), and f(x) ≤ f²(x). By induction, {f^n(x) | n ∈ N} is an increasing sequence, so that the definition makes sense.
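For intuition, here is a sketch of f∞ in a special case (an assumption of ours, chosen to match Petri net transitions): f is a translation f(x) = x + d on N^k_ω. When x ≤ f(x), each strictly increasing component grows without bound, so its lub is ω:

```python
OMEGA = float('inf')

def leq(x, y):
    return all(a <= b for a, b in zip(x, y))

def translate(d):
    """A Petri-net-like transition adding d componentwise (domain checks omitted)."""
    return lambda x: tuple(a + b for a, b in zip(x, d))

def lub_accelerate(f, x):
    """f_infinity(x): the lub of {f^n(x) | n in N} if x <= f(x), else f(x).
    For a translation, each strictly increased component has lub omega."""
    fx = f(x)
    if not leq(x, fx):
        return fx
    return tuple(OMEGA if b > a else a for a, b in zip(x, fx))

f = translate((1, 0, 0))             # hypothetical transition gaining one token
print(lub_accelerate(f, (0, 2, 5)))  # (inf, 2, 5)
```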
Complete WSTS are strongly monotonic. One cannot decide, in general, whether a recursive function f is monotonic [FMP04] or continuous, whether an ordered set (S, ≤) with a decidable ordering ≤ is a dcpo, or whether it is a wpo. To show the latter claim, for example, fix a finite alphabet Σ, and consider subsets S of Σ* specified by a Turing machine M with tape alphabet Σ, so that S is the language accepted by M. Let ≤ be, say, the prefix ordering on Σ*. The property that (S, ≤) is a dcpo, resp. a wpo, is non-trivial and extensional, hence undecidable by Rice's Theorem.
We can also prove, in a similar way, that one cannot decide whether a given effective ordered functional transition system is a WSTS, or a complete WSTS. However, the completion of any functional ω²-WSTS is complete, as we shall see in Theorem 4.4.
In a complete WSTS, there is a canonical finite representation of the cover: the clover (a succinct description of the closure of the cover). Definition 3.4 (Clover). Let S = (S, →_F, ≤) be a complete WSTS. The clover Clover_S(s_0) of the state s_0 ∈ S is Max Lub(Cover_S(s_0)). This is illustrated in Figure 1. The "down" part on the right is meant to illustrate in which directions one should travel to go down in the chosen ordering. The cover Cover_S(s_0) is a downward-closed subset, illustrated in blue (grey if you read this in black and white). Lub(Cover_S(s_0)) has some new least upper bounds of directed subsets, here x_1 and x_3. The clover is given by just the maximal points of Lub(Cover_S(s_0)), here x_1, x_2, x_3, x_4.
The fact that the clover is indeed a representation of the cover follows from the following.
Lemma 3.5. Let (S, ≤) be a continuous dcwo. For any closed subset F of S, Max F is finite and F = ↓Max F.

Proof. As F is closed, it is inductive (i.e., Lub(F) = F). In particular, every element x of F is below some maximal element of F: this is well-known, and an easy application of Zorn's Lemma. So F = ↓Max F. Since ≤ is a well partial ordering and Max F is an antichain, Max F is finite.

For any other representative, i.e., for any finite set R such that ↓R = ↓Clover_S(s_0), Clover_S(s_0) = Max R. Indeed, for any two finite sets A, B ⊆ S such that ↓A = ↓B, Max A = Max B. So the clover is the minimal representative of the cover, i.e., there is no representative R with |R| < |Clover_S(s_0)|. The clover was called the minimal coverability set in [Fin93].
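Computing the minimal representative Max R of a finite basis R is a straightforward finite computation; here is a sketch of our own on N²_ω:

```python
OMEGA = float('inf')

def leq(x, y):
    return all(a <= b for a, b in zip(x, y))

def max_elements(R):
    """Keep only the maximal elements of the finite set R: an element is
    dropped iff some other element of R dominates it."""
    return [r for r in R if not any(r != s and leq(r, s) for s in R)]

R = [(1, 2), (1, OMEGA), (3, 0), (0, 0)]
print(max_elements(R))  # [(1, inf), (3, 0)]
```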
Despite the fact that the clover is always finite, it is not computable in general (see Proposition 4.6 below). Nonetheless, it is computable on flat complete WSTS, and even on the larger class of clover-flattable complete WSTS (Theorem 5.21 below).
3.2. Completions. Many WSTS are not complete: the set N^k of states of a Petri net with k places is not even a dcpo. The set of states of a lossy channel system with k channels, (Σ*)^k, is not a dcpo under the subword ordering either. We have defined general completions of wpos, and of WSTS, in [FG09], a construction which we recall quickly.
The completion X̂ of a wpo (X, ≤) can be defined in either of two equivalent ways. First, X̂ is the ideal completion Idl(X) of X, i.e., the set of ideals of X, ordered by inclusion, where an ideal is a downward-closed directed subset of X. The least upper bound of a directed family of ideals (D_i)_{i∈I} is their union. X̂ can also be described as the sobrification S(X_a) of the Noetherian space X_a, but this is probably harder to understand.
There is an embedding η_X : X → X̂, i.e., an injective map such that x ≤ x′ in X iff η_X(x) ⊆ η_X(x′) in X̂. It is defined by η_X(x) = ↓x. This allows us to consider X as a subset of X̂, by equating X with its image η_X⟦X⟧, i.e., by equating each element x ∈ X with ↓x ∈ X̂. However, we shall only do this in informal discussions, as this tends to make proofs messier.
For instance, if X = N^k, e.g., with k = 3, then (1, 3, 2) is equated with the ideal ↓(1, 3, 2), while {(1, m, n) | m, n ∈ N} is a limit, i.e., an element of X̂ \ X; the latter is usually written (1, ω, ω), and is the least upper bound of all the (1, m, n), m, n ∈ N. The downward closure of (1, ω, ω) in X̂, intersected with X, gives back the set of non-limit elements (m, n, p) with m ≤ 1. This is a general situation: one can always write X̂ as the disjoint union X ∪ L, so that any downward-closed subset D of X can be written as X ∩ ↓A, where A is a finite subset of X ∪ L. Then L, the set of limits, is a weak adequate domain of limits (WADL) for X; we slightly simplify Definition 3.1 of [FG09], itself a slight generalization of [GRvB06b]. In fact, X̂ (minus X) is the smallest WADL [FG09, Theorem 3.4].
X̂ = Idl(X) is always a continuous dcpo. In fact, it is even algebraic [AJ94, Proposition 2.2.22]. It may, however, fail to be well-ordered, hence to be a continuous dcwo; see Proposition 4.2 below.

We have also described a hierarchy of datatypes on which completions are effective [FG09, Section 5]. Notably, N̂ = N_ω, Â = A for any finite poset A, and the completion of a product X_1 × ... × X_k is the product X̂_1 × ... × X̂_k of the completions. Also, the completion of X* is the space of word-products on X̂. These are the products, as defined in [ABJ98], i.e., regular expressions that are products of atomic expressions A* (A ∈ Pfin(X̂), A ≠ ∅) or a? (a ∈ X̂). In any case, elements of completions X̂ have a finite description, and the ordering ⊆ on elements of X̂ is decidable [FG09, Theorem 5.3].
Having defined the completion X̂ of a wpo X, we can define the completion S = X̂ of a (functional) WSTS X = (X, →_F, ≤): each partial monotonic map f ∈ F is lifted to a partial continuous map Sf on X̂, and S is the functional transition system on X̂ defined by the maps Sf, f ∈ F. In the cases of Petri nets and of functional-lossy channel systems, the completed WSTS is effective [FG09, Section 6]. The important fact, which assesses the importance of the clover, is Proposition 3.9 below. We first require a useful lemma. Up to the identification of X with its image η_X⟦X⟧, it states that for any downward-closed subset F of X̂, cl(F) ∩ X = F ∩ X, i.e., taking the closure of F only adds new limits, no proper elements of X.
Lemma 3.8. Let X be a wpo. For any downward-closed subset F of X̂, η_X⁻¹(cl(F)) = η_X⁻¹(F).

In other words, to compute the cover of s_0 in the WSTS X on the state space X, one can equivalently compute the cover of η_X(s_0) in the completed WSTS X̂, and keep only the non-limit elements (first equality of Proposition 3.9). Or one can equivalently compute the closure of the cover in the completed WSTS X̂, in the form of the downward closure ↓Clover_S(s_0) of its clover. The closure of the cover will include extra limit elements, compared to the cover, but no non-limit element, by Lemma 3.8. This is illustrated in Figure 2.

Figure 2: The clover and the cover, in a completed space.
Proposition 3.9. Let S = X̂ be the completion of the functional WSTS X = (X, →_F, ≤), and let s_0 ∈ X. Then Cover_X(s_0) = η_X⁻¹(Cover_S(η_X(s_0))) = η_X⁻¹(↓Clover_S(η_X(s_0))).

Proof. The first equality actually follows from Proposition 6.1 of [FG09]. To be self-contained, we give a direct proof: it is a consequence of (1) and (2) below. The second equality is a consequence of Proposition 3.7 and Lemma 3.8.
First, we show that: (1) η_X⁻¹(Cover_S(η_X(s_0))) ⊆ Cover_X(s_0). Conversely, we show: (2) Cover_X(s_0) ⊆ η_X⁻¹(Cover_S(η_X(s_0))). Let x ∈ Cover_X(s_0); so there are a natural number k ∈ N and k maps f_1, ..., f_k in F such that x ≤ f_k(... f_1(s) ...) for some s ≤ s_0, and since each Sf extends f along the embedding (Sf(η_X(z)) = η_X(f(z)) for z ∈ dom f), one checks that η_X(x) ∈ Cover_S(η_X(s_0)).

Cover_S(s_0) is contained, usually strictly, in ↓Clover_S(s_0). The above states that, when restricted to non-limit elements (in X), both contain the same elements. Taking lub-accelerations (Sf)∞ of any composition f of maps in F may produce states outside Cover_S(s_0), but these are always contained in ↓Clover_S(s_0) = cl(Cover_S(s_0)). So we can safely lub-accelerate in S = X̂ to compute the clover in S. While the closure of the cover is larger than the cover itself, taking the intersection back with X produces exactly the cover Cover_X(s_0).
In more informal terms, the cover is the set of states reachable by either following the transitions in F, or going down. The closure of the cover, ↓Clover_S(s_0), contains not just states that are reachable in the above sense, but also the limits of chains of such states. One may think of the elements of ↓Clover_S(s_0) as those states that are "reachable in infinitely many steps" from s_0. And we hope to find the finitely many elements of Clover_S(s_0) by doing enough lub-accelerations.

4. A Robust Class of WSTS: ω²-WSTS

It would seem clear that the completion S = X̂ of a WSTS X = (X, →_F, ≤) should, again, be a WSTS. We shall show that this is not the case. The only missing ingredient to show that S is a complete WSTS is to check that X̂ is well-ordered by inclusion. We have indeed seen that X̂ is a continuous dcpo; and S is strongly monotonic, because Sf is continuous, hence monotonic, for every f ∈ F.
Next, we shall concern ourselves with the question: under which conditions on X is S = X̂ again a WSTS? Equivalently, when is X̂ well-ordered by inclusion? We shall see that there is a definite answer: when X is ω²-wqo.

4.1. Motivation. The question may seem mostly of academic interest. On the contrary, we now illustrate that it is crucial for establishing the progress property described below.
Let us imagine a procedure in the style of the Karp-Miller tree construction. We shall provide an abstract version of one, Clover_S, in Section 5. However, to make things clearer, we shall for now use a direct imitation of the Karp-Miller procedure for Petri nets, generalized to arbitrary WSTS. This is a slight variant of the generalized Karp-Miller procedure of [Fin87, Fin90], and we shall therefore refer to it by that name.
We build a tree, with nodes labeled by elements of the completion X̂, and edges labeled by transitions f ∈ F. During the procedure, nodes can be marked extensible or non-extensible. We start with the tree with only one node, labeled s_0 and marked extensible. At each step of the procedure, we pick an extensible leaf node N, labeled with s ∈ X̂, say, and add new children to N. For each f ∈ F such that s ∈ dom Sf, let s′ = Sf(s), and add a new child N′ to N; the edge from N to N′ is labeled f. If s′ already labels some ancestor of N′, then we label N′ with s′ and mark it non-extensible. If s″ ≤ s′ for no label s″ of an ancestor of N′, then we label N′ with s′ and mark it extensible. Finally, if s″ < s′ for some label s″ of an ancestor N_0 of N′ (what we shall refer to as case (*) below), then the path from N_0 to N′ is labeled with a sequence of functions f_1, ..., f_p from F, and we label N′ with the lub-acceleration (f_p ∘ ... ∘ f_1)∞(s″). (There is a subtle issue here: if there are several such ancestors N_0, then we may have to lub-accelerate several sequences f_1, ..., f_p from the corresponding labels s″; in this case, we must create several successor nodes N′, one for each value of (f_p ∘ ... ∘ f_1)∞(s″).) When X = N^k and each f ∈ F is a Petri net transition, this is the Karp-Miller procedure, up to the subtle issue just mentioned, which we shall ignore.
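The procedure just described can be sketched compactly for ordinary Petri nets. The following is our own simplified rendering (depth-first, comparing only against ancestor labels on the current branch, as above; transitions are pairs (pre, post) of vectors):

```python
OMEGA = float('inf')

def leq(x, y):
    return all(a <= b for a, b in zip(x, y))

def fire(pre, post, x):
    """Fire a transition (pre, post) at marking x (assumed x >= pre)."""
    return tuple(a - p + q for a, p, q in zip(x, pre, post))

def karp_miller(s0, transitions):
    labels = set()
    stack = [(s0, [])]          # work items: (label, ancestor labels on branch)
    while stack:
        s, ancestors = stack.pop()
        labels.add(s)
        for pre, post in transitions:
            if not leq(pre, s):
                continue        # transition not enabled
            t = fire(pre, post, s)
            if t in ancestors:
                continue        # repeats an ancestor label: non-extensible
            for a in ancestors:
                if leq(a, t) and a != t:
                    # case (*): accelerate, putting omega on the strictly
                    # increased components
                    t = tuple(OMEGA if b > c else c for c, b in zip(a, t))
            stack.append((t, ancestors + [s]))
    return labels

# Hypothetical one-place net whose single transition adds a token:
print(sorted(karp_miller((0,), [((0,), (1,))])))  # [(0,), (1,), (inf,)]
```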
Let us recall that the Karp-Miller tree (and also the reachability tree) is finitely branching, since the set F of functions is finite.This will allow us to use König's Lemma, which states that any finitely branching, infinite tree has at least one infinite branch.
The reasons why the original Karp-Miller procedure terminates on (ordinary) Petri nets are two-fold. First, when X̂ = N^k_ω, one cannot lub-accelerate more than k times along a branch, because each lub-acceleration introduces a new ω component into the label of the produced state, which does not disappear in later node extensions. This is specific to Petri nets, and already fails for reset Petri nets, where ω components do disappear.
The second reason is of more general applicability: N^k_ω is a wpo, and this implies that, along every infinite branch of the tree thus constructed, case (*) will eventually happen, and in fact will happen infinitely many times. Call this progress: along any infinite path, one will lub-accelerate infinitely often. In the original Karp-Miller procedure for Petri nets, this entails termination.
As we have already announced, for WSTS other than Petri nets, termination cannot be ensured. But at least we would like to ensure progress. The argument above shows that progress is obtained provided X̂ is a wpo (or even just wqo). This is our main motivation for characterizing those wpos X such that X̂ is again a wpo.
This example also illustrates the following: progress does not mean that we shall eventually compute limits g∞(s) that could not be reached in finitely many steps. In the example above, we do lub-accelerate infinitely often, but none of these lub-accelerations actually serves any purpose. Progress will take a slightly different form in the actual procedure Clover_S of Section 5. In fact, the latter will not build a tree, as the tree is only algorithmic support for ensuring a fair choice of a state in X̂, and essentially acts as a distraction. However, progress will be crucial to its correctness (see Proposition 5.4).

4.2. The Rado Structure. We now return to the purpose of this section: showing that X̂ is well-ordered iff X is ω²-wqo. We start by showing that, in some cases, X̂ is indeed not well-ordered. Take X to be Rado's structure X_Rado [Rad54], i.e., {(m, n) ∈ N² | m < n}, ordered by ≤_Rado: (m, n) ≤_Rado (m′, n′) iff m = m′ and n ≤ n′, or n < m′. It is well-known that ≤_Rado is a well quasi-ordering, but that the powerset P(X_Rado) is not well-quasi-ordered by the induced ordering, under which A is below B iff every element of A is below some element of B. To see this, let ω_i = {(i, n) | n ≥ i + 1} ∪ {(m, n) | n < i} for each i ∈ N. This is pictured as the dark blue (or dark grey) region in Figure 4, and arises naturally in Lemma 4.1 below. Note that ω_i is downward-closed in ≤_Rado. However, when i < j, (i, j) is in ω_i but below no element of ω_j, so ω_i is not below ω_j. So (ω_i)_{i∈N} is an infinite sequence in P(X_Rado) from which one cannot extract any infinite ascending chain. Hence P(X_Rado) is indeed not wqo.
Let us characterize X̂_Rado. To this end, we exploit the fact that X̂_Rado = Idl(X_Rado), and examine the structure of the directed subsets of X_Rado.
Lemma 4.1. The downward-closed directed subsets of X_Rado, apart from those of the form ↓(m, n), are the sets ω_i, i ∈ N, and X_Rado itself.

Proof. Take any downward-closed directed subset D of X_Rado. Consider the set I of all integers i such that some (i, n) is in D. If I is not bounded, then D = X_Rado. Indeed, for every (m, n) ∈ X_Rado, since I is not bounded there is some (i0, n0) ∈ D with i0 > n; then (m, n) ≤_Rado (i0, n0), since n < i0, so (m, n) ∈ D, D being downward-closed. If I is bounded, on the other hand, let i be the largest element of I. Then (i, i + 1) is in D: by assumption (i, n) is in D for some n ≥ i + 1, hence (i, i + 1) also, since D is downward-closed.
There cannot be any (i', j') ∈ D with i' < i and j' ≥ i. That is, the rectangular area above the lower triangle of ω_i, as shown in Figure 4, must lie entirely outside D. Otherwise, since D is directed, there would be an (i'', j'') ∈ D with (i, i + 1), (i', j') ≤_Rado (i'', j''). The case i'' = i' is impossible: then (i, i + 1) ≤_Rado (i'', j'') would imply i = i'' and i + 1 ≤ j'' (impossible since i'' = i' < i), or i + 1 < i'' (impossible since then i < i'' = i' < i). So i'' ≠ i', and (i', j') ≤_Rado (i'', j'') forces j' < i''; since j' ≥ i, we obtain i'' > i, contradicting the maximality of i in I.
On the other hand, since (i, i + 1) is in D, the lower triangle of ω_i, as shown in Figure 4, must be in D: these are the points (m, n) with n < i.
If the set of natural numbers n such that (i, n) is in D is bounded, say by n_max, then the only elements in D are those of the form (i, j) with j ≤ n_max, and those of the form (m, n) with n < i. One checks easily that this set is ↓(i, n_max) in X_Rado. Otherwise, D contains every (i, n) with n ≥ i + 1, and therefore D contains ω_i. It cannot contain more, so D = ω_i. One then checks that ω_i is indeed directed and downward-closed.
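These facts about Rado's structure can be checked mechanically on a finite grid. The sketch below (the helper names `rado_le` and `in_omega` are ours, and the grid bound N is arbitrary) verifies that each ω_i is downward closed for ≤_Rado, and that the ω_i are pairwise incomparable under inclusion:

```python
def rado_le(p, q):
    """(m, n) <=_Rado (m2, n2) iff (m = m2 and n <= n2) or n < m2."""
    (m, n), (m2, n2) = p, q
    return (m == m2 and n <= n2) or n < m2

def in_omega(i, p):
    """Membership of p = (m, n) in omega_i (p is assumed to lie in X_Rado)."""
    m, n = p
    return (m == i and n >= i + 1) or n < i

N = 10  # finite fragment of X_Rado = {(m, n) | m < n}
grid = [(m, n) for m in range(N) for n in range(N) if m < n]

# each omega_i is downward closed with respect to <=_Rado
for i in range(4):
    assert all(in_omega(i, p)
               for p in grid for q in grid
               if in_omega(i, q) and rado_le(p, q))

# for i < j, (i, j) is in omega_i but not omega_j, and (j, j+1) conversely:
# the omega_i form an infinite antichain under inclusion
for i in range(4):
    for j in range(i + 1, 5):
        assert in_omega(i, (i, j)) and not in_omega(j, (i, j))
        assert in_omega(j, (j, j + 1)) and not in_omega(i, (j, j + 1))
```

Since the ω_i are downward-closed, inclusion coincides with the ≤♭ ordering here, so the check exhibits the promised bad sequence in P(X_Rado).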
So X̂_Rado = Idl(X_Rado) is obtained by adjoining infinitely many elements ω_0, ω_1, ..., ω_i, ..., and ω to X_Rado. They are ordered so that (i, n) ≤ ω_i for all n ≥ i + 1, (m, n) ≤ ω_i for all n < i, ω_i ≤ ω for all i ∈ N, and no other ordering relationship exists that involves one of the fresh elements, beyond those generated by transitivity. In particular, note that {ω_i | i ∈ N} is an infinite antichain, whence X̂_Rado = Idl(X_Rado) is not wqo:

Proposition 4.2. X̂_Rado contains an infinite antichain, and is therefore not well-ordered by inclusion.
4.3. ω²-WSTS. Recall the working definition of [Jan99]: a well-quasi-order X is ω²-wqo if and only if it does not contain an (isomorphic copy of) X_Rado; we use Jančar's definition here, as it is more tractable than the complex original definition of [Mar94]. Jančar proved that X is ω²-wqo iff (P(X), ≤♭) is wqo, see e.g. [Jan99]. We show that Rado's structure is the only case that can go bad:

Proposition 4.3. Let S be a well-quasi-order. Then Ŝ is well-quasi-ordered by inclusion iff S is ω²-wqo.
Proof. Recall that B1 ≤♭ B2 if and only if for every y1 ∈ B1, there is a y2 ∈ B2 with y1 ≤ y2. Reformulate Jančar's result [Jan99] using this ordering: S is ω²-wqo if and only if P(S) is well-quasi-ordered by ≤♭.
Recall that the Alexandroff topology on a poset is the collection of its upward-closed subsets; i.e., a subset is Alexandroff-open if and only if it is upward-closed. Write S_a for S with its Alexandroff topology. Any set of the form ↑B in S is Alexandroff-open (i.e., upward-closed), and any Alexandroff-open set is of this form with B finite, because S is wqo. In other words, the set O(S_a) of all opens (upward-closed subsets) of S is well-ordered by reverse inclusion ⊇ if and only if S is ω²-wqo.
Recall that the Hoare powerdomain H(S_a) of S_a is the set of all non-empty closed subsets of S_a (the non-empty downward-closed subsets of S), ordered by inclusion. It follows that H(S_a) is well-ordered by inclusion ⊆ if and only if S is ω²-wqo. Then recall that the completion Ŝ = S(S_a) is the subspace of H(S_a) consisting of all irreducible closed subsets [Gou07].
When S is ω²-wqo, since H(S_a) is well-ordered by inclusion, the subset Ŝ = S(S_a) is also well-ordered by inclusion.
Conversely, assume that Ŝ = S(S_a) is well-ordered by inclusion. If S were not ω²-wqo, then it would contain a subset Y order-isomorphic to X_Rado. Hence Ŝ = S(S_a) = Idl(S) would contain Ŷ = Idl(Y). However, by Proposition 4.2, Idl(Y) contains an infinite antichain: contradiction.

4.4. Are ω²-wqos Ubiquitous? X_Rado is an example of a wqo that is not ω²-wqo. It is natural to ask whether this is the norm or the exception. We claim that all wpos used in the verification literature are in fact ω²-wpos.
Consider the following grammar of datatypes, which extends that of [FG09, Section 5] with the case of finite trees (last line):

D ::= N | A | D1 × ... × Dk | D1 + ... + Dk | D* | D⊛ | T(D)        (4.1)

N is ordered with its usual ordering; the ordering ≤ on the arbitrary finite set A is itself arbitrary. Finite products are ordered componentwise: given that each Di is ordered by ≤i, the ordering ≤ on D = D1 × ... × Dk is defined by (x1, ..., xk) ≤ (y1, ..., yk) iff x1 ≤1 y1 and ... and xk ≤k yk. Finite sums are ordered in the obvious way: the elements of D1 + ... + Dk are pairs (i, x) where 1 ≤ i ≤ k and x ∈ Di, and (i, x) ≤ (j, y) iff i = j and x ≤i y. D* is the set of finite words over the (possibly infinite) alphabet D, and given that the ordering on D is ≤, D* is ordered by the divisibility ordering ≤*, defined by w ≤* w' iff, writing w as the sequence of letters a1 a2 ... an, w' is of the form w0 a'1 w1 a'2 w2 ... a'n wn, for some words w0, w1, ..., wn, and some letters a'i, 1 ≤ i ≤ n, such that ai ≤ a'i.
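The divisibility ordering ≤* can be decided by a greedy left-to-right scan, matching each letter of w to the earliest compatible letter of w'; a standard exchange argument shows that greedy matching is complete for any relation on letters. A minimal sketch (the function name `word_le` is ours):

```python
def word_le(w, w2, le=lambda a, b: a <= b):
    """w <=* w2: the divisibility (word embedding) ordering.
    Greedily match each letter of w to the earliest letter of w2
    that is above it with respect to `le`."""
    j = 0
    for a in w:
        while j < len(w2) and not le(a, w2[j]):
            j += 1
        if j == len(w2):
            return False
        j += 1  # consume the matched letter of w2
    return True
```

For instance, `word_le("ace", "abcde")` holds, while `word_le("ba", "ab")` does not, since the embedding must preserve the order of letters.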
D⊛ is the set of finite multisets {|x1, ..., xn|} of elements of D. Write again ≤ for the ordering on D. Then D⊛ is ordered by ≤⊛, defined by: {|x1, x2, ..., xm|} ≤⊛ {|y1, y2, ..., yn|} iff there is an injective map r : {1, ..., m} → {1, ..., n} such that xi ≤ y_r(i) for all i, 1 ≤ i ≤ m. Note that ≤⊛ is not the usual multiset extension ≤mul of ≤. However, for one, ≤⊛ is the ≤m quasi-ordering considered, on finite sets, by Abdulla et al. [ADMN04b, Section 2], for example. Second, it turns out that m ≤⊛ m' entails m ≤mul m'. In particular, the fact that ≤⊛ is well whenever ≤ is entails that ≤mul is well, too: given any sequence of multisets (mi)_{i ∈ N}, one can extract an infinite ascending chain with respect to ≤⊛, hence also with respect to ≤mul. Similarly, when (D, ≤) is an ω²-wqo, then so is (D⊛, ≤mul), using the fact that X is ω²-wqo iff both X and P(X) are wqo (the latter equipped with ≤♭).
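For illustration, here is a sketch of ≤⊛ for multisets of natural numbers with their usual ordering (names ours). The greedy matching on sorted copies is a shortcut that is only valid for total orders; a general quasi-ordering would require a bipartite-matching test.

```python
def mset_le(xs, ys):
    """{|xs|} <=⊛ {|ys|} over N with the usual order: an injective map r
    with x_i <= y_r(i) exists iff greedy matching on sorted copies
    succeeds (a shortcut that is specific to total orders)."""
    ys = sorted(ys)
    used = [False] * len(ys)
    for x in sorted(xs, reverse=True):   # place the largest elements first
        for j in range(len(ys) - 1, -1, -1):
            if not used[j] and x <= ys[j]:
                used[j] = True           # match x with the largest free y >= x
                break
        else:
            return False
    return True
```

E.g., {|2, 2|} ≤⊛ {|3, 1, 2|} (match the two 2's with 3 and 2), but {|1, 1, 1|} is not below {|1, 1|}, since the map must be injective.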
Finally, T(D) is the set of all finite (unranked, ordered) trees over function symbols taken from D. This is the smallest set X such that, for every f ∈ D and every word t ∈ X*, the pair (f, t) is in X. When t is the word consisting of the terms t1 t2 ... tm, we usually write (f, t) as the term f(t1, t2, ..., tm). Given an ordering ≤ on D, the embedding ordering ≤emb on T(D) is defined by induction on the sum of the sizes of the terms to compare, by: t = f(t1, t2, ..., tm) ≤emb g(u1, u2, ..., un) iff t ≤emb uj for some j, 1 ≤ j ≤ n, or f ≤ g and t1 t2 ... tm (≤emb)* u1 u2 ... un, where (≤emb)* is the divisibility ordering on words of terms.

We will prove that every datatype defined in (4.1) is not only ω²-wqo but even a better quasi-ordering (bqo). Better quasi-orderings were invented by Nash-Williams to overcome certain limitations of wqo theory [NW65]. Their definition is complex, and we shall omit it. In short, X is bqo iff P_{ω1}(X) is wqo, where ω1 is the first uncountable ordinal, and Pα(X) is defined for every ordinal α by P0(X) = X, P_{α+1}(X) = P(Pα(X)), and Pα(X) = ⋃_{β<α} Pβ(X) for every limit ordinal α, where powersets are quasi-ordered by ≤♭. Abdulla and Nylén give a gentle introduction to the theory of bqos [AN00]. Then:

Proposition 4.5. Every datatype defined in (4.1) is ω²-wqo, and in fact bqo.
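The embedding ordering ≤emb can be decided by a direct recursion mirroring the inductive definition: either t embeds into some immediate subterm of u, or the root symbols are related and the child sequences embed wordwise. A sketch, encoding f(t1, ..., tm) as (f, [children]) and taking equality of symbols as the default ordering (the name `tree_le` is ours):

```python
def tree_le(t, u, le=lambda f, g: f == g):
    """Homeomorphic embedding: t <=emb u iff t embeds into some child
    of u, or the roots satisfy `le` and the child lists embed wordwise."""
    f, ts = t
    g, us = u
    if any(tree_le(t, u2, le) for u2 in us):   # sink t into a subterm of u
        return True
    if not le(f, g):
        return False
    # greedy word embedding of the child sequences under <=emb
    j = 0
    for t2 in ts:
        while j < len(us) and not tree_le(t2, us[j], le):
            j += 1
        if j == len(us):
            return False
        j += 1
    return True
```

For instance, f(a, b) ≤emb f(a, c, b), but f(b, a) ≰emb f(a, c, b): the embedding must preserve the left-to-right order of the children.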
Proof. Every bqo is ω²-wqo, as the above characterization shows (Pα(X) is wqo for all α ≤ ω1, hence certainly for α = 0 and α = 1). Any finite ordered set, any finite union of bqos, and any finite product of bqos is bqo [Mil85]. When D is bqo, the set of all ordinal-indexed sequences over D is again bqo under an obvious extension of the divisibility ordering, see [NW65] or [Mil85, 2.22]. Since any subset of a bqo is again bqo, we deduce that D* is bqo whenever D is (this is also mentioned in [AN00, Theorem 3.1 (3)]). When D is bqo, D⊛ is proved to be bqo in [AN00, Theorem 3.1 (4)]. Finally, if D is bqo then T(D) is bqo, by [Lav71, Theorem 2.2]; Laver in fact shows that the class of so-called Q-trees is bqo under tree embedding as soon as Q is, where a Q-tree is a possibly infinitely branching tree with branches of length at most ω whose nodes are labeled with elements of Q.
In fact, all naturally occurring wqos are bqos, with the possible notable exception of finite graphs quasi-ordered by the graph minor relation, which are wqo [RS04] but not known to be bqo.

4.5. Effective Complete WSTS. The completion Ŝ of a WSTS S is effective iff the completion of the set of states is effective and Ŝf is recursive for every f ∈ F. The completion of the state space is effective for all the data types of [FG09, Section 5]. Also, Ŝf is indeed recursive for every f ∈ F, notably for Petri nets, functional-lossy channel systems, and reset/transfer Petri nets.
In the case of ordinary or reset/transfer Petri nets, and in general for all affine counter systems (which we shall investigate from Definition 6.2 on), Ŝf coincides with the extension defined in [FMP04, Section 2]: whenever dom f is upward-closed and f : N^k → N^k is defined by f(s) = A s + a, for some matrix A ∈ N^{k×k} and vector a ∈ Z^k, then dom Ŝf = ↑ dom f (the upward closure taken in N^k_ω), and Ŝf(s) is again defined as A s + a, this time for all s ∈ dom Ŝf ⊆ N^k_ω, using the convention that 0 × ω = 0 when computing the matrix product A s [FMP04, Theorem 7.9].
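The convention 0 × ω = 0 has to be made explicit in any implementation, since IEEE arithmetic yields 0 * inf = NaN. A minimal sketch of the evaluation of Ŝf on N^k_ω, representing ω by `math.inf` (the function names are ours):

```python
import math

W = math.inf  # omega

def mul(c, v):
    # scalar product on N_omega, with the convention 0 * omega = 0
    # (IEEE arithmetic would give 0 * inf = nan)
    return 0 if c == 0 else c * v

def hat_f(A, a, s):
    """Evaluate the extension of the affine map f(s) = A s + a
    at a point s of N^k_omega."""
    k = len(s)
    return tuple(sum(mul(A[i][j], s[j]) for j in range(k)) + a[i]
                 for i in range(k))
```

For example, with A the matrix ((1,0),(0,2)) and a = (1,0), the second coordinate of hat_f at (3, ω) is 2·ω = ω, while the first is 1·3 + 0·ω + 1 = 4.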
In the case of functional-lossy channel systems, it is easy to see that dom Ŝ(send_a) is the whole of Ŝ, that Ŝ(send_a)(P) = P a? for every word-product P, and that dom Ŝ(recv_a) = ↑_Ŝ a?. These formulae in fact work whenever letters are taken from an alphabet that is wqo, for example any of the data types D of (4.1). We retrieve the formulae of [ABJ98, Lemma 6], which were proved in the case where the alphabet D is finite, with equality as ordering. This also generalizes the algorithms on the so-called word language generators of [ADMN04a], with A finite.
As promised, we can now show:

Proposition 4.6. There are effective complete WSTS S such that the map Clover_S : S → Pfin(S) is not recursive.

Proof. Let S be the completion of a functional-lossy channel system [FG09, Section 6] on the message alphabet Σ. By Theorem 4.4, S is a complete WSTS. It is effective, too, see above or [ABJ98, Lemma 6]. Clover_S(s0) can be written as a finite set of tuples, consisting of control states q_i (one for each of the communicating automata) and of word-products P_j (one for each channel). Each P_j is a product of atomic expressions A* (A ∈ Pfin(Σ), A ≠ ∅) or a? (a ∈ Σ). Now Post*_S(s0) is finite iff none of these atomic expressions is of the form A*. So, if we could compute Clover_S(s0), we could decide boundedness for functional-lossy channel systems. However, functional-lossy channel systems are equivalent to lossy channel systems in this respect, and boundedness is undecidable for the latter [CFP96]. We could have played the same argument with reset Petri nets [DFS98] instead.

5. A Conceptual Karp-Miller Procedure
There are some advantages in using a forward procedure that computes (part of) the clover for solving coverability. For depth-bounded processes, a fragment of the π-calculus, the simple backward algorithm (computing the set of predecessors of an upward-closed initial set) of [AČJT00] is not applicable when the maximal depth of configurations is not known in advance, because in this case the predecessor configurations are not effectively computable [WZH10]. It has also been proved that, unlike backward algorithms (which solve coverability without computing the clover), the Expand, Enlarge and Check forward algorithm of [GRvB07], which operates on complete WSTS, solves coverability by computing a sufficient part of the clover, even though the depth of the process is not known a priori [WZH10]. Recently, Zufferey, Wies and Henzinger proposed to compute a part of the clover by using a particular widening, called a set-widening operator [ZWH12], which loses some information but always terminates, and seems sufficiently precise to compute the clover in various case studies.
The Petri net case also gives complexity-theoretic insights. Coverability in Petri nets can be solved by Rackoff's forward procedure [Rac78], or by the backward procedure [BG11]. Both run in EXPSPACE; the complexity of the forward coverability procedure of [GRvB07] is not known. On the other hand, the complexity of computing the clover is not primitive recursive for Petri nets [MM81].
Model-checking safety properties of WSTS can be reduced to coverability, but there are other properties, such as boundedness (is Post*_S(s) finite?) and U-boundedness (is Post*_S(s) ∩ U finite?), that cannot be reduced to coverability: U-boundedness is decidable for Petri nets and for Vector Addition Systems, but undecidable for Reset Vector Addition Systems [DFS98] and for Lossy Channel Systems [May03a], hence for general WSTS.
Recall that being able to compute the clover allows one to decide not only coverability (since s (≥; →*; ≥) t iff t ∈ Cover_S(s) iff ∃t' ∈ Clover_S(s) such that t ≤ t'), but also boundedness, U-boundedness and place-boundedness. To the best of our knowledge, the only known algorithms that decide place-boundedness (and also some formal language properties such as regularity and context-freeness of Petri net languages) require one to compute the clover.
Another argument in favor of computing clovers is Emerson and Namjoshi's [EN98] approach to model-checking liveness properties of WSTS, which uses a finite (coverability) graph based on the clover.Since WSTS enjoy the finite path property ([EN98], Definition 7), model-checking liveness properties is decidable for complete WSTS for which the clover is computable.
All these reasons motivate us to try to compute the clover for classes of complete WSTS, even though it is not computable in general.
The key to designing any form of Karp-Miller procedure, such as the generalized Karp-Miller tree procedure (Section 4.1) or the Clover_S procedure below, is being able to compute lub-accelerations. Hence:

Definition 5.1 (∞-Effective). An effective complete functional WSTS S = (S, F→, ≤) is ∞-effective iff every function g∞ is computable, for every g ∈ F*, where F* is the set of all compositions of maps in F.

E.g., the completion of a Petri net is ∞-effective: not only is N^k_ω a wpo, but every composition of transitions g ∈ F* is of the form g(x) = x + a with dom g = ↑b, for some a ∈ Z^k and b ∈ N^k, and g∞ is then clearly computable: whenever x < g(x), g∞(x) is obtained from x by setting to ω every coordinate i such that a_i > 0.

Let S be an ∞-effective WSTS, and write A ⊑ B iff ↓A ⊆ ↓B, i.e., iff every element of A is below some element of B. This is the Hoare quasi-ordering, also known as the domination quasi-ordering. The following simple procedure computes the clover of its input s0 ∈ S (when it terminates):

Procedure Clover_S(s0):
1. A ← {s0};
2. while not (Post_S(A) ⊑ A) do
   (a) choose fairly (g, a) ∈ F* × A such that a ∈ dom g;
   (b) A ← A ∪ {g∞(a)};
3. return Max A.

Note that Clover_S is well-defined and all its lines are computable by assumption, provided we make clear what we mean by fair choice at line (a). Call A_m the value of A after m turns of the loop at step 2 (so in particular A_0 = {s0}). The choice at line (a) is fair iff, on every infinite execution, every pair (g, a) ∈ F* × A_m is picked at some later stage n ≥ m.
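To make the procedure concrete, here is a sketch of Clover for completions of Petri nets, with states in N^k_ω (ω represented by `math.inf`) and transitions given as pairs (pre, delta). Two simplifications are ours and not part of the procedure itself: the fair choice over all of F* is approximated by enumerating compositions up to a bounded length `max_len`, and, like the procedure, the sketch need not terminate in general.

```python
import itertools
import math

W = math.inf  # omega; states are tuples in N^k_omega

def leq(x, y):
    return all(a <= b for a, b in zip(x, y))

def apply_seq(seq, x):
    """Run a composition g of transitions (pre, delta); None if not firable at x."""
    for pre, delta in seq:
        if not all(x[i] >= pre[i] for i in range(len(x))):
            return None
        x = tuple(x[i] + delta[i] for i in range(len(x)))
    return x

def lub_accel(seq, x):
    """g_infinity(x): if x < g(x), set every strictly increasing coordinate to omega."""
    y = apply_seq(seq, x)
    if y is None:
        return None
    if leq(x, y) and x != y:
        return tuple(W if y[i] > x[i] else x[i] for i in range(len(x)))
    return y

def clover(s0, transitions, max_len=2):
    A = {s0}
    while True:
        post = [apply_seq([t], a) for t in transitions for a in A]
        if all(y is None or any(leq(y, b) for b in A) for y in post):
            break                                   # fixpoint test: Post(A) below A
        for L in range(1, max_len + 1):             # bounded stand-in for a fair
            for seq in itertools.product(transitions, repeat=L):  # choice in F*
                for a in list(A):
                    y = lub_accel(seq, a)
                    if y is not None:
                        A.add(y)
    return {a for a in A if not any(a != b and leq(a, b) for b in A)}  # Max A
```

For a net with a single transition that increments a counter, the first acceleration already produces a state with an ω component, and the fixpoint test then stops the loop.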
A possible implementation of this fair choice is the generalized Karp-Miller tree construction of Section 4.1: organize the states of A as labeling nodes of a tree that we grow.At step m, A m is the set of leaves of the tree, and case (*) of the generalized Karp-Miller tree construction ensures that all pairs (g, a) ∈ F * × A m will eventually be picked for consideration.However, the generalized Karp-Miller tree construction does some useless work, e.g., when two nodes of the tree bear the same label.
Most existing proposals for generalizing the Karp-Miller construction do build such a tree [KM69, Fin90, Fin93, GRvB07], or a graph [EN98].We claim that this is mere algorithmic support for ensuring fairness, and that the goal of such procedures is to compute a finite representation of the cover.Our Clover S procedure computes the clover, which is the minimal such representation, and isolates algorithmic details from the core construction.
We shall also see that termination of Clover_S has strong ties with the theory of flattening [BFLS05]. However, Bardin et al. require one to enumerate sets of the form g*(x), which is sometimes harder than computing the single element g∞(x). For example, if g : N^k → N^k is an affine map g(x) = Ax + b − a for some matrix A ∈ N^{k×k} and vectors a, b ∈ N^k, then g∞(x) is computable as a vector in N^k_ω, as we have seen in Section 4.5. But g*(x) is not even definable by a Presburger formula in general, even when g is a composition of Petri net transitions; this is because reachability sets of Petri nets are not semi-linear in general [HP79].
Finally, we use a fixpoint test (line 2) that is not present in the Karp-Miller algorithm; this improvement allows Clover_S to terminate in more cases than the Karp-Miller procedure when it is used for extended Petri nets (for instance for reset Petri nets, which are a special case of the affine maps above), as we shall see. To decide whether the current set A, which is always an under-approximation of Clover_S(s0), is the clover, it is enough to decide whether Post_S(A) ⊑ A. The various Karp-Miller procedures only test each branch of a tree separately, with the partial exception of the minimal coverability tree algorithm [Fin90] and Geeraerts et al.'s recent coverability algorithm [GRvB07], which compare nodes across branches. That the simple test Post_S(A) ⊑ A does all this at once does not seem to have been observed until now.

5.1. Correctness and Termination of the Clover Procedure. By Proposition 4.6, we cannot hope to have Clover_S terminate on all inputs. But we can at least start by showing that it is correct whenever it terminates. This will be Theorem 5.5 below.
We first show that if Clover_S terminates, then the computed set A is contained in Lub(Post*_S(s0)). It is crucial that Lub(F) = cl(F) for any downward-closed set F, which holds because the state space S is a continuous dcpo. We use this through invocations of Proposition 3.7.
Lemma 5.2. Let S = (S, F→, ≤) be a complete (functional) WSTS. For any subset A of states, Post*_S(cl(A)) ⊆ cl(Post*_S(A)).

Proof. We first observe that Post_S(cl(A)) ⊆ cl(Post_S(A)). Indeed, for any s ∈ Post_S(cl(A)), there is an f ∈ F and some t ∈ dom f ∩ cl(A) such that f(t) = s. Let U be the complement of cl(Post_S(A)): U is open by definition. Since f is partial continuous, f⁻¹(U) is open. If s were in U, then t would be in f⁻¹(U), and in cl(A). It is a general property of topological spaces that an open set (here f⁻¹(U)) meets cl(A) iff it meets A. So there is also a state t' in f⁻¹(U) ∩ A. That is, t' ∈ dom f, f(t') ∈ U and t' ∈ A. But t' ∈ A implies f(t') ∈ Post_S(A) ⊆ cl(Post_S(A)), contradicting the fact that f(t') ∈ U. So s cannot be in U, i.e., s ∈ cl(Post_S(A)).
By an easy induction on k ∈ N, it follows that Post^k_S(cl(A)) ⊆ cl(Post^k_S(A)), hence that Post*_S(cl(A)) ⊆ cl(Post*_S(A)).

Proposition 5.3. Let S be an ∞-effective complete functional transition system, and let A_n be the value of the set A computed by the procedure Clover_S on input s0 after n iterations of the while statement at line 2. Then A_n is finite, and A_n ⊑ A_{n+1} ⊑ Clover_S(s0), for every n ∈ N.
Proof. It is obvious that A_n is finite. Also, the inclusion A_n ⊆ ↓A_{n+1} is clear, and entails A_n ⊑ A_{n+1}. We show that A_n ⊑ Clover_S(s0), i.e., that A_n ⊆ ↓Clover_S(s0), by induction on n. By Proposition 3.7, it is equivalent to show that A_n ⊆ cl(Cover_S(s0)).
If n = 0, then A_0 = {s0}, so A_0 ⊆ Cover_S(s0) ⊆ cl(Cover_S(s0)). Assume A_n ⊆ cl(Cover_S(s0)), and let us prove that A_{n+1} ⊆ cl(Cover_S(s0)). Let (g, a) be the pair selected at line (a). We must show that g∞(a) ∈ cl(Cover_S(s0)); this follows from Lemma 5.2, which shows that every g^k(a) lies in cl(Cover_S(s0)), together with the fact that cl(Cover_S(s0)), being Scott-closed, also contains the lub g∞(a) of this directed family.

If the procedure Clover_S does not stop, it computes an infinite sequence of sets of states. In other words, Clover_S does not deadlock. This is the progress property mentioned in Section 4.1.
Proposition 5.4 (Progress). Let S be an ∞-effective complete functional WSTS, and let A_n be the value of the set A computed by the procedure Clover_S on input s0 after n iterations of the while statement at line 2. If ⋃_n A_n is finite, then the procedure Clover_S terminates on input s0.
Proof. Assume that Clover_S does not stop on input s0, but that A = ⋃_n A_n is finite. Since A_n ⊆ A_{n+1} for every n, there is an index m such that A_n = A_m for all n ≥ m; also A = A_m. Let (g, a) ∈ F* × A be arbitrary, with a ∈ dom g. We shall show that g(a) ⊑ A, i.e., that there is an element a' ∈ A such that g(a) ≤ a'. Since a ∈ A_m, by fairness there is an n ≥ m such that (g, a) is picked at line (a) after n iterations of the loop. Then g∞(a) ∈ A_{n+1} = A, and g(a) ≤ g∞(a), so g(a) ⊑ A. It follows that Post*_S(A) ⊑ A, in particular Post_S(A) ⊑ A, hence the procedure must stop after m turns of the loop: contradiction. The converse implication (if Clover_S terminates, then ⋃_n A_n is finite) is obvious.
While Clover_S is non-deterministic, this is don't-care non-determinism: if one execution does not terminate, then no execution terminates. If Clover_S terminates, then it computes the clover, and if it does not terminate, then at each step n the set A_n is contained in the clover; recall also that A_n ⊑ A_{n+1}. We can now prove:

Theorem 5.5. Let S be an ∞-effective complete functional WSTS. If Clover_S terminates on input s0, then it returns Clover_S(s0).

Were Clover_S to terminate on all inputs, we could in any case decide boundedness, i.e., whether Post*_S(s0) is finite. But this is impossible [CFP96, May03b]. A similar argument works with reset Petri nets, where boundedness is also undecidable [DFS98].

5.2. Clover-Flattable Complete WSTS. We now characterize those ∞-effective complete WSTS on which Clover_S terminates.
A functional transition system (S, F→) with initial state s0 is flat iff there are finitely many words w1, w2, ..., wk ∈ F* such that any fireable sequence of transitions from s0 is contained in the language w1* w2* ... wk*. (We equate functions in F with letters from the alphabet F, and words over F with the corresponding compositions of maps, i.e., fg denotes g • f.) Ginsburg and Spanier [GS64] call such a language bounded, and show that it is decidable whether a given context-free language is bounded.
Not all systems of interest are flat. The simplest example of a non-flat system has one state q and two transitions q −a→ q and q −b→ q. For an arbitrary system S, flattening [BFLS05] consists in finding a flat system S', equivalent to S with respect to reachability, and in computing on S' instead of S. We adapt the definition of [BFLS05] to functional transition systems, without an explicit finite control graph for now (but see Definition 5.15).

Definition 5.9 (Flattening). A flattening of a functional transition system S2 = (S2, F2→) is a pair (S1, ϕ), where S1 = (S1, F1→) is a flat functional transition system and ϕ : S1 → S2 is a morphism of transition systems. That is, ϕ is a pair of two maps, both written ϕ, from S1 to S2 and from F1 to F2, such that for all (s, s') ∈ S1², for all f1 ∈ F1 such that s ∈ dom f1 and s' = f1(s), ϕ(s) ∈ dom ϕ(f1) and ϕ(s') = ϕ(f1)(ϕ(s)) (see Figure 5).
Let us recall that a pair (S, s0) of a transition system and a state is Post*-flattable iff there is a flattening (S1, ϕ) of S and a state s1 of S1 such that ϕ(s1) = s0 and Post*_S(s0) = ϕ(Post*_{S1}(s1)). Recall that we equate ordered functional transition systems (S, F→, ≤) with their underlying functional transition system (S, F→). The notion of flattening then extends to ordered functional transition systems. However, it is then natural to consider monotonic flattenings, where in addition ϕ : S1 → S2 is monotonic. In the case of complete transition systems, the natural extension requires ϕ to be continuous:

Definition 5.10 (Continuous Flattening). A continuous flattening of a complete functional transition system S2 is a flattening (S1, ϕ) where S1 is a complete functional transition system and ϕ : S1 → S2 is continuous.
Definition 5.11 (Clover-Flattable). Let S be a complete transition system, and s0 be a state. We say that (S, s0) is clover-flattable iff there is a continuous flattening (S1, ϕ) of S, and a state s1 of S1 such that: (1) ϕ(s1) = s0 (ϕ maps the initial state to the initial state); (2) cl(Cover_S(s0)) = cl(ϕ(cl(Cover_{S1}(s1)))) (ϕ preserves the closures of the covers of the initial states).
On complete WSTS, our object of study, the second condition can be simplified to ↓Clover_S(s0) = ↓ϕ(Clover_{S1}(s1)) (using Proposition 3.7 and the fact that ϕ, being continuous, is monotonic), or equivalently to Clover_S(s0) = Max ϕ(Clover_{S1}(s1)). Recall also that, when S is the completion X̂ of a WSTS X = (X, F→, ≤), the clover of s0 ∈ X is a finite description of the cover of s0 in X (Proposition 3.9), and this is what ϕ should preserve, up to taking downward closures.
There are apparently weaker and stronger forms of clover-flattability, which we now introduce. Let us start with the weak form, where equality in the second condition is replaced by inclusion:

Definition 5.12 (Weakly Clover-Flattable). Let S be a complete transition system, and s0 be a state. We say that (S, s0) is weakly clover-flattable iff there is a continuous flattening (S1, ϕ) of S, and a state s1 of S1 such that: (1) ϕ(s1) ≤ s0; (2) cl(Cover_S(s0)) ⊆ cl(ϕ(cl(Cover_{S1}(s1)))).
The strong form of clover-flattability uses an explicit finite control graph, as in [BFLS05]. Recall that an rlre (restricted linear regular expression) over the alphabet Σ is a regular expression of the form w1* w2* ... wk*, where w1, w2, ..., wk ∈ Σ*. The language of an rlre is clearly bounded, and the language Pfx(w1* w2* ... wk*) of prefixes of all words of the latter is then again bounded [GS64].
Recall that a deterministic finite automaton (DFA) is a tuple A = (Σ, Q, δ, q0, Fin), where Σ is a finite alphabet, Q is a finite set of so-called control states, q0 ∈ Q is the initial state, Fin ⊆ Q is the set of final states, and δ : Q × Σ → Q is a partial function called the transition function.
One can convert any rlre to a DFA recognizing the same language. For example, Figure 6 displays a DFA for a*(bcc)*(bcaa)* over Σ = {a, b, c}, where final states are circled. The language Pfx(a*(bcc)*(bcaa)*) is then recognized by the same DFA, except that now all states are final. This is general: Pfx(w1* w2* ... wk*) is always recognizable by a DFA whose states are all final. Let us therefore call rl-automaton any such DFA. Since all states are final, we shall omit the Fin component, and say that A = (Σ, Q, δ, q0) itself is an rl-automaton.

Figure 6: An rl-automaton
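A membership test for Pfx(w1* w2* ... wk*) can also be obtained without building the DFA explicitly, by simulating sets of positions (word index, offset), where offset 0 means "at a boundary, free to move on to any later word". A sketch (the name `in_pfx` is ours; the w_i are assumed non-empty):

```python
def in_pfx(word, ws):
    """Is `word` a prefix of some word of ws[0]* ws[1]* ... ws[-1]* ?"""
    def close(sts):
        # from a boundary of ws[i], one may skip ahead to any later word
        out = set(sts)
        for (i, j) in sts:
            if j == 0:
                out.update((i2, 0) for i2 in range(i, len(ws)))
        return out

    states = close({(0, 0)})          # (i, j): j letters of ws[i] matched
    for c in word:
        states = close({(i, (j + 1) % len(ws[i]))
                        for (i, j) in states if ws[i][j] == c})
        if not states:
            return False
    return True
```

On the example of Figure 6 (ws = ["a", "bcc", "bcaa"]), "abcc" and "abccbcaa" are accepted, while "abcab" is not.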
Let us define the synchronized product.
Definition 5.13 (Synchronized Product).Let S = (S, F →, ≤) be a complete functional transition system, and A = (F, Q, δ, q 0 ) be an rl-automaton on the same alphabet F .
Define the synchronized product S × A as the ordered functional transition system on the state space S × Q whose transition functions are the partial maps f_δ : (s, q) ↦ (f(s), δ(q, f)), for each f ∈ F such that δ(q, f) is defined for some q ∈ Q. Let also (s, q) ≤ (s', q') iff s ≤ s' and q = q'.
Let π1 be the morphism of transition systems defined as the first projection on states; i.e., π1(s, q) = s for all (s, q) ∈ S × Q, and π1(f_δ) = f for all f ∈ F.

Lemma 5.14 (Synchronized Product). Let S = (S, F→, ≤) be a complete functional transition system, and A = (F, Q, δ, q0) be an rl-automaton on the same alphabet F.
Then (S × A, π 1 ) is a continuous flattening of S.
Proof. First, the technical condition that δ(q, f) should be defined for some q ∈ Q only excludes maps f_δ with an empty domain, and is therefore benign. This technical condition is needed to define π1(f_δ) as f: formally, we define π1(f_δ) by letting π1(f_δ)(s) be the first component of the pair f_δ(s, q), where q is some arbitrary state such that δ(q, f) is defined, and by letting π1(f_δ)(s) be undefined otherwise. Such a q exists by the technical condition, and this yields f(s) when s ∈ dom f, and is undefined otherwise. So indeed π1(f_δ) = f.
(S × Q, ≤) is easily seen to be a dcpo. In fact, it is the disjoint sum of finitely many copies of S, and as such is a continuous dcpo. It is also well-ordered, as a finite disjoint sum of well-ordered spaces. So S × Q is a continuous dcwo. Then we check that each f_δ is partial continuous. Its domain is ⋃_{q ∈ Q, δ(q,f) defined} dom f × {q}, which is open. Moreover, f_δ is continuous: for any directed family ((s_i, q_i))_{i ∈ I} in dom f_δ, all the q_i's must be equal, say to q ∈ Q, and (s_i)_{i ∈ I} must be directed in dom f; so f_δ(lub_i(s_i, q)) = (f(lub_i s_i), δ(q, f)) = lub_i f_δ(s_i, q). Finally, the language of fireable transitions in S × A is contained in the language of A, which is of the form Pfx(w1* w2* ... wk*), hence bounded. So S × A is flat.
Strong flattenings are special: the decision to take the next action f ∈ F from state (s, q) is dictated by the current control state q only, while ordinary flattenings allow more complex decisions to be made.
We say that a transition system is strongly clover-flattable iff we can require the flat system S1 to be a synchronized product, and the continuous morphism of transition systems ϕ to be the first projection π1:

Definition 5.15 (Strongly Clover-Flattable). Let S = (S, F→) be a complete functional transition system. We say that (S, s0) is strongly clover-flattable iff there is an rl-automaton A, say with initial state q0, such that cl(Cover_S(s0)) = cl(π1(cl(Cover_{S×A}(s0, q0)))).
The following is then obvious: every strongly clover-flattable pair (S, s0) is clover-flattable, since (S × A, π1) is a continuous flattening of S by Lemma 5.14.
It is also easy to show that "weakly clover-flattable" also implies "clover-flattable".However, we shall show something more general in Theorem 5.21 below.
We show in Proposition 5.18 that Clover S (s 0 ) can only terminate when (S, s 0 ) is strongly clover-flattable.We shall require the following lemma.For notational simplicity, we equate words g 1 g 2 with compositions g 2 • g 1 .
Lemma 5.17. Let S = (S, F→) be a complete functional transition system, and s0 be a state such that g1∞ g2∞ ... gn∞(s0) is defined and lies in some open subset U of S, for some g1, g2, ..., gn ∈ F. Then there are natural numbers k1, k2, ..., kn such that g1^k1 g2^k2 ... gn^kn(s0) is defined, and in U.
Proof. By induction on n. The case n = 0 is clear. Otherwise, let s = g1∞ g2∞ ... g_{n-1}∞(s0), so that gn∞(s) is defined and in U. If s < gn(s), then gn∞(s) = lub{gn^k(s) | k ∈ N}; since the latter is in the Scott-open set U, gn^{kn}(s) is in U for some kn ∈ N. If s ≮ gn(s), then gn∞(s) = gn(s), and we take kn = 1. Let V be the open set (gn^{kn})^{-1}(U); in each case, s = g1∞ g2∞ ... g_{n-1}∞(s0) is in V. We apply the induction hypothesis and obtain the existence of k1, k2, ..., k_{n-1} such that g1^k1 g2^k2 ... g_{n-1}^{k_{n-1}}(s0) is defined and in V. Hence g1^k1 g2^k2 ... gn^kn(s0) is defined, and in U, by definition of V.
The inclusion from right to left is obvious: for any state (s, q) that is reachable from ↓(s0, q0) in S × A, s is reachable from ↓s0 in S. So π1(Post*_{S×A}(↓(s0, q0))) ⊆ Post*_S(↓s0); taking downward closures yields π1(Cover_{S×A}(s0, q0)) ⊆ Cover_S(s0), and taking closures, cl(π1(cl(Cover_{S×A}(s0, q0)))) ⊆ cl(Cover_S(s0)).

Let us proceed with ϕ(w2). Fix an arbitrary element s of A_{n1}, and apply Lemma 5.19 with g = ϕ(w2). Proceeding as above, we observe that there is an n2 ≥ n1 such that every element of the form ϕ(w2)^{k2}(s), k2 ∈ N, is below some element of A_{n2}. Since s is arbitrary in A_{n1}, we conclude that every element of the form ϕ(w2)^{k2}(ϕ(w1)^{k1}(s0)), k1, k2 ∈ N, is below some element of A_{n2}.
We now induct on i, 1 ≤ i ≤ m, to show similarly that there is an n_i ∈ N such that every element of the form ϕ(w_i)^{k_i}(ϕ(w_{i-1})^{k_{i-1}}(... ϕ(w1)^{k1}(s0) ...)), where k1, ..., k_i ∈ N, is below some element of A_{n_i}.
In particular, for i = m, writing n for n_m: (*) there is an n ∈ N such that every element of the form ϕ(w_m)^{k_m}(ϕ(w_{m-1})^{k_{m-1}}(... ϕ(w1)^{k1}(s0) ...)), where k1, ..., k_m ∈ N, is below some element of A_n. We claim that Clover_S(s0) must stop after step n.
Let U be the (open) complement of the closed set ↓A_n, and assume that U intersects ↓Clover_S(s_0). Then U must also intersect ↓ϕ(Clover_{S_1}(s_1)), hence ϕ(Clover_{S_1}(s_1)). (Remember that open subsets are upward-closed.) So ϕ^{-1}(U) intersects Clover_{S_1}(s_1), whence ϕ^{-1}(U) intersects ↓Clover_{S_1}(s_1); note that ϕ^{-1}(U) is upward-closed, using the fact that U is and that ϕ is monotonic. By Proposition 3.7, ϕ^{-1}(U) intersects cl(Cover_{S_1}(s_1)). Since ϕ is continuous, ϕ^{-1}(U) is open. We now use the fact that an open set intersects the closure of a set iff it intersects that set. So ϕ^{-1}(U) must intersect Cover_{S_1}(s_1). So U intersects ϕ(Cover_{S_1}(s_1)), say at a. In particular, there is an a_1 ∈ Cover_{S_1}(s_1) such that a ≤ ϕ(a_1), and by (∗) ϕ(a_1) is below some element of A_n, so a ∈ ↓A_n. But this contradicts the fact that a ∈ U. So the complement U of ↓A_n does not intersect ↓Clover_S(s_0), i.e., ↓Clover_S(s_0) ⊆ ↓A_n.
By Proposition 5.3, the converse inclusion holds. We conclude that the procedure Clover_S stops after the nth turn of the loop, because of the fixpoint test at line 2.
Next, (4) is equivalent to (5), by Theorem 5.21. Note in particular that X is a complete WSTS by Theorem 4.4, and is ∞-effective by assumption.

Application: Well Structured Counter Systems
We now demonstrate how the fairly large class of counter systems fits within our theory. We show that counter systems composed of monotonic affine functions with upward-closed domains of definition are complete (strongly monotonic) WSTS. This result is obtained by showing that every monotonic affine function f is continuous and that its lub-acceleration f^∞ is computable [CFS11]. Moreover, we prove that it is possible to decide whether a general counter system (given by a finite set of Presburger relations) is a monotonic affine counter system, but that one cannot decide whether it is a WSTS.

Definition 6.1. A relational counter system (with n counters), for short an R-counter system, is a tuple C = (Q, R, →) where Q is a finite set of control states, R = {r_1, r_2, ..., r_k} is a finite set of Presburger relations r_i ⊆ N^n × N^n, and → ⊆ Q × R × Q.
We will consider a special case of Presburger relations: those that encode the graphs of affine functions. A (partial) function f : N^n → N^n is non-negative affine, for short affine, if there exist a matrix A ∈ N^{n×n} with non-negative coefficients and a vector b ∈ Z^n such that f(x) = Ax + b for all x ∈ dom f. When necessary, we will extend affine maps f : N^n → N^n by continuity to f : N_ω^n → N_ω^n, by f(lub_{i∈N}(x_i)) = lub_{i∈N}(f(x_i)) for every countable chain (x_i)_{i∈N} in N_ω^n. That is, we just write f instead of Sf.

Definition 6.2. An affine counter system (with n counters), a.k.a. an ACS, is an R-counter system C = (Q, R, →) where all relations r_i are (partial) affine functions.
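To make the continuous extension concrete, here is a small sketch (our encoding, not from the paper; `apply_affine` is an invented name) of evaluating f(x) = Ax + b over N_ω^n, with ω represented as `float("inf")`. Continuity forces the convention 0·ω = 0, which is why zero coefficients are skipped rather than multiplied:

```python
# Evaluating a non-negative affine map f(x) = A·x + b on the completion
# N_omega^n. ω is float("inf"); the convention 0·ω = 0 is forced by
# continuity, so zero coefficients must not absorb an ω input.

OMEGA = float("inf")

def apply_affine(A, b, x):
    """f(x) = A·x + b componentwise over N_omega, with 0·ω = 0."""
    n = len(b)
    out = []
    for i in range(n):
        acc = b[i]
        for j in range(n):
            if A[i][j] != 0:          # skip the term entirely: 0·ω must be 0
                acc += A[i][j] * x[j]
        out.append(acc)
    return out

# Column 1 of A is zero, so an ω in x[1] is erased, while an ω in x[0]
# is propagated through the positive coefficients.
A = [[1, 0],
     [2, 0]]
b = [1, 0]
print(apply_affine(A, b, [3, OMEGA]))    # [4, 6]: the ω in x[1] is dropped
print(apply_affine(A, b, [OMEGA, 0]))    # [inf, inf]: ω propagates
```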
The domains of the maps f in an affine counter system are Presburger-definable. A reset/transfer Petri net is an ACS in which every line or column of every matrix contains at most one non-zero coefficient, which is equal to 1, and all domains are upward-closed sets. A Petri net is an ACS in which all affine maps are translations with upward-closed domains.

Theorem 6.3. One can decide whether an effective relational counter system is an ACS.
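The matrix condition above is directly checkable. The sketch below is ours: it reads "every line or column" as a constraint on each row and each column, and it ignores the Presburger domain part of the definition.

```python
# Check the matrix shape condition for reset/transfer Petri nets: every
# row and every column of A has at most one non-zero coefficient, and
# that coefficient is 1. (Our reading of the condition; the upward-closed
# domain requirement is not modelled here.)

def is_reset_transfer_matrix(A):
    n = len(A)
    for i in range(n):
        row_nonzero = [A[i][j] for j in range(n) if A[i][j] != 0]
        col_nonzero = [A[j][i] for j in range(n) if A[j][i] != 0]
        if len(row_nonzero) > 1 or len(col_nonzero) > 1:
            return False
        if any(c != 1 for c in row_nonzero + col_nonzero):
            return False
    return True

identity = [[1, 0], [0, 1]]   # a plain Petri net transition (translation)
reset    = [[1, 0], [0, 0]]   # counter 1 is reset to 0
doubling = [[2, 0], [0, 1]]   # coefficient 2 is not allowed

print(is_reset_transfer_matrix(identity))  # True
print(is_reset_transfer_matrix(reset))     # True
print(is_reset_transfer_matrix(doubling))  # False
```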
Proof. The statement that a relation is the graph of a function is expressible as a Presburger formula, hence one can decide whether each r_i is the graph of a function. One can also decide whether the graph G_f of a function f is that of a monotonic function, because monotonicity of a Presburger-definable function can be expressed as a Presburger formula. Finally, one can also decide whether a Presburger formula represents an affine function f(x) = Ax + b with A ∈ N^{n×n} and b ∈ Z^n, using results by Demri et al. [DFGvD06].
For counter systems (which include Minsky machines), monotonicity is undecidable. Clearly, a counter system S is well-structured iff S is monotonic: so there is no algorithm to decide whether a relational counter system is a WSTS. However, an ACS is strongly monotonic iff each map f is partial monotonic; this is equivalent to requiring that dom f is upward-closed, since all matrices A have non-negative coefficients. This is easily cast as a Presburger formula, and is therefore decidable.

Proposition 6.4. There is an algorithm to decide whether an ACS is a strongly monotonic WSTS.
Proof. Strong monotonicity of an ACS C means that every function of C is monotonic, and this can be expressed by a Presburger formula stating that all the (Presburger-definable) domains of definition are upward-closed (the matrices are known to have non-negative coefficients).
We have recalled that the transition functions of Petri nets (f(x) = x + b, with b ∈ Z^n and dom f upward-closed) can be lub-accelerated effectively. This result was generalized to broadcast protocols (equivalent to transfer Petri nets) by Emerson and Namjoshi [EN98], and to another class of monotonic affine functions f(x) = Ax + b such that A ∈ N^{n×n}, b ∈ N^n (note that b is not an arbitrary vector of Z^n here) and dom f is upward-closed [FMP04].
[CFS11] recently extended this result to all monotonic affine functions: for every f(x) = Ax + b with A ∈ N^{n×n}, b ∈ Z^n and dom f upward-closed, the function f^∞ is recursive.
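For the Petri net case (A = I), the lub-acceleration is easy to make explicit. The following sketch (our encoding; the names `f` and `f_inf` are ours, and [CFS11] handles the general affine case, not treated here) computes f^∞ for a translation f(x) = x + b with upward-closed domain ↑u:

```python
# lub-acceleration of the simplest monotonic affine maps: translations
# f(x) = x + b, b ∈ Z^n, with upward-closed domain ↑u, as in Petri nets.

OMEGA = float("inf")

def f(x, b, u):
    """f(x) = x + b, defined on the upward-closed domain ↑u (None if undefined)."""
    if any(xi < ui for xi, ui in zip(x, u)):
        return None
    return [xi + bi for xi, bi in zip(x, b)]

def f_inf(x, b, u):
    """f^∞(x): if x < f(x), the lub of the chain x, f(x), f²(x), ... puts ω on
    every strictly increasing coordinate; otherwise f^∞(x) = f(x)."""
    y = f(x, b, u)
    if y is None:
        return None
    if all(yi >= xi for xi, yi in zip(x, y)) and y != x:
        # b ≥ 0 componentwise, so every iterate stays in ↑u and the chain
        # is increasing; its lub has ω exactly where b is positive.
        return [OMEGA if bi > 0 else xi for xi, bi in zip(x, b)]
    return y

print(f_inf([2, 1], [1, 0], [0, 1]))   # [inf, 1]: first counter accelerates to ω
print(f_inf([2, 1], [1, -1], [0, 1]))  # [3, 0]: x ≮ f(x), so f^∞(x) = f(x)
```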
We deduce the following strong relationship between well-structured ACS and complete well-structured ACS.
Theorem 6.5.The completion of an ACS S is an ∞-effective complete WSTS iff S is a strongly monotonic WSTS.
Proof. Strong monotonicity reduces to partial monotonicity of each map f, as discussed above. Well-structured ACS are clearly effective, since Post(s) = {t | ∃f ∈ F · f(s) = t} is Presburger-definable. Note also that monotonic affine functions are continuous, and that N_ω^n is a continuous dcwo. Finally, for every Presburger monotonic affine function f, the function f^∞ is recursive, so the considered ACS is ∞-effective.

Corollary 6.6. One can decide whether the completion of an ACS is an ∞-effective complete WSTS.

Conclusion and Perspectives
We have provided a framework of complete WSTS, and of completions of WSTS, on which forward reachability analyses can be conducted, using natural finite representations for downward-closed sets. The central element of this theory is the clover, i.e., the set of maximal elements of the closure of the cover. We have shown that, for complete WSTS, the clover is finite and describes the closure of the cover exactly. When the original WSTS is not complete, we have shown that its completion, as defined in [FG09], is again a WSTS iff the original WSTS is an ω²-WSTS. This delineates a new, robust class of WSTS: all known WSTS are ω²-WSTS. The property of being an ω²-WSTS is also important to ensure progress in Karp-Miller-like procedures.
We have also defined a simple procedure, Clover_S, for computing the clover of an ∞-effective complete WSTS S. This captures the essence of generalized forms of the Karp-Miller procedure, while terminating in more cases. We have shown that Clover_S terminates iff the WSTS is clover-flattable, i.e., iff it is some form of projection of a flat system with the same clover. We have also shown that several variants of the notion of clover-flattability are in fact equivalent. We believe that this characterization is an important, and non-trivial, result.
In the future, we shall explore efficient strategies for choosing sequences g ∈ F* to lub-accelerate in the Clover_S procedure. We will also analyze whether Clover_S terminates on models such as BVASS [VG05], reconfigurable nets, timed Petri nets [ADMN04a], post-self-modifying Petri nets [Val78] and strongly monotonic affine well-structured nets [FMP04], i.e., whether they are clover-flattable.
One potential use of the clover is in deciding coverability. But the Clover_S procedure may fail to terminate. This is in contrast to the Expand, Enlarge and Check forward algorithm of [GRvB07], which always terminates and hence decides coverability. One may want to combine the best of both worlds: the lub-accelerations of Clover_S could profitably be used to improve the efficiency of the Expand, Enlarge and Check algorithm. This remains to be explored.
Finally, recall that computing the finite clover is a first step [EN98] in the direction of solving liveness properties (and not only safety properties, which reduce to coverability). We plan to clarify the construction of a cloverability graph, which would be the basis for liveness model checking.

Figure 1: The clover and the cover, in a complete space.

…and S is a continuous dcpo. Now use Lemma 3.5 on the closed set Lub(Cover_S(s_0)).

Figure 4: Ideals in Rado's Structure.

The completion of the receive operation recv_a is given by:
  S(recv_a)(a ? P) = P
  S(recv_a)(b ? P) = S(recv_a)(P)   (b ≠ a)
  S(recv_a)(A* P) = A* P            if a ∈ A
  S(recv_a)(A* P) = S(recv_a)(P)    otherwise

If a ≮ g(a), then g^∞(a) = g(a) is in Post*_S(a), and since a ∈ A_n and A_n ⊆ cl(Cover_S(s_0)) by induction hypothesis, g(a) is in Post*_S(cl(Cover_S(s_0))). The latter is contained in cl(Post*_S(Cover_S(s_0))) by Lemma 5.2, i.e., in cl(Cover_S(s_0)) by monotonicity. If a < g(a), then g^∞(a) = lub{g^n(a) | n ∈ N} is the least upper bound of a directed chain of elements of Post*_S(a). So g^∞(a) ∈ Lub(Post*_S(a)) ⊆ cl(Post*_S(a)). Since a ∈ A_n and A_n ⊆ cl(Cover_S(s_0)) by induction hypothesis, g^∞(a) is in cl(Post*_S(cl(Cover_S(s_0)))). The latter is contained in cl(cl(Post*_S(Cover_S(s_0)))) = cl(Post*_S(Cover_S(s_0))) by Lemma 5.2, i.e., in cl(Cover_S(s_0)) by monotonicity.