Stochastic Parity Games on Lossy Channel Systems

We give an algorithm for solving stochastic parity games with almost-sure winning conditions on lossy channel systems, for the case where the players are restricted to finite-memory strategies. First, we describe a general framework, in which we consider the class of 2½-player games with almost-sure parity winning conditions on possibly infinite game graphs, assuming that the game contains a finite attractor. An attractor is a set of states (not necessarily absorbing) that is almost surely re-visited regardless of the players' decisions. We present a scheme that characterizes the set of winning states for each player. Then, we instantiate this scheme to obtain an algorithm for stochastic games on lossy channel systems.


INTRODUCTION
Background. 2-player games can be used to model the interaction of a controller (Player 0) who makes choices in a reactive system, and a malicious adversary (Player 1) who represents an attacker. To model randomness in the system (e.g., unreliability; randomized algorithms), a third player 'random' is defined who makes choices according to a predefined probability distribution. The resulting stochastic game is called a 2½-player game in the terminology of [CJH03]. The choices of the players induce a run of the system, and the winning conditions of the game are expressed in terms of predicates on runs.
Most classic work on algorithms for stochastic games has focused on finite-state systems (e.g., [Sha53,Con92,dAHK98,CJH03]), but more recently several classes of infinite-state systems have been considered as well. Stochastic games on infinite-state probabilistic recursive systems (i.e., probabilistic pushdown automata with unbounded stacks) were studied in [EY05,EY08,EWY08]. A different (and incomparable) class of infinite-state systems are channel systems, which use unbounded communication buffers instead of unbounded recursion.
Channel Systems consist of nondeterministic finite-state machines that communicate by asynchronous message passing via unbounded FIFO communication channels. They are also known as communicating finite-state machines (CFSM) [BZ83]. Channel Systems are a very expressive model that can encode the behavior of Turing machines, by storing the content of an unbounded tape in a channel [BZ83]. Therefore, all nontrivial verification questions are undecidable for Channel Systems.
A Lossy Channel System (LCS) [AJ93,Fin94] consists of finite-state machines that communicate by asynchronous message passing via unbounded unreliable (i.e., lossy) FIFO communication channels, i.e., messages can spontaneously disappear from channels. The original motivation for LCS is to capture the behavior of communication protocols which are designed to operate correctly even if the communication medium is unreliable (i.e., if messages can be lost). Additionally (and quite unexpectedly at the time), the lossiness assumption makes safety/reachability and termination decidable [AJ93,Fin94], albeit of non-primitive recursive complexity [Sch02]. However, other important verification problems are still undecidable for LCS, e.g., recurrent reachability (i.e., Büchi properties), boundedness, and behavioural equivalences [AJ96,Sch01,May03].
A Probabilistic Lossy Channel System (PLCS) [BS03,AR03] is a probabilistic variant of LCS where, in each computation step, each message can be lost independently with a given probability. This addresses two limitations of LCS. First, from a modelling viewpoint, probabilistic losses are more realistic than the overly pessimistic setting of LCS where all messages can always be lost at any time. Second, in PLCS almost-sure recurrent reachability properties become decidable (unlike for LCS) [BS03,AR03]. Several algorithms for symbolic model checking of PLCS have been presented [ABRS05,Rab03]. The reason why certain questions are decidable for LCS/PLCS is that the message loss induces a quasi-order on the configurations which has the properties of a simulation. Similarly to Turing machines and CFSM, one can encode many classes of infinite-state probabilistic transition systems into a PLCS. Some examples are:
• Queuing systems where waiting customers in a queue drop out with a certain probability in every time interval. This is similar to the well-studied class of queuing systems with impatient customers which practice reneging, i.e., drop out of a queue after a given maximal waiting time; see [WLJ10], Section II.B. As in some works cited in [WLJ10], the maximal waiting time in our model is exponentially distributed. In basic PLCS, unlike in [WLJ10], this exponential distribution does not depend on the current number of waiting customers. However, an extension of PLCS with this feature would still be analyzable in our framework (except in the pathological case where a high number of waiting customers increases the customers' patience exponentially, because such a system would not necessarily have a so-called finite attractor; see below).
• Probabilistic resource trading games with probabilistically fluctuating prices. The given stores of resources are encoded by counters (i.e., channels), which exhibit a probabilistic decline (due to storage costs, decay, corrosion, obsolescence, etc.).
• Systems modelling operation cost/reward, which is stored in counters/channels, but probabilistically discounted/decaying over time.
• Systems which are periodically restarted (though not necessarily by a deterministic schedule), due to, e.g., energy depletion or maintenance work.
Due to this wide applicability of PLCS, we focus on this model in this paper. However, our main results are formulated in more general terms referring to infinite Markov chains with a finite attractor; see below.
Previous work. In [BBS07], a non-deterministic extension of PLCS was introduced where one player controls transitions in the control graph and message losses are fully probabilistic. This yields a Markov decision process (i.e., a 1½-player game) on the infinite graphs induced by PLCS. It was shown in [BBS07] that 1½-player games with almost-sure repeated reachability (Büchi) objectives are decidable and pure memoryless determined.
In [AHdA+08], 2½-player games on PLCS are considered, where the players control transitions in the control graph and message losses are probabilistic. Almost-sure Büchi objectives are decidable for this class, and pure memoryless strategies suffice for both players [AHdA+08]. Generalized Büchi objectives are also decidable, and finite-memory strategies suffice for the player, while memoryless strategies suffice for the opponent [BS13].
On the other hand, 1½-player games on PLCS with positive probability Büchi objectives, i.e., almost-sure co-Büchi objectives from the (here passive) opponent's point of view, can require infinite memory to win and are also undecidable [BBS07]. However, if the player is restricted to finite-memory strategies, 1½-player games with positive probability parity objectives (and even the more general Streett objectives) become decidable, and memoryless strategies suffice for the player [BBS07]. Note that the finite-memory case and the infinite-memory one are a priori incomparable problems, and neither subsumes the other; cf. Section 6.
Non-stochastic (2-player) parity games on infinite graphs were studied in [Zie98], where it is shown that such games are determined, and that both players possess winning memoryless strategies in their respective winning sets. Furthermore, a scheme for computing the winning sets and winning strategies is given. Stochastic games (2½-player games) with parity conditions on finite graphs are known to be memoryless determined and effectively solvable [dAH00,CJH03,CdAH06].
Our contribution. We give an algorithm to decide almost-sure parity games for probabilistic lossy channel systems in the case where the players are restricted to finite-memory strategies. We do that in two steps. First, we give our result in general terms (Section 4): We consider the class of 2½-player games with almost-sure parity winning conditions on possibly infinite game graphs, under the assumption that the game contains a finite attractor. An attractor is a set A of states such that, regardless of the strategies used by the players, the probability measure of the runs which visit A infinitely often is one.¹ Note that this means neither that A is absorbing, nor that every run must visit A. We present a general scheme characterizing the set of winning states for each player. The scheme is a generalization of the well-known scheme for non-stochastic games in [Zie98]. In fact, the constructions are equivalent in the case that no probabilistic states are present. We show correctness of the scheme for games where each player is restricted to a finite-memory strategy. The correctness proof here is more involved than in the non-stochastic case of [Zie98]; we rely on the existence of a finite attractor and the restriction of the players to finite-memory strategies. Furthermore, we show that if a player is winning against all finite-memory strategies of the other player, then he can win using a memoryless strategy.
In the second step (Section 5), we show that the scheme can be instantiated for lossy channel systems. The above two steps yield an algorithm to decide parity games in the case when the players are restricted to finite-memory strategies. If the players are allowed infinite memory, then the problem is undecidable already for 1½-player games with co-Büchi objectives (a special case of 2-color parity objectives) [BBS07]. Note that even if the players are restricted to finite-memory strategies, such a strategy (even a memoryless one) on an infinite game graph is still an infinite object. Thus, unlike for finite game graphs, one cannot solve a game by just guessing strategies and then checking if they are winning. Instead, we show how to effectively compute a finite, symbolic representation of the (possibly infinite) set of winning states for each player as a regular language (Section 5.2), and a finite description of winning strategies (Section 5.3).
¹ In the game community (e.g., [Zie98]) the word attractor is used to denote what we call a force set in Section 3. In the infinite-state systems community (e.g., [ABRS05,AHM07]), the word is used in the same way as we use it in this paper.

Notation.
Let O and N denote the sets of ordinal resp. natural numbers. With α, β, and γ we denote arbitrary ordinals, while with λ we denote limit ordinals. We use f : X → Y to denote that f is a total function from X to Y, and f : X ⇀ Y to denote that f is a partial function from X to Y. We write f(x) = ⊥ to denote that f is undefined on x, and define dom(f) := {x : f(x) ≠ ⊥}. We say that f is an extension of g if g(x) = f(x) whenever g(x) ≠ ⊥. For X′ ⊆ X, we use f|X′ to denote the restriction of f to X′. We will sometimes need to pick an arbitrary element from a set. To simplify the exposition, we let select(X) denote an arbitrary but fixed element of the nonempty set X.
A probability distribution on a countable set X is a function f : X → [0, 1] such that ∑ x∈X f(x) = 1. For a set X, we use X* and X ω to denote the sets of finite and infinite words over X, respectively. The empty word is denoted by ε.

Games.
A game (of rank n) is a tuple G = (S, S 0 , S 1 , S R , −→, P, Col) defined as follows. S is a set of states, partitioned into the pairwise disjoint sets of random states S R , states S 0 of Player 0, and states S 1 of Player 1. −→ ⊆ S × S is the transition relation. We write s−→s′ to denote that (s, s′) ∈ −→. We assume that for each s there is at least one and at most countably many s′ with s−→s′. The probability function P : S R × S → [0, 1] satisfies both ∀s ∈ S R . ∀s′ ∈ S. (P(s, s′) > 0 ⇐⇒ s−→s′) and ∀s ∈ S R . ∑ s′∈S P(s, s′) = 1. (The sum is well-defined since we assumed that the number of successors of any state is at most countable.) The coloring function Col : S → {0, . . ., n} assigns to each state s its color Col(s).
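For concreteness, the conditions on a game tuple can be checked mechanically on a finite instance. The following Python sketch is ours (illustrative names, not from the paper); it models a small finite game and validates the partition of the state space, the existence of successors, and the two conditions on P:

```python
from dataclasses import dataclass

# Illustrative finite stand-in for G = (S, S0, S1, SR, -->, P, Col).
@dataclass
class Game:
    s0: set    # states of Player 0
    s1: set    # states of Player 1
    sr: set    # random states
    succ: dict # transition relation: state -> set of successors
    prob: dict # P: (random state, successor) -> probability
    col: dict  # coloring: state -> {0, ..., n}

    def validate(self):
        states = self.s0 | self.s1 | self.sr
        # the three sets must be pairwise disjoint
        assert not (self.s0 & self.s1 or self.s0 & self.sr or self.s1 & self.sr)
        for s in states:
            # every state has at least one successor
            assert self.succ.get(s), f"state {s} has no successor"
        for s in self.sr:
            # P(s, s') > 0 iff s --> s', and the probabilities sum to 1
            assert all(self.prob[(s, t)] > 0 for t in self.succ[s])
            assert abs(sum(self.prob[(s, t)] for t in self.succ[s]) - 1.0) < 1e-9

g = Game(
    s0={"a"}, s1={"b"}, sr={"r"},
    succ={"a": {"r"}, "b": {"a"}, "r": {"a", "b"}},
    prob={("r", "a"): 0.5, ("r", "b"): 0.5},
    col={"a": 0, "b": 1, "r": 0},
)
g.validate()
```

In the paper the game graph is possibly infinite; the sketch only illustrates the finite-branching and probability-normalization conditions.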
A run ρ in G is an infinite sequence s 0 s 1 · · · of states s.t. s i −→s i+1 for all i ≥ 0; ρ(i) denotes s i . A path π is a finite sequence s 0 · · · s n of states s.t. s i −→s i+1 for all i : 0 ≤ i < n. We say that ρ (or π) visits s if s = s i for some i. For any Q ⊆ S, we use Π Q to denote the set of paths that end in some state in Q. Intuitively, the choices of the players and the resolution of randomness induce a run s 0 s 1 · · ·, starting in some initial state s 0 ∈ S; state s i+1 is chosen as a successor of s i , and this choice is made by Player 0 if s i ∈ S 0 , by Player 1 if s i ∈ S 1 , and it is chosen randomly according to the probability distribution P(s i , ·) if s i ∈ S R .
Strategies. For x ∈ {0, 1}, a strategy for Player x prescribes the next move, given the current prefix of the run. Formally, a strategy of Player x is a partial function f x : Π S x ⇀ S such that s n −→ f x (s 0 · · · s n ) whenever f x (s 0 · · · s n ) is defined. A run ρ = s 0 s 1 · · · is said to be consistent with a strategy f x of Player x if s i+1 = f x (s 0 · · · s i ) whenever s i ∈ S x and f x (s 0 · · · s i ) is defined, and induced by (s, f x , f 1−x ) if moreover s 0 = s and ρ is consistent with both f x and f 1−x . We use Runs(G, s, f x , f 1−x ) to denote the set of runs in G induced by (s, f x , f 1−x ). We say that f x is total if it is defined for every π ∈ Π S x .
A strategy f x of Player x is memoryless if the next state only depends on the current state and not on the previous history of the run, i.e., for any path s 0 · · · s n ∈ Π S x , f x (s 0 · · · s n ) = f x (s n ).
A finite-memory strategy updates a finite memory each time a transition is taken, and the next state depends only on the current state and memory. Formally, we define a memory structure for Player x as a quadruple M = (M, m 0 , τ, µ) satisfying the following properties. The nonempty set M is called the memory and m 0 ∈ M is the initial memory configuration. For a current memory configuration m and a current state s, the next state is given by τ : S x × M → S, where s−→τ(s, m). The next memory configuration is given by µ : S × M → M. We extend µ to paths by µ(ε, m) = m and µ(πs, m) = µ(s, µ(π, m)). The total strategy induced by M is given by f x (πs) := τ(s, µ(πs, m 0 )). A total strategy f x is said to have finite memory if there is a memory structure M = (M, m 0 , τ, µ) where M is finite and f x is induced by M. We say that a run ρ = s 0 s 1 · · · visits the configuration (s, m) if there is an i such that s i = s and µ(s 0 · · · s i , m 0 ) = m. We use F x all (G), F x finite (G), and F x ∅ (G) to denote the sets of all, finite-memory, and memoryless strategies, respectively, of Player x in G. Note that memoryless strategies and strategies in general can be partial, whereas for simplicity we only define total finite-memory strategies.
Probability Measures. We use the standard definition of probability measures for a set of runs [Bil86]. First, we define the measure for total strategies, and then we extend it to general (partial) strategies. Consider a game G = (S, S 0 , S 1 , S R , −→, P, Col), an initial state s, and total strategies f x and f 1−x of Players x and 1 − x. Let Ω s = sS ω denote the set of all infinite sequences of states starting from s. For a measurable set R ⊆ Ω s , we define P G,s, f x , f 1−x (R) to be the probability measure of R under the strategies f x , f 1−x . This measure is well-defined [Bil86]. For (partial) strategies f x and f 1−x of Players x and 1 − x, ∼ ∈ {<, ≤, =, ≥, >}, a real number c ∈ [0, 1], and any measurable set R ⊆ Ω s , we define P G,s, f x , f 1−x (R) ∼ c iff P G,s,g x ,g 1−x (R) ∼ c for all total strategies g x and g 1−x that are extensions of f x resp. f 1−x .
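A memory structure and its induced strategy can be sketched directly. The Python below is our own illustration (not the paper's notation); in particular, we assume the convention that the memory is updated on every state of the path, including the current one, before τ picks the next state:

```python
# Illustrative sketch of a memory structure M = (M, m0, tau, mu) and the
# total strategy it induces. All names are ours.

class MemoryStructure:
    def __init__(self, memory, m0, tau, mu):
        self.memory, self.m0, self.tau, self.mu = memory, m0, tau, mu

    def update(self, path):
        """mu extended to paths: mu(eps, m) = m, mu(pi s, m) = mu(s, mu(pi, m))."""
        m = self.m0
        for s in path:
            m = self.mu(s, m)
        return m

    def induced_strategy(self):
        """Induced strategy: f(pi) = tau(last state of pi, mu(pi, m0))."""
        return lambda path: self.tau(path[-1], self.update(path))

# A two-value memory remembering the parity of the number of states seen,
# making the strategy alternate between two successors "b" and "c".
ms = MemoryStructure(
    memory={0, 1}, m0=0,
    tau=lambda s, m: "b" if m == 0 else "c",
    mu=lambda s, m: 1 - m,
)
f = ms.induced_strategy()
```

With memory {0, 1} the induced strategy is finite-memory but not memoryless: its choice depends on the length of the history, not only on the current state.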
Winning Conditions. The winner of the game is determined by a predicate on infinite runs. We assume familiarity with the syntax and semantics of the temporal logic CTL* (see, e.g., [CGP99]). Formulas are interpreted on the structure (S, −→). We use ⟦ϕ⟧ s to denote the set of runs starting from s that satisfy the CTL* path-formula ϕ. This set is measurable [Var85], and we simply write P G,s, f x , f 1−x (ϕ) instead of P G,s, f x , f 1−x (⟦ϕ⟧ s ). We will consider games with parity winning conditions, whereby Player 1 wins if the largest color that occurs infinitely often in the infinite run is odd, and Player 0 wins if it is even. Thus, the winning condition for Player x can be expressed in CTL* as x-Parity := ⋁ i∈{0,...,n}∧(i mod 2)=x (✷✸[Col = i] ∧ ⋀ i<j≤n ✸✷¬[Col = j]), where [Col = i] denotes the set of states of color i.
Winning Sets. For a strategy f x of Player x, and a set F 1−x of strategies of Player 1 − x, we define W x ( f x , F 1−x )(G, ϕ ∼c ) := {s : ∀ f 1−x ∈ F 1−x . P G,s, f x , f 1−x (ϕ) ∼ c}. If there is a strategy f x such that s ∈ W x ( f x , F 1−x )(G, ϕ ∼c ), then we say that s is a winning state for Player x in G wrt. ϕ ∼c (and f x is winning at s), provided that Player 1 − x is restricted to strategies in F 1−x . Sometimes, when the parameters G, s, F 1−x , ϕ, and ∼ c are known, we will not mention them and may simply say that "s is a winning state" or that " f x is a winning strategy", etc. If s ∈ W x ( f x , F 1−x )(G, ϕ =1 ), then we say that Player x wins from s almost surely (a.s.). If s ∈ W x ( f x , F 1−x )(G, ϕ >0 ), then we say that Player x wins from s with positive probability (w.p.p.).
We also define V x ( f x , F 1−x )(G, ϕ) := {s : ∀ f 1−x ∈ F 1−x . ∀ρ ∈ Runs(G, s, f x , f 1−x ). ρ ∈ ⟦ϕ⟧ s }. If s ∈ V x ( f x , F 1−x )(G, ϕ), then we say that Player x surely wins from s. Notice that any strategy that is surely winning from a state s is also winning from s a.s., and any strategy that is winning a.s. is also winning w.p.p., i.e., V x ( f x , F 1−x )(G, ϕ) ⊆ W x ( f x , F 1−x )(G, ϕ =1 ) ⊆ W x ( f x , F 1−x )(G, ϕ >0 ).

Determinacy and Solvability.
A game is called determined wrt. an objective ϕ ∼c and two sets F 0 , F 1 of strategies of Player 0, resp. Player 1, if, for every state s, some Player x has a strategy f x ∈ F x that is winning against all strategies g ∈ F 1−x of the opponent, i.e., s ∈ W x ( f x , F 1−x )(G, cond x ), where cond 0 is ϕ ∼c and cond 1 is its complement. By solving a determined game, we mean giving an algorithm to compute symbolic representations of the sets of states which are winning for either player and a symbolic representation of the corresponding winning strategies.

Attractors.
A set A ⊆ S is said to be an attractor if, for each state s ∈ S and all strategies f 0 , f 1 of Player 0 resp. Player 1, it is the case that P G,s, f 0 , f 1 (✸A) = 1. In other words, regardless of where we start a run and regardless of the strategies used by the players, we will reach a state inside the attractor a.s. It is straightforward to see that this also implies that P G,s, f 0 , f 1 (✷✸A) = 1, i.e., the attractor will be visited infinitely often a.s.
Transition systems. Consider two strategies f x ∈ F x ∅ (G) and f 1−x ∈ F 1−x finite (G) of Player x resp. Player 1 − x, where f x is memoryless and f 1−x is finite-memory. Suppose that f 1−x is induced by a memory structure M = (M, m 0 , τ, µ). We define the transition system T induced by G, f x , f 1−x to be the pair (S × M, ↝), where (s 1 , m 1 ) ↝ (s 2 , m 2 ) iff m 2 = µ(s 2 , m 1 ) and one of the following three conditions is satisfied: (i) s 1 ∈ S x and s 2 = f x (s 1 ); (ii) s 1 ∈ S 1−x and s 2 = τ(s 1 , m 1 ); or (iii) s 1 ∈ S R and P(s 1 , s 2 ) > 0. Consider the directed acyclic graph (DAG) of maximal strongly connected components (SCCs) of the transition system T. An SCC is called a bottom SCC (BSCC) if no other SCC is reachable from it. Observe that the existence of BSCCs is not guaranteed in an infinite transition system.
However, if G contains a finite attractor A and M is finite, then T contains at least one BSCC, and in fact each BSCC contains at least one element (s A , m) with s A ∈ A. In particular, for any state s ∈ S, any run ρ ∈ Runs(G, s, f x , f 1−x ) will a.s. visit some configuration (s A , m) infinitely often, where s A ∈ A and (s A , m) ∈ B for some BSCC B.
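When the induced transition system happens to be finite, its BSCCs can be computed directly. The sketch below is our own finite-case illustration (Kosaraju's SCC algorithm; it assumes every node appears as a key of the successor map):

```python
def sccs(succ):
    """Kosaraju: order nodes by DFS finish time, then DFS the reversed graph.
    Returns (set of SCCs as frozensets, node -> its SCC)."""
    order, seen = [], set()
    for u in succ:
        if u in seen:
            continue
        stack = [(u, iter(succ.get(u, ())))]
        seen.add(u)
        while stack:
            v, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(succ.get(w, ()))))
                    break
            else:
                order.append(v)   # v is finished
                stack.pop()
    rev = {}
    for u, vs in succ.items():
        for v in vs:
            rev.setdefault(v, set()).add(u)
    comp, seen2 = {}, set()
    for u in reversed(order):
        if u in seen2:
            continue
        stack, cur = [u], []
        seen2.add(u)
        while stack:              # collect one SCC in the reversed graph
            v = stack.pop()
            cur.append(v)
            for w in rev.get(v, ()):
                if w not in seen2:
                    seen2.add(w)
                    stack.append(w)
        c = frozenset(cur)
        for v in cur:
            comp[v] = c
    return set(comp.values()), comp

def bottom_sccs(succ):
    """An SCC is bottom iff every outgoing edge stays inside it."""
    comps, comp = sccs(succ)
    return {c for c in comps
            if all(comp[v] == c for u in c for v in succ.get(u, ()))}
```

For example, in the graph a→b, b→a, b→c, c→c, the SCC {a, b} can escape to {c}, so only {c} is a BSCC.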

REACHABILITY
In this section we present some concepts related to checking reachability objectives in games.First, we define basic notions.Then we recall a standard scheme (described e.g. in [Zie98]) for checking reachability winning conditions, and state some of its properties that we use in the later sections.In this section, we do not use the finite attractor property, nor do we restrict the class of strategies in any way.Below, fix a game G = (S, S 0 , S 1 , S R , −→, P, Col).
Reachability Properties. Fix a state s ∈ S and sets of states Q, Q′ ⊆ S. Let Post G (s) := {s′ : s−→s′} denote the set of successors of s. Extend it to sets of states by Post G (Q) := ⋃ s∈Q Post G (s). Note that for any given state s ∈ S R , P(s, ·) is a probability distribution over Post G (s). Let Pre G (s) := {s′ : s′−→s} denote the set of predecessors of s, and extend it to sets of states as above. We define P̃re G (Q) := S − Pre G (S − Q); it denotes the set of states whose successors all belong to Q. We say that Q is sink-free if Post G (s) ∩ Q ≠ ∅ for all s ∈ Q, and closable if in addition Post G (s) ⊆ Q for all s ∈ Q ∩ S R . For x ∈ {0, 1}, we say that Q is an x-trap if it is closable and Post G (s) ⊆ Q for all s ∈ Q ∩ S x . Notice that S is both a 0-trap and a 1-trap, and in particular it is both sink-free and closable. The following lemma (Lemma 3.1) states that, starting from a state inside a set of states Q that is a trap for one player, the other player can surely keep the run inside Q.
Lemma 3.1 is proved as follows. Suppose that Q is a (1 − x)-trap. We define a memoryless strategy f x of Player x that surely keeps the run inside Q from any state of Q: for each s ∈ Q ∩ S x , let f x (s) := select(Post G (s) ∩ Q), which is well-defined since Q is sink-free. We can now show that any run that starts from a state s ∈ Q and that is consistent with f x will surely remain inside Q. Let f 1−x be any strategy of Player 1 − x, and let s 0 s 1 . . . ∈ Runs(G, s, f x , f 1−x ). We show, by induction on i, that s i ∈ Q for all i ≥ 0. The base case is clear since s 0 = s ∈ Q. For the induction step, we consider three cases depending on s i . If s i ∈ S x , then by the induction hypothesis s i ∈ Q, and hence by definition of f x we have s i+1 = f x (s i ) ∈ Q. If s i ∈ S 1−x , then s i ∈ Q and, since Q is a (1 − x)-trap, Post G (s i ) ⊆ Q, so s i+1 ∈ Q. If s i ∈ S R , then s i ∈ Q and, since Q is closable, Post G (s i ) ⊆ Q, so again s i+1 ∈ Q.
Scheme. Given a set Target ⊆ S, we give a scheme for computing a partitioning of S into two sets Force x (G, Target) and Avoid 1−x (G, Target) s.t. (1) Player x has a memoryless strategy on Force x (G, Target) to force the game to Target w.p.p., and (2) Player 1 − x has a memoryless strategy on Avoid 1−x (G, Target) to surely avoid Target. The scheme and its correctness are adapted from [Zie98] to the stochastic setting. First, we characterize the states that are winning for Player x, by defining an increasing sequence of sets of states, each of which consists of winning states for Player x, as follows:
R 0 := Target,
R α+1 := R α ∪ {s ∈ S x ∪ S R : Post G (s) ∩ R α ≠ ∅} ∪ {s ∈ S 1−x : Post G (s) ⊆ R α },
R λ := ⋃ α<λ R α for limit ordinals λ.
Clearly, the sequence is non-decreasing, i.e., R α ⊆ R β when α ≤ β, and since the sequence is bounded by S, it converges at some (possibly infinite) ordinal. We state this as a lemma (Lemma 3.2). Let γ be the smallest ordinal s.t. R γ = R γ+1 (it exists by the lemma above). We define Force x (G, Target) := R γ and Avoid 1−x (G, Target) := S − R γ . Lemma 3.3 states that S − R γ is an x-trap. Indeed, there are two cases to consider: if s ∈ (S − R γ ) ∩ (S x ∪ S R ), then Post G (s) ∩ R γ = ∅ (otherwise s ∈ R γ+1 = R γ ), so Post G (s) ⊆ S − R γ ; and if s ∈ (S − R γ ) ∩ S 1−x , then not all successors of s lie in R γ (again, otherwise s ∈ R γ+1 ), so Post G (s) ∩ (S − R γ ) ≠ ∅. Together, this means that S − R γ is sink-free, closable, and an x-trap, concluding the proof.
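On a finite game the transfinite sequence R α stabilizes after at most |S| steps, so the force set is a plain fixpoint computation. The sketch below is our own finite-case transcription (illustrative names):

```python
def force_set(s_x, s_r, s_opp, succ, target):
    """Fixpoint of the reachability scheme (finite case): R grows by adding
    Player-x or random states with SOME successor in R, and opponent states
    whose successors ALL lie in R (and that have at least one successor)."""
    r = set(target)
    changed = True
    while changed:
        changed = False
        for s in s_x | s_r | s_opp:
            if s in r:
                continue
            ts = succ.get(s, set())
            if s in s_opp:
                ok = bool(ts) and all(t in r for t in ts)
            else:
                ok = any(t in r for t in ts)
            if ok:
                r.add(s)
                changed = True
    return r

# Example: Player x owns "a", "r" is random, the opponent owns "b", "t", "u".
succ = {"a": {"t"}, "b": {"t", "u"}, "r": {"t", "u"}, "t": {"t"}, "u": {"u"}}
winning = force_set({"a"}, {"r"}, {"b", "t", "u"}, succ, {"t"})
```

Here "b" is excluded: the opponent can move to "u" and avoid the target surely, while "r" is included because the random choice reaches "t" with positive probability. The avoid set is simply the complement of the fixpoint.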
The following lemma shows correctness of the construction. In fact, it shows that a winning player also has a memoryless strategy which is winning against an arbitrary opponent. Lemma 3.4. There are memoryless strategies force x (G, Target) ∈ F x ∅ (G) for Player x and avoid 1−x (G, Target) ∈ F 1−x ∅ (G) for Player 1 − x such that force x (G, Target) is winning w.p.p. wrt. ✸Target from every state in Force x (G, Target), and avoid 1−x (G, Target) surely avoids Target from every state in Avoid 1−x (G, Target). To prove the first claim, we define a memoryless strategy f x of Player x that is winning from R γ . For any s ∈ (R γ − R 0 ) ∩ S x , let α be the least ordinal such that s ∈ R α+1 , and define f x (s) := select(Post G (s) ∩ R α ); this set is nonempty by the definition of R α+1 . We show that f x forces the run to the target set Target w.p.p. against an arbitrary opponent. Fix a strategy f 1−x for Player 1 − x. We show that P G,s, f x , f 1−x (✸Target) > 0 for every s ∈ R γ by transfinite induction. If s ∈ R 0 , then the claim follows trivially.
If s ∈ R α+1 , then either s ∈ R α , in which case the claim holds by the induction hypothesis, or s ∈ R α+1 − R α . In the latter case, there are three sub-cases. If s ∈ S x , then by definition of f x we know that f x (s) = s′ for some s′ ∈ R α ; by the induction hypothesis, P G,s′, f x , f 1−x (✸Target) > 0, and hence P G,s, f x , f 1−x (✸Target) > 0. If s ∈ S R , then P(s, s′) > 0 for some s′ ∈ R α . If s ∈ S 1−x , then Post G (s) ⊆ R α , so in particular the successor chosen by f 1−x belongs to R α . In both of the latter sub-cases, the proof follows as in the first one.
Finally, if s ∈ R λ for a limit ordinal λ, then s ∈ R α for some α < λ, and the claim follows by the induction hypothesis.
From Lemma 3.3 and Lemma 3.1 it follows that there is a memoryless strategy avoid 1−x (G, Target) for Player 1 − x that surely keeps the run inside Avoid 1−x (G, Target) = S − R γ , which is an x-trap. The second claim then follows from the fact that Target ∩ Avoid 1−x (G, Target) = ∅.

PARITY CONDITIONS
We describe a scheme for solving stochastic parity games with almost-sure winning conditions on infinite graphs, under the conditions that the game has a finite attractor (as defined in Section 2), and that the players are restricted to finite-memory strategies.
We define a sequence of functions C 0 , C 1 , . . .. Each C n takes a single argument, a game of rank at most n, and returns the set of states where Player x wins a.s., with x = n mod 2. In other words, the player that has the same parity as color n wins a.s. in C n (G). We provide a memoryless strategy that is winning a.s. for Player x in C n (G) against any finite-memory strategy of Player 1 − x, and a memoryless strategy that is winning w.p.p. for Player 1 − x in S − C n (G) against any finite-memory strategy of Player x.
The scheme proceeds by induction on n and is related to [Zie98]. In the rest of the section, we make use of the following notion of sub-game. For a set Q ⊆ S whose complement S − Q is closable, we define the sub-game G ⊖ Q as the game obtained by restricting G to the state set S − Q (with −→, P, and Col restricted accordingly); closability of S − Q guarantees that G ⊖ Q is a well-defined game. For the base case, let C 0 (G) := S for any game G of rank 0. Indeed, from any configuration Player 0 trivially wins a.s. (even surely), because there is only color 0.
Figure 1: The construction of the various sets involved in the inductive step. The grey area is Y α .
For n ≥ 1, let G be a game of rank n. The set C n (G) is defined with the help of two auxiliary transfinite sequences of sets of states {X α } α∈O and {Y α } α∈O . The construction ensures that X 0 ⊆ Y 0 ⊆ X 1 ⊆ Y 1 ⊆ · · ·, and that the states of X α and Y α are winning w.p.p. for Player 1 − x. We use strong induction, i.e., to construct X α we assume that X β has been constructed for all β < α; it then suffices to state one unified inductive step rather than distinguishing between the base case, successor ordinals, and non-zero limit ordinals. In the (unified) inductive step, we have already constructed X β and Y β for all β < α. Our construction of X α and Y α is in three steps (cf. Figure 1): (1) X α is the set of states where Player 1 − x can force the run to visit ⋃ β<α Y β w.p.p.
(2) Find a set of states where Player 1 − x wins w.p.p. in the sub-game G ⊖ X α .
(3) Take Y α to be the union of X α and the set constructed in step (2). We next show how to find the winning states in the sub-game G ⊖ X α in step (2). We first compute the set of states where Player x can force the play in G ⊖ X α to reach a state with color n w.p.p. We call this set Z α . The sub-game G ⊖ X α ⊖ Z α does not contain any states of color n. Therefore, this game can be completely solved, using the already constructed function C n−1 applied to G ⊖ X α ⊖ Z α . The resulting set is winning a.s. for Player 1 − x in G ⊖ X α ⊖ Z α , hence in particular winning w.p.p. We will prove that the states where Player 1 − x wins w.p.p. in G ⊖ X α ⊖ Z α are winning w.p.p. also in G. We thus take Y α to be the union of X α and C n−1 (G ⊖ X α ⊖ Z α ).
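For a finite game (which trivially contains a finite attractor, namely the whole state space), steps (1)-(3) can be transcribed directly. The sketch below uses our own names; owner[s] is 0, 1, or "r" for random states, and the transfinite iteration collapses to an ordinary loop:

```python
def force(owner, succ, x, target):
    """w.p.p. force set for Player x: states owned by x or random need SOME
    successor already won; opponent states need ALL successors already won."""
    r = set(target)
    changed = True
    while changed:
        changed = False
        for s, ts in succ.items():
            if s in r:
                continue
            if owner[s] in (x, "r"):
                ok = any(t in r for t in ts)
            else:
                ok = bool(ts) and all(t in r for t in ts)
            if ok:
                r.add(s)
                changed = True
    return r

def restrict(succ, keep):
    """Sub-game: keep only the given states and the edges among them."""
    return {s: {t for t in ts if t in keep} for s, ts in succ.items() if s in keep}

def c(n, owner, succ, col):
    """States where Player (n mod 2) wins a.s. in the given (finite) game."""
    states = set(succ)
    if n == 0:
        return states                # only color 0: Player 0 wins trivially
    x = n % 2
    y_union = set()                  # plays the role of U_{beta<alpha} Y_beta
    while True:
        xa = force(owner, succ, 1 - x, y_union)                   # step (1): X_alpha
        sub1 = restrict(succ, states - xa)                        # G (-) X_alpha
        za = force(owner, sub1, x, {s for s in sub1 if col[s] == n})  # Z_alpha
        sub2 = restrict(sub1, set(sub1) - za)                     # G (-) X_alpha (-) Z_alpha
        ya = xa | c(n - 1, owner, sub2, col)                      # steps (2)+(3): Y_alpha
        if ya == y_union:                                         # sequence converged
            return states - ya
        y_union = ya

# Two self-loops: "a" (Player 0, color 0) and "b" (Player 1, color 1).
owner = {"a": 0, "b": 1}
succ = {"a": {"a"}, "b": {"b"}}
col = {"a": 0, "b": 1}
```

On this toy game, c(1, ...) returns {"b"} (Player 1 sees color 1 forever) and c(2, ...) returns {"a"}, matching the intuition that each player wins a.s. exactly on their own self-loop.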
We define the sequences formally:
X α := Force 1−x (G, ⋃ β<α Y β ),
Z α := Force x (G ⊖ X α , {s : Col(s) = n}),
Y α := X α ∪ C n−1 (G ⊖ X α ⊖ Z α ).
Notice that the sub-games G ⊖ X α and G ⊖ X α ⊖ Z α are well-defined, since S − X α is closable in G (by Lemma 3.3), and the complement of Z α is closable in G ⊖ X α . By the definition, for α ≤ β we get Y α ⊆ X β ⊆ Y β . As in Lemma 3.2, we can prove that this sequence converges. Let γ be the least ordinal s.t. X γ+1 = X γ (which exists by the lemma above). We define C n (G) := S − X γ . The following lemma shows the correctness of the construction; recall that we assume that G is of rank n and that it contains a finite attractor. It states that there are memoryless strategies f x c ∈ F x ∅ (G) for Player x and f 1−x c ∈ F 1−x ∅ (G) for Player 1 − x such that the following two properties hold: f x c is winning a.s. wrt. x-Parity from every state in C n (G) against every finite-memory strategy of Player 1 − x, and f 1−x c is winning w.p.p. wrt. (1 − x)-Parity from every state in X γ = S − C n (G) against every finite-memory strategy of Player x.
Proof. Using induction on n, we define the strategies f x c and f 1−x c , and prove that the strategies are indeed winning.
Construction of f x c . For n ≥ 1, recall that γ is the least ordinal s.t. X γ+1 = X γ (as defined above). We define f x c (s) for s ∈ C n (G) = S − X γ depending on the membership of s in the sets partitioning S − X γ described below. By the definition of γ, we have that X γ+1 − X γ = ∅.
By the construction of Y α we have, for an arbitrary α, that C n−1 (G ⊖ X α ⊖ Z α ) ⊆ Y α , and by the construction of X α+1 , we have that Y α − X α ⊆ X α+1 − X α . By combining these facts, we obtain C n−1 (G′) ⊆ X γ+1 − X γ = ∅ for G′ := G ⊖ X γ ⊖ Z γ . Since G ⊖ X γ ⊖ Z γ does not contain any states of color n (or higher), it follows by the induction hypothesis that there is a memoryless strategy f 1 of Player x that is winning w.p.p. wrt. x-Parity from every state of G ⊖ X γ ⊖ Z γ . For such states we define f x c (s) := f 1 (s), while for s ∈ Z γ we let f x c follow the strategy of Lemma 3.4 forcing a visit to color n in G ⊖ X γ . (Later, we will prove that in fact f 1 is winning a.s.) Let f 1−x ∈ F 1−x finite (G) be a finite-memory strategy for Player 1 − x, and let s 0 s 1 . . . ∈ Runs(G, s, f x c , f 1−x ) with s ∈ S − X γ . First, we give a straightforward proof that any such run always stays inside S − X γ , i.e., s i ∈ S − X γ for all i ≥ 0. We use induction on i. The base case follows from s 0 = s ∈ S − X γ . For the induction step, we assume that s i ∈ S − X γ and show that s i+1 ∈ S − X γ . If s i ∈ S x , then f x c (s i ) is a successor inside the sub-game G ⊖ X γ ; if s i ∈ S 1−x or s i ∈ S R , then Post G (s i ) ⊆ S − X γ , since S − X γ is a (1 − x)-trap by Lemma 3.3. In all cases we have s i+1 ∈ Post G (s i ) ∩ (S − X γ ), and in particular s i+1 ∈ S − X γ .
We now prove the main claim. This is where we need the assumptions of a finite attractor and finite-memory strategies. Let us again consider a run ρ ∈ Runs(G, s, f x c , f 1−x ). We show that ρ is a.s.
winning for Player x with respect to x-Parity in G. Let f 1−x be induced by a memory structure M = (M, m 0 , τ, µ). Let T be the transition system induced by G, f x c , and f 1−x . As explained in Section 2, ρ will a.s. visit a configuration (s A , m) ∈ B for some BSCC B in T. Since there exists a finite attractor, each state that occurs in B will a.s. be visited infinitely often by ρ. Let n max be the maximal color occurring among the states of B. There are two possible cases: • n max = n. Since each state in G has color at most n, the maximal color seen infinitely often is n, which has parity x, so Player x will a.s. win.
• n max < n. This implies that {s B : (s B , m) ∈ B} is disjoint from Z γ (otherwise the force strategy would reach color n from B w.p.p., contradicting n max < n), and hence Player x uses the strategy f 1 , which wins in G ⊖ X γ ⊖ Z γ w.p.p. Then, either (i) n max mod 2 = x, in which case all states inside B are almost surely winning for Player x; or (ii) n max mod 2 = 1 − x, in which case all states inside B are almost surely losing for Player x. The result follows from the fact that case (ii) gives a contradiction, since all states in G ⊖ X γ ⊖ Z γ (including those in B) are winning for Player x w.p.p.

Construction of f 1−x c . We define a strategy f 1−x c such that, for all α, f 1−x c is winning w.p.p. wrt. (1 − x)-Parity from every state in Y α . By the induction hypothesis (on n), there is a memoryless strategy f 2 of Player 1 − x that is winning from every state of C n−1 (G ⊖ X α ⊖ Z α ), and by Lemma 3.4 there is a memoryless strategy f 1 for Player 1 − x forcing a visit to ⋃ β<α Y β w.p.p. from X α . The strategy f 1−x c follows f 1 on X α − ⋃ β<α Y β and f 2 on Y α − X α . Let f x ∈ F x finite (G) be a finite-memory strategy for Player x. We now use induction on α to show that P G,s, f 1−x c , f x ((1 − x)-Parity) > 0 for any state s ∈ Y α . There are three cases:
(1) If s ∈ ⋃ β<α Y β , then s ∈ Y β for some β < α, and the result follows by the induction hypothesis on β.
(2) If s ∈ X α − ⋃ β<α Y β , then Player 1 − x can use f 1 to force the game w.p.p. to ⋃ β<α Y β , from which she wins w.p.p. by case (1).
(3) If s ∈ Y α − X α , there are two sub-cases: either (i) there is a run from s consistent with f x and f 1−x c that reaches X α , or (ii) there is no such run. In sub-case (i), the run reaches X α w.p.p. Then, by cases (1) and (2), Player 1 − x wins w.p.p. In sub-case (ii), all runs stay forever outside X α , so the game is in effect played on G ⊖ X α . Notice then that any run from s that is consistent with f x and f 1−x c stays forever in G ⊖ X α ⊖ Z α . The reason is that (by Lemma 3.3) the state set of G ⊖ X α ⊖ Z α is an x-trap in G ⊖ X α . Since all runs remain inside G ⊖ X α ⊖ Z α , Player 1 − x wins w.p.p. (even a.s.) wrt. (1 − x)-Parity using f 2 .
The following theorem follows immediately from the previous lemmas.
Theorem 4.3. Stochastic parity games with almost-sure winning conditions on infinite graphs are memoryless determined, provided that the game contains a finite attractor and the players are restricted to finite-memory strategies.
Remark. We can compute both the a.s. winning set and the w.p.p. winning set for both players as follows. Let n max be the maximal color occurring in the game, and let x = n max mod 2 (a game of rank n max is in particular of rank at most n max + 1, so C n max +1 is also applicable). Then: • Player x wins a.s. in C n max (G) and w.p.p. in S − C n max +1 (G); • Player 1 − x wins a.s. in C n max +1 (G) and w.p.p. in S − C n max (G).

APPLICATION TO LOSSY CHANNEL SYSTEMS
5.1. Lossy channel systems. A lossy channel system (LCS) is a finite-state machine equipped with a finite number of unbounded FIFO channels (queues) [AJ93]. The system is lossy in the sense that, before and after a transition, an arbitrary number of messages may be lost from the channels. We consider stochastic game-LCS (SG-LCS): each individual message is lost independently with probability λ in every step, where λ > 0 is a parameter of the system. The set of control states is partitioned into states belonging to Player 0 and Player 1. The player who owns the current control state chooses an enabled outgoing transition.
Formally, an SG-LCS of rank n is a tuple L = (S, S 0 , S 1 , C, M, T, λ, Col) where S is a finite set of control states partitioned into control states S 0 , S 1 of Player 0 and Player 1; C is a finite set of channels; M is a finite set called the message alphabet; T is a set of transitions; 0 < λ < 1 is the loss rate; and Col : S → {0, . . ., n} is the coloring function. Each transition t ∈ T is of the form s −op→ s′, where s, s′ ∈ S and op has one of the following three forms: c!m (send message m ∈ M to channel c ∈ C), c?m (receive message m from channel c), or nop (do not modify the channels).
That is, each state in the game (also called a configuration) consists of a control state, a function that assigns a finite word over the message alphabet to each channel, and one of the symbols 0 or 1. States where the last symbol is 0 are random: S R = S × (M*) C × {0}. The other states belong to a player according to the control state: [S] x = S x × (M*) C × {1}. Transitions out of states of the form s = (s, x, 1) model transitions in T leaving control state s. On the other hand, transitions leaving configurations of the form s = (s, x, 0) model message losses. More precisely, transitions are defined as follows:
• If s −c!m→ s′ ∈ T, then (s, x, 1) −→ (s′, x[c → x(c) · m], 0).
• If s −c?m→ s′ ∈ T and x(c) = m · w, then (s, x, 1) −→ (s′, x[c → w], 0).
• If s −nop→ s′ ∈ T, then (s, x, 1) −→ (s′, x, 0).
Here the notation x[c → w] represents the channel assignment which is the same as x except that it maps c to the word w ∈ M*.
• To model message losses, we introduce the subword ordering on words: x ⊑ y iff x is a word obtained by removing zero or more messages from arbitrary positions of y. This is extended to channel contents x, x′ ∈ (M*) C by x ⊑ x′ iff x(c) ⊑ x′(c) for all channels c ∈ C, and to configurations s = (s, x, i), s′ = (s′, x′, i′) ∈ S by s ⊑ s′ iff s = s′, x ⊑ x′, and i = i′. For any s = (s, x, 0) and any x′ ⊑ x, there is a transition s −→ (s, x′, 1). The probability of random transitions is given by P((s, x, 0), (s, x′, 1)) = a · λ^(b−c) · (1 − λ)^c, where a is the number of ways to obtain x′ by losing messages in x, b is the total number of messages in all channels of x, and c is the total number of messages in all channels of x′ (see [ABRS05] for details). Every configuration of the form (s, x, 0) has at least one successor, namely (s, x, 1). If a configuration (s, x, 1) does not have successors according to the rules above, then we add a transition (s, x, 1) −→ (s, x, 0), to ensure that the induced game is sink-free.
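As a sanity check on the loss probability above, the distribution can be enumerated by brute force for a single short channel (the helper name is ours, not part of the construction): every subset of message positions may be lost, and aggregating by the resulting subword recovers the coefficient a as the number of ways of embedding x′ into x.

```python
from collections import Counter
from itertools import combinations

def loss_distribution(word, lam):
    """Distribution over channel contents after one loss step: each of the
    b = len(word) messages is lost independently with probability lam.
    Summing over position subsets gives P(word -> w) = a * lam**(b-c) * (1-lam)**c,
    where c = len(w) and a is the number of ways to obtain w from word."""
    dist = Counter()
    b = len(word)
    for c in range(b + 1):
        for kept in combinations(range(b), c):
            w = "".join(word[i] for i in kept)
            dist[w] += lam ** (b - c) * (1 - lam) ** c
    return dist

dist = loss_distribution("aba", 0.5)
# "a" arises in a = 2 ways (keep position 0 or position 2): 2 * 0.5**2 * 0.5**1
print(dist["a"])           # 0.25
print(sum(dist.values()))  # 1.0
```

The total mass is 1 because the losses are independent per message: summing a · λ^(b−c) · (1 − λ)^c over all subwords is just the binomial expansion of (λ + (1 − λ))^b.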
Finally, for a configuration s = (s, x, i), we define Col(s) := Col(s). Notice that the graph of the game is bipartite, in the sense that a configuration in S R has only transitions to configurations in [S] 0,1 , and vice versa.
We say that a set of channel contents X ⊆ (M*) C is regular if it is a finite union of sets of the form Y ⊆ (M*) C where Y(c) is a regular subset of M* for every c ∈ C (this coincides with the notion of recognisable subset of (M*) C; cf. [Ber79]). We extend the notion of regularity to a set of configurations P ⊆ S by saying that P is regular iff, for every control state s ∈ S and i ∈ {0, 1}, there exists a regular set of channel contents X s,i such that (s, x, i) ∈ P iff x ∈ X s,i . In the qualitative parity game problem for SG-LCS, we want to characterize the sets of configurations where Player x can force the x-Parity condition to hold a.s., for both players.

5.2. From scheme to algorithm. We transform the scheme of Section 4 into an algorithm for deciding the a.s. parity game problem for SG-LCS. Consider an SG-LCS L = (S, S 0 , S 1 , C, M, T, λ, Col) and the induced game G = (S, S 0 , S 1 , S R , −→, P, Col) of some rank n. Furthermore, assume that the players are restricted to finite-memory strategies. We show the following.
Theorem 5.1. The sets of winning configurations for Players 0 and 1 are effectively computable as regular sets of configurations. Furthermore, from each configuration, memoryless strategies suffice for the winning player.
In the statement of the theorem, "effectively" means that a finite description of the regular sets of winning configurations is computable. We give the proof in several steps. First, we show that the game induced by an SG-LCS contains a finite attractor (Lemma 5.2). Then, we show that the scheme in Section 3 for computing winning configurations wrt. reachability objectives is guaranteed to terminate (Lemma 5.4). Furthermore, we show that the scheme in Section 4 for computing winning configurations wrt. a.s. parity objectives is guaranteed to terminate (Lemma 5.7). Notice that Lemmas 5.4 and 5.7 imply that for SG-LCS our transfinite constructions stabilize below ω (the first infinite ordinal). Finally, we show that each step in the above two schemes can be performed using standard operations on regular languages (Lemmas 5.11 and 5.12).

Finite attractor. In [ABRS05] it was shown that any Markov chain induced by a probabilistic LCS contains a finite attractor. The proof carries over in a straightforward manner to the current setting. More precisely, the finite attractor is given by A = S × {ε} × {0, 1}, where ε denotes the channel assignment with ε(c) = ε for each c ∈ C. In other words, A is the set of configurations in which all channels are empty. The proof relies on the observation that if the number of messages in some channel is sufficiently large, then it is more likely that the number of messages decreases than that it increases in the next step. This gives the following.

Lemma 5.2. G contains a finite attractor.
Termination of Reachability Scheme. For a set of configurations Q ⊆ S, we define the upward closure of Q by Q↑ := {s′ : ∃s ∈ Q. s ⊑ s′}.

Lemma 5.3. For every infinite non-decreasing sequence Q 0 ⊆ Q 1 ⊆ · · · of upward-closed sets of configurations, there is a j ∈ N s.t. Q i = Q j for all i ≥ j.

Proof. By Higman's lemma [Hig52], ⊑ is a well-quasi-ordering, and a well-quasi-ordering admits no infinite strictly increasing sequence of upward-closed sets; hence there is a j ∈ N s.t. Q i = Q j for all i ≥ j.
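The subword ordering and the finite representation of upward-closed sets that underlie this argument can be sketched as follows (helper names are ours; single-channel contents are plain strings). By Higman's lemma, the antichain of minimal elements is always finite, so it is a usable finite data structure for an upward-closed set.

```python
def is_subword(x, y):
    """x ⊑ y: x is obtained from y by deleting zero or more symbols.
    The shared iterator makes `in` consume y left to right."""
    it = iter(y)
    return all(ch in it for ch in x)

def minimize(words):
    """Antichain of ⊑-minimal words: a finite representation of ↑words."""
    return {w for w in words
            if not any(v != w and is_subword(v, w) for v in words)}

def member_up(x, antichain):
    """x ∈ ↑antichain iff some minimal element embeds into x."""
    return any(is_subword(v, x) for v in antichain)

print(is_subword("aa", "aba"))            # True
print(minimize({"ab", "ba", "aab"}))      # {'ab', 'ba'}  ("aab" is above "ab")
print(member_up("bab", {"ab", "ba"}))     # True
```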
Now, we can show termination of the reachability scheme.
Lemma 5.4. There exists a finite j ∈ N such that R i = R j for all i ≥ j.
Proof (sketch). First, we show that the restriction of R i to random configurations is upward-closed for all i ∈ N. We use induction on i. Suppose that a random configuration s = (s, x, 0) belongs to R i. This means that s −→ (s, x′, 1) ∈ R i−1 for some x′ ⊑ x, and hence s′ −→ (s, x′, 1) for all s′ s.t. s ⊑ s′; thus every such s′ also belongs to R i. By Lemma 5.3, there is a j ∈ N s.t. the non-decreasing sequence of these upward-closed sets stabilizes. Since the graph of G is bipartite (as explained in Section 5.1), the full sequence R 0 ⊆ R 1 ⊆ · · · stabilizes as well.

Termination of Parity Scheme. We prove that the scheme from Section 4 terminates under the condition that the reachability sets are computable and that there exists a finite attractor. This suffices since, by the part above, the reachability scheme terminates, thus yielding computability of the reachability sets. Note, however, that we prove termination of the parity scheme with no further assumption on the reachability sets other than their computability.
We first prove two immediate auxiliary lemmas.
Lemma 5.5. Every non-empty closable set of configurations contains at least one element of the attractor A.

Proof. In any closable set, the players can choose strategies that force the game to remain in the set surely. The lemma now follows since an attractor is visited almost surely by any run, and this would be impossible if the attractor did not have any element in the set.
Lemma 5.6. For every n ∈ N, C n (G) is a (1 − x)-trap.

Proof. C 0 (G) is trivially a (1 − x)-trap. For n ≥ 1, the result follows immediately from the definition of C n (G) in Eq. 4.1 as the complement of a force set (by Lemma 3.3).
Lemma 5.7. There is a finite j ∈ N such that X i = X j for all i ≥ j.
Proof. We prove the claim by showing that, as long as the sequence has not stabilized, each set C n−1 (G ⊖ X i ⊖ Z i ) in the definition of Y i contains an element from the attractor. First, C n−1 (G ⊖ X i ⊖ Z i ) is an x-trap by Lemma 5.6 (with the roles of the players swapped). Hence it is closable, and therefore Lemma 5.5 implies that it contains an element from the attractor. Second, by the definition of the ⊖ operator, the sets C n−1 (G ⊖ X i ⊖ Z i ) obtained in different iterations are pairwise disjoint. Since these sets are disjoint, each of them contains at least one element of the attractor, and the attractor is finite, the algorithm terminates in at most |A| steps.
Computability.Regular languages of configurations are effectively closed under the operations of upward-closure, predecessor, set-theoretic union, intersection, and complement [ABD08].For completeness, we show these properties below.
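As a preview of the proof of Lemma 5.10 below, the predecessor of a send acts as a right quotient and the predecessor of a receive as a left concatenation. On finite sets of words standing in for regular channel languages (and assuming single-symbol messages), this looks as follows; the function names are illustrative:

```python
def pre_send(X, m):
    """Pre of 'c!m' on channel language X: the transition appends m, so the
    predecessors form the right quotient { w : w + m in X }."""
    return {w[:-1] for w in X if w.endswith(m)}

def pre_recv(X, m):
    """Pre of 'c?m' on channel language X: the transition removes a leading m,
    so the predecessors form the left concatenation m . X."""
    return {m + w for w in X}

X = {"ab", "b"}
print(pre_send(X, "b"))   # {'a', ''}
print(pre_recv(X, "a"))   # {'aab', 'ab'}
```

On actual regular languages both operations are performed on finite automata, where they are likewise effective.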
Lemma 5.8. If P is a regular set of configurations, then its upward-closure P↑ is effectively regular.
Proof. A regular set P of configurations is, by definition, of the form ⋃ s∈S, i∈{0,1} {s} × X s,i × {i}, where the X s,i 's are regular sets of channel contents. It thus suffices to show that X↑ := {x : ∃x′ ∈ X. x′ ⊑ x} is an effectively regular set of channel contents when X is a regular set of channel contents. By definition, X is a finite union of sets of the form Y ⊆ (M*) C with Y(c) regular for every c ∈ C. Then X↑ is the union of the corresponding sets Y↑, where, for every c ∈ C, a finite automaton recognizing Y↑(c) is obtained from a finite automaton recognizing Y(c) by adding, on every state, a self-loop labeled with each symbol of M.
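The self-loop construction in this proof can be sketched directly on a small NFA (a minimal hand-rolled NFA class; names are ours). Adding a self-loop for every symbol on every state makes the automaton accept exactly the superwords, wrt. the subword order, of the original language:

```python
class NFA:
    """Minimal NFA over one channel's alphabet: delta maps (state, symbol)
    to a set of successor states."""
    def __init__(self, states, alphabet, delta, start, accepting):
        self.states, self.alphabet = states, alphabet
        self.delta, self.start, self.accepting = delta, start, accepting

    def accepts(self, word):
        current = {self.start}
        for ch in word:
            current = {q for s in current for q in self.delta.get((s, ch), ())}
        return bool(current & self.accepting)

def upward_closure(nfa):
    """Add, on every state, a self-loop for every symbol of the alphabet."""
    delta = {k: set(v) for k, v in nfa.delta.items()}
    for s in nfa.states:
        for ch in nfa.alphabet:
            delta.setdefault((s, ch), set()).add(s)
    return NFA(nfa.states, nfa.alphabet, delta, nfa.start, nfa.accepting)

# L(n) = {"ab"}: 0 -a-> 1 -b-> 2
n = NFA({0, 1, 2}, {"a", "b"}, {(0, "a"): {1}, (1, "b"): {2}}, 0, {2})
up = upward_closure(n)
print(n.accepts("bab"), up.accepts("bab"))   # False True ("ab" ⊑ "bab")
print(up.accepts("ba"))                      # False ("ab" is not a subword of "ba")
```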
Lemma 5.9. If P, Q are regular sets of configurations, then P ∪ Q, P ∩ Q, and S \ P are effectively regular sets of configurations.
Proof. The proof is very similar to the one of the previous lemma, exploiting the fact that regular languages are closed under union, intersection, and complement.
Lemma 5.10. If P is a regular set of configurations, then Pre G (P) is an effectively regular set of configurations.
Proof. Let P be a regular set of configurations. By a case analysis on which transition is taken, we can write Pre G (P) as a finite union over the transitions of L, together with the set Pre R G (P) of random predecessors. For a transition s −c!m→ s′, the corresponding predecessors are obtained from P by a right quotient with m, which is regular because regular languages are effectively closed under (right) quotients; for a transition s −c?m→ s′, they are obtained by a left concatenation of m with P, which is regular because regular languages are effectively closed under (left) concatenation with single symbols; for a transition s −nop→ s′, the channel contents are unchanged; and Pre R G (P) is effectively regular by Lemma 5.8.

The lemmas above show that all operations used in computing Force x (G, Target) effectively preserve regularity. Thus we obtain the following lemma.

Lemma 5.11. If Target is regular, then Force x (G, Target) is effectively regular.

Lemma 5.12. For each n, C n (G) is effectively regular.

Proof. The set S is regular, and hence C 0 (G) = S is effectively regular. The result for n > 0 follows from Lemma 5.11 and from the fact that the remaining operations used to build C n (G) are set complement and union.

5.3. Construction of regular winning strategies. In this section, we show that the memoryless winning strategies constructed in Theorem 5.1 can be finitely represented as a (finite) list of rules with regular guards on the channel contents. This representation can easily be turned into a more low-level one, e.g., a finite automaton with output that reads the channel contents and outputs the rule to be played next, but for ease of presentation we have chosen a more high-level description.
Preliminaries. Let L = (S, S 0 , S 1 , C, M, T, λ, Col) be an SG-LCS. A (memoryless) regular SG-LCS strategy f for Player x is a finite list of guarded rules (s i , X i , s i −op i→ s′ i ), where the guard X i ⊆ (M*) C is a regular set of channel contents and s i −op i→ s′ i is a transition in T s.t. s i ∈ S x and:
• If op i = c?m, every x ∈ X i has m as the first symbol of x(c).
• Guards for the same control state are disjoint; i.e., for each i, j, if s i = s j then X i ∩ X j = ∅.
Intuitively, the rule s i −op i→ s′ i should be applied from control state s i if the channel contents belong to the guard X i . Formally, let G = (S, S 0 , S 1 , S R , −→, P, Col) be the game induced by L. The domain of a regular SG-LCS strategy f is dom(f) := {(s, x) : there exists a guarded rule (s i , X i , s i −op i→ s′ i ) in f with s i = s and x ∈ X i }. The (partial, memoryless) induced strategy f of a regular SG-LCS strategy f is defined, for every (s, x) ∈ dom(f), as f(s, x, 1) = (s′ i , x′, 0), where (s i , X i , s i −op i→ s′ i ) is the unique guarded rule in f such that s i = s and x ∈ X i , and x′ is the unique channel contents s.t. (s, x, 1) −→ (s′ i , x′, 0) in the game G. Given two regular SG-LCS strategies f 0 , f 1 with disjoint domains, their union f 0 ∪ f 1 is the regular SG-LCS strategy obtained by concatenating the lists of guarded rules of f 0 and f 1 .
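A regular SG-LCS strategy can be thought of as a lookup table of guarded rules. The following sketch (illustrative names; guards given as predicates standing in for regular sets) picks the unique matching rule and applies its operation to the channel contents:

```python
# A strategy as a list of guarded rules. Each entry is
# (control state, guard on channel contents, (op, (channel, message), successor)).
rules = [
    ("p", lambda x: x["c"].startswith("a"), ("recv", ("c", "a"), "q")),
    ("p", lambda x: not x["c"].startswith("a"), ("send", ("c", "b"), "p")),
]

def apply_strategy(rules, state, x):
    """Return the successor configuration chosen by the unique matching rule,
    or None if (state, x) is outside the strategy's domain."""
    for s, guard, (op, (c, m), succ) in rules:
        if s == state and guard(x):
            x2 = dict(x)
            if op == "recv":
                x2[c] = x2[c][1:]    # consume the leading message m
            else:
                x2[c] = x2[c] + m    # append m
            return (succ, x2, 0)     # control passes to the random (loss) half-step
    return None

print(apply_strategy(rules, "p", {"c": "ab"}))   # ('q', {'c': 'b'}, 0)
print(apply_strategy(rules, "p", {"c": "b"}))    # ('p', {'c': 'bb'}, 0)
```

The guard disjointness required above corresponds to the first matching rule being the only matching rule.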
Given two sets of configurations Q, Q′ ⊆ S, a selection function from Q to Q′ is a function sel : Q → Q′ s.t. sel(s) ∈ Post G (s) ∩ Q′ for every s ∈ Q. In other words, a selection function picks a legal successor in Q′ for every configuration in Q.
Construction. The rest of this section is devoted to the construction of regular winning strategies for both players, as summarised by the following theorem.

Theorem 5.13. Memoryless winning strategies for both players are effectively computable as regular SG-LCS strategies.
We begin by showing that, if the set of selection functions is non-empty, then there are simple selection functions induced by regular SG-LCS strategies.
Lemma 5.14. Let Q, Q′ ⊆ S be two regular sets of configurations. If there exists a selection function from Q to Q′, then there exists a regular SG-LCS strategy f s.t. the induced strategy f is a selection function from Q to Q′.

Proof. Let {t 0 , . . ., t k } be the finitely many transitions of L. For every i ∈ {0, . . ., k}, let P i be the set of configurations of Q with a successor in Q′ via transition t i = s i −op i→ s′ i . The set of predecessors of Q′ via a fixed transition is regular (cf. Lemma 5.10), and thus P i is regular too. Consider the sequence of (regular) sets Q 0 = P 0 and, for 0 < i ≤ k, Q i = P i \ ⋃ 0≤j<i Q j , and let Q i 0 , . . ., Q i h be the subsequence of non-empty sets. Then, {Q i 0 , . . ., Q i h } is a (regular) partition of Q: the sets are disjoint by definition, and each (s, x) ∈ Q belongs to some Q i j since Post G (s, x) ∩ Q′ is non-empty. Let {X i 0 , . . ., X i h } ⊆ 2^((M*) C) be the regular sets of channel contents s.t., for 0 ≤ j ≤ h, Q i j is of the form {(s i j , x) : x ∈ X i j }. Let f be the regular SG-LCS strategy with guarded rules (s i j , X i j , t i j ) for 0 ≤ j ≤ h.

In the next lemma, we show that regular SG-LCS strategies suffice to keep the game in regular traps.
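The disjointness construction Q i = P i \ ⋃ j<i Q j used in this proof can be sketched on finite sets standing in for the regular predecessor sets (the helper name is ours):

```python
def disjoint_partition(pred_sets):
    """Turn the possibly overlapping predecessor sets P_0, ..., P_k into the
    disjoint family Q_i = P_i \ (Q_0 ∪ ... ∪ Q_{i-1}); empty members are dropped."""
    seen, parts = set(), []
    for P in pred_sets:
        Q = set(P) - seen
        if Q:
            parts.append(Q)
            seen |= Q
    return parts

parts = disjoint_partition([{"a", "ab"}, {"ab", "b"}, {"a"}])
print(parts)   # [{'a', 'ab'}, {'b'}] -- the third set is fully covered and dropped
```

Because each configuration ends up in exactly one block, the resulting guards for the same control state are disjoint, as the definition of regular SG-LCS strategies requires.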
Lemma 5.15. If Q is a (1 − x)-trap and regular, then there exists a regular SG-LCS strategy f for Player x such that Q ⊆ V x (f, F 1−x all (G))(G, ✷Q).

Proof. By Lemma 3.1, there exists a memoryless strategy f x for Player x with the required property. Moreover, by inspecting the proof of the lemma, we can see that f x is defined as a selection function from [Q] x to Q and, in fact, any such selection function can be taken. By Lemma 5.14, there exists a regular SG-LCS strategy f s.t. the induced strategy f is a selection function from [Q] x to Q.
The following lemma shows that there are regular SG-LCS strategies for the reachability and safety objectives (cf. Lemma 3.4).
Lemma 5.16. Let Target ⊆ S be a regular set of configurations. There exist regular SG-LCS strategies force x (G, Target) for Player x and avoid 1−x (G, Target) for Player 1 − x with the properties stated in Lemma 3.4.

Proof. We first show a regular SG-LCS strategy for Player x for the reachability objective. Consider the sequence of sets R 0 , R 1 , . . . constructed in Section 3. By Lemma 5.4, there exists j ∈ N s.t. R i = R j for all i > j. Moreover, since R i is built starting from the regular set Target and according to regularity-preserving operations (union, predecessor, and complement; cf. Lemmas 5.9 and 5.10), R i is regular for every 0 ≤ i ≤ j. Consider the sequence of disjoint regular sets R′ 0 = R 0 and R′ i = R i \ R i−1 for every 0 < i ≤ j. Recall the definition of force x (G, Target) in the proof of Lemma 3.4: for every 0 < i ≤ j, force x (G, Target) was uniformly defined on R′ i as select(Post G (•) ∩ R i−1 ). Therefore, there exists a selection function from R′ i to R i−1 , for every 0 < i ≤ j. Since the R i 's and R′ i 's are regular, by Lemma 5.14, there exists a regular SG-LCS strategy f i with domain R′ i inducing such a selection function. Since the R′ i 's are disjoint, and since any such selection function is correct, we take as force x (G, Target) the union strategy f 0 ∪ · · · ∪ f j . Since the actual choice of the selection function is irrelevant, we conclude that Force x (G, Target) ⊆ W x (force x (G, Target), F 1−x all (G))(G, ✸Target >0 ).

We conclude the proof by providing the required regular SG-LCS strategy for Player 1 − x for the safety objective. By Lemma 3.3, Avoid 1−x (G, Target) is an x-trap. Since Avoid 1−x (G, Target) is regular, by Lemma 5.15 there exists a regular SG-LCS strategy avoid 1−x (G, Target) such that Avoid 1−x (G, Target) ⊆ V 1−x (avoid 1−x (G, Target), F x all (G))(G, ✷Avoid 1−x (G, Target)).

To conclude the proof of Theorem 5.13, we show that regular SG-LCS strategies suffice for the parity objective (cf. Lemma 4.2).

Lemma 5.17. There are regular SG-LCS strategies f x c for Player x and f 1−x c for Player 1 − x with the properties stated in Lemma 4.2.

Proof. We define regular SG-LCS strategies f x c for Player x and f 1−x c for Player 1 − x by induction on n ≥ 1. By inspecting the proof of Lemma 4.2, we note that winning strategies
for both players are constructed according to a case analysis on disjoint regular domains, for which winning regular SG-LCS strategies exist either by the induction hypothesis, or by Lemma 5.16 (for reachability). Recall that, by Lemma 5.7, there exists i ∈ N s.t. X j = X i for every j > i. Moreover, all the sets X j , Y j , Z j involved in the construction are regular for every 0 ≤ j ≤ i, since they are constructed starting from regular sets and according to regularity-preserving operations (boolean operations, cf. Lemma 5.9; force-sets, cf. Lemma 5.16).

Construction of f x c . Define the two regular sets of configurations X̄ j := G ¬ X j and Z̄ j := G ¬ Z j . By definition, C n (G) = X̄ j . Following Lemma 4.2, we define f x c (s) depending on which of the following three sets partitioning X̄ j the configuration s belongs to: X̄ j ∩ Z̄ j , X̄ j ∩ [Z j ] Col<n , and X̄ j ∩ [Z j ] Col=n . In the first case, note that G ⊖ X j ⊖ Z j does not contain any configurations of color ≥ n (cf. Lemma 4.2). Thus, by the induction hypothesis, there is a regular SG-LCS strategy f 1 for Player x in G ⊖ X j ⊖ Z j such that the induced strategy has domain X̄ j ∩ Z̄ j . In the second case, let f 2 be the regular SG-LCS strategy force x (G ⊖ X j , [Z j ] Col=n ), for which the induced strategy has domain X̄ j ∩ [Z j ] Col<n (it exists by Lemma 5.16). Finally, in the third case, the strategy select(Post G (•) ∩ X̄ j ) witnesses the existence of a selection function from X̄ j ∩ [Z j ] Col=n to X̄ j . Let f 3 be a regular SG-LCS strategy inducing such a selection function (it exists by Lemma 5.14).
Then, f x c is defined as the union f 1 ∪ f 2 ∪ f 3 of the three previously constructed strategies.

Construction of f 1−x c . Following the case analysis in the proof of Lemma 4.2, let f 1 i be the regular SG-LCS strategy for Player 1 − x for the corresponding reachability objective (it exists by Lemma 5.16). By the induction hypothesis, there is also a regular SG-LCS strategy f 2 i such that the induced strategy has domain C n−1 (G ⊖ X i ⊖ Z i ), which is winning a.s. for Player 1 − x on this domain. Then, f 1−x c is defined as the union f 1 i ∪ f 2 i .

CONCLUSIONS AND DISCUSSION
We have presented a scheme for solving stochastic games with a.s. and w.p.p. parity winning conditions under the two requirements that (i) the game contains a finite attractor and (ii) both players are restricted to finite-memory strategies. We have shown that this class of games is memoryless determined. The method is instantiated to prove decidability of a.s. and w.p.p. parity games induced by lossy channel systems. The two above requirements are both necessary for our method. To see why our scheme fails if the game lacks a finite attractor, consider the game in Figure 2 (a variant of the Gambler's ruin problem). All states are random, i.e., S 0 = S 1 = ∅, and Col(s 0 ) = 1 and Col(s i ) = 0 when i > 0.
The probability to go right from any state is 0.7, and the probability to go left (or to make a self-loop in s 0 ) is 0.3. This game does not have any finite attractor. It can be shown that the probability to reach s 0 infinitely often is 0 for all initial states. However, our construction will classify all states as winning for Player 1. More precisely, the construction of C 1 (G) converges after one iteration, with Z α = S and X α = Y α = ∅ for all α, and C 1 (G) = S. Intuitively, the problem is that even if the force-set of {s 0 } (which is the entire set of states) is visited infinitely many times, the probability of visiting {s 0 } infinitely often is still zero, since the probability of returning to {s 0 } gets smaller and smaller. Such behavior is impossible in a game graph that contains a finite attractor.

Our scheme also fails when the players are not both restricted to finite-memory strategies. Solving a game under a finite-memory restriction is a different problem from when arbitrary strategies are allowed (not a sub-problem). In fact, it was shown in [BBS07] that for arbitrary strategies, the problem is undecidable. We show two simple examples of stochastic games on LCSs where the two problems yield different results (see also [BBS07]). In one case, we show that infinite memory is more powerful for Player 1 with a w.p.p. objective (cf. Figure 3a), while in the other case infinite memory helps wrt. an a.s. objective (cf. Figure 3b). In both cases, Player 0 does not play in the game, thus the memory allowed to her is irrelevant.
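Returning to the attractor-free example, the claim that s 0 is almost surely visited only finitely often follows from the classic gambler's-ruin hitting probability: a walk that steps right with probability p > 1/2 and left with probability q = 1 − p, started in s 1 , ever reaches s 0 with probability q/p < 1. The following sketch computes the resulting return probability to s 0 under that standard fact:

```python
# Biased walk over s_0, s_1, s_2, ...: right with p, left (or self-loop at s_0)
# with q. Gambler's-ruin fact assumed: from s_1 the walk ever reaches s_0
# with probability q / p.
p, q = 0.7, 0.3
hit0_from_1 = q / p

# One step from s_0: self-loop (prob. q) returns immediately; a right step
# (prob. p) returns only if the walk ever comes back from s_1.
ret = q + p * hit0_from_1
print(ret < 1)   # True: the number of visits to s_0 is geometric, hence finite a.s.
```

Since the return probability is 2q = 0.6 < 1, the number of visits to s 0 is geometrically distributed, so s 0 is visited infinitely often with probability 0, as claimed.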
First, we show that infinite memory is more powerful for w.p.p. objectives. In Figure 3a, Player 1 plays on control states p, q, and r. Player 1's objective is to visit state r infinitely often w.p.p. To ensure this, from state p Player 1 pumps up the channel to a sufficiently large size k (which can be done a.s. for any k given enough time), and then she goes to the risk state q. If each message can be lost independently with probability 1/2, the probability that all messages are lost, and thus that Player 1 is stuck forever in q, is 2^−k. Otherwise, with probability 1 − 2^−k, Player 1 can visit r once, and then go back to p. The strategy of Player 1 is to realise an infinite sequence k 0 < k 1 < · · · s.t. the probability of visiting state r infinitely often, which is ∏ i≥0 (1 − 2^−k i ), is strictly positive. Clearly, if Player 1 has infinite memory, then she can realize such a sequence by distinguishing different visits to control state p with the same channel contents. On the other side, if Player 1 is restricted to finite memory, then either the game eventually stays forever in p (which is losing), or the infinite sequence k 0 , k 1 , . . . is upper-bounded by some finite n, which makes the infinite product above equal to 0. In both cases, Player 1 loses if she has only finite memory.
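The dichotomy between a growing sequence k 0 < k 1 < · · · and a bounded one can be checked numerically (illustrative choice k i = i + 2; the helper name is ours):

```python
import math

def p_win(ks):
    """Probability that every listed round trip through the risk state q
    succeeds: the partial product prod_i (1 - 2**-k_i)."""
    return math.prod(1 - 2.0 ** -k for k in ks)

growing = p_win([i + 2 for i in range(200)])   # unbounded k_i: product stays positive
constant = p_win([2] * 200)                    # bounded k_i: product tends to 0
print(growing > 0.5, constant < 1e-20)         # True True
```

With k i = i + 2 the product converges to roughly 0.58, whereas with k i = 2 the partial products are (3/4)^n, which vanish; this is exactly the finite-memory obstruction described above.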
Notice that Player 1 wins not only w.p.p., but even limit-sure in this example. In other words, for every ε > 0 there is an infinite-memory strategy s.t. the parity objective is satisfied with probability ≥ 1 − ε. We do not know whether there are examples where a similar phenomenon can be reproduced under finite-memory/memoryless strategies.
We now show that infinite memory is more powerful for a.s. objectives. An example similar to the previous case can be given for the a.s. winning mode with a 3-color parity condition. In Figure 3b, Player 1 controls states 0, 1, and 2, whose color equals their name. Thus, the objective of Player 1 is to a.s. visit state 1 infinitely often and state 2 only finitely often. The strategy is similar to the previous example: Player 1 tries to pump up the channel in state 0, and then she goes to the risk state 1. From here, with low probability all messages are lost, and the penalty is to visit state 2 once. Otherwise, the game can go back directly to state 0 without visiting state 2. In both cases, the game restarts afresh from state 0. An analysis as in the previous example shows that, if Player 1 is restricted to finite memory, then the probability of visiting state 2 from state 1 can be bounded from below. This implies that, whenever state 1 is visited infinitely often, then so is state 2 a.s., and so Player 1 is losing. On the other hand, there is an infinite-memory strategy for Player 1 s.t. the probability of visiting state 2 for n times goes to 0 as n goes to infinity, which implies that the probability of visiting state 2 only finitely often is 1.
As future work, we will consider extending our framework to (fragments of) probabilistic extensions of other models such as Petri nets and noisy Turing machines [AHM07].
