Decisive Markov Chains

We consider qualitative and quantitative verification problems for infinite-state Markov chains. We call a Markov chain decisive w.r.t. a given set of target states F if it almost certainly eventually reaches either F or a state from which F can no longer be reached. While all finite Markov chains are trivially decisive (for every set F), this also holds for many classes of infinite Markov chains. Infinite Markov chains which contain a finite attractor are decisive w.r.t. every set F. In particular, this holds for probabilistic lossy channel systems (PLCS). Furthermore, all globally coarse Markov chains are decisive. This class includes probabilistic vector addition systems (PVASS) and probabilistic noisy Turing machines (PNTM). We consider both safety and liveness problems for decisive Markov chains, i.e., the probabilities that a given set of states F is eventually reached or reached infinitely often, respectively. 1. We express the qualitative problems in abstract terms for decisive Markov chains, and show an almost complete picture of their decidability for PLCS, PVASS and PNTM. 2. We also show that the path enumeration algorithm of Iyer and Narasimha terminates for decisive Markov chains and can thus be used to solve the approximate quantitative safety problem. A modified variant of this algorithm solves the approximate quantitative liveness problem. 3. Finally, we show that the exact probability of (repeatedly) reaching F cannot be effectively expressed (in a uniform way) in Tarski-algebra for either PLCS, PVASS or (P)NTM.


Introduction
Verification of infinite systems. The aim of model checking is to decide algorithmically whether a transition system satisfies a specification. Specifications formulated as reachability or repeated reachability of a given set of target states are of particular interest, since they allow the analysis of safety and progress properties, respectively. In particular, model checking problems w.r.t. ω-regular specifications are reducible to the repeated reachability problem.
A main challenge has been to extend the applicability of model checking to systems with infinite state spaces. Algorithms have been developed for numerous models such as timed automata, Petri nets, pushdown systems, lossy channel systems, parameterized systems, etc.
Probabilistic systems. In a parallel development, methods have been designed for the analysis of models with stochastic behaviors (e.g., [LS82, HS84, Var85, CY88, CY95, HK97, CSS03]). The motivation is to capture the behaviors of systems with uncertainty, such as programs communicating over unreliable channels, randomized algorithms, and fault-tolerant systems. The underlying semantics for such models is often that of a Markov chain, in which each transition is assigned a probability by which the transition is performed from a state of the system. In probabilistic model checking, three classes of problems are relevant:
• The qualitative problem: check whether a certain property Φ holds with probability one (or zero).
• The approximate quantitative problem: compute the probability p of satisfying a given property Φ up to arbitrary precision, i.e., for any pre-defined error margin ǫ > 0, compute a value p′ s.t. p′ ≤ p ≤ p′ + ǫ.
• The exact quantitative problem: compute the probability p of satisfying a given property Φ exactly, and decide exact questions, e.g., whether p ≥ 0.5.
Recently, several attempts have been made to consider systems which combine the above two features, i.e., systems which are infinite-state and which exhibit probabilistic behavior. For instance, the works in [Rab03, BS03, AR03, BE99, IN97, ABIJ00] consider Probabilistic Lossy Channel Systems (PLCS): systems consisting of finite-state processes which communicate through channels that are unbounded and unreliable, in the sense that they can spontaneously lose messages. The motivation for these works is that, since we are dealing with unreliable communication, it is relevant to take into consideration the probability by which messages are lost inside the channels. The papers [EKM04, EKM05, EKM06, EY05b, EY05a, EE04, EY05c] consider probabilistic pushdown automata (recursive state machines), which are natural models for probabilistic sequential programs with recursive procedures.
Our contribution. Here we consider more abstract conditions on infinite Markov chains. We show how verification problems can be solved for Markov chains satisfying these conditions, and that several infinite-state probabilistic process models satisfy them. In particular, we consider probabilistic lossy channel systems (PLCS), probabilistic vector addition systems with states (PVASS) and probabilistic noisy Turing machines (PNTM).
Let F be a given set of target states in a Markov chain, and let F̄ be the set of states from which F cannot be reached, i.e., F̄ := {s | s ⊭ ∃◇F}. We call the Markov chain decisive w.r.t. F if almost every run eventually reaches either F or F̄. In other words, decisiveness means that if F is always reachable then it will almost certainly be reached.
While all finite Markov chains are trivially decisive (for every set F ), this also holds for several classes of infinite-state Markov chains.
It is not a meaningful question whether the decisiveness property is decidable for general Markov chains. For finite Markov chains the answer is always yes, and for general infinite Markov chains the problem instance is not finitely given, unless one restricts to a particular subclass. For some such subclasses decisiveness always holds, while for others (e.g., probabilistic pushdown automata (PPDA)) it is decidable (see below).
• Markov chains which contain a finite attractor. An attractor is a set of states which is eventually reached with probability one from every state in the Markov chain. Examples of Markov chains with finite attractors are all Markov chains induced by probabilistic lossy channel systems (PLCS). We show that infinite Markov chains which contain a finite attractor are decisive w.r.t. every set F.
• Markov chains which are globally coarse. A Markov chain is globally coarse w.r.t. F if there exists some α > 0 such that, from every state, the probability of eventually reaching the set F is either zero or ≥ α. Global coarseness w.r.t. F also implies decisiveness w.r.t. F. We consider two probabilistic process models which induce globally coarse Markov chains.
– Any probabilistic vector addition system with states (PVASS) with an upward-closed set of final states F induces a globally coarse Markov chain.
– Noisy Turing machines (NTM) have been defined by Asarin and Collins [AC05]. These are Turing machines where the memory tape cells are subject to 'noise', i.e., random changes. We consider probabilistic noisy Turing machines (PNTM), a generalization of noisy Turing machines (NTM) where the transition steps are also chosen probabilistically. Probabilistic noisy Turing machines induce globally coarse Markov chains w.r.t. every set F defined by a set of control-states.
• Another subclass of infinite Markov chains are those induced by probabilistic pushdown automata (PPDA; also called recursive state machines) [EKM04, EKM05, EKM06, EY05b, EY05a, EE04, EY05c]. These infinite Markov chains are not decisive in general. However, it follows directly from the results in [EKM06] that decisiveness is decidable for PPDA, provided that the set of final states F is effectively regular. The focus of this paper is on the classes PLCS, PVASS and PNTM, not PPDA. We strive to be as general as possible and use only the weak condition of decisiveness. We do not advocate the use of our algorithms for PPDA, even for those instances which are decisive. Since PPDA is a special class with a particular structure, specialized algorithms like those described in [EKM04, EKM05, EKM06, EY05b, EY05a, EE04, EY05c] are more suitable for it. However, we show in Section 9 that the techniques used for analyzing PPDA cannot be applied to PLCS, PVASS or PNTM.
We consider both qualitative and quantitative analysis for decisive Markov chains. The main contributions of the paper are the following.
• The qualitative reachability problem, i.e., the question whether F is reached with probability 1 (or 0). For decisive Markov chains, this problem is equivalent to a question about the underlying (non-probabilistic) transition system. For PVASS, the decidability of this question depends on the form of the set of target states F: it is decidable if F is defined by a set of control-states, but undecidable if F is a more general upward-closed set of configurations. This is in contrast to all known decidability results for other models, such as non-probabilistic VASS and PLCS, where the two problems can effectively be reduced to each other.
For both PLCS and PNTM, the qualitative reachability problem is decidable in general. For PLCS this was already shown in [AR03, BS03], but our construction is more abstract and simpler; in particular, our algorithm does not require an explicit construction of the attractor as in [AR03, BS03].
• The qualitative repeated reachability problem.
If a Markov chain is decisive w.r.t. F, then the question whether F will be visited infinitely often with probability 1 is equivalent to a simple question about the underlying transition graph, which is decidable for PVASS, PLCS and PNTM. For PVASS, the decidability of probabilistic repeated reachability is surprising, given the undecidability of probabilistic simple reachability above.
If a Markov chain is decisive w.r.t. both F and F̄, then the question whether F will be visited infinitely often with probability 0 is equivalent to another question about the underlying transition graph. The precondition holds for all Markov chains with a finite attractor (such as PLCS), since they are decisive w.r.t. every set, and the question is decidable for PLCS. For PNTM, we show that if F is defined by a set of control-states then so is F̄. Since PNTM induce globally coarse Markov chains w.r.t. any set defined by control-states, the question is also decidable.
However, for PVASS, decisiveness w.r.t. F does not generally imply decisiveness w.r.t. F̄, and thus our algorithm is not always applicable. For PVASS, the decidability of the question whether F is visited infinitely often with probability 0 is an open problem.
• To approximate the probability of eventually reaching F, we recall an algorithm from [IN97], which was also used in [Rab03] for PLCS. We show that the algorithm can be used to solve the problem for all decisive Markov chains (in particular also for both PVASS and PNTM). Furthermore, we show that a minor modification of the algorithm yields an algorithm for approximating the probability of visiting F infinitely often, for all Markov chains which are decisive w.r.t. F and F̄. In particular, this works for all Markov chains with a finite attractor, such as PLCS. This is a more abstract, general and simpler solution than the result for PLCS in [Rab03]; however, it does not yield the precise complexity bounds of [Rab03].
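As an informal illustration (not part of the formal development), the idea behind this path enumeration scheme can be sketched as follows. All names are ours, and the sketch assumes the chain is given by a successor function together with membership tests for F and for the set F̄ of states from which F is unreachable:

```python
from collections import defaultdict

def approx_reach(succ, in_F, in_F_bar, s0, eps):
    """Path-enumeration approximation of Prob(eventually F), in the style of
    [IN97]: expand the computation tree level by level, accumulating the mass
    of decided paths (those that reached F, or reached F_bar) until at most
    eps of the mass remains undecided.  Termination relies on the Markov
    chain being decisive w.r.t. F."""
    if in_F(s0):
        return 1.0
    if in_F_bar(s0):
        return 0.0
    p_yes = p_no = 0.0
    frontier = {s0: 1.0}          # undecided mass, aggregated per end state
    while p_yes + p_no < 1.0 - eps:
        nxt = defaultdict(float)
        for s, mass in frontier.items():
            for s2, pr in succ(s):
                m = mass * pr
                if in_F(s2):
                    p_yes += m
                elif in_F_bar(s2):
                    p_no += m
                else:
                    nxt[s2] += m
        frontier = nxt
    return p_yes                  # p_yes <= Prob(eventually F) <= p_yes + eps
```

On a decisive chain the undecided mass tends to zero, so the loop terminates and the returned value under-approximates the true probability by at most eps.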
• The question whether the exact probability of (either eventually, or infinitely often) reaching F in PLCS is expressible by standard mathematical functions was stated as an open problem in [Rab03]. We provide a partial answer by showing that for PVASS, PLCS and (P)NTM, this probability cannot be effectively expressed (in a uniform way) in Tarski-algebra, the first-order theory of the reals (ℝ, +, ∗, ≤). (By 'in a uniform way' we mean that quantitative parameters in the system should be reflected directly by constants in the Tarski-algebra formula.) This is in contrast to the situation for probabilistic pushdown automata, for which these probabilities can be effectively expressed, in a uniform way, in (ℝ, +, ∗, ≤) [EKM04, EKM06, EY05b, EE04].

Transition Systems and Markov Chains
We introduce some basic concepts for transition systems and Markov chains. Let N and Q≥0 denote the set of natural numbers (including 0) and the set of non-negative rational numbers, respectively.
2.1. Transition Systems. A transition system T is a tuple (S, −→) where S is a (potentially) infinite set of states, and −→ is a binary relation on S. We write s −→ s′ for (s, s′) ∈ −→ and let Post(s) := {s′ | s −→ s′}. A run ρ (from s0) of T is an infinite sequence s0 s1 . . . of states such that si −→ si+1 for all i ≥ 0. We use ρ(i) to denote si and say that ρ is an s-run if ρ(0) = s. A path is a finite prefix of a run. We assume familiarity with the syntax and semantics of the temporal logic CTL* [CGP99]. We use (s |= φ) to denote the set of s-runs that satisfy the CTL* path-formula φ. For s ∈ S and Q1, Q2 ⊆ S, we write s |= ∃(¬Q2) U Q1 if there exists a run from s which reaches a state in Q1 without having previously passed through any state in Q2. Given a set of states F ⊆ S, we define Pre*(F) := {s′ | ∃s ∈ F : s′ −→* s} as the set of states from which F is reachable. Furthermore, let F̄ := S \ Pre*(F) = {s | s ⊭ ∃◇F}, the set of states from which F is not reachable. For s ∈ S and F ⊆ S, we define the distance distF(s) of s to F to be the minimal natural number n with s −→ⁿ F; in other words, distF(s) is the length of the shortest path leading from s to F. In case s ∈ F̄, we define distF(s) = ∞. A transition system T is said to be of span N with respect to a given set F if for each s ∈ S we either have distF(s) ≤ N or distF(s) = ∞. We say that T is finitely spanning with respect to a given set F if T is of span N w.r.t. F for some N ≥ 0. A transition system T = (S, −→) is said to be effective w.r.t. a given set F if for each s ∈ S, we can (1) compute the elements of the set Post(s) (notice that this implies that T is finitely branching); and (2) check whether s |= ∃◇F.
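For an effective, finitely branching system, the span condition makes the distance computable by plain breadth-first search: if a known span N w.r.t. F is exceeded without meeting F, the distance is infinite. A minimal sketch (all names ours), assuming the system is given by a successor function `post` and a membership test `in_F`:

```python
from collections import deque
from math import inf

def dist_to_F(post, in_F, s, span):
    """Breadth-first search for dist_F(s) in a finitely branching system.
    If the system is of span `span` w.r.t. F, then any state with no path
    to F within `span` steps satisfies dist_F(s) = infinity, so the search
    may safely stop at that depth."""
    if in_F(s):
        return 0
    seen = {s}
    frontier = deque([(s, 0)])
    while frontier:
        t, d = frontier.popleft()
        if d == span:
            continue                 # beyond the span: no need to expand
        for t2 in post(t):
            if in_F(t2):
                return d + 1
            if t2 not in seen:
                seen.add(t2)
                frontier.append((t2, d + 1))
    return inf
```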

2.2. Markov Chains.
A Markov chain M is a tuple (S, P) where S is a (potentially infinite) set of states, and P : S × S → [0, 1] is such that Σ_{s′∈S} P(s, s′) = 1 for each s ∈ S. A Markov chain induces a transition system, where the transition relation consists of the pairs of states related by positive probabilities. In this manner, concepts defined for transition systems can be lifted to Markov chains. For instance, for a Markov chain M, a run of M is a run in the underlying transition system, and M is finitely spanning w.r.t. a given set F if the underlying transition system is finitely spanning w.r.t. F, etc. Consider a state s0 of a Markov chain M = (S, P). On the set of s0-runs, the probability space (Ω, ∆, Prob_M) is defined as follows (see also [KSK66]): Ω = s0 S^ω is the set of all infinite sequences of states starting from s0, ∆ is the σ-algebra generated by the basic cylindric sets D_u = u S^ω for u ∈ s0 S*, and the probability measure Prob_M is defined by Prob_M(D_u) = Π_{i=0,...,n−1} P(si, si+1) where u = s0 s1 . . . sn; this measure is extended in a unique way to the elements of the σ-algebra generated by the basic cylindric sets.
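For illustration, the measure of a basic cylindric set is simply the product of the one-step probabilities along its finite prefix. A minimal sketch (names ours), assuming the chain is given as a dictionary of transition probabilities:

```python
def cylinder_prob(P, u):
    """Probability of the basic cylindric set D_u = u S^omega:
    the product of the one-step probabilities P(s_i, s_{i+1})
    along the finite prefix u = s0 s1 ... sn."""
    p = 1.0
    for s, s2 in zip(u, u[1:]):
        p *= P[(s, s2)]
    return p
```

Note that the probability of a prefix equals the sum over its one-step extensions, which is exactly the consistency condition behind the unique extension of the measure.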
Given a CTL* path-formula φ, we use (s |= φ) to denote the set of s-runs that satisfy φ, and Prob_M(s |= φ) to denote the measure of the set of s-runs (s |= φ) (which is measurable by [Var85]). For instance, given a set F ⊆ S, Prob_M(s |= ◇F) is the measure of the set of s-runs which eventually reach F; in other words, it is the probability with which s satisfies ◇F. We say that almost all runs of a Markov chain satisfy a given property φ if Prob_M(s |= φ) = 1. In this case one says that (s |= φ) holds almost certainly.

Classes of Markov Chains
In this section we define several abstract properties of infinite-state Markov chains: decisiveness, the existence of a finite attractor, and global coarseness. We show that both the existence of a finite attractor and global coarseness imply decisiveness. In particular, all three properties hold trivially for finite Markov chains.
In the rest of this section, we assume a Markov chain M = (S, P ).
Definition 3.1. Given a Markov chain M = (S, P) and a set of states F ⊆ S, we say that M is decisive w.r.t. F if Prob_M(s |= ◇F ∨ ◇F̄) = 1 for every s ∈ S.
In other words, the set of runs along which F is always reachable but which never reach F is almost empty (i.e., has probability measure zero).
Similarly, we say that M is strongly decisive w.r.t. F if Prob_M(s |= ◇F̄ ∨ □◇F) = 1 for every s ∈ S. Intuitively, this means that the set of runs along which F is always reachable but which visit F only finitely many times is almost empty.

Lemma 3.2. Given a Markov chain M = (S, P) and a set F ⊆ S, M is decisive w.r.t. F iff it is strongly decisive w.r.t. F.

Proof. Given a Markov chain M = (S, P) and a set F ⊆ S, we want to prove that, for all s ∈ S, Prob_M(s |= □¬F ∧ □¬F̄) = 0 iff Prob_M(s |= ◇□¬F ∧ □¬F̄) = 0. Let U be a set of sequences of states. U is called proper if no sequence in U is a prefix of another sequence in U. If all sequences in U are finite and start at the same state, we define P(U) := Prob_M(D_U), where D_U = ∪_{u∈U} u S^ω. Given a proper set U of finite sequences (namely paths) all ending in the same state s_c, and a proper set V of possibly infinite sequences (runs) all starting from s_c, we define U • V to be the set of all sequences u s_c v where u s_c ∈ U and s_c v ∈ V.
We now prove both implications of the required equivalence above.
(⇐=) Observe that (s |= ◇□¬F) is the set of s-runs visiting F only finitely many times. In particular, the set of s-runs which never visit F is included in that set, which gives (s |= □¬F) ⊆ (s |= ◇□¬F). By intersection with (s |= □¬F̄), the set of s-runs which never visit F̄, we obtain (s |= □¬F ∧ □¬F̄) ⊆ (s |= ◇□¬F ∧ □¬F̄). By definition of the probability measure, we obtain Prob_M(s |= □¬F ∧ □¬F̄) ≤ Prob_M(s |= ◇□¬F ∧ □¬F̄) = 0 for any s ∈ S, where the last equality follows from the assumption of strong decisiveness.
(=⇒) Given a state s ∈ S and i ∈ N, let ∆_i^s be the set of s-runs which revisit F at least i times, and let Γ_i^s be the set of s-runs which revisit F exactly i times and afterwards visit neither F nor F̄. Since M is decisive w.r.t. F, each Γ_i^s has measure zero: after the i-th revisit of F, a run in Γ_i^s stays forever outside both F and F̄, an event which has probability zero from every state r ∈ S. Moreover, for all i ≥ 0, ¬∆_i^s is the set of s-runs revisiting F at most i − 1 times, and (¬∆_i^s ∩ (s |= □¬F̄)) ⊆ ∪_{j=0}^{i−1} Γ_j^s. Now every run in (s |= ◇□¬F ∧ □¬F̄) revisits F only finitely often and never visits F̄, so (s |= ◇□¬F ∧ □¬F̄) ⊆ ∪_{i∈N} Γ_i^s. As a countable union of sets of measure zero, this set has measure zero, i.e., Prob_M(s |= ◇□¬F ∧ □¬F̄) = 0.

3.2. Markov Chains with a Finite Attractor.
Definition 3.3. Given a Markov chain M = (S, P), a set A ⊆ S is said to be an attractor if, for each s ∈ S, we have Prob_M(s |= ◇A) = 1, i.e., the set A is reached from s with probability one.
Lemma 3.4. A Markov chain M which contains a finite attractor is decisive w.r.t. every set F ⊆ S.

Proof. Fix a Markov chain M = (S, P) that has a finite attractor A, a state s, and a set F ⊆ S. Almost every run visits A infinitely often; in particular, this holds for the runs in (s |= □¬F ∧ □¬F̄), which visit only finitely many different states s″ ∈ A. Let A′ ⊆ A denote the set of states from the attractor visited by runs in (s |= □¬F ∧ □¬F̄). Since these runs never enter F̄, the set F remains reachable from every s″ ∈ A′; thus, for every s″ ∈ A′ we define α_{s″} := Prob_M(s″ |= ◇F) and obtain α_{s″} > 0. By the definition of an attractor, A′ is not empty. By finiteness of A (and thus of A′), it follows that α := min_{s″∈A′} α_{s″} > 0. Almost every run must visit A infinitely often, and only states in A′ are visited by runs in (s |= □¬F ∧ □¬F̄). At each visit to A′, the probability of afterwards avoiding F forever is at most 1 − α, so the probability of never reaching F while visiting A′ infinitely often is at most lim_{n→∞} (1 − α)^n = 0. Hence Prob_M(s |= □¬F ∧ □¬F̄) = 0, i.e., M is decisive w.r.t. F.

3.3. Coarse Markov Chains. We say that a Markov chain M = (S, P) is coarse if there exists some α > 0 such that P(s, s′) ≥ α whenever P(s, s′) > 0. Notice that if M is coarse then the underlying transition system is finitely branching; however, the converse is not necessarily true. We say that a Markov chain M = (S, P) is globally coarse w.r.t. a set F ⊆ S if there exists some α > 0 such that, for every s ∈ S, Prob_M(s |= ◇F) is either zero or ≥ α. Global coarseness w.r.t. F implies decisiveness w.r.t. F: along a run which never enters F̄, every state has probability ≥ α of eventually reaching F, so the probability of nevertheless avoiding F forever is zero.

System Models and their Properties
We define three classes of infinite-state probabilistic system models and describe the induced Markov chains.

Vector Addition Systems. A Vector Addition System with States (VASS) consists
of a finite-state process operating on a finite set of unbounded variables each of which ranges over N. Formally, a VASS V is a tuple (S, X, T), where S is a finite set of control-states, X is a finite set of variables, and T is a set of transitions each of the form (s 1 , op, s 2 ), where s 1 , s 2 ∈ S, and op is a mapping from X to the set {−1, 0, 1}.A (global) state s is of the form (s, v) where s ∈ S and v is a mapping from X to N.
We use s and S to range over control-states and sets of control-states, respectively. On the other hand, we use s and S to range over states and sets of states of the induced transition system (states of the transition system are global states of the VASS). For global states, we define the partial order ⊑ by (s, v) ⊑ (s′, v′) iff s = s′ and v(x) ≤ v′(x) for each x ∈ X. A set Q of states is upward-closed if s ∈ Q and s ⊑ s′ imply s′ ∈ Q, and downward-closed if s ∈ Q and s′ ⊑ s imply s′ ∈ Q. The complement of an upward-closed set is downward-closed and vice-versa.
For Q ⊆ S, we define a Q-state to be a state of the form (s, v) where s ∈ Q. Notice that, for any Q ⊆ S, the set of Q-states is both upward-closed and downward-closed with respect to ⊑.
It follows from Dickson's Lemma [Dic13] that every infinite set of VASS configurations has only finitely many minimal elements w.r.t. ⊑. When we speak of an upward-closed set of VASS configurations, we assume that it is represented by its finitely many minimal elements.
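As a small illustration of this representation (all names ours), membership of a configuration in an upward-closed set reduces to a comparison with the minimal elements; a configuration is modelled here as a pair of a control-state and a tuple of counter values:

```python
def leq(m, s):
    """The order on VASS configurations: equal control-states and
    componentwise <= on the counter values."""
    return m[0] == s[0] and all(a <= b for a, b in zip(m[1], s[1]))

def in_upward_closed(minimals, state):
    """Membership in an upward-closed set represented, as justified by
    Dickson's Lemma, by its finitely many minimal elements."""
    return any(leq(m, state) for m in minimals)
```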
A transition t = (s1, op, s2) is said to be enabled at (s, v) if s = s1 and v(x) + op(x) ≥ 0 for each x ∈ X. If t is enabled at (s1, v1), we write t(s1, v1) for the state (s2, v2) with v2(x) = v1(x) + op(x) for each x ∈ X, and we let enabled(s, v) denote the set of transitions enabled at (s, v). The VASS V induces a transition system (S, −→), where S is the set of states, i.e., S = S × (X → N), and (s1, v1) −→ (s2, v2) iff there is a t ∈ T enabled at (s1, v1) with (s2, v2) = t(s1, v1). In the sequel, we assume, without loss of generality, that for all (s, v) the set enabled(s, v) is not empty, i.e., there is no deadlock. This can be guaranteed by requiring that from each control-state there is a self-loop not changing the values of the variables.
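The enabledness condition and the effect of a transition can be sketched directly (representation and names ours; a global state is a pair of a control-state and a valuation dictionary):

```python
def enabled(T, state):
    """Transitions of the VASS enabled at global state (s, v): the source
    control-state matches and no variable would become negative."""
    s, v = state
    return [t for t in T
            if t[0] == s and all(v[x] + t[1][x] >= 0 for x in v)]

def apply_t(t, state):
    """Effect of transition t = (s1, op, s2) on (s1, v):
    move to control-state s2 and add op(x) to each variable x."""
    s, v = state
    s1, op, s2 = t
    return (s2, {x: v[x] + op[x] for x in v})
```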
VASS are expressively equivalent to Petri nets [Pet81]. The only difference is that VASS explicitly mention the finite control as something separate, while Petri nets encode it as another variable in the vector. The reachability problem for Petri nets/VASS is decidable [May84], and a useful extension of this result has been shown by Jančar [Jan90]. In our VASS terminology, this result can be stated as follows.
Theorem 4.1 ([Jan90]). Let (S, X, T) be a VASS with control-states S = {s1, . . ., sj} and variables X = {x1, . . ., xn}. A simple constraint logic is used to describe properties of global states (s, x1, . . ., xn): any formula Φ in this logic is a boolean combination of predicates of the forms s = si (fixing the control-state) and xi ≥ c for constants c ∈ N. In particular, all upward-closed sets of VASS states can be described in this logic: it suffices to specify that the global state must be larger or equal (in every variable) to one of the (finitely many) minimal elements of the set. Since this constraint logic is closed under negation, all downward-closed sets can also be described in it.
Given an initial global state (s, v) and a constraint logic formula Φ, it is decidable whether there exists a reachable state that satisfies Φ.

Probabilistic VASS. A probabilistic VASS (PVASS)
V is of the form (S, X, T, w), where (S, X, T) is a VASS and w is a mapping from T to the set of positive natural numbers. Intuitively, we derive a Markov chain from V by assigning probabilities to the transitions of the underlying transition system. The probability of performing a transition t from a state (s, v) is determined by the weight w(t) of t compared to the weights of the other transitions which are enabled at (s, v). We define w(s, v) := Σ_{t ∈ enabled(s,v)} w(t). The PVASS V induces a Markov chain (S, P), where S is defined as for a VASS, and P((s1, v1), (s2, v2)) := Σ { w(t) | t ∈ enabled(s1, v1), t(s1, v1) = (s2, v2) } / w(s1, v1). Notice that this is well-defined since w(s1, v1) > 0 by the assumption that there are no deadlock states.

Remark 4.2. Coarseness of Markov chains induced by PVASS follows immediately from the definitions. It follows from results in [AČJYK00] (Sections 4 and 7.2) that each Markov chain induced by a PVASS is effective and finitely spanning w.r.t. any upward-closed set of final markings F. VASS induce well-structured systems in the sense of [AČJYK00], and the computation of the set of predecessors of an ideal (here, an upward-closed set) converges after some finite number k of steps. This yields the finite span k w.r.t. F.

Probabilistic Lossy Channel Systems. Probabilistic lossy channel systems (PLCS) are a generalization of LCS with a probabilistic model for message loss and choice of transitions. There exist several variants of PLCS, which differ in how many messages can be lost, with which probabilities, and in which situations, and in whether normal transitions are subject to non-deterministic or probabilistic choice. We consider a partial order on channel contents, defined by w1 ≤ w2 iff w1 is a (not necessarily contiguous) substring of w2.
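The induced successor distribution can be illustrated as follows (names and data representation ours; counter valuations are tuples, transition effects are delta tuples, and exact rational arithmetic is used so that the weights divide out exactly):

```python
from collections import defaultdict
from fractions import Fraction

def pvass_successors(T, w, state):
    """Successor distribution of a PVASS global state (s, v).
    T: list of transitions (s1, op, s2) with op a tuple of deltas;
    w: parallel list of positive integer weights.  Each enabled transition
    t fires with probability w(t) / w(s, v); probabilities of transitions
    reaching the same successor are summed (assumes no deadlock)."""
    s, v = state
    en = [i for i, (s1, op, s2) in enumerate(T)
          if s1 == s and all(a + d >= 0 for a, d in zip(v, op))]
    total = sum(w[i] for i in en)
    dist = defaultdict(Fraction)
    for i in en:
        s1, op, s2 = T[i]
        succ = (s2, tuple(a + d for a, d in zip(v, op)))
        dist[succ] += Fraction(w[i], total)
    return dict(dist)
```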
The most common PLCS model is the one from [AR03, BS03, Rab03], where each message in transit independently has the probability λ > 0 of being lost in every step, and the transitions are subject to probabilistic choice in a similar way as for PVASS. However, the definition of PLCS in [AR03, BS03, Rab03] assumes that messages can be lost only after discrete steps, but not before them. Thus, since no messages can be lost before the first discrete step, the set {s ∈ S : s |= ∃◇F} of predecessors of a given set F of target states is generally not upward-closed w.r.t. ≤.
Here we assume a more realistic PLCS model where messages can be lost both before and after discrete steps. This PLCS model is also closer to the classic non-probabilistic LCS model, where messages can likewise be lost before and after discrete steps [AJ96, CFI96]. As a result, the set {s ∈ S : s |= ∃◇F} is always upward-closed w.r.t. ≤.

Definition 4.5. Formally, a PLCS is a tuple L = (S, C, M, T, λ, w) where S is a finite set of control-states, C is a finite set of unbounded fifo-channels, M is a finite set called the message alphabet, T is a set of transitions, 0 < λ < 1 is the message loss rate, and w : T → N>0 is the transition weight function. Each transition t ∈ T is of the form s op−→ s′, where s, s′ ∈ S and op is an operation of one of the following forms: c!m (send message m ∈ M to channel c ∈ C), c?m (receive message m from channel c), or nop (do not modify the channels).
A PLCS L = (S, C, M, T, λ, w) induces a transition system T = (S, −→), where S = S × (M*)^C. That is, each state in S consists of a control-state and a function, called the channel state, that assigns a finite word over the message alphabet to each channel. We define two transition relations −→d (the 'discrete transitions') and −→l (the 'loss transitions'), where −→d models the sending and receiving of messages and the transitions taken in the underlying control structure, and −→l models probabilistic losses of messages.
The relation −→d is defined as follows. If s = (s, x) and s′ = (s′, x′) are states in S, then there is a transition s −→d s′ in the transition system iff there is a transition (s op−→ s′) ∈ T such that one of the following holds:
• op = c!m, x′(c) = x(c) · m, and x′(c′) = x(c′) for all c′ ≠ c (message m is appended to channel c);
• op = c?m, x(c) = m · x′(c), and x′(c′) = x(c′) for all c′ ≠ c (message m is removed from the head of channel c);
• op = nop and x′ = x.
We assume, without loss of generality, that there are no deadlocks. This can be guaranteed by adding self-loops s nop−→ s if necessary. If several discrete transitions are enabled at the same configuration, then the next transition is chosen probabilistically: the probability (P_d) that a particular transition is taken is given by the weight of this transition, divided by the sum of the weights of all currently enabled transitions. Since there are no deadlocks, this is well defined.
The relation −→l models probabilistic losses of messages. We extend the subword ordering ≤ on words first to channel states x, x′ : C → M* by x ≤ x′ iff x(c) ≤ x′(c) for all channels c ∈ C, and then to the transition system states s = (s, x), s′ = (s′, x′) ∈ S by s ≤ s′ iff s = s′ and x ≤ x′. For any s = (s, x) and any x′ such that x′ ≤ x, there is a transition s −→l (s, x′). The probability of loss transitions is given by P_l((s, x), (s, x′)) = a · λ^b · (1 − λ)^c, where a is the number of ways to obtain x′ by losing messages in x, b is the total number of messages lost in all channels, and c is the total number of messages in all channels of x′.
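The factor a counts the distinct sets of positions in x whose deletion yields x′, i.e., the number of ways x′ embeds in x as a subword; this is computable by a standard dynamic program. The following sketch (all names ours) computes P_l for channel states given as dictionaries from channel names to words:

```python
from functools import lru_cache

def embeddings(x, y):
    """Number of distinct position sets in x whose deletion yields y,
    i.e. the number of ways y embeds in x as a subword (the factor a)."""
    @lru_cache(maxsize=None)
    def f(i, j):
        if j == len(y):
            return 1                 # everything remaining in x is lost
        if i == len(x):
            return 0                 # y not yet matched, x exhausted
        ways = f(i + 1, j)           # x[i] is lost
        if x[i] == y[j]:
            ways += f(i + 1, j + 1)  # x[i] is kept and matches y[j]
        return ways
    return f(0, 0)

def loss_prob(x, x2, lam):
    """P_l between channel states (dicts channel -> word):
    a * lam^b * (1 - lam)^c, with a the product of per-channel embedding
    counts, b the number of lost messages, c the number of survivors."""
    a = 1
    for c in x:
        a *= embeddings(x[c], x2[c])
    b = sum(len(x[c]) for c in x) - sum(len(x2[c]) for c in x2)
    c = sum(len(x2[ch]) for ch in x2)
    return a * lam ** b * (1 - lam) ** c
```

Summed over all possible x′ ≤ x, these probabilities add up to 1, since every subset of message positions is lost independently.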
The PLCS induces a Markov chain by alternating the probabilistic transition relations −→l and −→d in such a way that message losses can occur before and after every discrete transition. We say that a set of target states F is effectively representable if a finite set F′ can be computed s.t. F′↑ = F↑, i.e., their upward-closures coincide. (For instance, any context-free language is effectively representable [Cou91].) In [AČJYK00] it is shown that a Markov chain induced by a PLCS is effective w.r.t. any effectively representable set F.
However, many of our results do not strongly depend on a particular PLCS model. The only crucial aspects are the existence of a finite attractor in the induced Markov chain (most PLCS models have it) and the standard decidability results for the underlying non-probabilistic LCS [AJ96, CFI96]. In [AR03], it is shown that each Markov chain induced by a PLCS contains a finite attractor. The PLCS models used here (and in [AR03, BS03, Rab03]) differ from the less realistic models considered previously in [ABIJ00, BE99]: in [BE99] at most one message could be lost during any step, and in [ABIJ00] messages could be lost only during send operations. If one assumes a sufficiently high probability (> 0.5) of message loss for these models, then they also contain a finite attractor. Another PLCS model was studied in [BS04]. It has the same kind of probabilistic message loss as our PLCS, but differs in having nondeterministic choice (subject to external schedulers) instead of probabilistic choice for the transitions, and thus does not yield a Markov chain but a Markov decision process. Another difference is that the model of [BS04] allows (and in some cases requires) idle transitions, which are not present in our PLCS model. However, for any scheduler, the PLCS model of [BS04] also has a finite attractor (w.r.t. the system state, though not necessarily w.r.t. the state of the scheduler).

Noisy Turing Machines. Noisy Turing Machines (NTM) were introduced in [AC05].
They are Turing machines augmented by an additional parameter ǫ > 0 giving the noise level. Each transition of an NTM consists of two steps. First, in the noisy step, the tape cells are subjected to noise: each symbol on each tape may change, independently and with probability ǫ, to a symbol chosen uniformly from the tape alphabet (possibly the same as before). Then, in the normal step, the NTM proceeds like a normal Turing machine.
Probabilistic Turing Machines (PTM) [dLMSS56], which are Turing machines where transitions are random choices among finitely many alternatives, are more general than the model of [AC05]. In fact, any NTM can be simulated by a PTM by adding extra steps where the machine makes a pass over the tapes, changing the symbols randomly. However, as described below, general PTM do not satisfy our conditions.
Probabilistic NTM. In this paper, we adopt the model of Probabilistic Noisy Turing Machines (PNTM), which are a generalization of NTM: the transitions are similar to those of an NTM except that the normal steps are also subject to probabilistic choice. Formally, a PNTM N is a tuple (S, Σ, Γ, M, T, ǫ, w) where S is a finite set of control-states, Σ is the input alphabet, Γ ⊇ Σ ∪ {♯} (where ♯ is the blank symbol) is the tape alphabet, M is the number of tapes, T ⊆ S × Γ^M × S × Γ^M × {−1, 0, 1}^M is the transition relation, ǫ is the noise level, and w : T → N>0 is the weight function. The probability of a transition t ∈ T is given by comparing the weight w(t) to the weights of all possible alternatives.
Assume a PNTM N = (S, Σ, Γ, M, T, ǫ, w). A global state of N can be represented by a triple consisting of (i) the control-state, (ii) the current time, and (iii) an M-tuple of tape configurations. A tape configuration is in turn a triple consisting of (i) the head position; (ii) a finite word ω ∈ Γ* representing the contents of all cells visited by the head so far; and (iii) a |ω|-tuple of natural numbers, each recording the last point in time at which the head visited the corresponding cell. For a set Q ⊆ S, we let Q-states denote the set of all global states whose control-states are in Q.
For a PNTM N = (S, Σ, Γ, M, T, ǫ, w), we use G(N) to denote the graph obtained from N by abstracting away the memory tapes. Formally, G(N) is the tuple (S, T′) where S is the set of control-states of the underlying PNTM N, and T′ ⊆ S × S is obtained from the transition relation of N by projection. Observe that any path in G(N) corresponds to a possible sequence of transitions in N, since in each step the symbols under the reading heads can always change by noise, enabling the desired transition. Such a statement is not possible for general PTM, where the reachability of a control-state still depends on the tape configurations and thus cannot be reduced to a reachability question in the induced graph. Nevertheless, for PNTM the following holds.
Lemma 4.8. Given a PNTM N = (S, Σ, Γ, M, T, ε, w), for any CTL* formula φ over sets of Q-states (for Q ⊆ S) and any global state s, the question s |= φ is decidable.

Proof. Observe that checking s |= φ is equivalent to checking, in G(N), s′ |= φ′, where s′ is the control-state in s and φ′ is the formula obtained from φ by replacing all occurrences of sets of Q-states by the corresponding sets Q of control-states.

A PNTM N induces a Markov chain M = (S, P) on the set of global states. Each transition in M is a combination of a noisy step followed by a normal step. However, in the noisy steps, we assume that cells not under the reading heads are not subject to noise. Observe that this differs from the way noise is added in the model of [AC05] where, for instance, all cells are subject to noise. Intuitively, the noise does not affect the computations of the underlying Turing machine unless it changes a cell which is going to be visited by the reading head; and whether the content of that cell changes at the moment the reading head reaches it, or already changed in a previous step, the resulting computation is the same.
In order to compensate for the missing noise, we assume a higher noise probability for the cell under the head. If the cell was last visited k time units ago, then we increase the noise probability to 1 − (1 − ε)^k. The probability of a transition in the induced Markov chain is obtained by multiplying the noise probability by the probability of the normal step described earlier.

(Footnote: In Theorem 5.4, we prove that this is undecidable when F is a general upward-closed set.)
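The lazy-noise bookkeeping above can be sketched in Python (an illustrative helper, not the paper's construction; in particular, redistributing a corrupted cell uniformly over Γ is our assumption):

```python
import random

def compound_noise_prob(eps, k):
    """Probability that a cell untouched for k steps has been corrupted,
    given per-step noise level eps: 1 - (1 - eps)**k."""
    return 1.0 - (1.0 - eps) ** k

def noisy_read(symbol, last_visit, now, eps, gamma):
    """Lazily apply the accumulated noise to a cell only when the head
    returns to it. Assumption (for illustration only): a corrupted cell
    receives a symbol drawn uniformly from the tape alphabet gamma."""
    if random.random() < compound_noise_prob(eps, now - last_visit):
        return random.choice(gamma)
    return symbol
```

Applying the compound probability once on revisit yields the same distribution over the observed symbol as corrupting the cell independently in each of the k elapsed steps.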
Theorem 4.9. Each Markov chain induced by a PNTM N = (S, Σ, Γ, M, T, ε, w) is coarse, effective and finitely spanning with respect to any set of Q-states, for any Q ⊆ S.
Proof. Assume a PNTM N = (S, Σ, Γ, M, T, ε, w), a set Q ⊆ S and the induced Markov chain M. Let F be the set of Q-states. Effectiveness of M w.r.t. F follows from the definition and Lemma 4.8. For any state s, if s |= ∃◊F then there is a path in G(N) from the control-state of s to a control-state in Q. Such a path has length at most N = |G(N)|. Thus M has span N with respect to F. Along this path, it is possible that, in each step, every symbol under a reading head is changed by noise to the required symbol. Since in each step M cells are subject to noise, and each required change happens with probability ≥ ε/|Γ|, it follows that the probability of the required successor is ≥ (ε/|Γ|)^M. This gives the coarseness of M.
This, combined with Lemma 3.6 and Lemma 3.7, yields the following corollary.
Corollary 4.10. Each Markov chain induced by a PNTM N = (S, Σ, Γ, M, T, ε, w) is decisive, and thus (by Lemma 3.2) strongly decisive, with respect to any set of Q-states, for any Q ⊆ S.

Qualitative Reachability
We consider the qualitative reachability problem for Markov chains, i.e., the problem whether a given set of final states is eventually reached with probability 1, or with probability 0, respectively.

Qual Reach
We show that, for decisive Markov chains, these qualitative questions can be reduced to structural properties of the underlying transition graph. The decidability results for PLCS, PVASS and PNTM are summarized in Table 3.

Proof (of Lemma 5.1). If s_init |= ∃(F̃ Before F) then there is a path π of finite length from s_init to some state in F̃ such that F is not visited in π. Since F cannot be reached from F̃, the set of all continuation runs of the form ππ′ has non-zero probability and never visits F. Thus Prob_M(s_init |= ◊F) < 1.
The reverse implication of Lemma 5.1 holds only for Markov chains which satisfy certain conditions.
Lemma 5.2. Given a Markov chain M and a set F such that M is decisive w.r.t. F, then s_init |= ¬∃(F̃ Before F) implies Prob_M(s_init |= ◊F) = 1.

Lemma 5.2 does not hold for general Markov chains; see Remark 6.3 in Section 6. Now we apply these results to Markov chains derived from PVASS. Interestingly, decidability depends on whether the target set F is a set of Q-states for some Q ⊆ S or a general upward-closed set.
Theorem 5.3. Given a PVASS (S, X, T, w) and a set of final states F which is the set of Q-states for some Q ⊆ S, the question Prob_M(s_init |= ◊F) = 1 is decidable.

Proof. Since any set F of Q-states is upward-closed, we obtain from Corollary 4.4 that the Markov chain derived from our PVASS is decisive w.r.t. such F. Thus, by Lemma 5.1 and Lemma 5.2, we obtain Prob_M(s_init |= ◊F) < 1 ⇐⇒ s_init |= ∃(F̃ Before F). To decide the question s_init |= ∃(F̃ Before F), we construct a modified PVASS (S, X, T′, w′) by removing all outgoing transitions from states q ∈ Q. Formally, T′ contains all transitions of the form (s1, op, s2) ∈ T with s1 ∉ Q, and w′(t) = w(t) for t ∈ T ∩ T′. Furthermore, to avoid deadlocks, we add to each state in Q a self-loop which does not change the values of the variables and whose weight is equal to one. It follows that s_init |= ∃(F̃ Before F) in (S, X, T, w) iff F̃ is reachable in the VASS (S, X, T′). So we obtain that Prob_M(s_init |= ◊F) = 1 in (S, X, T, w) iff F̃ is not reachable in the VASS (S, X, T′).
The condition whether F̃ is reachable in the VASS (S, X, T′) can be checked as follows. Since F is upward-closed, the set of predecessors Pre*(F) is upward-closed and can be effectively constructed by Remark 4.2. Thus the set F̃, the complement of Pre*(F), can be effectively described by a formula Φ in the constraint logic of [Jan90]. Finally, by Theorem 4.1, it is decidable if there is a reachable state in F̃ (i.e., satisfying Φ).
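The effective construction of Pre*(F) for an upward-closed F is the standard backward coverability computation on minimal elements; a minimal sketch (the encoding of states and transitions is ours, purely for illustration):

```python
def leq(u, v):
    """Componentwise order on counter vectors."""
    return all(a <= b for a, b in zip(u, v))

def minimize(gens):
    """Prune a generating set of an upward-closed set of pairs
    (control-state, counter vector) down to its minimal elements."""
    gens = set(gens)
    return {(q, v) for (q, v) in gens
            if not any(p == q and leq(w, v) and (p, w) != (q, v)
                       for (p, w) in gens)}

def pre_star_minimal(transitions, targets):
    """Backward coverability for a VASS: compute the minimal elements of
    Pre*(F), where the upward-closed F is given by its minimal elements
    `targets`. A transition (s1, d, s2) adds the integer vector d to the
    counters. Termination of the loop follows from Dickson's lemma."""
    basis = minimize(targets)
    while True:
        new = set(basis)
        for (s1, d, s2) in transitions:
            for (q, m) in basis:
                if q == s2:
                    # least v >= 0 with v + d >= m
                    new.add((s1, tuple(max(mi - di, 0)
                                       for mi, di in zip(m, d))))
        new = minimize(new)
        if new == basis:
            return basis
        basis = new

# Tiny example: from p one may decrement the counter and move to q,
# or increment it and stay in p; F is "all q-states".
ts = [('p', (-1,), 'q'), ('p', (1,), 'p')]
assert pre_star_minimal(ts, {('q', (0,))}) == {('q', (0,)), ('p', (0,))}
```

The returned minimal elements generate Pre*(F) as an upward-closed set; membership of any configuration is then a finite comparison.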
The situation changes if one considers as final states F not a set of Q-states, but rather a general upward-closed set F (described by its finitely many minimal elements). In this case one cannot effectively check the condition s_init |= ∃(F̃ Before F).

Theorem 5.4. Given a PVASS V = (S, X, T, w) and an upward-closed set of final states F (represented by its finitely many minimal elements), the question Prob_M(s_init |= ◊F) = ρ is undecidable for any ρ ∈ (0, 1].

We will need the following definition for the proof.

Definition 5.5. We define a PVASS which weakly simulates a Minsky [Min67] 2-counter machine. Since this construction will be used in several proofs (Theorem 5.4 and Theorem 9.1), it contains a parameter x > 0 which will be instantiated as needed.
Consider a deterministic Minsky 2-counter machine M with a set of control-states K, initial control-state k_0, final accepting state k_acc, two counters c_1 and c_2 which are initially zero, and the usual instructions of increment and test-for-zero-decrement. For technical reasons we require the following conditions on the behavior of M.
• Either M terminates in control-state k_acc, or
• M does not terminate. In this case we require that in its infinite run it infinitely often tests a counter for zero in a configuration where the tested counter contains a non-zero value.

We call a counter machine that satisfies these conditions an IT-2-counter machine (IT for 'infinitely testing'). Any 2-counter machine M′ can be effectively transformed into an equivalent IT-2-counter machine M by the following operations. After every instruction of M′ we add two new instructions: first increment c_1 by 1 (thus it is now certainly non-zero); then test c_1 for zero (this test always yields the answer 'no') and decrement it by 1 (so it has its original value again), and continue with the next instruction of M′. So M is infinitely testing and accepts if and only if M′ accepts. Since acceptance is undecidable for 2-counter machines [Min67], it follows that acceptance is also undecidable for IT-2-counter machines.
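The transformation into an infinitely testing machine can be sketched as follows (the instruction encoding, labels and helper names are ours, purely for illustration): every control-flow edge is routed through a gadget that increments c_1 and then performs a test-for-zero-decrement on c_1.

```python
def to_infinitely_testing(prog):
    """Route every control-flow edge of a 2-counter program through a
    gadget that increments c1 and then does a test-for-zero-decrement on
    c1; the zero branch is never taken, since c1 was just incremented.
    prog maps labels to ('inc', counter, next) or
    ('jzdec', counter, next_if_zero, next_if_nonzero)."""
    def g(n):          # entry label of the gadget guarding target n
        return ('g', n)

    out = {}
    for lab, ins in prog.items():
        if ins[0] == 'inc':
            out[lab] = ('inc', ins[1], g(ins[2]))
            targets = [ins[2]]
        else:  # test-for-zero-decrement
            out[lab] = ('jzdec', ins[1], g(ins[2]), g(ins[3]))
            targets = [ins[2], ins[3]]
        for n in targets:
            out[g(n)] = ('inc', 'c1', ('t', n))
            # the test always answers 'no' here, so both branches may
            # point to n; the nonzero branch decrements c1 back again
            out[('t', n)] = ('jzdec', 'c1', n, n)
    return out

p = to_infinitely_testing({'k0': ('inc', 'c2', 'kacc')})
assert p['k0'] == ('inc', 'c2', ('g', 'kacc'))
assert p[('t', 'kacc')] == ('jzdec', 'c1', 'kacc', 'kacc')
```

Every step of the transformed machine performs a non-trivial zero test on its way to the next original instruction, so any infinite run is infinitely testing, and acceptance is preserved.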
Proof (of Theorem 5.4). Since F is upward-closed, we obtain from Corollary 4.4 that the Markov chain derived from our PVASS is decisive w.r.t. F. Thus, by Lemma 5.1 and Lemma 5.2, we have Prob_M(s_init |= ◊F) < 1 ⇐⇒ s_init |= ∃(F̃ Before F). Now we show that the condition s_init |= ∃(F̃ Before F) is undecidable if F is a general upward-closed set. We use the IT-2-counter machine M and the PVASS V from Def. 5.5 and instantiate the parameter x := 1. Let F be the set of configurations where transitions of type δ are enabled. This set is upward-closed (because of the monotonicity of VASS) and effectively constructible (i.e., its finitely many minimal elements can be computed). It follows directly from the construction in Def. 5.5 that a transition of type δ is enabled if and only if the PVASS has been unfaithful in the simulation of the 2-counter machine, i.e., if a counter was non-zero and a 'zero' transition (of type β) has wrongly been taken instead of the correct 'decrement' transition (of type α).
If the 2-counter machine M accepts then there is a run in the PVASS V which faithfully simulates the run of M, and thus never enables transitions of type δ, and thus avoids the set F. Since the k_acc-states have no outgoing transitions (except for the self-loop), they can never reach F and are thus contained in F̃. Thus s_init |= ∃(F̃ Before F).
If the 2-counter machine M does not accept then its run is infinite. By our convention in Def. 5.5, M is an IT-2-counter machine and every infinite run must contain infinitely many non-trivial tests for zero. Thus in our PVASS V, the set F is reachable from every reachable state s′ which was reached in a faithful simulation of M, i.e., without visiting F before. Therefore in V the set F̃ cannot be reached unless F is visited first, and so we get s_init |= ¬∃(F̃ Before F). We obtain that M accepts iff s_init |= ∃(F̃ Before F) iff Prob_M(s_init |= ◊F) < 1. This proves the undecidability of the problem for the case of ρ = 1.
To show the undecidability for general ρ ∈ (0, 1] we modify the construction as follows. Consider a new PVASS V′ which with probability ρ does the same as V defined above, and with probability 1 − ρ immediately goes to the accepting state k_acc. Then Prob_{V′}(s_init |= ◊F) = ρ · Prob_V(s_init |= ◊F), so the IT-2-counter machine M does not accept iff Prob_{V′}(s_init |= ◊F) = ρ.

Notice the difference between Theorem 5.3 and Theorem 5.4 in the case of ρ = 1. Unlike for non-probabilistic VASS, reachability of control-states and reachability of upward-closed sets cannot be effectively expressed in terms of each other for PVASS.

Theorem 5.6. Consider a PLCS L and an effectively representable set of final states F. Then the question Prob_M(s_init |= ◊F) = 1 is decidable.

Proof. By Corollary 4.7, the Markov chain induced by L is decisive w.r.t. such F. Thus we obtain from Lemma 5.1 and Lemma 5.2 that Prob_M(s_init |= ◊F) = 1 iff s_init |= ¬∃(F̃ Before F). This condition can be checked with a standard construction for LCS (from [AR03]) as follows. First one can effectively compute the set F̃ from Pre*(F), using the techniques from, e.g., [AJ96]. Next one computes the set X of all configurations from which it is possible to reach F̃ without passing through F. This is done as follows. Let X_0 := F̃↑ and X_{i+1} := X_i ∪ (Pre(X_i) ∖ F)↑. Since all X_i are upward-closed, this construction converges at some finite index n, by Higman's Lemma [Hig52]. We get that X = X_n is effectively constructible. Finally we have that Prob_M(s_init |= ◊F) = 1 iff s_init ∉ X, which can be effectively checked.
Notice that, unlike in earlier work [AR03, BS03], it is not necessary to compute the finite attractor of the PLCS-induced Markov chain for Theorem 5.6. It suffices to know that it exists. For PLCS it is very easy to construct the finite attractor, but this need not hold for other classes of systems with attractors. The criterion given by Lemma 5.1 and Lemma 5.2, however, applies in any case.
Theorem 5.7. Given a PNTM N = (S, Σ, Γ, M, T, ε, w) and a set of final states F which is a set of Q-states for some Q ⊆ S, the question Prob_M(s_init |= ◊F) = 1 is decidable.

Proof. By Corollary 4.10, we obtain that the Markov chain M derived from N is decisive w.r.t. F. This combined with Lemma 5.1 and Lemma 5.2 yields Prob_M(s_init |= ◊F) < 1 ⇐⇒ s_init |= ∃(F̃ Before F). Observe that since F is a set of Q-states, we obtain by Lemma 4.8 that we can compute a set Q′ ⊆ S such that F̃ is the set of Q′-states. Since F and F̃ are sets of Q-states and Q′-states respectively, it follows by Lemma 4.8 that the question s_init |= ∃(F̃ Before F) is decidable. This gives the result.

Now we consider the question Prob_M(s_init |= ◊F) = 0. The following property trivially holds for all Markov chains.

Lemma 5.8. For any Markov chain M and any set of states F, Prob_M(s_init |= ◊F) = 0 iff s_init |= ¬∃◊F.
The reachability problem for Petri nets/VASS is decidable [May84], and the following result is a direct consequence of Lemma 5.8 and Theorem 4.1.
Theorem 5.9. Given a PVASS V = (S, X, T, w) and a set of final states F which is expressible in the constraint logic of [Jan90] (in particular any upward-closed set, any finite set, and their complements), the question Prob_M(s_init |= ◊F) = 0 is decidable.
From Lemma 5.8 and the result that for LCS the set of all predecessors of any effectively representable set can be effectively constructed (e.g., [AJ96]), we get the following.
Theorem 5.10. Given a PLCS L and a set of final states F which is effectively representable, the question Prob_M(s_init |= ◊F) = 0 is decidable.
By Lemma 4.8, we obtain the following.

Theorem 5.11. Given a PNTM N = (S, Σ, Γ, M, T, ε, w) and a set F of Q-states for some Q ⊆ S, the question Prob_M(s_init |= ◊F) = 0 is decidable.

Qualitative Repeated Reachability
Here we consider the qualitative repeated reachability problem for Markov chains, i.e., the problem whether a given set of final states F is visited infinitely often with probability 1, or with probability 0, respectively.

Qual Rep Reach
We show that, for decisive Markov chains, these qualitative questions can be reduced to structural properties of the underlying transition graph. The decidability results for PLCS, PVASS and PNTM are summarized in Table 4.

First we consider the problem whether Prob_M(s_init |= □◊F) = 1. The following lemma holds for any Markov chain and any set of states F.

Lemma 6.1. Let M = (S, P) be a Markov chain and F ⊆ S. Then Prob_M(s_init |= □◊F) = 1 implies s_init |= ∀□∃◊F.

Proof. Suppose that s_init ̸|= ∀□∃◊F. Then s_init |= ∃◊∀□¬F. Thus there exists a finite path π starting from s_init and leading to a state s such that s |= ∀□¬F. The set of all s_init-runs of the form ππ′ (for any π′) has non-zero probability, and they all satisfy ¬□◊F. So we get that Prob_M(s_init |= □◊F) < 1.

The reverse implication of Lemma 6.1 does not hold in general, but it is true for strongly decisive Markov chains.

Lemma 6.2. Given a Markov chain M and a set F such that M is strongly decisive w.r.t. F, then s_init |= ∀□∃◊F implies Prob_M(s_init |= □◊F) = 1.

Proof. We show that Prob_M(s_init |= ◊□¬F) = 0, which implies the result.
If s_init |= ∀□∃◊F then every state s reached by runs from s_init satisfies s |= ∃◊F, i.e., no run from s_init ever visits F̃. Since the Markov chain is strongly decisive, Prob_M(s_init |= □◊F ∨ ◊F̃) = 1, and since Prob_M(s_init |= ◊F̃) = 0, we obtain Prob_M(s_init |= □◊F) = 1 and thus Prob_M(s_init |= ◊□¬F) = 0.

Remark 6.3. Neither Lemma 5.2 nor Lemma 6.2 holds for general Markov chains. A counterexample is the Markov chain M = (S, P) of the 'gambler's ruin' problem [Fel66] where S = N, P(i, i+1) := x and P(i, i−1) := 1 − x for i ≥ 1, P(0, 0) = 1, and F := {0}, for some parameter x with 1/2 < x < 1. Every state i satisfies i |= ∃◊F, so F̃ = ∅; but for x > 1/2 we have Prob_M(i |= ◊F) = ((1−x)/x)^i < 1 for every i ≥ 1, so the chain is not decisive w.r.t. F.

Now we show that it is decidable if a PVASS almost certainly reaches an upward-closed set of final states infinitely often.

Theorem 6.4. Let V = (S, X, T, w) be a PVASS and F an upward-closed set of final states. Then the question Prob_M(s_init |= □◊F) = 1 is decidable.
Proof. Since F is upward-closed, we obtain from Corollary 4.4 that the Markov chain derived from our PVASS is strongly decisive w.r.t. F. Thus it follows from Lemma 6.1 and Lemma 6.2 that Prob_M(s_init |= □◊F) = 1 ⇐⇒ s_init |= ∀□∃◊F. This condition can be checked as follows. Since F is upward-closed and represented by its finitely many minimal elements, the set Pre*(F) is upward-closed and effectively constructible by Remark 4.2. Then F̃, the complement of Pre*(F), is downward-closed and effectively representable by a formula Φ in the constraint logic of [Jan90]. We get that s_init |= ∀□∃◊F iff s_init |= ¬∃◊F̃, i.e., iff there is no reachable state that satisfies Φ. This is decidable by Theorem 4.1.
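The gambler's-ruin chain of Remark 6.3 can be checked numerically; the closed form below is the standard ruin-probability formula for this chain (a textbook fact, not taken from this paper):

```python
def ruin_prob(x, i):
    """Probability of ever reaching 0 from state i in the chain with
    P(i, i+1) = x and P(i, i-1) = 1 - x: the standard gambler's-ruin
    closed form ((1-x)/x)**i, valid for x > 1/2."""
    return ((1.0 - x) / x) ** i

# Sanity check: the closed form satisfies the one-step recurrence
# p_i = (1 - x) * p_(i-1) + x * p_(i+1).
x = 0.6
for i in range(1, 5):
    step = (1 - x) * ruin_prob(x, i - 1) + x * ruin_prob(x, i + 1)
    assert abs(ruin_prob(x, i) - step) < 1e-9

# With F = {0} every state can reach F, so F~ is empty; yet the
# probability of reaching F from state 1 is only 2/3 < 1, so the
# chain is not decisive w.r.t. F.
assert abs(ruin_prob(0.6, 1) - 2 / 3) < 1e-9
```

Since Prob(◊F ∨ ◊F̃) = Prob(◊F) < 1 here, this confirms the failure of decisiveness claimed in Remark 6.3.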
Notice the surprising contrast between the decidability of repeated reachability in Theorem 6.4 and the undecidability of simple reachability in Theorem 5.4. Now we show the decidability result for PLCS.
Theorem 6.5. Consider a PLCS L and an effectively representable set of final states F. Then the question Prob_M(s_init |= □◊F) = 1 is decidable.
Proof. By Corollary 4.7, L induces a strongly decisive Markov chain. Thus, we obtain from Lemma 6.1 and Lemma 6.2 that Prob_M(s_init |= □◊F) = 1 ⇐⇒ s_init |= ∀□∃◊F. This condition can be checked as follows. First one can effectively compute the set F̃ from Pre*(F). Next one computes the set X of all configurations from which it is possible to reach F̃, i.e., X := Pre*(F̃). Finally we have that Prob_M(s_init |= □◊F) = 1 iff s_init |= ∀□∃◊F iff s_init ∉ X, which can be effectively checked.
Similarly as in Theorem 5.6, it is not necessary to compute the finite attractor of the PLCS-induced Markov chain for Theorem 6.5.
Next we prove the decidability result for PNTM.
Theorem 6.6. Given a PNTM N = (S, Σ, Γ, M, T, ε, w) and a set F of Q-states for some Q ⊆ S, the question Prob_M(s_init |= □◊F) = 1 is decidable.

Proof. Since F is a set of Q-states, we obtain from Corollary 4.10 that the Markov chain derived from N is strongly decisive w.r.t. F. Thus it follows from Lemma 6.1 and Lemma 6.2 that Prob_M(s_init |= □◊F) = 1 ⇐⇒ s_init |= ∀□∃◊F. We can check this condition by Lemma 4.8.

Now we consider the question Prob_M(s_init |= □◊F) = 0. We start by establishing some connections between the probabilities of reaching certain sets at least once or infinitely often. In the following, F̃̃ denotes the set of states from which F̃ cannot be reached, i.e., F̃̃ = (F̃)̃; equivalently, F̃̃ is the set of states satisfying ∀□∃◊F. From the definitions we get the following.

Lemma 6.7. For any Markov chain and any set of states F, Prob_M(s_init |= ◊F̃) = 1 implies Prob_M(s_init |= □◊F) = 0.

The following lemma implies that the reverse implication holds for strongly decisive Markov chains.

Lemma 6.8. Given a Markov chain M which is strongly decisive w.r.t. a given set F, then Prob_M(s_init |= □◊F) = 0 implies Prob_M(s_init |= ◊F̃) = 1.

There is also a correspondence of the condition Prob_M(s_init |= □◊F) = 0 to a property of the underlying transition graph.

Lemma 6.9. For any Markov chain M and any set of states F, s_init |= ∃◊F̃̃ implies Prob_M(s_init |= ◊F̃) < 1.
If M is decisive w.r.t. F̃ then the reverse implication holds (Lemma 6.10).

Proof (of Theorem 6.11). Directly from Lemmas 6.7, 6.8, 6.9 and 6.10.

Remark 6.12. Observe that decisiveness w.r.t. a given set F does not imply decisiveness w.r.t. F̃. Therefore the reverse implication of Lemma 6.9 does not hold in general. In particular, it does hold for Markov chains with a finite attractor (since they are decisive w.r.t. every set), but not generally under global coarseness. This is because global coarseness depends on the set of final states: global coarseness of a Markov chain w.r.t. a certain set F does not imply global coarseness w.r.t. the set F̃.

Now we show the decidability results for PLCS and PNTM.

Theorem 6.13. Consider a PLCS L and an effectively representable set of final states F. Then the question Prob_M(s_init |= □◊F) = 0 is decidable.

Proof. By Corollary 4.7, L induces a strongly decisive Markov chain w.r.t. every set of states. In particular it is decisive w.r.t. F and F̃. Therefore, by Theorem 6.11, it suffices to check if s_init |= ∃◊F̃̃. Since Pre*(F), and thus F̃ and F̃̃, are effectively constructible, one can effectively construct a symbolic representation of the set of all states which satisfy ∃◊F̃̃ (using the techniques from, e.g., [AJ96]) and check whether s_init is in this set.

Theorem 6.14. Given a PNTM N = (S, Σ, Γ, M, T, ε, w) and a set F of Q-states for some Q ⊆ S, the question Prob_M(s_init |= □◊F) = 0 is decidable.
Proof. Observe that since F is a set of Q-states, by Lemma 4.8 we can construct a set Q′ ⊆ S such that F̃ is exactly the set of Q′-states. By Corollary 4.10, the Markov chain induced by N is decisive w.r.t. any set of Q′′-states for Q′′ ⊆ S. In particular, it is decisive w.r.t. the set of Q′-states, i.e., w.r.t. F̃. By Theorem 6.11, it suffices to check if s_init |= ∃◊F̃̃, which is again decidable by Lemma 4.8.

Remark 6.15. For PVASS, decidability of Prob_M(s_init |= □◊F) = 0 and of the equivalent question Prob_M(s_init |= ◊F̃) = 1 is open. For an upward-closed set F the set F̃ is downward-closed, but in general not a set of Q-states, and thus Theorem 5.3 does not always apply. The question Prob_M(s_init |= ◊F̃) = 1 can certainly not be reduced to purely structural questions about the underlying transition system (unlike for PLCS), because it depends on the exact values of the probabilities, i.e., on the transition weights.
Furthermore, for PVASS, the probability Prob_M(s_init |= □◊F) cannot be effectively expressed in the first-order theory of the reals (ℝ, +, ·, ≤), as shown in Section 9, Remark 9.5.

Approximate Quantitative Reachability
In this section we consider the approximate quantitative reachability problem.

Approx Quant Reach
We show that this problem is effectively solvable for PLCS, PVASS and PNTM, provided that the induced Markov chain is decisive w.r.t. F.
First, we present a path enumeration algorithm, based on [IN97], for solving the problem, and then we show that the algorithm is guaranteed to terminate for all instances where M is decisive w.r.t. F.
Given an effective Markov chain M = (S, P), a state s_init ∈ S, a set F and a positive ε ∈ Q_{>0}, the algorithm constructs (a prefix of) the reachability-tree from s_init in a breadth-first fashion. The nodes of the tree are labeled with pairs (s, r) where s ∈ S and r is the probability of traversing the path from the root to the current node, i.e., the product of the probabilities of all transitions along that path. The algorithm maintains two variables Yes and No which accumulate the probabilities by which the set F is certainly reached (respectively, certainly not reached). Each step of the algorithm can be implemented due to the effectiveness of M. The algorithm runs until the sum of Yes and No exceeds 1 − ε.
We require that the Markov chain is effective w.r.t. F so that the condition s ∈ F̃ in line 5 of Algorithm 1 can be effectively checked.
Let Yes_j(M, s_init) denote the value of the variable Yes after the algorithm has explored the reachability-tree with root s_init up to depth j (i.e., any element (s, r) in store satisfies s_init →^{≤ j+1} s). We define No_j(M, s_init) in a similar manner. First we show partial correctness of Algorithm 1.
Lemma 7.1. If Algorithm 1 terminates at depth j then Yes_j(M, s_init) ≤ Prob_M(s_init |= ◊F) ≤ Yes_j(M, s_init) + ε.

Proof. It is straightforward to check that for each j ≥ 0 we have Yes_j(M, s_init) ≤ Prob_M(s_init |= ◊F) ≤ 1 − No_j(M, s_init). The result follows from the fact that Yes_j(M, s_init) + No_j(M, s_init) ≥ 1 − ε when the algorithm terminates.

Theorem 7.5. Approx Quant Reach is solvable for PNTM in case F is a set of Q-states.
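Algorithm 1 can be rendered in Python as follows (a toy sketch: the chain is given by a successor function, membership in F and F̃ by callables, and exact rationals avoid rounding issues; for an infinite-state chain these callables would be symbolic). The four-state chain below is a hypothetical example with Prob(◊F) = 2/3.

```python
from collections import deque
from fractions import Fraction

def approx_reach(post, in_F, in_Ftilde, s_init, eps):
    """Sketch of Algorithm 1: breadth-first path enumeration approximating
    Prob(s_init |= eventually F). post(s) yields (successor, prob) pairs;
    in_F / in_Ftilde decide membership in F and in F~ (the states that
    cannot reach F). Terminates whenever the chain is decisive w.r.t. F;
    returns theta with theta <= Prob <= theta + eps."""
    yes = no = Fraction(0)
    store = deque([(s_init, Fraction(1))])
    while yes + no < 1 - eps:
        s, r = store.popleft()
        if in_F(s):
            yes += r
        elif in_Ftilde(s):
            no += r
        else:
            for t, p in post(s):
                store.append((t, r * p))
    return yes

# Hypothetical example: Prob(reach 'goal' from 'a') = 2/3.
P = {'a': [('goal', Fraction(1, 2)), ('b', Fraction(1, 2))],
     'b': [('a', Fraction(1, 2)), ('sink', Fraction(1, 2))]}
theta = approx_reach(lambda s: P[s], lambda s: s == 'goal',
                     lambda s: s == 'sink', 'a', Fraction(1, 100))
assert theta <= Fraction(2, 3) <= theta + Fraction(1, 100)
```

Here 'goal' plays the role of F and 'sink' the role of F̃, so every explored path is eventually classified and Yes + No converges to 1.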

Approximate Quantitative Repeated Reachability
In this section we approximate the probability of reaching a given set of states infinitely often, i.e., we compute arbitrarily close approximations of Prob_M(s_init |= □◊F).

Approx Quant Rep Reach

Instance
• A Markov chain M = (S, P), a state s_init ∈ S, a set of states F ⊆ S and a positive ε ∈ Q_{>0}.

We present an algorithm which is a modification of Algorithm 1 (in Section 7) and show that it is guaranteed to terminate for all Markov chains which are decisive w.r.t. both F and F̃. We require that the Markov chain is effective w.r.t. F and F̃ so that the conditions s ∈ F̃̃ (the set of states from which F̃ is unreachable; line 4) and s ∈ F̃ (line 5) can be effectively checked.
We define Yes_j(M, s_init) and No_j(M, s_init) as the values of the variables Yes and No after the algorithm has explored the reachability-tree with root s_init up to depth j, similarly as for Algorithm 1. Lemma 8.1 below shows the partial correctness of Algorithm 2.

Proof (of Theorem 8.4). Assume a PNTM N = (S, Σ, Γ, M, T, ε, w) and a set F of Q-states for some Q ⊆ S. Since F is the set of Q-states, it follows by Lemma 4.8 that F̃ is also a set of Q′-states for some Q′ ⊆ S. By Corollary 4.10, the Markov chain induced by a PNTM is decisive w.r.t. every set of Q′′-states, in particular w.r.t. F and F̃. Thus it follows from Lemma 8.1 and Lemma 8.2 that Algorithm 2 solves Approx Quant Rep Reach for PNTM.
A similar result for PVASS would require the explicit assumption that the induced Markov chain is decisive w.r.t. F and F̃. It is not sufficient that F is upward-closed, because this only implies decisiveness w.r.t. F, not necessarily w.r.t. F̃ (see Remark 6.15).
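The modification for repeated reachability replaces the test s ∈ F by s ∈ F̃̃ (states satisfying ∀□∃◊F, from which □◊F holds almost surely in strongly decisive chains). A toy Python sketch, with an encoding of our own chosen purely for illustration:

```python
from collections import deque
from fractions import Fraction

def approx_rep_reach(post, in_F2tilde, in_Ftilde, s_init, eps):
    """Sketch of Algorithm 2: approximates Prob(s_init |= infinitely
    often F). in_F2tilde decides membership in F~~ (states from which
    F~ is unreachable, i.e. satisfying AG EF F); in_Ftilde decides
    membership in F~. Terminates whenever the chain is decisive w.r.t.
    F~; returns theta with theta <= Prob <= theta + eps."""
    yes = no = Fraction(0)
    store = deque([(s_init, Fraction(1))])
    while yes + no < 1 - eps:
        s, r = store.popleft()
        if in_F2tilde(s):
            yes += r
        elif in_Ftilde(s):
            no += r
        else:
            for t, p in post(s):
                store.append((t, r * p))
    return yes

# Hypothetical chain: 'goal' (in F, absorbing) lies in F~~ and 'sink'
# lies in F~; Prob(infinitely often F from 'a') = 2/3.
P = {'a': [('goal', Fraction(1, 2)), ('b', Fraction(1, 2))],
     'b': [('a', Fraction(1, 2)), ('sink', Fraction(1, 2))]}
theta = approx_rep_reach(lambda s: P[s], lambda s: s == 'goal',
                         lambda s: s == 'sink', 'a', Fraction(1, 100))
assert theta <= Fraction(2, 3) <= theta + Fraction(1, 100)
```

As the text notes, correctness of the Yes-accumulation relies on strong decisiveness w.r.t. F, while termination relies on decisiveness w.r.t. F̃.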

Exact Quantitative Analysis
In this section we consider the Exact Quantitative Reachability Analysis Problem, defined as follows. For PVASS, PLCS and PNTM, we show that the probability Prob_M(s_init |= ◊F) (and thus the question of Exact Quant Reach) cannot be effectively expressed in a uniform way in the first-order theory of the reals (ℝ, +, ·, ≤), or any other decidable logical theory with first-order quantification. By 'expressed in a uniform way' we mean that parameters of the system (e.g., transition weights for PVASS, the message loss probability for PLCS or the noise parameter for PNTM) should be reflected directly by constants in the constructed formula (see the remarks at the end of this section for details). This negative result for PVASS, PLCS and PNTM is in contrast to the situation for probabilistic pushdown automata (probabilistic recursive systems), for which this probability can be effectively expressed in (ℝ, +, ·, ≤) [EKM04, EKM06, EY05b, EE04].

Exact Quant Reach
A surprising result was that reachability of control-states and reachability of upward-closed sets cannot be effectively expressed in terms of each other for PVASS, unlike for normal VASS (Section 5). Furthermore, for probabilistic systems, reachability is not always easier to decide than repeated reachability (Theorems 5.4 and 6.4).
Open questions for future work are the decidability of qualitative reachability problems for Markov chains with downward-closed sets of final states, and an algorithm to approximate quantitative repeated reachability in PVASS. Furthermore, the decidability of exact quantitative questions like Prob_M(s_init |= ◊F) ≥ 0.5 is still open for PVASS, PLCS and (P)NTM.

Lemma 3.6. If a Markov chain is coarse and finitely spanning w.r.t. a set F then it is globally coarse w.r.t. F.

Proof. If a Markov chain is coarse (with coarseness β > 0) and finitely spanning w.r.t. a given set F (with span N) then it is globally coarse w.r.t. the same set F (define α := β^N).

Theorem 4.6. Each Markov chain induced by a PLCS contains a finite attractor and is effective w.r.t. any effectively representable set of global states F.

From this result, Lemma 3.2 and Lemma 3.4, we obtain the following corollary.

Corollary 4.7. Each Markov chain induced by a PLCS is decisive w.r.t. every set F, and thus strongly decisive w.r.t. every set F.
(Continuation of the proof of Lemma 4.8.) It follows that the set of global states satisfying φ is exactly the set of Q-states for Q = {s ∈ S : s |= φ′}. Since G(N) is finite, the result follows from the decidability of CTL* model-checking for finite-state systems [CGP99].

First we consider the problem whether Prob_M(s_init |= ◊F) = 1. The following lemma holds for any Markov chain and any set of states F.

Lemma 5.1. Prob_M(s_init |= ◊F) = 1 implies s_init |= ¬∃(F̃ Before F).
Proof (of Lemma 6.9). If s_init |= ∃◊F̃̃ then there exists a finite path starting from s_init and leading to some state in F̃̃, so Prob_M(s_init |= ◊F̃̃) > 0. Observe that any state s reached by runs in (s_init |= ◊F̃̃) before reaching F̃̃ satisfies s |= ∃◊F̃̃. So we have s |= ∃◊(∀□∃◊F), and therefore s |= ∃◊F and thus s |= ¬F̃. Furthermore, every state s reached by runs in (s_init |= ◊F̃̃) after reaching F̃̃ also satisfies s |= ¬F̃, because, by definition, F̃ cannot be reached from F̃̃. This yields (s_init |= ◊F̃̃) ⊆ (s_init |= □¬F̃), which implies 0 < Prob_M(s_init |= ◊F̃̃) ≤ Prob_M(s_init |= □¬F̃). Finally, we obtain Prob_M(s_init |= ◊F̃) ≤ 1 − Prob_M(s_init |= ◊F̃̃) < 1.

Lemma 6.10. Given a Markov chain M and a set of states F such that M is decisive w.r.t. F̃, the condition Prob_M(s_init |= ◊F̃) < 1 implies s_init |= ∃◊F̃̃.

Proof. As M is decisive w.r.t. F̃, it follows that Prob_M(s_init |= ◊F̃ ∨ ◊F̃̃) = 1. Since by assumption Prob_M(s_init |= ◊F̃) < 1, it follows that Prob_M(s_init |= ◊F̃̃) > 0, which implies s_init |= ∃◊F̃̃.

Theorem 6.11. For any Markov chain M and any set F such that M is decisive w.r.t. F:
s_init |= ¬∃◊F̃̃ ⇐ Prob_M(s_init |= ◊F̃) = 1 ⇐⇒ Prob_M(s_init |= □◊F) = 0.
For any Markov chain M and any set F such that M is decisive w.r.t. F and w.r.t. F̃:
s_init |= ¬∃◊F̃̃ ⇐⇒ Prob_M(s_init |= ◊F̃) = 1 ⇐⇒ Prob_M(s_init |= □◊F) = 0.

Instance
• A Markov chain M = (S, P)
• A state s_init ∈ S
• A set of states F ⊆ S
• A rational ν
Task Check whether Prob_M(s_init |= ◊F) ≥ ν.

By Theorem 5.4, Exact Quant Reach is undecidable for PVASS and upward-closed sets F. If F is a set of Q-states then decidability of Exact Quant Reach is open for PVASS, PLCS and PNTM.

Table 1: Computability results for quantitative problems, for Prob_M(s_init |= ◊F) and Prob_M(s_init |= □◊F). All results here concern the effective expressibility of the probability in Tarski-algebra.

Table 2: Computability results for quantitative problems. All results here concern the effective expressibility of the probability in Tarski-algebra.
Lemma 3.7. Given a Markov chain M and a set F such that M is globally coarse w.r.t. F, then M is decisive w.r.t. F.

Proof. Assume a Markov chain M = (S, P), a state s and a set F ⊆ S such that M is globally coarse w.r.t. F. All states s′ visited by runs in (s |= □¬F ∧ □¬F̃) satisfy s′ |= ∃◊F, because s′ ∉ F̃. Since M is globally coarse w.r.t. F, there exists some universal constant α > 0 such that Prob_M(s′ |= ◊F) ≥ α for any s′ which is visited by those runs. Therefore, Prob_M(s |= □¬F ∧ □¬F̃) = 0, i.e., M is decisive w.r.t. F.

Consider the Markov chain derived from a PVASS. By applying Remark 4.2 and Lemma 3.6 we obtain the following theorem.

Probabilistic Lossy Channel Systems. A Lossy Channel System (LCS) consists of a finite-state process operating on a finite set of channels, each of which behaves as a FIFO buffer which is unbounded and unreliable in the sense that it can spontaneously lose messages [AJ96, CFI96].
This combined with Lemma 3.2 and Lemma 3.7 yields the following corollary.

Corollary 4.4. Each Markov chain induced by a PVASS is decisive (and thus, by Lemma 3.2, strongly decisive) w.r.t. any upward-closed set of final states.

Table 3: Decidability results for qualitative reachability.

Table 4: Decidability results for qualitative problems of repeated reachability.
Algorithm 1 – Approx Quant Reach
Input A Markov chain M = (S, P), a state s_init ∈ S, a set F ⊆ S and a positive ε ∈ Q_{>0} such that M is effective w.r.t. F.
Return value A rational θ such that θ ≤ Prob_M(s_init |= ◊F) ≤ θ + ε
Variables Yes, No: Q (initially both are set to 0); store: queue with elements in S × Q
begin
1. store := (s_init, 1)
2. repeat
3. remove (s, r) from store
4. if s ∈ F then Yes := Yes + r
5. else if s ∈ F̃ then No := No + r
6. else for each s′ ∈ Post(s)
7. add (s′, r · P(s, s′)) to the end of store
8. until Yes + No ≥ 1 − ε
9. return Yes
end

Lemma 7.2. Algorithm 1 terminates in case the Markov chain M is decisive w.r.t. F.

Proof. Since M is decisive we have Prob_M(s_init |= ◊F ∨ ◊F̃) = 1. Therefore lim_{j→∞}(Yes_j + No_j) = 1, which implies termination of the algorithm.

From Lemma 7.1 and Lemma 7.2 it follows that Approx Quant Reach is solvable for Markov chains which are globally coarse w.r.t. the target set and for Markov chains which contain a finite attractor. This, together with Theorem 4.3 and Theorem 4.6, yields the following theorems.

Theorem 7.3. Approx Quant Reach is solvable for PVASS in case F is upward-closed.

Theorem 7.4. Approx Quant Reach is solvable for PLCS in case F is effectively representable.
Algorithm 2 – Approx Quant Rep Reach
Input A Markov chain M = (S, P), a state s_init ∈ S, a set F ⊆ S and a positive ε ∈ Q_{>0} such that M is effective w.r.t. F and F̃.
Return value A rational θ such that θ ≤ Prob_M(s_init |= □◊F) ≤ θ + ε
Variables Yes, No: Q (initially both are set to 0); store: queue with elements in S × Q
begin
1. store := (s_init, 1)
2. repeat
3. remove (s, r) from store
4. if s ∈ F̃̃ then Yes := Yes + r
5. else if s ∈ F̃ then No := No + r
6. else for each s′ ∈ Post(s)
7. add (s′, r · P(s, s′)) to the end of store
8. until Yes + No ≥ 1 − ε
9. return Yes
end

Lemma 8.1. For a Markov chain M and a set F such that M is strongly decisive w.r.t. F, if Algorithm 2 terminates at depth j then Yes_j(M, s_init) ≤ Prob_M(s_init |= □◊F) ≤ Yes_j(M, s_init) + ε.

Proof. If Algorithm 2 reaches some state s ∈ F̃̃ (at line 4) then we have s |= ∀□∃◊F. Since M is strongly decisive, it follows from Lemma 6.2 that Prob_M(s |= □◊F) = 1. Thus, for each j ≥ 0, we have Yes_j(M, s_init) ≤ Prob_M(s_init |= □◊F). Similarly, if the algorithm reaches some state s ∈ F̃ (at line 5)
then we have (s |= □◊F) = ∅. Thus, for each j ≥ 0, we have No_j(M, s_init) ≤ Prob_M(s_init |= ¬□◊F) = 1 − Prob_M(s_init |= □◊F). It follows that Yes_j(M, s_init) ≤ Prob_M(s_init |= □◊F) ≤ 1 − No_j(M, s_init). The result follows from the fact that Yes_j(M, s_init) + No_j(M, s_init) ≥ 1 − ε when the algorithm terminates.

Lemma 8.2. Algorithm 2 terminates if M is decisive w.r.t. F̃.

Proof. Since M is decisive w.r.t. F̃, we have Prob_M(s_init |= ◊F̃ ∨ ◊F̃̃) = 1. It follows that lim_{j→∞}(Yes_j + No_j) = 1, which implies termination.

Note that Algorithm 2 only works for Markov chains which are decisive w.r.t. both F and F̃. Decisiveness w.r.t. F̃ is required for termination (Lemma 8.2), while decisiveness w.r.t. F is required for correctness (Lemma 8.1). Now we show the computability results for PLCS and PNTM.

Theorem 8.3. Approx Quant Rep Reach is solvable for PLCS in case F is effectively representable.

Proof. By Corollary 4.7, a Markov chain induced by a PLCS is decisive w.r.t. every set, in particular w.r.t. F and F̃. Thus it follows from Lemma 8.1 and Lemma 8.2 that Algorithm 2 solves Approx Quant Rep Reach for PLCS.

Theorem 8.4. Approx Quant Rep Reach is solvable for PNTM in case F is a set of Q-states.