On the Expressiveness and Complexity of ATL

ATL is a temporal logic geared towards the specification and verification of properties in multi-agent systems. It allows reasoning about the existence of strategies for coalitions of agents to enforce a given property. In this paper, we first precisely characterize the complexity of ATL model checking over Alternating Transition Systems and Concurrent Game Structures when the number of agents is not fixed. We prove that it is ∆^P_2- and ∆^P_3-complete, depending on the underlying multi-agent model (ATS and CGS, resp.). We also consider the same problems for some extensions of ATL. We then consider expressiveness issues. We show how ATSs and CGSs are related and provide translations between these models w.r.t. alternating bisimulation. We also prove that the standard definition of ATL (built on the modalities "Next", "Always" and "Until") cannot express the duals of its modalities: it is necessary to explicitly add the modality "Release".


Introduction
Model checking. Temporal logics were proposed for the specification of reactive systems almost thirty years ago [CE81, Pnu77, QS82]. They have been widely studied and successfully used in many situations, especially for model checking: the automatic verification that a finite-state model of a system satisfies a temporal logic specification. Two flavors of temporal logics have mainly been studied: linear-time temporal logics, e.g. LTL [Pnu77], which express properties of the possible executions of the model; and branching-time temporal logics, such as CTL [CE81, QS82], which can express requirements on states (which may have several possible futures) of the model.
Alternating-time temporal logic. Over the last ten years, a new flavor of temporal logics has been defined: alternating-time temporal logics (ATL) [AHK97]. ATL is a fundamental logic for verifying properties of synchronous multi-agent systems, in which several agents can concurrently act upon the behavior of the system. This is particularly interesting for modeling control problems. In that setting, it is not only interesting to know whether something can happen or will happen, as can be expressed in CTL or LTL, but rather whether some agent(s) can control the evolution of the system in order to enforce a given property.
The logic ATL can precisely express this kind of property, and can for instance state that "there is a strategy for a coalition A of agents in order to eventually reach an accepting state, whatever the other agents do". ATL can be seen as an extension of CTL; its formulae are built on atomic propositions and boolean combinators, and (following the seminal papers [AHK97, AHK98, AHK02]) on modalities ⟨⟨A⟩⟩ X ϕ (coalition A has a strategy to immediately enter a state satisfying ϕ), ⟨⟨A⟩⟩ G ϕ (coalition A can force the system to always satisfy ϕ) and ⟨⟨A⟩⟩ ϕ U ψ (coalition A has a strategy to enforce ϕ U ψ).
Multi-agent models. While linear- and branching-time temporal logics are interpreted on Kripke structures, alternating-time temporal logics are interpreted on models that incorporate the notion of multiple agents. Two kinds of synchronous multi-agent models have been proposed for ATL in the literature. First, Alternating Transition Systems (ATSs) [AHK98] have been defined: in any location of an ATS, each agent chooses one move, i.e., a subset of locations (the list of possible moves is defined explicitly in the model) to which she would like the execution to go. When all the agents have made their choice, the intersection of their choices is required to contain one single location, to which the execution proceeds. In the second family of models, called Concurrent Game Structures (CGSs) [AHK02], each of the n agents has a finite number of possible moves (numbered with integers), and, in each location, an n-ary transition function indicates the state to which the execution goes.
Our contributions. First, we precisely characterize the complexity of the model-checking problem. The original works about ATL provide model-checking algorithms running in time O(m · l), where m is the number of transitions in the model and l is the size of the formula [AHK98, AHK02], thus in PTIME. However, contrary to Kripke structures, the number of transitions in a CGS or in an ATS is not quadratic in the number of states [AHK02], and might even be exponential in the number of agents. PTIME-completeness thus only holds when the number of agents is bounded, and it is shown in [JD05, JD06] that the problem is strictly harder otherwise, namely NP-hard on ATSs and Σ^P_2-hard on CGSs where the transition function is encoded as a boolean function. We prove that it is in fact ∆^P_2-complete and ∆^P_3-complete, resp. We also precisely characterize the complexity of model-checking classical extensions of ATL, depending on the underlying family of models.
Then we address expressiveness questions. First we show how ATSs and CGSs are related by providing translations between these models. Moreover, we consider expressiveness questions about ATL modalities. While in LTL and CTL the dual of the "Until" modality can be expressed as a disjunction of "Always" and "Until", we prove that this is not the case in ATL. In other words, ATL, as defined in [AHK97, AHK98, AHK02], is not as expressive as one could expect (while the dual modalities clearly do not increase the complexity of the verification problems).
Related works. In [AHK98, AHK02], ATL has been defined and studied over ATSs and CGSs. In [HRS02], expressiveness issues are considered for ATL* and ATL. The complexity of satisfiability is addressed in [GvD06, WLWW06]. Complexity results about model checking (for ATL, ATL+, ATL*) can be found in [AHK02, Sch04]. Regarding control- and game theory, many papers have focused on this wide area; we refer to [Wal04] for a survey, and to its numerous references for a complete overview.
Plan of the paper. Section 2 contains the formal definitions needed in the sequel. Section 3 deals with the model-checking questions and contains algorithms and complexity analyses for ATSs and CGSs. Section 4 contains our expressiveness results: we first prove that ATSs and CGSs have the same expressive power w.r.t. alternating bisimulation (i.e., any CGS can be translated into an equivalent ATS, and vice-versa). We then present our expressiveness results concerning ATL modalities.

Definitions
2.1. Concurrent Game Structures. Definition 2.1 ([AHK02]). A Concurrent Game Structure (CGS for short) C is a 6-tuple (Agt, Loc, AP, Lab, Mov, Edg) where: Agt = {A_1, ..., A_k} is a finite set of agents; Loc is a finite set of locations; AP is a finite set of atomic propositions; Lab: Loc → 2^AP labels each location with the propositions it satisfies; Mov: Loc × Agt → P(N)∖{∅} gives the finite set of possible moves of each agent in each location; and Edg is the transition table: with each location and each tuple of moves of the agents, it associates the resulting location.

The intended behaviour is as follows [AHK02]: in a location ℓ, each player A_i chooses one possible move m_{A_i} in Mov(ℓ, A_i) and the next location is given by Edg(ℓ, m_{A_1}, ..., m_{A_k}). We write Next(ℓ) for the set of all possible successor locations from ℓ, and Next(ℓ, A_j, m), with m ∈ Mov(ℓ, A_j), for the restriction of Next(ℓ) to locations reachable from ℓ when player A_j makes the move m.

The way the transition table Edg is encoded has not been made precise in the original definition. Following the remarks of [JD05], we propose two possible encodings:

Definition 2.2.
• An explicit CGS is a CGS where the transition table is defined explicitly.
• An implicit CGS is a CGS where, in each location ℓ, the transition function is defined by a finite sequence ((ϕ_0, ℓ_0), ..., (ϕ_n, ℓ_n)), where ℓ_i ∈ Loc is a location, and ϕ_i is a boolean combination of propositions "A_j = c" that evaluate to true iff agent A_j chooses move c. The transition table is then defined as follows: Edg(ℓ, m_{A_1}, ..., m_{A_k}) = ℓ_j iff j is the lowest index s.t. ϕ_j evaluates to true when players A_1 to A_k choose moves m_{A_1} to m_{A_k}. We require that the last boolean formula ϕ_n be ⊤, so that no agent can enforce a deadlock.
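To make the rule-based encoding concrete, here is a small illustrative sketch (our own encoding, not taken from the paper): transition formulas are nested tuples built from atoms "A_j = c", and the first rule whose formula evaluates to true under the joint move selects the successor, with a final catch-all rule playing the role of ϕ_n = ⊤.

```python
# Illustrative sketch of an implicit transition function.
# A formula is a nested tuple: ('atom', agent, choice), ('not', f),
# ('and', f, g), ('or', f, g), or ('true',).

def eval_formula(formula, move):
    """Evaluate a transition formula under a joint move
    (a dict mapping each agent to its chosen move)."""
    tag = formula[0]
    if tag == 'atom':
        _, agent, choice = formula
        return move[agent] == choice
    if tag == 'not':
        return not eval_formula(formula[1], move)
    if tag == 'and':
        return eval_formula(formula[1], move) and eval_formula(formula[2], move)
    if tag == 'or':
        return eval_formula(formula[1], move) or eval_formula(formula[2], move)
    if tag == 'true':
        return True
    raise ValueError(tag)

def edg(rules, move):
    """rules = [(phi_0, loc_0), ..., (phi_n, loc_n)]; the last formula
    must be ('true',), so some rule always fires (no deadlock)."""
    for formula, target in rules:
        if eval_formula(formula, move):
            return target
    raise RuntimeError("last formula must be ('true',)")

# Two agents; the play goes to 'win' iff both pick move 1, else stays in 'l0'.
rules = [(('and', ('atom', 'A1', 1), ('atom', 'A2', 1)), 'win'),
         (('true',), 'l0')]
assert edg(rules, {'A1': 1, 'A2': 1}) == 'win'
assert edg(rules, {'A1': 1, 'A2': 0}) == 'l0'
```

Note how the "lowest index wins" rule makes the table deterministic even when several formulas overlap.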
Besides this theoretical aspect, the implicit description of CGSs may prove useful in practice, as it makes it possible not to describe the full transition table explicitly.
The size |C| of a CGS C is defined as |Loc| + |Edg|. For explicit CGSs, |Edg| is the size of the transition table. For implicit CGSs, |Edg| is the sum of the sizes of the formulas used in the definition of Edg.

Alternating Transition Systems.
In the original works about ATL [AHK97], the logic was interpreted on ATSs, which are transition systems slightly different from CGSs:

Definition 2.3. An Alternating Transition System (ATS for short) A is a 5-tuple (Agt, Loc, AP, Lab, Mov) where:
• Agt, Loc, AP and Lab have the same meaning as in CGSs;
• Mov: Loc × Agt → P(P(Loc)) associates with each location ℓ and each agent a the set of possible moves, each move being a subset of Loc. For each location ℓ, it is required that, for any tuple of moves (m_{A_1}, ..., m_{A_k}) with m_{A_i} ∈ Mov(ℓ, A_i), the intersection m_{A_1} ∩ ... ∩ m_{A_k} be a singleton.

The intuition is as follows: in a location ℓ, once all the agents have chosen their moves (i.e., subsets of locations), the execution goes to the (only) state that belongs to all the sets chosen by the players. Again, Next(ℓ) (resp. Next(ℓ, A_j, m)) denotes the set of all possible successor locations (resp. the set of possible successor locations when player A_j chooses the move m).
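The singleton requirement can be illustrated by a short sketch (our own code, with illustrative names): the successor of a location is the unique element of the intersection of the sets chosen by the agents.

```python
# Illustrative sketch of the ATS successor: each agent's move is a set
# of locations, and a well-formed ATS guarantees that the intersection
# of all chosen sets is a singleton, the unique next location.

def ats_successor(chosen_moves):
    """chosen_moves: one set of locations per agent."""
    result = set.intersection(*chosen_moves)
    if len(result) != 1:
        raise ValueError("ill-formed ATS: intersection is not a singleton")
    return result.pop()

# Two agents: agent 1 picks {l1, l2}, agent 2 picks {l2, l3};
# the execution goes to the unique common location l2.
assert ats_successor([{'l1', 'l2'}, {'l2', 'l3'}]) == 'l2'
```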
The size of an ATS is |Loc| + |Mov| where |Mov| is the sum of the number of locations in each possible move of each agent in each location.
We prove in Section 4.1 that CGSs and ATSs have the same expressiveness (w.r.t. alternating bisimilarity [AHKV98]).
2.3. Coalition, strategy, outcomes of a strategy. A coalition is a subset of agents. In multi-agent systems, a coalition A plays against its opponent coalition Agt∖A as if they were two single players. We thus extend Mov and Next to coalitions:
• Given A ⊆ Agt and ℓ ∈ Loc, Mov(ℓ, A) denotes the possible moves for the coalition A from ℓ. Such a move m is composed of a single move for every agent of the coalition, that is, m def= (m_a)_{a∈A}. Then, given a move m′ ∈ Mov(ℓ, Agt∖A), we use m ⊕ m′ to denote the corresponding complete move (one for each agent). In ATSs, such a move m ⊕ m′ determines the unique resulting location; in CGSs, it is given by Edg(ℓ, m ⊕ m′).
• Next is extended to coalitions in a natural way: given m = (m_a)_{a∈A} ∈ Mov(ℓ, A), we let Next(ℓ, A, m) denote the restriction of Next(ℓ) to locations reachable from ℓ when every player A_j ∈ A makes the move m_{A_j}.

Let S be a CGS or an ATS. A computation of S is an infinite sequence ρ = ℓ_0 ℓ_1 ... of locations such that, for any i, ℓ_{i+1} ∈ Next(ℓ_i). We write ρ[i] for the (i+1)-st location ℓ_i. A strategy for a player A_i ∈ Agt is a function f_{A_i} that maps any finite prefix of a computation to a possible move for A_i, i.e., satisfying f_{A_i}(ℓ_0 ... ℓ_m) ∈ Mov(ℓ_m, A_i). A strategy induces a set of computations from ℓ, called the outcomes of f_{A_i} from ℓ and denoted Out_S(ℓ, f_{A_i}), that player A_i can enforce: a computation ℓ_0 ℓ_1 ... belongs to Out_S(ℓ, f_{A_i}) iff ℓ_0 = ℓ and, for every i, ℓ_{i+1} ∈ Next(ℓ_i, A_i, f_{A_i}(ℓ_0 ... ℓ_i)). Given a coalition A ⊆ Agt, a strategy for A is a tuple F_A containing one strategy for each player in A: F_A = {f_{A_j} | A_j ∈ A}. The outcomes of F_A from a location ℓ contain the computations enforced by the strategies in F_A. The set of strategies for A is denoted Strat_S(A). Finally, note that F_∅ is empty, and Out_S(ℓ, F_∅) represents the set of all computations from ℓ.
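As an illustration, the outcomes of a state-based strategy can be unfolded as follows. This is a hedged sketch with invented names: next_map plays the role of Next(ℓ, A, m), the opponents resolve the remaining nondeterminism, and we truncate the infinite computations at a fixed depth.

```python
# Illustrative sketch: bounded unfolding of the outcomes of a
# state-based (memoryless) strategy for a coalition.

def outcomes(next_map, strategy, loc, depth):
    """next_map[(loc, move)] -> set of possible successors (the
    opponents choose among them); strategy[loc] -> the coalition's move.
    Returns all outcome prefixes with `depth` transitions."""
    if depth == 0:
        return [[loc]]
    runs = []
    for succ in sorted(next_map[(loc, strategy[loc])]):
        for tail in outcomes(next_map, strategy, succ, depth - 1):
            runs.append([loc] + tail)
    return runs

# Coalition move 'a' from l0 lets the opponents pick l1 or l2; both are sinks.
next_map = {('l0', 'a'): {'l1', 'l2'},
            ('l1', 'a'): {'l1'}, ('l2', 'a'): {'l2'}}
strategy = {'l0': 'a', 'l1': 'a', 'l2': 'a'}
assert outcomes(next_map, strategy, 'l0', 2) == [['l0', 'l1', 'l1'],
                                                 ['l0', 'l2', 'l2']]
```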
2.4. The logic ATL. We now define the logic ATL, whose purpose is to express controllability properties of CGSs and ATSs. Our definition is slightly different from the one proposed in [AHK02]. This difference will be explained and argued for in Section 4.2.
Definition 2.4. The syntax of ATL is defined by the following grammar:

ATL ∋ ϕ, ψ ::= P | ¬ϕ | ϕ ∨ ψ | ⟨⟨A⟩⟩ X ϕ | ⟨⟨A⟩⟩ G ϕ | ⟨⟨A⟩⟩ ϕ U ψ

where P ranges over the set AP and A over the subsets of Agt.
Given a formula ϕ ∈ ATL, the size of ϕ, denoted by |ϕ|, is the size of the tree representing that formula. The DAG-size of ϕ is the size of the directed acyclic graph representing that formula (i.e., sharing common subformulas).
In addition, we use standard abbreviations such as ⊤, ⊥, F, etc. ATL formulae are interpreted over states of a game structure S. The semantics of the main operator is defined as follows: ℓ |= ⟨⟨A⟩⟩ ϕ_p iff there exists a strategy F_A ∈ Strat_S(A) s.t. every computation in Out_S(ℓ, F_A) satisfies ϕ_p. It is well-known that, for the logic ATL, it is sufficient to restrict to state-based strategies (i.e., ⟨⟨A⟩⟩ ϕ_p is satisfied iff there is a state-based strategy all of whose outcomes satisfy ϕ_p) [AHK02, Sch04]. Note that ⟨⟨∅⟩⟩ ϕ_p corresponds to the CTL formula Aϕ_p (i.e., universal quantification over all computations issued from the current state), while ⟨⟨Agt⟩⟩ ϕ_p corresponds to existential quantification Eϕ_p. However, ¬⟨⟨A⟩⟩ ϕ_p is generally not equivalent to ⟨⟨Agt∖A⟩⟩ ¬ϕ_p [AHK02, GvD06]: indeed, the absence of a strategy for a coalition A to ensure ϕ does not entail the existence of a strategy for the coalition Agt∖A to ensure ¬ϕ. For instance, Fig. 1 displays a (graphical representation of a) 2-player CGS for which, in ℓ_0, both ¬⟨⟨A_1⟩⟩ X p and ¬⟨⟨A_2⟩⟩ X ¬p hold. In such a representation, a transition is labeled with ⟨m_1, m_2⟩ when it corresponds to move m_1 of player A_1 and to move m_2 of player A_2. Fig. 2 represents an "equivalent" ATS with the same property.
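This non-determinacy can be checked mechanically. The sketch below assumes a matching-pennies-style arrangement for the CGS of Fig. 1 (the play reaches a p-state iff both players pick the same move); all names are ours, not the paper's.

```python
# Illustrative sketch: neither player can force the next state
# in a matching-pennies game, so the game is not determined.

def can_force(pred, my_moves, opp_moves, result):
    """Does some move of mine guarantee pred, whatever the opponent plays?
    result(my_move, opp_move) gives the successor state."""
    return any(all(pred(result(mine, theirs)) for theirs in opp_moves)
               for mine in my_moves)

# Successor of l0: a p-state iff the two players pick the same move.
succ = lambda m1, m2: 'p' if m1 == m2 else 'np'
moves = [0, 1]
# A1 has no strategy to enforce X p ...
assert not can_force(lambda st: st == 'p', moves, moves, succ)
# ... and A2 has no strategy to enforce X (not p):
assert not can_force(lambda st: st == 'np', moves, moves,
                     lambda m2, m1: succ(m1, m2))
```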
F. LAROUSSINIE, N. MARKEY, AND G. OREIBY Figure 2: An ATS that is not determined.

Complexity of ATL model-checking
In this section, we establish the precise complexity of ATL model-checking. This issue has already been addressed in the seminal papers about ATL, on both ATSs [AHK98] and CGSs [AHK02]. The time complexity is shown to be in O(m · l), where m is the number of transitions and l is the size of the formula. The authors then claim that the model-checking problem is in PTIME (and obviously PTIME-complete, since it already is for CTL). In fact this only holds for explicit CGSs. In ATSs, the number of transitions might be exponential in the size of the system (more precisely, in the number of agents). This problem, the exponential blow-up of the number of transitions to handle in the verification algorithm, also occurs for implicit CGSs: the standard algorithms running in O(m · l) require exponential time.
Basically, the algorithm for model-checking ATL is similar to that for CTL: it consists in recursively computing fixpoints, based e.g. on the following equivalence:

⟨⟨A⟩⟩ ϕ U ψ ≡ ψ ∨ (ϕ ∧ ⟨⟨A⟩⟩ X ⟨⟨A⟩⟩ ϕ U ψ)    (3.1)

The difference with CTL is that we have to deal with the modality ⟨⟨A⟩⟩ X, corresponding to the pre-image of a set of states for some coalition, instead of the standard modality EX. In control theory, ⟨⟨A⟩⟩ X corresponds to the controllable predecessors of a set of states for a coalition: CPre(A, S), with A ⊆ Agt and S ⊆ Loc, is defined as follows:

CPre(A, S) = {ℓ ∈ Loc | ∃m ∈ Mov(ℓ, A). Next(ℓ, A, m) ⊆ S}

The crucial point of the model-checking algorithm is the computation of the set CPre(A, S).
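The fixpoint computation can be sketched as follows. This is an illustrative, deliberately naive implementation over an explicit CGS; the encoding of moves and of the transition table is our own, not the paper's.

```python
# Illustrative sketch: CPre and the least-fixpoint computation of
# <<A>> phi U psi over a small explicit CGS.
from itertools import product

def cpre(agents, moves, edg, coalition, target):
    """agents: ordered list of agent names; moves[(loc, agent)]: list of
    available moves; edg[(loc, joint)]: successor, where joint is a tuple
    with one move per agent, in the order of `agents`. Returns the set of
    locations from which `coalition` can force the next location into
    `target`, whatever the opponents do."""
    locs = {l for (l, _) in moves}
    coal = [a for a in agents if a in coalition]
    opp = [a for a in agents if a not in coalition]
    winning = set()
    for loc in locs:
        for ours in product(*(moves[(loc, a)] for a in coal)):
            ok = True
            for theirs in product(*(moves[(loc, a)] for a in opp)):
                pick = dict(zip(coal, ours))
                pick.update(zip(opp, theirs))
                joint = tuple(pick[a] for a in agents)
                if edg[(loc, joint)] not in target:
                    ok = False
                    break
            if ok:
                winning.add(loc)
                break
    return winning

def coalition_until(agents, moves, edg, coalition, phi, psi):
    """Least fixpoint of Eq. (3.1): S := psi ∪ (phi ∩ CPre(A, S))."""
    current = set(psi)
    while True:
        nxt = current | (set(phi) & cpre(agents, moves, edg, coalition, current))
        if nxt == current:
            return current
        current = nxt

# Toy 2-agent CGS: from l0 the play reaches 'goal' iff both agents pick
# the same move; 'goal' is a sink.
agents = ['A1', 'A2']
moves = {('l0', 'A1'): [0, 1], ('l0', 'A2'): [0, 1],
         ('goal', 'A1'): [0], ('goal', 'A2'): [0]}
edg = {('l0', (0, 0)): 'goal', ('l0', (1, 1)): 'goal',
       ('l0', (0, 1)): 'l0', ('l0', (1, 0)): 'l0',
       ('goal', (0, 0)): 'goal'}
everything = {'l0', 'goal'}
# The full coalition can enforce reaching 'goal' ...
assert coalition_until(agents, moves, edg, {'A1', 'A2'},
                       everything, {'goal'}) == {'l0', 'goal'}
# ... but A1 alone cannot: A2 can always mismatch.
assert coalition_until(agents, moves, edg, {'A1'},
                       everything, {'goal'}) == {'goal'}
```

Note that each fixpoint iteration calls cpre once, which is the source of the oracle calls in the complexity analyses below.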
In the sequel, we establish the exact complexity of computing CPre (more precisely, given A ⊂ Agt, S ⊆ Loc, and ℓ ∈ Loc, the complexity of deciding whether ℓ ∈ CPre(A, S)), and of ATL model-checking for our three kinds of multi-agent systems.

Model checking ATL on explicit CGSs. As already mentioned, the precise complexity of ATL model-checking over explicit CGSs was established in [AHK02]:
Theorem 3.1. ATL model-checking over explicit CGSs is PTIME-complete.
To our knowledge, the precise complexity of computing CPre in explicit CGSs has never been considered. The best upper bound is PTIME, which is sufficient for deriving the PTIME complexity of ATL model-checking.
In fact, given a location ℓ, a set of locations S and a coalition A, deciding whether ℓ ∈ CPre(A, S) has complexity much lower than PTIME:

Proposition 3.2. In explicit CGSs, deciding whether ℓ ∈ CPre(A, S) can be achieved in AC^0.

Proof. We begin with precisely defining how the input is encoded as a sequence of bits:
• the first |Agt| bits define the coalition: the i-th bit is 1 iff agent A_i belongs to A;
• the following |Loc| bits of the input define the set S;
• for the sake of simplicity, we assume that all the agents have the same number of moves in ℓ.
We write p for that number, which we assume is at least 2. The transition table Edg(ℓ) is then given as a sequence of p^k sets of log(|Loc|) bits. As a first step, it is rather easy to modify the input so that it has the following form:
• first, the k bits defining the coalition;
• then, a sequence of p^k bits defining whether the resulting state belongs to S. This is achieved by p^k copies of the same AC^0 circuit.
We now have to build a circuit that will "compute" whether coalition A has a strategy for ending up in S. Since circuits must only depend on the size of the input, we cannot design a circuit for coalition A. Instead, we build one circuit for each possible coalition (their number is exponential in the number of agents, but polynomial in the size of the input, provided that p ≥ 2), and then select the result corresponding to coalition A.
Thus, for each possible coalition B, we build one circuit whose final node evaluates to 1 iff ℓ ∈ CPre(B, S). This is achieved by an unbounded fan-in circuit of depth 2: at the first level, we put p^|B| AND-nodes, representing each of the p^|B| possible moves for coalition B. Each of those nodes is linked to p^(k−|B|) bits of the transition table, corresponding to the set of p^(k−|B|) possible moves of the opponents. At the second level, an OR-node is linked to all the nodes at depth 1.
Clearly enough, the OR-node at depth 2 evaluates to true iff coalition B has a strategy to reach S. Moreover, there are C(k, l) coalitions of size l, each of which is handled by a circuit of p^l + 1 nodes. The resulting circuit thus has (p + 1)^k + 2^k nodes, which is polynomial in the size of the input. This circuit is thus an AC^0 circuit.

And indeed, as a direct corollary of [JD05, Lemma 1], we have:

Proposition 3.3. In implicit CGSs, deciding whether ℓ ∈ CPre(A, S) is Σ^P_2-complete.

Proof. Membership in Σ^P_2 follows directly from the above remarks. A Σ^P_2 procedure is explicitly described in Algorithm 1. (Given m = (m_a)_{a∈A} for A ⊆ Agt, ϕ[m] denotes the formula where every proposition "A_j = c" with A_j ∈ A is replaced by its truth value under m.)
Algorithm 1: Procedure co-strategy(q, (ϕ_i, ℓ_i)_i, (m_a)_{a∈A}, S)
  // checks whether the opponents have a co-strategy to (m_a)_{a∈A} to avoid S
  begin
    foreach ā ∈ Ā do m_ā ← guess(q, ā);
    i ← 0;
    [...]
  end

Concerning hardness in Σ^P_2, we directly use the construction of [JD05, Lemma 1]: from an instance ∃X.∀Y.ϕ of EQSAT_2, one considers an implicit CGS with three states q_1, q_⊤ and q_⊥, and 2n agents A_1, ..., A_n, B_1, ..., B_n, each having two possible choices in q_1 and only one choice in q_⊤ and q_⊥. The transitions out of q_⊤ and q_⊥ are self-loops. The transitions from q_1 are given by: δ(q_1) = ((ϕ[x_j ← (A_j = 1), y_j ← (B_j = 1)], q_⊤), (⊤, q_⊥)). Then clearly, q_1 belongs to CPre({A_1, ..., A_n}, {q_⊤}) iff there exists a valuation for the variables in X s.t. ϕ is true whatever the B-agents choose for Y.
The complexity of ATL model checking over implicit CGSs is higher: the proof of Σ^P_2-hardness of CPre(A, S) can easily be adapted to prove Π^P_2-hardness. Indeed, consider the dual (thus Π^P_2-complete) problem AQSAT_2, in which, with the same input, the output is the value of ∀X.∃Y.ϕ. Then it suffices to consider the same implicit CGS, and the formula ¬⟨⟨A_1, ..., A_n⟩⟩ X ¬q_⊤. It states that there is no strategy for players A_1 to A_n to avoid q_⊤: whatever their choice, players B_1 to B_n can enforce ϕ.
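The Σ^P_2 structure of the problem ("there exists a move of A such that, for all moves of the opponents, the selected successor lies in S") can be made explicit by a brute-force sketch. The encoding below is ours, not the paper's: transition rules are given as predicates over the joint move, the first true predicate selecting the successor.

```python
# Illustrative brute-force (exponential) check of l ∈ CPre(A, S)
# in an implicit CGS, mirroring the exists/forall alternation.
from itertools import product

def first_target(rules, move):
    for pred, target in rules:
        if pred(move):
            return target
    raise RuntimeError("last rule must be trivially true")

def in_cpre(rules, moves, coalition, opponents, good):
    """moves[agent] -> available moves; good = the target set S."""
    for ours in product(*(moves[a] for a in coalition)):
        fixed = dict(zip(coalition, ours))
        if all(first_target(rules, {**fixed, **dict(zip(opponents, theirs))}) in good
               for theirs in product(*(moves[a] for a in opponents))):
            return True
    return False

# QBF flavour, as in the reduction from EQSAT_2: agents A1 and B1 encode
# variables x1 and y1; the run reaches q_top iff (x1 or not y1) holds.
rules = [(lambda m: m['A1'] == 1 or m['B1'] == 0, 'q_top'),
         (lambda m: True, 'q_bot')]
moves = {'A1': [0, 1], 'B1': [0, 1]}
# There exists x1 (namely x1 = 1) s.t. for all y1 the formula holds:
assert in_cpre(rules, moves, ['A1'], ['B1'], {'q_top'})
```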
This contradicts the claim in [JD05] that model checking ATL would be Σ^P_2-complete. In fact there is a flaw in their algorithm in the way it handles negation (and indeed their result holds only for the positive fragment of ATL [JD08]): games played on CGSs (and ATSs) are generally not determined, and the fact that a player has no strategy to enforce ϕ does not imply that the other players have a strategy to enforce ¬ϕ. It rather means that the other players have a co-strategy to enforce ¬ϕ (by a co-strategy, we mean a way to react to each move of their opponents [GvD06]).
Still, using the expression of ATL modalities as fixpoint formulas (see Eq. (3.1)), we can compute the set of states satisfying an ATL formula by a polynomial number of computations of CPre, which yields a ∆^P_3 algorithm:

Proposition 3.4. Model checking ATL on implicit CGSs is in ∆^P_3.

Note that, since the algorithm consists in labeling the locations with the subformulae they satisfy, this complexity holds even if we consider the DAG-size of the formula.
To prove hardness in ∆^P_3, we introduce the following ∆^P_3-complete problem:

SNSAT_2: Input: m families of variables X_i = {x^1_i, ..., x^n_i}, m families of variables Y_i = {y^1_i, ..., y^n_i}, m variables z_i, and m boolean formulae ϕ_i, with ϕ_i involving variables in X_i ∪ Y_i ∪ {z_1, ..., z_{i−1}}. Output: the value of z_m, defined by z_i = ⊤ iff ∃X_i. ∀Y_i. ϕ_i(z_1, ..., z_{i−1}, X_i, Y_i).

And we have:

Proposition 3.5. Model checking ATL on implicit CGSs is ∆^P_3-hard.

Proof. We pick an instance I of SNSAT_2, and reduce it to an instance of the ATL model-checking problem. Note that such an instance uniquely defines the values of the variables z_i. We write v_I : {z_1, ..., z_m} → {⊤, ⊥} for this valuation. Also, when v_I(z_i) = ⊤, there exists a witnessing valuation for the variables in X_i. We extend v_I to {z_1, ..., z_m} ∪ ∪_i X_i, with v_I(x^j_i) being given by a witnessing valuation if v_I(z_i) = ⊤.
We now define an implicit CGS C as follows: it contains mn agents A^j_i (one for each x^j_i), mn agents B^j_i (one for each y^j_i), m agents C_i (one for each z_i), and one extra agent D. The structure is made of m states q_i, m states q̄_i, m states s_i, and two states q_⊤ and q_⊥. There are three atomic propositions: s_⊤ and s_⊥, which label the states q_⊤ and q_⊥ resp., and an atomic proposition s labeling the states s_i. The other states carry no label.
Except for D, the agents represent booleans, and thus always have two possible choices (0 and 1). Agent D always has m choices (0 to m − 1). The transition relation is defined, for each i, by an implicit transition function from q_i [...]. Intuitively, from state q_i, the boolean agents choose a valuation for the variables they represent, and agent D can either choose to check whether the valuation really witnesses ϕ_i (by choosing move 0), or "challenge" a player C_k, by choosing a move k < i.
The ATL formula is built recursively as follows: ψ_0 = ⊤ and ψ_{k+1} = ⟨⟨AC⟩⟩ (¬s) U (q_⊤ ∨ EX (s ∧ EX ¬ψ_k)), where AC stands for the coalition {A^1_1, ..., A^n_m, C_1, ..., C_m}. Let f_I(A) be the state-based strategy for agent A ∈ AC that consists in playing according to the valuation v_I (i.e., move 0 if the variable associated with A evaluates to 0 in v_I, and move 1 otherwise). The following lemma completes the proof of Proposition 3.5:

Lemma 3.6. For any i ≤ m and k ≥ i, the following three statements are equivalent: (a) q_i |= ψ_k; (b) the strategies f_I(A), for A ∈ AC, witness the fact that q_i |= ψ_k; (c) z_i evaluates to ⊤ in v_I.

Proof. Clearly, (b) implies (a). We prove that (a) implies (c) and that (c) implies (b) by induction on i.
First assume that q_1 |= ψ_j, for some j ≥ 1. Since only q_⊤ and q_⊥ are reachable from q_1, we have q_1 |= ⟨⟨AC⟩⟩ X q_⊤. We are (almost) in the same case as in the Σ^P_2 reduction of [JD05]: there is a valuation of the variables x^1_1 to x^n_1 s.t., whatever players D and B^1_1 to B^n_1 decide, the run will end up in q_⊤. This holds in particular if player D chooses move 0: for any valuation of the variables y^1_1 to y^n_1, ϕ_1(X_1, Y_1) holds true, and z_1 evaluates to true in v_I. Secondly, if z_1 evaluates to true, then v_I(x^1_1), ..., v_I(x^n_1) are such that, whatever the values of y^1_1 to y^n_1, ϕ_1 holds true. If players A^1_1 to A^n_1 play according to f_I, then players D and B^1_1 to B^n_1 cannot avoid state q_⊤, and q_1 |= ⟨⟨AC⟩⟩ X q_⊤, thus also ψ_k when k ≥ 1. We now assume the result holds up to index i ≥ 1, and prove that it also holds at step i + 1. Assume q_{i+1} |= ψ_{k+1}, with k ≥ i. There exists a strategy witnessing ψ_{k+1}, i.e., s.t. all the outcomes following this strategy satisfy (¬s) U (q_⊤ ∨ EX (s ∧ EX ¬ψ_k)). Depending on the move of player D in state q_{i+1}, we get several pieces of information: first, if player D plays move l, with 1 ≤ l ≤ i, the play goes to state q_l or q̄_l, depending on the choice of player C_l.
• If player C_l chose move 0, the run ends up in q̄_l. Since the only way out of that state is to enter state s_l, labeled by s, we get that q̄_l |= EX (s ∧ EX ¬ψ_k), i.e., that q_l |= ¬ψ_k. By i.h., we get that z_l evaluates to false in our instance of SNSAT_2.
• If player C_l chose move 1, the run goes to q_l. In that state, the players in AC can keep on applying their strategy, which ensures that q_l |= ψ_{k+1}, and, by i.h., that z_l evaluates to true in I.
Thus, the strategy for AC to enforce ψ_{k+1} in q_{i+1} requires players C_1 to C_i to play according to v_I, and the validity of these choices can be verified by the "opponent" D. Now, if player D chooses move 0, all the possible outcomes will necessarily immediately go to q_⊤ (since ψ_{k+1} holds, and since q_⊥ does not satisfy EX (s ∧ EX ¬ψ_k)). We immediately get that players B^1_{i+1} to B^n_{i+1} cannot make ϕ_{i+1} false, hence that z_{i+1} evaluates to true in I. Secondly, if z_{i+1} evaluates to true, assume the players in AC play according to f_I, and consider the possible moves of player D:
• if player D chooses move 0, since z_{i+1} evaluates to true and since players C_1 to C_i and A^1_{i+1} to A^n_{i+1} have played according to v_I, there is no way for players B^1_{i+1} to B^n_{i+1} to avoid state q_⊤;
• if player D chooses some move l between 1 and i, the execution will go into state q_l or q̄_l, depending on the move of C_l:
− if C_l played move 0, i.e., if z_l evaluates to false in v_I, the execution goes to state q̄_l, and we know by i.h. that q_l |= ¬ψ_k. Thus q̄_l |= EX (s ∧ EX ¬ψ_k), and the strategy still fulfills the requirement;
− if C_l played move 1, i.e., if z_l evaluates to true, then the execution ends up in state q_l, in which, by i.h., the strategy f_I enforces ψ_{k+1};
• if player D plays some move l with l > i, the execution goes directly to q_⊤, and the formula is fulfilled.
With Proposition 3.4, this implies:

Theorem 3.7. Model checking ATL on implicit CGSs is ∆^P_3-complete.

We now turn to ATSs, for which the NP-hardness of computing CPre was proved in [JD05]:

Proposition 3.8. In ATSs, deciding whether ℓ ∈ CPre(A, S) is NP-complete.

We propose here a slightly different proof, that will be a first step towards the ∆^P_2-hardness proof below. The proof is a direct reduction from 3SAT: let I = (S_1, ..., S_n) be an instance of 3SAT over variables X = {x_1, ..., x_m}. We assume that S_j = α_{j,1} s_{j,1} ∨ α_{j,2} s_{j,2} ∨ α_{j,3} s_{j,3}, where s_{j,k} ∈ X and α_{j,k} ∈ {0, 1} indicates whether the variable s_{j,k} appears negatively (0) or positively (1). We assume without loss of generality that no clause contains both a proposition and its negation.

With such an instance, we associate the following ATS A. It contains 8n + 1 states: one state q, and, for each clause S_j, eight states q_{j,0} to q_{j,7}. Intuitively, the state q_{j,k} corresponds to the clause B_{j,k} = k_1 s_{j,1} ∨ k_2 s_{j,2} ∨ k_3 s_{j,3}, where k_1 k_2 k_3 is the binary notation for k. There is only one atomic proposition α in our ATS: a state q_{j,k} is labeled with α iff it does not correspond to the clause S_j. By construction, for each j, exactly one of the states q_{j,0} to q_{j,7} is not labeled with α.
There are m + 1 players, where m is the number of variables that appear in I. With each x_i is associated a player A_i. The extra player is named D. Only the transitions from q are relevant for this reduction; we may assume that the other states only carry a self-loop. In q, player A_i decides the value of x_i. She can thus choose between two sets of next states, namely the states corresponding to clause variants that are not made true by her choice: {q_{j,k} | ∀l ≤ 3. s_{j,l} = x_i ⇒ k_l = 0} if she sets x_i to ⊤, and {q_{j,k} | ∀l ≤ 3. s_{j,l} = x_i ⇒ k_l = 1} if she sets x_i to ⊥. Last, player D has n choices, namely {q_{1,0}, ..., q_{1,7}} to {q_{n,0}, ..., q_{n,7}}.
We first prove the singleton requirement for the ATS's transitions: the intersection of the choices of the agents must be a singleton. Once players A_1 to A_m have chosen their moves, all the variables have been assigned a value. Under that valuation, for each j ≤ n, exactly one clause among B_{j,0} to B_{j,7} evaluates to false (thanks to our requirement that a literal cannot appear together with its negation in the same clause). Intersecting with the choice of player D, we end up with one single state (corresponding to the only clause, among those chosen by D, that evaluates to false). Now, let ϕ = ⟨⟨A_1, ..., A_m⟩⟩ X α. That q |= ϕ indicates that players A_1 to A_m can choose a valuation for x_1 to x_m s.t. player D will not be able to find a clause of the original instance (i.e., not labeled with α) that evaluates to false (i.e., that is not made true by the choices of the players A_1 to A_m). In that case, the instance is satisfiable. Conversely, if the instance is satisfiable, it suffices for the players A_1 to A_m to play according to a satisfying valuation of the variables x_1 to x_m. Since this valuation makes all the original clauses true, it yields a strategy that only leads to states labeled with α.
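The singleton argument for the clause states can be checked by a small sketch (our own code, not the paper's): under any valuation, exactly one of the eight sign-variants of a clause over three distinct variables evaluates to false, namely the variant whose signs all disagree with the valuation.

```python
# Illustrative check of the singleton property used in the reduction.
from itertools import product

def false_variants(variables, valuation):
    """variables: the three (distinct) variables of clause S_j. Returns
    the sign vectors k for which k1 s1 | k2 s2 | k3 s3 is false, where
    sign 1 means a positive literal and 0 a negated one."""
    out = []
    for k in product([0, 1], repeat=3):
        literals = [valuation[v] if sign == 1 else 1 - valuation[v]
                    for sign, v in zip(k, variables)]
        if not any(literals):
            out.append(k)
    return out

# Under x1=1, x2=0, x3=1, the unique falsified variant of a clause over
# (x1, x2, x3) is (~x1 | x2 | ~x3), i.e., the sign vector (0, 1, 0).
assert false_variants(['x1', 'x2', 'x3'],
                      {'x1': 1, 'x2': 0, 'x3': 1}) == [(0, 1, 0)]
```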
As in the case of implicit CGSs, we combine the fixpoint expressions of ATL modalities with the NP algorithm for computing CPre. This yields a ∆^P_2 algorithm for full ATL:

Proposition 3.9. Model checking ATL over ATSs is in ∆^P_2.

This turns out to be optimal:

Proposition 3.10. Model checking ATL on ATSs is ∆^P_2-hard.

Proof. The proof is by a reduction from the ∆^P_2-complete problem SNSAT [LMS01]:

SNSAT: Input: p families of variables X_r = {x^1_r, ..., x^m_r}, p variables z_r, and p boolean formulae ϕ_r in 3-CNF, with ϕ_r involving variables in X_r ∪ {z_1, ..., z_{r−1}}. Output: the value of z_p, defined by z_r = ⊤ iff ∃X_r. ϕ_r(z_1, ..., z_{r−1}, X_r).

Let I be an instance of SNSAT, where we assume that each ϕ_r is made of n clauses S^1_r to S^n_r, with S^j_r = α^{j,1}_r s^{j,1}_r ∨ α^{j,2}_r s^{j,2}_r ∨ α^{j,3}_r s^{j,3}_r. Again, such an instance uniquely defines a valuation v_I for the variables z_1 to z_p, which can be extended to the whole set of variables by choosing a witnessing valuation for x^1_r to x^m_r when z_r evaluates to true. We now describe the ATS A: it contains (8n + 3)p states:
• p states q_r and p states q̄_r,
• p states s_r,
• and, for each formula ϕ_r and each clause S^j_r of ϕ_r, eight states q^{j,0}_r, ..., q^{j,7}_r, as in the previous reduction.
The states s_r are labelled with the atomic proposition s, and the states q^{j,k}_r that do not correspond to the clause S^j_r are labeled with α. There is one player A^j_r for each variable x^j_r, one player C_r for each z_r, plus one extra player D. As regards transitions, there are self-loops on each state q^{j,k}_r, single transitions from each q̄_r to the corresponding s_r, and from each s_r to the corresponding q_r. From state q_r:
• player A^j_r chooses the value of the variable x^j_r, by selecting one of the following two sets of states: {q^{g,h}_r | ∀l ≤ 3. s^{g,l}_r ≠ x^j_r or h_l = 1} ∪ {q_t, q̄_t | t < r} if x^j_r = ⊥ (and the symmetric set, with h_l = 0, if x^j_r = ⊤). Both choices also allow the execution to go to one of the states q_t or q̄_t. In q_r, the players A^j_t with t ≠ r have one single choice, which is the whole set of states.
• Player C_t also chooses the value of the variable z_t it represents. As for the players A^j_r, this choice is expressed by choosing between two sets of states corresponding to clause variants that are not made true. But, as in the proof of Prop. 3.5, the players C_t also offer the possibility to "verify" their choice, by going either to state q_t or q̄_t. Formally, this yields two sets of states.
• Player D chooses either to challenge a player C_t, with t < r, by choosing the set {q_t, q̄_t}, or to check that a clause S^j_r is fulfilled, by choosing {q^{j,0}_r, ..., q^{j,7}_r}.
Let us first prove that any choice of all the players yields exactly one state. This is obvious except for the states q_r. For a state q_r, let us first restrict our attention to the choices of the players A^j_r and C_r. Then:
• if we only consider the states q^{1,0}_r to q^{n,7}_r, the same argument as in the previous proof ensures that precisely one state per clause is chosen;
• if we consider the states q_t and q̄_t, the choices of the players C_t ensure that exactly one state has been chosen in each pair {q_t, q̄_t}, for each t < r.
Clearly, the choice of player D then selects exactly one of the remaining states. Now, we build the ATL formula. It is a recursive formula (very similar to the one used in the proof of Prop. 3.5), defined by ψ_0 = ⊤ and ψ_{t+1} = ⟨⟨AC⟩⟩ (¬s) U (α ∨ EX (s ∧ EX ¬ψ_t)) (again writing AC for the set of players {A^1_1, ..., A^m_p, C_1, ..., C_p}).

Lemma 3.11. For any r ≤ p and t ≥ r, the following three statements are equivalent: (a) q_r |= ψ_t; (b) the strategies f_I witness the fact that q_r |= ψ_t; (c) z_r evaluates to ⊤ in v_I.

Proof. We prove by induction on r that (a) implies (c) and that (c) implies (b), the last implication being obvious. For r = 1, since no s-state is reachable, this amounts to the previous proof of NP-hardness.
Assume the result holds up to index r. Then, if q_{r+1} |= ψ_{t+1} for some t ≥ r, we pick a strategy for the coalition AC witnessing this property. Again, we consider the different possible choices available to player D:
• if player D chooses to go to one of q_u and q̄_u, with u < r + 1: the execution ends up in q_u if player C_u chose to set z_u to true. But in that case, the formula ψ_{t+1} still holds in q_u, which yields by i.h. that z_u really evaluates to true in v_I. Conversely, the execution ends up in q̄_u if player C_u set z_u to false. In that case, we get that q_u |= ¬ψ_t, with t ≥ u, which entails by i.h. that z_u evaluates to false. This first case entails that players C_1 to C_r chose the correct values for the variables z_1 to z_r;
• if player D chooses a set of eight states corresponding to a clause S^j_{r+1}, then the strategy of the other players ensures that the execution reaches a state labeled with α. As in the previous reduction, this indicates that the corresponding clause has been made true by the choices of the other players.
Putting everything together, this proves that the variable z_{r+1} evaluates to true. Now, if the variable z_{r+1} evaluates to true, assume the players in AC play according to f_I. Then:
• if player D chooses to go to a set of states that corresponds to a clause of ϕ_{r+1}, he will necessarily end up in a state labeled with α, since the clause is made true by the valuation we selected;
• if player D chooses to go to one of q_u or q̄_u, for some u, then he challenges player C_u to prove that her choice was correct. By i.h., and since player C_u played according to f_I, the formula (¬s) U (α ∨ EX (s ∧ EX ¬ψ_t)) will be satisfied, for any t ≥ u.
We end up with the precise complexity of ATL model checking on ATSs:

Theorem 3.12. Model checking ATL on ATSs is ∆^P_2-complete.
3.4. Beyond ATL. As for classical branching-time temporal logics, we can consider several extensions of ATL, allowing more freedom in combining strategy quantifiers and temporal modalities. We define ATL⋆ [AHK02] as follows:

Definition 3.13. The syntax of ATL⋆ is defined by the following grammar (state formulas φ_s and path formulas φ_p):

φ_s ::= P | ¬φ_s | φ_s ∨ φ_s | ⟨⟨A⟩⟩φ_p
φ_p ::= φ_s | ¬φ_p | φ_p ∨ φ_p | X φ_p | φ_p U φ_p

where P and A range over AP and 2^Agt, resp.
The size and DAG-size of an ATL⋆ formula are defined in the same way as for ATL. ATL⋆ formulae are interpreted over states of a game structure S; the semantics of the main modalities is as follows (if ρ = ℓ_0 ℓ_1 ..., we write ρ_i for the (i+1)-st suffix, starting from ℓ_i):

ℓ ⊨ ⟨⟨A⟩⟩φ_p iff there is a strategy for coalition A such that every outcome from ℓ satisfies φ_p;
ρ ⊨ X φ_p iff ρ_1 ⊨ φ_p;
ρ ⊨ φ_p U ψ_p iff ρ_i ⊨ ψ_p for some i ≥ 0, and ρ_j ⊨ φ_p for all 0 ≤ j < i.

ATL is the fragment of ATL⋆ where each modality U or X has to be immediately preceded by a strategy quantifier ⟨⟨A⟩⟩. Several other fragments of ATL⋆ are also classically defined:
• ATL+ is the restriction of ATL⋆ where a strategy quantifier ⟨⟨A⟩⟩ has to be inserted between two nested temporal modalities U or X, but boolean combinations are allowed.
• EATL extends ATL by allowing the operators ⟨⟨A⟩⟩GF (often denoted ⟨⟨A⟩⟩∞F) and ⟨⟨A⟩⟩FG (often written ⟨⟨A⟩⟩∞G). They are especially useful for expressing fairness properties: for instance, ⟨⟨A⟩⟩∞F p states that coalition A can enforce that p holds infinitely often.

3.4.1. Model checking ATL+. First note that ATL+ extends ATL and allows properties to be expressed with more succinct formulae [Wil99, AI01], but these two logics have the same expressive power: every ATL+ formula can be translated into an equivalent ATL formula [HRS02].
The complexity of model checking ATL+ over ATSs was shown to be ∆^P_3-complete in [Sch04]. But the ∆^P_3-hardness proof of [Sch04] is in LOGSPACE only w.r.t. the DAG-size of the formula. Below, we prove that model checking ATL+ is ∆^P_3-complete (with the classical definition of the size of a formula) for our three kinds of game structures.

Proposition 3.14. Model checking ATL+ can be achieved in ∆^P_3 on implicit CGSs.

Proof. A ∆^P_3 algorithm is given in [Sch04] for explicit CGSs. We extend it to handle implicit CGSs: for each subformula of the form ⟨⟨A⟩⟩ϕ, guess (state-based) strategies for the players in A. In each state, the choices of the players in A can then be substituted into the transition formulas. We then want to compute the set of states where the CTL+ formula Aϕ holds. This can be achieved in ∆^P_2 [CES86, LMS01], but requires first computing the possible transitions in the remaining structure, i.e., checking which of the transition formulae are satisfiable. This is done by a polynomial number of independent calls to an NP oracle, and thus does not increase the complexity of the algorithm.

Proposition 3.15. Model checking ATL+ on turn-based two-player explicit CGSs is ∆^P_3-hard.
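Schematically, and using the standard characterization ∆^P_3 = P^{Σ^P_2}, the oracle structure of this algorithm can be summarized as follows (a sketch of the standard complexity accounting, with the class of each phase recalled in comments):

```latex
% outer loop: polynomially many subformulas     -> deterministic P
% per subformula: guess state-based strategies  -> NP
% then CTL+ model checking [CES86, LMS01]       -> \Delta^P_2 = \mathrm{P^{NP}}
\mathrm{NP}^{\,\mathrm{P^{NP}}} \subseteq \mathrm{NP^{NP}} = \Sigma^P_2,
\qquad
\mathrm{P}^{\,\Sigma^P_2} = \Delta^P_3 .
```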
Proof.This reduction is a quite straightforward extension of the one presented in [LMS01] for CTL + .In particular, it is quite different from the previous reductions, since the boolean formulae are now encoded in the ATL + formula, and not in the model.
We encode an instance I of SNSAT_2, keeping the notations used in the proofs of Prop. 3.5 (for the SNSAT_2 problem) and 3.10 (for clause numbering); Fig. 3 depicts the construction. The ATL+ formula is built recursively, with ψ_0 = ⊤, where l^{j,k}_w = v when s^{j,k}_w = v and α^{j,k}_w = 1, and l^{j,k}_w = ¬v when s^{j,k}_w = v and α^{j,k}_w = 0. We then have:

Lemma 3.16. For any r ≤ p and t ≥ r, the following statements are equivalent: (a) z_r ⊨ ψ_t; (b) the strategies f_I witness the fact that z_r ⊨ ψ_t; (c) variable z_r evaluates to true in v_I.
When r = 1, since no s- or z-state is reachable from z_1, the fact that z_1 ⊨ ψ_t, with t ≥ 1, is equivalent to z_1 ⊨ ⟨⟨A⟩⟩ ⋀_{j≤n} ⋁_{k≤3} F l^{j,k}_1. This in turn is equivalent to the fact that z_1 evaluates to true in I.
We now turn to the inductive case. If z_{r+1} ⊨ ψ_{t+1} with t ≥ r, consider a strategy for A s.t. all the outcomes satisfy the property, and pick one of those outcomes, say ρ. Since it cannot run into any s-state, it defines a valuation v_ρ for variables z_1 to z_{r+1} and x^1_1 to x^n_m in the obvious way. Each time the outcome runs into some z_u-state, it satisfies EX (s ∧ EX ψ_t); each time it runs into some z_u-state, the suffix of the outcome witnesses formula ψ_{t+1} in z_u. Both cases entail, thanks to the i.h., that v_ρ(z_u) = v_I(z_u) for any u < r + 1. Now, the subformula ⋀_w [(F z_w) → ⋀_{j≤n} ⋁_{k≤3} F l^{j,k}_w], when w = r + 1, entails that ϕ_{r+1} is indeed satisfied whatever the values of the y^j_{r+1}'s, i.e., that z_{r+1} evaluates to true in I. Conversely, if z_r evaluates to true, then strategy f_I clearly witnesses the fact that ψ_t holds in state z_r.
As an immediate corollary, we end up with:

Theorem 3.17. Model checking ATL+ is ∆^P_3-complete on ATSs as well as on explicit CGSs and implicit CGSs.

3.4.2. Model checking EATL. In classical branching-time temporal logics, adding the modality E∞F to CTL increases its expressive power (see [Eme90]); this is also true for alternating-time temporal logics, as we will see in Section 4.2.2.

From the complexity-theoretic point of view, however, there is no difference between ATL and EATL:

Theorem 3.18. Model checking EATL is:
• PTIME-complete over explicit CGSs;
• ∆^P_2-complete over ATSs;
• ∆^P_3-complete over implicit CGSs.

Proof. We extend the model-checking algorithm for ATL. This is again achieved by expressing the modalities ⟨⟨A⟩⟩∞F and ⟨⟨A⟩⟩∞G as fixpoints; computing these fixpoints can again be achieved by a polynomial number of computations of CPre.
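The fixpoint expressions themselves are not spelled out above; as a sketch, the standard Büchi and co-Büchi fixpoints (with ν and μ denoting greatest and least fixpoints, and CPre(A, X) the set of states from which coalition A can force the next state into X) would read:

```latex
\langle\!\langle A\rangle\!\rangle \overset{\infty}{F}\, p
  \;=\; \nu Y.\; \mu Z.\;
        \bigl(p \wedge \mathrm{CPre}(A, Y)\bigr) \vee \mathrm{CPre}(A, Z)
\qquad
\langle\!\langle A\rangle\!\rangle \overset{\infty}{G}\, p
  \;=\; \mu Y.\; \nu Z.\;
        \bigl(p \wedge \mathrm{CPre}(A, Z)\bigr) \vee \mathrm{CPre}(A, Y)
```

Each fixpoint stabilizes after at most |Loc| iterations, which is where the polynomial number of CPre computations comes from.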
Hardness directly follows from the hardness of ATL model checking.
3.4.3. ATL⋆ model checking. When considering ATL⋆ model checking, the complexity is the same for explicit CGSs, implicit CGSs and ATSs, since it mainly comes from the formula to be checked:

Theorem 3.19. Model checking ATL⋆ is 2EXPTIME-complete on ATSs as well as on explicit CGSs and implicit CGSs.
Proof. We extend the algorithm of [AHK02]. This algorithm recursively labels each location with the subformulae it satisfies. Formulas ⟨⟨A⟩⟩ψ, with ψ ∈ LTL, are handled by building a deterministic Rabin tree automaton A_ψ for ψ, and a Büchi tree automaton A_{C,A} recognizing the trees corresponding to the sets of outcomes of each possible strategy of coalition A in the structure C. We refer to [AHK02] for more details on the whole proof, and only focus on the construction of A_{C,A}. The states of A_{C,A} are the states of C. From location ℓ, there are as many transitions as there are possible joint moves m = (m_{A_i})_{A_i∈A} of coalition A. Each transition is a set of states that should appear at the next level of the tree. Formally, given p ∈ 2^AP, we let δ(ℓ, p) = {Next(ℓ, A, m) | m a joint move of A} when p = Lab(ℓ), and δ(ℓ, p) = ∅ otherwise.
For explicit CGSs, this transition function is easily computed in polynomial time.For ATSs and implicit CGSs, the transition function is computed by enumerating the (exponential) set of joint moves of coalition A (computing Next(ℓ, A, m) is polynomial once the joint move is fixed).
Computing A C,A can thus be achieved in exponential time.Testing the emptiness of the product automaton then requires doubly-exponential time.The whole algorithm thus runs in 2EXPTIME.The lower bound directly follows from the lower bound for explicit CGSs.
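For intuition only (these figures are the standard ones for the usual LTL-to-deterministic-Rabin construction, not taken from the paper), the orders of magnitude behind the 2EXPTIME bound are roughly:

```latex
% deterministic Rabin tree automaton for the LTL formula \psi:
|A_\psi| = 2^{2^{O(|\psi|)}} \text{ states}, \qquad 2^{O(|\psi|)} \text{ Rabin pairs};
% A_{C,A}: at most exponential in |C| (joint-move enumeration);
% emptiness of the product Rabin automaton: polynomial in the number of
% states and exponential in the number of pairs, hence doubly
% exponential in |\psi| overall.
```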
Let us finally mention that our results easily lift to the Alternating-time µ-calculus (AMC) [AHK02]: the PTIME algorithm proposed in [AHK02] for explicit CGSs, which again consists of a polynomial number of computations of CPre, is readily adapted to ATSs and implicit CGSs. As a result, model checking the alternation-free fragment has the same complexity as model checking ATL, and model checking the whole AMC is in EXPTIME for our three kinds of models.

4. Expressiveness
We have seen that the ability of quantifying over the possible strategies of the agents increases the complexity of model checking and makes the analysis more difficult.
We now turn to expressiveness issues. We first focus on translations between our different models (explicit CGSs, implicit CGSs and ATSs). We then consider the expressiveness of the "Until" and "Always" modalities, proving that they cannot express the dual of "Until".

4.1. Comparing the expressiveness of CGSs and ATSs. We prove in this section that CGSs and ATSs are closely related: they can model the same concurrent games. In order to make this statement formal, we use the notion of alternating bisimulation (Definition 4.1).
• symmetrically, for any coalition A ⊆ Agt, we have ∀m′ : A → Mov_B(ℓ′, A). ∃m : A → Mov_A(ℓ, A). ∀q ∈ Next(ℓ, A, m). ∃q′ ∈ Next(ℓ′, A, m′). (q, q′) ∈ R, where Next(ℓ, A, m) denotes the set of locations that are reachable from ℓ when each player A_i ∈ A plays m(A_i).
Two models are said to be alternating-bisimilar if there exists an alternating bisimulation involving all of their locations.
With this equivalence in mind, ATSs and CGSs (both implicit and explicit) have the same expressive power:

Theorem 4.2.
(1) Any explicit CGS can be translated into an alternating-bisimilar implicit one in linear time;
(2) Any implicit CGS can be translated into an alternating-bisimilar explicit one in exponential time;
(3) Any explicit CGS can be translated into an alternating-bisimilar ATS in cubic time;
(4) Any ATS can be translated into an alternating-bisimilar explicit CGS in exponential time;
(5) Any implicit CGS can be translated into an alternating-bisimilar ATS in exponential time;
(6) Any ATS can be translated into an alternating-bisimilar implicit CGS in quadratic time.
Figure 4 summarizes those results. From our complexity results (and the assumption that the polynomial-time hierarchy does not collapse), the costs of the above translations are optimal. For point 6, it suffices to write, for each possible next location, the conjunction (over the agents) of the disjunction of the choices that contain that next location. For instance, if we have Mov_A(ℓ_0, A_1) = {{ℓ_1, ℓ_2}, {ℓ_1, ℓ_3}} and Mov_A(ℓ_0, A_2) = {{ℓ_2, ℓ_3}, {ℓ_1}} in the ATS A, then each player will have two choices in the associated CGS B. Formally, let A = (Agt, Loc_A, AP, Lab_A, Mov_A) be an ATS. We then define B = (Agt, Loc_B, AP, Lab_B, Mov_B, Edg_B) as follows:
• Edg_B is a function mapping each location ℓ to the sequence ((ϕ_{ℓ′}, ℓ′))_{ℓ′∈Loc_A} (the order is not important here, as the formulas are mutually exclusive).
It is now easy to prove that the identity Id ⊆ Loc_A × Loc_B is an alternating bisimulation, since there is a direct correspondence between the choices in both structures.
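On the example above, the construction would yield transition formulas of the following shape (our own worked illustration, not from the paper; we write m_i ∈ {1, 2} for the move of player A_i, numbering each player's choices in the order listed):

```latex
% \ell_1 belongs to both choices of A_1 and to choice 2 of A_2:
\varphi_{\ell_1} = (m_1{=}1 \lor m_1{=}2) \land (m_2{=}2)
% \ell_2 belongs to choice 1 of A_1 and choice 1 of A_2:
\varphi_{\ell_2} = (m_1{=}1) \land (m_2{=}1)
% \ell_3 belongs to choice 2 of A_1 and choice 1 of A_2:
\varphi_{\ell_3} = (m_1{=}2) \land (m_2{=}1)
% These formulas are mutually exclusive, and each joint move satisfies
% exactly one of them, matching the singleton intersections of the ATS:
% e.g. move (1,1) gives \{\ell_1,\ell_2\} \cap \{\ell_2,\ell_3\} = \{\ell_2\}.
```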
We now explain how to transform an explicit CGS into an ATS, showing point 3. Let A = (Agt, Loc_A, AP, Lab_A, Mov_A, Edg_A) be an explicit CGS. We define the ATS B = (Agt, Loc_B, AP, Lab_B, Mov_B) as follows (see Figure 5 for more intuition on the construction). It remains to show alternating bisimilarity between those structures: we define a relation R between their locations, and it is then a tedious but routine exercise to prove that R is an alternating bisimulation between A and B.
Point 5 is now immediate (through explicit CGSs), but it could also be proved in a similar way as point 3.
Let us mention that our translations are optimal (up to a polynomial): our exponential translations cannot be achieved in polynomial time, given our complexity results for ATL model checking. Note that this does not mean that the resulting structures must have exponential size.

4.2.1. ⟨⟨A⟩⟩R cannot be expressed with ⟨⟨A⟩⟩U and ⟨⟨A⟩⟩G. In the original papers defining ATL [AHK97, AHK02], the syntax of that logic was slightly different from the one we used in this paper: following the classical definition of the syntax of CTL, it was defined as

φ ::= P | ¬φ | φ ∨ φ | ⟨⟨A⟩⟩X φ | ⟨⟨A⟩⟩G φ | ⟨⟨A⟩⟩φ U φ.

Duality is a fundamental concept in modal and temporal logics: for instance, the dual of modality U, often denoted by R and read "release", is defined by p R q ≡ ¬((¬p) U (¬q)).
Dual modalities allow, for instance, pushing negations inside the formula, which is often an important property when manipulating formulas.
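For instance, writing ⟦A⟧ φ for the dual quantifier ¬⟨⟨A⟩⟩¬φ ("against every strategy of A, some outcome satisfies φ"), a negated until-formula can be rewritten with R in negation normal form (a standard identity, stated here as a sketch):

```latex
\neg\,\langle\!\langle A\rangle\!\rangle\, (p\ \mathrm{U}\ q)
\;\equiv\;
[\![A]\!]\, \bigl((\neg p)\ \mathrm{R}\ (\neg q)\bigr)
```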
In LTL, modality R can be expressed using only U and G:

p R q ≡ G q ∨ q U (p ∧ q).   (4.1)

In the same way, it is well known that CTL can be defined using only the modalities EX, EG and EU. It is easily seen that, in the case of ATL, ⟨⟨A⟩⟩ p R q is not equivalent to ⟨⟨A⟩⟩G q ∨ ⟨⟨A⟩⟩ q U (p ∧ q): it could be the case that some of the outcomes satisfy G q while the others satisfy q U (p ∧ q). In fact, we prove that ATL_orig is strictly less expressive than ATL:

Theorem 4.3. There is no ATL_orig formula equivalent to Φ = ⟨⟨A⟩⟩(a R b).
The proof of Theorem 4.3 is based on techniques similar to those used for proving expressiveness results for temporal logics like CTL or ECTL [Eme90]: we build two families of models (s_i)_{i∈N} and (s′_i)_{i∈N} s.t. (1) s_i ⊭ Φ, (2) s′_i ⊨ Φ for any i, and (3) s_i and s′_i satisfy the same ATL_orig formulas of size less than i. Theorem 4.3 is a direct consequence of the existence of such families of models. In order to simplify the presentation, the theorem is proved for a simpler formula. The models are described by one single inductive CGS C, involving two players; it is depicted in Fig. 6 (Figure 6: the CGS C, with states s_i and s′_i on the left). A label α, β on a transition indicates that this transition corresponds to move α of player A_1 and to move β of player A_2. In that CGS, states s_i and s′_i only differ in that player A_1 has a fourth possible move in s′_i. This ensures that, from state s′_i (for any i), player A_1 has a strategy (namely, he should always play 4) for enforcing a W b.

a_i and b_i satisfy ψ, and the same strategy (move 1 or 2, resp.) enforces G ψ_1 from s_i. It is now easy to see that the same strategy is correct from s′_{i+1}. Conversely, apart from trivial cases, the strategy can again only consist in playing moves 1 or 2. In both cases, the game could end up in s_i, and then in s_{i−1}. Thus s_{i−1} ⊨ ψ, and the same strategy as in s′_{i+1} can be applied in s′_i to witness ψ.
• The proofs for ⟨⟨A_2⟩⟩X ψ_1, ⟨⟨A_2⟩⟩G ψ_1, and ⟨⟨A_2⟩⟩ψ_1 U ψ_2 are very similar to the previous ones.
Lemma 4.5. ∀i > 0, ∀ψ ∈ ATL_orig with |ψ| ≤ i: s_i ⊨ ψ iff s′_i ⊨ ψ.

Proof. The proof proceeds by induction on i, and on the structure of the formula ψ. The case i = 1 is trivial, since s_1 and s′_1 carry the same atomic propositions. For the induction step, dealing with CTL modalities (⟨⟨∅⟩⟩ and ⟨⟨A_1, A_2⟩⟩) is straightforward, so we only consider ⟨⟨A_1⟩⟩- and ⟨⟨A_2⟩⟩-modalities.
First we consider ⟨⟨A_1⟩⟩-modalities. It is well known that we can restrict to state-based strategies in this setting. If player A_1 has a strategy in s_i to enforce something, then he can follow the same strategy from s′_i. Conversely, if player A_1 has a strategy in s′_i to enforce some property, two cases may arise: either the strategy consists in playing move 1, 2 or 3, and it can be mimicked from s_i; or the strategy consists in playing move 4, and we distinguish three cases:
• ψ = ⟨⟨A_1⟩⟩X ψ_1: that move 4 is a winning strategy entails that s′_i, a_i and b_i must satisfy ψ_1. Then s_i (by i.h. on the formula) and s_{i−1} (by Lemma 4.4) both satisfy ψ_1. Playing move 1 (or 3) in s_i ensures that the next state will satisfy ψ_1.
• ψ = ⟨⟨A_1⟩⟩G ψ_1: by playing move 4, the game could end up in s_{i−1} (via b_i), and in a_i and s′_i. Thus s_{i−1} ⊨ ψ, and in particular ψ_1. By i.h., s_i ⊨ ψ_1, and playing move 1 (or 3) in s_i, and then mimicking the original strategy (from s′_i), enforces G ψ_1.
• ψ = ⟨⟨A_1⟩⟩ψ_1 U ψ_2: a strategy starting with move 4 implies s′_i ⊨ ψ_2 (the game could stay in s′_i forever). Then s_i ⊨ ψ_2 by i.h., and the result follows.
We now turn to ⟨⟨A_2⟩⟩-modalities: clearly if ⟨⟨A_2⟩⟩ψ_1 holds in s′_i, it also holds in s_i. Conversely, if player A_2 has a (state-based) strategy to enforce some property in s_i: if it consists in playing move 1 or 3, then the same strategy also works in s′_i; if the strategy starts with move 2, then playing move 3 in s′_i has the same effect, and thus enforces the same property.
Remark 4.6. ATL_orig and ATL have the same distinguishing power as the fragment of ATL involving only the ⟨⟨·⟩⟩X modality (see [AHKV98, proof of Th. 6]). This means that we cannot exhibit two models M and M′ s.t. (1) M ⊨ Φ, (2) M′ ⊭ Φ, and (3) M and M′ satisfy the same ATL_orig formulas.
Remark 4.7. In [AHK02], a restriction of CGSs, the turn-based CGSs, is considered. In any location of these models (named TB-CGSs hereafter), only one player has several moves (the other players have only one possible choice). Such models enjoy determinedness: given a set of players A, either there is a strategy for A to win some objective Φ, or there is a strategy for the other players (Agt∖A) to enforce ¬Φ. In such systems, modality R can be expressed as follows:

⟨⟨A⟩⟩ ϕ R ψ ≡_{TB-CGS} ¬⟨⟨Agt∖A⟩⟩ (¬ϕ) U (¬ψ).

Neither ⟨⟨A⟩⟩∞F nor ⟨⟨A⟩⟩∞G is expressible in ATL. Indeed, assume that ⟨⟨A⟩⟩∞F could be expressed by an ATL formula Φ. This would hold in particular in 1-player games (i.e., Kripke structures). In the case where coalition A contains the only player, we would end up with a CTL equivalent of E∞F, which is known not to exist. A similar argument applies to ⟨⟨A⟩⟩∞G.

Conclusion
In this paper, we considered the basic questions of expressiveness and complexity of ATL. We precisely characterized the complexity of ATL, ATL+, EATL and ATL⋆ model checking, on both ATSs and CGSs, when the number of agents is not fixed. These results complete the previously known results about these formalisms (and correct some of them). It is interesting to see that their complexity classes (∆^P_2 or ∆^P_3) are unusual in the area of model checking. We also showed that ATL, as originally defined in [AHK97, AHK98, AHK02], is not as expressive as one could expect, and we argue that the modality "Release" should be added to its definition.

2.1. Concurrent Game Structures. Concurrent game structures are a multi-player extension of classical Kripke structures [AHK02]. Their definition is as follows:

Definition 2.1. A Concurrent Game Structure (CGS for short) C is a 6-tuple (Agt, Loc, AP, Lab, Mov, Edg) where:
• Agt = {A_1, ..., A_k} is a finite set of agents (or players);
• Loc and AP are two finite sets of locations and atomic propositions, resp.;
• Lab : Loc → 2^AP is a function labeling each location by the set of atomic propositions that hold for that location;
• Mov : Loc × Agt → P(N) ∖ {∅} defines the (finite) set of possible moves of each agent in each location;
• Edg : Loc × N^k → Loc, where k = |Agt|, is a (partial) function defining the transition table.
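As an illustration (our own toy example, not taken from the paper), a two-player "matching" game can be written as a CGS:

```latex
% Agt = \{A_1, A_2\}, Loc = \{\ell_0, \ell_{=}, \ell_{\neq}\}, AP = \{win\}
% Lab(\ell_{=}) = \{win\}, \; Lab(\ell_0) = Lab(\ell_{\neq}) = \emptyset
% Mov(\ell_0, A_i) = \{1, 2\} for i = 1, 2
Edg(\ell_0, (m_1, m_2)) =
  \begin{cases}
    \ell_{=}    & \text{if } m_1 = m_2,\\
    \ell_{\neq} & \text{otherwise.}
  \end{cases}
% Neither player alone has a strategy to enforce X win, since the other
% player can always mismatch; so \ell_0 \not\models <<A_1>> X win,
% although the full coalition <<A_1, A_2>> can enforce it.
```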

F. LAROUSSINIE, N. MARKEY, AND G. OREIBY

Then, writing f_I for the state-based strategy associated to v_I:

Lemma 3.11. For any r ≤ p and t ≥ r, the following statements are equivalent: (a) q_r ⊨ ψ_t; (b) the strategies f_I witness the fact that q_r ⊨ ψ_t; (c) variable z_r evaluates to true in v_I.
depicts the turn-based two-player CGS C associated to I. States s_1 to s_m are labeled by atomic proposition s,

Figure 3: The CGS C

states z_1 to z_m are labeled by atomic proposition z, and the other states are labeled by their name, as shown on Fig. 3.

Definition 4.1 ([AHKV98]). Let A and B be two models of concurrent games (either ATSs or CGSs) over the same set Agt of agents. Let R ⊆ Loc_A × Loc_B be a (non-empty) relation between the states of A and the states of B. That relation is an alternating bisimulation when, for any (ℓ, ℓ′) ∈ R, the following conditions hold:
• Lab_A(ℓ) = Lab_B(ℓ′);
• for any coalition A ⊆ Agt, we have ∀m : A → Mov_A(ℓ, A). ∃m′ : A → Mov_B(ℓ′, A). ∀q′ ∈ Next(ℓ′, A, m′). ∃q ∈ Next(ℓ, A, m). (q, q′) ∈ R.
Figure 4: Costs of translations between the three models

4.2.2. ⟨⟨A⟩⟩∞F cannot be expressed in ATL. It is well known that ECTL formulae of the form E∞F P (and its dual A∞G P) cannot be expressed in CTL [Eme90]. On the other hand, the following equivalences hold: E∞G P ≡ EF EG P and A∞F P ≡ AG AF P. The situation is again different in ATL: neither ⟨⟨A⟩⟩∞F nor ⟨⟨A⟩⟩∞G is expressible in ATL (see Remark 4.7). A strategy is state-based (or memoryless) if it only depends on the current state (i.e., f_{A_i}(ℓ_0 ⋯ ℓ_m) depends only on ℓ_m).