The Complexity of Reachability in Affine Vector Addition Systems with States

Vector addition systems with states (VASS) are widely used for the formal verification of concurrent systems. Given their tremendous computational complexity, practical approaches have relied on techniques such as reachability relaxations, e.g., allowing for negative intermediate counter values. It is natural to question their feasibility for VASS enriched with primitives that typically translate into undecidability. Spurred by this concern, we pinpoint the complexity of integer relaxations with respect to arbitrary classes of affine operations. More specifically, we provide a trichotomy on the complexity of integer reachability in VASS extended with affine operations (affine VASS). Namely, we show that it is NP-complete for VASS with resets, PSPACE-complete for VASS with (pseudo-)transfers and VASS with (pseudo-)copies, and undecidable for any other class. We further present a dichotomy for standard reachability in affine VASS: it is decidable for VASS with permutations, and undecidable for any other class. This yields a complete and unified complexity landscape of reachability in affine VASS. We also consider the reachability problem parameterized by a fixed affine VASS, rather than a class, and we show that the complexity landscape is arbitrary in this setting.


Introduction
Vector addition systems with states (VASS), which can equivalently be seen as Petri nets, form a widespread general model of infinite-state systems with countless applications, ranging from the verification of concurrent programs to the modeling of biological, chemical and business processes (see, e.g., [GS92, KKW14, EGLM17, HGD08, van98]). They comprise a finite-state controller with counters ranging over N, updated via instructions of the form x ← x + c which are executable if x + c ≥ 0. The central decision problem concerning VASS is reachability. An affine VASS is a triple V = (d, Q, T) where:
• d ≥ 1 is the number of counters of V;
• Q is a finite set of elements called control-states;
• T ⊆ Q × Z^{d×d} × Z^d × Q is a finite set of elements called transitions.
For every transition t = (p, A, b, q), let src(t) := p, M(t) := A, ∆(t) := b and tgt(t) := q. A configuration is a pair (q, v) ∈ Q × Z^d written q(v). For all t ∈ T and D ∈ {Z, N}, we write p(u) −t→_D q(v) if u, v ∈ D^d, src(t) = p, tgt(t) = q, and v = M(t) · u + ∆(t). The relation −→_D is naturally extended to sequences of transitions, i.e., for every w = t_1 t_2 · · · t_k ∈ T^k we let p(u) −w→_D q(v) if there are configurations q_0(v_0), q_1(v_1), ..., q_k(v_k) with q_0(v_0) = p(u), q_k(v_k) = q(v) and q_{i−1}(v_{i−1}) −t_i→_D q_i(v_i) for every i ∈ [k]. Moreover, we write p(u) −*→_D q(v) if p(u) −w→_D q(v) for some w ∈ T*.

As an example, let us consider the affine VASS of Figure 1, i.e., where d = 2, Q = {p, q, r} and T is as depicted graphically. We have, more generally, p(x, 1) −*→_Z r(2^k, 0) for all x ∈ Z and k ∈ N_{>0}. However, p(3, 0) −s→_N q(1, −1) does not hold, as counters are not allowed to become negative under this semantics.

Classes of matrices. Let us formalize the informal notion of classes of affine VASS, such as "VASS with resets", "VASS with transfers", "VASS with doubling", etc., used throughout the literature.
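To make the step relation concrete, here is a minimal Python sketch of a single transition under the Z- and N-semantics. The names (`step`, `apply`) and the example transition are illustrative, not taken from the paper.

```python
# One affine VASS step under the D-semantics (D = Z or N).
# A transition is a tuple (src, M, delta, tgt); all names are illustrative.
from typing import List, Optional, Tuple

Vec = List[int]
Mat = List[List[int]]

def apply(M: Mat, v: Vec) -> Vec:
    """Matrix-vector product M · v."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def step(p: str, u: Vec, t: Tuple[str, Mat, Vec, str],
         nonnegative: bool) -> Optional[Tuple[str, Vec]]:
    """Return the successor configuration of p(u) under t, if any."""
    src, M, delta, tgt = t
    if p != src:
        return None
    v = [a + b for a, b in zip(apply(M, u), delta)]
    if nonnegative and any(x < 0 for x in v):
        return None  # the N-semantics forbids negative counter values
    return (tgt, v)

# Reset the second counter, then add (1, -2):
t = ("p", [[1, 0], [0, 0]], [1, -2], "q")
assert step("p", [3, 5], t, nonnegative=False) == ("q", [4, -2])  # Z-step
assert step("p", [3, 5], t, nonnegative=True) is None             # blocked
```

The same transition is thus enabled under the Z-semantics but blocked under the N-semantics, which is exactly the distinction between Z-reachability and standard reachability.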
Such classes are determined by the extra operations they provide, i.e., by their affine transformations, and more precisely by their linear part (matrices) rather than their additive part (vectors). Affine VASS extend standard VASS, so classes always include the identity matrix, which amounts to not applying any extra operation. Moreover, as transformations can be composed along sequences of transitions, their matrices are closed under multiplication: if matrices M(s) and M(t) are allowed on transitions s and t of two affine VASS of a given class, then M(s) · M(t) is typically also allowed in an affine VASS of the class, as it is understood that s and t can be composed. In other words, matrices form a monoid. In addition, the classes of affine VASS typically considered do not pose restrictions on the number of counters that can be used, or on the subset of counters on which operations can be applied. In other words, their affine transformations can be extended to arbitrary dimensions and can be applied on any subset of counters; e.g., general "VASS with resets" allow resetting any counter, not just, say, the first one.

We formalize these observations as follows. For every k ≥ 1, let I_k be the k × k identity matrix and let S_k denote the set of permutations over [k]. Let P_σ ∈ {0, 1}^{k×k} be the permutation matrix of σ ∈ S_k. For every matrix A ∈ Z^{k×k}, every permutation σ ∈ S_k and every n ≥ 1, let σ(A) := P_σ · A · P_σ^{−1}, and let A_n ∈ Z^{(k+n)×(k+n)} be the matrix obtained from A by appending an identity block, i.e.,

A_n := [ A   0  ]
       [ 0  I_n ].

A class (of matrices) is a set of matrices C ⊆ ⋃_{k≥1} Z^{k×k} that satisfies {σ(A), A_n, I_n, A · B} ⊆ C for every A, B ∈ C, every σ ∈ S_{dim A} and every n ≥ 1. In other words, C is closed under counter renaming; each matrix of C can be extended to larger dimensions; and C ∩ Z^{k×k} is a monoid under matrix multiplication for every k ≥ 1.
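The two closure operations σ(A) and A_n can be sketched in a few lines of Python; `rename` and `extend` are hypothetical helper names for them.

```python
# Closure operations of a class: counter renaming sigma(A) = P·A·P^{-1}
# and the extension A_n that pads A with an identity block.
# Helper names are illustrative, not from the paper.
def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def perm_matrix(sigma):
    # P[i][j] = 1 iff i = sigma(j), so that (P·v)(sigma(j)) = v(j)
    k = len(sigma)
    return [[1 if sigma[j] == i else 0 for j in range(k)] for i in range(k)]

def rename(A, sigma):
    P = perm_matrix(sigma)
    P_inv = perm_matrix([sigma.index(i) for i in range(len(sigma))])
    return matmul(matmul(P, A), P_inv)

def extend(A, n):
    k = len(A)
    return [[A[i][j] if i < k and j < k else int(i == j)
             for j in range(k + n)] for i in range(k + n)]

R0 = [[0, 0], [0, 1]]                          # resets the first counter
assert rename(R0, (1, 0)) == [[1, 0], [0, 0]]  # now resets the second one
assert extend(R0, 1) == [[0, 0, 0], [0, 1, 0], [0, 0, 1]]
```

The small check at the end illustrates why renaming lets a class apply its operation to any chosen counter.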
Note that "counter renaming" amounts to choosing a set of counters on which to apply a given transformation: it renames the counters, applies the transformation, and renames the counters back to their original names. Let us illustrate this. Consider the classical case of transfer VASS, i.e., where the contents of a counter can be transferred onto another counter with operations of the form "x ← x + y; y ← 0". In matrix notation, this amounts to:

O := [ 1 1 ]
     [ 0 0 ].

Now, consider a system with three counters c_1, c_2 and c_3. This system should be able to compute "c_1 ← c_1 + c_2 + c_3; c_2 ← 0; c_3 ← 0", but matrix O cannot achieve this on its own. However, it can be done with the following matrix:

O′ := [ 1 1 1 ]
      [ 0 0 0 ]
      [ 0 0 0 ].

We have O′ = O_1 · σ(O_1) where σ := (2; 3). Thus, the operation can be achieved by any class containing O. The symmetric operation "c_3 ← c_1 + c_2 + c_3; c_1 ← 0; c_2 ← 0", e.g., can also be achieved with appropriate permutations. Hence, this corresponds to the usual notion of transfers: we are allowed to choose some counters and apply transfers in either direction. Note that requiring P_σ · A ∈ C for classes would be too strong, as it would allow permuting the contents of counters even for classes with no permutation matrix, such as resets.
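The factorization O′ = O_1 · σ(O_1) can be checked mechanically; the following is a small sanity check, using that the transposition (2; 3) is its own inverse.

```python
# Check that the three-counter transfer "c1 <- c1+c2+c3; c2,c3 <- 0"
# factors as O_1 · sigma(O_1) with sigma the transposition (2 3).
def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

O1 = [[1, 1, 0], [0, 0, 0], [0, 0, 1]]   # O extended with a third counter
P = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]    # permutation matrix of (2 3)
sigma_O1 = matmul(matmul(P, O1), P)      # (2 3) is its own inverse
assert sigma_O1 == [[1, 0, 1], [0, 1, 0], [0, 0, 0]]  # c1 <- c1+c3; c3 <- 0
assert matmul(O1, sigma_O1) == [[1, 1, 1], [0, 0, 0], [0, 0, 0]]
```

Read as a composition of vector transformations: first σ(O_1) transfers c_3 onto c_1, then O_1 transfers c_2 onto c_1, yielding the three-counter transfer.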
Classes of interest. We say that a matrix A ∈ Z^{k×k} is a pseudo-reset, pseudo-transfer or pseudo-copy matrix if A ∈ {−1, 0, 1}^{k×k} and if it additionally satisfies the following, respectively:
• pseudo-reset matrix: A is a diagonal matrix;
• pseudo-transfer matrix: A has at most one nonzero entry per column;
• pseudo-copy matrix: A has at most one nonzero entry per row.
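These three shapes are easy to test mechanically; the predicate names below are illustrative, not from the paper.

```python
# Predicates for the three matrix shapes defined above.
def entries_ok(A):
    return all(x in (-1, 0, 1) for row in A for x in row)

def is_pseudo_reset(A):
    return entries_ok(A) and all(A[i][j] == 0 for i in range(len(A))
                                 for j in range(len(A)) if i != j)

def is_pseudo_transfer(A):  # at most one nonzero entry per column
    return entries_ok(A) and all(
        sum(A[i][j] != 0 for i in range(len(A))) <= 1 for j in range(len(A)))

def is_pseudo_copy(A):      # at most one nonzero entry per row
    return entries_ok(A) and all(sum(x != 0 for x in row) <= 1 for row in A)

T = [[1, 1], [0, 0]]        # the transfer "x <- x + y; y <- 0"
assert is_pseudo_transfer(T) and not is_pseudo_copy(T)
assert is_pseudo_copy([[1, 0], [1, 0]])      # a copy "y <- x"
assert is_pseudo_reset([[1, 0], [0, -1]])    # diagonal, with a sign flip
```

Note that the transfer matrix and its transpose swap roles: transposing exchanges the pseudo-transfer and pseudo-copy conditions.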

M. Blondin and M. Raskin, Vol. 17:3

A reset, transfer or copy matrix is a nonnegative pseudo-reset, pseudo-transfer or pseudo-copy matrix, as achieved respectively by the matrices of transitions s (reset), t (transfer) and u (copy) illustrated in Figure 1.
Reachability problems. We say that an affine VASS V = (k, Q, T ) belongs to a class of matrices C if {M (t) : t ∈ T } ⊆ C, i.e., if all matrices appearing on its transitions belong to C.
The reachability problem and integer reachability problem for a fixed class C are defined as follows:

Reach_C
Input: an affine VASS V that belongs to C, and two configurations p(u), q(v).
Question: does p(u) −*→_N q(v) hold?

Z-Reach_C
Input: an affine VASS V that belongs to C, and two configurations p(u), q(v).
Question: does p(u) −*→_Z q(v) hold?

A complexity trichotomy for integer reachability
This section is devoted to the proof of our main result, namely the trichotomy on Z-Reach_C.

Theorem 3.1. The integer reachability problem Z-Reach_C is:
(i) NP-complete if C only contains reset matrices;
(ii) PSPACE-complete, otherwise, if C only contains pseudo-transfer matrices or only contains pseudo-copy matrices;
(iii) undecidable otherwise.
It is known from [HH14, Cor. 10] that NP-hardness holds for affine VASS using only the identity matrix (recall that our definition of reset matrices includes the identity matrix), and that NP membership holds for any class of reset matrices. Hence, (i) follows immediately, and the rest of this section is dedicated to proving (ii) and (iii).
3.1. PSPACE-hardness. For the rest of this subsection, let us fix some class C that either only contains pseudo-transfer matrices or only contains pseudo-copy matrices. We prove PSPACE-hardness of Z-Reach C by first proving that PSPACE-hardness holds if either: • C contains a matrix with an entry equal to −1; or • C contains a matrix with entries from {0, 1} and a nonzero entry outside of its diagonal.
For these two cases, we first show that C can implement the operations x ← −x or (x, y) ← (y, x) respectively, i.e., sign flips or swaps. Essentially, each of these operations is sufficient to simulate linear bounded automata. Before investigating these two cases, let us carefully formalize what it means to implement an operation.

Definition 3.2. Let f : Z^k → Z^k and let τ ∈ {0, ?}. Given a set of counters X ⊆ [m], let V_X := {v ∈ Z^m : v(i) = 0 for every i ∈ [m] \ X} if τ = 0, and let V_X := Z^m otherwise. We say that C τ-implements f if for every n ≥ k, there exist counters X = {x_1, x_2, ..., x_n}, matrices {F_σ : σ ∈ S_k} ⊆ C and m ≥ n such that the following holds for every σ ∈ S_k and v ∈ V_X: (a) dim F_σ = m, together with conditions (b), (c) and (d) below. We further say that C implements f if it either 0-implements or ?-implements f.

Definition 3.2 (b) and (c) state that it is possible to obtain arbitrarily many counters X such that f can be applied on any k-subset of X, provided that the counter values belong to V_X. Moreover, (d) states that vectors resulting from applying operation f also belong to V_X, which ensures that f can be applied arbitrarily many times. Note that (a) allows for extra auxiliary counters whose values are only restricted by V_X.
Informally, ?-implementation means that we use additional counters that can hold arbitrary values, while 0-implementation requires the extra counters to be initialized with zeros but promises to keep them in this state. It turns out that pseudo-transfer matrix classes 0-implement the functions we need, while pseudo-copy matrix classes ?-implement them.

Proposition 3.3. If C contains a matrix with an entry equal to −1, then C implements sign flips, i.e., the operation f(x) := −x.

Proof. Let n ≥ 1 and A ∈ C be such that A_{a,b} = −1 for some counters a and b. Let d := dim A. We extend A with n + 2 counters X′ := X ∪ {y, z}, where X := {x_i : i ∈ [n]} are the counters for which we wish to implement sign flips, and {y, z} are auxiliary counters. More formally, let A′ := A_{n+2}, where X′ = [d + 1, d′] and d′ := d + n + 2.
For every s, t ∈ X such that s ≠ t, let B_{s,t} := π_{s,t}(A′) and let C_t := σ_t(A′), where π_{s,t} := (a; t)(b; s) and σ_t := (a; t). Intuitively, B_{s,t} (resp. C_t) flips the sign from source counter s (resp. t) to target counter t. If a ≠ b, then for every x ∈ X, matrix F_x is a product of such matrices that implements a sign flip in three steps using the auxiliary counters y and z, as illustrated in Figure 2. Otherwise, F_x := C_x implements the sign flip directly in one step.

Figure 2: The left (resp. right) diagram depicts the case where A is a pseudo-transfer (resp. pseudo-copy) matrix. A solid or dashed edge from s to t represents operation s ← t or s ← −t respectively. Filled nodes indicate counters that necessarily hold 0. Symbol "?" stands for an integer whose value is irrelevant and depends on A and the counter values.
Let us consider the case where A is a pseudo-transfer matrix. From the definition of B_{s,t} and C_t, it can be shown that for every s, t, u ∈ X such that s ≠ t and u ∉ {s, t}, the following holds: (i) B_{s,t} · e_s = C_t · e_t = −e_t, and (ii) B_{s,t} · e_u = C_t · e_u = e_u. Let us show that we 0-implement sign flips. Let v ∈ V_X and x ∈ X. By definition of V_X, v = Σ_{y∈X} v(y) · e_y. Let v′ := Σ_{j∈X\{x}} v(j) · e_j. Items (b), (c) and (d) of Definition 3.2 are satisfied by applications of (i) and (ii). The proof of (i) and (ii), and the similar proof for the case where A is a pseudo-copy matrix, are analogous (see Appendix A).
Proposition 3.4. Z-Reach C is PSPACE-hard if C has a matrix with an entry equal to −1.
Let w ∈ {0, 1} k and let A = (P, Σ, δ, p init , p acc ) be a linear bounded automaton where: • P is its finite set of control-states; • Σ = {0, 1} is its input and tape alphabet; • δ : P × Σ → P × Σ × {Left, Right} is its transition function; and • p init and p acc are its initial and accepting control-states, respectively.
We construct an affine VASS V = (d, Q, T) and configurations p(u), q(v) such that V belongs to C, and p(u) −*→_Z q(v) ⟺ A accepts w. For every control-state p and head position j of A, there is a matching control-state in V, i.e., Q := {q_{p,j} : p ∈ P, 1 ≤ j ≤ k} ∪ Q′, where Q′ is a set of auxiliary control-states. We associate two counters to each tape cell of A, i.e., d := 2 · k. For readability, let us denote these counters {x_j, y_j : j ∈ [k]}.
We represent the contents of tape cell j by the sign of counter y_j, i.e., y_j > 0 represents 0, and y_j < 0 represents 1. We will ensure that y_j is never equal to 0, which would otherwise be an undefined representation. Since V cannot directly test the sign of a counter, it will be possible for V to commit errors during the simulation of A. However, we will construct V in such a way that erroneous simulations are detected.
The gadget depicted in Figure 3 simulates a transition of A in three steps: • x i is incremented; • y i is incremented (resp. decremented) if the letter a to be read is 0 (resp. 1); • the sign of y i is flipped if the letter b to be written differs from the letter a to be read.
Let u ∈ Z^d be the vector such that for every j ∈ [k]: u(x_j) := 1 and u(y_j) := (−1)^{w_j}.
Provided that V starts in vector u, we claim that the gadget of Figure 3 leads from q_{p,i} to q_{p′,i+1} while faithfully simulating the transition δ(p, a) = (p′, b, Right) whenever no error is committed. The gadget for direction Left is the same, except that q_{p′,i+1} is replaced by q_{p′,i−1}. Note that a and b are fixed, hence expressions such as (−1)^a are constants; they do not require exponentiation.
Let us see why this claim holds. Let i ∈ [k]. Initially, we have |x_i| = |y_i| and the sign of y_i set correctly. Assume we execute the gadget of Figure 3, resulting in new values x′_i and y′_i. Let λ ≥ 0 be such that |x_i| = |y_i| + λ. Let c ∈ {0, 1} be the letter represented by y_i. If c = a, then |x′_i| = |y′_i| + λ and the sign of y′_i represents b as desired. If c ≠ a, then |x′_i| = |y′_i| + (λ + 1). Thus, we have |x′_i| = |y′_i| if and only if no error was made before and during the execution of the gadget.
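The arithmetic of the gadget can be replayed in a few lines of Python; this is an illustrative sketch of the three bullets above, not the construction itself.

```python
# Replay of the cell gadget: x counts gadget executions on this cell,
# and the sign of y encodes the cell bit (y > 0 is 0, y < 0 is 1).
def gadget(x, y, a, b):
    """Simulate guessing that the cell reads a, and writing b."""
    x += 1
    y += 1 if a == 0 else -1  # |y| grows iff the guessed letter is correct
    if b != a:
        y = -y                # write b by flipping the sign of y
    return x, y

x, y = 1, 1                      # cell holds 0, and |x| == |y|
x, y = gadget(x, y, a=0, b=1)    # correct guess: write 1
assert (x, y) == (2, -2)         # |x| == |y| preserved, cell now holds 1
x, y = gadget(x, y, a=0, b=0)    # wrong guess: the cell holds 1, not 0
assert abs(x) != abs(y)          # the discrepancy is never repaired
```

Since |y| can only fall behind |x| and never catch up, a single wrong guess leaves a permanent discrepancy that the final gadget detects.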
From the above observations, we conclude that A accepts w if and only if there exist i ∈ [k] and v ∈ Z^d such that q_{p_init,1}(u) −*→_Z q_{p_acc,i}(v) and |v(x_j)| = |v(y_j)| for every j ∈ [k]. This can be tested using the gadget depicted in Figure 4, which:
• detects nondeterministically that some control-state of the form q_{p_acc,i} has been reached;
• attempts to set y_j to its absolute value for every j ∈ [k];
• decrements x_j and y_j simultaneously for every j ∈ [k].

Figure 4: Gadget of V for testing whether A was faithfully simulated and has accepted w.
Due to the above observations, it is only possible to reach r(0) if |x_j| = |y_j| for every j ∈ [k] before entering the gadget of Figure 4. Thus, we are done proving the reduction, since A accepts w if and only if q_{p_init,1}(u) −*→_Z r(0).

Sign flips. The above construction considers sign flips as a "native" operation. However, this is not necessarily the case; instead, we rely on the fact that class C either 0-implements

or ?-implements sign flips, by Proposition 3.3. Thus, the reachability question must be changed to q_{p_init,1}(u, 0) −*→_Z r(0, 0) to take auxiliary counters into account. Moreover, if C ?-implements sign flips, then extra transitions (r, I, e_j, r) and (r, I, −e_j, r) must be added to T, for every auxiliary counter j, to allow counter j to be set back to 0. Note that control-state r can only be reached after the simulation of A, hence it plays no role in the emulation of sign flips. Moreover, if there is an error during the simulation of A and the extra transitions set the auxiliary counters to zero, we will still detect it, as the configuration will be of the form r(w, 0) where w ≠ 0.
In the two forthcoming propositions, we prove PSPACE-hardness of the remaining case.
Proposition 3.5. If C contains a matrix with entries from {0, 1} and a nonzero entry outside of its main diagonal, then it implements swaps, i.e., the operation f(x, y) := (y, x).

Proof. Let n ≥ 2 and let A ∈ C be a matrix with entries from {0, 1} and a nonzero entry outside of its main diagonal. Let d := dim A.

Figure 5: The left (resp. right) diagram depicts the case where A is a transfer (resp. copy) matrix. An edge from counter s to counter t represents operation s ← t. Filled nodes indicate counters that necessarily hold 0. Symbol "?" stands for an integer whose value is irrelevant and depends on A and the counter values.
Intuitively, B_{s,t} moves the contents from some source counter s to some target counter t, and F_{x,y} implements a swap in three steps using an auxiliary counter z, as depicted in Figure 5. In the case where A is a transfer matrix, B_{s,t} resets s, provided that t held value 0.
Let us consider the case where A is a transfer matrix. From the definition of B_{s,t}, it can be shown that for every s, t, u ∈ X such that s ≠ t and u ∉ {s, t}, the following holds: (i) B_{s,t} · e_s = e_t, and (ii) B_{s,t} · e_u = e_u. Let us show that we 0-implement swaps. Let v ∈ V_X and let x, y ∈ X be such that x ≠ y. Items (b), (c) and (d) of Definition 3.2 are satisfied by applications of (i) and (ii). The proof of (i) and (ii), and the similar proof for the case where A is a copy matrix, are analogous (see Appendix A).
Proposition 3.6. Z-Reach C is PSPACE-hard if C contains a matrix with entries from {0, 1} and a nonzero entry outside of its main diagonal.
Proof. It is shown in [BHMR19] that Z-reachability is PSPACE-hard for affine VASS with swaps, using a reduction from the membership problem for linear bounded automata.
Here, we may not have swaps as a "native" operation. However, by Proposition 3.5, class C implements swaps. Thus, as in the proof of Proposition 3.4, the reachability question must be adapted to take the auxiliary counters into account. Moreover, if the class C ?-implements swaps, then new transitions must be introduced to allow the auxiliary counters to be set back to 0. Recall that under ?-implementation, there is no requirement on the value of the auxiliary counters, hence these new transitions do not interfere with the emulation of swaps.
We now proceed to prove the main result of this subsection, namely Theorem 3.1 (ii).

Proof of Theorem 3.1 (ii). Let M_k := C ∩ Z^{k×k} for every k ≥ 1. Theorem 7 of [BHM18] shows that Z-Reach_C belongs to PSPACE if each M_k is a finite monoid of at most exponential norm and size in k. Let us show that this is the case. First, since C is a class that contains only pseudo-transfer (resp. pseudo-copy) matrices, and since the product of two such matrices remains so, M_k is a monoid, which is finite as M_k ⊆ {−1, 0, 1}^{k×k}. Moreover, by definition of pseudo-transfer and pseudo-copy matrices, each such matrix can be described by cutting it into k lines and specifying, for each line, either the position of the unique nonzero entry (which is −1 or 1) or the absence of such an entry. Therefore, for every k ≥ 1, it is the case that ‖M_k‖ ≤ 1 and |M_k| ≤ (2k + 1)^k, which is at most exponential in k.

It remains to show PSPACE-hardness. By assumption, C contains a nonreset matrix A. Since ‖C‖ ≤ 1, we have ‖A‖ = 1, as no class can be such that ‖C‖ = 0. If A contains an

entry equal to −1, then we are done by Proposition 3.4. Otherwise, A only has entries from {0, 1}, and hence we are done by Proposition 3.6.
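The finiteness and counting argument can be checked by brute force in small dimension. The following sketch does so for k = 2, assuming the pseudo-transfer shape (entries in {−1, 0, 1}, at most one nonzero entry per column); the count (2k + 1)^k = 25 is one concrete instantiation of the description above.

```python
# Brute-force check that the 2x2 pseudo-transfer matrices form a
# finite monoid of the expected size.
from itertools import product

def is_pt(A):  # pseudo-transfer: at most one nonzero entry per column
    return all(sum(A[i][j] != 0 for i in range(2)) <= 1 for j in range(2))

mats = [((a, b), (c, d)) for a, b, c, d in product((-1, 0, 1), repeat=4)
        if is_pt(((a, b), (c, d)))]
# Each column is either zero, or carries a single entry from {-1, 1}
# in one of the 2 rows: (2*2 + 1)^2 = 25 matrices in total.
assert len(mats) == 25

def mul(A, B):
    return tuple(tuple(sum(A[i][t] * B[t][j] for t in range(2))
                       for j in range(2)) for i in range(2))

# Closed under multiplication, hence a finite monoid:
assert all(mul(A, B) in mats for A in mats for B in mats)
```

The closure check reflects the fact that a column of A · B is ± a column of A (or zero), so the product keeps at most one nonzero entry per column.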
3.2. Undecidability. In this subsection, we first show that any class C that does not satisfy the requirements for Z-Reach_C ∈ {NP-complete, PSPACE-complete} must be such that ‖C‖ ≥ 2. We then show that this is sufficient to mimic doubling, i.e., the operation x ↦ 2x, even if C does not contain a doubling matrix. In more detail, we will (a) construct a matrix C that provides a sufficiently fast growth, which will (b) allow us to derive undecidability by revisiting a reduction from the Post correspondence problem that depends on doubling.
Proposition 3.7. Let C be a class that contains some matrices A and B which are, respectively, not a pseudo-copy and not a pseudo-transfer matrix. It is the case that ‖C‖ ≥ 2.
Proof. By assumption, A and B respectively have a row and a column with at least two nonzero entries. We make use of the following lemma, shown in Appendix A: if C contains a matrix which has a row (resp. column) with at least two nonzero entries, then C also contains a matrix which has a row (resp. column) with at least two nonzero entries of the same sign.
Since C is a class, we can assume that dim A = dim B = d for some d ≥ 2, as otherwise the smallest matrix can be enlarged. Thus, there exist i, i′, j, k ∈ [d] and a, b, a′, b′ ≠ 0 such that A_{i,j} = a and A_{i,k} = b have the same sign, and B_{j,i′} = a′ and B_{k,i′} = b′ have the same sign. Note that the reason we can assume A and B to share counters j and k is that C is closed under counter renaming.
We wish to obtain a matrix with entry a · a′ + b · b′. We cannot simply pick A · B, as (A · B)_{i,i′} may differ from this value due to other nonzero entries. Hence, we rename all counters of B, except for j and k, with fresh counters. This way, we avoid possible overlaps and we can select precisely the four desired entries. More formally, let B′ be obtained from B_d by renaming every counter ℓ ∉ {j, k} to ℓ + d, and let C := A_d · B′. Let i″ := i′ + d if i′ ∉ {j, k}, and i″ := i′ otherwise. Since a and b (resp. a′ and b′) have the same sign, and since a, b, a′, b′ ≠ 0, we conclude that |C_{i,i″}| ≥ 2 and consequently that ‖C‖ ≥ ‖C_{i,i″}‖ ≥ 2.

To avoid cumbersome subscripts, we write e for e_1 in the rest of the section. Moreover, let λ_ℓ(C) := (C^ℓ · e)(1) for every matrix C and ℓ ∈ N.
The following technical lemma will be key to mimic doubling. It shows that, from any class of norm at least 2, we can extract a matrix with sufficiently fast growth.
Lemma 3.8. For every class of matrices C such that ‖C‖ ≥ 2, there exists C ∈ C with λ_{n+1}(C) ≥ 2 · λ_n(C) for every n ∈ N.
Proof. Let A ∈ C be a matrix with some entry c such that |c| ≥ 2. We can assume that c ≥ 2: if it is negative, then we can multiply A by a suitable permutation of itself to obtain an entry equal to c · c. We can further assume that c is the largest positive coefficient occurring within A, and that it lies in the first column of A, i.e., A_{k,1} = c for some k ∈ [d], where d := dim A. We consider the case where k = 1; the case where k ≠ 1 will be discussed later.
For readability, we rename counters {1, 2, ..., d} respectively by X := {x_1, x_2, ..., x_d}. Note that (A · e)(x_1) = c ≥ 2 · e(x_1), as desired. However, vector A · e may now hold nonzero values in counters x_2, ..., x_d. Therefore, if we multiply this vector by A, some "noise" will be added to counter x_1. If this noise is too large, then it may cancel the growth of x_1 by ≈ c. We address this issue by introducing extra auxiliary counters replacing x_2, ..., x_d at each "iteration". Of course, we cannot have infinitely many auxiliary counters. Fortunately, after a sufficiently large number m of iterations, the auxiliary counters used at the first iteration will contain sufficiently small noise, so that the process can restart from there.
More formally, let A′ be the extension of A with the auxiliary counters Y := {y_{i,j} : 0 ≤ i < m, j ∈ [2, d]}, where m ≥ 1 is a sufficiently large constant whose value will be picked later. Let V be the set of vectors v ∈ Z^{|X|+|Y|} satisfying v(x_1) > 0 and whose counters of Y are sufficiently small relative to v(x_1). Let us fix some vector v_0 ∈ V. For every 0 ≤ i < m, let B_i := σ_i(A′), where σ_i renames x_j to y_{i,j} for every j ∈ [2, d], and let v_{i+1} := B_i · v_i. We claim that v_m(x_1) ≥ 2 · v_0(x_1) and v_m ∈ V. The validity of this claim proves the lemma. Indeed, C · v_0 = v_m where C := B_{m−1} · · · B_1 · B_0. Hence, an application of C yields a vector whose first component has at least doubled. Since e ∈ V and the resulting vector also belongs to V, this can be iterated arbitrarily many times.
Let us first establish three properties, (a), (b) and (c), for every 0 ≤ i < m and j ∈ [2, d]. Property (a), which follows from the definition of B_i, essentially states that the contents of counter y_{i,j} are only altered from v_i to v_{i+1}. Properties (b) and (c) bound the growth of the counters in terms of x_1. Let us prove these two latter properties by induction on i.
Property (b) follows by induction, and similarly property (c) holds for every j ∈ [2, d]. We may now prove the claim. Let m be sufficiently large so that (3c/4)^m ≥ 8cd. We have v_m(x_1) ≥ (3c/4)^m · v_0(x_1) ≥ 8cd · v_0(x_1) by (b) and the definition of m. Hence, since c ≥ 2 and d ≥ 1, we have v_m(x_1) ≥ 2 · v_0(x_1), which satisfies the first part of the claim. Moreover, the second part of the claim, namely v_m ∈ V, holds by the bound (c) on the counters y_{i,j} ∈ Y. We are done proving the lemma for the case A_{k,1} = c ≥ 2 with k = 1. This case is slightly simpler, as c lies on the main diagonal of A, which means that v_{i+1}(x_1) ≈ c · v_i(x_1). If k ≠ 1, then we have v_{i+1}(x_k) ≈ c · v_i(x_1) instead, which breaks composability for the next iteration. However, this is easily fixed by swapping the names of counters x_k and x_1.
Let us fix a class C such that ‖C‖ ≥ 2 and the matrix C obtained for C from Lemma 3.8. For simplicity, we write λ_ℓ instead of λ_ℓ(C). We prove two intermediary propositions that essentially show that C can encode binary strings. Let f_b(v) := C · v + b · e for both b ∈ {0, 1} and every v ∈ Z^{dim C}. Let f_ε be the identity function, and let f_x := f_{x_n} ∘ · · · ∘ f_{x_2} ∘ f_{x_1} for every x ∈ {0, 1}^n. Let γ_x := f_x(e)(1) for every x ∈ {0, 1}*. Let ⟨ε⟩ := ∅ and ⟨w⟩ := {i ∈ [k] : w_i = 1} be the "support" of w for every sequence w ∈ {0, 1}^+ of length k > 0.
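In dimension 1, the doubling matrix C = (2) satisfies the growth condition of Lemma 3.8, and the encoding specializes to reading "1x" as a binary number. The following is a hedged illustration of γ_x in that special case, not the general construction.

```python
# gamma_x for the one-dimensional doubling matrix C = (2), where
# f_b(v) = 2v + b and e = 1: gamma_x is the value of the binary
# string "1" + x, so distinct strings get distinct values.
from itertools import product

def gamma(x: str) -> int:
    v = 1                       # start from e
    for b in x:                 # apply f_{x_1}, then f_{x_2}, ...
        v = 2 * v + int(b)
    return v

assert gamma("") == 1
assert gamma("101") == 0b1101 == 13
words = [""] + ["".join(w) for n in range(1, 7)
                for w in product("01", repeat=n)]
assert len({gamma(w) for w in words}) == len(words)  # gamma is injective
```

The leading 1 (coming from e) records the length of x, which is why even strings like "0" and "00" receive distinct values.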
Proposition 3.9. For every x ∈ {0, 1}*, it is the case that γ_x = λ_{|x|} + Σ_{i∈⟨x⟩} λ_{|x|−i}.

Proof. It suffices to show that f_x(e) = C^{|x|} · e + Σ_{i∈⟨x⟩} C^{|x|−i} · e for every x ∈ {0, 1}*. Let us prove this by induction on |x|. If |x| = 0, then x = ε, and hence f_x(e) = e = C^0 · e. Assume that |x| > 0 and that the claim holds for sequences of length |x| − 1. There exist b ∈ {0, 1} and w ∈ {0, 1}* such that x = wb, and the claim follows by applying f_b to f_w(e).

Proposition 3.10. For every x, y ∈ {0, 1}*, it is the case that γ_x = γ_y if and only if x = y.

Proof. Let <_lex denote the lexicographical order over {0, 1}*. It is sufficient to show that for every x, y ∈ {0, 1}*, if x <_lex y, then γ_x < γ_y. Indeed, if this claim holds, then for every x, y ∈ {0, 1}* such that x ≠ y, we either have x <_lex y or y <_lex x, which implies γ_x ≠ γ_y in both cases. Let us prove the claim. Let x, y ∈ {0, 1}* be such that x <_lex y. We either have |x| < |y| or |x| = |y|. If the former holds, then the claim follows from the growth guarantee of Lemma 3.8. It remains to prove the case where |x| = |y| = k for some k > 0. Since x <_lex y, there exist u, v, w ∈ {0, 1}* such that x = u0v and y = u1w. Let ℓ := k − |u| − 1, and note that ℓ = |v| = |w|. The proof is completed by comparing γ_x and γ_y via Proposition 3.9. We may finally prove the last part of our trichotomy.
Theorem 3.11. Z-Reach_C is undecidable for every class C such that ‖C‖ ≥ 2.

Proof. We give a reduction from the Post correspondence problem inspired by [Rei15]. There, counter values can be doubled as a "native" operation; here, we adapt the construction with our emulation of doubling. Let us consider an instance of the Post correspondence problem over alphabet {0, 1}, i.e., a finite set Γ of pairs of words. We say that Γ has a match if there exists w ∈ Γ^+ such that the underlying top and bottom sequences of w are equal. Let C be the matrix obtained for C from Lemma 3.8, let d := dim C, and let e be of size d. For every x ∈ {0, 1}*, let g_x and h_x be the linear mappings over Z^{2d} defined as f_x, but operating on counters 1, 2, ..., d and counters d + 1, d + 2, ..., 2d respectively. Let V := (2d, Q, T) be the affine VASS such that Q and T are as depicted in Figure 6. Note that V belongs to C: indeed, g_x and h_x can be obtained from matrix C ∈ C and the fact that C is a class, and hence closed under counter renaming and larger dimensions. We claim that p(e, e) −*→_Z r(e, e) if and only if Γ has a match. Note that any sequence w ∈ T^+ from p to p computes g_{w_x} ∘ h_{w_y}, where w_x and w_y denote the top and bottom sequences induced by w. Therefore:

p(e, e) −*→_Z r(e, e) ⟺ ∃w ∈ T^+ : γ_{w_x} = γ_{w_y} (by def. of g, h and γ)
⟺ ∃w ∈ T^+ : w_x = w_y (by Proposition 3.10)
⟺ Γ has a match.
We conclude this section by proving Theorem 3.1 (iii) which can be equivalently formulated as follows: Corollary 3.12. Z-Reach C is undecidable if C does not only contain pseudo-transfer matrices and does not only contain pseudo-copy matrices.

A complexity dichotomy for reachability
This section is devoted to the following complexity dichotomy on Reach_C, which is mostly proven by exploiting notions and results from the previous section.

Theorem 4.1. The reachability problem Reach_C is equivalent to the (standard) VASS reachability problem if C only contains permutation matrices, and is undecidable otherwise.

4.1. Decidability. Note that the (standard) VASS reachability problem is the problem Reach_I where I := ⋃_{n≥1} {I_n}. Clearly Reach_I ≤ Reach_C for any class C. Therefore, it suffices to show the following:

Proposition 4.2. Reach_C ≤ Reach_I for every class C that only contains permutation matrices.
Proof. Let V = (d, Q, T) be an affine VASS that belongs to C. We construct a (standard) VASS V′ = (d, Q′, T′) that simulates V. Recall that a (standard) VASS is an affine VASS that only uses the identity matrix; for readability, we omit the identity matrix on the transitions of V′. We assume without loss of generality that each transition t ∈ T satisfies either ∆(t) = 0 or M(t) = I. Indeed, since permutation matrices are nonnegative, every transition of T can be split into two parts: first applying its matrix, and then its vector.
The control-states and transitions of V′ are defined as Q′ := {q_σ : q ∈ Q, σ ∈ S_d} and T′ := S ∪ S_vec, which are to be defined shortly. Intuitively, each control-state of V′ stores the current control-state of V together with the current renaming of its counters. Whenever

a transition t ∈ T such that ∆(t) = 0 is to be applied, the counters must be renamed by the permutation M(t); this is achieved by the transitions of S. Similarly, whenever a transition t ∈ T such that M(t) = I is to be applied, ∆(t) should be added to the counters, but in accordance with the current renaming of the counters; this is achieved by the transitions of S_vec. A routine induction shows that reachability in V corresponds to reachability in V′ from control-states indexed by the identity permutation ε, up to one of the finitely many renamings at the target. Since this amounts to finitely many reachability queries, i.e., |S_d| = d! queries, this yields a Turing reduction.¹
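The correctness of deferring permutations rests on the identity P_σ · u + δ = P_σ · (u + P_σ^{−1} · δ), which a short randomized check confirms; the helper names below are illustrative.

```python
# Key identity behind the reduction: applying a permutation and then
# adding delta equals adding the inversely-renamed delta first and
# permuting afterwards, so permutations can be deferred.
import random
from itertools import permutations

def apply_perm(sigma, v):      # (P_sigma · v)(sigma(i)) = v(i)
    out = [0] * len(v)
    for i, x in enumerate(v):
        out[sigma[i]] = x
    return out

def inverse(sigma):
    inv = [0] * len(sigma)
    for i, s in enumerate(sigma):
        inv[s] = i
    return inv

random.seed(0)
for sigma in permutations(range(3)):
    u = [random.randint(-5, 5) for _ in range(3)]
    delta = [random.randint(-5, 5) for _ in range(3)]
    lhs = [a + b for a, b in zip(apply_perm(sigma, u), delta)]
    rhs = apply_perm(sigma, [a + b for a, b in
                             zip(u, apply_perm(inverse(sigma), delta))])
    assert lhs == rhs
```

Since permuting a vector does not change its multiset of values, nonnegativity is preserved identically on both sides, which is why the simulation is faithful under the N-semantics.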

4.2. Undecidability. We show undecidability by considering three types of classes: (1) classes with matrices with negative entries; (2) nontransfer and noncopy classes; and (3) transfer or copy classes. In each case, we will argue that an "undecidable operation" can be simulated, namely: zero-tests, doubling and resets, respectively.
Proposition 4.3. Reach C is undecidable for every class C that contains a matrix with some negative entry.
Proof. Let A ∈ C be a matrix such that A_{i,j} < 0 for some i, j ∈ [d], where d := dim A. We show how a two-counter Minsky machine M can be simulated by an affine VASS V belonging to C. Note that we only have to show how to simulate zero-tests. The affine VASS V has 2d counters: counters j and j + d, which represent the two counters x and y of M, and 2d − 2 auxiliary counters which will be permanently set to value 0.
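The zero-test idea can be sketched as follows, with an illustrative 2 × 2 matrix having a single negative entry (the actual matrix A ∈ C may be larger).

```python
# A negative entry A[i][j] turns "counter j > 0" into a negative result,
# which the N-semantics rejects; counter j = 0 leaves everything at 0.
def apply(A, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in A]

A = [[0, -1], [0, 0]]                 # A[0][1] = -1
assert apply(A, [0, 0]) == [0, 0]     # counter j holds 0: still valid
assert apply(A, [0, 3]) == [-3, 0]    # counter j holds 3: invalid under N
```

Thus a run that wrongly guesses "counter j = 0" is doomed: the resulting negative value can never be produced under N-reachability.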
Observe that applying A leaves all counters at zero if counter j holds value zero, and otherwise generates a vector with some negative entry (scaled by the current value λ ∈ N of counter j), which is an invalid configuration under N-reachability. Thus, A simulates a zero-test. Figure 7 shows how each transition of M (increments "x ← x + c", "y ← y + c" and zero-tests "x = 0?") is replaced in V. We are done.

¹ Although it is not necessary for our needs, the reduction can be made many-one by weakly computing a matrix multiplication by P_{σ^{−1}} onto d new counters, from each control-state q_σ to a common state r.

Proposition 4.4. Reach_C is undecidable if C does not only contain transfer matrices and does not only contain copy matrices.
Proof. If C contains a matrix with some negative entry, then we are done by Proposition 4.3.
Thus, assume C only contains nonnegative matrices. By Proposition 3.7, we have ‖C‖ ≥ 2. Let C be the matrix obtained for C from Lemma 3.8. Since C ≥ 0, we have C · v ≥ 0 for every v ≥ 0. Hence, multiplication by C is always allowed under N-reachability. Thus, the reduction from the Post correspondence problem given in Theorem 3.11 holds here under N-reachability, as the only possibly (relevant) negative values arose from C.
We may finally prove the last part of our dichotomy: Theorem 4.5. Reach C is undecidable for every class C with some nonpermutation matrix.
Proof. Let A ∈ C be a matrix which is not a permutation matrix. By Propositions 4.3 and 4.4, we may assume that A is either a transfer or a copy matrix. Hence, A must have a column or a row equal to 0, as otherwise it would be a permutation matrix. Thus, we either have A_{·,i} = 0 or A_{i,·} = 0 for some i ∈ [d], where d := dim A. We show that C implements resets, i.e., the operation f : Z → Z such that f(x) := 0. This suffices to complete the proof, since reachability for VASS with resets is undecidable [AK76].
Let X := {d + 1, d + 2, . . . , d + n} be the counters for which we wish to implement resets. Let A′ := A ⊕ I_n, i.e., A extended with n new rows and columns from the identity matrix, and let B_x := σ_x(A′) where σ_x := (x i) is the transposition that swaps x and i. Let x, y ∈ X be such that x ≠ y.
Case A_{·,i} = 0. We have: B_x · e_x = (B_x)_{·,x} = A′_{·,i} = 0. Similarly, it can be shown that B_x · e_y = e_y. Hence, class C 0-implements resets.
Case A_{i,·} = 0. The following holds for every v ∈ Z^{d+n}: (B_x · v)(x) = Σ_k A′_{i,σ_x(k)} · v(k) = 0, since row i of A′ is zero. Similarly, (B_x · v)(y) = v(y). Hence, class C implements resets (see Appendix A).
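The first case can be checked concretely. The sketch below (with an assumed 2 × 2 matrix whose column i is zero, and the extension A′ := A ⊕ I_n) verifies that B_x resets counter x while leaving counter y untouched.

```python
# Sketch of the reset construction: A has zero column i, A_ext := A ⊕ I_n,
# and B_x conjugates A_ext by the transposition (x i). All shapes assumed.
def mat_vec(M, v):
    return [sum(M[r][c] * v[c] for c in range(len(v))) for r in range(len(M))]

d, n, i = 2, 2, 0
A = [[0, 1],
     [0, 1]]                      # column i = 0 is zero
dim = d + n
A_ext = [[(A[r][c] if r < d and c < d else int(r == c))
          for c in range(dim)] for r in range(dim)]

x, y = 2, 3                       # counters to reset: X = {2, 3}
sigma = list(range(dim)); sigma[x], sigma[i] = sigma[i], sigma[x]
B_x = [[A_ext[sigma[r]][sigma[c]] for c in range(dim)] for r in range(dim)]

e_x = [int(k == x) for k in range(dim)]
e_y = [int(k == y) for k in range(dim)]
print(mat_vec(B_x, e_x))          # zero vector: counter x is reset
print(mat_vec(B_x, e_y))          # e_y: counter y is untouched
```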

Parameterization by a system rather than a class
In this section, we consider the (integer or standard) reachability problem parameterized by a fixed affine VASS, rather than a matrix class. We show that, in contrast to the case of classes, this parameterization yields an arbitrary complexity (up to a polynomial). More formally, this section is devoted to establishing the following theorem:

Theorem 5.1. Let D ∈ {Z, N} and let L be a nontrivial decidable language. There exists an affine VASS V such that L and the following problem D-Reach_V are interreducible under polynomial-time many-one reductions: given configurations r(v) and r′(v′), does r(v) →*_D r′(v′) hold in V?

In order to show Theorem 5.1, we will prove the technical Lemma 5.2 stated further below. Let f be a suitable polynomial-time computable encoding of words into D, and let ϕ : D → {0, 1} be the predicate such that w ∈ L iff ϕ(f(w)) holds. Let V be the affine VASS given by Lemma 5.2 for ϕ, and let p, q be its associated control-states. To reduce L to D-Reach_V, we construct, on input w, the pair I_w := ⟨p(f(w), 0), q(0, 0)⟩. We have w ∈ L iff ϕ(f(w)) holds iff I_w ∈ D-Reach_V by Lemma 5.2.
The reduction from D-Reach_V to L is as follows. On input I = ⟨r(v), r′(v′)⟩:
• If r ≠ p or r′(v′) ≠ q(0), then we check whether r(v) →*_D r′(v′) in polynomial time and return a positive (resp. negative) instance of L if it holds (resp. does not hold). This is possible by Lemma 5.2 and by nontriviality of L.
• Otherwise, I = ⟨p(x, u), q(0, 0)⟩ for some value x and some vector u, so it suffices to return f^{−1}(x). Indeed, by Lemma 5.2, I ∈ D-Reach_V iff ϕ(x) holds iff f^{−1}(x) ∈ L.

In order to prove Lemma 5.2, we first establish the following proposition. Informally, it states that although an affine VASS cannot evaluate a polynomial P with the mere power of affine functions, it can evaluate P "weakly". Moreover, its structure is simple enough to answer any reachability query in polynomial time.

Proof. We adapt and simplify a construction of the authors which was given in [BHMR19] for other purposes. Let us first consider the case of a single monomial P(y_1, . . . , y_k) = c · y_1^{d_1} · · · y_k^{d_k} with c ≥ 1. We claim that the affine VASS V depicted in Figure 8 satisfies the claim.

Figure 8: Affine VASS evaluating c · y_1^{d_1} · · · y_k^{d_k}, where "x++", "x−−", "x += x′" and "x −= x′" stand respectively for "x ← x + 1", "x ← x − 1", "x ← x + x′" and "x ← x − x′".
By construction, counters from Y are never altered, merely copied onto Y′. Moreover, if counters from Y′ ∪ A initially hold zero, and if the loops are executed so that the counters from Y′ reach zero, then a_out contains P(y_1, . . . , y_k) when reaching control-state q, and every other auxiliary counter holds zero (due to the final resets).
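The value produced by the gadget can be mirrored outside the VASS formalism: the following sketch computes a monomial using repeated addition only, which is exactly the power of the copy/increment loops of Figure 8 (the flattening into Python loops and the function name are ours).

```python
# Weak evaluation of c * y1**d1 * ... * yk**dk by repeated addition only,
# mirroring the copy/decrement loops: each factor y is multiplied in by
# running an "acc += acc_copy" loop y times.
def weak_monomial(c, ys, ds):
    acc = c
    for y, d in zip(ys, ds):
        for _ in range(d):        # one multiplication per degree unit
            acc_new = 0
            for _ in range(y):    # the "a += a'" loop, executed y times
                acc_new += acc
            acc = acc_new
    return acc

print(weak_monomial(2, [3, 2], [1, 2]))   # 2 * 3^1 * 2^2 = 24
```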
It remains to argue that Item (b) holds. Let us consider a query "r(v) →*_D r′(v′)". Observe that, although V has nondeterminism in its loops and uses nonreversible operations (copies and resets), it is still "reversible" in the following sense: the inputs Y are never altered, and each counter from Y′ can only be altered at a single copy or via the two loops next to it. This provides enough information to answer the reachability query. Indeed, we can:
• Answer "false" if v and v′ disagree on Y;
• Pretend the accumulators do not exist and traverse V backward from r′(v′) to r by undoing the loops (either up only or down only), ensuring each counter from Y′ reaches its correct value; answer "false" if this is not possible;
• Traverse V forward from r(v) by running through the loops the number of times identified by the previous traversal; this now allows us to determine the values of the accumulators A;
• If the values of A are incorrect in r′(v′), or if we ever drop below zero in the case of D = N, then we answer "false", and otherwise "true".
Observe that there is no need to execute the loops one step at a time, e.g., if v(y_i) = 10 ≥ 3 = v′(y_{i,j}), then we can compute 10 − 3 = 7 directly rather than in seven steps.
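The closing remark in code form (a trivial sketch, with the hypothetical values from the example): the number of iterations of a down-loop is obtained arithmetically rather than step by step.

```python
# Compute loop iterations in one arithmetic step instead of one-by-one.
def down_loop_iterations(src, dst):
    # number of down-steps from value src to value dst, or None if impossible
    return src - dst if src >= dst else None

print(down_loop_iterations(10, 3))   # 7, as in the example v(y_i) = 10, target 3
```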
Let us now consider the general case. We can write the polynomial P as P = ε_1 Q_1 + · · · + ε_n Q_n, where each Q_i is a monomial with positive coefficient and each ε_i ∈ {+1, −1}. Thus, we can compose the above construction sequentially with n distinct sets of auxiliary counters (only Y is shared). Let a_{out,i} be the output counter for Q_i. We add a last transition that computes ε_1 · a_{out,1} + · · · + ε_n · a_{out,n} into an extra counter and resets a_{out,1}, . . . , a_{out,n}. This preserves all of the above properties. Indeed, each copy variable is still only affected locally by a single copy and the next two loops. Note that ordering the monomials only matters for D = N, as summing, e.g., −2 + 5 blocks, while 5 − 2 does not.
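The ordering caveat for D = N can be replayed directly: every prefix sum of the signed monomial values must stay nonnegative for the run not to block.

```python
# Under N-reachability, intermediate counter values must stay nonnegative,
# so the order in which signed monomial values are summed matters.
def executable_under_N(signed_values):
    total = 0
    for value in signed_values:
        total += value
        if total < 0:
            return False   # the run blocks at this point
    return True

print(executable_under_N([-2, 5]))   # False: -2 + 5 blocks
print(executable_under_N([5, -2]))   # True:  5 - 2 does not
```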
We may now conclude by proving Lemma 5.2, which we recall:

Lemma 5.2. Let D ∈ {Z, N} and let ϕ : D → {0, 1} be a nontrivial computable predicate. There exists an affine VASS V = (d, Q, T) and control-states p, q ∈ Q such that: (a) for every value x and every vector u, it is the case that p(x, u) →*_D q(0, 0) iff ϕ(x) holds; and (b) every other reachability query can be answered in polynomial time.

Proof. Since ϕ is decidable, by Matiyasevich's theorem³ [Mat71], there exists a polynomial P, with integer coefficients and k variables, such that for every x ∈ D: ϕ(x) holds ⇐⇒ ∃y ∈ D^{k−1} : P(x, y) = 0.
Let V′ be the affine VASS obtained from Proposition 5.3 for P, and let p′ and q′ be its associated control-states. Let us show that the affine VASS V depicted in Figure 9 satisfies the claim. It uses counters C := Y ∪ Y′, where Y := {y_1, . . . , y_k} are the k first counters of V′, corresponding to the variables of P, and where Y′ forms the other auxiliary counters of V′. We let m := |C| and sometimes refer to the counters C as {c_1, . . . , c_m}. For the rest of the proof, let us write u_X to denote the vector obtained by restricting u to counters X ⊆ C.
³ There are two variants of Matiyasevich's theorem, stated either over N or over Z. That they are interchangeable follows, e.g., from Lagrange's four-square theorem: any number from N can be written as a² + b² + c² + d² where a, b, c, d ∈ Z.

The purpose of the upper gadget is to satisfy Item (b) by simplifying other queries from p to q. More precisely, the upper gadget allows us to generate an arbitrary nonzero vector. This is achieved by (1) setting each counter c_i ∈ C to an arbitrary value; (2) nondeterministically setting a counter c_j to 1; (3) setting c_j to an arbitrary positive value; (4) nondeterministically keeping or flipping the sign of c_j (the latter only works if D = Z). These steps ensure that all counter values are possible, provided that some counter c_j ≠ 0.
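The four steps can be traced on an example. This is a sketch with assumed semantics: the real gadget makes nondeterministic transitions, which we flatten here into deterministic choices for a given nonzero target vector.

```python
# Sketch of the upper gadget over D = Z: any nonzero target is producible by
# picking a pivot j with target[j] != 0, setting it to 1, pumping it up to
# |target[j]|, and flipping its sign if needed. Names are hypothetical.
def generate(target):
    assert any(t != 0 for t in target), "only nonzero vectors are producible"
    j = next(i for i, t in enumerate(target) if t != 0)
    v = list(target)              # step (1): set all counters arbitrarily
    v[j] = 1                      # step (2): nondeterministic pivot set to 1
    v[j] = abs(target[j])         # step (3): repeated increments of the pivot
    if target[j] < 0:             # step (4): sign flip (only for D = Z)
        v[j] = -v[j]
    return v

print(generate([0, -3, 2]))      # reproduces the target [0, -3, 2]
```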
Let us first show Item (a). Suppose p(x, u) →*_D q(0, 0). Clearly, the target is not reached through the upper gadget. So, for some value x′ and some vectors y, y′, the run has the form p(x, u) →*_D p′(x, y, 0) →*_D q′(x′, y′, ·) →_D q(0, 0). Since V′ does not alter counters Y, we have (x, y) = (x′, y′), and consequently P(x′, y′) = P(x, y). Moreover, reaching q(0, 0) requires the auxiliary counters, and in particular the output counter holding P(x′, y′), to be zero. Hence, P(x, y) = 0, which implies that ϕ(x) holds, as desired. Conversely, if ϕ(x) holds, then P(x, y) = 0 holds for some vector y. Clearly, we can achieve p(x, u) →*_D p′(x, y, 0) →*_D q′(x, y, 0) →_D q(0, 0).

Let us now show Item (b). Recall that we want to show that queries not covered by Item (a) can be answered in polynomial time. These are queries of the form r(v) →*_D r′(v′) where ¬(r = p ∧ r′(v′) = q(0)), which amounts to either r ≠ p or r′(v′) ≠ q(0).
We assume w.l.o.g. that r can reach r′ in the underlying graph, as this can be tested in linear time. Let Q′ be the control-states of V′. We make a case distinction on r and r′, and explain each time how to answer the query.
• Case r = p, r′ ∈ Q′. Recall that V′ does not alter Y, that p can generate any values within Y \ {y_1}, and that the transition from p to p′ resets Y′. Hence, the query amounts to a reachability query from p′ within V′, which can be answered in polynomial time by Proposition 5.3.
• Case r = p, r′ = q. Since r = p, we must have v′ ≠ 0 by assumption. Thus, the answer is "true", since the upper gadget allows us to reach any nonzero vector in q.
• Case r, r′ ∈ Q′. Can be tested in polynomial time by Proposition 5.3.
• Case r ∈ Q′, r′ = q. Recall that V′ does not alter Y, and that the transition from q′ to q resets Y, but not Y′. Hence, the query amounts to a reachability query from r to q′ within V′, which can be answered in polynomial time by Proposition 5.3.
Upper part:
• Case r ∈ {p, a} and r′ = a. The answer is always "true".
• Case r ∈ {p, a} and r′ = b_i. Amounts to v′(c_i) ≥ 1.
• Case r = a, r′ = q. Amounts to v′(c_i) ≠ 0 for some i ∈ [m].
• Case r = r′ = b_i. Amounts to v′(c_i) ≥ v(c_i) and v′(c_j) = v(c_j) for all j ≠ i.
• Case r = b_i, r′ = q. Amounts to |v′(c_i)| ≥ |max(v(c_i), 0)| and v′(c_j) = v(c_j) for all j ≠ i.
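The last two cases can be written as small predicate sketches (0-indexed counters; the function names are ours and hypothetical):

```python
# Hypothetical checks for the last two upper-part cases.
def case_bi_to_bi(v, v2, i):
    # pump c_i up, leave every other counter untouched
    return v2[i] >= v[i] and all(v2[j] == v[j] for j in range(len(v)) if j != i)

def case_bi_to_q(v, v2, i):
    # c_i may be pumped and possibly sign-flipped at the transition to q (D = Z)
    return (abs(v2[i]) >= abs(max(v[i], 0))
            and all(v2[j] == v[j] for j in range(len(v)) if j != i))

print(case_bi_to_bi([1, 0], [4, 0], 0))    # True: c_0 pumped from 1 to 4
print(case_bi_to_q([3, 2], [-5, 2], 0))    # True: pumped to 5, sign flipped
```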

Conclusion and further work
Motivated by the use of relaxations to alleviate the tremendous complexity of reachability in VASS, we have studied the complexity of integer reachability in affine VASS. Namely, we have shown a trichotomy on the complexity of integer reachability for affine VASS: it is NP-complete for any class of reset matrices; PSPACE-complete for any class of pseudo-transfers matrices and any class of pseudo-copies matrices; and undecidable for any other class. Moreover, the notions and techniques introduced along the way allowed us to give a complexity dichotomy for (standard) reachability in affine VASS: it is decidable for any class of permutation matrices, and undecidable for any other class. This provides a complete general landscape of the complexity of reachability in affine VASS.
We further complemented this trichotomy and dichotomy by showing that, in contrast to the case of classes, the (integer or standard) reachability problem has an arbitrary complexity when it is parameterized by a fixed affine VASS. A further direction of study is the range of possible complexities of integer reachability relations for specific matrix monoids, which is entirely open.
Another direction lies in the related coverability problem, which asks whether p(u) →*_D q(v′) for some v′ ≥ v, rather than v′ = v. As shown in the setting of [HH14, CHH18], this problem is equivalent to the reachability problem for D = Z. Indeed, given an affine VASS V, it is possible to construct an affine VASS V′ such that coverability in V corresponds to reachability in V′ (*). This can be achieved by replacing each affine transformation Ax + b of V by the same transformation followed by transitions that may arbitrarily decrease each counter. Furthermore, note that classes are closed under this construction, which shows that integer coverability and integer reachability are equivalent w.r.t. classes. However, (*) does not hold for D = N. In this case, it is well-known, e.g., that the coverability problem is decidable for VASS with resets or transfers, while the reachability problem is not. Hence, a precise characterization of the complexity landscape remains unknown in this case.
More formally, we wish to obtain a matrix D with positive entries a² and b, and more precisely such that D_{i,j′} = a² and D_{i,k} = b for some column j′. Let B := A ⊕ I_d, C := σ(B) and D := B · C, where σ : [2d] → [2d] is a permutation satisfying σ(k) = k + d. We claim that D is as desired. First, observe that:

D_{i,k} = Σ_ℓ B_{i,ℓ} · B_{σ(ℓ),k+d} (by def. of D and σ(k) = k + d)
= B_{i,k} · B_{k+d,k+d} (since B_{σ(ℓ),k+d} ≠ 0 ⇐⇒ σ(ℓ) = k + d)
= b (since B_{i,k} = b and B_{k+d,k+d} = 1).
Thus, D has a positive entry on row i. It remains to show that D has another positive entry on row i. We make a case distinction on whether j = i.
Case j = i. Note that k ≠ i + d. Hence, we are done since an analogous computation yields the positive entry a² on row i, at a column distinct from k. Case j ≠ i. Note that k ≠ j. Hence, we are done since an analogous computation yields D_{i,j} = a² (using i ≠ j).
We are done proving the proposition for the case of rows. For the case of columns, we can instead assume that A^T ∈ C, i.e., the transpose of A belongs to C. Since D^T is as desired, we simply have to show that D^T ∈ C. This is the case since:

D^T = (B · C)^T = C^T · B^T = (P_σ · B^T · P_{σ^{-1}}) · B^T (since P_{π^{-1}} = P_π^T for every perm. π) ∈ C (since A^T ∈ C).
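The permutation-matrix identity used in the last step can be sanity-checked numerically (with an arbitrary permutation on four elements):

```python
# Check the identity P_{pi^{-1}} = (P_pi)^T for a sample permutation pi.
def perm_matrix(pi):
    n = len(pi)
    # P maps e_c to e_{pi(c)}: entry (r, c) is 1 iff r = pi(c)
    return [[int(pi[c] == r) for c in range(n)] for r in range(n)]

pi = [2, 0, 3, 1]
inv = [0] * len(pi)
for k, image in enumerate(pi):
    inv[image] = k                # inv is the inverse permutation of pi

P = perm_matrix(pi)
P_inv = perm_matrix(inv)
P_T = [[P[c][r] for c in range(len(pi))] for r in range(len(pi))]
print(P_inv == P_T)               # the identity holds
```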
A.4. Details for the proof of Theorem 4.5. We prove the missing details for both cases:

Case A_{·,i} = 0. We have B_x · e_y = e_y since: (B_x · e_y)(k) = (B_x)_{k,y} = A′_{σ_x(k),y} (since y ≠ x) = 1 ⇐⇒ k = y (by def. of A′ and σ_x).