Definability of linear equation systems over groups and rings

Motivated by the quest for a logic for PTIME and recent insights that the descriptive complexity of problems from linear algebra is a crucial aspect of this problem, we study the solvability of linear equation systems over finite groups and rings from the viewpoint of logical (inter-)definability. All problems that we consider are decidable in polynomial time, but not expressible in fixed-point logic with counting. They also provide natural candidates for a separation of polynomial time from rank logics, which extend fixed-point logics by operators for determining the rank of definable matrices and which are sufficient for solvability problems over fields. Based on the structure theory of finite rings, we establish logical reductions among various solvability problems. Our results indicate that all solvability problems for linear equation systems that separate fixed-point logic with counting from PTIME can be reduced to solvability over commutative rings. Moreover, we prove closure properties for classes of queries that reduce to solvability over rings, which provides normal forms for logics extended with solvability operators. We conclude by studying the extent to which fixed-point logic with counting can express problems in linear algebra over finite commutative rings, generalising known results on the logical definability of linear-algebraic problems over finite fields.


Introduction
The quest for a logic for PTIME [14,17] is one of the central open problems in both finite model theory and database theory. Specifically, it asks whether there is a logic in which a class of finite structures is expressible if, and only if, membership in the class is decidable in deterministic polynomial time.
Much of the research in this area has focused on the logic FPC, the extension of inflationary fixed-point logic by counting terms. In fact, FPC has been shown to capture PTIME on many natural classes of structures, including planar graphs and structures of bounded tree-width [16,17,19]. Recently, it was shown by Grohe [18] that FPC captures polynomial time on all classes of graphs with excluded minors, a result that generalises most of the previous capturing results. More recently, it has been shown that FPC can express important algorithmic techniques, such as the ellipsoid method for solving linear programs [1].
On the other side, already in 1992, Cai, Fürer and Immerman [9] constructed a graph query that can be decided in PTIME, but which is not definable in FPC. But while this CFI query, as it is now called, is very elegant and has led to new insights in many different areas, it can hardly be called a natural problem in polynomial time. Therefore, it was often remarked that possibly all natural polynomial-time properties of finite structures could be expressed in FPC. However, this hope was eventually refuted in a strong sense by Atserias, Bulatov and Dawar [4] who proved that the important problem of solvability of linear equation systems (over any finite Abelian group) is not definable in FPC and that, indeed, the CFI query reduces to this problem. This motivates the study of the relationship between finite model theory and linear algebra, and suggests that operators from linear algebra could be a source of new extensions to fixed-point logic, in an attempt to find a logical characterisation of PTIME. In [12], Dawar et al. pursued this direction of study by adding operators for expressing the rank of definable matrices over finite fields to first-order logic and fixed-point logic. They showed that fixed-point logic with rank operators (FPR) can define not only the solvability of linear equation systems over finite fields, but also the CFI query and essentially all other properties that were known to separate FPC from PTIME. However, although FPR is strictly more expressive than FPC, it seems rather unlikely that FPR suffices to capture PTIME on the class of all finite structures.
A natural class of problems that might witness such a separation arises from linear equation systems over finite domains other than fields. Indeed, the results of Atserias, Bulatov and Dawar [4] imply that FPC fails to express the solvability of linear equation systems over any finite ring. On the other hand, it is known that linear equation systems over finite rings can be solved in polynomial time [2], but it is unclear whether any notion of matrix rank is helpful for this purpose. We remark in this context that there are several non-equivalent notions of matrix rank over rings; for these, both computability in polynomial time and the relationship to linear equation systems remain unclear. Thus, rather than matrix rank, the solvability of linear equation systems could be used directly as a source of operators (in the form of generalised quantifiers) for extending fixed-point logics.
Instead of introducing a host of new logics, with operators for various solvability problems, we set out here to investigate whether these problems are inter-definable. In other words, are they reducible to each other within FPC? Clearly, if they are, then any logic that generalises FPC and can define one, can also define the others. We thus study relations between solvability problems over (finite) rings, fields and Abelian groups in the context of logical many-to-one and Turing reductions, i.e., interpretations and generalised quantifiers. In this way, we show that solvability both over Abelian groups and over arbitrary (possibly non-commutative) rings reduces to solvability over commutative rings. These results indicate that all solvability problems for linear equation systems that separate FPC from PTIME can be reduced to solvability over commutative rings. We also show that solvability over commutative rings reduces to solvability over local rings, which are the basic building blocks of finite commutative rings. Finally, in the other direction, we show that solvability over rings with a linear order, and solvability over local rings for which the maximal ideal is generated by k elements, reduce to solvability over cyclic groups. Further, we prove closure properties for classes of queries that reduce to solvability over rings, and establish normal forms for first-order logic extended with operators for solvability over finite fields.
While it is known that solvability of linear equation systems over finite domains is not expressible in fixed-point logic with counting, it has also been observed that the logic can define many other natural problems from linear algebra. For instance, it is known that over finite fields, the inverse of a non-singular matrix and the characteristic polynomial of a square matrix can be defined in FPC [8,12]. We conclude this paper by studying the extent to which these results can be generalised to finite commutative rings. Specifically, we use the structure theory of finite commutative rings to show that common basic problems in linear algebra over rings reduce to the respective problems over local rings. Furthermore, we show that over rings that split into a direct sum of k-generated local rings, the matrix inverse can be defined in FPC. Finally, we show that over the class of Galois rings, which are finite rings that generalise finite fields and rings of the form Z_{p^n}, there is a formula of FPC which defines the coefficients of the characteristic polynomial of any square matrix. In particular, this shows that the matrix determinant is definable in FPC over such rings.

Background on logic and algebra
Throughout this paper, all structures (and in particular, all algebraic structures such as groups, rings and fields) are assumed to be finite. Furthermore, it is assumed that all groups are Abelian, unless otherwise noted.
1.1. Logic and structures. The logics we consider in this paper include first-order logic (FO) and inflationary fixed-point logic (FP) as well as their extensions by counting terms, which we denote by FOC and FPC, respectively. We also consider the extension of first-order logic with operators for deterministic transitive closure, which we denote by DTC. For details see [13,14].
A vocabulary τ is a sequence of relation and constant symbols (R_1, ..., R_k, c_1, ..., c_ℓ) in which every R_i has an arity r_i ≥ 1. A τ-structure A = (D(A), R_1^A, ..., R_k^A, c_1^A, ..., c_ℓ^A) consists of a non-empty set D(A), called the domain of A, together with relations R_i^A ⊆ D(A)^{r_i} and constants c_j^A ∈ D(A) for each i ≤ k and j ≤ ℓ. Given a logic L and a vocabulary τ, we write L[τ] to denote the set of τ-formulas of L. A τ-formula φ(x̄) with |x̄| = k defines a k-ary query that takes any τ-structure A to the set φ(x̄)^A := {ā ∈ D(A)^k : A ⊨ φ(ā)}.

To evaluate formulas of counting logics like FOC and FPC we associate with each τ-structure A the two-sorted extension A⁺ of A, obtained by adding the standard model of arithmetic N = (N, +, ·) as a second sort. We assume that in such logics all variables (including the fixed-point variables) are typed, and we require that quantification over the second sort is bounded by numerical terms, in order to guarantee a polynomially bounded range for all quantifiers. To relate the original structure to the second sort we use counting terms of the form #x.φ(x), whose value is the number of different elements a ∈ D(A) such that A⁺ ⊨ φ(a). For details see [14,12].
Interpretations and logical reductions. Consider signatures σ and τ and a logic L. An m-ary L-interpretation of τ in σ is a sequence of σ-formulas of L consisting of: (i) a formula δ(x̄); (ii) a formula ε(x̄, ȳ); (iii) for each relation symbol R ∈ τ of arity k, a formula φ_R(x̄_1, ..., x̄_k); and (iv) for each constant symbol c ∈ τ, a formula γ_c(x̄), where each x̄, ȳ or x̄_i is an m-tuple of free variables. We call m the width of the interpretation. Such an interpretation I associates with a σ-structure A a τ-structure I(A) (where it exists): the domain of I(A) is the quotient of δ(x̄)^A by the equivalence relation defined by ε(x̄, ȳ), and the relations and constants of I(A) are those induced on this quotient by the formulas φ_R and γ_c.

Lindström quantifiers and extensions. Let σ = (R_1, ..., R_k) be a vocabulary where each relation symbol R_i has arity r_i, and consider a class K of σ-structures that is closed under isomorphism.
With K and m ≥ 1 we associate a Lindström quantifier Q^m_K whose type is the tuple (m; r_1, ..., r_k). For a logic L, we define the extension L(Q^m_K) by adding rules for constructing formulas of the kind Q_K x̄_δ x̄_ε x̄_1 ... x̄_k . (δ, ε, φ_1, ..., φ_k), where x̄_δ has length m, x̄_ε has length 2·m and each x̄_i has length m·r_i. To define the semantics of this new quantifier we associate with such a formula the interpretation I = (δ(x̄_δ), ε(x̄_ε), (φ_i(x̄_i))_{1≤i≤k}) of signature σ in τ of width m, and we let A ⊨ Q_K x̄_δ x̄_ε x̄_1 ... x̄_k . (δ, ε, φ_1, ..., φ_k) if I(A) is defined and I(A) ∈ K as a σ-structure (see [23,26]). Similarly we can consider the extension of L by a collection Q of Lindström quantifiers: the logic L(Q) is defined by adding a formula-formation rule for each Q ∈ Q, with the semantics defined for each quantifier as above. Finally, we write Q_K := {Q^m_K | m ≥ 1} to denote the vectorised sequence of Lindström quantifiers associated with K (see [11]).

Definition 1.1 (Logical reductions). Let C be a class of σ-structures and D a class of τ-structures, both closed under isomorphism.
• C is said to be L-many-to-one reducible to D (C ≤_L D) if there is an L-interpretation I of τ in σ such that for every σ-structure A it holds that A ∈ C if, and only if, I(A) ∈ D.
• C is said to be L-Turing reducible to D (C ≤^T_L D) if C is definable in L(Q_D), the extension of L by the vectorised sequence of Lindström quantifiers associated with D.
Note that, as in the case of usual many-to-one and Turing reductions, whenever a class C is L-many-to-one reducible to a class D, C is also L-Turing reducible to D.

1.2. Rings and systems of linear equations. We recall some definitions from commutative and linear algebra, assuming that the reader has knowledge of basic algebra and group theory (for further details see Atiyah et al. [3]). For m ≥ 2, we write Z_m to denote the ring of integers modulo m.
Commutative rings. Let (R, ·, +, 1, 0) be a commutative ring. An element x ∈ R is a unit if xy = yx = 1 for some y ∈ R, and we denote by R^× the set of all units. Moreover, we say that y divides x (written y | x) if x = yz for some z ∈ R. An element x ∈ R is nilpotent if x^n = 0 for some n ∈ N, and we call the least such n ∈ N the nilpotency of x. The element x ∈ R is idempotent if x² = x. Clearly 0, 1 ∈ R are idempotent elements, and we say that an idempotent x is non-trivial if x ∉ {0, 1}. Two elements x, y ∈ R are orthogonal if xy = 0.
We say that R is a principal ideal ring if every ideal of R is generated by a single element. An ideal m ⊆ R is called maximal if m ≠ R and there is no ideal m′ ≠ R with m ⊊ m′. A commutative ring R is local if it contains a unique maximal ideal m. Rings that are both local and principal are called chain rings. For example, all rings Z_{p^n} (p prime) are chain rings, and so too are all finite fields. More generally, a k-generated local ring is a local ring for which the maximal ideal is generated by k elements. See McDonald [24] for further background.
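As a quick sanity check of the chain-ring examples just given, the following Python sketch (our own, purely expository code) enumerates the ideals of Z_m by brute force and tests whether they form a chain under inclusion. It relies on the fact that every ideal of Z_m is principal, so listing the ideals generated by single elements is exhaustive.

```python
# Illustrative helper: enumerate the ideals of Z_m and test the chain-ring
# property (all ideals linearly ordered by inclusion). In Z_m every ideal is
# principal, so the enumeration below is complete.
from itertools import combinations

def ideals(m):
    """All ideals of Z_m, as frozensets of their elements."""
    return {frozenset((g * k) % m for k in range(m)) for g in range(m)}

def is_chain_ring(m):
    """True iff the ideals of Z_m are pairwise comparable by inclusion."""
    return all(a <= b or b <= a for a, b in combinations(ideals(m), 2))
```

For instance, Z_8 and Z_9 are chain rings, while Z_12 is not: its ideals (2) and (3) are incomparable.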
Remark 1.2. When we speak of a "commutative ring with a linear order", then in general the ordering does not respect the ring operations (cp. the notion of ordered rings from algebra).
Systems of linear equations. We consider systems of linear equations over groups and rings whose equations and variables are indexed by arbitrary sets, not necessarily ordered. In the following, if I, J and X are finite and non-empty sets then an I × J matrix over X is a function A : I × J → X. An I-vector over X is defined similarly as a function b : I → X.
A system of linear equations over a group G is a pair (A, b) with A : I × J → {0, 1} and b : I → G. By viewing G as a Z-module (i.e. by defining the natural multiplication between integers and group elements, with 1·g = g, (n+1)·g = n·g + g and (n−1)·g = n·g − g), we write (A, b) as a matrix equation A·x = b, where x is a J-vector of variables that range over G. The system (A, b) is said to be solvable if there exists a solution vector c : J → G such that A·c = b, where we define multiplication of unordered matrices and vectors in the usual way by (A·c)(i) = Σ_{j∈J} A(i, j)·c(j) for all i ∈ I. We represent linear equation systems over groups as finite structures over the vocabulary τ_les-g := (G, A, b, τ_group), where τ_group := (+, e) denotes the language of groups, G is a unary relation symbol (identifying the elements of the group) and A, b are two binary relation symbols.

Similarly, a system of linear equations over a commutative ring R is a pair (A, b) where A is an I × J matrix with entries in R and b is an I-vector over R. As before, we usually write (A, b) as a matrix equation A·x = b and say that (A, b) is solvable if there is a solution vector c : J → R such that A·c = b. In the case that the ring R is not commutative, we represent linear systems in the form A_ℓ·x + (x^t·A_r)^t = b, with two coefficient matrices A_ℓ and A_r accounting for left and right multiplication (see §2).

We consider three different ways to represent linear systems over rings as relational structures. For simplicity, we just explain the case of linear systems over commutative rings here; the encoding of linear systems over non-commutative rings is analogous. Firstly, we consider the case where the ring is part of the structure. Let τ_les-r := (R, A, b, τ_ring), where τ_ring = (+, ·, 1, 0) is the language of rings, R is a unary relation symbol (identifying the ring elements), and A and b are ternary and binary relation symbols, respectively. Then a finite τ_les-r-structure S describes the linear equation system (A^S, b^S) over the ring R^S = (R^S, +^S, ·^S, 1^S, 0^S).
Secondly, we consider a similar encoding but with the additional assumption that the elements of the ring (but not the equations or variables of the equation system) are linearly ordered. Such systems can be seen as finite structures over the vocabulary τ_les-r^≼ := (τ_les-r, ≼). Finally, we consider linear equation systems over a fixed ring, encoded in the vocabulary τ_les(R) := (A_r, b_r | r ∈ R), where for each r ∈ R the symbols A_r and b_r are binary and unary, respectively. A finite τ_les(R)-structure S describes the linear equation system (A, b) over R where A(i, j) = r if, and only if, (i, j) ∈ A_r^S, and similarly for b (assuming that the A_r^S form a partition of I × J and that the b_r^S form a partition of I). Finally, we say that two linear equation systems S and S′ are equivalent if either both systems are solvable or neither system is solvable.
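To make these definitions concrete, here is a small, purely illustrative Python sketch (all names are ours) that decides solvability of A·x = b over a cyclic group Z_n by brute force, exactly following the definition (A·c)(i) = Σ_{j∈J} A(i, j)·c(j). It is of course exponential and serves only as an executable restatement of the semantics.

```python
# Brute-force solvability of A·x = b over the cyclic group Z_n (written
# additively): try every candidate vector c : J -> Z_n and evaluate the
# matrix-vector product as in the definition above. Expository code only.
from itertools import product

def solvable_over_Zn(A, b, n):
    """A: list of 0/1 rows; b: list of group elements of Z_n."""
    J = len(A[0])
    for c in product(range(n), repeat=J):
        if all(sum(A[i][j] * c[j] for j in range(J)) % n == b[i]
               for i in range(len(A))):
            return True
    return False
```

For example, over Z_2 the single equation x + y = 1 is solvable, while the pair x + y = 1, x + y = 0 is not.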

Solvability problems over different algebraic domains
It follows from the work of Atserias, Bulatov and Dawar [4] that fixed-point logic with counting cannot express solvability of linear equation systems ('solvability problems') over any class of (finite) groups or rings. In this section we study solvability problems over such different algebraic domains in terms of logical reductions. Our main result here is to show that the solvability problem over groups (SlvAG) DTC-reduces to the corresponding problem over commutative rings (SlvCR) and that the solvability problem over commutative rings which are equipped with a linear order (SlvCR^≼) FP-reduces to the solvability problem over cyclic groups (SlvCycG). Note that over any non-Abelian group, the solvability problem is already NP-complete [15].

Figure 1: The reductions established among the solvability problems SlvCR^≼, SlvCR, SlvR, SlvLR, SlvLR_k, SlvF and SlvCycG.

Our methods can be further adapted to show that solvability over arbitrary (that is, not necessarily commutative) rings (SlvR) DTC-reduces to SlvCR. We then consider the solvability problem restricted to special classes of commutative rings: local rings (SlvLR) and k-generated local rings (SlvLR_k), the latter generalising solvability over finite fields (SlvF). The reductions that we establish are illustrated in Figure 1.
In the remainder of this section we describe three of the outlined reductions: from commutative rings equipped with a linear order to cyclic groups, from groups to commutative rings, and finally from general rings to commutative rings. To give the remaining reductions from commutative rings to local rings and from k-generated local rings to commutative linearly ordered rings we need to delve further into the theory of finite commutative rings, which is the subject of §3.
Let us start by considering the solvability problem over commutative rings that come with a linear order. We want to construct an FP-reduction that translates from linear systems over such rings to equivalent linear equation systems over cyclic groups. Hence, if the ring is linearly ordered (and in particular if the ring is fixed), this shows that, up to FP-definability, it suffices to analyse the solvability problem over cyclic groups.
The main idea of the reduction, which is given in full detail in the proof of Theorem 2.1, is as follows: for a ring R = (R, +, ·), we consider a decomposition of the additive group (R, +) into a direct sum of cyclic groups ⟨g_1⟩ ⊕ ··· ⊕ ⟨g_k⟩ for appropriate elements g_i ∈ R. Then every element r ∈ R can uniquely be identified with a k-tuple (r_1, ..., r_k) ∈ Z_{ℓ_1} × ··· × Z_{ℓ_k}, where ℓ_i denotes the order of g_i in (R, +); furthermore, the addition in R translates to component-wise addition (modulo ℓ_i) in Z_{ℓ_1} × ··· × Z_{ℓ_k}.
Having such a group decomposition at hand suggests treating linear equations component-wise, i.e. letting variables range over the cyclic summands ⟨g_i⟩ and splitting each equation into a set of equations accordingly. In general, however, in contrast to the ring addition, the ring multiplication will not be compatible with such a decomposition of the group (R, +). Moreover, an expression of the ring elements with respect to a decomposition of (R, +) has to be definable in fixed-point logic. To guarantee this last point, we make use of the given linear ordering.
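The decomposition can be checked on a small example. The following Python sketch (names and ring choice are ours) takes the ring R = Z_2 × Z_4 with componentwise operations: its additive group splits as ⟨g_1⟩ ⊕ ⟨g_2⟩ with generators of orders ℓ_1 = 2 | ℓ_2 = 4 | m = 4, and every ring element has a unique coordinate tuple (r_1, r_2).

```python
# Additive decomposition of the ring R = Z_2 x Z_4 (componentwise ops):
# generators g1 = (1,0) of order 2 and g2 = (0,1) of order 4 give every
# element a unique representation r1*g1 + r2*g2. Purely illustrative.

def add(x, y):                       # addition in Z_2 x Z_4
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 4)

def smul(n, g):                      # n-fold sum n*g in the additive group
    r = (0, 0)
    for _ in range(n):
        r = add(r, g)
    return r

g1, g2 = (1, 0), (0, 1)              # generators of orders 2 and 4
reps = {add(smul(r1, g1), smul(r2, g2)): (r1, r2)
        for r1 in range(2) for r2 in range(4)}
# unique representation: the 2*4 coordinate tuples hit all 8 ring elements
assert len(reps) == 8
```

The length check certifies that the map (r_1, r_2) ↦ r_1·g_1 + r_2·g_2 is a bijection, i.e. the representation is unique.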
Theorem 2.1. SlvCR^≼ ≤_FP SlvCycG.

Proof. Consider a system of linear equations (A, b) over a commutative ring R of characteristic m and let ≼ be a linear order on R. In the following we describe a mapping that translates the system (A, b) into a system of equations (A⋆, b⋆) over the cyclic group Z_m which is solvable if, and only if, (A, b) has a solution over R. Observe that the group Z_m can easily be interpreted in the ring R in fixed-point logic, for instance as the subgroup of (R, +) generated by the multiplicative identity. Indeed, for the purpose of the following construction we could also identify Z_m with the cyclic group generated by any element r ∈ R of maximal order in (R, +).
Let {g_1, ..., g_k} ⊆ R be a (minimal) generating set for the additive group (R, +) and let ℓ_i denote the order of g_i in (R, +). Moreover, let us choose the set of generators such that ℓ_1 | ℓ_2 | ··· | ℓ_k | m. From now on, we identify the group generated by g_i with the group Z_m/ℓ_iZ_m and thus have (R, +) ≅ (Z_m)^k/(ℓ_1Z_m × ··· × ℓ_kZ_m). In this way we obtain a unique representation of each element r ∈ R as r = (r_1, ..., r_k) with r_i ∈ Z_m/ℓ_iZ_m. Similarly, we can identify variables x ranging over R with tuples x = (x_1, ..., x_k) where x_i ranges over Z_m/ℓ_iZ_m.
To translate a linear equation over R into an equivalent set of equations over Z_m, the crucial step is to consider the multiplication of a coefficient r ∈ R with a variable x with respect to the chosen representation, i.e. the formal expression r·x = (r_1, ..., r_k)·(x_1, ..., x_k). We observe that the ring multiplication is uniquely determined by the products of all pairs of generators g_i·g_j, so we let g_i·g_j = Σ_{y=1}^{k} c^{ij}_y·g_y, where c^{ij}_y ∈ Z_m/ℓ_yZ_m for 1 ≤ y ≤ k. Now, let us reconsider the formal expression r·x from above: expanding both factors with respect to the generators yields r·x = Σ_{i,j} r_i x_j (g_i·g_j) = Σ_{y=1}^{k} (Σ_{i,j} r_i x_j c^{ij}_y)·g_y. Here, the coefficient of generator g_y in the last expression is an element of Z_m/ℓ_yZ_m, which in turn means that we have to reduce all summands r_i x_j c^{ij}_y modulo ℓ_y. To see that this transformation is sound, choose z ∈ Z_m arbitrarily such that ord(g_i g_j) | z − r_i x_j. Since ℓ_y | ord(g_i g_j)·c^{ij}_y for all 1 ≤ y ≤ k, we conclude that z·c^{ij}_y ≡ r_i x_j·c^{ij}_y (mod ℓ_y) for all 1 ≤ i, j, y ≤ k. Finally, since ℓ_y | m for all 1 ≤ y ≤ k, we can uniformly consider all terms as taking values in Z_m first, and then reduce the results modulo ℓ_y afterwards.
For notational convenience, set b^{r,y}_j := Σ_{i=1}^{k} r_i c^{ij}_y; then we can write r·x = (Σ_{j=1}^{k} b^{r,1}_j x_j)·g_1 + ··· + (Σ_{j=1}^{k} b^{r,k}_j x_j)·g_k. Note that the remaining multiplications between variables x_j and coefficients b^{r,y}_j are just multiplications in Z_m/ℓ_yZ_m. However, for our translation we face a problem, since we cannot express as a linear equation over Z_m that x_i ranges over Z_m/ℓ_iZ_m. To overcome this obstacle, let us first drop this requirement completely, i.e. let us consider the multiplication of R in the form given above lifted to the group (Z_m)^k. Furthermore, let π denote the natural group epimorphism which maps (Z_m)^k onto (R, +). We claim that for all r, x ∈ (Z_m)^k we have π(r·x) = π(r)·π(x). Together with the fact that π is a group homomorphism from (Z_m)^k to (R, +), this justifies doing all calculations in Z_m first, and reducing the result to (R, +) via π afterwards. To verify the claim, denote by π_y the natural group epimorphism from Z_m to Z_m/ℓ_yZ_m, and note that for r = r_1 g_1 + ··· + r_k g_k we have π(r) = π_1(r_1)g_1 + ··· + π_k(r_k)g_k. The claim then reduces to checking, for all r, x ∈ (Z_m)^k and all 1 ≤ y ≤ k, that the g_y-coefficients of π(r·x) and of π(r)·π(x) coincide in Z_m/ℓ_yZ_m, which follows from the congruences established above.

We are prepared to give the final reduction. In a first step we substitute each variable x by a tuple of variables (x_1, ..., x_k), where x_i takes values in Z_m. We then translate all terms r·x in the equations of the original system according to the above explanations and split each equation e = c into a set of k equations e_1 = c_1, ..., e_k = c_k according to the decomposition of (R, +). We finally have to address the issue that this set of new equations is not equivalent to the original equation e = c; indeed, we really want to introduce the set of equations π_1(e_1) = π_1(c_1), ..., π_k(e_k) = π_k(c_k).
However, this problem can be solved easily, as for all 1 ≤ i ≤ k the linear equation π_i(e_i) = π_i(c_i) over Z_m/ℓ_iZ_m is equivalent to the linear equation (m/ℓ_i)·e_i = (m/ℓ_i)·c_i over Z_m. Hence, altogether we obtain a system of linear equations (A⋆, b⋆) over Z_m which is solvable if, and only if, the original system (A, b) has a solution over R.
We proceed to explain that the mapping (A, b) ↦ (A⋆, b⋆) can be expressed in FP. Here, we crucially rely on the given order on R to fix a set of generators. More specifically, as we can compute a set of generators in time polynomial in |R|, it follows from the Immerman-Vardi theorem [21,28] that there is an FP-formula φ(x) such that φ(x)^R = {g_1, ..., g_k} generates (R, +) and g_1 ≼ ··· ≼ g_k. Having fixed a set of generators, it is obvious that the map ι : R → (Z_m)^k/(ℓ_1Z_m × ··· × ℓ_kZ_m), taking r ↦ (r_1, ..., r_k), is FP-definable. Furthermore, the map (r, y, j) ↦ b^{r,y}_j can easily be formalised in FP, since the coefficients are just obtained by performing a polynomially bounded number of ring operations. Splitting the original system of equations component-wise into k systems of linear equations, multiplying the i-th system with the coefficient m/ℓ_i and combining them again into a single system over Z_m is straightforward.
Finally, we note that a linear system over the ring Z_m can be reduced to an equivalent system over the group Z_m by rewriting terms a·x with a ∈ Z_m as x + x + ··· + x (a times).
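The component-wise treatment of products in the proof above can be checked on a concrete instance. The following sketch (our own code; the ring choice is ours) works in the field F_4 = Z_2[t]/(t² + t + 1), whose additive group is (Z_2)², with generators g_1 = 1 and g_2 = t. It compares direct multiplication against the expansion r·x = Σ_y (Σ_{i,j} r_i x_j c^{ij}_y)·g_y via structure constants.

```python
# Structure-constant multiplication in F_4 = Z_2[t]/(t^2 + t + 1), with
# additive generators g1 = 1 and g2 = t. Elements are coordinate pairs
# (a0, a1) standing for a0 + a1*t. Illustrative code only.

def mul(a, b):                        # direct multiplication in F_4
    a0, a1 = a
    b0, b1 = b
    return ((a0 * b0 + a1 * b1) % 2,  # uses t^2 = t + 1
            (a0 * b1 + a1 * b0 + a1 * b1) % 2)

c = {(0, 0): (1, 0), (0, 1): (0, 1),  # g1*g1 = g1,  g1*g2 = g2
     (1, 0): (0, 1), (1, 1): (1, 1)}  # g2*g1 = g2,  g2*g2 = g1 + g2

def mul_via_constants(r, x):          # component-wise product via c^{ij}_y
    return tuple(sum(r[i] * x[j] * c[(i, j)][y]
                     for i in range(2) for j in range(2)) % 2
                 for y in range(2))

for r in [(a, b) for a in range(2) for b in range(2)]:
    for x in [(a, b) for a in range(2) for b in range(2)]:
        assert mul(r, x) == mul_via_constants(r, x)
```

The exhaustive loop confirms that the two multiplications agree on all sixteen pairs, as the soundness argument in the proof predicts.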
Note that in the proof we crucially relied on the given linear order on the ring R to fix a set of generators of the Abelian group (R, +). So far, we have shown that the solvability problem over linearly ordered commutative rings can be reduced to the solvability problem over cyclic groups. This raises the question of whether a translation in the other direction is also possible; that is, whether we can reduce the solvability problem over groups to the solvability problem over commutative rings. Essentially, such a reduction requires a logical interpretation of a commutative ring in a group, which is what we describe in the proof of the following theorem.
Theorem 2.2. SlvAG ≤_DTC SlvCR.

Proof. Let (A, b) be a system of linear equations over a group (G, +_G, e), where A ∈ {0,1}^{I×J} and b ∈ G^I. For the reduction, we first construct a commutative ring φ(G) from G and then lift (A, b) to an equivalent system of equations (A′, b′) over φ(G).

We consider G as a Z-module in the usual way and write ·_Z for the multiplication of group elements by integers. Let d be the least common multiple of the orders of all group elements; then ord_G(g) | d for all g ∈ G, where ord_G(g) denotes the order of g. This allows us to obtain from ·_Z a well-defined multiplication of G by elements of Z_d which commutes with group addition. We write +_d and ·_d for addition and multiplication in Z_d, where [0]_d and [1]_d denote the additive and multiplicative identities, respectively. We now consider the set G × Z_d as a group with component-wise addition, and define a multiplication on G × Z_d by (g, a)·(h, b) := (a ·_Z h +_G b ·_Z g, a ·_d b). It is easily verified that this multiplication is associative, commutative and distributive over the addition. It follows that φ(G) := (G × Z_d, +, ·, (e, [1]_d), (e, [0]_d)) is a commutative ring.

We embed G into φ(G) via ι : g ↦ (g, [0]_d), and lift (A, b) to the system (A′, b′) over φ(G) with A′(i, j) := (e, [A(i, j)]_d) and b′(i) := ι(b(i)). We claim that (A, b) is solvable over G if, and only if, (A′, b′) is solvable over φ(G).

Proof of claim. In one direction, observe that a solution s to (A, b) gives the solution ι(s) to (A′, b′). Conversely, suppose s′ is a solution to (A′, b′), and write s′(j) = (s_g(j), s_n(j)) with group components s_g(j) ∈ G and number components s_n(j) ∈ Z_d. Since A′(i, j)·(g, a) = (A(i, j) ·_Z g, [A(i, j)] ·_d a), comparing the group components of A′·s′ = b′ yields Σ_{j∈J} A(i, j) ·_Z s_g(j) = b(i) for all i ∈ I. Hence, s_g gives a solution to (A, b), as required.
All that remains is to show that our reduction can be formalised as an interpretation in DTC. Essentially, this comes down to showing that the ring φ(G) can be interpreted in G by formulas of DTC. By elementary group theory, we know that ord(g) = d for elements g ∈ G of maximal order. It is not hard to see that the set of group elements of maximal order can be defined in DTC; for example, for any fixed g ∈ G the set of elements of the form n·g for n ≥ 0 is DTC-definable as a reachability query in the deterministic directed graph E = {(x, y) : y = x + g}. Hence, we can interpret Z_d in G, and since DTC expresses all LOGSPACE-computable queries on ordered domains (see e.g. [14]), the multiplication of φ(G) is also DTC-definable, which completes the proof.
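The ring φ(G) from the proof above can be built and checked mechanically. The sketch below (our own, expository code) constructs φ(G) for G = Z_2 × Z_2 (so d = 2) and verifies commutativity, associativity, distributivity and the multiplicative identity by brute force over all triples.

```python
# Build the ring phi(G) = G x Z_d for G = Z_2 x Z_2 (d = 2): addition is
# componentwise, and (g, a)*(h, b) = (a*h + b*g, a*b). We verify the ring
# axioms exhaustively. Illustrative code only.
from itertools import product

d = 2
G = list(product(range(2), repeat=2))            # the group Z_2 x Z_2

def gadd(g, h): return tuple((x + y) % 2 for x, y in zip(g, h))
def gsmul(n, g): return tuple((n * x) % 2 for x in g)

R = [(g, a) for g in G for a in range(d)]        # carrier of phi(G)

def radd(u, v): return (gadd(u[0], v[0]), (u[1] + v[1]) % d)
def rmul(u, v): return (gadd(gsmul(u[1], v[0]), gsmul(v[1], u[0])),
                        (u[1] * v[1]) % d)

one = ((0, 0), 1)                                # the element (e, [1]_d)
for u, v, w in product(R, repeat=3):
    assert rmul(u, v) == rmul(v, u)                               # commutative
    assert rmul(u, rmul(v, w)) == rmul(rmul(u, v), w)             # associative
    assert rmul(u, radd(v, w)) == radd(rmul(u, v), rmul(u, w))    # distributive
assert all(rmul(one, u) == u for u in R)
```

Note in particular that any two embedded group elements (g, [0]_d) multiply to the zero element, a fact reused later for the reduction over non-commutative rings.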
We conclude this section by discussing the solvability problem over general (i.e. not necessarily commutative) rings R. Over such rings, linear equation systems have a representation of the form A_ℓ·x + (x^t·A_r)^t = b, where A_ℓ and A_r are two coefficient matrices over R. This representation takes into account the difference between left and right multiplication of variables with coefficients from the ring.
First of all, if the ring comes with a linear ordering, then it is easy to adapt the proof of Theorem 2.1 to the case of non-commutative rings. Hence, in this case we obtain again an FP-reduction to the solvability problem over cyclic groups. Moreover, in what follows we establish a DTC-reduction from the solvability problem over general rings to the solvability problem over commutative rings. These results indicate that, from the viewpoint of FP-definability, the solvability problem does not become harder when considered over arbitrary (i.e. possibly non-commutative) rings.
As a technical preparation, we first give a first-order interpretation that transforms a linear equation system over R into an equivalent system with the following property: the linear equation system is solvable if, and only if, the solution space contains a numerical solution, i.e. a solution over Z.
Lemma 2.3. There is an FO-interpretation I of τ_les-r in τ_les-r such that for every linear equation system S over a (possibly non-commutative) ring R, the system S⋆ = I(S) is an equivalent linear equation system over the Z-module (R, +), i.e. a system in which the variables take values in Z.

Proof. Let A_ℓ ∈ R^{I×J}, A_r ∈ R^{J×I}, and b ∈ R^I. By duplicating each variable we can assume that for every j ∈ J we have A_ℓ(i, j) = 0 for all i ∈ I, or A_r(j, i) = 0 for all i ∈ I; i.e. we assume that each variable occurs only with left-hand or only with right-hand coefficients. For S⋆, we introduce for each variable x_j (j ∈ J) and each element s ∈ R a new variable x^s_j, i.e. the index set for the variables of S⋆ is J × R. Finally, we replace all terms of the form r·x_j by Σ_{s∈R} rs·x^s_j, and similarly, terms of the form x_j·r by Σ_{s∈R} sr·x^s_j. If we let the new variables x^s_j take values in Z, then we obtain a new linear equation system of the desired form S⋆ : A⋆ ·_Z x⋆ = b⋆ over the Z-module (R, +). It is easy to see that this transformation can be formalised by an FO-interpretation I.
Finally we observe that the newly constructed linear equation system S⋆ is equivalent to the original system S. To see this, assume that x ∈ R^J is a solution of the original system. By setting x^s_j = 1 if x_j = s and x^s_j = 0 otherwise, we obtain a solution x⋆ ∈ Z^{J×R} of the system S⋆. For the other direction, assume that x⋆ ∈ Z^{J×R} is a solution of S⋆. Then we set x_j := Σ_{s∈R} s·x^s_j to get a solution x for the system S.

By Lemma 2.3, we can restrict attention to linear equation systems (A, b) over the Z-module (R, +) in which the variables take values in Z. However, since Z is an infinite domain, we let d = max{ord(r) : r ∈ R} denote the maximal order of elements in the group (R, +). Then we can treat (A, b) as an equivalent linear equation system over the Z_d-module (R, +).
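The variable-duplication step of the lemma can be checked by brute force on small instances. The sketch below (our own code; the choice of Z_4 is ours) replaces each variable x_j by variables x^s_j, one per ring element s, turning each term r·x_j into Σ_s (r·s)·x^s_j, and confirms that solvability is preserved.

```python
# Variable duplication over the ring Z_m: a term r*x_j becomes the sum over
# s of (r*s)*x^s_j, with the new variables taking integer values. Since the
# coefficients live in Z_m, integer values only matter modulo m here, so a
# brute-force search over Z_m decides solvability. Illustrative code only.
from itertools import product

def solvable(A, b, m):                 # A*x = b over Z_m, brute force
    J = len(A[0])
    return any(all(sum(A[i][j] * x[j] for j in range(J)) % m == b[i]
                   for i in range(len(A)))
               for x in product(range(m), repeat=J))

def duplicate(A, m):
    """Coefficient matrix over index set J x Z_m: entry for (j, s) is A[i][j]*s."""
    return [[(A[i][j] * s) % m for j in range(len(A[0])) for s in range(m)]
            for i in range(len(A))]

m = 4
for A, b in [([[2]], [2]), ([[2]], [1]), ([[2, 3]], [1])]:
    assert solvable(A, b, m) == solvable(duplicate(A, m), b, m)
```

For instance, 2x = 2 is solvable over Z_4 while 2x = 1 is not, and the duplicated systems agree in both cases.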
At this point, we reuse our construction from Theorem 2.2 to obtain a linear system (A⋆, b⋆) over the commutative ring φ((R, +)) which is solvable if, and only if, (A, b) is. For the non-trivial direction, suppose s is a solution to (A⋆, b⋆) and decompose s = s_g + s_n into group elements and number elements, as explained in the proof of Theorem 2.2. Recalling that r_1·r_2 = 0 for all group elements r_1, r_2 of φ((R, +)), the group components of s contribute nothing to the value of any equation, so there is a solution s_n to (A⋆, b⋆) that consists only of number elements, as claimed. Thus we obtain: SlvR ≤_DTC SlvCR.

The structure of finite commutative rings
In this section we study structural properties of (finite) commutative rings and present the remaining reductions for solvability outlined in §2: from commutative rings to local rings, and from k-generated local rings to commutative rings with a linear order. Recall that a commutative ring R is local if it contains a unique maximal ideal m. The importance of the notion of local rings comes from the fact that they are the basic building blocks of finite commutative rings. We start by summarising some of their useful properties.
Proposition 3.1 (Properties of (finite) local rings). Let R be a finite commutative ring.
• If R is local then its maximal ideal m consists precisely of the non-units of R.
• R is local if, and only if, R contains only trivial idempotent elements (i.e. 0 and 1).
• If x ∈ R is idempotent, then R decomposes as R = xR ⊕ (1 − x)R.
• If R is local then its cardinality (and in particular its characteristic) is a prime power.
Proof. The first claim follows directly from the uniqueness of the maximal ideal m. For the second part, assume R is local but contains a non-trivial idempotent element x, i.e. x(1 − x) = 0 but x ≠ 0, 1. In this case x and (1 − x) are two non-units distinct from 0, hence x, (1 − x) ∈ m. But then x + (1 − x) = 1 ∈ m, which yields a contradiction. On the other hand, if R only contains trivial idempotents, then we claim that every non-unit in R is nilpotent: assume that x ≠ 0 is a non-unit which is not nilpotent; then x^{n+km} = x^n for some m, n ≥ 1 and all k ≥ 1, because R is finite. In particular, x^{nm} is idempotent, hence x^{nm} ∈ {0, 1}. Since x is a non-unit we cannot have x^{nm} = 1, so x^{nm} = 0, which contradicts our assumption that x is not nilpotent. Hence x is a non-unit if, and only if, x is nilpotent. Knowing this, it is easy to verify that sums of non-units are again non-units, which implies that the set of non-units forms a unique maximal ideal in R.
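These dichotomies can be observed concretely on small rings; the following sketch (illustrative, not part of the proof) checks them for the local ring Z_8 and the non-local ring Z_6.

```python
from math import gcd

def units(q):
    return {r for r in range(q) if gcd(r, q) == 1}

def nilpotents(q):
    return {r for r in range(q) if any(pow(r, n, q) == 0 for n in range(1, q + 1))}

def idempotents(q):
    return {r for r in range(q) if (r * r) % q == r}

# In the local ring Z_8, non-units and nilpotents coincide ...
assert set(range(8)) - units(8) == nilpotents(8) == {0, 2, 4, 6}
# ... and all idempotents are trivial.
assert idempotents(8) == {0, 1}
# Z_6 has the non-trivial idempotents 3 and 4, so it is not local.
assert idempotents(6) == {0, 1, 3, 4}
```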
For the third part, assume x ∈ R is idempotent. Then every r ∈ R decomposes as r = xr + (1 − x)r, and the decomposition is direct: if xr = (1 − x)s for some r, s ∈ R, then xr = x²r = x(1 − x)s = 0. Hence R = xR ⊕ (1 − x)R.
Finally, let R be local and suppose |R| = p^k · n where p ∤ n. We want to show that n = 1. Otherwise, I_p = {r ∈ R : p^k r = 0} and I_n = {r ∈ R : nr = 0} would be two proper distinct ideals. To see this, let x, y ∈ Z with xp^k + yn = 1. We first show that I_p ∩ I_n = {0}: assume p^k r = 0 = nr for some r ∈ R; then xp^k r + ynr = 0 and hence r = 0. Furthermore, we show that R = I_p + I_n: for each r ∈ R we have nr ∈ I_p and p^k r ∈ I_n, and so ynr + xp^k r = r ∈ (I_p + I_n). This shows that R does not contain a unique maximal ideal, and so R is not local.
By this proposition we know that finite commutative rings can be decomposed into local summands that are principal ideals generated by pairwise orthogonal idempotent elements. Indeed, this decomposition is unique (for more details, see e.g. [6]). We next show that the ring decomposition R = ⊕_{e∈B(R)} e·R is FO-definable. As a first step, we note that B(R) (the base of R) is FO-definable over R.

Lemma 3.3. There is a formula of FO(τ_ring) that defines the set B(R) in every finite commutative ring R.

Proof. We claim that B(R) consists precisely of those non-trivial idempotent elements of R which cannot be expressed as the sum of two orthogonal non-trivial idempotent elements. To establish this claim, consider an element e ∈ B(R) and suppose that e = x + y where x and y are orthogonal non-trivial idempotents. It follows that e is different from both x and y: if e = x, say, then y = e − x = 0, and similarly if e = y. Now ex = xe = x(x + y) = x² + xy = x and, similarly, ey = y. Since both ex and ey are idempotent elements in eR, it follows that ex, ey ∈ {0, e}, since eR is local with identity e and contains no non-trivial idempotents. But by the above we know that ex = x ≠ e and ey = y ≠ e, so x = y = 0. This contradicts the fact that x and y are non-trivial, so the original assumption must be false.
Conversely, suppose x ∈ R is a non-trivial idempotent element that cannot be written as the sum of two orthogonal non-trivial idempotents. Writing B(R) = {e_1, …, e_m}, we get that x = x·1 = x(e_1 + ··· + e_m) = xe_1 + ··· + xe_m. Each xe_i is an idempotent element of e_i R, and since e_i R is local, xe_i must be trivial, i.e. xe_i ∈ {0, e_i}. Hence, there are distinct f_1, …, f_n ∈ B(R), with n ≤ m, such that x = f_1 + ··· + f_n. But since x cannot be written as a sum of two (or more) orthogonal non-trivial idempotents, it follows that n = 1 and x ∈ B(R), as claimed. Now it is straightforward to write down a first-order formula that identifies exactly the non-trivial idempotent elements that are not expressible as the sum of two non-trivial orthogonal idempotents. Moreover, if R was already local then trivially B(R) = {1}. To test for locality, it suffices by Proposition 3.1 to check whether all idempotent elements in R are trivial, and this can easily be expressed in first-order logic.
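The first-order characterisation of B(R) just established can be tested directly on small rings. A brute-force sketch (an illustration, with Z_6 and Z_30 as example rings):

```python
def idempotents(q):
    return [r for r in range(q) if (r * r) % q == r]

def base(q):
    """B(Z_q): the non-trivial idempotents that are not a sum of two
    orthogonal non-trivial idempotents (the characterisation above)."""
    idem = [e for e in idempotents(q) if e not in (0, 1)]
    return {e for e in idem
            if not any((x + y) % q == e and (x * y) % q == 0
                       for x in idem for y in idem)}

# Z_6 ≅ Z_2 ⊕ Z_3: base elements 3 and 4, orthogonal (3*4 = 12 ≡ 0)
# and summing to the identity (3 + 4 = 7 ≡ 1)
assert base(6) == {3, 4}
# Z_30 ≅ Z_2 ⊕ Z_3 ⊕ Z_5 has three base elements
assert base(30) == {6, 10, 15}
```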
The next step is to show that the canonical mapping R → ⊕_{e∈B(R)} e·R can be defined in FO. To this end, recall from Proposition 3.1 that for every e ∈ B(R) (indeed, for any idempotent element e ∈ R), we can decompose the ring R as R = e·R ⊕ (1 − e)·R. This fact allows us to define, for all base elements e ∈ B(R), the projection of elements r ∈ R onto the summand e·R in first-order logic, without having to keep track of all local summands simultaneously.
Lemma 3.4. There is a formula ψ(x, y, z) ∈ FO(τ ring ) such that for all rings R, e ∈ B(R) and r, s ∈ R, it holds that (R, e, r, s) |= ψ if, and only if, s is the projection of r onto e · R.
Proof. The formula can simply be given as ψ(x, y, z) := (z = x · y), since by Proposition 3.1 the projection of r onto the summand e·R is precisely e·r.

It follows that any relation over R can be decomposed in first-order logic according to the decomposition of R into local summands. In particular, a linear equation system (A | b) over R is solvable if, and only if, each of the projected linear equation systems (A_e | b_e) is solvable over eR. Hence, solvability of linear equation systems over commutative rings reduces to solvability over local rings.

We now want to exploit the algebraic structure of finite local rings further. In §2 we proved that solvability over rings with a linear ordering can be reduced in fixed-point logic to solvability over cyclic groups. This naturally raises the question: which classes of rings can be linearly ordered in fixed-point logic? By Lemma 3.4, we know that for this question it suffices to focus on local rings, which have a well-studied structure. The most basic local rings are the rings Z_{p^n}, and the natural ordering of such rings can easily be defined in FP (since the additive group of Z_{p^n} is cyclic). Moreover, the same holds for finite fields, as they have a cyclic multiplicative group [20].
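The decomposition of solvability along local summands can be checked by brute force on a small example. The sketch below (illustrative, not part of the proof) uses Z_6 with base idempotents 3 and 4, so that a system is solvable over Z_6 exactly when both projected systems are solvable.

```python
from itertools import product

Q = 6  # R = Z_6 = 3R ⊕ 4R, with base idempotents e = 3 and e = 4

def solvable(A, b, mod):
    n = len(A[0])
    return any(all(sum(r * x for r, x in zip(row, xs)) % mod == t % mod
                   for row, t in zip(A, b))
               for xs in product(range(mod), repeat=n))

def project(A, b, e):
    """Project the system onto the local summand eR (by Lemma 3.4 the
    projection of r onto eR is simply e*r)."""
    return ([[(e * r) % Q for r in row] for row in A],
            [(e * t) % Q for t in b])

def solvable_via_summands(A, b):
    # (A | b) is solvable over Z_6 iff both projected systems are solvable
    return all(solvable(*project(A, b, e), Q) for e in (3, 4))

for A, b in [([[2, 3]], [5]), ([[2]], [3]), ([[4, 3]], [1])]:
    assert solvable(A, b, Q) == solvable_via_summands(A, b)
```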
In the following lemma, we generalise these insights in a strong sense: for any fixed k ≥ 1 we can define an ordering on the class of all local rings whose maximal ideal is generated by at most k elements. We refer to such rings as k-generated local rings. Note that for k = 1 we obtain the notion of chain rings, which include all finite fields and the rings Z_{p^n}. For increasing values of k the structure of k-generated local rings becomes more and more involved. For instance, the ring R_k = Z_2[X_1, …, X_k]/(X_1², …, X_k²) is a k-generated local ring which is not (k − 1)-generated.

Lemma 3.6 (Ordering k-generated local rings). There is an FP-formula φ(x, z_1, …, z_k; v, w) such that for every k-generated local ring R there are parameters α, π_1, …, π_k ∈ R for which the binary relation defined by φ(α, π_1, …, π_k; v, w) is a linear order on R.
Proof. First of all, there are FP-formulas φ_u(x), φ_m(x), φ_g(x_1, …, x_k) that define in any k-generated local ring R the set of units, the maximal ideal m (i.e. the set of non-units), and the property of being a set of size ≤ k that generates the ideal m, respectively. More precisely, for all (π_1, …, π_k) ∈ φ_g^R we have that m = π_1 R + ··· + π_k R. In particular, there is a first-order interpretation of the field F := R/m in R.
The idea of the following proof is to represent the elements of R as polynomial expressions of a certain kind. Let q := |F| and define Γ(R) := {r ∈ R : r^q = r}. It can be seen that Γ(R) \ {0} forms a multiplicative group, which is known as the Teichmüller coordinate set [6]. Now, the map ι : Γ(R) → F defined by r ↦ r + m is a bijection. Indeed, for two distinct elements r, s ∈ Γ(R) we have r − s ∉ m. Otherwise, we would have r − s = x for some x ∈ m and thus r = (s + x)^q = s + Σ_{i=1}^{q} (q choose i) x^i s^{q−i}, using s^q = s. Since q ∈ m and r − s = x, we obtain that x = xy for some y ∈ m. Hence x(1 − y) = 0, and since (1 − y) ∈ R^× (in a local ring the sum of a unit and a non-unit is always a unit), this means x = 0.
As explained above, we can define in FP an order on F by fixing a generator α ∈ F^× of the cyclic group F^×. Combining this order with ι^{−1}, we obtain an FP-definable order on Γ(R). The importance of Γ(R) now lies in the fact that every ring element can be expressed as a polynomial expression over a set of k generators of the maximal ideal m with coefficients lying in Γ(R). To be precise, let π_1, …, π_k ∈ m be a set of generators for m, i.e. m = π_1 R + ··· + π_k R, where each π_i has nilpotency n_i for 1 ≤ i ≤ k. We claim that we can express every r ∈ R as

(P) r = Σ_{(i_1,…,i_k) ≤_lex (n_1,…,n_k)} a_{i_1⋯i_k} · π_1^{i_1} ⋯ π_k^{i_k}, with all coefficients a_{i_1⋯i_k} ∈ Γ(R).

To see this, consider the following recursive algorithm:
• If r ∈ R^×, then for a unique a ∈ Γ(R) we have r ∈ a + m, so r = a + (π_1 r_1 + ··· + π_k r_k) for some r_1, …, r_k ∈ R, and we continue with r_1, …, r_k.
• Else r ∈ m, and r = π_1 r_1 + ··· + π_k r_k for some r_1, …, r_k ∈ R; continue with r_1, …, r_k.
Observe that for all pairs a, b ∈ Γ(R) there exist elements c ∈ Γ(R), r ∈ m such that a·π_1^{i_1} ⋯ π_k^{i_k} + b·π_1^{i_1} ⋯ π_k^{i_k} = c·π_1^{i_1} ⋯ π_k^{i_k} + r·π_1^{i_1} ⋯ π_k^{i_k}, which allows us to collect equal monomials. Since π_1^{i_1} ⋯ π_k^{i_k} = 0 if i_ℓ ≥ n_ℓ for some 1 ≤ ℓ ≤ k, the process is guaranteed to stop, and the claim follows.
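The Teichmüller coordinate set and the expansion (P) can be computed explicitly for a small chain ring. The following sketch (an illustration, not part of the proof) uses the chain ring Z_9 with maximal ideal 3·Z_9, residue field F = Z_3, and the single generator π = 3 of nilpotency 2.

```python
Q, q, pi = 9, 3, 3  # R = Z_9, |F| = q = 3, maximal ideal m = pi*R

Gamma = [r for r in range(Q) if pow(r, q, Q) == r]  # Teichmüller set

# Gamma contains exactly one representative of every residue class mod m,
# i.e. the map r -> r + m is a bijection Gamma -> F
assert sorted(Gamma) == [0, 1, 8]
assert sorted(g % q for g in Gamma) == [0, 1, 2]

def expand(r):
    """The expansion r = a_0 + a_1*pi of form (P), with a_0, a_1 in Gamma."""
    a0 = next(g for g in Gamma if (r - g) % q == 0)
    a1 = next(g for g in Gamma if (r - a0 - pi * g) % Q == 0)
    return a0, a1

for r in range(Q):
    a0, a1 = expand(r)
    assert (a0 + pi * a1) % Q == r
```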
Note that this procedure neither yields a polynomial-time algorithm nor a unique expression: for instance, the choice of the elements r_1, …, r_k ∈ R (in both recursion steps) need not be unique. However, knowing only the existence of an expression of this kind, we can proceed as follows. For any sequence of exponents (ℓ_1, …, ℓ_k) ≤_lex (n_1, …, n_k), define the ideal R[ℓ_1, …, ℓ_k] ⊴ R as the set of all elements having an expression of the form (P) in which a_{i_1⋯i_k} = 0 for all (i_1, …, i_k) ≤_lex (ℓ_1, …, ℓ_k).
It is clear that we can define the ideal R[ℓ_1, …, ℓ_k] in FP. Having this, we can use the following recursive procedure to define a unique expression of the form (P) for every r ∈ R:
• Choose the minimal (i_1, …, i_k) ≤_lex (n_1, …, n_k) such that r = a·π_1^{i_1} ⋯ π_k^{i_k} + s for some (minimal) a ∈ Γ(R) and s ∈ R[i_1, …, i_k]; continue the process with s.
Finally, the lexicographical ordering induced by the ordering on [n_1] × ··· × [n_k] and the ordering on Γ(R) yields an FP-definable order on R (where we use the parameters to fix a generator of F^× and a set of generators of m).

Solvability problems under logical reductions
In the previous two sections we studied reductions between solvability problems over different algebraic domains. Here we change our perspective and investigate classes of queries that are reducible to the solvability problem over a fixed commutative ring. Our motivation for this work was to study extensions of first-order logic with generalised quantifiers which express solvability problems over rings. In particular, the aim was to establish various normal forms for such logics. However, rather than defining a host of new logics in full detail, we state our results in this section in terms of closure properties of classes of structures that are themselves defined by reductions to solvability problems. We explain the connection between the specific closure properties and the corresponding logical normal forms below.
To state our main results formally, let R be a fixed commutative ring. We write Slv(R) to denote the solvability problem over R, as a class of τ_les(R)-structures. Let Σ_FO^qf(R) and Σ_FO(R) denote the classes of queries that are reducible to Slv(R) under quantifier-free and first-order many-to-one reductions, respectively. We show that Σ_FO^qf(R) and Σ_FO(R) are closed under first-order operations for any commutative ring R of prime-power characteristic, i.e. char(R) = p^k for a prime p and an integer k ≥ 1. In particular, Σ_FO^qf(R) contains every FO-definable query in this case. Furthermore, we prove that if R has prime characteristic, i.e. char(R) = p for a prime p, then Σ_FO^qf(R) and Σ_FO(R) are closed under oracle queries. Thus, if we denote by Σ_FO^T(R) the class of queries reducible to Slv(R) by first-order Turing reductions, then for all commutative rings R of prime characteristic the three solvability reduction classes coincide, i.e. we have Σ_FO^qf(R) = Σ_FO(R) = Σ_FO^T(R). To relate these results to logical normal forms, we let D = Slv(R) and write FOS_R := FO(Q_D) to denote first-order logic extended by Lindström quantifiers expressing solvability over R. Then the closure of Σ_FO(R) under first-order operations amounts to showing that the fragment of FOS_R consisting of formulas without nested solvability quantifiers has a normal form in which a single solvability quantifier is applied to a first-order formula. Moreover, for the case when R has prime characteristic, the closure of Σ_FO^qf(R) = Σ_FO(R) under first-order oracle queries amounts to showing that the nesting of solvability quantifiers can be reduced to a single quantifier. It follows that FOS_R has a strong normal form: one application of a solvability quantifier to a quantifier-free formula suffices.
It remains an interesting open question whether the closure properties we establish here can be extended to the case of general commutative rings, i.e. to rings R whose characteristic is divisible by two different primes, e.g. to R = Z_6.
Throughout this section, the reader should keep in mind that for logical reductions to the solvability problem over a fixed ring R, we can safely drop all formulas ε(x̄, ȳ) in interpretations I which define the equality-congruence on the domain of the interpreted structure: indeed, duplicating equations or variables does not affect the solvability of a given linear equation system.

4.1.
Closure under first-order operations. Let R be a fixed commutative ring of characteristic m. In this section we prove the closure of Σ_FO^qf(R) and Σ_FO(R) under first-order operations for the case that m is a prime power. To this end, we need to establish a couple of technical results. Of particular importance is the following key lemma, which gives a simple normal form for linear equation systems: up to quantifier-free reductions, we can restrict ourselves to linear systems over rings Z_m in which the constant term b_i of every linear equation (A · x)(i) = b_i is b_i = 1 ∈ Z_m and all coefficients are 0 or 1, i.e. A(i, j) ∈ {0, 1} for all i ∈ I, j ∈ J. The proof of the lemma crucially relies on the fact that the ring R is fixed. Recall that m denotes the characteristic of the ring R.

Lemma 4.1. There is a quantifier-free interpretation I such that for every linear equation system S over R, I(S) is a linear equation system over Z_m with S ∈ Slv(R) if, and only if, I(S) ∈ Slv(Z_m), in which every constant term is 1 and every coefficient is 0 or 1.

Proof. We describe I as the composition of three quantifier-free transformations: the first one maps a system (A, b) over R to an equivalent system (B, c) over Z_m, where m is the characteristic of R. Secondly, (B, c) is mapped to an equivalent system (C, 1) over Z_m. Finally, we transform (C, 1) into an equivalent system (D, 1) over Z_m, where D is a {0, 1}-matrix. The first transformation is obtained by adapting the proof of Theorem 2.1; it can be seen that first-order quantifiers and fixed-point operators are not needed if R is fixed.
For the second transformation, suppose that B is an I × J matrix and c a vector indexed by I. We define a new linear equation system T which has, in addition to all the variables that occur in S, a new variable v_e for every e ∈ I and a new variable w_r for every r ∈ Z_m. For every element r ∈ Z_m, we include in T the equation (1 − r)·w_1 + w_r = 1. It can be seen that this subsystem of equations has the unique solution w_r = r for all r ∈ Z_m. Finally, for every equation Σ_{j∈J} B(e, j) · x_j = c(e) in S (indexed by e ∈ I), we include in T the two equations v_e + Σ_{j∈J} B(e, j) · x_j = 1 and v_e + w_{c(e)} = 1.
Finally, we translate the system T : Cx = 1 over Z_m into an equivalent system over Z_m in which all coefficients are either 0 or 1. For each variable v in T, the new system has the m distinct variables v_0, …, v_{m−1} together with equations v_i = v_j for i ≠ j. By replacing every term r·v by Σ_{1≤i≤r} v_i we obtain an equivalent system. However, in order to establish our original claim, we need to express the auxiliary equations of the form v_i = v_j as a set of equations whose constant terms are equal to 1. To achieve this, we introduce a new variable u_{ij} together with the two equations v_i + u_{ij} = 1 and v_j + u_{ij} = 1, which together are equivalent to v_i = v_j. The resulting system is equivalent to T and has the desired form. Since the ring R, and hence also the ring Z_m, is fixed, it can be seen that all the reductions outlined above can be formalised as quantifier-free reductions.
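The second transformation (forcing all constant terms to 1) can be tested by brute force on small systems. The following sketch implements that gadget over the illustrative choice Z_3 and checks that solvability is preserved; it is a sanity check, not part of the proof.

```python
from itertools import product

M = 3  # the fixed ring Z_m of the reduction; m = 3 as a small example

def solvable(A, b):
    n = len(A[0])
    return any(all(sum(r * x for r, x in zip(row, xs)) % M == t % M
                   for row, t in zip(A, b))
               for xs in product(range(M), repeat=n))

def all_constants_one(A, b):
    """Gadget from the proof: original variables x_1..x_n, then variables
    w_0..w_{m-1} (forced to w_r = r) and one v_e per equation; every
    constant term of the new system equals 1."""
    n = len(A[0])
    rows, rhs = [], []
    # (1 - r)*w_1 + w_r = 1 for every r in Z_m: forces w_r = r
    for r in range(M):
        row = [0] * (n + M + len(A))
        row[n + 1] = (1 - r) % M                      # coefficient of w_1
        row[n + r] = (row[n + r] + 1) % M             # ... plus w_r
        rows.append(row); rhs.append(1)
    # each equation sum_j B(e,j) x_j = c(e) becomes
    #   v_e + sum_j B(e,j) x_j = 1   and   v_e + w_{c(e)} = 1
    for e, (row, c) in enumerate(zip(A, b)):
        r1 = list(row) + [0] * (M + len(A)); r1[n + M + e] = 1
        r2 = [0] * (n + M + len(A)); r2[n + c % M] = 1; r2[n + M + e] = 1
        rows.append(r1); rhs.append(1)
        rows.append(r2); rhs.append(1)
    return rows, rhs

for A, b in [([[1, 2]], [2]), ([[0]], [2]), ([[1], [2]], [1, 1])]:
    assert solvable(A, b) == solvable(*all_constants_one(A, b))
```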
It is a basic fact from linear algebra that solvability of a linear equation system A · x = b is invariant under elementary row and column operations on the augmented coefficient matrix (A | b). Over fields, this insight forms the basis for the method of Gaussian elimination, which transforms the augmented coefficient matrix of a linear equation system into row echelon form. Over the integers, a generalisation of this method can be used to transform the coefficient matrix into Hermite normal form. The following lemma shows that a similar normal form exists over chain rings. The proof uses the fact that in a chain ring R, divisibility is a total preorder.

Lemma 4.3 (Hermite normal form). For every k × ℓ-matrix A over a chain ring R, there exist an invertible k × k-matrix S and an ℓ × ℓ-permutation matrix T such that S · A · T = (a_{ij}) is upper triangular, where a_{11} | a_{22} | a_{33} | ··· | a_{kk} and for all 1 ≤ i, j ≤ k it holds that a_{ii} | a_{ij}.
Proof. If R is not a field, fix an element π ∈ R \ R^× which generates the maximal ideal m = πR in R. Then every element of R can be represented in the form π^n·u where n ≥ 0 and u ∈ R^×. It follows that for all elements r, s ∈ R we have r | s or s | r. Now, consider the following procedure: in the remaining k × ℓ-matrix A′, choose an entry r ∈ R which is minimal with respect to divisibility and use row and column permutations to obtain an equivalent k × ℓ-matrix A′ which has r in the upper left corner, i.e. A′(1, 1) = r. Then use the first row to eliminate all other entries in the first column. After this transformation, the element r still divides every entry in the resulting matrix, since all of its entries are linear combinations of entries of A′. Proceed in the same way with the (k − 1) × (ℓ − 1)-submatrix that results from deleting the first row and column of A′.

Now we are ready to prove the closure of Σ_FO^qf(R) and Σ_FO(R) under first-order operations for the case that R has prime-power characteristic, i.e. for the case that Z_m is a chain ring. First of all, it can be seen that conjunction and universal quantification can be handled easily by combining independent subsystems into a single system. Assume, for example, that we are given two quantifier-free interpretations I = (δ_I(x̄), φ_A(x̄, ȳ)) and J = (δ_J(x̄), ψ_A(x̄, ȳ)) of τ_les(Z_2)-structures in τ-structures, where we assume the normal form established in Lemma 4.1. In order to obtain a quantifier-free interpretation I ∩ J of τ_les(Z_2)-structures in τ-structures such that (I ∩ J)(A) ∈ Slv(Z_2) if, and only if, I(A) ∈ Slv(Z_2) and J(A) ∈ Slv(Z_2), we can take I ∩ J to be the interpretation defining the disjoint union of the two systems I(A) and J(A), which is solvable if, and only if, both subsystems are. To see that the resulting system is equivalent, recall that the duplication of equations and variables does not affect the solvability of linear equation systems.
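The triangularisation procedure from the proof of Lemma 4.3 can be sketched concretely. The code below (an illustration under the assumption R = Z_8, a chain ring with maximal ideal 2·Z_8) picks a divisibility-minimal pivot, swaps it to the top left, eliminates below it, and recurses; divisibility in Z_8 is the total preorder given by the 2-adic valuation.

```python
M, NIL = 8, 3  # the chain ring Z_8 = Z_{2^3}

def val(r):
    """2-adic valuation of r in Z_8 (val(0) = 3); r | s iff val(r) <= val(s)."""
    r %= M
    if r == 0:
        return NIL
    v = 0
    while r % 2 == 0:
        r //= 2; v += 1
    return v

def triangularise(A):
    A = [row[:] for row in A]
    k, l = len(A), len(A[0])
    for t in range(min(k, l)):
        # choose an entry of minimal valuation (minimal w.r.t. divisibility)
        i, j = min(((i, j) for i in range(t, k) for j in range(t, l)),
                   key=lambda p: val(A[p[0]][p[1]]))
        A[t], A[i] = A[i], A[t]                        # row swap
        for row in A:                                  # column swap
            row[t], row[j] = row[j], row[t]
        piv = A[t][t]
        for i in range(t + 1, k):                      # eliminate below pivot
            u = next(u for u in range(M) if (u * piv) % M == A[i][t])
            A[i] = [(a - u * b) % M for a, b in zip(A[i], A[t])]
    return A

B = triangularise([[4, 2, 6], [2, 4, 0]])
assert B[1][0] == 0                                    # upper triangular
assert val(B[0][0]) <= val(B[1][1])                    # a11 | a22
assert all(val(B[i][i]) <= val(B[i][j]) for i in range(2) for j in range(3))
```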
Thus, the only non-trivial part of the proof is to establish closure under complementation. To do this, we describe an appropriate reduction that translates from non-solvability to solvability of linear systems. For this step we make use of the fact that R has prime-power characteristic (which was not necessary for obtaining the closure under conjunction and universal quantification).
First of all, we consider the case where R has characteristic m = p for a prime p. In this case we know that Σ_FO^qf(R) = Σ_FO^qf(Z_p) and Σ_FO(R) = Σ_FO(Z_p) by Corollary 4.2, where Z_p is a finite field. Over fields, the method of Gaussian elimination guarantees that a linear equation system (A, b) is not solvable if, and only if, for some vector x we have x · (A | b) = (0, …, 0, 1). In other words, the vector b is not in the column span of A if, and only if, the vector (0, …, 0, 1) is in the row span of (A | b). This shows that (A, b) is not solvable if, and only if, the system ((A | b)^T, (0, …, 0, 1)^T) is solvable. In other words, over fields this reasoning translates the question of non-solvability into the question of solvability. In the proof of the next lemma, we generalise this approach to chain rings, which enables us to translate from non-solvability to solvability over all rings of prime-power characteristic.

Lemma 4.4. Let R be a chain ring with maximal ideal m = πR, where π has nilpotency n. Then a linear equation system (A, b) over R is solvable if, and only if, the system ((A | b)^T, (0, …, 0, π^{n−1})^T) is not solvable.

Proof. If x is a solution of (A, b) and y · (A | b) = (0, …, 0, π^{n−1}), then π^{n−1} = y · b = (y · A) · x = 0, a contradiction; hence at most one of the two systems is solvable. For the converse, assume that ((A | b)^T, (0, …, 0, π^{n−1})^T) is not solvable, i.e. that (0, …, 0, π^{n−1}) does not lie in the row span of (A | b), and transform the coefficient matrix according to Lemma 4.3. We claim that for every row index i, the diagonal entry a_{ii} in the transformed coefficient matrix A′ divides the i-th entry of the transformed target vector b′. Towards a contradiction, suppose that there is some a_{ii} not dividing b′_i. Then a_{ii} is a non-unit in R and can be written as a_{ii} = uπ^t for some unit u and t ≥ 1.
By Lemma 4.3, a_{ii} divides every entry in the i-th row of A′, and thus we can multiply the i-th row of the augmented matrix (A′ | b′) by an appropriate non-unit to obtain a vector of the form (0, …, 0, π^{n−1}), contradicting our assumption. Hence, in the transformed augmented coefficient matrix (A′ | b′), the diagonal entries divide all entries in the same row, which implies solvability of (A, b), since every linear equation of the form ax + c = d with a, c, d ∈ R and a | c, a | d is clearly solvable.
Along with our previous discussion, Lemma 4.4 now yields the closure of Σ_FO^qf(R) and Σ_FO(R) under complementation for all rings R of prime-power characteristic. As noted above, it is an interesting open question whether the reduction classes are also closed under complementation when R does not have prime-power characteristic. The prototypical example for studying this question is R = Z_6.
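The field case of this translation — non-solvability of (A, b) over Z_p corresponds exactly to solvability of the transposed system with target (0, …, 0, 1) — can be verified by brute force. A sketch over the illustrative prime field Z_3:

```python
from itertools import product

P = 3  # the prime field Z_3; an illustrative choice

def solvable(A, b):
    n = len(A[0])
    return any(all(sum(r * x for r, x in zip(row, xs)) % P == t % P
                   for row, t in zip(A, b))
               for xs in product(range(P), repeat=n))

def non_solvability_as_solvability(A, b):
    """(A | b) is unsolvable iff (0,...,0,1) lies in the row span of (A | b),
    i.e. iff the transposed system ((A | b)^T, (0,...,0,1)^T) is solvable."""
    aug = [row + [t] for row, t in zip(A, b)]
    transposed = [[aug[i][j] for i in range(len(aug))]
                  for j in range(len(aug[0]))]
    target = [0] * (len(aug[0]) - 1) + [1]
    return transposed, target

for A, b in [([[1, 0], [0, 1]], [1, 2]),   # solvable
             ([[1, 1], [1, 1]], [0, 1]),   # unsolvable
             ([[1, 2], [2, 4]], [1, 1])]:  # unsolvable
    # exactly one of the two systems is solvable
    assert solvable(A, b) != solvable(*non_solvability_as_solvability(A, b))
```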

4.2.
Solvability over rings of prime characteristic. From now on we assume that the commutative ring R is of prime characteristic p. We prove that in this case the three reduction classes Σ_FO^qf(R), Σ_FO(R) and Σ_FO^T(R) coincide. By definition, we have Σ_FO^qf(R) ⊆ Σ_FO(R) ⊆ Σ_FO^T(R). Also, since we know that solvability over R can be reduced to solvability over Z_p (Corollary 4.2), it suffices for our proof to show that Σ_FO^qf(Z_p) ⊇ Σ_FO^T(Z_p). More specifically, it can be seen that proving closure under oracle queries amounts to showing that the nesting of solvability queries can be reduced to the solvability of a single linear equation system. In order to formalise this, let I(x̄, ȳ) be a quantifier-free interpretation of τ_les(Z_p) in σ with parameters x̄, ȳ of length k and ℓ, respectively. We extend the signature σ to σ_X := σ ∪ {X} and restrict our attention to those σ_X-structures A (with domain A) where the relation symbol X is interpreted as X^A = {(ā, b̄) ∈ A^k × A^ℓ : I(ā, b̄)(A) ∈ Slv(Z_p)}. Then it remains to show that for any quantifier-free interpretation O of τ_les(Z_p) in σ_X, there exists a quantifier-free interpretation of τ_les(Z_p) in σ that describes linear equation systems equivalent to O.
Hereafter, for any σ_X-structure A and tuples ā and b̄, we will refer to O(A) as an "outer" linear equation system and to I(ā, b̄)(A) as an "inner" linear equation system.
For the new linear equation system K(A) we take {v_{ā,b̄} : (ā, b̄) ∈ A^k × A^ℓ} as the set of variables, and we include the equations Σ_{b̄∈A^ℓ} v_{ā,b̄} = 1 for all ā ∈ A^k. In what follows, our aim is to extend K(A) by additional equations so that for every solution to K(A), there are values v_{b̄} ∈ Z_p such that for all ā ∈ A^k the variables v_{ā,b̄} correctly reflect the solvability of the inner systems I(ā, b̄)(A). For (†) we follow a similar approach. For fixed tuples ā, b̄ and c̄, the condition on the right-hand side of (†) is a simple Boolean combination of solvability queries. Hence, by Theorem 4.5, this combination can be expressed by a single linear equation system. Again we embed the respective linear equation system as a subsystem in K(A), where we add to each of its equations the term (1 + v_{ā,b̄} − v_{c̄,b̄}). With the same reasoning as above, we conclude that this imposes the constraint (†) on the variables v_{ā,b̄} and v_{c̄,b̄}, which concludes the proof.
Corollary 4.7. If R has prime characteristic, then Σ_FO^qf(R) = Σ_FO(R) = Σ_FO^T(R).

As explained above, our results have some important consequences. For a prime p, let us denote by FOS_p first-order logic extended by quantifiers deciding solvability over Z_p. Corresponding extensions of first-order logic by rank operators over prime fields (FOR_p) were studied by Dawar et al. [12]. Their results imply that FOS_p = FOR_p over ordered structures, and that both logics have a strong normal form over ordered structures, i.e. that every formula is equivalent to a formula with only one application of a solvability or rank operator, respectively [27]. Corollary 4.7 allows us to generalise the latter result for FOS_p to the class of all finite structures.
Corollary 4.8. Every formula φ ∈ FOS p is equivalent to a formula with a single application of a solvability quantifier to a quantifier-free formula.

Linear algebra in fixed-point logic with counting
In this section we present further applications of the techniques developed in our study of the solvability problem. Specifically, we turn our attention to the elements of linear algebra over finite commutative rings that can be expressed in fixed-point logic with counting. While the solvability problem over commutative rings is not definable in fixed-point logic with counting, we show here that various other natural matrix problems, such as defining matrix inverses and determinants, can be formalised in FPC over certain classes of rings.
To this end, we first apply the definable structure theory that we established in §3 to show that many linear-algebraic problems over commutative rings can be reduced to the respective problems over local rings. Next, we consider basic linear-algebraic queries over local rings, such as matrix multiplication and matrix inversion, for the case where the local ring comes with a built-in linear ordering (that is, where the matrix is encoded as a finite τ_mat^≤-structure). By Lemma 3.6 it follows that all of these definability results hold for matrices over k-generated local rings (for a fixed k) as well, since we can define in FPC a linear order on such rings. In particular, all of our results on ordered structures apply to matrices over chain rings (that is, 1-generated, or principal ideal, local rings), which include the class of so-called Galois rings. In the final part of this section we study matrices over unordered Galois rings specifically. Here our main result is that there are FPC-formulas that define the characteristic polynomial and the determinant of square matrices over Galois rings, which extends the results of Dawar et al. [12] concerning definability of the determinant of matrices over finite fields.

5.1.
Basic linear-algebraic operations over commutative rings. To begin with, we need to fix our encoding of matrices over a commutative ring R in terms of a finite relational structure. For this we proceed as we did for linear equation systems in §1. As before, for non-empty sets I and J, an I × J-matrix A over a commutative ring R is a mapping A : I × J → R. We set τ_mat := (R, M, τ_ring), where R is a unary and M a ternary relation symbol, and we understand each τ_mat-structure A as encoding a matrix over a commutative ring (given that A satisfies some basic properties which make this identification well-defined, e.g. that R^A forms a commutative ring and that the projection of M^A onto the third component is a subset of R^A). If we want to encode pairs of matrices over the same commutative ring R as a finite structure, we use the extended vocabulary τ_mpair := (N, τ_mat), where N is an additional ternary relation symbol. Moreover, we consider a representation of matrices as structures in the vocabulary τ_mat^≤ := (τ_mat, ≤), where it is assumed that ≤^A is a linear ordering on the set of ring elements R^A. Similarly, structures of the vocabulary τ_mpair^≤ := (τ_mpair, ≤) are used to encode pairs of matrices over the ring R^A on which ≤^A is a linear ordering.
We first recall from §3 that every (finite) commutative ring R can be expressed as a direct sum of local rings, i.e. R = ⊕_{e∈B(R)} e·R, where all principal ideals eR are local rings. It follows that every I × J-matrix A over R can be uniquely decomposed into a set of matrices {A_e : e ∈ B(R)}, where A = Σ_{e∈B(R)} A_e and where A_e is an I × J-matrix over the local ring e·R. Moreover, following Lemma 3.4, this set of matrices can be defined in first-order logic. Stated more precisely, Lemma 3.4 implies the existence of a parameterised first-order interpretation Θ(x) of τ_mat in τ_mat such that for any τ_mat-structure A, which encodes an I × J-matrix A over the commutative ring R, and all e ∈ B(R), it holds that Θ[e](A) encodes the projection of A onto the local ring e·R, i.e. the I × J-matrix A_e over the local ring e·R. Since the ring base B(R) of R is also definable in first-order logic (Lemma 3.3), this allows us to reduce most natural problems in linear algebra from arbitrary commutative rings to local rings. In particular, we are interested in the following linear-algebraic queries.
Proposition 5.1 (Local ring reductions). For each of the following problems over commutative rings, there is a first-order Turing reduction to the respective query over local rings: (1) Deciding solvability of linear equation systems (cf. §3).
(3) Deciding whether a square matrix is invertible (and defining its inverse in this case).
(4) Defining the characteristic polynomial and the determinant of a square matrix.
It was shown in [8] that over finite fields, the class of invertible matrices can be defined in FPC and that there is an FPC-definable interpretation that associates each invertible matrix with its inverse. In what follows we show that the same holds when we consider matrices over ordered local rings. Our proof follows the approach taken by Blass et al. in [8]. As a first step, we show that exponentiation of matrices can be defined in FPC, even if the exponent is given as a binary string of polynomial length. We then show that for each set I, there is a formula of FPC that defines the number of invertible I × I matrices over a local ring R. Finally, combining the two results with the fact that the set of all invertible I × I matrices over R forms a group under multiplication, we conclude that the inverse to any invertible I × I matrix over R can be defined in FPC.
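The three-step plan just outlined can be previewed concretely: once ℓ = |GL_I(R)| is known, invertibility is the test A^ℓ = 1, and the inverse of an invertible A is A^{ℓ−1}. A brute-force sketch over the illustrative local ring Z_4 (where |GL_2(Z_4)| = 96, see Lemma 5.5):

```python
Q = 4  # the local ring Z_4; an illustrative choice

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) % Q
             for col in zip(*B)] for row in A]

def mat_pow(A, n):
    R = [[int(i == j) for j in range(len(A))] for i in range(len(A))]
    while n:
        if n & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        n >>= 1
    return R

GL_ORDER = 96  # |GL_2(Z_4)| = |m|^(n^2) * (4-1)(4-2), cf. Lemma 5.5

A = [[1, 2], [3, 3]]  # det = 3 - 6 = -3 = 1 in Z_4, a unit, so A is invertible
assert mat_pow(A, GL_ORDER) == [[1, 0], [0, 1]]       # A^l = 1 iff invertible
inv = mat_pow(A, GL_ORDER - 1)                        # the inverse is A^(l-1)
assert mat_mul(A, inv) == [[1, 0], [0, 1]]
# a singular matrix fails the test A^l = 1
assert mat_pow([[1, 1], [1, 1]], GL_ORDER) != [[1, 0], [0, 1]]
```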
For the first step, to show that powers of matrices over ordered local rings can be interpreted in FPC, we need the following lemma on matrix multiplication. Recall that addition and multiplication of unordered matrices is defined in exactly the same way as for ordered matrices, except that we now have to ensure that the index sets of the two matrices, and not just their dimension, are matching.

Lemma 5.2 (Matrix multiplication).
There is an FPC-interpretation Θ of τ_mat in τ_mpair such that for all τ_mpair-structures P which encode an I × K-matrix A and a K × J-matrix B over a commutative ring R with a linear order, Θ(P) encodes the I × J-matrix A · B.
Proof. We reuse an idea of Blass et al. [8]. For i ∈ I and j ∈ J we have

(A · B)(i, j) = Σ_{k∈K} A(i, k) · B(k, j) = Σ_{r∈R} |{k ∈ K : A(i, k) · B(k, j) = r}| · r.

In other words, instead of individually summing up all products A(i, k) · B(k, j) for indices k ∈ K, which would require a linear order on K, we use the counting mechanism of FPC to obtain the multiplicities with which a specific ring element r ∈ R appears in this sum. The entries of A · B can then easily be obtained by taking the sum over all ring elements r ∈ R weighted by the respective multiplicities (the right-hand expression). Since the ring R is ordered, this expression can easily be defined in FPC.
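The counting trick can be checked against ordinary matrix multiplication; a sketch over the illustrative ordered ring Z_6:

```python
from collections import Counter

Q = 6  # an ordered commutative ring, here Z_6 as a small stand-in

def mul_direct(A, B):
    K = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(K)) % Q
             for j in range(len(B[0]))] for i in range(len(A))]

def mul_by_counting(A, B):
    """(A*B)(i,j) = sum over r in R of |{k : A(i,k)*B(k,j) = r}| * r:
    only the multiplicities require counting over the (unordered) index
    set K; the outer sum runs over the ordered ring elements."""
    K = len(B)
    out = []
    for i in range(len(A)):
        row = []
        for j in range(len(B[0])):
            counts = Counter((A[i][k] * B[k][j]) % Q for k in range(K))
            row.append(sum(counts[r] * r for r in range(Q)) % Q)
        out.append(row)
    return out

A, B = [[1, 2, 3], [4, 5, 0]], [[1, 1], [0, 2], [5, 3]]
assert mul_by_counting(A, B) == mul_direct(A, B) == [[4, 2], [4, 2]]
```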
In [8], Blass et al. showed that exponentiation of square matrices over a finite field can be expressed in FPC (using the fact that matrix multiplication is in FPC). Moreover, by using the method of repeated squaring, they showed that exponentiation can be expressed even when the value of the exponent is given as a binary string of length polynomial in the size of the input structure. We identify the set (η(υ), t)^B := {i < t^B : B ⊨ η[i]} with the integer m = Σ_{i∈(η,t)^B} 2^i, i.e. the pair (η(υ), t) defines in the structure B the t^B-bit binary representation of m. Given that the product of two matrices can be defined in FPC (Lemma 5.2), it is not hard to see that the repeated-squaring approach outlined in [8] for matrix exponentiation over finite fields also works for matrices over commutative rings with a linear order. We state this more formally as follows.
Lemma 5.4 (Matrix exponentiation). For each pair (η(υ), t), where η(υ) is an FPC-formula with a free number variable υ and t is a number term, there is an FPC-interpretation Θ_{(η(υ), t)} of τ_mat in τ_mat such that for all I × I-matrices A over a local ring R with a linear order, the structure Θ_{(η(υ), t)}(A) encodes the I × I-matrix A^n, where n = (η(υ), t)^A.

Recall that the set of invertible I × I-matrices over a commutative ring R forms a group under matrix multiplication, which is known as the general linear group (written GL_I(R)). Hence, if we let ℓ := |GL_I(R)|, then an I × I-matrix A over R is invertible if, and only if, A^ℓ = 1. The following lemma shows that the cardinality of the general linear group for local rings R can be defined in FPC.
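The repeated-squaring scheme behind Lemma 5.4 can be illustrated with a Python sketch, under the assumption that the exponent is handed over only as its set of binary digit positions (mirroring the encoding (η(υ), t)^A, where n = Σ_{i} 2^i over the defined bit positions):

```python
def mat_mult(A, B, mod):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % mod for j in range(n)]
            for i in range(n)]

def mat_pow_bits(A, bits, mod):
    """Compute A^n over Z_mod, where n = sum of 2^i for i in the non-empty set `bits`."""
    n = len(A)
    result = [[int(i == j) for j in range(n)] for i in range(n)]  # identity matrix
    square = A  # invariant: square == A^(2^i) at the start of iteration i
    for i in range(max(bits) + 1):
        if i in bits:
            result = mat_mult(result, square, mod)
        square = mat_mult(square, square, mod)
    return result
```

Only one squaring per bit position is needed, so the number of ring multiplications is polynomial in the length of the binary representation of the exponent.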
Lemma 5.5 (Cardinality of GL_I(R)). There is an FPC[τ_mat]-formula η(υ), with a free number variable υ, and a number term t (without free variables) in FPC[τ_mat], such that for any I × I-matrix A over a local ring R, it holds that (η(υ), t)^A = |GL_I(R)|.
Proof. Let R be a local ring with maximal ideal m and let I be a finite set of size n > 0. We denote the field R/m by F and its cardinality by q := |F|. Then we have

|GL_I(F)| = Π_{i=0}^{n−1} (q^n − q^i)   and   |GL_I(R)| = |m|^{n^2} · |GL_I(F)|.

Indeed, the first equation is easy to verify: an I × I-matrix over F is invertible if, and only if, its columns are linearly independent. Each set of i linearly independent columns generates q^i different vectors in F^I. Thus, if we have already fixed i linearly independent columns, there remain (q^n − q^i) vectors in F^I which can be used to extend this set to a set of i + 1 linearly independent columns. This counting argument shows that |GL_I(F)| = (q^n − 1) · (q^n − q) · (q^n − q^2) ⋯ (q^n − q^{n−1}) = Π_{i=0}^{n−1} (q^n − q^i).
For the second equation, we let π denote the canonical group epimorphism π : GL_I(R) → GL_I(F). It is easy to see that |ker(π)| = |m|^{n^2}. The homomorphism theorem thus implies that |GL_I(R)| = |m|^{n^2} · |GL_I(F)|, which yields the claim since q · |m| = |R|.
By the above claim we have an exact expression for the cardinality |GL_I(R)| for any non-empty set I and any finite local ring R. It is straightforward to verify that the binary encoding of this expression can be formalised by a formula and a number term of fixed-point logic with counting.

Lemma 5.6 (Matrix inverse).
(1) There is an FPC-interpretation Θ of τ_mat in τ_mat such that for any τ_mat-structure A encoding an I × I-matrix A over a commutative ring R with a linear order, Θ(A) encodes an I × I-matrix B such that AB = 1, if A is invertible.
(2) For every k ≥ 1, there is an FPC-interpretation Θ of τ_mat in τ_mat such that for any τ_mat-structure A which encodes an I × I-matrix A over a commutative ring R that splits into a direct sum of k-generated local rings, Θ(A) encodes an I × I-matrix B over R such that B = A^{−1}, if A is invertible, and B = 0 otherwise.
Proof. For the first claim, we combine the arguments outlined above. For the second claim, we additionally apply Lemma 3.6 to obtain a linear order on the local summands of R.
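Putting the lemmas above together for a tiny concrete case, the following Python sketch (an illustration, with R = Z_4 standing in for a general local ring: maximal ideal m = 2Z_4, so |m| = 2, residue field GF(2), q = 2) computes ℓ = |GL_I(R)| by the counting formula, tests invertibility via A^ℓ = 1, and recovers the inverse as A^{ℓ−1}.

```python
def gl_order(q, m_size, n):
    """|GL_n(R)| = |m|^(n^2) * prod_{i<n} (q^n - q^i) for a finite local ring R."""
    gl_f = 1
    for i in range(n):
        gl_f *= q**n - q**i
    return m_size**(n * n) * gl_f

def mat_mult(A, B, mod):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % mod for j in range(n)]
            for i in range(n)]

def mat_pow(A, e, mod):
    n = len(A)
    result = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            result = mat_mult(result, A, mod)
        A = mat_mult(A, A, mod)
        e >>= 1
    return result

def inverse_or_none(A, mod, l):
    """A is invertible iff A^l = 1, and then A^{-1} = A^{l-1}."""
    identity = [[int(i == j) for j in range(len(A))] for i in range(len(A))]
    return mat_pow(A, l - 1, mod) if mat_pow(A, l, mod) == identity else None
```

Over a commutative ring a matrix is invertible exactly when its determinant is a unit, which gives an independent brute-force check of the group order for GL_2(Z_4).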

5.2. Characteristic polynomial over Galois rings. In [8], Blass et al. established that the problem of deciding singularity of a square matrix over GF(2) can be expressed in FPC, as we already explained above. Recall that a matrix A is singular over a field if, and only if, its determinant det(A) is zero. The result of Blass et al. therefore implies that the determinant of a matrix over GF(2) can be expressed in FPC by testing for singularity. This result was generalised by Dawar et al. [12], who showed that over any finite field (as well as over Z and Q) the characteristic polynomial (and thereby the determinant) of a matrix can be defined in FPC (for full details, see [20]). Pakusa [27] observed that the same approach works for the definability of the determinant and characteristic polynomial of matrices over prime rings Z_{p^n}. Recall that for an I × I-matrix A over a commutative ring R, the characteristic polynomial χ_A ∈ R[X] of A is defined as χ_A = det(X · E_I − A), where E_I denotes the I × I-identity matrix over R.
Here we show that the characteristic polynomial of matrices over any Galois ring can also be defined in FPC. A finite commutative ring R is called a Galois ring if it is a Galois (ring) extension of the ring Z_{p^n} for a prime p and n ≥ 1. As we will only work with the following equivalent characterisation of Galois rings, we omit the definition of Galois ring extensions (for details, we refer to [6,25]).
Definition 5.7. A Galois ring is a finite commutative ring R which is isomorphic to a quotient ring Z_{p^n}[X]/(f(X)), where f(X) is a monic irreducible polynomial of degree r in Z_{p^n}[X] whose image under the reduction map µ : Z_{p^n} → Z_{p^n}/(p · Z_{p^n}) ≅ GF(p) is irreducible. Such a polynomial is called a Galois polynomial of degree r over Z_{p^n}.
We summarise some useful facts about Galois rings. For every ring Z_{p^n} and every r ≥ 1, there is a unique Galois extension of degree r over Z_{p^n}, which we denote by GR(p^n, r). Moreover, any Galois ring is a chain ring, and thus we can use Lemma 3.6 to obtain an FPC-definable linear order on such rings. As Galois rings include all finite fields and all prime rings, the following theorem gives a generalisation of all known results concerning the logical complexity of the characteristic polynomial and determinant from [20,12,8].
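For concreteness, here is a small Python sketch of arithmetic in the smallest non-trivial example, GR(4, 2) = Z_4[X]/(X^2 + X + 1); the choice of f(X) = X^2 + X + 1, which is monic over Z_4 and irreducible modulo 2 (i.e. a Galois polynomial of degree 2), is ours for illustration.

```python
# Elements of GR(4, 2) are pairs (a, b) encoding a + bX with a, b in Z_4.
# The ring is local (indeed a chain ring) with maximal ideal 2R, so an
# element is a unit iff its reduction modulo 2 is nonzero in GF(4).

PN = 4  # p^n with p = 2, n = 2

def gr_mult(u, v):
    a, b = u
    c, d = v
    # (a + bX)(c + dX) = ac + (ad + bc)X + bd*X^2, and X^2 = -X - 1 mod f(X)
    hi = b * d
    return ((a * c - hi) % PN, (a * d + b * c - hi) % PN)

def is_unit(u):
    return (u[0] % 2, u[1] % 2) != (0, 0)
```

Brute-force inversion over all 16 elements confirms the chain-ring picture: exactly |R| − |2R| = 16 − 4 = 12 elements are units.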
Theorem 5.8 (Characteristic polynomial). There are FPC-formulas θ_det(z) and θ_char(z, υ), where z is an element variable and υ is a number variable, such that for any τ_mat-structure A which encodes an I × I-matrix A over a Galois ring R we have:
• A |= θ_det[d] if, and only if, the determinant of A over R is d ∈ R;
• A |= θ_char[d, k] if, and only if, the coefficient of X^k in the characteristic polynomial χ_A(X) of A over R is d ∈ R.
Before we prove this theorem, we need some technical results. First of all, we fix an encoding of polynomials by number terms of FPC, as follows.
Definition 5.9 (Encoding polynomials). Let π(υ) be a number term (of FPC) in signature τ, where υ is a number variable. Given a τ-structure A and an integer m, we write poly_X(π, A, m) to denote the integer polynomial a_m X^m + · · · + a_1 X + a_0, where a_i = π[i]^A for each i ∈ [m].

To prove Theorem 5.8, we lift the input matrix A to a matrix A⋆ over a ring S with R = S/(p^n): each entry h(X) in Z_{p^n}[X]/(f(X)) is lifted to the polynomial H(X) in S whose reduction modulo p^n is h(X). Finally, we apply Csanky's algorithm over S to the matrix A⋆ and then reduce the output modulo p^n to get the correct result. This last reduction is sound as we have R = S/(p^n). As explained in [20, §3.4.3], Csanky's algorithm can be formalised in FPC even when the ring elements are given explicitly as polynomials in this way. Putting everything together, we conclude that each coefficient of the characteristic polynomial χ_A(X) of A can be defined in FPC. Since the determinant of A is precisely the constant term of χ_A(X), Theorem 5.8 now follows.
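The lift-compute-reduce strategy can be illustrated in Python with the Faddeev–LeVerrier recurrence, a trace-based relative of Csanky's method (this is our illustrative substitute, not the FPC formalisation of [20]): the recurrence is run exactly over the integers, where the divisions by 1, …, n are exact, and only the final coefficients are reduced modulo p^n.

```python
def char_poly_mod(A, mod):
    """Coefficients of det(X*E - A), listed from X^n down to the constant term,
    computed exactly over Z via Faddeev-LeVerrier and then reduced mod `mod`."""
    n = len(A)

    def mm(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    coeffs = [1]                                      # leading coefficient of X^n
    M = [[int(i == j) for j in range(n)] for i in range(n)]
    for k in range(1, n + 1):
        AM = mm(A, M)
        ck = -sum(AM[i][i] for i in range(n)) // k    # division by k is exact over Z
        coeffs.append(ck)
        M = [[AM[i][j] + (ck if i == j else 0) for j in range(n)]
             for i in range(n)]
    return [c % mod for c in coeffs]
```

Because all intermediate arithmetic happens in characteristic zero, the troublesome divisions never meet zero divisors of Z_{p^n}; reducing only the final output is exactly the soundness argument used above.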

Discussion
Motivated by the question of finding extensions of FPC that capture larger fragments of PTIME, we have analysed the (inter-)definability of solvability problems over various classes of algebraic domains. Similar to the notion of rank logic [12], one can consider solvability logic, the extension of FPC by Lindström quantifiers that decide solvability of linear equation systems. In this context, our results from §2 and §3 can be seen to relate fragments of solvability logic obtained by restricting quantifiers to different algebraic domains, such as Abelian groups or commutative rings. We have also identified many classes of algebraic structures over which the solvability problem reduces to the very basic problem of solvability over cyclic groups of prime-power order. This raises the question of whether a reduction even to groups of prime order is possible; in that case, solvability logic would turn out to be a fragment of rank logic. On the other hand, it also remains open whether or not matrix rank over finite fields can be expressed in fixed-point logic extended by solvability operators. With respect to specific algebraic domains, we proved that FPC can define a linear order on the class of all k-generated local rings, i.e. on classes of local rings whose maximal ideal can be generated by k elements, where k is a fixed constant. Together with our results from §3, this can be used to show that all natural problems from linear algebra over (not necessarily local) k-generated rings reduce to problems over ordered rings under FP-reductions. An interesting direction of future research is to explore how far our techniques can be used to show (non-)definability in fixed-point logic of other problems from linear algebra over rings.
Finally, we mention an interesting topic of related research: the logical study of permutation group membership problems (GM for short). An instance of GM consists of a set Ω, a set of generating permutations π_1, . . . , π_n on Ω and a target permutation π, and the problem is to decide whether π is generated by π_1, . . . , π_n. This problem is known to be decidable in polynomial time (indeed, it is in NC [5]). We can show that all the solvability problems we studied in this paper reduce to GM under first-order reductions (basically, an application of Cayley's theorem). In particular, this shows that GM is not definable in FPC. By extending fixed-point logic with a suitable operator for GM, we therefore obtain a logic which extends rank logics and in which all the solvability problems we have studied are definable. This logic is worthy of further study, as it can uniformly express all problems from (linear) algebra that have been considered so far in the context of understanding the descriptive complexity gap between FPC and PTIME.
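To make GM concrete, here is a naive Python illustration that closes the generators under composition; it is exponential in general, unlike the polynomial-time algorithms such as Schreier–Sims, and merely spells out what the problem asks.

```python
def generated_group(generators):
    """Close a set of permutations (tuples mapping i to pi[i]) under composition.

    Since every permutation has finite order, closure under composition alone
    already contains the identity and all inverses, i.e. the generated group."""
    group = set(generators)
    frontier = list(group)
    while frontier:
        p = frontier.pop()
        for g in generators:
            q = tuple(p[g[i]] for i in range(len(g)))  # the composition p after g
            if q not in group:
                group.add(q)
                frontier.append(q)
    return group

def gm(generators, target):
    """Decide the permutation group membership problem by brute force."""
    return target in generated_group(generators)
```

For example, the transposition (1 2) and the 3-cycle (1 2 3) on three points generate the full symmetric group, so every permutation of the three points is a positive GM instance for these generators.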