UNIFORM INTERPOLANTS IN EUF : ALGORITHMS USING DAG-REPRESENTATIONS

Abstract. The concept of a uniform interpolant for a quantifier-free formula, computed from a given formula together with a list of symbols, while well-known in the logic literature, has long been unknown to the formal methods and automated reasoning community. This concept is precisely defined here. Two algorithms for computing quantifier-free uniform interpolants in the theory of equality over uninterpreted symbols (EUF), given a list of symbols to be eliminated, are proposed. The first algorithm is non-deterministic and generates a uniform interpolant expressed as a disjunction of conjunctions of literals, whereas the second algorithm gives a compact representation of a uniform interpolant as a conjunction of Horn clauses. Both algorithms exploit efficient dedicated DAG representations of terms. Correctness and completeness proofs are supplied, using arguments combining rewrite techniques with model theory.


Introduction
The theory of equality over uninterpreted symbols, henceforth denoted by EUF, is one of the simplest theories that have found numerous applications in computer science, compiler optimization, formal methods and logic. Starting with the works of Shostak [Sho84] and Nelson and Oppen [NO79] in the early eighties, some of the first algorithms were proposed in the context of developing approaches for combining decision procedures for quantifier-free theories, including freely constructed data structures and linear arithmetic over the rationals. EUF was exploited for hardware verification of pipelined processors by Burch and Dill [BD94] and, more widely, subsequently in formal methods and verification using model-checking frameworks. With the popularity of SMT solvers, where EUF serves as a glue for combining solvers for different theories, numerous new graph-based algorithms have been proposed in the literature over the last two decades for checking unsatisfiability of a conjunction of (dis)equalities of terms built using function symbols and constants. In the rest of the paper we propose two algorithms for computing uniform interpolants in EUF (correctness and completeness of such algorithms are proved in Section 5); we conclude in Section 6. This paper extends a conference paper [GGK20] in two respects: first, it improves the presentation and includes the full proofs, adding also further explanations; second, it contains additional material, including detailed examples and some complexity considerations.
Related work on the use of UIs. The use of uniform interpolants in model-checking safety problems for infinite-state systems was already mentioned in [GM08] and further exploited in a recent research line on the verification of data-aware processes [CGG+19b, CGG+19a, CGG+20b, GGMR20, GGMR21]. Model checkers need to explore the space of all reachable states of a system; a precise exploration (either forward, starting from a description of the initial states, or backward, starting from a description of the unsafe states) requires quantifier elimination. The latter is not always available or might have prohibitive complexity; in addition, it is usually preferable to make over-approximations of the reachable states, both to avoid divergence and to speed up convergence. One well-established technique for computing over-approximations consists in extracting interpolants from spurious traces, see e.g. [McM06]. One possible advantage of uniform interpolants over ordinary interpolants is that they do not introduce over-approximations, so that abstraction/refinement cycles are not needed when they are employed (the precise reason for this goes through the connection between uniform interpolants, model completeness and existentially closed structures; see [CGG+20b] for a full account). In this sense, computing uniform interpolants has the same advantages and disadvantages as quantifier elimination, with two remarkable differences. The first difference is that uniform interpolants may be available also in theories not admitting quantifier elimination (EUF being the typical example); the second is that computing uniform interpolants may be tractable when the language is suitably restricted, e.g. to unary function symbols (this was already mentioned in [GM08]; see also Remark 3.4 below). The restriction to unary function symbols is sufficient in database-driven verification to encode primary and foreign keys [CGG+20b].
It is also worth noticing that, precisely by using uniform interpolants for this restricted language, new decidability results have been achieved in [CGG+20b] for interesting classes of infinite-state systems. Notably, such results are also operationally mirrored in the MCMT [GR10] implementation since version 2.8.

Preliminaries
We adopt the usual first-order syntactic notions, including signature, term, atom, (ground) formula; our signatures are always finite or countable and include equality. Without loss of generality, only functional signatures, i.e. signatures whose only predicate symbol is equality, are considered. A tuple x_1, . . . , x_n of variables is compactly represented as x. The notation t(x), φ(x) means that the term t (resp. the formula φ) has free variables included in the tuple x. This tuple is assumed to be formed of distinct variables; thus we underline that, when we write e.g. φ(x, y), we mean that the tuples x, y are made of distinct variables that are also disjoint from each other. A formula is said to be universal (resp. existential) if it has the form ∀x φ(x) (resp. ∃x φ(x)), where φ is quantifier-free. Formulae with no free variables are called sentences.
From the semantic side, the standard notion of a Σ-structure M is used: this is a pair formed of a set (the 'support set', indicated as |M|) and an interpretation function. The interpretation function maps n-ary function symbols to n-ary operations on |M| (in particular, constant symbols are mapped to elements of |M|). A free-variables assignment I on M extends the interpretation function by mapping also variables to elements of |M|; the notion of truth of a formula in a Σ-structure under a free-variables assignment I is the standard one. It may be necessary to expand a signature Σ with a fresh name for every a ∈ |M|: such an expanded signature is called Σ^{|M|}, and M is, by abuse, seen as a Σ^{|M|}-structure itself by interpreting the name of a ∈ |M| as a (the name of a is directly indicated as a for simplicity).
A Σ-theory T is a set of Σ-sentences; a model of T is a Σ-structure M where all sentences in T are true. We use the standard notation T |= φ to say that φ is true in all models of T for every assignment to the variables occurring free in φ. We say that φ is T -satisfiable iff there is a model M of T and an assignment to the variables occurring free in φ making φ true in M.
A Σ-embedding [CK90] (or, simply, an embedding) between two Σ-structures M and N is a map µ : |M| −→ |N| between the support sets |M| of M and |N| of N satisfying the condition (M |= ϕ ⇒ N |= ϕ) for all Σ^{|M|}-literals ϕ (here M is regarded as a Σ^{|M|}-structure by interpreting each additional constant a ∈ |M| as itself, and N is regarded as a Σ^{|M|}-structure by interpreting each additional constant a ∈ |M| as µ(a)). If µ : M −→ N is an embedding that is just the identity inclusion |M| ⊆ |N|, we say that M is a substructure of N or that N is an extension of M.
Let M be a Σ-structure. The diagram of M, written ∆_Σ(M) (or just ∆(M)), is the set of ground Σ^{|M|}-literals that are true in M. An easy but important result, called the Robinson Diagram Lemma [CK90], says that, given any Σ-structure N, the embeddings µ : M −→ N are in bijective correspondence with the expansions of N to Σ^{|M|}-structures which are models of ∆_Σ(M). The expansions and the embeddings are related in the obvious way: the name of a is interpreted as µ(a). The typical use of the Robinson Diagram Lemma is the following: suppose we want to show that some structure M can be embedded into a structure N in such a way that some set of sentences Θ is true. By the Lemma, this turns out to be equivalent to the consistency of the set of sentences ∆(M) ∪ Θ: thus, the Diagram Lemma can be used to transform an embeddability problem into a consistency problem (the latter is a problem of a logical nature, to be solved for instance by appealing to the compactness theorem for first-order logic).
Notably, if T has uniform quantifier-free interpolation, then it has ordinary quantifier-free interpolation, in the sense that if we have T |= φ(e, z) → φ′(z, y) (for quantifier-free formulae φ, φ′), then there is a quantifier-free formula θ(z) such that T |= φ(e, z) → θ(z) and T |= θ(z) → φ′(z, y). In fact, if T has uniform quantifier-free interpolation, then the interpolant θ is independent of φ′ (the same θ(z) can be used as interpolant for all entailments T |= φ(e, z) → φ′(z, y), varying φ′). Uniform quantifier-free interpolation has a direct connection to an important notion from classical model theory, namely model completeness (see [CGG+19c] for more information).
2.2. Problem Statement. In this paper the problem of computing UIs is considered for the case in which T is the pure identity theory in a functional signature Σ; this theory is called EUF(Σ), or just EUF in the SMT-LIB2 terminology. Two different algorithms are proposed for that (while proving correctness and completeness of such algorithms, it is simultaneously shown that UIs exist in EUF). The first algorithm computes a UI in disjunctive normal form, whereas the second algorithm supplies a UI in conjunctive normal form. Both algorithms use suitable DAG-compressed representations of formulae.
The following notation is used throughout the paper. Since it is easily seen that existential quantifiers commute with disjunctions, it is sufficient to compute UIs for primitive formulae, i.e. for formulae of the kind ∃e φ(e, z), where φ is a constraint, i.e. a conjunction of literals. We partition all the 0-ary symbols from the input, as well as symbols newly introduced, into disjoint sets, with the following conventions:
- e = e_0, . . . , e_N (with N an integer) are symbols to be eliminated, called variables;
- z = z_0, . . . , z_M (with M an integer) are symbols not to be eliminated, called parameters;
- symbols a, b, . . . stand for both variables and parameters, and for (fresh) constants as well (usually introduced during skolemization).
In the following we will also use symbols y for indicating variables that changed their status and do not need to be eliminated anymore: we use symbols a, b, . . . for them as well. Variables e are usually skolemized during the manipulations of our algorithms and proofs below, in the sense that they have to be considered as fresh individual constants.
Remark 2.1. UI computations eliminate symbols which are existentially quantified variables (or skolemized constants). Elimination of function symbols can be reduced to elimination of variables in the following way. Consider a formula ∃f φ(f, z), where φ is quantifier-free. Successively abstracting out functional terms, we get that ∃f φ(f, z) is equivalent to a formula of the kind ∃e ∃f (⋀_i (f(t_i) = e_i) ∧ ψ), where the e are fresh variables (with e_i ∈ e), the t_i are tuples of terms, f does not occur in the t_i, e_i, ψ, and ψ is quantifier-free. The latter is semantically equivalent to ∃e (⋀_{i≠j} (t_i = t_j → e_i = e_j) ∧ ψ), where t_i = t_j abbreviates the conjunction of the component-wise equalities between the tuples t_i and t_j.
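The abstraction step of Remark 2.1 can be illustrated with a small sketch (hypothetical Python, not part of the paper): given the argument tuples at which f is applied, it introduces one fresh variable per distinct tuple and emits the functional-consistency clauses t_i = t_j → e_i = e_j as strings.

```python
# Sketch (not the paper's code) of the abstraction in Remark 2.1: each
# application f(t_i) is named by a fresh variable e_i, and functional
# consistency is recorded by the clauses t_i = t_j -> e_i = e_j.

def ackermannize(applications):
    """applications: list of argument tuples t_i (terms as strings).
    Returns the abstraction map t_i -> e_i and the consistency clauses."""
    abstraction = {}
    for args in applications:
        if args not in abstraction:
            abstraction[args] = "e%d" % len(abstraction)
    clauses = []
    items = list(abstraction.items())
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            (ti, ei), (tj, ej) = items[i], items[j]
            body = " & ".join("%s = %s" % p for p in zip(ti, tj))
            clauses.append("(%s) -> %s = %s" % (body, ei, ej))
    return abstraction, clauses

abstr, cs = ackermannize([("x",), ("y",), ("x",)])   # cs == ['(x = y) -> e0 = e1']
```

Note that the two occurrences of f(x) are abstracted by the same fresh variable, so only one consistency clause arises.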
2.3. Flat Literals, DAGs and Congruence Closure. A flat literal is a literal of one of the following kinds:

f(a_1, . . . , a_n) = b,   a_1 = a_2,   a_1 ≠ a_2   (2.1)

where a_1, . . . , a_n and b are (not necessarily distinct) variables or constants. A formula is flat iff all literals occurring in it are flat; flat terms are terms that may occur in a flat literal (i.e. terms like those appearing in (2.1)). We call a DAG-definition (or simply a DAG) any formula δ(y, z) of the following form (where y := y_1, . . . , y_n):

δ(y, z) := ⋀_{i=1}^{n} (y_i = f_i(y_1, . . . , y_{i−1}, z)).
Thus, δ(y, z) provides an explicit definition of the y in terms of the parameters z.
DAGs are commonly used to represent formulae and substitutions in compressed form: in fact, a formula like

∃y (δ(y, z) ∧ Φ(y, z))   (2.2)

is equivalent to Φ((y)σ_δ, z), where σ_δ is the substitution replacing each y_i by its explicit definition; the formula (2.2) is called a DAG-representation. The formula Φ((y)σ_δ, z) is said to be the unravelling of (2.2): notice that computing such an unravelling in uncompressed form by explicitly performing substitutions causes an exponential blow-up. This is why we shall systematically prefer DAG-representations (2.2) to their uncompressed forms. As stated above, our main aim is to compute the UI of a primitive formula ∃e φ(e, z); using trivial logical manipulations (that have just linear complexity costs), it can be shown that, without loss of generality, the constraint φ(e, z) can be assumed to be flat. To do so, it is sufficient to perform a preprocessing procedure by applying well-known Congruence Closure Transformations: the reader is referred to [Kap97] for a full account.
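As a rough illustration of the flattening preprocessing (a sketch under our own encoding conventions, not the procedure of [Kap97]): nested terms are abstracted bottom-up, each compound subterm being named by a fresh variable, so that only flat literals of the shapes in (2.1) remain.

```python
# Sketch (not the procedure of [Kap97]) of flattening: nested terms, encoded as
# tuples ("f", arg, ...), are abstracted bottom-up, naming each compound
# subterm by a fresh variable so that only flat literals remain.

def flatten(term, counter, out):
    """Returns a variable naming `term`; appends flat literals
    (function symbol, argument tuple, name) to `out`."""
    if isinstance(term, str):          # a variable or constant is already flat
        return term
    fun, *args = term
    flat_args = tuple(flatten(a, counter, out) for a in args)
    counter[0] += 1
    name = "e%d" % counter[0]
    out.append((fun, flat_args, name))
    return name

lits = []
top = flatten(("f", ("g", "z1"), "z2"), [0], lits)
# lits == [("g", ("z1",), "e1"), ("f", ("e1", "z2"), "e2")], top == "e2"
```

The fresh names e_1, e_2 are then among the variables to be eliminated, as in the examples of Section 3.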

The Tableaux Algorithm
The algorithm proposed in this section is tableaux-like. It manipulates formulae in the following DAG-primitive format

∃y (δ(y, z) ∧ Φ(y, z) ∧ ∃e Ψ(e, y, z))   (3.1)

where δ(y, z) is a DAG and Φ, Ψ are flat constraints (notice that the e do not occur in Φ). We call a formula of that format a DAG-primitive formula. To make reading easier, we shall omit in (3.1) the existential quantifiers, so that (3.1) will be written simply as

δ(y, z) ∧ Φ(y, z) ∧ Ψ(e, y, z).   (3.2)

We remark that Ψ can contain literals whose terms depend explicitly on e, whereas Φ does not contain any occurrence of the e variables. Initially the DAG δ and the constraint Φ are the empty conjunction. In the DAG-primitive formula (3.2), variables z are called parameter variables, variables y are called (explicitly) defined variables and variables e are called (truly) quantified variables. Variables z are never modified; in contrast, during the execution of the algorithm it could happen that some quantified variables disappear or become defined variables (in the latter case they are renamed: a quantified variable e_i becoming defined is renamed as y_j, for a fresh y_j). Below, letters a, b, . . . range over e ∪ y ∪ z. Two flat terms

f(a_1, . . . , a_n)  and  f(b_1, . . . , b_n)   (3.3)

are said to be compatible iff for every i = 1, . . . , n, either a_i is identical to b_i or both a_i and b_i are e-free. The difference set of two compatible terms as above is the set of disequalities a_i ≠ b_i for the indices i such that a_i is not identical to b_i.

3.1. The Tableaux Algorithm. Our algorithm applies the transformations below in a "don't care" non-deterministic way. By saying this, we mean that the output is independent (up to logical equivalence) of the order of application of the transformations, once the following priority is respected: the last transformation has lower priority than the remaining transformations. The last transformation is also responsible for splitting the execution of the algorithm into several branches: each branch will produce a different disjunct in the output formula.
Each state of the algorithm is a DAG-primitive formula like (3.2). We now provide the rules that constitute our 'tableaux-like' algorithm.
(2) DAG Update Rule: if Ψ contains e_i = t(y, z), remove it, rename everywhere e_i as y_j (for a fresh y_j) and add y_j = t(y, z) to δ(y, z).
(3) e-Free Literal Rule: if Ψ contains a literal L(y, z), move it to Φ(y, z).
(4) Splitting Rule: if Ψ contains a pair of atoms t = a and u = b, where t and u are compatible flat terms like in (3.3), and no disequality from the difference set of t, u belongs to Φ, then non-deterministically apply one of the following alternatives:
(4.0) remove from Ψ the atom u = b (i.e. f(b_1, . . . , b_n) = b), and add to Ψ the atom a = b together with all the equalities a_i = b_i such that a_i ≠ b_i belongs to the difference set of t, u;
(4.1) add to Ψ one of the disequalities from the difference set of t, u (notice that the difference set cannot be empty, otherwise Rule (1.i) applies).
When no more rules are applicable, delete Ψ(e, y, z) from the resulting formula δ(y, z) ∧ Φ(y, z) ∧ Ψ(e, y, z), so as to obtain for each branch an output formula in DAG-representation of the kind ∃y (δ(y, z) ∧ Φ(y, z)).
We will see in Remark 3.4 that in the case of only unary functions, Rule (4) can be disregarded, and the algorithm becomes much simpler and computationally tractable.

Example 3.1. We give a simple example of the application of the Splitting Rule. Let the pair of atoms f(z_1, e) = z_2 and f(z_3, e) = z_4 be in Ψ, and suppose that z_1 ≠ z_3 is not in Φ. Since the difference set of f(z_1, e) and f(z_3, e) is {z_1 ≠ z_3} and z_1 ≠ z_3 ∉ Φ, the Splitting Rule applies. Applying the first alternative (Rule (4.0)), the atom f(z_3, e) = z_4 is removed from Ψ, and the atoms z_1 = z_3 and z_2 = z_4 are added to Ψ. Applying the second alternative (Rule (4.1)), the disequality z_1 ≠ z_3 is added to Ψ.
Notice that in the above example the literals added to Ψ as a consequence of the Splitting Rule can then immediately be moved to Φ via Rule (3), as they are e-free; this is always the case for the literals coming from difference sets in Rule (4.0) and Rule (4.1) (because they only involve e-free terms, by definition of difference set), but not necessarily for the atom a = b mentioned in Rule (4.0), because this atom might not be e-free.
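The compatibility and difference-set checks driving the Splitting Rule can be sketched as follows (hypothetical Python, not the paper's code; flat terms are encoded as (function symbol, argument tuple) pairs):

```python
# Sketch (not the paper's code) of the notions driving the Splitting Rule.
# A flat term f(a_1, ..., a_n) is encoded as (function symbol, argument tuple).

def compatible(t, u, evars):
    """Two flat terms are compatible iff they carry the same function symbol
    and, position by position, the arguments are identical or both e-free."""
    (f, a), (g, b) = t, u
    return f == g and len(a) == len(b) and all(
        ai == bi or (ai not in evars and bi not in evars)
        for ai, bi in zip(a, b))

def difference_set(t, u):
    """Disequalities a_i != b_i at the positions where the arguments differ."""
    return {(ai, bi) for ai, bi in zip(t[1], u[1]) if ai != bi}

# The pair from Example 3.1: f(z1, e) and f(z3, e), with e to be eliminated
t, u = ("f", ("z1", "e")), ("f", ("z3", "e"))
```

On the Example 3.1 pair, compatible(t, u, {"e"}) holds and difference_set(t, u) is {("z1", "z3")}, i.e. the singleton {z_1 ≠ z_3}.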
Remark 3.1. Splitting Rule (4) creates branches in an optimized way: indeed, the alternatives a_i = b_i in (4.0) and a_i ≠ b_i in (4.1) are not generated for every pair of variables a, b; the rule is applied only when the left members are compatible flat terms. Notice also that in case for every a_i in t and every b_i in u the pairs a_i, b_i are all identical, Rule (1.i) applies instead.
The following remark will be useful to prove the correctness of our algorithm, since it gives a description of the kind of literals contained in a state triple that is terminal (i.e., when no rule applies).
Remark 3.2. Notice that if no transformation applies to (3.1), the set Ψ can only contain disequalities of the kind e_i ≠ a, together with equalities of the kind f(a_1, . . . , a_n) = a. However, when it contains f(a_1, . . . , a_n) = a, one of the a_i must belong to e (otherwise (2) or (3) applies). Moreover, if f(a_1, . . . , a_n) = a and f(b_1, . . . , b_n) = b are both in Ψ, then either they are not compatible or a_i ≠ b_i belongs to Φ for some i and for some variables a_i, b_i not in e (otherwise (4) or (1.i) applies).
The following proposition states that, by applying the previous rules, termination is always guaranteed.
Proposition 3.1. The non-deterministic procedure presented above always terminates.
Proof. It is sufficient to show that every branch of the algorithm must terminate. In order to prove that, first observe that the total number of variables involved never increases, and it decreases if (1.ii) is applied (it might decrease also as an effect of (1.0)). Whenever this number does not decrease, there is a bound on the number of disequalities that can occur in Ψ, Φ. Now, transformation (4.1) decreases the number of disequalities that are still missing; the other transformations do not increase this number. Finally, all transformations except (4.1) reduce the length of Ψ.
Remark 3.3. The overall complexity of the above algorithm is exponential in time, because of the number of branches created by Splitting Rule (4). However, the number of rules applied in a single branch is quadratic in the size of the input: this fact can be proved by relying on the termination argument shown in Proposition 3.1. Indeed, every rule of the algorithm above except for Rule (4.1) reduces the length of Ψ, which is O(n) (where n is the size of the input). Let c_1 be a counter that decreases every time a rule (except for Rule (4.1)) is applied: hence, c_1 := O(n). Moreover, whenever Rule (4.1) is applied, the length of Ψ remains the same, but the number of disequalities that are still missing decreases: this number of missing disequalities is clearly O(n^2). Let c_2 be a counter that decreases whenever Rule (4.1) is applied: hence, c_2 := O(n^2). Now, consider the counter c := c_1 + c_2: this counter decreases every time a rule of the algorithm above is applied. Since c = O(n) + O(n^2) = O(n^2), we conclude that the number of rules applied in a single branch is quadratic, as wanted.
Flattening gives the set of literals (3.5), where the newly introduced variables e_1, e_2 need to be eliminated too. We use lists of integers to represent the nodes of the tree created by the tableaux-like algorithm (Figure 1). Applying (4.0) to the first and the second literals of (3.5), the first branch is created: node [1] is generated, where f(z_4, e) = s_2 is removed and the new equalities z_3 = z_4, s_1 = s_2 are introduced. Then, applying (4.0) to the first and the third literals of (3.5), we get a new branch: node [1.1] is generated, where f(z_1, e) = e_1 is removed and the new equalities z_3 = z_1, s_1 = e_1 are introduced. As shown in Figure 2, this causes e_1 to be renamed as y_5 by (2). Applying again (4.0) to the first and the fourth literals of (3.5), a new branch is created: in node [1.1.1], f(z_2, e) = e_2 is removed and the equalities z_3 = z_2, e_2 = s_1 are added; moreover, e_2 is renamed as y_6 by using (2). In addition, in node [1.1.2], no literal is canceled and the disequality z_3 ≠ z_2 from the difference set of f(z_3, e) and f(z_2, e) is added. To all the newly introduced literals in y, z in all the branches, we can apply (3).
We also analyze a portion of the branch starting with node [2]: as a consequence of (4.1), in node [2] the new disequality z_3 ≠ z_4 is introduced. Then, applying (4.0) to the first and the third literals of (3.5), we get a new branch: as shown in Figure 3, node [2.1] is generated, where f(z_1, e) = e_1 is removed and the new equalities z_3 = z_1, s_1 = e_1 are introduced. The last equality causes e_1 to be renamed as y_5 by (2). Applying again (4.0) to the second and the fourth literals of (3.5), a new branch is created: in node [2.1.1], f(z_2, e) = e_2 is removed and the equalities z_4 = z_2, s_2 = e_2 are added; moreover, e_2 is renamed as y_6 by using again (2). To all the newly introduced literals in y, z in all the branches, we can apply (3).
The algorithm generates 16 branches, each of which produces one disjunct of the output formula, so that the UI turns out to be equivalent to the disjunction of the formulae extracted from the leaves. Notice that this is consistent with: (1) the formula in Leaf 1, where z_3 = z_4 implies s_1 = s_2, and where z_1 = z_3 and z_2 = z_4 imply t = f(s_1, s_2); (2) the formula in Leaf 2, where z_3 = z_4 implies s_1 = s_2; (3) the formula in Leaf 6, where z_1 = z_3 and z_2 = z_4 imply t = f(s_1, s_2).

The Conditional Algorithm
This section discusses a new algorithm with the objective of generating a compact representation of the UI in EUF: this representation avoids splitting and is based on conditions in Horn clauses generated from literals whose left sides have the same function symbol. A by-product of this approach is that the size of the output UI can often be kept polynomial. Further, this algorithm generates the UI of ∃e φ(e, z) (where φ(e, z) is a conjunction of literals and e = e_0, . . . , e_N, z = z_0, . . . , z_M, as usual) in conjunctive normal form, as a conjunction of Horn clauses (we recall that a Horn clause is a disjunction of literals containing at most one positive literal). Toward this goal, a new data structure, the conditional DAG, a generalization of a DAG, is introduced so as to maximize sharing of sub-formulae.
Using the core preprocessing procedure explained in Subsection 2.3, it is assumed that φ is the conjunction of a set S_1 of flat literals containing only literals of the following two kinds:

f(a_1, . . . , a_h) = a   (4.1)
a ≠ b   (4.2)

(recall that we use letters a, b, . . . for elements of e ∪ z). Since literals not involving variables to be eliminated, or supplying an explicit definition of one of them, can be moved directly to the output, we can assume that variables in e must occur in (4.2) and in the left side of (4.1). We do not include equalities like a = e because they can be eliminated by replacement.
4.1. The Conditional Algorithm. The algorithm requires two steps in order to get a set of clauses representing the output in a suitably compressed format.
Step 1. Out of every pair of literals f(a_1, . . . , a_h) = a and f(a′_1, . . . , a′_h) = a′ of the kind (4.1) (where a is syntactically different from a′), we produce the Horn clause

a_1 = a′_1 ∧ · · · ∧ a_h = a′_h → a = a′

which can be further simplified by deleting identities in the antecedent. Let us call S_2 the set of clauses obtained from S_1 by adding these new Horn clauses to it.
Step 2. We saturate S_2 with respect to the following rewriting rule: given a clause Γ → e_j = e_i and a clause Δ → L in which e_j occurs, produce the clause Γ ∧ Δ → L′, where L′ is obtained from L by replacing e_j with e_i. Notice that we apply the rewriting rule only to conditional equalities of the kind Γ → e_j = e_i: this is because clauses like Γ → e_j = z_i are considered 'conditional definitions' (and clauses like Γ → z_j = z_i 'conditional facts').
We let S 3 be the set of clauses obtained from S 2 by saturating it with respect to the above rewriting rule, by removing from antecedents identical literals of the kind a = a and by removing subsumed clauses.
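Step 1 admits a direct sketch (hypothetical Python, not the paper's code), with flat literals encoded as (function symbol, argument tuple, result) triples and Horn clauses as (antecedent, consequent) pairs, identities already deleted from the antecedent:

```python
# Sketch (not the paper's code) of Step 1: flat equalities are triples
# (function symbol, argument tuple, result variable); a Horn clause is a pair
# (antecedent, consequent), the antecedent being a tuple of equalities.

def step1(literals):
    """For every pair of (4.1)-literals with the same function symbol and
    syntactically different results, emit the clause whose antecedent equates
    the arguments component-wise (identities deleted) and whose consequent
    equates the results."""
    clauses = []
    for i in range(len(literals)):
        for j in range(i + 1, len(literals)):
            (f, a, r), (g, b, s) = literals[i], literals[j]
            if f == g and r != s:
                antecedent = tuple((x, y) for x, y in zip(a, b) if x != y)
                clauses.append((antecedent, (r, s)))
    return clauses

# Hypothetical input f(z1) = e1, f(z2) = z3 yields z1 = z2 -> e1 = z3
example = step1([("f", ("z1",), "e1"), ("f", ("z2",), "z3")])
```

The emitted pair ((("z1", "z2"),), ("e1", "z3")) encodes the clause z_1 = z_2 → e_1 = z_3, of the shape appearing in S_2 of Example 4.1.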

S. Ghilardi, A. Gianola, and D. Kapur
Vol. 18:2

Step 1 produces the following set S_2 of Horn clauses:

z_1 = z_2 → e_1 = z_3,   z_4 = z_5 → e_2 = z_6,   e_1 = z_1 → e_2 = z_2,   e_2 = z_1 → e_1 = z_2.

Since there are no Horn clauses whose consequent is an equality of the kind e_i = e_j, Step 2 does not produce further clauses, and we have S_3 = S_2.

4.2. Conditional DAGs.
In order to be able to extract the output UI in an uncompressed format out of the above set of clauses S_3, we must identify all the 'implicit conditional definitions' it contains. As an illustration, Example 4.1 contains, among others, the following 'implicit' conditional definitions: the variable e_1 can be conditionally defined as e_1 := z_3 when z_1 = z_2 holds (because z_1 = z_2 → e_1 = z_3 is in S_3); moreover, once e_1 becomes conditionally defined, e_2 becomes implicitly defined as e_2 := z_2, under the condition that e_1 = z_1 holds (because e_1 = z_1 → e_2 = z_2 is in S_3 as well). All such conditional definitions need to be made explicit, and this is what we are going to formalize in the following. Let w be an ordered subset of e = {e_1, . . . , e_N}: that is, in order to specify w we must take a subset of e and an ordering of this subset. Intuitively, these w will play the role of placeholders inside a conditional definition.
If we let w be w 1 , . . . , w s (where, say, w i is some e k i with k i ∈ {1, . . . , N }), we let L i be the language restricted to z and w 1 , . . . , w i (for i ≤ s): in other words, an L i -term or an L i -clause may contain only terms built up from z, w 1 , . . . , w i by applying to them function symbols. In particular, L s (also called L w ) is the language restricted to z ∪ w. We let L 0 be the language restricted to z.
Given a set S of clauses and w as above, a w-conditional DAG δ (or simply a conditional DAG δ) built out of S is an s-tuple of Horn clauses from S of the form

Γ_1 → w_1 = t_1,   . . . ,   Γ_s → w_s = t_s   (4.4)

where Γ_i is a finite tuple of L_{i−1}-atoms and t_i is an L_{i−1}-term. Intuitively, a conditional DAG takes into consideration the given ordered subset w of the e symbols and, on top of that, builds a sequence of conditional equations (i.e., Horn clauses), where step by step more and more e symbols are employed and iteratively defined in terms of e symbols that precede them in the order. Roughly speaking, each set of "dependencies" induced among the e-symbols provides a suitable set of (conditional) definitions. Conditional DAGs are used to define suitable formulae that are needed for the construction of the uniform interpolant. We now define such formulae. Given a w-conditional DAG δ, we define the formulae φ^i_δ (for i = s + 1, . . . , 1) as follows:
- φ^{s+1}_δ is the conjunction of all L_w-clauses belonging to S;
- φ^i_δ (for i = s, . . . , 1) is obtained by discharging the i-th conditional definition: up to logical equivalence, φ^i_δ is (⋀Γ_i) → φ^{i+1}_δ(t_i/w_i).
It can be seen that φ^i_δ is equivalent to a quantifier-free L_{i−1}-formula,^5 in particular φ^1_δ (abbreviated as φ_δ) is equivalent to an L_0-quantifier-free formula. The explicit computation of such quantifier-free formulae may however produce an exponential blow-up. The intuition behind these constructions is that, in order to produce the correct uniform interpolant, one needs to consider all the possible L_w-clauses in S and then iteratively 'eliminate' the e symbols in them by exploiting the conditional definitions of the e symbols (determined by the conditional DAG δ) in terms of previously defined e symbols in the order of w.
Example 4.2. Let us analyze the conditional DAGs δ that can be extracted out of the set S_3 of Horn clauses mentioned in Example 4.1 (we disregard those δ such that φ_δ is the empty conjunction). The w_1-conditional DAG δ_1, with w_1 = e_1, e_2 and conditional definitions

z_1 = z_2 → e_1 = z_3,   e_1 = z_1 → e_2 = z_2,

where e_2 depends upon e_1, produces the formula φ_{δ_1}; similarly, the w_2-conditional DAG δ_2, with w_2 = e_2, e_1 and conditional definitions

z_4 = z_5 → e_2 = z_6,   e_2 = z_1 → e_1 = z_2,

where e_1 depends upon e_2, produces the formula φ_{δ_2}: notice that φ_{δ_1} and φ_{δ_2} are not logically equivalent. Indeed, φ_{δ_1} is logically equivalent to

z_1 = z_2 ∧ z_3 = z_1 → S_3 \ {e_0}[z_3/e_1, z_2/e_2]   (4.5)

where we used the notation S_3 \ {e_0}[z_3/e_1, z_2/e_2] to mean the result of the substitution of e_1 with z_3 and of e_2 with z_2 in the conjunction of the S_3-clauses not involving e_0. Notice that, intuitively, this formula is obtained by iteratively defining, step by step, bigger e variables in terms of smaller ones: when z_1 = z_2 ∧ z_3 = z_1 holds, e_1 is conditionally replaced by z_3 and e_2 by z_2. Analogously, φ_{δ_2} is logically equivalent to

z_4 = z_5 ∧ z_6 = z_1 → S_3 \ {e_0}[z_6/e_2, z_2/e_1]   (4.6)

(the explanation of the notation S_3 \ {e_0}[z_6/e_2, z_2/e_1] is as above). A third possibility is to use the conditional definitions z_1 = z_2 → e_1 = z_3 and z_4 = z_5 → e_2 = z_6 with (equivalently) either w_1 or w_2, resulting in a conditional DAG δ_3 with φ_{δ_3} logically equivalent to

z_1 = z_2 ∧ z_4 = z_5 → S_3 \ {e_0}[z_3/e_1, z_6/e_2].   (4.7)

The next lemma shows the relevant property of φ_δ.

Lemma 4.1. For every set of clauses S and for every w-conditional DAG δ built out of S, the formula S → φ_δ is logically valid.

(Footnote 5: Since φ^i_δ is logically equivalent to (⋀Γ_i) → φ^{i+1}_δ(t_i/w_i), it is immediate to see that it can be recursively turned, again up to equivalence, into a conjunction of Horn clauses.)
Proof. We prove that S → φ^i_δ is valid by downward induction on i. The base case i = s + 1 is clear, since the L_w-clauses of φ^{s+1}_δ all belong to S. For the case i ≤ s, proceed, e.g., in natural deduction as follows: assume S and Γ_i in order to prove φ^{i+1}_δ(t_i/w_i). Since Γ_i → w_i = t_i ∈ S, by implication elimination we get w_i = t_i. Now the claim follows from the induction hypothesis (which yields φ^{i+1}_δ) together with equality replacement.
Notice that it is not true that the conjunction of all possible φ δ (varying δ and w) implies S: in fact, such a conjunction can be empty for instance in case S is just {e 1 = e 2 }.
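The recursion behind φ_δ (substitute t_i for w_i and guard with Γ_i, from the innermost definition outwards) can be sketched as follows (hypothetical Python over formulae kept as plain strings; the naive textual substitution is only adequate when variable names do not overlap):

```python
# Sketch (not the paper's code) of the recursion defining phi_delta.  Formulae
# are kept as strings; delta is a list of triples (Gamma_i, w_i, t_i) in the
# order of w.  Plain textual substitution is adequate only when variable names
# do not overlap (e.g. "e1" occurring inside "e10" would be mangled).

def phi_delta(delta, lw_clauses):
    """phi^{s+1} is the conjunction of the L_w-clauses; then, from the last
    definition down, phi^i := Gamma_i -> phi^{i+1}[t_i / w_i]."""
    phi = " & ".join(lw_clauses)
    for gamma, wi, ti in reversed(delta):
        phi = phi.replace(wi, ti)              # phi^{i+1}(t_i / w_i)
        phi = "(%s) -> (%s)" % (" & ".join(gamma), phi)
    return phi

# The conditional DAG delta_1 of Example 4.2, applied to a single sample clause
delta1 = [(("z1 = z2",), "e1", "z3"), (("e1 = z1",), "e2", "z2")]
```

For instance, phi_delta(delta1, ["e2 = z6"]) returns "(z1 = z2) -> ((z3 = z1) -> (z2 = z6))": the condition e_1 = z_1 has become z_3 = z_1, matching the intuition of Example 4.2 (e_1 conditionally replaced by z_3 and e_2 by z_2).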

4.3. Extraction of UIs.
We shall prove below that, in order to get a UI of ∃e φ(e, z), one can take the conjunction of all possible φ_δ, varying δ among the conditional DAGs that can be built out of the set of clauses S_3 from Step 2 of the above algorithm. We highlight that, in order to generate the correct output (i.e., the uniform interpolant), one needs to consider all the possible conditional DAGs built out of the set of clauses S_3 from Step 2, which requires taking into consideration all DAGs that can be defined considering all possible ordered subsets w.
Example 4.3. If φ is the conjunction of the literals of Example 4.1, then the conjunction of (4.5), (4.6) and (4.7) is a UI of ∃e φ; in fact, no further non-trivial conditional DAG δ can be extracted (if we take w = e_1 or w = e_2 or w = ∅ to extract δ, then φ_δ is the empty conjunction).

Step 2 produces by rewriting the further clauses z_1 = z_2 → f(z_1, e_0) = e_1 and z_1 = z_2 → h(e_1) = z_0. We can extract two conditional DAGs δ (using both the conditional definitions (4.8), or just the first one); in both cases φ_δ is z_1 = z_2 ∧ z_3 = z_4 → h(z_0) = z_0, which is the UI.
As should be evident from the two examples above, the conditional DAG representation of the output considerably reduces computational complexity in many cases; this is a clear advantage of the present algorithm over the algorithm from Section 3 and over other approaches like, e.g., [CGG+19c]. Still, the next example shows that in some cases the overall complexity remains exponential.
Example 4.5. Let e be e 0 , . . . , e N and let z be {z 0 , z′ 0 } ∪ {z i,j , z′ i,j | 1 ≤ i < j ≤ N }. Let φ(e, z) be the conjunction of the identities f (e 0 , e 1 ) = z 0 , f (e 0 , e N ) = z′ 0 and the set of identities h ij (e 0 , z ij ) = e i , h ij (e 0 , z′ ij ) = e j , varying i, j such that 1 ≤ i < j ≤ N . We now show that, applying the conditional algorithm, we get a UI which is exponentially long.
After applying Step 1 of the algorithm presented in Subsection 4.1, we get the Horn clauses z ij = z′ ij → e i = e j , as well as the clause e 1 = e N → z 0 = z′ 0 . If we now apply Step 2, we can never produce a conditional clause of the kind Γ → e i = t with t being e-free (because we can only rewrite some e i into some e j ). Thus no sequence of clauses like (4.4) can be extracted from S 3 : notice in fact that the term t 1 from such a sequence must not contain the variables e. In other words, the only w-conditional DAG δ that can be extracted is based on the empty w ⊆ e and is itself empty.
In order to extract the UI, we need to compute the formulae φ δ for every w-conditional DAG δ, of which there is only one in this case. However, this unique δ produces a formula φ δ that is quite big: it is the conjunction of the clauses from S 3 where the e do not occur (S 3 contains in fact Γ → z 0 = z′ 0 for exponentially many e-free Γ's).
We conclude this example by commenting on the reason why φ δ has exponential size. In fact, for every minimal set of pairs I ⊆ {1, . . . , N } × {1, . . . , N } such that the equivalence relation generated by I contains the pair (1, N ), we have that S 3 contains the clause Γ I → z 0 = z′ 0 , where Γ I is the set of equalities z ij = z′ ij varying (i, j) ∈ I. 6
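The counting argument can be checked mechanically. The Python sketch below is our own illustration: a small union-find stands in for "the equivalence relation generated by I", and the minimal connecting sets I are enumerated as chains of consecutive elements of a subset X of {1, . . . , N} containing both 1 and N, of which there are 2^(N-2).

```python
from itertools import combinations

def connects(pairs, n, a=1, b=None):
    """Check whether the equivalence relation on {1, ..., n} generated by
    `pairs` contains (a, b); a plain union-find with path halving."""
    b = n if b is None else b
    parent = list(range(n + 1))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in pairs:
        parent[find(i)] = find(j)
    return find(a) == find(b)

def minimal_connecting_sets(n):
    """Enumerate the minimal I's: for each X with {1, n} <= X <= {1,...,n},
    take the chain of consecutive elements of X."""
    inner = list(range(2, n))
    for r in range(len(inner) + 1):
        for mid in combinations(inner, r):
            xs = [1, *mid, n]
            yield [(xs[k], xs[k + 1]) for k in range(len(xs) - 1)]
```

Each such chain clearly relates 1 and N, dropping any pair disconnects it (minimality), and distinct X give distinct I, so exponentially many clauses Γ I → z 0 = z′ 0 arise.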

Correctness and Completeness Proofs
In this section we prove correctness and completeness of our two algorithms. To this aim, we need some preliminaries, both from model theory and from term rewriting.
Extensions and UIs are related to each other by the following result, which we take from [CGG+19c]:

Lemma 5.1 (Cover-by-Extensions). Let T be a first-order theory. A formula ψ(y) is a UI in T of ∃e φ(e, y) iff it satisfies the following two conditions: (i) T |= φ(e, y) → ψ(y); (ii) for every model M of T and for every tuple of elements a from the support of M such that M |= ψ(a), there is an extension N of M such that N |= φ(e, a) for some tuple of elements e from the support of N .

6 You can easily find exponentially many such I, e.g. by selecting a subset X of {1, . . . , N } containing both 1 and N and letting I be the set of pairs of consecutive elements of X.

For term rewriting we refer to a textbook like [BN98]; we only recall the following classical result:

Lemma 5.2. Let R be a canonical ground rewrite system over a signature Σ. Then there is a Σ-structure M such that for every pair of ground terms t, u we have that M |= t = u iff the R-normal form of t is the same as the R-normal form of u. Consequently, R is consistent with a set of negative literals S iff for every t ≠ u ∈ S the R-normal forms of t and u are different.
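Lemma 5.2 suggests a simple decision procedure once a canonical ground rewrite system is at hand: compare normal forms. The following Python sketch illustrates this; the encoding of ground terms as strings (constants) and nested tuples ('f', arg1, ..., argn), with the rewrite system as a dict from left-hand sides to right-hand sides, is our own choice.

```python
def normal_form(term, rules):
    """R-normal form of a ground term: normalize the arguments first
    (innermost rewriting), then apply rules at the root while possible."""
    if isinstance(term, tuple):
        term = (term[0],) + tuple(normal_form(a, rules) for a in term[1:])
    while term in rules:
        term = normal_form(rules[term], rules)
    return term

def consistent_with(rules, disequalities):
    """Lemma 5.2: R is consistent with the negative literals t != u iff
    the R-normal forms of t and u differ, for each such literal."""
    return all(normal_form(t, rules) != normal_form(u, rules)
               for t, u in disequalities)
```

For instance, with R = {f(a) -> b, c -> b}, the disequality f(a) != c is inconsistent with R (both sides normalize to b), while a != b is consistent.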
We are now ready to prove correctness and completeness of our algorithms. We first give the relevant intuitions for the proof technique, which is the same for both cases. By the Robinson Diagram Lemma, the diagram ∆ Σ (M) of a model M of the output formula can be viewed as a ground rewrite system (its equalities of the kind f (a 1 , . . . , a n ) = a are obviously oriented from left-to-right) and every term occurring in ∆(M) is in normal form. If an algorithm works properly, it will be possible to see that the completion of the union of ∆ Σ (M) with the input constraint (or with a constraint equivalent to it) is trivial and does not produce inconsistencies. To sum up, the completeness proofs of both algorithms require the following technical ingredients: (1) Lemma 5.1, for transforming the problem of computing UIs into an embeddability problem; (2) the Robinson Diagram Lemma, for turning the previous problem into a consistency problem, which is more tractable; (3) the completion of the diagram joined with the input constraint, so as to get a canonical rewriting system.

Correctness and Completeness of the Tableaux Algorithm
In this subsection, we prove the correctness and completeness of the Tableaux Algorithm. We first summarize the structure of the proof by commenting on its main steps. As discussed above, the proof of Theorem 5.1 relies on Lemma 5.1: in order to prove that the output formula is a uniform interpolant, it is sufficient to show that the embeddability conditions stated in Lemma 5.1 hold. This is achieved, thanks to the Robinson Diagram Lemma, by showing that the Robinson Diagram ∆ Σ (M), where M satisfies the output formula, is consistent with the input formula, as manipulated up to logical equivalences by the algorithm. In the case of the Tableaux Algorithm, after a normalization of a rewriting system suitably extending this diagram, we show that all the obtained oriented equalities form a canonical rewriting system: this is an immediate consequence of Remark 3.2. We then conclude by applying Lemma 5.2: this lemma allows us to exhibit a model of the canonical rewriting system, showing in turn the consistency of ∆ Σ (M) with the input formula.
Theorem 5.1. Suppose that we apply the algorithm of Subsection 3.1 to the primitive formula ∃e(φ(e, z)) and that the algorithm terminates with its branches in the states δ 1 (y 1 , z) ∧ Φ 1 (y 1 , z) ∧ Ψ 1 (e 1 , y 1 , z), . . . , δ k (y k , z) ∧ Φ k (y k , z) ∧ Ψ k (e k , y k , z); then the UI of ∃e(φ(e, z)) in EU F is the unravelling (see Subsection 2.3) of the formula

∃y 1 (δ 1 (y 1 , z) ∧ Φ 1 (y 1 , z)) ∨ · · · ∨ ∃y k (δ k (y k , z) ∧ Φ k (y k , z)).

Proof. Since the UI of a disjunction can be taken to be the disjunction of the UIs of the disjuncts, it is sufficient to check that if a formula like (3.1) is terminal (i.e. no rule applies to it) then its UI is ∃y (δ(y, z) ∧ Φ(y, z)). To this aim, we apply Lemma 5.1: we pick a model M satisfying the formula δ(y, z) ∧ Φ(y, z) from the output via an assignment I to the variables y, z 7 and we show that M can be embedded into a model M′ such that, for a suitable extension I′ of I to the variables e, we have that (M′, I′) satisfies also Ψ(e, y, z) from the input. This embeddability problem can be transformed into a consistency problem as follows. In fact, what we need (by the Robinson Diagram Lemma) is to find a model for the following set of literals

∆ Σ (M) ∪ Ψ(e, y, z) ∪ {a = ã} a∈y∪z (5.2)

where ã is the value of a under the assignment I (here all variables in (5.2) are seen as constants, so (5.2) is a set of ground literals). We can orient the equalities in (5.2) by letting function symbols have bigger precedence than constants and by letting a have bigger precedence than ã. Normalizing (5.2) replaces a with ã in Ψ: call Ψ̃ the resulting set of literals (we conventionally use ẽ i as an alias for e i , for all e i ∈ e, to have a uniform notation).

After this normalization, we show that all oriented equalities in ∆ Σ (M) ∪ Ψ̃ ∪ {a = ã} a∈y∪z form a canonical rewriting system. This is due to Remark 3.2: in fact if f (a 1 , . . . , a n ) and

Correctness and Completeness of the Conditional Algorithm
In this subsection we provide the full proof of correctness and completeness of the Conditional Algorithm. First of all, we briefly present the main ideas behind this proof. As in the case of the Tableaux Algorithm, exploiting Lemma 5.1, we need to show that the embeddability conditions stated in Lemma 5.1 hold. We do so by using the Robinson Diagram Lemma: we prove that the Robinson Diagram ∆(M), where M satisfies the output formula, is consistent with the input formula. However, in the case of the Conditional Algorithm, the proof of this fact is more involved: indeed, we use ground Knuth-Bendix completion in order to prove that no inconsistent literal can be produced, and this requires a careful analysis of the equalities that can be generated during the completion. Specifically, particular attention is needed for equalities involving only symbols from a certain subset of the variables. The fact that no inconsistency can be produced concludes the proof of the theorem.

7 Actually the values of the assignment I to the z uniquely determine the values of I to the y.

In order to prove Lemma 5.4 (used in the proof of Theorem 5.2), we need to show the following preliminary lemma:

Lemma 5.3. If Γ → f (a 1 , . . . , a h ) = b and Γ′ → f (a′ 1 , . . . , a′ h ) = b′ both belong to the set of clauses S 3 obtained after Step 2 in Subsection 4.1 and b is not the same term as b′, then S 3 contains also a clause subsuming the clause Γ, Γ′, a 1 = a′ 1 , . . . , a h = a′ h → b = b′.

Proof. By induction on the number K of applications of the rewriting rule of Step 2 needed to derive Γ → f (a 1 , . . . , a h ) = b and Γ′ → f (a′ 1 , . . . , a′ h ) = b′. If K is 0, the claim is clear by the instruction of Step 1. Suppose that K > 0 and let Γ → f (a 1 , . . . , a h ) = b be obtained, by rewriting e j to e i using Γ 1 → e i = e j , from some clause C. We need to distinguish cases depending on the position p of the rewriting. All cases being treated in the same way, suppose for instance that p is in the antecedent, 8 so that C is Γ 2 → f (a 1 , . . . , a h ) = b and Γ → f (a 1 , . . . , a h ) = b is Γ 1 , Γ 2 [e i ] p → f (a 1 , . . . , a h ) = b. Then by the induction hypothesis S 3 contains a clause subsuming Γ′, Γ 2 , a 1 = a′ 1 , . . . , a h = a′ h → b = b′, and rewriting with Γ 1 → e i = e j produces Γ′, Γ 1 , Γ 2 [e i ] p , a 1 = a′ 1 , . . . , a h = a′ h → b = b′, as required.
We now state and prove the theorem of correctness and completeness of the Conditional Algorithm.
Theorem 5.2. Let S 3 be obtained from ∃e φ(e, z) as in Steps 1-2 of Subsection 4.1. Then the conjunction C of all possible φ δ (varying δ among the conditional DAGs that can be built out of S 3 ) is a UI of ∃e φ(e, z) in EU F.
Proof. We use Lemma 5.1 in order to show that the output C is the UI of the input formula ∃e φ(e, z). Condition (i) of that lemma is ensured by Lemma 4.1 above, because S 3 is logically equivalent to φ. So let us take a model M and elements ã from its support such that M |= φ δ holds for every δ, under the assignment of the ã to the parameters z. We need to expand it to a superstructure N in such a way that we have N |= S 1 , under some assignment to z, e extending the assignment z → ã (recall that S 1 is logically equivalent to φ too). From now on, we consider the assignment z → ã fixed, so that when we write M |= C for a clause C(z) we mean that M |= C holds under the assignment z → ã.

Now, we can transform the embeddability problem of finding the aforementioned superstructure N into a consistency problem as follows. First of all, notice that every w-conditional DAG δ extracted from S 3 (let it be given by the clauses (4.4)) is naturally equipped with a substitution σ δ which is given in DAG form by w 1 → t 1 , . . . , w s → t s . We say that δ is realized in M iff M |= Γ i σ δ holds for every i = 1, . . . , s. Let δ be a w-conditional DAG which is realized in M and let it be maximal with this property (a w-conditional DAG δ is said to be bigger than a w′-conditional DAG δ′ iff w includes w′; the inclusion is as sets, the order is disregarded). Since M |= φ δ and δ is realized in M, it is clear that all L w -clauses from S 3 (hence also all L w -literals from S 1 ) are true in M.

8 There is a case, where p is in the consequent, that is treated in a slightly different way from all the other ones. If, using Γ 1 → e i = e j , we rewrite a clause of the form Γ → f (a 1 , . . . , a n ) = e j into Γ 1 , Γ → f (a 1 , . . . , a n ) = e i (so that b is the same as e i ) and if b′ is e j , then, instead of applying induction, we can directly take Γ 1 → e i = e j as the clause we are looking for. If b′ is not e j , induction applies as in all the other cases.

Let u = u 1 , . . . , u k be the variables from e \ w and let S u be the literals from S 1 which are not L w -literals. What we need (by the Robinson Diagram Lemma) is to find a model for the following set of literals

∆ Σ (M) ∪ S u ∪ {w i = b̃ i | i = 1, . . . , s} ∪ {z i = ã i | i = 1, . . . , M } (5.3)

where b̃ i is the value of w i σ δ under the assignment z → ã. This is the consistency problem we solve in the remaining part of the proof: in order to do so, we obtain by completion a suitable canonical rewriting system that does not introduce inconsistencies.
We orient the functional equalities in (5.3) from left to right and the equalities w i = b̃ i also from left to right. The e are ordered as e 1 > · · · > e N and are bigger than the constants naming the elements of |M|; function symbols are bigger than constant symbols. We show that the ground Knuth-Bendix completion of (5.3) cannot produce any inconsistent literal of the kind t ≠ t (this completes the proof of the theorem).
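For concreteness, here is a naive ground Knuth-Bendix completion in Python, illustrating the kind of procedure invoked in the proof: each equation is normalized, oriented by a total reduction ordering, and added as a rule, recycling any older rule that the new rule can reduce. All encodings are our own illustrative choices; in particular, symbol count with a lexicographic tie-break stands in for the precedence-based ordering described above.

```python
def normal_form(term, rules):
    """Normal form of a ground term (strings or tuples ('f', args...))."""
    if isinstance(term, tuple):
        term = (term[0],) + tuple(normal_form(a, rules) for a in term[1:])
    while term in rules:
        term = normal_form(rules[term], rules)
    return term

def contains(term, sub):
    """Is `sub` a subterm of `term`?"""
    if term == sub:
        return True
    return isinstance(term, tuple) and any(contains(a, sub) for a in term[1:])

def key(term):
    """Symbol count, then the prefix-order symbol list: a simple total
    reduction ordering on ground terms (our stand-in for the paper's)."""
    flat = []
    def walk(t):
        if isinstance(t, tuple):
            flat.append(t[0])
            for a in t[1:]:
                walk(a)
        else:
            flat.append(t)
    walk(term)
    return (len(flat), flat)

def ground_complete(eqs):
    """Naive ground completion: normalize, orient, recycle reducible rules."""
    rules = {}
    todo = list(eqs)
    while todo:
        s, t = todo.pop()
        s, t = normal_form(s, rules), normal_form(t, rules)
        if s == t:
            continue            # trivial identity: drop it
        if key(t) > key(s):
            s, t = t, s         # orient: bigger side becomes the lhs
        # the new rule s -> t may reduce older rules: recycle them
        for l in [l for l in rules if contains(l, s) or contains(rules[l], s)]:
            todo.append((l, rules.pop(l)))
        rules[s] = t
    return rules
```

Consistency with the negative literals of (5.3) is then checked on normal forms, exactly as in Lemma 5.2: an inconsistency would show up as a disequality whose two sides reach the same normal form.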
First notice that the rules {w i = b̃ i | i = 1, . . . , s} ∪ {z i = ã i | i = 1, . . . , M } simply eliminate the w and the z from S u (they become inactive after such normalization steps). Let S̃ u be the set of equalities resulting after this elimination. It turns out that S̃ u can only contain equalities of the kind f (a 1 , . . . , a h ) = a (5.4) where a 1 , . . . , a h , a can be either among the u or constants naming elements of |M|. However, some of the u must be among a 1 , . . . , a h for each equality of the kind (5.4), because atoms not containing the u are already taken care of by ∆ Σ (M) and atoms like u i = t (where t does not contain any of the u) cannot be there because δ is maximal. During completion, in addition to these kinds of atoms, only atoms of the kinds (5.5) and (5.6) can possibly be produced. This is a consequence of the next lemma. Below we say that a tuple of atoms Γ is realized in M iff the Γ are L w -atoms and M |= Γσ δ ; similarly, we say that a literal Θ is conditionally realized in M if there exists Γ realized in M with Γ → Θ ∈ S 3 (if Θ is a negative literal, Γ → Θ stands for Γ, ¬Θ → ).
Lemma 5.4. Suppose that a literal Λ is produced during the completion of S̃ u . Then it must be of the kinds (5.4), (5.5), (5.6). Moreover, there exists a literal Λ′ such that (i) Λ′ is conditionally realized in M; (ii) Λ is obtained from Λ′ by rewriting z, w respectively to ã, b̃.
Proof. By straightforward case analysis; we analyze the most interesting case, given by the superposition of two rules of the kind (5.4). Suppose that the two superposed rules f (a 1 , . . . , a h ) = a and f (a 1 , . . . , a h ) = b come from clauses Γ′ → f (a′ 1 , . . . , a′ h ) = a′ and Γ″ → f (a″ 1 , . . . , a″ h ) = a″ with Γ′, Γ″ realized in M, with a′ 1 , . . . , a′ h , a′ rewritable (using z, w → ã, b̃) to a 1 , . . . , a h , a, respectively, and with a″ 1 , . . . , a″ h , a″ also rewritable (using z, w → ã, b̃) to a 1 , . . . , a h , b, respectively. By Lemma 5.3, S 3 contains a clause subsuming Γ′, Γ″, a′ 1 = a″ 1 , . . . , a′ h = a″ h → a′ = a″ (5.7) which is as required, because Γ′, Γ″, a′ 1 = a″ 1 , . . . , a′ h = a″ h is realized in M and a′ = a″ rewrites (using z, w → ã, b̃) to a = b. It remains to check that a = b is of the kind (5.6). If both a′, a″ taken from the consequent of (5.7) belong to z ∪ w, then, since the antecedent of (5.7) is realized in M and (5.7) belongs to S 3 , a and b must be the same element from |M|, so that a = b is a trivial identity (which does not enter into the completion). It cannot be that only one between a′ and a″ belongs to z ∪ w (the other one being from u), because δ is maximal among the conditional DAGs realized in M and thus it cannot be properly enlarged by adding to it the additional conditional definition which would be supplied by (5.7). Thus it must be the case that both a′, a″ are from u, which implies that they cannot be rewritten (using z, w → ã, b̃), so that a′ is a, a″ is b, and a = b is of the kind (5.6).
Proof of Theorem 5.2 (continued). Once S̃ u (standing alone) is completed, only literals of the kinds (5.4), (5.5), (5.6) are produced. No completion inference is possible between literals of the kinds (5.4), (5.5), (5.6) on one side and literals from ∆ Σ (M) ∪ {w i = b̃ i | i = 1, . . . , s} ∪ {z i = ã i | i = 1, . . . , M } on the other side; hence the completion of S̃ u alone, once joined to ∆ Σ (M) ∪ {w i = b̃ i | i = 1, . . . , s} ∪ {z i = ã i | i = 1, . . . , M }, yields a completion of (5.3). The only possible inconsistencies that can arise are given by literals of the kind u i ≠ u i . Suppose that indeed one such literal u i ≠ u i is produced during the completion of S̃ u . Applying the above lemma, there should be in S 3 a clause like Γ, u i = u i → (i.e., after simplification, a clause like Γ → ) with Γ realized in M. The latter means that M |= Γσ δ . This cannot be, because Γ → is an L w -clause from S 3 : in fact, we have that M |= φ δ and that δ is realized in M, which imply that M |= Cσ δ holds for every L w -clause C from S 3 by the definition of φ δ . In particular, we should have M |= ¬Γσ δ , taking Γ → as C, a contradiction.

Conclusions
Two different algorithms for computing uniform interpolants (UIs) from a formula in EUF with a list of symbols to be eliminated have been presented. They share a common subpart but differ in their overall objectives. The first algorithm generates a UI expressed as a disjunction of conjunctions of literals, whereas the second algorithm gives a compact representation of a UI as a conjunction of Horn clauses. The output of both algorithms needs to be expanded if a fully (or partially) unravelled uniform interpolant is needed for an application. This restriction/feature is similar in spirit to syntactic unification, where efficient unification algorithms likewise never produce output in fully expanded form, so as to avoid an exponential blow-up.
For generating a compact representation of the UI, both algorithms make use of DAG representations of terms by introducing new symbols to stand for subterms arising in the full expansion of the UI. Moreover, the second algorithm uses a conditional DAG, a new data structure introduced in this paper, to represent subterms under conditions. The complexity of the algorithms is also analyzed. It is shown that the first algorithm generates exponentially many branches, each of at most quadratic length; the UIs produced by the second algorithm have polynomial size in all the hand-made examples we tried (but the worst-case size is still exponential, as witnessed by ad hoc examples like Example 4.5). A fully expanded UI can easily be of exponential size. An implementation of both algorithms, along with a comparative study, is planned as future work. In parallel with the implementation, a characterization of classes of formulae for which the computation of UIs requires polynomial time in our algorithms (especially in the second one) needs further investigation.