Towards Uniform Certification in QBF

We pioneer a new technique that allows us to prove a multitude of previously open simulations in QBF proof complexity. In particular, we show that extended QBF Frege p-simulates clausal proof systems such as IR-Calculus, IRM-Calculus, Long-Distance Q-Resolution, and Merge Resolution. These results are obtained by taking a technique of Beyersdorff et al. (JACM 2020) that turns strategy extraction into simulation and combining it with new local strategy extraction arguments. This approach leads to simulations that are carried out mainly in propositional logic, with minimal use of the QBF rules. Our proofs therefore provide a new, largely propositional interpretation of the simulated systems. We argue that these results strengthen the case for uniform certification in QBF solving, since many QBF proof systems now fall into place underneath extended QBF Frege.


Introduction
The problem of evaluating Quantified Boolean Formulas (QBF), an extension of propositional satisfiability (SAT), is a canonical PSPACE-complete problem [SM73, AB09]. Many tasks in verification, synthesis and reasoning have succinct QBF encodings [SBPS19], making QBF a natural target logic for automated reasoning. As such, QBF has seen considerable interest from the SAT community, leading to the development of a variety of QBF solvers (e.g., [LB10, JKMC16, RT15, JM15b, PSS19a]). The underlying algorithms are often highly nontrivial, and their implementation can lead to subtle bugs [BLB10]. While formal verification of solvers is typically impractical, trust in a solver's output can be established by having it generate a proof trace that can be externally validated. This is already standard in SAT solving with the DRAT proof system [WHJ14], for which even formally verified checkers are available [CHJ+17]. A key requirement for standard proof formats like DRAT is that they simulate all current and emerging proof techniques.
Currently, there is no decided-upon checking format for QBF proofs (although there have been some suggestions [JBS+07, HSB17]). The main challenge in finding such a universal format is that QBF solvers are so radically different in their proof techniques that each solver essentially works in its own proof system. For instance, solvers based on CDCL and (some) clausal abstraction solvers can generate proofs in Q-resolution (Q-Res) [KKF95] or long-distance Q-resolution (LD-Q-Res) [BJ12], while the proof system underlying expansion-based solvers combines instantiation of universally quantified variables with resolution (∀Exp+Res) [JM15a]. Variants of the latter system have been considered: IR-calc (Instantiation Resolution) admits instantiation with partial assignments, and IRM-calc (Instantiation Resolution Merge) additionally incorporates elements of long-distance Q-resolution [BCJ19].
A universal checking format for QBF ought to simulate all of these systems. A good candidate for such a proof system has been identified in extended QBF Frege (eFrege + ∀red): Beyersdorff et al. showed [BBCP20] that a lower bound for eFrege + ∀red would not be possible without a major breakthrough.
In this work, we show that eFrege + ∀red does indeed p-simulate IRM-calc, Merge Resolution (M-Res) and LQU+-Res (a generalisation of LD-Q-Res), thereby establishing eFrege + ∀red and any stronger system (e.g., QRAT [HSB17] or G [KP90]) as potential universal checking formats in QBF. As corollaries, we obtain (known) simulations of ∀Exp+Res [KHS17] and LD-Q-Res [KS19] by QRAT, as well as a (new) simulation of IR-calc by QRAT, answering a question recently posed by Chede and Shukla [CS21]. A simulation structure with many of the known QBF proof systems and our new results is given in Figure 1. Our proofs crucially rely on a property of QBF proof systems known as strategy extraction. Here, "strategy" refers to a winning strategy in the two-player game (see Section 2 for more details) that corresponds exactly to a given QBF. A proof system is said to have strategy extraction if a winning strategy for the two-player game associated with a QBF can be computed from a proof of the formula in polynomial time.
Balabanov and Jiang discovered [BJ12] that Q-resolution admits a form of strategy extraction where a circuit computing a winning strategy can be extracted in linear time from a proof. Strategy extraction was subsequently proven for many QBF proof systems (cf. Figure 1): the expansion-based systems ∀Exp+Res [BCJ19], IR-calc [BCJ19] and IRM-calc [BCJ19], long-distance Q-resolution [ELW13], including with dependency schemes [ELW13], Merge Resolution [BBM18], Relaxing Stratex [Che16], and C-Frege + ∀red systems including eFrege + ∀red [BBCP20]. Strategy extraction also gained prominence because it became a method to show Q-resolution lower bounds [BCJ19]. Beyersdorff et al. [BBCP20, BCMS18] generalised this approach to more powerful proof systems, allowing them to establish a tight correspondence between lower bounds for eFrege + ∀red and two major open problems in circuit complexity and propositional proof complexity: they showed that proving a lower bound for eFrege + ∀red is equivalent to either proving a lower bound for P/poly or a lower bound for propositional eFrege. Chew [Che21] conjectured that all the aforementioned proof systems with strategy extraction were very likely simulated by eFrege + ∀red, and provided an outline of how to use strategy extraction to obtain the corresponding simulations.
We follow this outline in proving simulations of multiple systems by eFrege + ∀red. While strategy extraction for expansion-based systems [BCJ19] has been known for a while via the technique of Goultiaeva et al. [GVB11], there is currently no intuitive way to formalise this strategy extraction into a simulation proof. Here we instead studied a new strategy extraction technique given by Schlaipfer et al. [SSWZ20], which creates local strategies for each ∀Exp+Res line. Inductively, we can affirm each of these local strategies and prove the full strategy extraction this way. This local strategy extraction technique is based on arguments of Suda and Gleiss [SG18], which allows it to be generalised to the expansion-based system IRM-calc. We thus manage to prove a simulation of ∀Exp+Res and generalise it first to IR-calc and then to IRM-calc. We also show a much more straightforward simulation of M-Res, and an adaptation of the IRM-calc argument to LQU+-Res.
The remainder of the paper is structured as follows. In Section 2 we go over general preliminaries and the definition of eFrege + ∀red. The remaining sections are each dedicated to simulations of different calculi by eFrege + ∀red. In Section 3 we begin with a simulation of M-Res as a relatively easy example.
In Section 4 we show how eFrege + ∀red simulates expansion-based systems. We find a propositional interpretation and a local strategy for IR-calc; this leads to a full simulation of IR-calc by eFrege + ∀red. In Section 5 we extend this simulation to IRM-calc, which involves dealing with merged literals. In Section 6 we study the strongest CDCL-style proof system, LQU+-Res, and explain why it is also simulated by eFrege + ∀red, using a similar argument to that for IRM-calc. We leave some of the finer details of the simulations of IRM-calc and LQU+-Res to the Appendix.
Preliminaries

2.1. Quantified Boolean Formulas. When investigating QBF in computer science we want to standardise the input formula. In a prenex QBF, all quantifiers appear outermost in a (quantifier) prefix, and are followed by a propositional formula, called the matrix. If every propositional variable of the matrix is bound by some quantifier in the prefix, we say the QBF is a closed prenex QBF. We often want to standardise the propositional matrix as well, and so we can take the same approach as commonly seen in propositional logic. We denote the set of universal variables by U, and the set of existential variables by E. A literal is a propositional variable (x) or its negation (¬x or x̄). A clause is a disjunction of literals. Since disjunction is idempotent, associative and commutative, we can think of a clause simultaneously as a set of literals. The empty clause is just false. A conjunctive normal form (CNF) is a conjunction of clauses. Again, since conjunction is idempotent, associative and commutative, a CNF can be seen as a set of clauses. The empty CNF is true, and a CNF containing an empty clause is false. Every propositional formula has an equivalent formula in CNF; we therefore restrict our focus to closed PCNF QBFs, that is, closed prenex QBFs with CNF matrices.
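As a toy illustration (our own encoding, not from the paper), a closed PCNF and its recursive semantics can be sketched in a few lines; variables are integers and a literal's sign gives its polarity:

```python
def evaluate(prefix, matrix, assignment=None):
    """Recursively evaluate a closed PCNF QBF by the textbook semantics:
    (forall u) phi = phi[0/u] and phi[1/u]; (exists x) phi = phi[0/x] or phi[1/x].
    `prefix` is a list of ('forall'|'exists', var) pairs; `matrix` is a list
    of clauses, each a frozenset of signed integer literals."""
    if assignment is None:
        assignment = {}
    if not prefix:
        # Matrix evaluation: every clause must contain a satisfied literal.
        return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
                   for clause in matrix)
    (q, v), rest = prefix[0], prefix[1:]
    results = [evaluate(rest, matrix, {**assignment, v: b}) for b in (False, True)]
    return all(results) if q == 'forall' else any(results)

# forall u exists x . (u or x) and (not u or not x) is true: the existential
# player can always answer x = not u.
print(evaluate([('forall', 1), ('exists', 2)],
               [frozenset({1, 2}), frozenset({-1, -2})]))  # True
```

This brute-force recursion is exponential, of course; it only serves to make the quantifier semantics concrete.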
2.2. QBF Proof Systems.

2.2.1. Proof Complexity. A proof system [CR79] is a polynomial-time checking function that checks that every proof maps to a valid theorem. Different proof systems have varying strengths: in one system a theorem may require very long proofs, while in another the proofs could be considerably shorter. We use proof complexity to analyse the strength of proof systems [Kra19]. A proof system is said to have an Ω(f(n)) lower bound if there is a family of theorems such that the shortest proofs (in number of symbols) of the family are bounded below by Ω(f(n)), where n is the size (in number of symbols) of the theorem. Proof system p is said to simulate proof system q if there is a fixed polynomial P(x) such that for every q-proof π of every theorem y there is a p-proof of y no bigger than P(|π|), where |π| denotes the size of π. Under a stricter condition, proof system p is said to p-simulate proof system q if there is a polynomial-time algorithm that takes q-proofs to p-proofs preserving the theorem.

2.2.2. Extended Frege+∀-Red. Frege systems are "text-book" style proof systems for propositional logic. They consist of a finite set of axioms and rules where any variable can be replaced by any formula (so each rule and axiom is actually a schema). A Frege system also needs to be sound and complete. Frege systems are incredibly powerful and can handle simple tautologies with ease. No lower bounds are known for Frege systems, and all Frege systems are p-equivalent [CR79, Rec76]. For these reasons we can assume all Frege systems handle simple tautologies and syllogisms without going into details.
Extended Frege (eFrege) takes a Frege system and allows the introduction of new variables that do not appear in any previous line of the proof. These variables abbreviate formulas, but since new variables can be consecutively nested, they can be understood to represent circuits. The rule works by introducing the axiom v ↔ f for a new variable v (not appearing in the formula f). Alternatively, one can consider eFrege as the system whose lines are circuits instead of formulas.
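As a small illustration (our own example, not from the paper), the nested extension axioms v1 ↔ (x ∧ y) and v2 ↔ (v1 ∨ z) abbreviate a circuit that can be read off directly:

```python
def circuit(x, y, z):
    """Nested extension variables read as a circuit: each new variable
    abbreviates a formula over the inputs and earlier extension variables."""
    v1 = x and y   # extension axiom: v1 <-> (x and y)
    v2 = v1 or z   # extension axiom: v2 <-> (v1 or z)
    return v2

print(circuit(True, True, False))  # True
```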
Extended Frege is a very powerful system: it was shown [Kra95, Bey09] that any propositional proof system f can be simulated by eFrege + ||ϕ||, where ϕ is a polynomially recognisable axiom scheme. The QBF analogue is eFrege + ∀red, which adds the reduction rule to all existing eFrege rules [BBCP20]. eFrege + ∀red is refutationally sound and complete for closed prenex QBFs. The reduction rule allows one to substitute a universal variable in a formula with ⊥ or with ⊤, as long as no other variable appearing in that formula is right of it in the prefix. Extension variables now must appear in the prefix and must be quantified right of the variables used to define them; we can consider them to be defined immediately right of these variables, as there is no disadvantage to this.

2.3. QBF Strategies. For a closed prenex QBF Πϕ, the semantics of a QBF has an alternative definition in terms of games. The two-player QBF game has an ∃-player and a ∀-player. The game is played in the order of the prefix Π, left to right; the player owning the current quantifier must assign the quantified variable to ⊥ or ⊤. The existential player is trying to make the matrix ϕ true. The universal player is trying to make the matrix false. Πϕ is true if and only if there is a winning strategy for the ∃-player, and Πϕ is false if and only if there is a winning strategy for the ∀-player.
A strategy for a false QBF is a set of functions f_u, one for each universal variable u, defined on the variables left of u in the prefix. In a winning strategy, the propositional matrix must evaluate to false whenever every u is replaced by f_u. A QBF proof system has strategy extraction if there is a polynomial-time program that takes a refutation π of some QBF Ψ and outputs circuits that represent the functions of a winning strategy.
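The winning condition for a universal strategy can be checked by brute force as follows (our own illustrative encoding, matching no particular system in the paper; `strategy` maps each universal variable to a function of the assignment so far):

```python
from itertools import product

def is_winning_forall_strategy(prefix, matrix, strategy):
    """Check, by exhausting all existential plays, that `strategy` is a
    winning universal strategy: every play of the game falsifies some
    clause of the matrix."""
    exist_vars = [v for q, v in prefix if q == 'exists']
    for bits in product([False, True], repeat=len(exist_vars)):
        choice = dict(zip(exist_vars, bits))
        assignment = {}
        for q, v in prefix:  # play the game left to right
            assignment[v] = strategy[v](assignment) if q == 'forall' else choice[v]
        if all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in matrix):
            return False  # the existential player satisfied the matrix
    return True

# exists x forall u . (x or u) and (not x or not u) is false: play u = x.
prefix = [('exists', 1), ('forall', 2)]
matrix = [frozenset({1, 2}), frozenset({-1, -2})]
print(is_winning_forall_strategy(prefix, matrix, {2: lambda a: a[1]}))  # True
```

Strategy extraction asks for such strategies as polynomial-size circuits computed from a refutation, rather than checked exhaustively.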
A policy is defined similarly to a strategy, but with a partial function for each universal variable instead of a fully defined function.

Extended Frege+∀-Red p-simulates M-Res
In this section we show a first example of how the eFrege + ∀red simulation argument works in practice for systems that have strategy extraction. Merge resolution provides a straightforward example because the strategies themselves are very suitable to be managed in propositional logic. In later theorems, where we simulate calculi like IR-calc and IRM-calc, representing strategies is much more of a challenge.
3.1. Merge Resolution. Merge resolution (M-Res) was first defined by Beyersdorff, Blinkhorn and Mahajan [BBM18]. Its lines combine clausal information with a merge map for each universal variable. Merge maps give a "local" strategy which, when followed, forces the clause to be true or the original CNF to be false.
3.1.1. Definition of Merge Resolution. Each line of an M-Res proof consists of a clause on existential variables and partial universal strategy functions for the universal variables. These functions are represented by merge maps, which are defined as follows. For universal variable u, let E_u be the set of existential variables left of u in the prefix. For line number i, a non-trivial merge map M^u_i is a collection of nodes in [i] along with a construction function M^u_i, which details the structure. For j ∈ [i], the construction function value M^u_i(j) is either in {⊥, ⊤} for leaf nodes, or in E_u × [j] × [j] for internal nodes. The root r(u, i) is the highest value among the nodes of M^u_i. The strategy function h^u_{i,j} : {0, 1}^{E_u} → {0, 1} maps assignments of the existential variables E_u in the dependency set of u to a value for u. The function h^u_{i,t} for a leaf node t is simply the truth value M^u_i(t). For an internal node a with M^u_i(a) = (x, b, c), we should interpret h^u_{i,a} as "if x then h^u_{i,b}, else h^u_{i,c}". In summary, the merge map M^u_i is a representation of the strategy given by the function h^u_{i,r(u,i)}.
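The strategy function h can be evaluated by walking the merge map from its root; the following sketch (our own encoding: a dict from node indices to either a boolean leaf or an (x, b, c) triple) is illustrative only:

```python
def merge_map_strategy(nodes, root, assignment):
    """Evaluate the strategy function h at `root`: a leaf holds the value
    played for u; an internal node (x, b, c) means "if x then follow node b,
    else follow node c"."""
    node = nodes[root]
    while not isinstance(node, bool):
        x, b, c = node
        node = nodes[b if assignment[x] else c]
    return node

# Node 3 branches on existential x: play u = False if x, else u = True.
nodes = {1: False, 2: True, 3: ('x', 1, 2)}
print(merge_map_strategy(nodes, 3, {'x': True}))  # False
```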
The merge resolution proof system inevitably has merge maps for the same universal variable interact, and we have two kinds of relations on pairs of merge maps. Definition 3.1. Merge maps M^u_j and M^u_k are said to be consistent if M^u_j(i) = M^u_k(i) for each node i appearing in both M^u_j and M^u_k. Definition 3.2. Merge maps M^u_j and M^u_k are said to be isomorphic if there exists a bijection f from the nodes of M^u_j to the nodes of M^u_k such that leaves map to leaves with the same constant, and M^u_j(a) = (x, b, c) implies M^u_k(f(a)) = (x, f(b), f(c)). With two merge maps M^u_j and M^u_k, we define two operations as follows. Select(M^u_j, M^u_k) returns M^u_j if M^u_k is trivial (representing a "don't care") or M^u_j and M^u_k are isomorphic, and returns M^u_k if M^u_j is trivial and not isomorphic to M^u_k. If neither M^u_j nor M^u_k is trivial and the two are not isomorphic, then the operation fails.
Merge(x, M^u_j, M^u_k) builds a merge map M^u_i with a new node r(u, i) as a root node (greater than the maximum node in each of M^u_j and M^u_k), defined by M^u_i(r(u, i)) = (x, r(u, j), r(u, k)) and agreeing with M^u_j and M^u_k on their own nodes. Proofs in M-Res consist of lines, where every line is a pair (C_i, {M^u_i | u ∈ U}). Here, C_i is a purely existential clause (it contains only literals from existentially quantified variables). The other part is a set containing a merge map for each universal variable (some of the merge maps can be trivial, meaning they do not represent any function). Each line is derived by one of two rules. Axiom: C_i = {l | l ∈ C, var(l) ∈ E} is the existential subset of some clause C, where C is a clause in the matrix. If the universal literals u, ū do not appear in C, let M^u_i be trivial. If universal variable u appears in C, then let i be the sole node of M^u_i with M^u_i(i) = ⊥; likewise, if ¬u appears in C, then let i be the sole node of M^u_i with M^u_i(i) = ⊤. Resolution: C_i = (C_j ∪ C_k) \ {x, ¬x} for some pivot x with x ∈ C_j and ¬x ∈ C_k, and every M^u_i can either be defined as Select(M^u_j, M^u_k), when M^u_j and M^u_k are isomorphic or one is trivial, or as Merge(x, M^u_j, M^u_k), when x < u and M^u_j and M^u_k are consistent.
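A simplified sketch of the Select and Merge operations (our own encoding, with isomorphism crudely replaced by equality of the node dictionaries, and trivial maps encoded as empty dicts):

```python
def consistent(m1, m2):
    """Merge maps are consistent if they agree on every shared node."""
    return all(m1[i] == m2[i] for i in m1.keys() & m2.keys())

def select(m1, m2):
    """Select: keep the non-trivial map; fail if both are non-trivial and
    differ (equality stands in for isomorphism in this sketch)."""
    if not m2 or m1 == m2:
        return m1
    if not m1:
        return m2
    raise ValueError("Select undefined: maps differ and neither is trivial")

def merge(x, m1, m2, r1, r2):
    """Merge over pivot x: add a fresh root (x, r1, r2) above the roots r1
    and r2; requires the maps to be consistent."""
    assert consistent(m1, m2)
    merged = {**m1, **m2}
    root = max(merged, default=0) + 1
    merged[root] = (x, r1, r2)
    return merged, root
```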
3.2. Simulation of Merge Resolution. We now state the main result of this section. Theorem 3.3. eFrege + ∀red p-simulates M-Res.
For a false QBF Πϕ refuted by M-Res, the final set of merge maps represents a falsifying strategy for the universal player. The strategy can be asserted by a proposition S stating that all universal variables are equivalent to their strategy circuits. It then should be the case that if ϕ is true, S must be false, a fact that can be proved propositionally; formally, ϕ ⊢ ¬S.
To build up to this proof we can inductively find a local strategy S_i for each clause C_i that appears in an M-Res line (C_i, {M^u_i}) such that ϕ ⊢ S_i → C_i. Elegantly, S_i is really just a circuit expressing that each u ∈ U takes its value in M^u_i (if non-trivial). Extension variables are used to represent these local strategy circuits, and so the proof ends up as a propositional extended Frege proof.
The final part of the proof is the technique suggested by Chew [Che21], originally used by Beyersdorff et al. [BBCP20]: use universal reduction, starting from the negation of a universal strategy, to arrive at the empty clause.
Proof of Theorem 3.3. Definition of extension variables. We create new extension variables for each node in every non-trivial merge map appearing in the proof: s^u_i is created for the node i in merge map M^u_i. s^u_i is defined as the corresponding constant when i is a leaf node in M^u_i. If i is an internal node with M^u_i(i) = (x, b, c), then s^u_i is defined as (x ∧ s^u_b) ∨ (¬x ∧ s^u_c). Because x has to be before u in the prefix, s^u_i is always defined before universal variable u. Induction Hypothesis: it is easy for eFrege to prove ⋀_{u∈U_i} (u ↔ s^u_{r(u,i)}) → C_i from the axioms of ϕ, where r(u, i) is the index of the root node of merge map M^u_i, and U_i is the subset of U for which M^u_i is non-trivial. Base Case (Axiom): Suppose C_i is derived by axiom download of clause C. If u has a strategy, it is because it appears in the clause, and so u ↔ s^u_i, where s^u_i ↔ c_u for a constant c_u ∈ {⊤, ⊥} chosen to oppose the literal in C, so that C_i is just the simplified clause of C after replacing each universal u by its c_u. This is easy for eFrege to prove. Inductive Step (Resolution): Suppose C_i is derived by resolving C_j and C_k over pivot x. By the induction hypothesis we have ⋀_{u∈U_j} (u ↔ s^u_{r(u,j)}) → C_j and ⋀_{u∈U_k} (u ↔ s^u_{r(u,k)}) → C_k, where r(u, i) is the root index of the merge map for u on line i. We resolve these together.
To argue that ⋀_{u∈U_i} (u ↔ s^u_{r(u,i)}) → C_j, we prove by an inner induction that we can replace each u ↔ s^u_{r(u,j)} with u ↔ s^u_{r(u,i)} one by one. Induction Hypothesis: U_i is partitioned into W, the set of adjusted variables, and V, the set of variables yet to be adjusted; the hypothesis is (⋀_{w∈W} (w ↔ s^w_{r(w,i)}) ∧ ⋀_{v∈V} (v ↔ s^v_{r(v,j)})) → C_j. The base case, where W is empty, is the premise of the (outer) induction hypothesis, since U_j ⊆ U_i.
Inductive Step: Starting with (⋀_{w∈W} (w ↔ s^w_{r(w,i)}) ∧ ⋀_{v∈V} (v ↔ s^v_{r(v,j)})) → C_j, we pick some u ∈ V and move it to W; there are four cases. In (1), (u ↔ s^u_{r(u,j)}) is already (u ↔ s^u_{r(u,i)}), as r(u, j) = r(u, i). In (2) we are simply weakening the implication. In (3), where Select is used with an isomorphism f between the merge maps, we prove inductively from the leaves to the root that s^u_{f(t)} = s^u_t. Eventually, we end up with s^u_{f(r(u,j))} = s^u_{r(u,i)}. Then (u ↔ s^u_{r(u,j)}) can be replaced by (u ↔ s^u_{f(r(u,j))}). As f is an isomorphism, f(r(u, j)) = r(u, k), and because Select is used, r(u, k) = r(u, i); therefore we have (u ↔ s^u_{r(u,i)}). In (4) we need to replace s^u_{r(u,j)} with s^u_{r(u,i)}. For this we use the fact, from the definition of merging, that x → (s^u_{r(u,i)} ↔ s^u_{r(u,j)}), and so we have (s^u_{r(u,i)} ↔ s^u_{r(u,j)}) ∨ ¬x, where the ¬x is absorbed by the C_j on the right-hand side of the implication.

Finalise Inner Induction: At the end of this inner induction, V is empty and we have ⋀_{u∈U_i} (u ↔ s^u_{r(u,i)}) → C_j, and likewise for C_k.
Finalise Outer Induction: Note that we have done three nested inductions: on the nodes in the merge maps, on the universal variables, and then on the lines of the M-Res proof.
Nonetheless, this gives a quadratic-size eFrege proof in the number of nodes appearing in the proof. In M-Res the final line l consists of the empty clause and its merge maps. The induction gives us ⋀_{u∈U_l} (u ↔ s^u_{r(u,l)}) → ⊥; in other words, ⋁_{u∈U_l} ¬(u ↔ s^u_{r(u,l)}). Since each s^u_{r(u,l)} is defined left of u, we can apply ∀-reduction to the rightmost remaining universal variable u and remove its disjunct ¬(u ↔ s^u_{r(u,l)}). We continue this until we reach the empty disjunction.

Extended Frege+∀-Red p-simulates IR-calc
4.1. Expansion-Based Resolution Systems. The idea of an expansion-based QBF proof system is to utilise the semantic identity ∀u ϕ(u) = ϕ(0) ∧ ϕ(1) to replace universal quantifiers and their variables with propositional formulas. In ∀u∃x ϕ(u) = ∃x ϕ(0) ∧ ∃x ϕ(1), the x from ∃x ϕ(0) and the x from ∃x ϕ(1) are actually different variables. The way to deal with this while maintaining prenex normal form is to introduce annotations that distinguish one x from another. We will also introduce a third annotation * which will be used only for the purpose of short proofs. (1) An extended assignment is a partial mapping from the universal variables to {0, 1, *}. We denote an extended assignment by a set or list of individual replacements, i.e. 0/u, */v is an extended assignment. We often use set notation where appropriate. (2) An annotated clause is a clause where each literal is annotated by an extended assignment to universal variables. (3) For an extended assignment σ to universal variables, we write l^{restrict_l(σ)} to denote an annotated literal, where restrict_l(σ) = {c/u ∈ σ | lv(u) < lv(l)}. (4) Two (extended) assignments τ and µ are called contradictory if there exists a variable x ∈ dom(τ) ∩ dom(µ) with τ(x) ≠ µ(x).
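The restrict and contradictory notions translate directly into code; in this sketch (our own encoding) annotations are dicts from universal variables to values, and `level` gives each variable's position in the prefix:

```python
def restrict(sigma, lit_level, level):
    """restrict_l(sigma): keep only the replacements c/u with u quantified
    left of the literal l (level[u] < lit_level)."""
    return {u: c for u, c in sigma.items() if level[u] < lit_level}

def contradictory(tau, mu):
    """Two extended assignments are contradictory if they disagree on some
    commonly assigned variable."""
    return any(tau[x] != mu[x] for x in tau.keys() & mu.keys())

level = {'u1': 1, 'u2': 3}
print(restrict({'u1': 0, 'u2': 1}, 2, level))  # {'u1': 0}
```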
4.1.1. Definitions. The simplest way to use expansion would be to expand all universal quantifiers and list every annotated clause. The first expansion-based system we consider, ∀Exp+Res (Figure 2), has a mechanism to avoid this potential exponential explosion in some (but not all) cases. An annotated clause is created and then checked to see if it could be obtained from expansion. This way a refutation can just use an unsatisfiable core rather than all clauses from a fully expanded matrix. (In the axiom rule of Figure 2, C is a clause from the matrix and τ is a {0, 1} assignment to all universal variables.) The drawback of ∀Exp+Res is that one might end up repeating almost the same derivations over and over again if they vary only in changes to the annotation which make little difference in that part of the proof. This was used to find a lower bound for ∀Exp+Res on a family of formulas easy in the system Q-Res [JM15a]. To rectify this, IR-calc improved on
∀Exp+Res by allowing the annotations to be delayed in certain circumstances. Annotated clauses now have annotations with "gaps" where the value of the universal variable is yet to be set. When the values are set, there is the possibility of choosing both assignments without the need to rederive the annotated clauses with different annotations.
For α an assignment of the universal variables and C an annotated clause, we define inst(α, C) := {l^{restrict_l(τ • α)} | l^τ ∈ C}. The assignment α here gives values to unset annotation positions where one is not already defined. Because the same α is used throughout the clause, the previously unset values gain consistent annotations, but mixed annotations can occur due to already existing annotations. The definition of IR-calc is given in Figure 3; in its axiom rule, the notation 0/u for a literal u is shorthand for 0/x if u = x and 1/x if u = ¬x. Resolved variables have to match exactly, including that missing values are missing in both pivots. However, non-contradictory but different annotations may still be used for a later resolution step, after the instantiation rule is used to make the annotations match the annotations of the pivot.
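A sketch of inst (our own encoding: a clause is a set of (literal, annotation) pairs with annotations as frozensets of (variable, value) pairs, and `level` gives prefix positions; illustrative only):

```python
def inst(alpha, clause, level):
    """IR-calc instantiation inst(alpha, C): complete each literal's
    annotation with alpha (existing annotation entries win), then restrict
    to universal variables quantified left of the literal."""
    out = set()
    for lit, tau in clause:
        completed = {**alpha, **dict(tau)}  # tau takes precedence over alpha
        restricted = frozenset((u, c) for u, c in completed.items()
                               if level[u] < level[abs(lit)])
        out.add((lit, restricted))
    return out

# x (variable 3) with an empty annotation; instantiating by 0/u (variable 2)
# yields x^{0/u}.
print(inst({2: 0}, {(3, frozenset())}, {2: 2, 3: 3}))
```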

4.1.2. Local Strategies for ∀Exp+Res. The work of Schlaipfer et al. [SSWZ20] creates a conversion of each annotated clause C appearing in some ∀Exp+Res proof into a propositional formula con(C) defined in the original variables of ϕ (so without creating new annotated variables). A clause C appearing in a proof asserts that there is some (not necessarily winning) strategy for the universal player to force con(C) to be true under ϕ. The idea is that for each line C in an ∀Exp+Res refutation of Πϕ there is some local strategy S such that S ∧ ϕ → con(C). If C is empty, then S is a winning strategy for the universal player. Otherwise, S only wins if the existential player cooperates by playing according to one of the annotated literals l^τ ∈ C, that is, if the existential player promises to falsify the literal l whenever the assignment chosen by the universal player is consistent with the annotation τ. Suda and Gleiss showed that the resolution rule can then be understood as combining strategies so that the "promises" of the existential player corresponding to the pivot literals x^τ and ¬x^τ cancel out [SG18].
The extra work by Schlaipfer et al. is that the strategy circuits (for each u) can be constructed in polynomial time, and can be defined in variables left of u_i in the prefix. Let u_1, . . . , u_n be all universal variables in order. For each line in an ∀Exp+Res proof we have a strategy which we will here call S. For each u_i there is an extension variable Val^i_S, placed before u_i, that represents the value assigned to u_i by S (under an assignment of existential variables). Using these variables, we obtain a propositional formula representing the strategy as S = ⋀_{i=1}^{n} (u_i ↔ Val^i_S). Additionally, we define a conversion of annotated logic in ∀Exp+Res to propositional logic as follows. For annotations τ, let anno(τ) = ⋀_{1/u_i ∈ τ} u_i ∧ ⋀_{0/u_i ∈ τ} ¬u_i. We convert annotated literals as con(l^τ) = l ∧ anno(τ) and clauses as con(C) = ⋁_{l^τ ∈ C} con(l^τ).

4.2. Policies and Simulating IR-calc. The conversion needs to be revised for IR-calc. In particular, the variables not set in the annotations need to be understood. The solution is basically to treat unset as a third value, and work with local strategies that do not set all universal variables. Following Suda and Gleiss, we refer to such (partial) strategies as policies [SG18].
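The conversion con(·) for ∀Exp+Res can be evaluated semantically as follows (our own sketch; annotations are frozensets of (variable, value) pairs and literals are signed integers):

```python
def con_literal(lit, tau, assignment):
    """Truth value of con(l^tau) = l AND anno(tau) under a total assignment:
    the literal holds and every annotated universal has its annotated value."""
    lit_val = (lit > 0) == assignment[abs(lit)]
    anno_val = all(assignment[u] == bool(c) for u, c in tau)
    return lit_val and anno_val

def con_clause(clause, assignment):
    """con(C) is the disjunction of con(l^tau) over the annotated literals."""
    return any(con_literal(lit, tau, assignment) for lit, tau in clause)

# con(x^{1/u}) with u = x = True holds:
print(con_literal(2, frozenset({(1, 1)}), {1: True, 2: True}))  # True
```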
In practice, this requires new Set^i_S variables (left of u_i) which state that the ith universal variable is set by policy S. We include these variables in our encoding of policy S and let S = ⋀_{i=1}^{n} (Set^i_S → (u_i ↔ Val^i_S)). The conversion of annotations, literals and clauses also has to be changed. For an annotation τ of a literal of some quantified variable x, let anno_{x,S}(τ) = ⋀_{c/u_i ∈ τ} (Set^i_S ∧ (Val^i_S ↔ c)) ∧ ⋀_{u_i ∉ dom(τ), lv(u_i) < lv(x)} ¬Set^i_S. Let con_S(l^τ) = l ∧ anno_{x,S}(τ) and con_S(C) = ⋁_{l^τ ∈ C} con_S(l^τ), similarly to before; we just reference a particular policy S. This means that we again want S ∧ ϕ → con_S(C) for each line; note that the Set^i_S variables are defined in their own way. The most crucial part of simulating IR-calc is that after each application of the resolution rule we can obtain a working policy.
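One plausible reading of anno_{x,S}(τ) in code (our own hedged sketch, assuming the policy must set exactly the annotated universals left of the pivot x, to their annotated values):

```python
def anno_matches_policy(tau, set_s, val_s, universals_left):
    """Hedged reading of anno_{x,S}(tau): among the universals left of the
    pivot x, the policy sets exactly those annotated in tau, to the
    annotated values, and leaves the rest unset."""
    tau = dict(tau)
    return all((u in tau and set_s[u] and val_s[u] == bool(tau[u]))
               or (u not in tau and not set_s[u])
               for u in universals_left)

print(anno_matches_policy(frozenset({('u1', 1)}),
                          {'u1': True, 'u2': False},
                          {'u1': True, 'u2': False},
                          ['u1', 'u2']))  # True
```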
Lemma 4.3. Suppose there are policies L and R such that L ∧ ϕ → con_L(C_1 ∨ x^τ) and R ∧ ϕ → con_R(C_2 ∨ ¬x^τ) have short eFrege proofs. Then there is a policy B such that B ∧ ϕ → con_B(C_1 ∨ C_2) has a short eFrege proof.

The proof of the simulation of IR-calc relies on Lemma 4.3. To prove it we first have to give the precise definition of the policy B based on the policies L and R. Schlaipfer et al.'s work [SSWZ20] is crucially used to make sure that the policy B respects the prefix ordering.

4.2.1. Building the Strategy. We start by defining Val^i_B and Set^i_B on lower i values first. In particular, we will always start with 1 ≤ i ≤ m, where u_m is the rightmost universal variable still before the pivot variable x in the prefix. Starting from i = 0, the initial segments of anno_{x,L}(τ) and anno_{x,R}(τ) may eventually reach a point j where one is contradicted. Before this point, L and R are detailing the same strategy (they may differ on Val^i, but only when Set^i is false), so this part of B can effectively be played as both L and R simultaneously. Without loss of generality, as soon as L contradicts anno_{x,L}(τ), we know that con_L(x^τ) is not satisfied by L, and thus it makes sense for B to copy L at this point and for the rest of the strategy, as it will satisfy con_B(C_1). It is entirely possible that we reach i = m without contradicting either anno_{x,L}(τ) or anno_{x,R}(τ). Fortunately, after this point in the game we know the value the existential player has chosen for x. We can use the value of x to decide whether to play B as L (if x is true) or as R (if x is false).
To build the circuitry for Val^i_B and Set^i_B we will introduce other circuits that act as intermediates. First we use constants Set^i_τ and Val^i_τ that make anno_{x,S}(τ) equivalent to ⋀_{i=1}^{m} ((Set^i_S ↔ Set^i_τ) ∧ (Set^i_τ → (Val^i_S ↔ Val^i_τ))). This mainly makes our notation easier. Next we define circuits Eq^i that represent two strategies being equivalent up to the ith universal variable. This is a generalisation of what was seen in the local strategy extraction for ∀Exp+Res [SSWZ20].
We specifically use this for trigger variables that tell us which one of L and R differed from τ first.
Dif^0_L := 0, and Dif^i_L := Dif^{i-1}_L ∨ (Eq^{i-1}_{L=τ} ∧ Eq^{i-1}_{R=τ} ∧ ¬((Set^i_L ↔ Set^i_τ) ∧ (Set^i_τ → (Val^i_L ↔ Val^i_τ)))); Dif^i_R is defined symmetrically. Dif^i_L and Dif^i_R can both be true, but only if the strategies start to differ from τ at the same point.
Using these auxiliary variables, we can define a bottom policy B that chooses between the left policy L and the right policy R as indicated above, following Suda and Gleiss's Combine operation [SG18]. If one of the policies is inconsistent with the annotation τ (this includes setting a variable that is not set by τ), policy B follows whichever policy is inconsistent first, picking L if both policies start deviating at the same time. If both policies are consistent with τ, policy B follows R if the pivot x is false, otherwise it follows L. Definition 4.4 (Definition of resolvent policy for IR-calc). For 1 ≤ i ≤ m, Set^i_B := (Dif^i_L ∧ Set^i_L) ∨ (¬Dif^i_L ∧ Dif^i_R ∧ Set^i_R) ∨ (¬Dif^i_L ∧ ¬Dif^i_R ∧ Set^i_τ), with Val^i_B defined in the same way with Val in place of Set; for i > m, Set^i_B := (B_L ∧ Set^i_L) ∨ (B_R ∧ Set^i_R), and similarly for Val^i_B, defaulting to 0 otherwise. We will now define the variables B_L and B_R. These say that B is choosing L or R, respectively. These variables can appear rightmost in the prefix, as they will be removed before reduction takes place. The purpose of B_L (resp. B_R) is that con_B becomes the same as con_L (resp. con_R).
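The choice B makes can be sketched game-semantically (our own simplification of the Combine idea; moves and annotation entries are (set, value) pairs per universal variable before the pivot):

```python
def deviates(move, t):
    """A move deviates from the annotation entry t = (set, value) if it
    disagrees on being set, or is set to the wrong value; values of unset
    variables are "don't cares"."""
    set_m, val_m = move
    set_t, val_t = t
    return set_m != set_t or (set_t and val_m != val_t)

def combine_choice(tau, moves_l, moves_r, pivot_value):
    """Return which policy B copies: whichever of L, R first deviates from
    tau (ties go to L); if neither deviates before the pivot, the pivot's
    value decides (L if the pivot is true, else R)."""
    for t, l, r in zip(tau, moves_l, moves_r):
        if deviates(l, t):
            return 'L'
        if deviates(r, t):
            return 'R'
    return 'L' if pivot_value else 'R'

# L deviates at the first universal, so B copies L regardless of the pivot:
print(combine_choice([(True, 1)], [(False, 0)], [(True, 1)], False))  # 'L'
```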
The important points are that B is set up so that it either takes its values from L or from R, i.e. B → B_L ∨ B_R; specifically, we need that whenever the propositional formula anno_{x,B}(τ) is satisfied, B = B_L when x, and B = B_R when ¬x. The variables Set^i_B and Val^i_B that comprise the policy are carefully constructed to come before u_i. A number of technical lemmas involving all these definitions are necessary for the simulation.
Lemma 4.5. For 0 < j ≤ m, the following propositions have short derivations in Extended Frege: Dif^j_L → ⋁_{i=1}^{j} (Dif^i_L ∧ ¬Dif^{i-1}_L), and likewise for Dif^j_R, ¬Eq^j_{L=τ} and ¬Eq^j_{R=τ}. Note that ¬Dif^0_L, ¬Dif^0_R, Eq^0_{L=τ} and Eq^0_{R=τ} are all true. The proofs for Dif^j_R, ¬Eq^j_{L=τ} and ¬Eq^j_{R=τ} are identical modulo the variable names.

Lemma 4.6. For 0 ≤ i ≤ j ≤ m, the following propositions, which describe the monotonicity of Dif, have short derivations in Extended Frege: Dif^i_L → Dif^j_L, Dif^i_R → Dif^j_R and ¬Eq^i_{f=g} → ¬Eq^j_{f=g}.

Proof. The arguments for Dif_L and Dif_R are analogous, so we give the one for Eq. Induction Hypothesis on j: ¬Eq^i_{f=g} → ¬Eq^j_{f=g} has an O(j)-size proof. Base Case j = i: ¬Eq^i_{f=g} → ¬Eq^i_{f=g} is a tautology that Frege can handle. Inductive Step j + 1: Eq^{j+1}_{f=g} := Eq^j_{f=g} ∧ A, where A is some expression; therefore in all cases ¬Eq^j_{f=g} → ¬Eq^{j+1}_{f=g} is a straightforward corollary requiring a constant number of additional Frege steps. Using the induction hypothesis ¬Eq^i_{f=g} → ¬Eq^j_{f=g}, we get ¬Eq^i_{f=g} → ¬Eq^{j+1}_{f=g}.
Lemma 4.7. For 0 ≤ i ≤ j ≤ m, the following propositions, which describe the relationships between the different extension variables, have short derivations in Extended Frege: Eq^i_{L=τ} → ¬Dif^i_L, and Dif^i_L ∧ ¬Dif^{i-1}_L → Eq^{i-1}_{R=τ} (for i > 0), along with the versions with L and R swapped.

Proof. Base Case: Dif^0_L is defined as 0, so ¬Dif^0_L is true and trivially implied by Eq^0_{L=τ}. This can be shown in a constant-size Frege proof.
Again, using the induction hypothesis, Eq^{i+1}_{L=τ} now implies ¬Dif^i_L, (Set^{i+1}_L ↔ Set^{i+1}_τ) and (Set^{i+1}_τ → (Val^{i+1}_L ↔ Val^{i+1}_τ)), which is enough for ¬Dif^{i+1}_L. Therefore, using the induction hypothesis, Eq^{i+1}_{L=τ} → ¬Dif^{i+1}_L. This can be shown in a constant number of Frege steps. Similarly for R.
The formulas Dif^i_L ∧ ¬Dif^{i-1}_L → Eq^{i-1}_{R=τ} are simple corollaries of the inductive definition of Dif^i_L, combined with the Eq monotonicity from Lemma 4.6. Similarly if we swap L and R.
Lemma 4.8. For any 0 ≤ i ≤ m, the following propositions are true and have short Extended Frege proofs: Dif^i_L → ¬anno_{x,L}(τ) and Dif^i_R → ¬anno_{x,R}(τ).
Proof. We primarily use the disjunction from Lemma 4.5: Dif^i_L says that the difference triggers at some point j ≤ i, which we can represent in a proposition provable in Extended Frege. We want to show that this also triggers the negation of anno_{x,L}(τ). If L differs from τ on a Set^j_L value, we contradict anno_{x,L}(τ) in one of two ways: either Set^j_τ ∧ ¬Set^j_L or ¬Set^j_τ ∧ Set^j_L; if instead both are set but the values differ, the conjunct (Val^j_L ↔ Val^j_τ) is contradicted. Each disjunct admits a constant-size Frege derivation. When put together with the big disjunction, this lends itself to a linear-size (in m) Frege derivation, which is also symmetric for R.

Lemma 4.9. For any 1 ≤ j ≤ m, the following propositions are true and have short Extended Frege proofs: ¬Dif^j_L ∧ ¬Dif^j_R → Eq^j_{L=τ} and ¬Dif^j_L ∧ ¬Dif^j_R → Eq^j_{R=τ}, along with corollaries for Set_B and Val_B.
¬Eq^{j-1}_{R=τ} and ¬Eq^{j-1}_{L=τ} are the problematic terms here, respectively, but they can be removed via induction to eventually obtain ¬Dif^j_L ∧ ¬Dif^j_R → Eq^j_{L=τ} and ¬Dif^j_L ∧ ¬Dif^j_R → Eq^j_{R=τ}. The remaining implications are corollaries of these and rely on the definitions of Eq, Set_B and Val_B. Induction hypothesis on j: ¬Dif^j_L ∧ ¬Dif^j_R → Eq^j_{L=τ} and ¬Dif^j_L ∧ ¬Dif^j_R → Eq^j_{R=τ}. Base case j = 0: Eq^0_{L=τ} and Eq^0_{R=τ} are both true by definition, so the implications hold automatically.
So we get ¬Eq^j_{L=τ} → ¬Eq^{j-1}_{L=τ} ∨ Dif^j_L ∨ ¬Eq^{j-1}_{R=τ}, which, using the induction hypothesis to remove ¬Eq^{j-1}_{L=τ} and ¬Eq^{j-1}_{R=τ}, gives us ¬Eq^j_{L=τ} → Dif^{j-1}_R ∨ Dif^{j-1}_L. This can be weakened to ¬Eq^j_{L=τ} → Dif^j_R ∨ Dif^j_L, which is equivalent to ¬Dif^j_L ∧ ¬Dif^j_R → Eq^j_{L=τ}. The argument is analogous when swapping L and R.
We can obtain the remaining propositions as corollaries by using the definition of Eq.
Lemma 4.10. For any 0 ≤ i ≤ m the following propositions are true and have short Extended Frege proofs.
Proof. We assume the definition of B and show that the proposition that determines B is falsified. The first thing to note is that we only need to consider Dif^i_L ∧ ¬Dif^{i-1}_L, as Dif^{i-1}_L already falsifies our proposition. Next we show that ¬Dif^{i-1}_R is forced to be true in this situation. To do this we need Lemma 4.7 for Dif; we break the argument down into three cases, each handled in a polynomial number of Frege lines.
Now suppose we want to prove the second proposition. Dif^{i-1}_R is enough to satisfy the formula, so the case we need to explore is when Dif^{i-1}_R is false. We can show Eq^{i-1}_{L=τ} using Lemma 4.9. This allows us to examine just the part where Dif_R is triggered to be true by definition. We look at the three ways the term (¬Set^i_τ ∧ ¬Set^i_L ∧ Set^i_R) can be falsified and show that all parts of the remaining term must be satisfied under these assumptions, in a polynomial number of Frege lines.
Lemma 4.11. The following propositions are true and have short Extended Frege proofs.
Proof. We use the disjunction Dif^m_L → ⋁_{j=1}^m (Dif^j_L ∧ ¬Dif^{j-1}_L) from Lemma 4.5, so there is some j where this is the case. The index i can then be examined in cases, each of which means that B and L are consistent for those i, as proven in Lemma 4.9. Each of these uses a polynomial number of Frege steps plus applications of previous lemmas (each of which consists of a polynomial number of Frege steps).
Lemma 4.12. The following propositions are true and have short Extended Frege proofs.
R whenever Set^i_B is also true. Extended Frege can prove the O(m) propositions that show these equalities for 1 ≤ i ≤ m.
For i > m, this holds by definition.

Lemma 4.13. The following proposition is true and has a short Extended Frege proof.
Proof. This roughly says that B is either played entirely as L or played entirely as R. We can prove it by combining Lemmas 4.11 and 4.12; it is essentially a case analysis in formal form.
Lemma 4.14. The following propositions are true and have short Extended Frege proofs.
Dif^m_L ∧ ¬Dif^m_R can be removed from the left-hand side. This is where we use L ∧ Dif^i_L → ¬anno_{x,L}(τ) and R ∧ Dif^i_R → ¬anno_{x,R}(τ) from Lemma 4.8. These can be simplified, and putting them together allows us to remove B_L and B_R, deriving B ∧ anno_{x,B}(τ) → con_B(C_1 ∨ C_2), which can be rewritten as B → con_B(C_1 ∨ C_2) ∨ ¬anno_{x,B}(τ). We now have the two formulas we need.

Theorem 4.15. eFrege + ∀red p-simulates IR-calc.
Proof. We prove by induction that every annotated clause C appearing in an IR-calc proof has a local policy S such that ϕ ⊢_eFrege S → con_S(C), and that this can be done in a polynomial-size proof.
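The invariant S → con_S(C) can be prototyped as a semantic check. The following is an illustrative sketch with our own simplified data layout (not the paper's definitions): a policy assigns each universal index a (set, value) pair, and an annotated literal carries a partial assignment τ; the annotation anno_{x,S}(τ) holds when the policy matches τ on every universal index left of x.

```python
def anno_holds(policy, tau, x_pos):
    """IR-calc style annotation check: `policy` maps universal index i to a
    (is_set, value) pair; `tau` is a partial assignment (index -> 0/1).
    Only indices left of x_pos, the position of the existential literal, matter."""
    for i, (is_set, value) in policy.items():
        if i >= x_pos:
            continue  # universals right of x are unconstrained by anno
        if i in tau:
            if not (is_set and value == tau[i]):
                return False  # tau demands Set_i with the matching value
        elif is_set:
            return False      # tau absent on u_i demands not Set_i
    return True
```

Under this reading, a converted literal x^τ contributes x ∧ anno_{x,S}(τ), so a policy that falsifies the annotation of one literal must satisfy some other literal of the converted clause.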
Axiom: Suppose C ∈ ϕ and D = inst(C, τ) for a partial annotation τ. We construct a policy B such that B → con_B(D) can be derived from C.
Instantiation: Suppose we have an instantiation step for C on a single universal variable u_i using instantiation 0/u_i, so the new annotated clause is D = inst(C, 0/u_i). From the induction hypothesis T → con_T(C) we develop B such that B → con_B(D).
T ∧ Set^j_T becomes Val^j_T ∨ ¬Set^j_T for instantiation by 1/u_j. Either case means B satisfies the matching annotations anno as T does in the converted clauses con_B(C) and con_B(D), proving the rule as an inductive step.
Resolution: See Lemma 4.3. Contradiction: At the end of the proof we have T → con_T(⊥). T is a policy, so we turn it into a full strategy B by completing the missing values: for each i, Set^i_B = 1, with unset values defaulting to 0. Effectively this instantiates ⊥ by the assignment that sets everything to 0, and we can argue that B → con_B(⊥), although con_B(⊥) is just the empty clause. So we have ¬B. But ¬B is just ⋁_{i=1}^n (u_i ⊕ Val^i_B). Furthermore, just as in Schlaipfer et al.'s work [SSWZ20], we have been careful with the definitions of the extension variables Val^i_B so that they are left of u_i in the prefix. In eFrege + ∀red we can now use the reduction rule (this is the first time we use it). We give an inductive proof of ⋁_{i=1}^{n-k} (u_i ⊕ Val^i_B) for increasing k, eventually leaving us with the empty clause; this is essentially where we use the ∀-red rule. Since we already have ⋁_{i=1}^n (u_i ⊕ Val^i_B), we have the base case, and we only need to show the inductive step.
From ⋁_{i=1}^{n+1-k} (u_i ⊕ Val^i_B) we derive, via reduction on u_{n+1-k}, both ⋁_{i=1}^{n-k} (u_i ⊕ Val^i_B) ∨ Val^{n+1-k}_B and ⋁_{i=1}^{n-k} (u_i ⊕ Val^i_B) ∨ ¬Val^{n+1-k}_B. We can resolve the two to derive ⋁_{i=1}^{n-k} (u_i ⊕ Val^i_B). We continue this until we reach the empty disjunction.
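This peeling argument can be mirrored symbolically. In the following illustrative sketch (names and encoding are ours, not the paper's), a clause is a list of XOR literals u_i ⊕ Val^i_B with the last pair rightmost in the prefix; one round substitutes both truth values for the rightmost universal and resolves the two results on the extension variable.

```python
def reduce_and_resolve(clause):
    """`clause`: list of pairs (u, val) standing for the disjunction of the XOR
    literals u XOR val.  Universal reduction substitutes u := 0 and u := 1 in
    the last literal, giving one line ending in val and one ending in NOT val;
    resolving these two lines on val cancels the last disjunct."""
    *rest, (u, val) = clause
    line_u0 = rest + [("pos", val)]  # u := 0 turns u XOR val into val
    line_u1 = rest + [("neg", val)]  # u := 1 turns u XOR val into NOT val
    assert line_u0[:-1] == line_u1[:-1]  # identical apart from the pivot
    return rest

clause = [(f"u{i}", f"Val{i}") for i in range(1, 4)]
while clause:
    clause = reduce_and_resolve(clause)
assert clause == []  # the empty disjunction is reached
```

Each round removes exactly one disjunct, so n rounds reach the empty clause, matching the linear number of reduction and resolution steps in the proof.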
While this can be proven as a corollary of the simulation of IR-calc, a more direct simulation can be achieved by defining the resolvent strategy with the Set^i variables removed (i.e. by considering them as always true).
5. Extended Frege + ∀-Red p-simulates IRM-calc

5.1. IRM-calc. IRM-calc was designed to compress annotated literals in clauses in order to simulate LD-Q-Res [BCJ14]. Like that system it uses the * symbol, but since universal literals do not appear in an annotated clause, the * value is added to the annotations: 0/u, 1/u and */u are the first three possibilities in an extended annotation (the fourth case is when u does not appear in the annotation at all).
The axiom and instantiation rules are as in IR-calc (Figure 3).
dom(τ), dom(ξ) and dom(σ) are mutually disjoint. τ is a partial assignment to the universal variables with codomain(τ) = {0, 1}. σ and ξ are extended partial assignments with codomain(σ) = codomain(ξ) = {0, 1, *}. The rules of IRM-calc, given in Figure 4, become more complicated as a result of the */u annotations. In particular, resolution is no longer done between exactly matching pivots; instead, matching is done internally in the resolution steps. A */u annotation represents an ambiguous annotation, so a pair of pivot literals that each have a */u annotation may not actually match on u. The solution is to allow compatibility where one pivot has a */u annotation and the other has no annotation on u. The idea is that the blank annotation is instantiated on the fly with the correct function for */u so that the annotations truly match. The resolvent takes this into account by joining the instantiated clauses minus the pivot.
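The compatibility condition can be sketched concretely. The following is our own encoding, not the paper's notation: an annotation maps universal variables to 0, 1 or '*'; two pivot annotations match when they agree on shared variables, except that '*' on one side may only be paired with an absent entry on the other, which is then instantiated on the fly.

```python
def merge_pivot_annotations(a, b):
    """Return the merged annotation if pivots annotated with a and b are
    compatible in the IRM-calc sense, else None.
    a, b: dicts mapping universal variables to 0, 1 or '*'."""
    merged = dict(a)
    for u, v in b.items():
        if u not in merged:
            merged[u] = v              # blank entry: instantiate on the fly
        elif merged[u] == "*" or v == "*":
            return None                # '*' against a present entry: no match
        elif merged[u] != v:
            return None                # constants disagree
    return merged
```

Note that two '*' entries on the same variable do not match, reflecting that each '*' may stand for a different function.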
Additionally, in order to introduce * annotations, a merge rule is used. It is in IRM-calc that the positive Set literals introduced in the simulation of IR-calc become useful. In most ways Set^i_S asserts the same thing as */u_i: that u_i is given a value, but this value does not have to be specified.

5.2. Policies and Simulating IRM-calc.
5.2.1. Conversion. The first major change from IR-calc is that while anno_S worked on three values in IR-calc, in IRM-calc we effectively work with four values: Set^i_S, ¬Set^i_S, Set^i_S ∧ u_i and Set^i_S ∧ ¬u_i. The bare Set^i_S is the new addition, deliberately ambiguous as to whether u_i is true or false. Readers familiar with the * used in IRM-calc may see why Set^i_S works as a conversion of */u_i: Set^i_S just says that our policy has given u_i a value, but it may be different values in different circumstances.
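A small sketch of this four-valued conversion, in our own encoding (the literal tuples and value conventions are ours, not the paper's): each extended annotation value for u_i maps to a condition over the variables Set_i and u_i.

```python
def convert_annotation_value(value, i):
    """Map the extended annotation value of u_i to literals over Set_i and u_i.
    `value` is 0, 1, '*' or None (u_i absent from the annotation); each literal
    is a (name, index, polarity) tuple."""
    if value is None:
        return [("Set", i, False)]                       # not Set_i: no value issued
    if value == "*":
        return [("Set", i, True)]                        # Set_i: some value, unspecified
    if value in (0, 1):
        return [("Set", i, True), ("u", i, value == 1)]  # Set_i and u_i / not u_i
    raise ValueError(value)
```

The conjunction of these conditions over all u_i left of an existential literal x plays the role of anno_{x,S} in the conversion.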
5.2.2. Policies. As in IR-calc, most of the work is done in the IRM-calc resolution steps, although here it is even more complicated. A resolution step in IRM-calc has two parts: first C_1 ∨ ¬x^{τ⊔σ} and C_2 ∨ x^{τ⊔ξ} are both instantiated (by * in some cases), then they are resolved on a matching pivot. We simplify the resolution steps so that σ and ξ contain only * annotations; for the other, constant annotations that would normally be found in these steps, suppose we have already instantiated them on the other side so that they now appear in τ (this does not affect the resolvent).
Again we assume that there are policies L and R such that L → con_L(C_1 ∨ ¬x^{τ⊔σ}) and R → con_R(C_2 ∨ x^{τ⊔ξ}). We know that if L falsifies anno_{x,L}(τ ⊔ σ) then con_L(C_1) is satisfied, and likewise if R falsifies anno_{x,R}(τ ⊔ ξ) then con_R(C_2) is satisfied. These are the safest options; however, this leaves the cases where L satisfies anno_{x,L}(τ ⊔ σ) and R satisfies anno_{x,R}(τ ⊔ ξ) but L and R are not equal. This happens either when Set^i_L and ¬Set^i_R both occur for */u_i ∈ σ, or when ¬Set^i_L and Set^i_R both occur for */u_i ∈ ξ. This would cause an issue if B had to choose between L and R to satisfy con_B(C_1 ∨ C_2): previously, in IR-calc, we were able to remain agreeable to both L and R and defer our choice until later in the prefix (which could be necessary). Fortunately, we are not trying to satisfy con_B(C_1 ∨ C_2) but con_B(inst(ξ, C_1) ∨ inst(σ, C_2)), so we have to choose between a policy that will satisfy con_B(inst(ξ, C_1)) and a policy that will satisfy con_B(inst(σ, C_2)). This is similar to doing the internal instantiation steps separately from the resolution steps, but the instantiation steps need slightly more care, as they instantiate by functions rather than constants. In practice this means that, in addition to L, we will occasionally borrow values from R, and vice versa. By borrowing values from the opposite policy we obtain a working new policy that does not have to choose between left and right any earlier than it would have for IR-calc.

5.2.3. Difference and Equivalence Variables. We update our functions to take the four values into account. Note that here again we assume σ and ξ contain only * annotations.

5.2.4. Policy Variables. We define the policy variables Val^i_B and Set^i_B based on a number of cases; in all cases Val^i_B and Set^i_B are defined on variables left of u_i. The idea for the policy B is to stick to τ ⊔ σ ⊔ ξ until either L or R differs, then commit to whichever policy is differing (defaulting to L when both start to differ at the same time). However, there are cases where a Set_L or Set_R value may differ from τ ⊔ σ ⊔ ξ but should not be counted as a true difference for L or R. An example is when */u_i ∈ σ and Set_R is false: we should not commit to R here, but instead borrow the set-and-value pair from L for this case. Once we commit to L or R we must still make sure B satisfies the instantiated resolvent, so in a few cases we force Set_B to be true and set Val_B to false. Finally, if no difference is found along τ ⊔ σ ⊔ ξ, we have to commit to either L or R depending on the value of the existential literal x.
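The committing behaviour can be prototyped in a few lines. This is an illustrative sketch under our own simplified encoding (plain value sequences per universal index), deliberately ignoring the borrowing subtleties for Set values described above: follow the target assignment until L or R first deviates, then copy the deviating policy from that point on, preferring L on ties.

```python
def compose_policy(target, left, right):
    """Follow `target` (the assignment from tau, sigma and xi) index by index
    until `left` or `right` first deviates from it, then commit to the
    deviating side for all remaining indices; `left` wins when both deviate."""
    out, committed = [], None
    for t, l, r in zip(target, left, right):
        if committed is None:
            if l != t:
                committed = "L"
            elif r != t:
                committed = "R"
        out.append(l if committed == "L" else r if committed == "R" else t)
    return out
```

The commitment point depends only on values already seen, mirroring the requirement that Val^i_B and Set^i_B are defined on variables left of u_i.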
Lemma 5.1. For 0 < j ≤ m the following propositions have short derivations in Extended Frege. Proof. The proof of Lemma 4.5 still works despite the modifications to the definitions.
Lemma 5.2. For 0 ≤ i ≤ j ≤ m the following propositions, which describe the monotonicity of Dif and Eq, have short derivations in Extended Frege. Proof. The proofs of Lemma 4.6 still work despite the modifications to the definitions.
In every case Dif^i_L = Dif^{i-1}_L ∨ (Eq^{i-1}_{R=τ⊔ξ} ∧ A), where A is a formula dependent on the domain status of u_i. Hence ¬Dif^{i-1}_L ∧ Dif^i_L means that Eq^{i-1}_{R=τ⊔ξ} must be true, so we have Dif^i_L ∧ ¬Dif^{i-1}_L → Eq^{i-1}_{R=τ⊔ξ} in a constant-size eFrege proof.
Combining the above, we have a linear-size proof of Dif^i_L ∧ ¬Dif^{i-1}_L → ¬Dif^{i-1}_R. The same proofs work symmetrically for R.
Lemma 5.4. For any 0 ≤ i ≤ m the following propositions are true and have short Extended Frege proofs.
But as anno_{x,L}(τ ⊔ σ) insists on Set^i_L ∧ u_i for the relevant literal in the clause, the cases go through. The proof size will be O(wm), where w is the width (the number of literals) of inst(ξ, C_1) ⊔ inst(σ, C_2) and m is the number of universal variables in the prefix. We detail all cases for L and R in the Appendix.
Taking the conjunction over all i, we can show B ∧ ¬Dif^m_L ∧ ¬Dif^m_R ∧ x → L. We can then derive (L → l) → (B ∧ ¬Dif^m_L ∧ ¬Dif^m_R ∧ x → l) for each existential literal l. We still have to handle (L → anno_{l,L}(α)) for l's annotation α. We can do this via cases, but we have already covered all cases where Set^i_L is true. We next handle the cases where ¬Set^i_L is permitted, by simply inspecting the lines in Lemma 5.5. Recall that ¬Dif^m_S → ¬Dif^i_S for S ∈ {L, R} and 1 ≤ i ≤ m. We now know that if L satisfies anno_{l,L}(α), then ¬Dif^m_L ∧ ¬Dif^m_R ∧ x forces B to satisfy the instantiated annotation, and we can prove this in eFrege; for every literal l^α ∈ C_1 we can assemble the corresponding implication, and symmetrically we can make the derivation for C_2. The proofs here are polynomial: we argue for each literal in the clause and for each universal variable, and we also invoke Lemmas 5.5 and 5.2, which have linear-size proofs. So we have cubic-size proofs in the worst case, or more specifically O(wn^2), where w is the number of literals in the derived clause inst(ξ, C_1) ∪ inst(σ, C_2).
We can then resolve on Dif^m_L and Dif^m_R.

Theorem 5.9. eFrege + ∀red p-simulates IRM-calc.
QU-Res [VG12] lifts the restriction that the resolution pivot must be an existential variable and allows resolution on universal variables. LQU^+-Res [BWJ14] extends LD-Q-Res by allowing both short- and long-distance resolution pivots to be universal; however, the pivot is never a merged literal z*. LQU^+-Res encapsulates Q-Res, LD-Q-Res and QU-Res.
We consider two settings of the Res-rule.

6.2. Conversion to Propositional Logic and Simulation. LQU^+-Res and IRM-calc are mutually incomparable in terms of proof strength; however, they share enough similarities to get the simulation working. Once again we can use Set^i_S variables to represent a u*_i, and ¬Set^i_S to represent that policy S chooses not to issue a value to u_i. For any set of universal variables Y, let anno_{x,S}(Y) = ⋀_{u_j ∉ Y, u_j < x} ¬Set^j_S ∧ ⋀_{u_j ∈ Y, u_j < x} Set^j_S. Note that we do not really need to add polarities to the annotations; these are taken into account by the clause literals. Literals u and ū do not need to be assigned by the policy; they are now treated as a consequence of the CNF. Because they can be resolved, we treat them like existential variables in the conversion, and con_{S,C}(u_i) is defined accordingly for universal variable u_i. We reserve Set^j_S for starred literals, as they cannot be removed. For existential literal x, con_{S,C}(x) = x ∧ anno_{x,S}({u | u* ∈ C}). Finally, con_{S,C}(u*) = ⊥, because we do not treat u* as a literal but as part of the "annotation" of literals right of it. Also, u* cannot be resolved, but it is automatically reduced when no more literals are to its right. For clauses in LQU^+-Res, we let con_S(C) = ⋁_{l ∈ C} con_{S,C}(l). In summary, in comparison to IRM-calc the conversion now includes universal variables and gives them annotations, but removes polarities from the annotations. Policies remain structured as they were for IR-calc, with extension variables Val^i_S and Set^i_S, where S = ⋀_{i=1}^n (Set^i_S → (u_i ↔ Val^i_S)). We will once again focus on the resolution case, using the notation given in Figure 5.

Observation 6.1. V_1 ∩ V_2 = ∅ by definition of resolution in LQU^+-Res (see Figure 5).
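The annotation function anno_{x,S}(Y) defined above is straightforward to compute. A minimal sketch in our own encoding (the prefix is a list of universal variables in quantification order, and x_pos is the number of universals left of x; both conventions are ours):

```python
def anno(x_pos, prefix, star_set):
    """anno_{x,S}(Y): for every universal u_j left of position x_pos in the
    prefix, emit Set_j positively if u_j is in the star-set Y, else negatively."""
    return [("Set", u, u in star_set) for u in prefix[:x_pos]]
```

Since polarities are carried by the clause literals themselves, only the Set variables appear here, unlike in the IR-calc and IRM-calc conversions.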
We use L to denote the local policy of C_1 ∪ U_1 ∪ {¬x}, R to denote the local policy of C_2 ∪ U_2 ∪ {x}, and B is intended to be the local policy for the resolvent. One may notice that there are more cases for i > m than in previous sections; this is because u and ¬u become u* and end up joining the annotations and policies. It should also be pointed out that some cases result in (0, 1) rather than (1, 1); this is simply a matter of using 0 as the default value when some Set has to be made.

Lemma 6.2. For 0 < j ≤ m the following propositions have short derivations in Extended Frege. Proof. The proof of Lemma 4.5 still works despite the modifications to the definitions.
Lemma 6.3. For 0 ≤ i ≤ j ≤ m the following propositions, which describe the monotonicity of Dif and Eq, have short derivations in Extended Frege. Proof. The proofs of Lemma 4.6 still work despite the modifications to the definitions.
Lemma 6.4. For any 0 ≤ i ≤ m the following propositions are true and have short Extended Frege proofs.
insists on Set^i_L. This is done similarly for R.

Lemma 6.5. For any 0 ≤ j ≤ m the following propositions are true and have a short Extended Frege proof.
Lemma 6.6. The following propositions are true and have short Extended Frege proofs, given (L → con_L(C_1 ∪ U_1 ∪ {¬x})) and (R → con_R(C_2 ∪ U_2 ∪ {x})), which we join by conjunction. We can do similarly for R: one can cut out L using B ∧ Dif^m_L → L, and removing (L → con_L(C_1 ∪ U_1 ∪ {¬x})) uses that premise.
is true, and as S will match T, S → con_S(C). Suppose Set^i_T and con_T(C) are both false. If S is true, then u_i is false by construction. Moreover, since S agrees with T on every variable except u_i, and T does not set u_i, T must be true as well. But since con_T(C) is false, we must have T → ¬Set^i_T ∧ u_i; in particular, u_i must be true, a contradiction. We conclude that the implication S → con_S(C) holds in this case. Reduction (u*_i): If T → con_T(C ∨ u*_i) and we reduce u*_i, we need to define the strategy S so that S → con_S(C). Since u*_i is the rightmost literal in the clause, con_T(C ∨ u*_i) = con_T(C), so we define S the same way as T. Resolution: See Lemma 6.8. Contradiction: Just as in IR-calc, we have to give a complete assignment to the missing values in the policy. We then simply have the negation of the strategy, to which we can apply the same technique to reduce to the empty clause.

Conclusion
Our work reconciles many different QBF proof techniques under the single system eFrege + ∀red.Although eFrege + ∀red itself is likely not a good system for efficient proof checking, our results have implications for other systems that are more promising in this regard, such as QRAT, which inherits these simulations.In particular, QRAT's simulation of ∀Exp+Res is upgraded to a simulation of IRM-calc, and we do not even require the extended universal reduction rule.Existing QRAT checkers can be used to verify converted eFrege + ∀red proofs.Further, extended QU-resolution is polynomially equivalent to eFrege + ∀red [Che21], and has previously been proposed as a system for unified QBF proof checking [JBS + 07].Since our simulations split off propositional inference from a standardised reduction part at the end, another option is to use (highly efficient) propositional proof checkers instead.Our simulations use many extension variables that are known to negatively impact the checking time of existing tools such as DRAT-trim, but one may hope that they can be refined to become more efficient in this regard.
There are other proof systems, particularly ones using dependency schemes, such as Q(D rrs )-Res and LD-Q(D rrs )-Res that have strategy extraction [PSS19b].Local strategy extraction and ultimately a simulation by eFrege+ ∀red seem likely for these systems, whether it can be proved directly or by generalising the simulation results from this paper.

Figure 1: Hasse diagram for the polynomial simulation order of QBF calculi [BCJ19, BWJ14, BBCP20, HSB17, Che21, BJ12, VG12, CH22, BBM18]. In this diagram, all proof systems below the first line are known to have strategy extraction, and all below the second line have an exponential lower bound. G and QRAT have strategy extraction if and only if P = PSPACE.
a basic tautology with a constant-size Frege proof; Dif^0_L is false by definition, so Frege can assemble Dif^1_L → Dif^1_L ∧ ¬Dif^0_L. Inductive step j + 1: ¬Dif^j_L ∨ Dif^j_L and Dif^{j+1}_L → Dif^{j+1}_L are tautologies with constant-size Frege proofs. Putting them together we get Dif^{j+1}_L → Dif^{j+1}_L ∧ (¬Dif^j_L ∨ Dif^j_L), which we weaken to Dif^{j+1}_L → (Dif^{j+1}_L ∧ ¬Dif^j_L) ∨ Dif^j_L. Using the induction hypothesis Dif^j_L → ⋁_{i=1}^j (Dif^i_L ∧ ¬Dif^{i-1}_L), we can turn this into Dif^{j+1}_L → ⋁_{i=1}^{j+1} (Dif^i_L ∧ ¬Dif^{i-1}_L). Base case j = i: Dif^i_L → Dif^i_L is a tautology with a constant-size Frege proof. Inductive step j + 1: Dif^{j+1}_L := Dif^j_L ∨ A, where A is an expression. Therefore, in all cases Dif^j_L → Dif^{j+1}_L is a straightforward corollary requiring a constant number of additional Frege steps. Using the induction hypothesis Dif^i_L → Dif^j_L, we get Dif^i_L → Dif^{j+1}_L. The proof is symmetric for R.

• For j ≤ i ≤ m, Dif^j_L → Dif^i_L, and Dif^i_L means B and L are consistent on those i, as proven in Lemma 4.10.
• For indices greater than m, B ∧ Dif^m_L falsifies ¬Dif^m_L ∧ (Dif^m_R ∨ x), so B and L are consistent on those indices.
For the second proposition we use Dif^m_R → ⋁_{j=1}^m (Dif^j_R ∧ ¬Dif^{j-1}_R) once again, so there is some j where this is the case. Note that ¬Dif^m_L → ¬Dif^i_L for i ≤ m.
• For 1 ≤ i < j, both ¬Dif^i_L and ¬Dif^i_R occur, so B and R are consistent for these values.
• For j ≤ i ≤ m, Dif^j_R → Dif^i_R, and Dif^i_R ∧ ¬Dif^i_L means B and R are consistent on those i, as proven in Lemma 4.10.
• For indices greater than m, B ∧ Dif^m_R ∧ ¬Dif^m_L satisfies ¬Dif^m_L ∧ (Dif^m_R ∨ x), so B and R are consistent on those indices.
Induction hypothesis on i: Eq^i_{L=τ⊔σ} → ¬Dif^i_L, with an O(i)-size eFrege proof.
describe the constituent parts of it. Equivalence. The notation for equivalence changes slightly, because we are no longer working with annotations but with present starred literals; these work in much the same way. Let b be in {1, 2}.