- ATPBoost 1.0
- CSE 1.3
- CSE_E 1.2
- CVC4 1.8
- E 2.4
- E 2.5
- Enigma 0.5.1
- Etableau 0.2
- GKC 0.5.1
- iProver 3.3
- lazyCoP 0.1
- leanCoP 2.2
- LEO-II 1.7.0
- Leo-III 1.4
- Leo-III 1.5
- MaLARea 0.9
- Prover9 1109a
- PyRes 1.3
- Satallax 3.4
- Satallax 3.5
- Twee 2.2.1
- Vampire 4.4
- Vampire 4.5
- Zipperposition 2.0

University of Warsaw, Poland

https://github.com/BartoszPiotrowski/ATPboost

Southwest Jiaotong University, China

CSE 1.3 has been improved over CSE 1.2, mainly in the following respects:

- Optimization of the contradiction separation algorithm based on fuller usage of clauses, which increases the number of clauses that can participate in deduction.
- Optimization of the contradiction separation algorithm based on an optimized deduction path, which effectively controls unifier complexity during deduction.
- Dynamic adjustment of clause and literal weight updates during deduction.

- SCS control strategy: the number of literals in the SCSs generated during deduction is controlled, and only SCSs that meet the set contradiction conditions are retained.
- Clause weight update strategy: different update methods are applied at different stages of clause deduction.
- Literal weight update strategy: flexible update methods are used at different stages of literal deduction.

Southwest Jiaotong University, China

- Lemma filtering mainly based on variable appearance.
- Dynamic time allocation scheme in different run stages.

University of Iowa, USA

Like other SMT solvers, CVC4 treats quantified formulas using a two-tiered approach. First, quantified formulas are replaced by fresh Boolean predicates and the ground theory solver(s) are used in conjunction with the underlying SAT solver to determine satisfiability. If the problem is unsatisfiable at the ground level, then the solver answers "unsatisfiable". Otherwise, the quantifier instantiation module is invoked, and will either add instances of quantified formulas to the problem, answer "satisfiable", or return unknown. Finite model finding in CVC4 targets problems containing background theories whose quantification is limited to finite and uninterpreted sorts. In finite model finding mode, CVC4 uses a ground theory of finite cardinality constraints that minimizes the number of ground equivalence classes, as described in [RT+13]. When the problem is satisfiable at the ground level, a candidate model is constructed that contains complete interpretations for all predicate and function symbols. It then adds instances of quantified formulas that are in conflict with the candidate model, as described in [RT+13]. If no instances are added, it reports "satisfiable".

CVC4 has native support for problems in higher-order logic, as described in recent work [BR+19]. It uses a pragmatic approach for HOL, where lambdas are eliminated eagerly via lambda lifting. The approach extends the theory solver for quantifier-free uninterpreted functions (UF) and E-matching. For the former, the theory solver for UF in CVC4 now handles equalities between functions using an extensionality inference. Partial applications of functions are handled using a (lazy) applicative encoding, where some function applications are equated to their applicative encoding. For the latter, several of the data structures for E-matching have been modified to incorporate matching in the presence of equalities between functions, function variables, and partial function applications.
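The applicative encoding mentioned above can be illustrated with a toy sketch (this is not CVC4's implementation; the term representation is invented for the example): every application is curried through a single binary symbol `@`, so a partial application becomes an ordinary first-order term.

```python
# Toy sketch of the applicative encoding (illustrative, not CVC4's code).
# A term is either a string (constant/variable) or (function, [args]).

def applicative_encode(term):
    """Curry all applications through a binary symbol "@"."""
    if isinstance(term, str):          # constants and variables stay as-is
        return term
    fun, args = term
    encoded = fun
    for arg in args:                   # f(a, b)  ->  @(@(f, a), b)
        encoded = ("@", [encoded, applicative_encode(arg)])
    return encoded

# The partial application @(f, a) is now a first-order term that can be
# equated with other function-valued terms.
print(applicative_encode(("f", ["a", "b"])))   # ('@', [('@', ['f', 'a']), 'b'])
```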

https://github.com/CVC4

DHBW Stuttgart, Germany

For the LTB divisions, a control program uses a SInE-like analysis to extract reduced axiomatizations that are handed to several instances of E. E will not use on-the-fly learning this year.

For CASC-27, E implements a strategy-scheduling automatic mode. The total CPU time available is broken into several (unequal) time slices. For each time slice, the problem is classified into one of several classes, based on a number of simple features (number of clauses, maximal symbol arity, presence of equality, presence of non-unit and non-Horn clauses, ...). For each class, a schedule of strategies is greedily constructed from experimental data as follows: the first strategy assigned to a schedule is the one that solves the most problems from this class in the first time slice. Each subsequent strategy is selected based on the number of solutions on problems not already solved by a preceding strategy.
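The greedy schedule construction described above can be sketched as follows (an illustrative Python sketch with made-up strategy names and data, not E's actual implementation):

```python
# Greedy schedule construction: given, for each strategy, the set of class
# problems it solves within one time slice, repeatedly pick the strategy
# that covers the most still-unsolved problems.

def build_schedule(solved_by, max_slices):
    """solved_by: dict strategy -> set of problems it solves in one slice."""
    schedule, covered = [], set()
    for _ in range(max_slices):
        best = max(solved_by, key=lambda s: len(solved_by[s] - covered))
        gain = solved_by[best] - covered
        if not gain:                     # no strategy solves anything new
            break
        schedule.append(best)
        covered |= gain
    return schedule

# Hypothetical evaluation data for one problem class:
solved_by = {
    "S1": {1, 2, 3},
    "S2": {3, 4},
    "S3": {5},
}
print(build_schedule(solved_by, 3))      # ['S1', 'S2', 'S3']
```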

About 230 different strategies have been thoroughly evaluated on all untyped first-order problems from TPTP 7.2.0. In addition, we have explored some parts of the heuristic parameter space with a short time limit of 5 seconds. This allowed us to test about 650 strategies on all TPTP problems, and an extra 7000 strategies on all 1193 UEQ problems from TPTP 7.2.0. About 100 of these strategies are used in the automatic mode, and about 450 are used in at least one schedule.

https://www.eprover.org

DHBW Stuttgart, Germany

For the LTB divisions, a control program uses a SInE-like analysis to extract reduced axiomatizations that are handed to several instances of E. E will not use on-the-fly learning this year.

For CASC-J10, E implements a strategy-scheduling automatic mode. The total CPU time available is broken into several (unequal) time slices. For each time slice, the problem is classified into one of several classes, based on a number of simple features (number of clauses, maximal symbol arity, presence of equality, presence of non-unit and non-Horn clauses, possibly presence of certain axiom patterns, ...). For each class, a schedule of strategies is greedily constructed from experimental data as follows: the first strategy assigned to a schedule is the one that solves the most problems from this class in the first time slice. Each subsequent strategy is selected based on the number of solutions on problems not already solved by a preceding strategy.

About 130 different strategies have been thoroughly evaluated on all untyped first-order problems from TPTP 7.3.0. We have also explored some parts of the heuristic parameter space with a short time limit of 5 seconds. This allowed us to test about 650 strategies on all TPTP problems, and an extra 7000 strategies on UEQ problems from TPTP 7.2.0. About 100 of these strategies are used in the automatic mode, and about 450 are used in at least one schedule.

https://www.eprover.org

Czech Technical University in Prague, Czech Republic

https://github.com/ai4reason/enigmatic

University of Florida, USA

https://github.com/hesterj/E-TAB

Tallinn University of Technology, Estonia

These standard inference rules have been implemented in GKC:

- Binary resolution, optionally with the set-of-support strategy, negative or positive ordered resolution, or unit restriction.
- Hyperresolution.
- Factorization.
- Paramodulation and demodulation with the Knuth-Bendix ordering.

GKC splits the multiple strategies it decides to try between several forked instances. For the competition the plan is to use eight forks. Each fork runs a subset of strategies sequentially.

We perform the selection of a given clause using several queues, in order to spread the selection relatively uniformly over these categories of derived clauses and their descendants: axioms, external axioms, assumptions, and goals. The queues are organized in two layers. As a first layer we use the common ratio-based algorithm of alternating between selecting *n* clauses from a weight-ordered queue and one clause from the FIFO queue in derivation order. As a second layer we use four separate queues based on the derivation history of a clause. Each queue in the second layer contains the two sub-queues of the first layer.
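The first-layer scheme can be sketched as follows (an illustrative Python sketch, not GKC's actual C code; in the second layer, four such objects would be kept, keyed by derivation history):

```python
import heapq
from collections import deque

# Ratio-based first layer: alternate between n picks from a weight-ordered
# queue and one pick from a FIFO queue holding clauses in derivation order.

class ClauseQueues:
    def __init__(self, ratio=4):
        self.ratio = ratio                  # n weight picks per FIFO pick
        self.by_weight = []                 # heap of (weight, seq, clause)
        self.fifo = deque()                 # derivation order
        self.selected = set()
        self.picks = 0
        self.seq = 0

    def add(self, clause, weight):
        heapq.heappush(self.by_weight, (weight, self.seq, clause))
        self.fifo.append(clause)
        self.seq += 1

    def select(self):
        self.picks += 1
        use_fifo = self.picks % (self.ratio + 1) == 0
        while self.fifo or self.by_weight:
            if use_fifo and self.fifo:
                clause = self.fifo.popleft()
            elif self.by_weight:
                clause = heapq.heappop(self.by_weight)[2]
            else:
                clause = self.fifo.popleft()
            if clause not in self.selected:   # skip clauses already picked
                self.selected.add(clause)     # from the other queue
                return clause
        return None

q = ClauseQueues(ratio=2)
for clause, weight in [("heavy", 9), ("light", 1), ("mid", 5)]:
    q.add(clause, weight)
print([q.select() for _ in range(3)])   # ['light', 'mid', 'heavy']
```

Two weight-based picks take the lightest clauses first; the third pick comes from the FIFO queue, which guarantees that old heavy clauses are eventually selected.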

GKC only looks for proofs and does not try to show non-provability.

GKC can be obtained from

https://github.com/tammet/gkc/

University of Manchester, United Kingdom

Recent features in iProver include:

- Superposition calculus together with simplifications such as demodulation, light normalisation, subsumption, subsumption resolution and global subsumption. iProver's simplification set up [DK20] is tunable via command line options and generalises common architectures such as Discount or Otter.
- Heuristics used in iProver are learnt using dynamic clustering and hyper-parameter optimisation using SMAC [HHL12], as described in [HK19].
- In the LTB division iProver combines abstraction-refinement with axiom selection based on the SInE algorithm [HV11] as implemented in Vampire [KV13]. iProver runs the most promising learnt heuristics in parallel.

http://www.cs.man.ac.uk/~korovink/iprover/

University of Manchester, United Kingdom

The system was originally conceived to efficiently accommodate a machine-learned heuristic guidance system: the current version is not yet guided in this way, but learned heuristics are intended for a future version.

https://github.com/MichaelRawson/lazycop

University of Oslo, Norway

leanCoP can read formulae in leanCoP syntax and in TPTP first-order syntax. Equality axioms and axioms to support distinct objects are automatically added if required. The leanCoP core prover returns a very compact connection proof, which is then translated into a more comprehensive output format, e.g., into a lean (TPTP-style) connection proof or into a readable text proof.

The source code of leanCoP 2.2 is available under the GNU general public license. It can be downloaded from the leanCoP website at:

http://www.leancop.de

The website also contains information about ileanCoP [Ott08] and MleanCoP [Ott12, Ott14], two versions of leanCoP for first-order intuitionistic logic and first-order modal logic, respectively.

University of Luxembourg, Luxembourg

Unfortunately the LEO-II system still uses only a very simple sequential collaboration model with first-order ATPs instead of using the more advanced, concurrent and resource-adaptive OANTS architecture [BS+08] as exploited by its predecessor LEO.

The LEO-II system is distributed under a BSD style license, and it is available from

http://www.leoprover.org

University of Luxembourg, Luxembourg

Leo-III cooperates with external first-order ATPs which are called asynchronously during proof search; a focus is on cooperation with systems that support typed first-order (TFF) input. For this year's CASC, CVC4 [BC+11] and E [Sch02, Sch13] are used as external systems. However, cooperation is in general not limited to first-order systems. Further TPTP/TSTP-compliant external systems (such as higher-order ATPs or counter model generators) may be included using simple command-line arguments. If the saturation procedure loop (or one of the external provers) finds a proof, the system stops, generates the proof certificate and returns the result.

For the LTB division, Leo-III is augmented by an external Python3 driver which schedules Leo-III on the batches.

The term data structure of Leo-III uses a polymorphically typed spine term representation augmented with explicit substitutions and De Bruijn-indices. Furthermore, terms are perfectly shared during proof search, permitting constant-time equality checks between alpha-equivalent terms.
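The effect of perfect sharing can be illustrated with a toy hash-consing term bank (a Python sketch, not Leo-III's actual Scala data structures; with De Bruijn indices, alpha-equivalent terms receive the same shared representation, so the identity check below covers alpha-equivalence):

```python
# Toy hash-consing: every structurally equal term is built exactly once,
# so equality of shared terms is a constant-time identity check.

class TermBank:
    def __init__(self):
        self._pool = {}

    def mk(self, head, *args):
        key = (head, args)               # args are already-shared terms
        if key not in self._pool:
            self._pool[key] = key        # the tuple itself is the shared term
        return self._pool[key]

bank = TermBank()
x = bank.mk("f", bank.mk("a"), bank.mk("b"))
y = bank.mk("f", bank.mk("a"), bank.mk("b"))
print(x is y)                            # True: one shared representation
```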

Leo-III's saturation procedure may at any point invoke external reasoning tools. To that end, Leo-III includes an encoding module which translates (polymorphic) higher-order clauses to polymorphic or monomorphic typed first-order clauses, whichever is supported by the external system. While LEO-II relied on cooperation with untyped first-order provers, Leo-III exploits the native type support in first-order provers (TFF logic), which removes clutter during translation and, in turn, makes external cooperation more effective.

Leo-III is available on GitHub:

https://github.com/leoprover/Leo-III

University of Luxembourg, Luxembourg

For the LTB division, Leo-III is augmented by an external Python3 driver which schedules Leo-III on the batches.

The term data structure of Leo-III uses a polymorphically typed spine term representation augmented with explicit substitutions and De Bruijn-indices. Furthermore, terms are perfectly shared during proof search, permitting constant-time equality checks between alpha-equivalent terms.

Leo-III's saturation procedure may at any point invoke external reasoning tools. To that end, Leo-III includes an encoding module which translates (polymorphic) higher-order clauses to polymorphic or monomorphic typed first-order clauses, whichever is supported by the external system. While LEO-II relied on cooperation with untyped first-order provers, Leo-III exploits the native type support in first-order provers (TFF logic), which removes clutter during translation and, in turn, makes external cooperation more effective.

Leo-III is available on GitHub:

https://github.com/leoprover/Leo-III

Also in the LTB mode, there are no major novelties: only some timing parameters have been changed compared to last year. Stemming from Leo-III's support for polymorphic HOL reasoning, we expect a reasonable performance compared to the other systems. On the other hand, Leo-III's LTB mode does not do any learning and/or analysis of the learning samples.

Czech Technical University in Prague, Czech Republic

https://github.com/JUrban/MPTP2/tree/master/MaLARea

The metasystem's Perl code is released under GPL2.

University of New Mexico, USA

Prover9 has available positive ordered (and nonordered) resolution and paramodulation, negative ordered (and nonordered) resolution, factoring, positive and negative hyperresolution, UR-resolution, and demodulation (term rewriting). Terms can be ordered with LPO, RPO, or KBO. Selection of the "given clause" is by an age-weight ratio.

Proofs can be given at two levels of detail: (1) standard, in which each line of the proof is a stored clause with detailed justification, and (2) expanded, with a separate line for each operation. When FOF problems are input, the proof of the transformation to clauses is not given.

Completeness is not guaranteed, so termination does not indicate satisfiability.

Given a problem, Prover9 adjusts its inference rules and strategy according to syntactic properties of the input clauses such as the presence of equality and non-Horn clauses. Prover9 also does some preprocessing, for example, to eliminate predicates.

For CASC Prover9 uses KBO to order terms for demodulation and for the inference rules, with a simple rule for determining symbol precedence.

For the FOF problems, a preprocessing step attempts to reduce the problem to independent subproblems by a miniscope transformation; if the problem reduction succeeds, each subproblem is clausified and given to the ordinary search procedure; if the problem reduction fails, the original problem is clausified and given to the search procedure.

http://www.cs.unm.edu/~mccune/prover9/

DHBW Stuttgart, Germany

The saturation core is based on the DISCOUNT-loop variant of the *given-clause* algorithm, i.e., a strict separation of active and passive facts. It implements simple binary resolution and factoring [Rob65], optionally with selection of negative literals [BG+01]. Redundancy elimination is restricted to forward and backward subsumption and tautology deletion. There are no inference rules for equality; if equality is detected, the necessary axioms are added.
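A minimal sketch of such a DISCOUNT-style loop, on propositional clauses with binary resolution only (illustrative, not PyRes's actual code):

```python
# DISCOUNT loop sketch: only the active set participates in inferences;
# passive clauses wait to be selected. Clauses are frozensets of integer
# literals, with negation as sign (e.g. -1 is the negation of 1).

def resolve(c1, c2):
    """All binary resolvents of two clauses."""
    return [(c1 - {lit}) | (c2 - {-lit}) for lit in c1 if -lit in c2]

def saturate(axioms, max_steps=1000):
    active, passive = [], sorted(axioms, key=len)
    for _ in range(max_steps):
        if not passive:
            return "saturated"           # no proof: clause set consistent
        given = passive.pop(0)           # lightest-first selection
        if not given:
            return "proof found"         # empty clause derived
        new = [r for a in active for r in resolve(given, a)]
        active.append(given)
        passive.extend(c for c in new if c not in active and c not in passive)
        passive.sort(key=len)
    return "timeout"

# {p}, {-p, q}, {-q}: resolution derives the empty clause.
print(saturate([frozenset({1}), frozenset({-1, 2}), frozenset({-2})]))
```

The sketch omits factoring and all redundancy elimination; the separation of `active` and `passive` is the point being illustrated.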

One of the changes compared to last year's version is the introduction of simple indices for unification (which now uses top-symbol hashing) and subsumption. Subsumption now uses a new, simple technique we call *predicate abstraction indexing*, which represents a clause as an ordered sequence of the predicate symbols of its literals. PyRes builds a proof object on the fly, and can print a TPTP-3 style proof or saturation derivation.
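The idea behind such an index can be sketched as a cheap pre-filter (an illustrative Python sketch, not PyRes's actual code; here the abstraction is a multiset of predicate symbols, ignoring polarity):

```python
from collections import Counter

# Predicate abstraction as a subsumption pre-filter: clause C can only
# subsume clause D if C's predicate symbols are a sub-multiset of D's.
# Literals are (sign, predicate, args) triples.

def abstraction(clause):
    return Counter(pred for _sign, pred, _args in clause)

def may_subsume(c, d):
    # Counter subtraction drops non-positive counts; empty means inclusion.
    return not (abstraction(c) - abstraction(d))

c = [(True, "p", ["X"])]
d = [(True, "p", ["a"]), (False, "q", ["b"])]
e = [(False, "q", ["b"])]
print(may_subsume(c, d), may_subsume(c, e))   # True False
```

Only candidate pairs that pass this test need the expensive full subsumption check.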

The system source is available at:

https://github.com/eprover/PyRes

ENS Paris-Saclay, France

Proof search: A branch is formed from the axioms of the problem and the negation of the conjecture (if any is given). From this point on, Satallax tries to determine unsatisfiability or satisfiability of this branch. Satallax progressively generates higher-order formulae and corresponding propositional clauses [Bro13]. These formulae and propositional clauses correspond to instances of the tableau rules. Satallax uses the SAT solver MiniSat to test the current set of propositional clauses for unsatisfiability. If the clauses are unsatisfiable, then the original branch is unsatisfiable. Optionally, Satallax generates lambda-free higher-order logic (lfHOL) formulae in addition to the propositional clauses [VB+19]. If this option is used, then Satallax periodically calls the theorem prover E [Sch13] to test for lfHOL unsatisfiability. If the set of lfHOL formulae is unsatisfiable, then the original branch is unsatisfiable. Upon request, Satallax attempts to reconstruct a proof which can be output in the TSTP format.
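The propositional side of this loop can be illustrated with a toy unsatisfiability check standing in for MiniSat (a brute-force sketch, not Satallax's code; the clause and atom names are invented):

```python
from itertools import product

# Tableau-rule instances contribute clauses over boolean abstractions of
# higher-order formulas; if the clause set is propositionally unsatisfiable,
# the original branch is unsatisfiable. Literals are (negated, atom) pairs.

def unsatisfiable(clauses, atoms):
    for assignment in product([False, True], repeat=len(atoms)):
        model = dict(zip(atoms, assignment))
        if all(any(model[a] != neg for neg, a in clause) for clause in clauses):
            return False                 # found a propositional model
    return True

# Abstraction "F" for a branch formula; clauses encode F and not-F.
clauses = [[(False, "F")], [(True, "F")]]
print(unsatisfiable(clauses, ["F"]))     # True
```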

http://cl-informatik.uibk.ac.at/~mfaerber/satallax.html

ENS Paris-Saclay, France

Satallax is available at:

http://cl-informatik.uibk.ac.at/~mfaerber/satallax.html

Chalmers University of Technology, Sweden

Twee's implementation of ground joinability testing performs case splits on the order of variables, in the style of [MN90], and discharges individual cases by rewriting modulo a variable ordering. The case splitting strategy chooses only useful case splits, which prevents the number of cases from blowing up.

Horn clauses are encoded as equations as described in [CS18]. The CASC version of Twee "handles" non-Horn clauses by discarding them.

The main loop is a DISCOUNT loop. The active set contains rewrite rules and unorientable equations, which are used for rewriting, and the passive set contains unprocessed critical pairs. Twee often interreduces the active set, and occasionally simplifies the passive set with respect to the active set. Each critical pair is scored using a weighted sum of the weight of both of its terms. Terms are treated as DAGs when computing weights, i.e., duplicate subterms are only counted once per term. The weights of critical pairs that correspond to Horn clauses are adjusted by the heuristic described in [CS18], section 5.
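The DAG-based weight computation can be sketched as follows (illustrative Python, not Twee's Haskell code):

```python
# Term weight "as a DAG": duplicate subterms are counted once per term,
# so f(g(a), g(a)) weighs the same as f applied to a single shared g(a).
# A term is a string (variable/constant) or a (head, args) pair.

def dag_weight(term):
    seen = set()
    def go(t):
        if t in seen:
            return 0                     # shared subterm: already counted
        seen.add(t)
        if isinstance(t, str):
            return 1                     # variable or constant
        head, args = t
        return 1 + sum(go(a) for a in args)
    return go(term)

g_a = ("g", ("a",))
print(dag_weight(("f", (g_a, g_a))))     # f, g, a counted once each -> 3
```

A tree-based weight would give 5 for the same term; the DAG weight avoids penalising critical pairs whose terms merely repeat a large subterm.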

For CASC, to take advantage of multiple cores, several versions of Twee run in parallel using different parameters.

The passive set is represented compactly (12 bytes per critical pair) by storing only the information needed to reconstruct the critical pair, not the critical pair itself. Because of this, Twee can run for an hour or more without exhausting memory.
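The idea of a fixed-size 12-byte record can be sketched as follows (the field layout below is hypothetical, chosen only to illustrate packing reconstruction data instead of the critical pair itself; it is not Twee's actual layout):

```python
import struct

# Store just enough to rebuild a critical pair: e.g. the two parent rule
# numbers, the overlap position, and a score, in a fixed 12-byte record.
RECORD = struct.Struct("<IIHH")          # 4 + 4 + 2 + 2 = 12 bytes

def pack(rule1, rule2, pos, score):
    return RECORD.pack(rule1, rule2, pos, score)

rec = pack(17, 42, 3, 950)
print(RECORD.size, RECORD.unpack(rec))   # 12 (17, 42, 3, 950)
```

The critical pair is recomputed from its parents when it is finally selected, trading a little CPU time for a large reduction in memory.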

Twee uses an LCF-style kernel: all rules in the active set come with a certified proof object which traces back to the input axioms. When a conjecture is proved, the proof object is transformed into a human-readable proof. Proof construction does not harm efficiency because the proof kernel is invoked only when a new rule is accepted. In particular, reasoning about the passive set does not invoke the kernel. The translation from Horn clauses to equations is not yet certified.

Twee can be downloaded from:

http://nick8325.github.io/twee

University of Manchester, United Kingdom

There are no major changes to the main part of Vampire since 4.4, beyond some new proof search heuristics and new default values for some options. The biggest addition is support for higher-order reasoning via translation to applicative form and combinators, addition of axioms and extra inference rules, and a new form of combinatory unification.

A number of standard redundancy criteria and simplification techniques are used for pruning the search space: subsumption, tautology deletion, subsumption resolution and rewriting by ordered unit equalities. The reduction ordering is the Knuth-Bendix Ordering. Substitution tree and code tree indexes are used to implement all major operations on sets of terms, literals and clauses. Internally, Vampire works only with clausal normal form. Problems in the full first-order logic syntax are clausified during preprocessing. Vampire implements many useful preprocessing transformations including the SinE axiom selection algorithm.

When a theorem is proved, the system produces a verifiable proof, which validates both the clausification phase and the refutation of the CNF. Vampire 4.4 provides a very large number of options for strategy selection. The most important ones are:

- Choices of saturation algorithm:
  - Limited Resource Strategy [RV03]
  - DISCOUNT loop
  - Otter loop
  - Instantiation using the Inst-Gen calculus
  - MACE-style finite model building with sort inference

- Splitting via AVATAR [Vor14]
- A variety of optional simplifications.
- Parameterized reduction orderings.
- A number of built-in literal selection functions and different modes of comparing literals [HR+16].
- Age-weight ratio that specifies how strongly lighter clauses are preferred for inference selection.
- Set-of-support strategy.
- For theory reasoning:
  - Ground equational reasoning via congruence closure.
  - Addition of theory axioms and evaluation of interpreted functions.
  - Use of Z3 with AVATAR to restrict search to ground-theory-consistent splitting branches [RB+16].
  - Specialised theory instantiation and unification [RSV18].
  - Extensionality resolution with detection of extensionality axioms.

- For higher-order problems:
  - Translation to applicative and combinator form.
  - Addition of combinator axioms.
  - Addition of shortcut inference rules that encode axioms.
  - Proof search heuristics targeting the growth of combinator axioms.
  - Restricted combinatory unification [RSV18].

University of Manchester, United Kingdom

- Choices of saturation algorithm:
  - Limited Resource Strategy [RV03]
  - DISCOUNT loop
  - Otter loop
  - Instantiation using the Inst-Gen calculus
  - MACE-style finite model building with sort inference

- Splitting via AVATAR [Vor14]
- A variety of optional simplifications.
- Parameterized reduction orderings.
- A number of built-in literal selection functions and different modes of comparing literals [HR+16].
- Age-weight ratio that specifies how strongly lighter clauses are preferred for inference selection. This has been extended with a layered clause selection approach [GS20].
- Set-of-support strategy with extensions for theory reasoning.
- For theory reasoning:
  - Ground equational reasoning via congruence closure.
  - Addition of theory axioms and evaluation of interpreted functions.
  - Use of Z3 with AVATAR to restrict search to ground-theory-consistent splitting branches [RB+16].
  - Specialised theory instantiation and unification [RSV18].
  - Extensionality resolution with detection of extensionality axioms.

- For higher-order problems:
  - Translation to polymorphic first-order logic using applicative form and combinators.
  - A new superposition calculus [BG20] utilising a KBO-like ordering [BG20] for orienting combinator equations. The calculus introduces an inference, narrow, for rewriting with combinator equations.
  - Proof search heuristics targeting the growth of clauses resulting from narrowing.
  - An extension of unification with abstraction to deal with functional and boolean extensionality.
  - Various inferences to deal with booleans.

See https://vprover.github.io/ for more information and access to the GitHub repository.

Vrije Universiteit Amsterdam, The Netherlands

Zipperposition is available at

https://github.com/sneeuwballen/zipperposition

and is entirely free software (BSD-licensed). Zipperposition can also output graphic proofs using graphviz. Some tools to perform type inference and clausification for typed formulas are also provided, as well as a separate library for dealing with terms and formulas [Cru15].