Selected Papers of the 23rd International Conference on Database Theory (ICDT 2020)

Editors: Jean Christoph Jung, Carsten Lutz

This special issue of Logical Methods in Computer Science contains extended versions of papers presented at the 23rd International Conference on Database Theory (ICDT 2020). ICDT is a major forum for research on all theoretical aspects of data management.

The meeting was scheduled to take place in Copenhagen on 30th March-2nd April 2020, but was one of the first conferences that had to be canceled as a physical event due to the emerging COVID-19 pandemic. It was instead held online on the same dates. This was quite an adventure, because the move online had to be organized on short notice and, at that time, there was almost no experience with online conferences. Together with the organizers of the co-located EDBT conference, we wrote a report about this experience [1]. We would like to express our gratitude to the participants and to everyone involved in the organization of EDBT/ICDT 2020 for their great support in making the conference a success in difficult times.

This special issue contains six selected papers that were presented at the conference. The article The Dichotomy of Evaluating Homomorphism-Closed Queries on Probabilistic Graphs by Antoine Amarilli and İsmail İlkan Ceylan was also selected by the program committee for the ICDT 2020 Best Paper Award.

The articles included in this special issue underwent an additional and thorough reviewing and revision process. They constitute substantially extended and refined versions of the original conference papers. We thank both the authors and the reviewers for the hard work that they have put into this.

Jean Christoph Jung, Carsten Lutz
Guest Editors of the Special Issue

[1] Angela Bonifati, Giovanna Guerrini, Carsten Lutz, Wim Martens, Lara Mazilu, Norman W. Paton, Marcos Antonio Vaz Salles, Marc H. Scholl, and Yongluan Zhou. Holding a conference online and live due to COVID-19: Experiences and lessons learned from EDBT / ICDT 2020. SIGMOD Rec., 49(4):28–32, 2020.


1. The Shapley Value of Tuples in Query Answering

Ester Livshits ; Leopoldo Bertossi ; Benny Kimelfeld ; Moshe Sebag.
We investigate the application of the Shapley value to quantifying the contribution of a tuple to a query answer. The Shapley value is a widely known numerical measure in cooperative game theory, used in many applications of game theory to assess the contribution of a player to a coalition game. It was established as early as the 1950s, and is theoretically justified as the unique wealth-distribution measure that satisfies certain natural axioms. While this value has been investigated in several areas, it has received little attention in data management. We study this measure in the context of conjunctive and aggregate queries by defining corresponding coalition games. We provide algorithmic and complexity-theoretic results on the computation of Shapley-based contributions to query answers; and for the hard cases we present approximation algorithms.
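To make the notion concrete, the Shapley value of a player can be computed by brute force as its average marginal contribution over all orderings of the players. The sketch below is an illustration on a toy instance (the game `v` and fact names `r1`, `r2`, `r3` are invented for this example, not taken from the paper, which studies far more efficient algorithms):

```python
from itertools import permutations

def shapley(players, v):
    """Brute-force Shapley value: average marginal contribution of each
    player over all orderings (exponential; only feasible for toy games)."""
    values = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            values[p] += v(frozenset(coalition)) - before
    return {p: values[p] / len(perms) for p in players}

# Toy "query answer" game: facts r1 and r2 are each sufficient to produce
# the answer, so v(S) = 1 as soon as the coalition contains either one;
# fact r3 is irrelevant to the query.
v = lambda S: 1 if S & {"r1", "r2"} else 0
print(shapley(["r1", "r2", "r3"], v))  # r1 and r2 each get 0.5, r3 gets 0.0
```

The symmetric facts r1 and r2 split the credit for the answer equally, while the irrelevant fact r3 receives zero, matching the axioms that single out the Shapley value.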

2. The Dichotomy of Evaluating Homomorphism-Closed Queries on Probabilistic Graphs

Antoine Amarilli ; İsmail İlkan Ceylan.
We study the problem of query evaluation on probabilistic graphs, namely, tuple-independent probabilistic databases over signatures of arity two. We focus on the class of queries closed under homomorphisms, or, equivalently, the infinite unions of conjunctive queries. Our main result states that the probabilistic query evaluation problem is #P-hard for all unbounded queries from this class. As bounded queries from this class are equivalent to a union of conjunctive queries, they are already classified by the dichotomy of Dalvi and Suciu (2012). Hence, our result and theirs imply a complete data complexity dichotomy, between polynomial time and #P-hardness, on evaluating homomorphism-closed queries over probabilistic graphs. This dichotomy covers in particular all fragments of infinite unions of conjunctive queries over arity-two signatures, such as negation-free (disjunctive) Datalog, regular path queries, and a large class of ontology-mediated queries. The dichotomy also applies to a restricted case of probabilistic query evaluation called generalized model counting, where fact probabilities must be 0, 0.5, or 1. We show the main result by reducing from the problem of counting the valuations of positive partitioned 2-DNF formulae, or from the source-to-target reliability problem in an undirected graph, depending on properties of minimal models for the query.
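The semantics under study can be illustrated by the naive possible-worlds computation: in a tuple-independent database, each fact is present independently with its probability, and the answer probability is the total weight of the worlds satisfying the query. The following sketch (with an invented two-edge toy instance) shows the exponential baseline whose tractability the dichotomy classifies:

```python
from itertools import product

def query_probability(facts, probs, query):
    """Brute-force tuple-independent probabilistic query evaluation:
    sum the probabilities of the possible worlds where the query holds.
    Exponential in the number of facts; the dichotomy concerns exactly
    which queries admit a polynomial-time alternative."""
    total = 0.0
    for bits in product([0, 1], repeat=len(facts)):
        world = {f for f, b in zip(facts, bits) if b}
        p = 1.0
        for f, b in zip(facts, bits):
            p *= probs[f] if b else 1 - probs[f]
        if query(world):
            total += p
    return total

# Toy arity-two instance: two probabilistic edges, and the Boolean query
# "is there a path from a to c via b?".
facts = [("a", "b"), ("b", "c")]
probs = {("a", "b"): 0.5, ("b", "c"): 0.5}
path = lambda w: ("a", "b") in w and ("b", "c") in w
print(query_probability(facts, probs, path))  # 0.25
```

Note that the probabilities 0.5 used here are exactly those allowed in the generalized model counting restriction mentioned in the abstract.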

3. Integrity Constraints Revisited: From Exact to Approximate Implication

Batya Kenig ; Dan Suciu.
Integrity constraints such as functional dependencies (FD) and multi-valued dependencies (MVD) are fundamental in database schema design. Likewise, probabilistic conditional independences (CI) are crucial for reasoning about multivariate probability distributions. The implication problem studies whether a set of constraints (antecedents) implies another constraint (consequent), and has been investigated in both the database and the AI literature, under the assumption that all constraints hold exactly. However, many applications today consider constraints that hold only approximately. In this paper we define an approximate implication as a linear inequality between the degree of satisfaction of the antecedents and consequent, and we study the relaxation problem: when does an exact implication relax to an approximate implication? We use information theory to define the degree of satisfaction, and prove several results. First, we show that any implication from a set of data dependencies (MVDs+FDs) can be relaxed to a simple linear inequality with a factor at most quadratic in the number of variables; when the consequent is an FD, the factor can be reduced to 1. Second, we prove that there exists an implication between CIs that does not admit any relaxation; however, we prove that every implication between CIs relaxes "in the limit". Then, we show that the implication problem for differential constraints in market basket analysis also admits a relaxation with a factor […]
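One standard information-theoretic reading of an approximate FD, which I assume here for illustration (the paper develops the general framework), is that the FD X -> Y holds exactly on a relation iff the conditional entropy H(Y | X) of its empirical distribution is zero, and holds approximately when H(Y | X) is small:

```python
from collections import Counter
from math import log2

def cond_entropy(rows, X, Y):
    """H(Y | X) under the empirical distribution of the table rows.
    H(Y | X) = 0 iff the FD X -> Y holds exactly; small positive values
    indicate approximate satisfaction (illustrative reading, not the
    paper's full framework)."""
    n = len(rows)
    joint = Counter((tuple(r[a] for a in X), tuple(r[a] for a in Y)) for r in rows)
    marg = Counter(tuple(r[a] for a in X) for r in rows)
    return sum(c / n * log2(marg[x] / c) for (x, y), c in joint.items())

# Toy table on which the FD A -> B holds exactly:
rows = [{"A": 1, "B": 1}, {"A": 1, "B": 1}, {"A": 2, "B": 3}]
print(cond_entropy(rows, ["A"], ["B"]))  # 0.0
```

Replacing one row so that A = 1 maps to two different B-values makes the conditional entropy strictly positive, quantifying the degree of violation.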

4. Weight Annotation in Information Extraction

Johannes Doleschal ; Benny Kimelfeld ; Wim Martens ; Liat Peterfreund.
The framework of document spanners abstracts the task of information extraction from text as a function that maps every document (a string) into a relation over the document's spans (intervals identified by their start and end indices). For instance, the regular spanners are the closure under the Relational Algebra (RA) of the regular expressions with capture variables, and the expressive power of the regular spanners is precisely captured by the class of VSet-automata -- a restricted class of transducers that mark the endpoints of selected spans. In this work, we embark on the investigation of document spanners that can annotate extractions with auxiliary information such as confidence, support, and confidentiality measures. To this end, we adopt the abstraction of provenance semirings by Green et al., where tuples of a relation are annotated with the elements of a commutative semiring, and where the annotation propagates through the positive RA operators via the semiring operators. Hence, the proposed spanner extension, referred to as an annotator, maps every string into an annotated relation over the spans. As a specific instantiation, we explore weighted VSet-automata that, similarly to weighted automata and transducers, attach semiring elements to transitions. We investigate key aspects of expressiveness, such as the closure under the positive RA, and key aspects of computational complexity, such as the enumeration of annotated answers and their ranked enumeration in […]

5. Infinite Probabilistic Databases

Martin Grohe ; Peter Lindner.
Probabilistic databases (PDBs) model uncertainty in data in a quantitative way. In the established formal framework, probabilistic (relational) databases are finite probability spaces over relational database instances. This finiteness can clash with intuitive query behavior (Ceylan et al., KR 2016), and with application scenarios that are better modeled by continuous probability distributions (Dalvi et al., CACM 2009). We formally introduced infinite PDBs in (Grohe and Lindner, PODS 2019) with a primary focus on countably infinite spaces. However, an extension beyond countable probability spaces raises nontrivial foundational issues concerned with the measurability of events and queries, and ultimately with the question of whether queries have a well-defined semantics. We argue that finite point processes are an appropriate model from probability theory for dealing with general probabilistic databases. This allows us to construct suitable (uncountable) probability spaces of database instances in a systematic way. Our main technical results are measurability statements for relational algebra queries as well as aggregate queries and Datalog queries.

6. A Near-Optimal Parallel Algorithm for Joining Binary Relations

Bas Ketsman ; Dan Suciu ; Yufei Tao.
We present a constant-round algorithm in the massively parallel computation (MPC) model for evaluating a natural join where every input relation has two attributes. Our algorithm achieves a load of $\tilde{O}(m/p^{1/\rho})$ where $m$ is the total size of the input relations, $p$ is the number of machines, $\rho$ is the join's fractional edge covering number, and $\tilde{O}(\cdot)$ hides a polylogarithmic factor. The load matches a known lower bound up to a polylogarithmic factor. At the core of the proposed algorithm is a new theorem (which we name the "isolated Cartesian product theorem") that provides fresh insight into the problem's mathematical structure. Our result implies that the subgraph enumeration problem, where the goal is to report all the occurrences of a constant-sized subgraph pattern, can be settled optimally (up to a polylogarithmic factor) in the MPC model.
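As a worked instance of the load bound (a standard textbook example, not taken from the abstract): the triangle join $R(A,B) \bowtie S(B,C) \bowtie T(A,C)$ has fractional edge covering number $\rho = 3/2$, obtained by assigning weight $1/2$ to each of the three relations so that every attribute is covered with total weight $1$. The algorithm's load then specializes to

\[
\tilde{O}\!\left(\frac{m}{p^{1/\rho}}\right)
  \;=\; \tilde{O}\!\left(\frac{m}{p^{2/3}}\right),
\]

which improves on the $\tilde{O}(m/p^{1/2})$ load of a naive two-way join plan as the number of machines $p$ grows.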