Invited speakers
The invited keynote speakers and panelists of this workshop edition.
Property Graph Transformations in Action: From Data Integration to Causal Analysis
Property graphs are key components of modern graph databases and graph analytics systems. They support a highly expressive data model consisting of multi-labeled nodes and edges, together with properties represented as key-value pairs. Property graphs serve as a versatile data integration paradigm, enabling data in virtually any format to be seamlessly transformed into this model. Moreover, they are at the core of an active standardization effort led by ISO/IEC, which aims to establish standardized declarative graph query languages such as GQL and SQL/PGQ. In addition to these data manipulation language standards, complementary languages for property graph schemas and constraints are emerging as part of future data definition languages. In this talk, I will present novel declarative paradigms for expressing property graph transformations that support both graph-based data integration and data cleaning tasks. Beyond being declarative, these transformations are designed to achieve efficiency and scalability. Furthermore, they are flexible enough to be applied in other contexts, such as causal inference and causal analysis, where declarative graph languages enable complex, path-based causal operations.
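The property graph model the abstract refers to (multi-labeled nodes and edges carrying key-value properties) can be sketched in a few lines of Python. This is an illustrative toy, assuming invented node/edge classes and sample data; it is not the speaker's transformation framework:

```python
# Toy sketch of the property graph model: multi-labeled nodes and edges,
# each carrying key-value properties. All names and data are illustrative.
from dataclasses import dataclass

@dataclass
class Node:
    id: int
    labels: set    # multiple labels per node, e.g. {"Person"}
    props: dict    # key-value properties, e.g. {"name": "Ada"}

@dataclass
class Edge:
    src: int
    dst: int
    labels: set
    props: dict

# A minimal data-integration step: tabular rows are transformed
# into nodes and edges of one property graph.
people_rows = [(1, "Ada"), (2, "Grace")]
knows_rows = [(1, 2, 2021)]

nodes = [Node(pid, {"Person"}, {"name": name}) for pid, name in people_rows]
edges = [Edge(s, d, {"KNOWS"}, {"since": y}) for s, d, y in knows_rows]

# A pattern-matching query in the spirit of GQL/SQL-PGQ:
# the names of people that Ada knows.
by_id = {n.id: n for n in nodes}
ada = next(n for n in nodes if n.props["name"] == "Ada")
friends = [by_id[e.dst].props["name"]
           for e in edges if e.src == ada.id and "KNOWS" in e.labels]
print(friends)  # ['Grace']
```

In GQL or SQL/PGQ the final query would instead be written declaratively as a graph pattern; the list comprehension above only mimics its effect on this tiny graph.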
About the Speaker
Angela Bonifati is a Distinguished Professor of Computer Science at Lyon 1 University and at the CNRS LIRIS research laboratory, where she leads the Database Group. She has also been an Adjunct Professor at the University of Waterloo, Canada, since 2020, and a Senior Member of the French University Institute (IUF) since 2023. Her current research interests span several aspects of data management, including graph databases, knowledge graphs, and data integration, as well as their applications to data science and artificial intelligence. She has co-authored numerous publications in top venues in the data management field, including five Best Paper Awards, two books, and an invited paper in ACM SIGMOD Record (2018). She is the recipient of an ERC Advanced Grant (2024) and is an ACM Fellow. Her work has been recognized with the VLDB Women in DB Research Award (2025), the IEEE TCDE Impact Award (2023), and an ACM SIGMOD Research Highlights Award (2023). She is the General Chair of VLDB 2026 and has previously served as Program Chair of IEEE ICDE 2025, ACM SIGMOD 2022, and EDBT 2020. She is currently an Associate Editor for the Proceedings of the VLDB Endowment (Volume 19), IEEE TKDE, and ACM TODS. She is also the current President of ACM SIGMOD (2025–2029), a member of the IEEE Technical Committee on Data Engineering (2024–2029), and a member of the PVLDB Board of Trustees (2024–2029).
Trigger Graphs & Probabilistic Equivalence: Towards Scalable and Efficient Neurosymbolic Learning and Inference
It has long been a common belief that symbolic reasoning does not scale. However, is this still true? In this talk, I will present trigger graphs, a symbolic reasoning technique that supports exact Datalog reasoning in the order of seconds over graph stores with billions of edges. Unlike the majority of commercial and open-source reasoning engines, trigger graphs avoid redundant computation during reasoning by organizing the computation in a graph-like structure. This structure allows trigger graphs to support probabilistic reasoning that is even more efficient than approximate techniques. This year, trigger graphs became the driving force behind a new probabilistic logic program semantics, the equivalence semantics. Under the equivalence semantics, a probabilistic logic program induces a probability distribution over all possible equivalence relations between symbols, instead of a probability distribution over all possible subsets of probabilistic facts, as is standard in the relevant literature. We show that the equivalence semantics overcomes the learning and inference limitations of state-of-the-art neurosymbolic techniques, improving results on link prediction, rule mining, and symbolic grounding by up to 42%.
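To make the setting concrete, the kind of Datalog reasoning the talk targets can be illustrated with a toy fixpoint computation in Python. This is a deliberately naive evaluation over invented edge data, not the trigger-graph algorithm itself; trigger graphs organize this computation so that the redundant re-derivations performed by the loop below are avoided:

```python
# Naive bottom-up evaluation of the classic transitive-closure program:
#   reachable(X,Y) :- edge(X,Y).
#   reachable(X,Z) :- edge(X,Y), reachable(Y,Z).
# Edge data is an invented toy chain 1 -> 2 -> 3 -> 4.
edges = {(1, 2), (2, 3), (3, 4)}

reachable = set(edges)          # first rule: every edge is reachable
changed = True
while changed:                  # iterate the second rule to fixpoint
    changed = False
    for (x, y) in edges:
        # Note the redundancy: every round re-scans all derived facts,
        # which is exactly what trigger graphs are designed to avoid.
        for (a, z) in list(reachable):
            if a == y and (x, z) not in reachable:
                reachable.add((x, z))
                changed = True

print(sorted(reachable))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

On billions of edges, this naive loop is hopeless; the talk's point is that organizing the derivations in a graph-like structure makes exact reasoning at that scale feasible.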
About the Speaker
Efthymia Tsamoura is a Technical Expert at Huawei Labs. From 2019 to 2025, she was a Senior Researcher at Samsung AI, Cambridge, UK. In 2016, she was awarded a prestigious early-career fellowship from the Alan Turing Institute, UK, for her work on logic and databases; before that, she was a Postdoctoral Researcher in the Department of Computer Science at the University of Oxford. Her main research interests lie in the areas of logic, knowledge representation and reasoning, and neurosymbolic learning; her recent results include scaling symbolic reasoning to billions of triples and addressing open problems in neurosymbolic learning. Her research has been published in top-tier AI and database venues (NeurIPS, ICML, SIGMOD, VLDB, PODS, AAAI, IJCAI, etc.). In 2024, Efthymia was invited by the Royal Society, UK, to the Frontiers of Science on AI meeting to discuss the risks of AI and ways to address them. More details can be found at https://tsamoura.github.io