Keynotes/tutorials

We are happy to announce the following keynotes and tutorials at the conference:

Mark Burgess

Bio: Mark Burgess is a theoretician and practitioner in the area of information systems, whose work has focused largely on distributed information infrastructure. He wrote an early popular book on C programming, which is now freely available through the Free Software Foundation. He was an early contributor to the Free Software Foundation, releasing CFEngine in 1993, which remains under the GPL. He is known particularly for his work on configuration management and Promise Theory. He was the principal founder of CFEngine, a co-founder of Aljabr, and is now the founder of ChiTek-i. He is Emeritus Professor of Network and System Administration at Oslo University College. He is the author of numerous books, articles, and papers on topics ranging from physics, networks, and systems to fiction, and he writes a blog on issues of science and IT industry concerns. Today, he works as an advisor on science and technology matters all over the world.

More information can be found on his personal web page.

Time of talk: October 9, 09:10-10:10
Title: Promise Theory and Semantic Spacetime – A Theory for Agent Systems

Abstract:
Promise Theory was first presented at DSOM Barcelona in 2005, then again at Google Santa Monica in 2008; it has just turned 20 years old. PT originally arose from the search for a “physics of autonomous systems” in order to understand the globally adopted CFEngine configuration system and why it was more successful than the traditional deontic “obligation logic” models used by academia. Autonomous agents are the natural “atoms” from which systems are built. Since then, the IT industry has embraced promise-compatible ideas in SOA and microservices, and systems like Kubernetes famously employ promise-theoretic ideas. Academia has all but ignored Promise Theory, clinging to traditional deontic logics, but now, with the sudden popularity of “AI”, the industry seems to be trying to reinvent Promise Theory. In this talk, Mark Burgess will explain what Promise Theory is, how it led to a deeper understanding of knowledge representation with Semantic Spacetime, and what we need to do to develop it today.

Harald Martens
Harald Martens, Norwegian University of Science and Technology
Bio: Harald Aagaard Martens (b. 1946 in Kristiansand, Norway) has an MSc in industrial biochemistry and a Dr.techn. in chemometrics and multivariate calibration from NTNU. He has written over two hundred research papers and several books on multivariate data modelling, which in total have been cited over 28,000 times (Google Scholar, 2025), and he is a member of the Norwegian Academy of Technological Sciences (NTVA).

He is presently Professor Emeritus in the Big Data Cybernetics group, Dept. of Engineering Cybernetics, Norwegian University of Science and Technology (NTNU), Trondheim, Norway. He is also the founder and research leader of Idletechs AS (quantitative, interpretable machine learning for Big Data from science and technology).

Recent overviews of his modelling activities and views on today’s and tomorrow’s AI:
H. Martens (2023). Causality, machine learning and human insight. Analytica Chimica Acta, 9 October 2023, 341585.

H. Martens (2025). A Greener, Safer and More Understandable AI for Natural Science and Technology. Wiley Analytical Science, 18 January 2025.

For more information, see https://www.ntnu.edu/employees/harald.martens, https://idletechs.com/, and https://scholar.google.com/citations?hl=no&user=60HNWsYAAAAJ&view_op=list_works&sortby=pubdate

Time of talk: October 9, 15:30-16:30
Title: Green, Safe, Understandable and Cost-effective AI for well-structured BIG DATA

Abstract:
Two science cultures, shared visions? As a chemometrician, I am honored to address your Tsetlin machine community. Our cultures have different scientific origins; we approach machine learning with different perspectives and modelling tools, and solve somewhat different problems. But I believe that we complement each other and share several values and goals, e.g. reducing the energy footprint and interpretation challenges of today’s mainstream machine learning. Moreover, we have faced some of the same obstacles vis-à-vis today’s mainstream AI (impressive, but demanding and over-sold black-box ANN/CNN/DL; dominating commercial actors; a paralyzed, uninformed public; initially slow academic and market acceptance). I see several potentials for mutually beneficial cooperation towards a new and better AI industry.

Knowledge needed: In industry, medicine, environmental science, economics, and in Real-World society at large, more environmentally friendly, safe and commercially competitive processes and products are needed. This requires existing knowledge to be put to practical use – but also to be challenged, corrected and expanded as needed, based on new observations.

The material domain is simpler than the immaterial domain of e.g. natural language: Today’s torrents of modern measurements from the “material” domain of the physical world are too complex for humans to handle without mathematical modelling and statistical validation. But compared to data from the “immaterial” domain of natural language, continuous measurements from spectrometers, thermal, RGB and hyperspectral cameras, etc. – in e.g. industrial process monitoring or space-based Earth Observation – are relatively well-structured. The reason is that they are strongly constrained – by natural laws, geological history, biological evolution and human experience. Moreover, for each measurement type a lot is known already. Therefore, such data are well suited for minimalistic, hybrid machine learning.

CIM-ML: Simple, fast, flexible and powerful: This lecture is about the hybrid modelling tools that we have developed for BIG DATA from Real World science and technology. Goal: To combine the best – and avoid the worst – of classical mechanistic modelling, statistical learning and mainstream black box ML. Our hybrid Continuous, Interpretable, Minimalistic Machine Learning (CIM-ML) combines extensions of well-proven, interpretable subspace modelling tools from chemometrics with well-proven dynamics modelling tools from cybernetics and data compression techniques from signal processing – i.e. pragmatic, mathematizing fields outside mainstream AI.

Streams of high-dimensional measurements: In each application type, we employ available domain-specific knowledge to LINEARIZE and SIMPLIFY the input data, and to quantify and extract KNOWN variation pattern types. Then we continue to look for, quantify and extract unexpected but clear UNKNOWN variation patterns and anomalies, until the unmodelled residuals show “random” noise only. This minimalistic, self-correcting, hybrid approach gives compact and comprehensive data modelling. The quantified knowns and unknowns are displayed and used for classification, prediction and control, with minimalistic models, fast computations, strong file compression, good anomaly warnings, and new human insight.
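The known-then-unknown idea can be sketched in a few lines of NumPy: fit the measurements against KNOWN patterns by least squares, then mine the residuals for the strongest UNKNOWN pattern. This is only an illustrative toy on simulated data, not the Idletechs CIM-ML software; the synthetic patterns and all variable names are hypothetical.

```python
# Illustrative sketch of known-then-unknown subspace extraction.
# Not the CIM-ML implementation; data and names are made up.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 200, 50

# One KNOWN pattern (e.g. a reference spectrum) ...
known = np.sin(np.linspace(0, np.pi, n_channels))[None, :]      # (1, 50)
# ... plus one UNKNOWN pattern secretly present in the data.
unknown = np.cos(np.linspace(0, 3 * np.pi, n_channels))         # (50,)

# Simulated measurements: known + unknown contributions + noise.
X = (rng.normal(size=(n_samples, 1)) @ known
     + rng.normal(size=(n_samples, 1)) * unknown
     + 0.01 * rng.normal(size=(n_samples, n_channels)))

# Step 1: quantify and remove the KNOWN variation (least squares).
scores_known, *_ = np.linalg.lstsq(known.T, X.T, rcond=None)
residual = X - scores_known.T @ known

# Step 2: extract the dominant UNKNOWN pattern from the residual
# (first right singular vector = first residual principal component).
_, s, vt = np.linalg.svd(residual, full_matrices=False)
found = vt[0]

# The recovered direction should align with the hidden pattern
# (up to sign); iterate step 2 until the residual is pure noise.
corr = abs(np.corrcoef(found, unknown)[0, 1])
print(corr)
```

In a real application, step 2 would be repeated, inspecting and naming each extracted component, until only noise-level residuals remain.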

Moshe Vardi
Moshe Vardi (remotely), Rice University, Houston, Texas
Bio: Moshe Y. Vardi is a University Professor and the George Distinguished Service Professor in Computational Engineering at Rice University. He is the author or co-author of over 800 papers, as well as two books. He is the recipient of several scientific awards, a fellow of several societies, and a member of several honorary academies. He holds ten honorary titles. He is a Senior Editor of Communications of the ACM, the premier publication in computing, focusing on the societal impact of information technology.

Time of talk: October 10, 15:30-16:30
Title: A New Paradigm – A New Computer Science?

Abstract:
75 years after the birth of computing as a discipline with the founding of the Association for Computing Machinery, we seem to be witnessing a Kuhnian paradigm shift in computer science. The old paradigm of computer science as a science of formal models seems to be out, and a new paradigm of computer science as a data-driven discipline is in.

I argue that the paradigm-shift narrative has been overplayed. In reality, scientific paradigms glide rather than shift. Good old formal computer science is as important as ever.

But there has been a paradigm shift in how computing research is carried out. The center of gravity in computing research used to be in academia, where its goal was to contribute to the common good. Today, this center of gravity has moved to industry, where the goal is to maximize corporate profits.

Tutorials

Ole-Christoffer Granmo, University of Agder
Bio: Prof. Ole-Christoffer Granmo is the Founding Director of the Centre for Artificial Intelligence Research (CAIR) at the University of Agder, Norway. He obtained his master’s degree in 1999 and his PhD in 2004, both from the University of Oslo, Norway. In 2018, he created the Tsetlin machine, for which he was awarded AI research paper of the decade by the Norwegian Artificial Intelligence Consortium (NORA) in 2022. Dr. Granmo has authored and co-authored 180+ refereed papers, with nine paper awards, in machine learning, encompassing learning automata, bandit algorithms, Tsetlin machines, Bayesian reasoning, reinforcement learning, and computational linguistics. He has further coordinated 7+ research projects and graduated 55+ master’s students and ten PhD students. Dr. Granmo is also a co-founder of NORA. Apart from his academic endeavors, he co-founded Anzyz Technologies AS and is the Chair of the Technical Steering Committee at Literal Labs.

Time of talk: October 8, 14:00-15:00
Title of talk: Introduction to the Graph Tsetlin Machine

Tutorial abstract:
Pattern recognition with concise and flat AND-rules makes the Tsetlin Machine (TM) both interpretable and efficient, while the power of Tsetlin automata enables accuracy comparable to deep learning on an increasing number of datasets. This tutorial gives an introduction to the Graph Tsetlin Machine (GraphTM), which learns interpretable deep clauses from graph-structured input. Moving beyond flat, fixed-length input, the GraphTM becomes more versatile, supporting sequences, grids, relations, and multimodality. Through message passing, the GraphTM builds nested deep clauses to recognize sub-graph patterns with exponentially fewer clauses, increasing both interpretability and data utilization. The tutorial also covers various applications. For image classification, the GraphTM preserves interpretability and achieves 3.86 percentage points higher accuracy on CIFAR-10 than a convolutional TM. For tracking action coreference, faced with increasingly challenging tasks, the GraphTM outperforms other reinforcement learning methods by up to 20.6 percentage points. In recommendation systems, it tolerates increasing noise better than a Graph Convolutional Neural Network (GCN); e.g., for a noise ratio of 0.1, the GraphTM obtains 89.86% accuracy compared to the GCN’s 70.87%. Finally, for viral genome sequence data, the GraphTM is competitive with BiLSTM-CNN and GCN accuracy-wise, training 2.5x faster than the GCN. The GraphTM’s application to these varied fields demonstrates how graph representation learning and deep clauses bring new possibilities for TM learning.
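As background for the tutorial, the flat AND-rule that the GraphTM generalizes can be sketched in a few lines. This is a minimal illustrative evaluation of a single conjunctive clause over binary literals, not the GraphTM implementation; the function name and the fixed literal sets are hypothetical (in a real TM, the included literals are chosen by Tsetlin automata during training).

```python
# Minimal sketch (not the GraphTM code): a flat TM clause is a
# conjunction (AND) over included literals, where each literal is
# an input bit or its negation.

def eval_clause(x, include_pos, include_neg):
    """Evaluate one AND-rule clause on a binary input vector x.

    include_pos / include_neg are index sets of included positive
    and negated literals (fixed here purely for illustration).
    """
    return all(x[i] == 1 for i in include_pos) and \
           all(x[i] == 0 for i in include_neg)

# Example: the clause "x0 AND NOT x2" fires only when bit 0 is
# set and bit 2 is clear.
print(eval_clause([1, 1, 0], include_pos={0}, include_neg={2}))  # True
print(eval_clause([1, 1, 1], include_pos={0}, include_neg={2}))  # False
```

A TM votes by summing many such clauses; the GraphTM extends the idea by letting clauses match sub-graph patterns via message passing rather than a flat, fixed-length bit vector.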

Vladimir Zadorozhny, University of Pittsburgh
Bio: Prof. Vladimir Zadorozhny is a Professor in DINS, a Core Faculty Member of the University of Pittsburgh Biomedical Informatics Training Program, and an Adjunct Professor at the Faculty of Engineering and Science of the University of Agder. He received his Ph.D. in 1993 from the Institute for Problems of Informatics, Russian Academy of Sciences, in Moscow. Before coming to the USA, he was a Principal Research Scientist at the Institute of System Programming, Russian Academy of Sciences. From 1998 he worked as a Research Associate at the University of Maryland Institute for Advanced Computer Studies at College Park, and he joined the University of Pittsburgh in 2001. His research interests include information integration, data fusion, complex adaptive systems and crowdsourcing, query optimization in resource-constrained distributed environments, sensor data management, and scalable architectures for wide-area environments with heterogeneous information servers. His research has been supported by the NSF, the EU, and the Norwegian Research Council. Vladimir is a recipient of a Fulbright Scholarship for 2014-2015. He has received several best paper awards and has chaired and served on program committees of multiple database and distributed computing conferences and workshops. His specific interests within CAIR relate to the application of scalable data fusion methods to enable efficient data mining and machine learning in complex domains, such as large-scale monitoring of social dynamics and reliability assessment in biomedical data.

Time of talk: October 8, 15:30-16:30
Title of talk: Bridging Quantum Computing and the Tsetlin Machine

Tutorial abstract:
The implementation of quantum computing is based on quantum processes, and it is commonly explained using quantum mechanics. However, a deep understanding of quantum mechanics requires significant effort, which often deters exploration of the synergy between quantum computing and other advanced computational methods. It is therefore useful to separate the concept of quantum computing from its implementation, much as has been done for classical computing. For instance, deep knowledge of electronics and flip-flop circuits is not required to understand how classical computers operate.

In this tutorial, Vladimir Zadorozhny will explain the nature of quantum computing and how quantum computers process information, without requiring participants to delve deeply into quantum mechanics. He will introduce an approach that interprets quantum computing as a form of information fusion, focused on reconstructing objects from multiple observations and projections. Since information fusion typically involves resolving uncertainties caused by redundancy and inconsistencies between observations, this perspective places quantum computing within a clear logical framework. Such a framework makes it possible to explore the potential synergy between quantum computing and the Tsetlin Machine, while also outlining an interesting research agenda for the TM community.