Keynotes

We are happy to announce the following keynote speakers at the conference:

Erol Gelenbe, Institute of Theoretical and Applied Informatics, Polish Academy of Sciences

Time of talk: October 9, 09:10 – 10:10
Title: TBA

Harald Martens, Norwegian University of Science and Technology
Bio: Harald Aagaard Martens (b. 1946 in Kristiansand, Norway) has an MSc in industrial biochemistry and a Dr.techn. in chemometrics and multivariate calibration from NTNU. He has written over two hundred research papers and several books on multivariate data modelling, which in total have been cited over 28,000 times (Google Scholar, 2025), and he is a member of the Norwegian Academy of Technological Sciences (NTVA).

He is presently Professor Emeritus in the Big Data Cybernetics group, Department of Engineering Cybernetics, Norwegian University of Science and Technology (NTNU), Trondheim, Norway. He is also the founder and research leader of Idletechs AS (quantitative, interpretable machine learning for big data from science and technology).

Recent overviews of his modelling activities and views on today’s and tomorrow’s AI:

H. Martens (2023), “Causality, machine learning and human insight”, Analytica Chimica Acta, 9 October 2023, 341585

H. Martens (2025), “A Greener, Safer and More Understandable AI for Natural Science and Technology”, Wiley Analytical Science, 18 January 2025

For more information, see https://www.ntnu.edu/employees/harald.martens, https://idletechs.com/, and https://scholar.google.com/citations?hl=no&user=60HNWsYAAAAJ&view_op=list_works&sortby=pubdate

Time of talk: October 9, 15:30 – 16:30
Title: Green, Safe, Understandable and Cost-effective AI for well-structured BIG DATA

Abstract:
Two science cultures, shared visions? As a chemometrician, it is an honor to address your Tsetlin machine community. Our cultures have different scientific origins; we approach machine learning with different perspectives and modelling tools, and we solve somewhat different problems. But I believe that we complement each other and share several values and goals, e.g. to reduce the energy footprint and the interpretation challenges of today’s mainstream machine learning. Moreover, we have faced some of the same obstacles vis-à-vis today’s mainstream AI (impressive, but demanding and over-sold black-box ANN/CNN/DL; dominating commercial actors; a paralyzed, uninformed public; initially slow academic and market acceptance). I see several potentials for mutually beneficial cooperation towards a new and better AI industry.

Knowledge needed: In industry, medicine, environmental science, economics and in Real-World society at large, more environmentally friendly, safe and commercially competitive processes and products are needed. This requires existing knowledge to be put to practical use – but also to be challenged, corrected and expanded as needed, based on new observations.

The material domain is simpler than the immaterial domain of e.g. natural language: Today’s torrents of modern measurements from the “material” domain of the physical world are too complex for humans to handle without mathematical modelling and statistical validation. But compared to data from the “immaterial” domain of natural language, continuous measurements from spectrometers, thermal, RGB and hyperspectral cameras etc. in e.g. industrial process monitoring or space-based Earth Observation are relatively well-structured. The reason is that they are strongly constrained – by natural laws, geological history, biological evolution and human experience. Moreover, for each measurement type a lot is known already. Therefore, they are well suited for minimalistic, hybrid machine learning.

CIM-ML: Simple, fast, flexible and powerful: This lecture is about the hybrid modelling tools that we have developed for BIG DATA from Real World science and technology. Goal: To combine the best – and avoid the worst – of classical mechanistic modelling, statistical learning and mainstream black box ML. Our hybrid Continuous, Interpretable, Minimalistic Machine Learning (CIM-ML) combines extensions of well-proven, interpretable subspace modelling tools from chemometrics with well-proven dynamics modelling tools from cybernetics and data compression techniques from signal processing – i.e. pragmatic, mathematizing fields outside mainstream AI.

Streams of high-dimensional measurements: In each application type, we employ available domain-specific knowledge to LINEARIZE and SIMPLIFY the input data, then quantify and extract KNOWN variation pattern types. Then we continue to look for, quantify and extract unexpected but clear UNKNOWN variation patterns and anomalies, until the unmodelled residuals show “random” noise only. This minimalistic, self-correcting, hybrid approach gives compact and comprehensive data modelling. The quantified knowns and unknowns are displayed and used for classification, prediction and control, with minimalistic models, fast computations, strong file compression, good anomaly warnings, and new human insight.
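The “quantify knowns, then extract unknowns from the residual” idea above can be illustrated with a minimal NumPy sketch. This is an illustrative toy, not the speaker’s actual CIM-ML software: the data are synthetic, and the 1% explained-variance threshold for separating unknown patterns from noise is an arbitrary choice for this example.

```python
# Toy sketch of "quantify KNOWN patterns, then extract UNKNOWN patterns
# from the residual until only noise remains" -- synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 50, 100
x = np.linspace(0, 1, n_channels)

# KNOWN variation patterns: two reference "spectra" (e.g. from a library).
known = np.stack([np.exp(-((x - 0.3) ** 2) / 0.01),
                  np.exp(-((x - 0.6) ** 2) / 0.02)])      # shape (2, 100)

# UNKNOWN pattern: an unexpected interferent not in the library.
unknown = np.sin(2 * np.pi * 3 * x)

# Simulated measurements: known + unknown contributions + noise.
c_known = rng.uniform(0, 1, (n_samples, 2))
c_unknown = rng.uniform(0, 0.5, (n_samples, 1))
X = (c_known @ known + c_unknown @ unknown[None, :]
     + 0.01 * rng.standard_normal((n_samples, n_channels)))

# Step 1: quantify the KNOWN patterns by least squares and remove them.
coef, *_ = np.linalg.lstsq(known.T, X.T, rcond=None)      # (2, n_samples)
residual = X - coef.T @ known

# Step 2: extract UNKNOWN patterns from the residual by SVD (PCA-like),
# keeping components until what remains looks like random noise.
U, s, Vt = np.linalg.svd(residual - residual.mean(axis=0),
                         full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
n_unknown = int(np.sum(explained > 0.01))  # crude noise threshold

print("recovered unknown components:", n_unknown)
```

With this noise level the SVD of the residual recovers a single dominant unknown component (the interferent), after which the remaining singular values sit at the noise floor.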

Moshe Vardi (remotely), Rice University, Houston Texas
Bio: Moshe Y. Vardi is a University Professor and the George Distinguished Service Professor in Computational Engineering at Rice University. He is the author or co-author of over 800 papers, as well as two books. He is the recipient of several scientific awards, a fellow of several societies, and a member of several honorary academies. He holds ten honorary titles. He is a Senior Editor of Communications of the ACM, the premier publication in computing, focusing on the societal impact of information technology.

Time of talk: October 10, 15:30 – 16:30
Title: A New Paradigm – A New Computer Science?

Abstract:
75 years after the birth of computing as a discipline with the founding of the Association for Computing Machinery, we seem to be witnessing a Kuhnian paradigm shift in computer science. The old paradigm of computer science as a science of formal models seems to be out, and a new paradigm of computer science as a data-driven discipline is in.

I argue that the paradigm-shift paradigm has been overplayed. In reality, scientific paradigms glide rather than shift. Good old formal computer science is as important as ever.

But there has been a paradigm shift in how computing research is being carried out. The center of gravity in computing research used to be in academia, where the goal was to contribute to the common good. Today this center of gravity has moved to industry, where the goal is to maximize corporate profits.