Tutorial: Prof. Ole-Christoffer Granmo
3:00–5:00 PM, 28 August 2024, SENSQ 5317 @ University of Pittsburgh
Tutorial Title: An Introduction to the Tsetlin Machine
Professor Ole-Christoffer Granmo is the Founding Director of the Centre for Artificial Intelligence Research (CAIR), University of Agder, Norway. He obtained his master’s degree in 1999 and his PhD in 2004, both from the University of Oslo, Norway. In 2018, he created the Tsetlin machine, for which he was awarded the AI research paper of the decade by the Norwegian Artificial Intelligence Consortium (NORA) in 2022. Prof. Granmo has authored over 180 refereed papers, with eight paper awards, in machine learning, encompassing learning automata, bandit algorithms, Tsetlin machines, Bayesian reasoning, reinforcement learning, and computational linguistics. He has further coordinated 7+ research projects and graduated 55+ master’s students and nine PhD students. Prof. Granmo is also a co-founder of NORA. Apart from his academic endeavors, he co-founded Anzyz Technologies AS and Tsense Intelligent Healthcare AS, and is an advisor at Literal Labs.
Tutorial Abstract:
The Tsetlin machine is a new universal artificial intelligence (AI) method that learns simple logical rules to understand complex things, similar to how an infant uses logic to learn about the world. Because the rules are logical, they are understandable to humans. Yet, unlike other intrinsically explainable techniques, Tsetlin machines are drop-in replacements for neural networks, supporting classification, convolution, regression, reinforcement learning, auto-encoding, language models, and natural language processing. They are further ideally suited for cutting-edge, low-cost hardware solutions, enabling nanoscale intelligence, ultralow energy consumption, energy harvesting, unrivaled inference speed, and competitive accuracy. In this tutorial, I cover the basics and recent advances of Tsetlin machines, including inference and learning, advanced architectures, and applications.
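To make the rule-based inference concrete, here is a minimal Python sketch of how a trained Tsetlin machine classifies a binarized input: each clause is a conjunction of literals (input bits, plain or negated), positive clauses vote for the class and negative clauses against it, and the sign of the vote sum decides. The clause sets and the XOR-style example below are illustrative assumptions for this sketch, not material from the tutorial, and learning (the Tsetlin automata feedback) is omitted.

```python
# Minimal sketch of Tsetlin machine inference (learning omitted).
# A clause is represented by two index lists: bits it includes directly
# and bits it includes negated.

def clause_output(x, include, include_negated):
    """A clause fires (returns 1) only if every one of its literals holds."""
    return int(all(x[i] == 1 for i in include) and
               all(x[i] == 0 for i in include_negated))

def classify(x, positive_clauses, negative_clauses):
    """Positive clauses vote +1, negative clauses vote -1; the sign decides."""
    votes = sum(clause_output(x, inc, neg) for inc, neg in positive_clauses)
    votes -= sum(clause_output(x, inc, neg) for inc, neg in negative_clauses)
    return 1 if votes >= 0 else 0

# Hypothetical hand-built clauses recognizing XOR of two input bits:
# class 1 iff (x0 AND NOT x1) OR (NOT x0 AND x1).
pos = [([0], [1]), ([1], [0])]          # vote for class 1
neg = [([0, 1], []), ([], [0, 1])]      # vote against class 1

print(classify([1, 0], pos, neg))  # → 1
print(classify([1, 1], pos, neg))  # → 0
```

In a real Tsetlin machine the include/exclude decisions for each literal are learned by teams of Tsetlin automata; this sketch only shows the interpretable clause-voting structure that the tutorial refers to.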
Keynote I: Dr. Selmer Bringsjord
9:00 AM, 29 August 2024 @ University of Pittsburgh
Keynote title: Tsetlin Machines and the Stunning Logical Power of Human Minds
Dr. Selmer Bringsjord specializes in the logico-mathematical and philosophical foundations of artificial intelligence (AI) and cognitive science (CogSci), in collaboratively building AI systems/robots primarily on the basis of computational logic, and in the logic-based and theorem-guided modeling and simulation of rational, human-level-and-above cognition. He suspects that the vast majority of what he has learned that is truly valuable has been acquired via logic-based learning.
Keynote I Abstract:
The human mind is not only — as many say — the smartest thing in the known universe; it’s specifically a logical marvel. We now know, for example, courtesy of reverse mathematics, that members of our species are able to reason rigorously over complex formal content expressed in third-order logic (TOL). (Some of the results of Gödel from long ago anticipated this.) Moreover, since some of us have knowledge of and beliefs about others who reason in such ways, our logical power extends to reasoning in robust “theory-of-mind” fashion. How can this be? How can a helpless infant whose brain is anemic rise to these logical heights in the span of a few short years? The answer has two intertwined parts: One, we have innate, i.e. unlearned, logical capacity. Two, this capacity supports learning of the kind that Tsetlin machines provide. This two-part answer is (a) diametrically at odds with such mistaken views as that statistical/numerical deep learning is the foundation of human-level logical power, and (b) explains why such agents as those in the GPT-k series are comically bad reasoners.
Keynote II: Dr. Dilip Vasudevan & Dr. Christoph Kirst
2:00 PM, 29 August 2024 @ University of Pittsburgh
Keynote title: Super-conducting Tsetlin Machines and Neuromorphic Computing
Dr. Dilip Vasudevan is a career research scientist in the Computer Science Department of the Applied Mathematics and Computational Research (AMCR) Division at Lawrence Berkeley National Laboratory (LBNL) in Berkeley, California. His current research interests include design-space exploration of beyond-Moore computing architectures, neuromorphic computing, superconducting digital electronic system design, hardware/software co-design, and the development of new computing models and systems for future supercomputing systems. His previous work was in ultra-low-power subthreshold system design for the Internet of Things (IoT). He obtained his Ph.D. in Informatics (Computer Engineering) from the University of Edinburgh, Scotland, UK. Before joining LBNL, he was an R&D engineer with Synopsys, Inc., working on DDR controller IPs used in smartphones. He also held positions at University College Cork, Ireland, where he worked on reversible computing system design; the University of Chicago, where he was one of the architects of the 10×10 microarchitecture for the DARPA PERFECT program; and the University of Virginia in Charlottesville, where he was one of the designers of a batteryless System-on-Chip (SoC) for the NSF ASSIST program. He received a best paper award at the ASPLOS conference in 2020. He has received several grants from DOE, ARO, IARPA, and the LBNL LDRD program as PI and co-PI, and holds several US patents and publications in IEEE, ACM, and Nature journals and conferences.
Dr. Christoph Kirst is an Assistant Professor of theoretical neuroscience at the University of California, San Francisco, and a Faculty Scientist at Lawrence Berkeley National Laboratory, studying the mechanisms underlying flexible brain computation and function and their use in artificial neural networks and hardware implementations. He studied mathematics and theoretical physics at the Universities of Oxford, Cambridge, Berlin, and Göttingen and obtained a PhD in theoretical physics from the Max Planck Institute for Dynamics and Self-Organization. He was named a Fellow for Physics and Biology at Rockefeller University and a Kavli Fellow at the Kavli Neural Systems Institute. His recent interests include developing analysis tools for mapping whole-brain structures and activity at cellular resolution, studying dynamics and information processing in high-dimensional neuronal activity recordings, and quantitative behavioral analysis, in order to better understand self-organizing and adaptive brain function. His theoretical work on flexible information-routing mechanisms has recently been transferred to neuromorphic computing applications, including superconducting technology.
Keynote II Abstract:
Current trends in foundational AI models have unleashed new challenges in system design for emerging generative AI applications. When scaled, these systems become highly memory-intensive and communication-congested, which current von Neumann architectures cannot handle efficiently, leading to high energy consumption. Alternative paradigms in computing, logic, architectures, and devices are needed to tackle this energy crisis. Superconducting-logic-based systems are one promising avenue for lowering energy consumption by several orders of magnitude.
In this presentation, we will look at recent promising computing paradigms developed using superconducting electronics (SCE) and their advantages for energy efficiency and scalability. After introducing superconducting technologies, we will present our new computing model, Super-Tsetlin, a superconducting Tsetlin machine designed using superconducting rapid single-flux-quantum (RSFQ) technology, and demonstrate some applications. We will then discuss superconducting temporal design for a set of hard compute problems and its benefits. Finally, we will introduce innovative neuromorphic computing frameworks for high-performance, energy-efficient computation, including neuromorphic oscillator networks and their implementations and applications in superconducting technology.
We will conclude with a future vision towards building energy efficient systems for foundational models for AI and neuromorphic computing paradigms.