Tutorials

Tutorial 1

Robust Certificates for Neural Networks

Venue: Bali

Time: TBD

Deep neural networks face the critical challenge of adversarial vulnerability: imperceptible perturbations can drastically alter predictions. Empirical defenses, such as adversarial training, improve robustness against known attacks but often fail under stronger or adaptive adversaries. In contrast, certified robustness provides provable guarantees that a model's prediction remains stable within a prescribed perturbation region. This tutorial introduces two complementary approaches: Lipschitz-constrained networks, which enforce global sensitivity bounds via spectral norm regularization, convex potentials, and contractive architectures, yielding deterministic certificates; and randomized smoothing, a probabilistic method that transforms any classifier into a certifiably robust model by adding Gaussian noise and averaging predictions. We will also highlight applications across natural language processing, data-centric AI, and foundation models, illustrating how certification principles extend to robustness against diverse perturbations, data quality issues, and large-scale multimodal systems.
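
As a rough illustration of the randomized-smoothing idea described above (a minimal sketch, not the speakers' implementation): the smoothed classifier votes over Gaussian-perturbed copies of an input and converts the top-class vote fraction into a certified L2 radius. The toy classifier, the noise level sigma, and the sample count are illustrative assumptions; a proper certificate would use a confidence lower bound on the top-class probability rather than the raw empirical frequency.

```python
# Minimal sketch of randomized smoothing (illustrative only).
import numpy as np
from scipy.stats import norm

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000):
    """Classify x with the Gaussian-smoothed version of base_classifier and
    return (predicted class, certified L2 radius). Simplified: uses the
    empirical top-class frequency instead of a confidence lower bound."""
    noise = np.random.randn(n_samples, *x.shape) * sigma
    votes = base_classifier(x[None, :] + noise)        # labels under noise
    counts = np.bincount(votes)
    top_class = int(counts.argmax())
    p_top = counts[top_class] / n_samples
    if p_top <= 0.5:
        return None, 0.0                               # abstain: no certificate
    return top_class, sigma * norm.ppf(p_top)          # certified L2 radius

# Toy usage: a linear two-class "classifier" on 2-D inputs.
w = np.array([[1.0, -1.0], [-1.0, 1.0]])
toy_classifier = lambda batch: (batch @ w.T).argmax(axis=1)
label, radius = smoothed_predict(toy_classifier, np.array([2.0, -1.0]))
print(label, radius)
```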

Speakers

Yang Cao

Institute of Science Tokyo

Yang Cao is an Associate Professor in the Department of Computer Science at the Institute of Science Tokyo (Science Tokyo, formerly Tokyo Tech), where he directs the Trustworthy Data Science and AI (TDSAI) Lab. He is passionate about studying and teaching algorithmic trustworthiness in data science and AI. Two of his papers on data privacy were selected as best paper finalists at the top-tier conferences IEEE ICDE 2017 and ICME 2020. He received the IEEE Computer Society Japan Chapter Young Author Award 2019 and the Database Society of Japan Kambayashi Young Researcher Award 2021. His research projects have been supported by JSPS, JST, MSRA, KDDI, LINE, WeBank, etc.

Blaise Delattre

Institute of Science Tokyo

Blaise Delattre is a Postdoctoral Researcher in the TDSAI Lab, working with Prof. Yang Cao. He received his PhD in Computer Science from PSL University, where his research focused on certified adversarial robustness of deep neural networks, with contributions to Lipschitz-constrained architectures and randomized smoothing. His current research focuses on the robustness and reliability of large language, vision–language, and other foundation models.

Tutorial 2

Question Answering over Knowledge Bases in the Era of Large Language Models

Venue: Sydney

Time: TBD

Knowledge Base Question Answering (KBQA) has emerged as a crucial paradigm for providing accurate, explainable, and domain-specific answers to natural language queries. With the rapid advancement of Large Language Models (LLMs), QA systems can leverage their powerful language understanding and generation capabilities. However, LLMs often struggle with hallucination, static knowledge, and limited interpretability. Integrating structured Knowledge Bases (KBs) addresses these limitations by providing explicit, updatable, and domain-specific factual knowledge. This tutorial provides an overview of KBQA in the era of LLMs. We introduce fundamental concepts of KBQA, discuss the strengths and limitations of LLMs and KBs, and survey state-of-the-art methods, including retrieval-augmented generation and knowledge integration approaches. Practical considerations, applications, and limitations are highlighted throughout.
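
As a hedged sketch of the retrieval-augmented generation pattern mentioned above (purely illustrative; the triples, the keyword retriever, and the `call_llm` placeholder are assumptions of this sketch, not a system presented in the tutorial): retrieved KB facts are placed in the prompt so the LLM's answer is grounded in explicit, updatable knowledge.

```python
# Minimal sketch of retrieval-augmented generation over a knowledge base.
KB = [
    ("Melbourne", "locatedIn", "Australia"),
    ("Melbourne", "population", "5.2 million"),
    ("Australia", "capital", "Canberra"),
]

def retrieve_facts(question, kb, top_k=3):
    """Naive keyword retrieval: keep triples whose subject or object
    appears in the question."""
    q = question.lower()
    hits = [t for t in kb if t[0].lower() in q or t[2].lower() in q]
    return hits[:top_k]

def build_prompt(question, facts):
    """Ground the model with retrieved facts to reduce hallucination."""
    context = "\n".join(f"{s} {p} {o}." for s, p, o in facts)
    return f"Answer using only these facts:\n{context}\n\nQuestion: {question}"

def call_llm(prompt):
    # Placeholder: in practice this would call an actual LLM API.
    return f"[LLM would answer based on]\n{prompt}"

question = "What country is Melbourne located in?"
print(call_llm(build_prompt(question, retrieve_facts(question, KB))))
```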

Speaker

Jianzhong Qi

The University of Melbourne

Jianzhong Qi is an Associate Professor at The University of Melbourne and an ARC Future Fellow. His research concerns fundamental algorithms for the management of, and knowledge discovery from, structured and semi-structured data. He served as PC Chair for the Australasian Database Conference in 2020, and has served as an Area Chair, Senior PC member, and PC member for top machine learning and database venues such as ICML, NeurIPS, ICLR, SIGMOD, ICDE, and WWW.

Tutorial 3

Neural Network Reprogrammability: A Unified Framework for Parameter-Efficient Foundation Model Adaptation

Venue: Bali

Time: TBD

The goal of this tutorial is to provide machine learning researchers and practitioners with a clear guideline for adapting Foundation Models in the context of parameter-efficient fine-tuning (PEFT). This tutorial moves beyond a simple catalog of PEFT techniques to introduce Neural Network Reprogrammability as a unifying framework that explains how and why modern PEFT methods work. The audience will learn to view techniques such as prompt tuning, in-context learning, and model reprogramming not as isolated methodologies, but as principled instances of a shared underlying idea: repurposing a fixed pre-trained model by strategically manipulating information at its interfaces. Attendees will walk away with a structured understanding of the adaptation lifecycle, from input manipulation to output alignment. The tutorial will synthesize existing methodologies and practical applications under a cohesive principle, enabling attendees to better analyze, choose, and design adaptation strategies for their own projects without incurring substantial costs when fine-tuning Foundation Models.
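
As a hedged sketch of input-side reprogramming via soft prompt tuning (a toy frozen backbone and illustrative sizes of my own choosing, not a method prescribed by the tutorial): only a small set of prompt vectors prepended to the input is trained, while every parameter of the pre-trained model stays fixed.

```python
# Minimal sketch of soft prompt tuning with a frozen toy backbone (illustrative).
import torch
import torch.nn as nn

class FrozenBackbone(nn.Module):
    """Stand-in for a pre-trained model: embeddings + encoder + head."""
    def __init__(self, vocab=1000, dim=64, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.head = nn.Linear(dim, classes)

    def forward_from_embeddings(self, emb):
        return self.head(self.encoder(emb).mean(dim=1))

backbone = FrozenBackbone()
for p in backbone.parameters():
    p.requires_grad = False                        # the pre-trained model stays fixed

# The only trainable parameters: a handful of "soft prompt" vectors that are
# prepended to every input, steering the frozen model toward the new task.
num_prompt_tokens, dim = 10, 64
soft_prompt = nn.Parameter(torch.randn(num_prompt_tokens, dim) * 0.02)
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)

tokens = torch.randint(0, 1000, (8, 16))           # toy batch of token ids
labels = torch.randint(0, 2, (8,))
emb = backbone.embed(tokens)                       # (8, 16, 64)
prompted = torch.cat([soft_prompt.expand(8, -1, -1), emb], dim=1)
loss = nn.functional.cross_entropy(backbone.forward_from_embeddings(prompted), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()                                   # only the prompt is updated
```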

Speaker

Feng Liu

The University of Melbourne

Feng Liu is a Senior Lecturer in Machine Learning and ARC DECRA Fellow at The University of Melbourne, where he directs the Trustworthy Machine Learning and Reasoning Lab. He is also a Visiting Scientist at RIKEN AIP. His research focuses on hypothesis testing and trustworthy machine learning. He has served as an area chair for ICML, NeurIPS, ICLR, and AISTATS, and as an editor or action editor for several leading journals. His work has been recognized with the NeurIPS 2022 Outstanding Paper Award and multiple Outstanding Reviewer Awards.