Tutorials

Tutorial 1

Robust Certificates for Neural Networks

  • Venue: Bali (in-person)
  • Bali Time: Fri, 5 Dec 2025 13:30 - 14:30 WITA (UTC+8)

Deep neural networks face the critical challenge of adversarial vulnerability: imperceptible perturbations can drastically alter predictions. Empirical defenses, such as adversarial training, improve robustness against known attacks but often fail under stronger or adaptive adversaries. In contrast, certified robustness provides provable guarantees that a model’s prediction remains stable within a prescribed perturbation region. This tutorial introduces two complementary approaches: Lipschitz-constrained networks, which enforce global sensitivity bounds via spectral norm regularization, convex potentials, and contractive architectures, yielding deterministic certificates; and randomized smoothing, a probabilistic method that transforms any classifier into a certifiably robust model by adding Gaussian noise and averaging predictions. We will also highlight applications across natural language processing, data-centric AI, and foundation models, illustrating how certification principles extend to robustness against diverse perturbations, data quality issues, and large-scale multimodal systems.
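To make the randomized smoothing certificate concrete, here is a minimal Monte Carlo sketch in the style of Cohen et al. (2019). The names and hyperparameters (base_classifier, sigma, n_samples, alpha) are illustrative assumptions, and for brevity a single noise batch is reused both to pick the top class and to bound its probability, a step the full procedure splits into two independent batches.

```python
# Minimal sketch of randomized smoothing certification (after Cohen et al., 2019).
# `base_classifier`, `sigma`, `n_samples`, and `alpha` are illustrative placeholders.
import torch
from scipy.stats import norm, binomtest

def certify(base_classifier, x, num_classes, sigma=0.25, n_samples=1000, alpha=0.001):
    """Return the smoothed prediction for x and a certified L2 radius, or abstain."""
    with torch.no_grad():
        noise = torch.randn(n_samples, *x.shape) * sigma        # Gaussian corruptions
        votes = base_classifier(x.unsqueeze(0) + noise).argmax(dim=1)
    counts = votes.bincount(minlength=num_classes)
    top_class = counts.argmax().item()
    # Clopper-Pearson lower confidence bound on the top-class probability.
    p_a = binomtest(counts[top_class].item(), n_samples).proportion_ci(
        confidence_level=1 - alpha).low
    if p_a <= 0.5:
        return None, 0.0                                        # abstain: no certificate
    return top_class, sigma * norm.ppf(p_a)                     # certified L2 radius
```

The returned radius sigma · Φ⁻¹(p_A) guarantees the smoothed prediction cannot change within that L2 ball; larger sigma yields wider certificates at the cost of clean accuracy.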

Speakers

Yang Cao

Institute of Science Tokyo

Yang Cao is an Associate Professor in the Department of Computer Science at the Institute of Science Tokyo (Science Tokyo, formerly Tokyo Tech), where he directs the Trustworthy Data Science and AI (TDSAI) Lab. He is passionate about studying and teaching algorithmic trustworthiness in data science and AI. Two of his papers on data privacy were selected as best paper finalists at the top-tier conferences IEEE ICDE 2017 and ICME 2020. He received the IEEE Computer Society Japan Chapter Young Author Award (2019) and the Database Society of Japan Kambayashi Young Researcher Award (2021). His research projects have been supported by JSPS, JST, MSRA, KDDI, LINE, WeBank, and others.

Blaise Delattre

Institute of Science Tokyo

Blaise Delattre is a Postdoctoral Researcher in the TDSAI Lab, working with Prof. Yang Cao. He received his PhD in Computer Science from PSL University, where his research focused on certified adversarial robustness of deep neural networks, with contributions on Lipschitz-constrained architectures and randomized smoothing. His current interests focus on robustness and reliability of large language, vision–language, and other foundation models.

Tutorial 2

Question Answering over Knowledge Bases in the Era of Large Language Models

  • Venue: Sydney (in-person), Bali (broadcast)
  • Sydney Time: Sat, 6 Dec 2025 13:00 - 14:00 AEDT (UTC+11)
  • Bali Time: Sat, 6 Dec 2025 10:00 - 11:00 WITA (UTC+8)

Knowledge Base Question Answering (KBQA) has emerged as a crucial paradigm for providing accurate, explainable, and domain-specific answers to natural language queries. With the rapid advancement of Large Language Models (LLMs), QA systems can leverage their powerful language understanding and generation capabilities. However, LLMs often struggle with hallucination, static knowledge, and limited interpretability. Integrating structured Knowledge Bases (KBs) addresses these limitations by providing explicit, updatable, and domain-specific factual knowledge. This tutorial provides an overview of KBQA in the era of LLMs. We introduce fundamental concepts of KBQA, discuss the strengths and limitations of LLMs and KBs, and survey state-of-the-art methods, including retrieval-augmented generation and knowledge integration approaches. Practical considerations, applications, and limitations are highlighted throughout.
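As a toy illustration of the retrieval-augmented pattern surveyed in the tutorial, the sketch below grounds generation in a tiny in-memory KB of (subject, relation, object) triples. The naive lexical retriever and the llm_generate stand-in (any LLM completion call) are illustrative assumptions, not the tutorial's methods.

```python
# Toy retrieval-augmented KBQA sketch; `llm_generate` is a hypothetical
# stand-in for any LLM completion call.
KB = {
    ("Melbourne", "locatedIn", "Australia"),
    ("Australia", "capital", "Canberra"),
}

def retrieve(question, kb, k=5):
    """Naive lexical retrieval: rank triples by token overlap with the question."""
    tokens = set(question.lower().replace("?", "").split())
    scored = [(sum(t.lower() in tokens for t in triple), triple) for triple in kb]
    return [t for score, t in sorted(scored, reverse=True)[:k] if score > 0]

def answer(question, kb, llm_generate):
    facts = retrieve(question, kb)
    context = "\n".join(f"{s} --{r}--> {o}" for s, r, o in facts)
    prompt = (f"Answer using only these knowledge-base facts:\n{context}\n"
              f"Question: {question}\nAnswer:")
    return llm_generate(prompt)  # generation grounded in explicit KB facts
```

Constraining the prompt to retrieved triples is what lets the KB contribute explicit, updatable facts while the LLM contributes language understanding and generation, mitigating hallucination and stale knowledge.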

Speaker

Jianzhong Qi

The University of Melbourne

Jianzhong Qi is an Associate Professor at The University of Melbourne and an ARC Future Fellow. His research concerns fundamental algorithms for the management of, and knowledge discovery from, structured and semi-structured data. He served as a PC Chair for the Australasian Database Conference in 2020 and has served as an Area Chair, Senior PC member, and PC member for top machine learning and database venues such as ICML, NeurIPS, ICLR, SIGMOD, ICDE, and WWW.

Tutorial 3

Neural Network Reprogrammability: A Unified Framework for Parameter-Efficient Foundation Model Adaptation

  • Venue: Bali (in-person), Sydney (broadcast)
  • Sydney Time: Sat, 6 Dec 2025 14:00 - 15:00 AEDT (UTC+11)
  • Bali Time: Sat, 6 Dec 2025 11:00 - 12:00 WITA (UTC+8)

The goal of this tutorial is to provide machine learning researchers and practitioners with clear guidelines for adapting Foundation Models in the context of parameter-efficient fine-tuning (PEFT). This tutorial moves beyond a simple catalog of PEFT techniques to introduce Neural Network Reprogrammability as a unifying framework that explains how and why modern PEFT methods work. The audience will learn to view techniques such as prompt tuning, in-context learning, and model reprogramming not as isolated methodologies but as principled instances of a shared underlying idea: repurposing a fixed pre-trained model by strategically manipulating information at its interfaces. Attendees will walk away with a structured understanding of the adaptation lifecycle, from input manipulation to output alignment. The tutorial synthesizes existing methodologies and practical applications under a cohesive principle, enabling attendees to better analyze, choose, and design adaptation strategies for their own projects without incurring the substantial cost of fully fine-tuning Foundation Models.
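As a small taste of the interface-manipulation view, the sketch below adapts a frozen pre-trained classifier by training only two lightweight components: an additive input "program" and a linear output map. Class and parameter names are illustrative assumptions, not a method prescribed by the tutorial.

```python
# Minimal input-side reprogramming sketch: the backbone stays frozen;
# only the input program `delta` and the output map are trained.
import torch
import torch.nn as nn

class Reprogram(nn.Module):
    def __init__(self, pretrained, in_shape, n_src_classes, n_tgt_classes):
        super().__init__()
        self.frozen = pretrained.eval()
        for p in self.frozen.parameters():
            p.requires_grad_(False)                 # backbone stays fixed
        self.delta = nn.Parameter(torch.zeros(*in_shape))        # input "program"
        self.out_map = nn.Linear(n_src_classes, n_tgt_classes)   # output alignment

    def forward(self, x):
        logits = self.frozen(x + self.delta)        # manipulate the input interface
        return self.out_map(logits)                 # re-align the output interface
```

Because gradients flow only into delta and out_map, the trainable parameter count is a tiny fraction of the backbone's, which is the essence of parameter-efficient adaptation.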

Speaker

Feng Liu

The University of Melbourne

Feng Liu is a Senior Lecturer in Machine Learning and ARC DECRA Fellow at The University of Melbourne, where he directs the Trustworthy Machine Learning and Reasoning Lab. He is also a Visiting Scientist at RIKEN AIP. His research focuses on hypothesis testing and trustworthy machine learning. He has served as an area chair for ICML, NeurIPS, ICLR, and AISTATS, and as an editor or action editor for several leading journals. His work has been recognized with the NeurIPS 2022 Outstanding Paper Award and multiple Outstanding Reviewer Awards.

Tutorial 4

Optimizing Quality and Efficiency in Federated Learning

  • Venue: Bali (in-person), Sydney (broadcast)
  • Sydney Time: Sat, 6 Dec 2025 15:30 - 16:30 AEDT (UTC+11)
  • Bali Time: Sat, 6 Dec 2025 12:30 - 13:30 WITA (UTC+8)

Federated Learning (FL) faces critical challenges in model generalization and non-IID data, which impact both the quality and efficiency of the learned models. This tutorial presents recent advances that address these issues. We first introduce a reinforcement federated domain generalization method, which uses a reinforcement learning agent to dynamically optimize feature representations for superior performance on unseen data domains. Next, we present a method for creating privacy-preserving client data that effectively mitigates data heterogeneity. Finally, we describe a classifier-debiased federated learning framework that directly corrects the classifier bias induced by non-IID data. Together, these approaches form a cohesive strategy for building more robust, accurate, and efficient FL systems.
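For context, the methods above operate within the standard federated averaging loop. Below is a minimal FedAvg round under illustrative assumptions: clients are given as PyTorch dataloaders over private data, and client updates are weighted by local dataset size.

```python
# Minimal FedAvg round; `clients` is a hypothetical list of dataloaders.
import copy
import torch

def fedavg_round(global_model, clients, local_steps=1, lr=0.01):
    states, sizes = [], []
    for loader in clients:
        local = copy.deepcopy(global_model)              # start from global weights
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_steps):
            for x, y in loader:                          # local SGD on private data
                opt.zero_grad()
                torch.nn.functional.cross_entropy(local(x), y).backward()
                opt.step()
        states.append(local.state_dict())
        sizes.append(len(loader.dataset))
    total = sum(sizes)
    avg = {k: sum(s[k].float() * (n / total) for s, n in zip(states, sizes))
           for k in states[0]}                           # size-weighted average
    global_model.load_state_dict(avg)
    return global_model
```

Non-IID local data causes these locally trained weights to drift apart, which is precisely the heterogeneity and classifier bias that the presented methods target.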

Speaker

Zhe Xue

Beijing University of Posts and Telecommunications

Zhe Xue is a Professor at Beijing University of Posts and Telecommunications, specializing in data mining, multimodal learning, and federated learning. He has published many papers in top-tier conferences such as ICML, NeurIPS, CVPR, AAAI, IJCAI, MM, and WWW. His major honors include the IEEE CCIS Best Paper Award, the IEEE BigComp Best Paper Award Runner-up, the CCFAI Best Paper Award, and the ChinaMM Best Student Paper Award.