30-minute open office (general questions & project discussions)
Time Commitment: 20 hours (self-study)
Schedule
Where:

| What        | Time          | Location (week 1) | Location (week 2) |
|-------------|---------------|-------------------|-------------------|
| Lecture     | 9.30–10.15am  | D0.03             | D1.14             |
| Discussion  | 10.15–10.30am | D0.03             | D1.14             |
| Lab         | 10.30–11.15am | D0.03             | D1.14             |
| Open office | 11.30am–12pm  | D0.03             | D1.14             |
Prerequisites
Python programming
Basic mathematical concepts (relevant to machine learning)
Machine learning techniques for tabular data
⭐ Prior experience with machine learning approaches for images and text (e.g., deep learning) or other forms of data modelling (e.g., time series forecasting, reinforcement learning), if you decide to pursue a project in this direction
Computational elements and figures coded in Python (with matplotlib)
The source can be compiled into Jupyter Notebooks (to experiment with, modify, adapt and reuse the code chunks)
About the Course
Motivation
A wealth of XAI and IML learning resources…
…but mostly limited to
summary descriptions
code examples
explanation examples
interpretation tips
Motivation
Deconstruct each method
Inspect its assumptions and operationalisation
Learn to tune explainers for the problem at hand
Learn to interpret explanations in view of their theoretical properties and (limitations of) algorithmic implementation
Develop critical thinking about XAI and IML techniques
General Learning Objectives
Understand the landscape of AI and ML explainability techniques
Identify explainability needs of data-driven machine learning systems
Recognise the capabilities and limitations of explainability approaches, both in general and in view of specific use cases
⭐ Apply these skills to real-life AI and ML problems
⭐ Communicate explainability findings through interactive reports and dashboards
Practical Learning Objectives
Identify self-contained algorithmic components of explainers and understand their functions
Connect these building blocks to the explainability requirements unique to the investigated predictive system
Select appropriate algorithmic components and tune them to the problem at hand
Evaluate these building blocks (in this specific context) independently and when joined together to form the final explainer
Interpret the resulting explanations in view of the uncovered properties and limitations of the bespoke explainability algorithm
Scope
Introduction to explainability
History of explainability
Types of explanations
Ante-hoc vs. post-hoc discussion, and information lineage (endogenous and exogenous sources of explanatory information)
Multi-class explainability
Taxonomy and classification of explainability approaches
Defining explainability
Human-centred perspective
Evaluation of explainability techniques
Models and data used for this course
Scope
A brief overview of data explainability
Data as an (implicit) model
Data summarisation and description
Dimensionality reduction (e.g., t-SNE; sketched after this list)
Exemplars, prototypes and criticisms
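As a taste of this part of the syllabus, here is a minimal sketch of t-SNE-based data exploration; it assumes scikit-learn and matplotlib are installed, and the data set and parameter values are purely illustrative.

```python
# A minimal sketch of t-SNE-based data exploration (assuming
# scikit-learn and matplotlib are available; all choices illustrative).
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.manifold import TSNE

X, y = load_iris(return_X_y=True)

# Embed the four-dimensional data into two dimensions.
embedding = TSNE(n_components=2, random_state=42).fit_transform(X)

# Colour points by class to inspect the structure of the data.
plt.scatter(embedding[:, 0], embedding[:, 1], c=y)
plt.title('t-SNE embedding of the Iris data set')
plt.show()
```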
Scope
Transparent modelling
Linear models
Logistic models
Generalised additive models
Decision trees (see the example after this list)
Rule lists and sets; scoped rules
\(k\)-nearest neighbours and \(k\)-means
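To illustrate transparent modelling, the short sketch below fits a shallow decision tree and prints its decision rules; it assumes scikit-learn is installed, and the depth limit is an arbitrary choice made for readability.

```python
# A minimal sketch of a transparent model: a shallow decision tree
# whose rules can be read directly (assuming scikit-learn is installed).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# Keep the tree shallow so that it remains human-readable.
tree = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X, y)

# Print the learnt decision rules as indented text.
print(export_text(tree, feature_names=list(data.feature_names)))
```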
Scope
Feature importance
Permutation Importance (illustrated after this list)
Partial Dependence-based feature importance
Meta-approaches
LIME-based feature importance
SHAP-based feature importance
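A minimal sketch of Permutation Importance, assuming scikit-learn is available; the model, data set and number of repeats are illustrative choices, not course requirements.

```python
# A minimal sketch of Permutation Importance with scikit-learn.
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# Importance is the drop in held-out score when a feature is shuffled.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=42)

for i in result.importances_mean.argsort()[::-1]:
    print(f'feature {i}: {result.importances_mean[i]:.3f} '
          f'+/- {result.importances_std[i]:.3f}')
```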
Scope
Feature influence
Individual Conditional Expectation (see the sketch after this list)
Partial Dependence
Marginal Effect
Accumulated Local Effect
Meta-approaches
LIME (linear surrogate)
SHAP
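The sketch below overlays Individual Conditional Expectation curves on their Partial Dependence average, assuming scikit-learn and matplotlib are installed; the model, data set and feature index are arbitrary examples.

```python
# A minimal sketch of ICE and Partial Dependence plots with
# scikit-learn (all modelling choices are illustrative).
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True)
model = GradientBoostingRegressor(random_state=42).fit(X, y)

# kind='both' overlays ICE curves (one per instance) on the PD average.
PartialDependenceDisplay.from_estimator(
    model, X, features=[2], kind='both')
plt.show()
```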
Scope
Meta-explainers
surrogate explainers (example below)
local, cohort and global
linear and tree-based
rules
ANCHOR
RuleFit
SHAP
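As a flavour of surrogate-based meta-explainers, the sketch below fits a sparse linear surrogate to a black box's predictions and reports its fidelity; every modelling choice here (the models, the regularisation strength) is illustrative.

```python
# A minimal sketch of a global surrogate explainer: a sparse linear
# model fitted to the predictions of a black-box model.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso

X, y = load_diabetes(return_X_y=True)

# The black box whose behaviour we want to approximate.
black_box = RandomForestRegressor(random_state=42).fit(X, y)

# Fit the surrogate to the black box's *predictions*, not the labels.
surrogate = Lasso(alpha=0.1).fit(X, black_box.predict(X))

# Fidelity: how well the surrogate mimics the black box on this data.
fidelity = surrogate.score(X, black_box.predict(X))
print(f'surrogate fidelity (R^2): {fidelity:.2f}')
print('surrogate coefficients:', np.round(surrogate.coef_, 2))
```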
Scope
Instance-based explanations
Exemplar explanations
Counterfactuals (sketched below)
Prototypes and criticisms
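A deliberately naive sketch of counterfactual search, shown below, perturbs one feature at a time until the prediction flips; it is not an implementation of any specific counterfactual algorithm, and all choices (model, data set, search grid) are illustrative.

```python
# A naive counterfactual search: perturb one feature at a time and
# report the first change that flips the prediction (illustrative only).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
model = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X, y)

instance = X[0].copy()
original_class = model.predict([instance])[0]

# Try small shifts of each feature and report the first class change.
for feature in range(X.shape[1]):
    for shift in np.linspace(-2, 2, 41):
        candidate = instance.copy()
        candidate[feature] += shift
        new_class = model.predict([candidate])[0]
        if new_class != original_class:
            print(f'feature {feature}: {instance[feature]:.2f} -> '
                  f'{candidate[feature]:.2f} changes class '
                  f'{original_class} -> {new_class}')
            break
```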
Coursework
Bring-your-own-project
Explain a predictive model (e.g., one you are working with)
develop a bespoke explainability suite for a predictive model of your choice (e.g., for a project you are currently working on, or a model accessible via an API)
use multiple explainability techniques and identify the sources of explanation (dis)agreements
Coursework
Dissect an explainability method – choose an explainability method, identify its core (algorithmic) building blocks and articulate its assumptions, exploring how these different aspects affect the explanations
Build a model-specific or model-agnostic explainer or a transparent model
new explainability technique (from existing building blocks)
new composition of an existing explainability technique
new visualisation of an explanation type
Coursework
Work individually or in small groups
Projects to be presented or demoed in the last class