Example use case: explanatory debugging (Kulesza et al. 2015)
Explanations arise in response to a request from a human explainee
An explanation is an answer to a “Why?” question (Miller 2019)
Humans expect the explanations to be (Miller 2019)
Counterfactual explanations are specific to a data point (see the search sketch after the examples below)
Had you been 5 years older, your loan application would have been accepted.
Had you been 10 years younger, your loan application would have been accepted.
If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.
Had you been 10 years younger, your loan application would have been accepted.
Had you paid back one of your credit cards, your loan application would have been accepted.
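A minimal brute-force sketch of the idea in Python: the loan_model scoring rule, the feature names and the candidate deltas are hypothetical stand-ins rather than anything from the source. It enumerates single-feature changes and reports every one that flips the decision, illustrating that a single data point can admit several counterfactuals.

```python
def loan_model(applicant):
    # Hypothetical scoring rule standing in for a trained classifier.
    score = (applicant["income"] / 1000
             - 5 * applicant["credit_cards_in_debt"]
             - 0.5 * applicant["age"])
    return score >= 0  # True means the loan is approved


def counterfactuals(applicant, candidate_deltas, model):
    """All single-feature changes that flip the model's rejection into an approval."""
    flips = []
    for feature, deltas in candidate_deltas.items():
        for delta in deltas:
            candidate = {**applicant, feature: applicant[feature] + delta}
            if model(candidate):
                flips.append((feature, delta))
    return flips


applicant = {"age": 50, "income": 38000, "credit_cards_in_debt": 3}
candidate_deltas = {
    "age": [-10, 5],
    "income": [5000],
    "credit_cards_in_debt": [-1],
}

assert not loan_model(applicant)  # the application is rejected as-is
for feature, delta in counterfactuals(applicant, candidate_deltas, loan_model):
    print(f"Had your {feature} changed by {delta:+}, "
          "your loan application would have been accepted.")
```

Here three different single-feature changes (age, income, outstanding credit cards) all flip the same rejection, which is why choosing among them matters for the explainee.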
Dialogue-based personalisation
Explainee: Why was my loan application denied?
Explainer: Because of your income. Had you earned £5,000 more, it would have been granted.
Explainee: Instead of increasing my income, is there anything I can do about my outstanding debt to get this loan approved?
Explainer: If you cancel one of your three credit cards, you will receive the loan.
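A sketch of how the dialogue above could drive the counterfactual search, assuming the same hypothetical loan_model as before; personalised_counterfactual and the feature names are illustrative assumptions, not an established API. The explainee's follow-up restricts which features the explainer may vary, so the next explanation is tailored to what the user can actually act on.

```python
def loan_model(applicant):
    # Same hypothetical scoring rule as in the previous sketch.
    score = (applicant["income"] / 1000
             - 5 * applicant["credit_cards_in_debt"]
             - 0.5 * applicant["age"])
    return score >= 0


def personalised_counterfactual(applicant, model, allowed_features, candidate_deltas):
    """First counterfactual that only alters features the explainee is willing to act on."""
    for feature in allowed_features:
        for delta in candidate_deltas.get(feature, []):
            candidate = {**applicant, feature: applicant[feature] + delta}
            if model(candidate):
                return feature, delta
    return None


applicant = {"age": 50, "income": 38000, "credit_cards_in_debt": 3}
candidate_deltas = {"income": [5000], "credit_cards_in_debt": [-1]}

# Turn 1 -- "Why was my loan application denied?"
# The explainer picks a default counterfactual (here: income).
print(personalised_counterfactual(applicant, loan_model, ["income"], candidate_deltas))

# Turn 2 -- "Instead of increasing my income, what about my outstanding debt?"
# The follow-up restricts the search to debt-related features, personalising the answer.
print(personalised_counterfactual(applicant, loan_model,
                                  ["credit_cards_in_debt"], candidate_deltas))
```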
Interactivity alone is insufficient, e.g., a static explanation delivered through a dynamic user interface
A vehicle to personalise the content (and other aspects) of explanations
Bespoke explanatory experience driven by context
Producing explanations is necessary but insufficient for human-centred explainability
These insights need to be relevant to and comprehensible by explainees, which depends on their context
Explainers are socio-technical constructs, hence we should strive for seamless integration with humans as well as technical correctness and soundness
Each (real-life) explainability scenario is unique and requires a bespoke solution
Social Process