Prof. Alan Dix
A job candidate has been pre-selected for a shortlist by a neural net; an autonomous car has suddenly changed lanes, almost causing an accident; the intelligent fridge has ordered an extra pint of milk. From the life-changing or life-threatening to day-to-day living, decisions are made by computer systems on our behalf. If something goes wrong, or even when a decision appears correct, we may need to ask the question, “why?” In the case of failures we need to know whether the cause is a bug in the software; a need for more data, sensors or training; or simply one of those things: a decision correct in context that happened to turn out badly. Even if the decision appears acceptable, we may wish to understand it out of curiosity, for peace of mind, or for legal compliance. In this talk I will pick up threads of research dating back to early work in the 1990s on gender and ethnic bias in black-box machine-learning systems, as well as more recent developments such as deep learning and concerns such as those that gave rise to the EPSRC Human-Like Computing programme. In particular, I will present nascent work on an AIX Toolkit (AI explainability): a structured collection of techniques designed to help developers of intelligent systems create more comprehensible representations of their reasoning. Crucial to the AIX Toolkit is the understanding that human-human explanations are rarely utterly precise or reproducible, yet they are sufficient to inspire confidence and trust in a collaborative endeavour.
Computational Foundry, Swansea University, Wales