Explainable AI
Aric Whitewood
1 Introduction
Explainable AI, or XAI for short, is not a new problem – examples of research go back to the 1970s – but it is one that has received a great deal of attention recently. An XAI system is one whose actions are understandable by humans, and ideally one where such transparency is obtained with no reduction in the prediction accuracy, or performance, of the underlying algorithms.
The idea of an AI that is not explainable – a so-called ‘black box’ – which could produce decisions and behaviours that are unpredictable, and may or may not be beneficial (or as the designer originally intended), is an unsettling one, particularly if that system is driving a car or performing some other task where human lives are at stake. In reality, systems sit on a continuous scale of explainability – they are grey rather than black boxes – and so in many cases we have a correspondingly imperfect understanding of their operation. Aside from trust and ethics, which are the reasons most commonly quoted for pursuing this topic, XAI encompasses a number of other elements, including the fundamental, philosophical definition of explanations themselves, transparency, informativeness, and transferability [1].
As an asset management and research firm, we focus on the application of AI to investment decision making, and I’ll describe here, and in subsequent posts, how we approach the problem of designing explainability into our particular system.
2 Our System and the XAI Context
The AI system we’ve created since starting the firm, called the Global AI Allocator, or GAIA, can effectively be used in two ways:
- Running a systematic (100% computer driven) strategy, and/or
- Providing predictions which can be used by a human being to help them in their discretionary investment decision making.
The latter, a blend of human and machine, has the potential to be the most powerful mode of operation, but marrying the two sides effectively is very challenging. Both modes of operation require XAI, in two main forms:
- Members of our team have to be able to interrogate the machine: to understand its current and potential future states, and why it is making particular decisions, all to a potentially high degree of detail.
- We need to be able to provide more concise, focused explanations of decisions, either to someone using the predictions or to investors. It is possible that this will also be required by regulators at some future time.
Looking at past and current research on XAI, there are papers on topics ranging from variable transformation techniques and specific implementations of visualisation systems, to interpretable approximations of complex models (such as the LIME system), and various ways of analysing the state of deep neural networks, to name but a few (see [1] for some examples). One of the fundamental parts of an overall XAI system, which is often only considered implicitly, is the human being. Designing an XAI system in the investment context requires taking into account how people think about trading decisions (and their mental models of the decision or the supporting forecast), how they view and interact with technology (a somewhat related topic of user interfaces and mental models of the underlying technology), and how we can translate AI decision processes into artefacts which are more easily digestible by users (transparency and interpretability). In this sense, the recent paper by Hoffman et al. [2] is interesting (it is part of the DARPA-funded initiative on XAI), as the authors mention similar ideas and aim to create a ‘naturalistic decision making’ model of explanation.
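As a concrete illustration of one of these techniques, the local-surrogate idea behind LIME can be sketched in a few lines: perturb the inputs around a single prediction, weight the perturbed samples by their proximity to it, and fit a simple weighted linear model whose coefficients act as a local explanation. The stand-in ‘black box’ model, the synthetic data and the feature names below are hypothetical, and the sketch omits the discretisation and feature-selection steps of the full LIME method.

```python
# Minimal local-surrogate sketch (the idea behind LIME): explain one
# prediction of a complex model by fitting a simple, proximity-weighted
# linear model to perturbed samples around that point.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
feature_names = ["inflation", "rate_momentum", "equity_vol"]  # hypothetical

# Stand-in "black box": a non-linear model fit on synthetic data.
X = rng.normal(size=(500, 3))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=500)
black_box = GradientBoostingRegressor().fit(X, y)

def explain_locally(x, model, n_samples=1000, kernel_width=0.75):
    """Fit a proximity-weighted linear surrogate around the instance x."""
    perturbed = x + rng.normal(scale=0.5, size=(n_samples, x.shape[0]))
    preds = model.predict(perturbed)
    # Weight perturbed samples by closeness to x (RBF kernel on distance).
    dist = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    # Coefficients are the local feature attributions.
    return dict(zip(feature_names, surrogate.coef_))

print(explain_locally(X[0], black_box))
```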
3 Mental Models
Mental models are representations in a person’s mind of processes, devices, technology and other things that we interact with in the world. They can be representations of a forecasting process leading to a trading decision, for example, or more specifically of how, say, a person forecasts interest rates on the basis of inflation rates. Another way of looking at this is how humans learn functional relationships between continuous variables, a topic of research in experimental psychology. This learning is commonly separated into one of two methods: estimating explicit functions or rules, or associative learning (similarity) [3]. Indeed, the latter – the ability to judge one thing as being similar in some respects to another – is thought to be a key part of cognition, supporting generalisation and inference [4, 5].
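As a toy illustration of these two modes of learning – an explicit rule versus prediction from similar past examples – consider the interest rate example above. The data and the assumed relationship in the sketch below are synthetic and purely illustrative; they are not drawn from our models.

```python
# Two ways to learn the same functional relationship: an explicit rule
# (linear fit) versus associative/similarity learning (nearest neighbours),
# both mapping inflation to an interest rate on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
inflation = rng.uniform(0.0, 6.0, size=(200, 1))                # % per annum
rate = 1.0 + 0.8 * inflation[:, 0] + rng.normal(0, 0.3, 200)    # assumed, noisy relationship

rule_model = LinearRegression().fit(inflation, rate)                      # explicit rule
similarity_model = KNeighborsRegressor(n_neighbors=5).fit(inflation, rate)  # prediction by example

query = np.array([[3.0]])  # forecast the rate at 3% inflation
print("rule-based forecast:      ", rule_model.predict(query)[0])
print("similarity-based forecast:", similarity_model.predict(query)[0])
```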
Model interpretability includes the level of transparency of the system (the opposite of a ‘black box’ is a ‘glass box’), as well as the ability to produce, for example, visualisations, natural language explanations, and explanations by example (similarity), allowing a user to develop understanding in the form of causal models, and trust in the system itself. Combining this with functional relationship learning, it makes sense to present the AI system state to users in a way that mirrors their own mental models: in terms of both learned rules (relationships) and similar examples (support for those rules). This implies the development of an abstraction, or explanation, layer which provides concise, focused explanations along these dimensions, and which hides much of the complexity of the underlying models themselves. We of course retain the ability to interrogate the system below this layer in more detail, but the explanation layer allows key drivers and decisions to be communicated in a more condensed way to external parties.
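To make the idea of an explanation layer concrete, here is a minimal sketch of how such a layer might condense a single prediction along the two dimensions above: top drivers (the rule side) and similar historical regimes (the example side). All names, data and scores below are hypothetical; this is not the actual GAIA interface.

```python
# Sketch of an explanation layer: bundle a prediction with its top drivers
# and the most similar past market states, hiding the underlying models.
from dataclasses import dataclass
import numpy as np
from sklearn.neighbors import NearestNeighbors

@dataclass
class Explanation:
    prediction: float
    top_drivers: dict       # feature -> signed contribution (rule side)
    similar_regimes: list   # indices of similar past states (example side)

def explain(current_state, history, driver_scores, prediction, k=3):
    """Condense a prediction into top drivers plus nearest historical regimes."""
    nn = NearestNeighbors(n_neighbors=k).fit(history)
    _, idx = nn.kneighbors(current_state.reshape(1, -1))
    top = dict(sorted(driver_scores.items(), key=lambda kv: abs(kv[1]), reverse=True)[:3])
    return Explanation(prediction=prediction, top_drivers=top,
                       similar_regimes=idx[0].tolist())

# Example with synthetic market-state vectors and hypothetical driver scores.
rng = np.random.default_rng(2)
history = rng.normal(size=(250, 4))
current = rng.normal(size=4)
drivers = {"inflation": 0.4, "rate_momentum": -0.2,
           "equity_vol": 0.1, "credit_spread": -0.05}
print(explain(current, history, drivers, prediction=0.012))
```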
4 Summary
Bringing all the above together, our XAI implementation allows users to investigate which market conditions tend to lead to particular outcomes (rules allowing the formation of causal relationships, with an element of probability), to examine market regimes in the past that were similar to the current regime, and more generally to understand both the linear and non-linear drivers behind the current set of predictions. There are associated topics which I haven’t covered here but which are relevant, such as biases in human cognition (and how we can potentially compensate for them), the characteristics of ‘superforecasters’ and how these influence some of the design choices for XAI, and so on. In subsequent articles, I’ll go into more detail on these additional topics, give example outputs of our own XAI system, and present case studies for particular market events in the past.