
XAI: Explainable Artificial Intelligence

24 January 2019


With the increasing use of Artificial Intelligence (AI) and the growing learning capability of these solutions, there is a risk that systems will one day know more than their developers and expert trainers, and that banks will no longer be able to understand the technologies they use. This is creating a growing demand for technology that explains the results that AI systems produce.

The use of AI in banks is growing, for instance to provide customers with capital accumulation advisory services. Another field for AI is the automation of compliance processes. With AI, it should be easier and more effective to detect fraudulent transactions such as money laundering and credit card fraud. The demands in this respect are becoming more complex: under the MaRisk and MiFID II regulatory frameworks, it is not enough for banks to identify fraud in securities transactions. Institutions are also expected to monitor communications for any signs of market manipulation – for example, indications of insider trading. AI systems are extremely useful for analysing such vast quantities of data.

On the other hand, it is also important that people can always track how their digital colleague – a cobot or a self-learning system running in the background – reaches its decisions and recommendations. When institutions train artificial intelligence today, for instance as a Robo Advisor for securities consulting or for opening a new account in compliance with KYC requirements, there is usually no mechanism in place to make the results understandable. This is anything but trivial with systems that are becoming “cleverer” and improving their ability to learn on their own. Neural networks, for example, can comprise millions of neurons linked by an enormous number of connections. Such a system is trained gradually over a long period of time until it maps a given input to the desired output.


The risk with these types of complex computing machines is that customers or bank employees are left with questions after a consultation or decision which the banks themselves cannot answer: “Why did you recommend this share?”, “Why was this person rejected as a customer?”, “Why did the machine classify this transaction as terrorist financing or money laundering?”.

It is essential for banks to have answers to all these questions. A particularly sensitive area is monitoring the flow of financial transactions for signs of money laundering and terrorist financing. In the worst case, an AI recommendation can result in a criminal prosecution, and in no way should such a decision be based on a fully automated process. To take the example of Robo Advisory, a customer has the right to find out why a recommended investment was rated as particularly lucrative at the time and why it has not achieved the desired return.

XAI – the four-eye principle for AI systems

Companies deploying artificial intelligence and complex learning systems are focusing more and more on transparency and the understandability of AI. The notion of being able to easily track and understand AI decisions is referred to as Explainable AI (XAI). In the US, for example, DARPA, an agency of the United States Department of Defense, is researching models to make AI decisions more transparent. The idea of XAI is to create a kind of quality gate between the learning process of AI solutions and the later area of application – for example when investigating suspected money laundering or assessing a credit rating – so that decisions can be traced directly.

In the case of neural networks, the quality gate must be rooted in the model from the start because it is often not possible to verify results afterwards. To put it in an extremely simplified way:

  • Suppose a bank trains a model on every available customer attribute to predict whether the person is more interested in a high-risk investment with high return opportunities or a safer investment with lower return prospects; the end result is a binary one. The bank cannot explain afterwards why the rating came out as high-risk or risk-averse, because the institution does not know which combination of attributes influenced the result for this specific person.
  • If, on the other hand, the bank has the deployed AI algorithm calculate various parameters that help a person take a decision – for example, “How impulsive is the person?”, “In which income group is the person classified?” or even “To what extent does his/her monthly payment history vary?” – the expert department or auditor has the chance to present a plausible explanation for the investment to third parties. The AI decision is essentially weighed against human judgement, as the sketch below illustrates.
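
To make the difference concrete, here is a minimal, purely illustrative Python sketch with synthetic data and hypothetical attribute names (impulsiveness score, income group, payment variance): the same trained model can either return only the binary risk classification or expose intermediate decision parameters that a human can review.

    # Illustrative sketch only: synthetic data, hypothetical attribute names.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(42)

    # Hypothetical customer attributes, e.g. age, income, trading frequency,
    # monthly payment variance (all synthetic).
    X = rng.normal(size=(500, 4))
    y = (X[:, 2] + 0.5 * X[:, 3] > 0).astype(int)  # 1 = high-risk appetite

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    customer = X[:1]

    # Black-box view: only the binary outcome is available.
    print("Risk appetite:", "high-risk" if model.predict(customer)[0] else "risk-averse")

    # Explainable view: expose interpretable decision parameters alongside
    # the prediction so a human can weigh up the recommendation.
    decision_parameters = {
        "impulsiveness_score": float(customer[0, 2]),   # e.g. trading frequency
        "income_group": "mid" if customer[0, 1] > 0 else "low",
        "payment_variance": float(customer[0, 3]),
    }
    print(decision_parameters)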

A balancing act between performance and explanation

Banks require an explanation model with decision parameters and an interface that makes explanations more transparent. This so-called explanation framework should be integrated into the entire process between the training of the AI system and the action to be carried out. Tools and plugins are available for visualising machine learning data in order to simplify this step.
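
As an illustration only, one possible way to wire such an explanation step into the pipeline is an open-source visualisation library such as shap; the data and feature names below are synthetic assumptions, not a description of any bank's actual setup.

    # Hedged sketch: an explanation step between model training and the business action.
    import pandas as pd
    import shap  # assumed to be installed, e.g. via `pip install shap`
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    # Synthetic transaction data with illustrative feature names.
    X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, random_state=0)
    features = ["tx_amount", "tx_frequency", "country_risk", "account_age", "cash_ratio"]
    X = pd.DataFrame(X, columns=features)

    # Training step of the AI system.
    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # Explanation step: per-case feature contributions instead of a bare score.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Global view for the expert department or auditor: which decision
    # parameters drive the model's decisions.
    shap.summary_plot(shap_values, X)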

In a concrete scenario, self-learning algorithms were implemented as part of a proof of concept for identifying money laundering activities at a bank. Besides the algorithm's classification, the auditor is also provided with the output of the XAI framework. This not only helps in understanding the model's decision, it also increases efficiency, because the decision-related parameters are presented alongside it. With this information, the bank's money laundering officer can prioritise the processing of suspected cases and, for each case, examine the parameters the algorithm rated as the most important indicators of potential money laundering.
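
The sketch below shows, purely as an assumption-laden illustration with synthetic data and invented feature names, how such a prioritised work queue could look: cases are ranked by the model's suspicion score, and each case lists the parameters that contributed most, here using a deliberately simple linear model whose per-feature contributions are directly readable.

    # Hedged sketch: ranking suspected cases and exposing their main drivers.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    features = ["cash_deposits", "cross_border_tx", "tx_velocity", "peer_deviation"]
    X = pd.DataFrame(rng.normal(size=(300, 4)), columns=features)
    y = (X["cash_deposits"] + X["cross_border_tx"] > 1.5).astype(int)  # synthetic labels

    model = LogisticRegression().fit(X, y)

    scores = model.predict_proba(X)[:, 1]      # suspicion score per case
    contributions = X.values * model.coef_[0]  # per-feature contribution (linear model)

    # Work queue for the money laundering officer: highest-scoring cases first,
    # each with its most important decision parameters.
    order = np.argsort(scores)[::-1][:5]
    for case in order:
        top = np.argsort(np.abs(contributions[case]))[::-1][:2]
        print(f"case {case}: score={scores[case]:.2f}, drivers={[features[i] for i in top]}")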

One challenge in XAI is achieving the right balance between performance and explainability. Depending on the complexity of the deployed model, its understandability can suffer badly, since the artificial intelligence departs considerably from a simple binary classification and the matter becomes “unnecessarily” complicated from the AI perspective. The objective of XAI systems is to bring performance and explainability up to the same high level.
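
To illustrate the tension rather than benchmark any particular product, the following sketch compares, on the same synthetic data, a directly interpretable model with a more complex one; XAI tooling aims to keep the readability of the former while approaching the performance of the latter.

    # Illustrative comparison on synthetic data: interpretability vs. model complexity.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=2000, n_features=20, n_informative=10, random_state=1)

    # Coefficients of the linear model map directly to decision parameters;
    # the forest has more capacity but is harder to explain case by case.
    candidates = [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("random forest", RandomForestClassifier(n_estimators=300, random_state=1))]

    for name, clf in candidates:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: mean accuracy {acc:.3f}")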

Master your AI decisions

It can be assumed that banking supervisors will take the matter of AI and compliance more seriously in the future. Penalties could be imposed on institutions that are unable to evaluate monetary damage due to a lack of documentation and transparency in automated processes. The more banks use artificial intelligence and self-learning solutions in their business transactions, the more important it will be to ensure the transparency and explainability of automated decisions and recommendations.

Martin Stolberg, Director Banking at Sopra Steria Consulting, has ten years of experience in analysing customer information. The graduate business engineer focuses on the digital enablement of financial service providers. Claudio Ceccotti is a member of the AI Lab at Sopra Steria Consulting. Together with customers, he identifies AI-based applications and assists in their development up to the productive prototype stage.