XAI: Explainable Artificial Intelligence
With the increasing use of Artificial Intelligence (AI) and the growing learning capability of these solutions, the risk rises that systems will one day know more than their developers and expert trainers, and that banks will no longer understand the technologies they use. There is a growing demand for technology that explains the results AI systems produce.
The use of AI in banks is growing, for instance in providing customers with capital accumulation advisory services. Another field for AI is the automation of compliance processes: with AI, it should be easier and more effective to detect fraudulent activities such as money laundering and credit card fraud. The demands in this respect are becoming more complex. Under the MaRisk and MiFID II legislative frameworks, it is not enough for banks to identify fraud in securities transactions; institutions are also expected to monitor communications for market manipulation – for example for indications of insider trading. AI systems are extremely useful for analysing such vast quantities of data.
On the other hand, it is also important that people can always track how their machine colleagues – a cobot or a self-learning system running in the background – arrive at their decisions and recommendations. When institutions train artificial intelligence today, for instance as a robo-advisor for securities consulting or for opening a new account in compliance with the KYC process, there are no mechanisms in place that make the results understandable. This is anything but trivial with systems that are becoming “cleverer” and improving their ability to learn. Neural networks, for example, can comprise millions of neurons linked by an even larger number of connections. Such a system is trained gradually over a long period of time until it maps inputs to the desired outputs.
The risk with this type of complex system is that customers or bank employees are left with questions after a consultation or decision which the banks themselves cannot answer: “Why did you recommend this share?”, “Why was this person rejected as a customer?”, “Why did the machine classify this transaction as terrorist financing or money laundering?”.
It is essential for banks to have answers to all these questions. A particularly sensitive issue is monitoring financial transaction flows for signs of money laundering and terrorist financing. In the worst case, an AI recommendation can result in a criminal prosecution, and in no way should this type of decision rest on a fully automated process. To take the example of robo-advisory, a customer has the right to find out why the recommended investment was rated as particularly lucrative at the time and why it has not achieved the desired return.
XAI – the four-eye principle for AI systems
Companies deploying artificial intelligence and complex learning systems are focusing more and more on transparency and understanding. The notion of being able to easily track and understand AI decisions is referred to as Explainable AI (XAI). In the U.S., for example, DARPA, an agency of the United States Department of Defense, is researching models for making AI decisions more transparent. The idea of XAI is to create a type of quality gate between the learning process of an AI solution and its later area of application – for example investigating a suspicion of money laundering or a credit rating – before its output is acted upon.
In the case of neural networks, the quality gate must be rooted in the model from the start because it is often not possible to verify results afterwards. To put it in an extremely simplified way:
- When a bank trains a model on all the available attributes of a customer to predict whether the person is more interested in a high-risk investment with high return opportunities or a safer investment with lower return prospects, the result at the end of the process is a binary one. However, the bank cannot explain afterwards why the rating was high-risk or risk-averse, since the institution does not know which combination of attributes was decisive for this specific person.
- If, on the other hand, the bank has the deployed AI algorithm calculate various parameters which help a person take the decision – for example “How impulsive is the person?”, “In which income group is the person classified?” and “To what extent does his/her monthly payment history vary?” – the expert department or an auditor has the chance to present a plausible explanation for the investment to third parties. This effectively balances the AI decision against human judgement.
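The second approach can be sketched in a few lines: instead of returning only a binary label, the model also exposes each parameter's contribution to the score, ranked so that the most decisive attribute comes first. The feature names, weights and customer values below are invented for illustration – a real bank would derive the weights from a trained model.

```python
# Hypothetical linear model: a positive score means "high-risk investor".
# Weights and features are illustrative assumptions, not a real bank model.
WEIGHTS = {
    "impulsiveness_score": 1.8,   # "How impulsive is the person?"
    "income_group": 0.9,          # "In which income group is the person?"
    "payment_variability": 1.2,   # "How much does the payment history vary?"
    "age_normalised": -0.7,
}

def explain(customer):
    """Return the classification plus each parameter's contribution,
    sorted so an auditor sees the most decisive attribute first."""
    contributions = {f: WEIGHTS[f] * customer[f] for f in WEIGHTS}
    score = sum(contributions.values())
    label = "high-risk" if score > 0 else "risk-averse"
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return label, ranked

customer = {
    "impulsiveness_score": 0.8,
    "income_group": 0.5,
    "payment_variability": -0.2,
    "age_normalised": 0.6,
}

label, ranked = explain(customer)
print(label)  # which way the model leans
for name, contribution in ranked:
    print(f"  {name}: {contribution:+.2f}")
```

Here an auditor could tell a third party not only that the customer was classified as high-risk, but that the impulsiveness score contributed most to that decision – exactly the kind of plausible explanation the binary model cannot provide.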
Balancing act between performance and explanation
Banks require an explanation model with decision parameters and an interface that makes explanations more transparent. This so-called explanation framework should be integrated into the entire process between the training of the AI system and the action to be carried out. Tools and plug-ins are available to visualise machine learning data and simplify this process.
In a concrete scenario, self-learning algorithms were implemented as part of a proof of concept for identifying money laundering activities in a bank. Besides the algorithm's classification, the auditor is also provided with the output of the XAI framework. This not only helps to understand the model's decision, it also increases efficiency because decision-related parameters are presented. With this new information, the money laundering officer in the bank can prioritise the processing of suspected cases and, in each case, examine the parameters the algorithm rated as most important for potential money laundering activity.
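The prioritisation step described above can be sketched as follows: each suspected case gets a suspicion score, and alongside it the single most decisive parameter for the officer to examine first. All case data, weights and field names are invented for illustration; a real XAI framework would compute contributions from the deployed model.

```python
# Illustrative weights for a hypothetical money laundering score.
WEIGHTS = {
    "cash_deposit_ratio": 2.0,
    "cross_border_share": 1.5,
    "account_age_years": -0.4,  # older accounts are assumed less suspicious
}

# Invented suspected cases, as they might arrive from transaction monitoring.
cases = [
    {"id": "TX-1001", "cash_deposit_ratio": 0.9, "cross_border_share": 0.7, "account_age_years": 0.5},
    {"id": "TX-1002", "cash_deposit_ratio": 0.2, "cross_border_share": 0.1, "account_age_years": 6.0},
    {"id": "TX-1003", "cash_deposit_ratio": 0.6, "cross_border_share": 0.9, "account_age_years": 1.0},
]

def prioritise(cases):
    """Rank suspected cases by suspicion score, attaching each case's
    most decisive parameter so the officer knows what to check first."""
    ranked = []
    for case in cases:
        contributions = {f: WEIGHTS[f] * case[f] for f in WEIGHTS}
        score = sum(contributions.values())
        top_param = max(contributions, key=lambda f: abs(contributions[f]))
        ranked.append((case["id"], round(score, 2), top_param))
    return sorted(ranked, key=lambda entry: -entry[1])

for case_id, score, top_param in prioritise(cases):
    print(case_id, score, top_param)
```

The point of the sketch is the output shape: a worklist ordered by suspicion, where each entry already names the attribute that drove the score, rather than a bare yes/no flag per transaction.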
One challenge in XAI is striking the right balance between performance and explanation. Depending on the complexity of the deployed model, understandability can suffer badly: the further a model's internal workings depart from a simple binary classification, the more “unnecessarily” complicated the matter appears from a human perspective. The objective of XAI systems is to bring performance and explainability up to the same high level.
Master of your AI decisions
It can be assumed that banking supervisors will take the matter of AI and compliance more seriously in the future. Penalties could be imposed on institutions that are unable to assess monetary damage because their automated processes lack documentation and transparency. The more banks use artificial intelligence and self-learning solutions in business transactions, the more important it will be to ensure the transparency and explainability of automated decisions and recommendations.