‘Artificial intelligence’ (AI) no longer conjures up images from the early Noughties film Minority Report, but instead is vying to become the most powerful weapon in the fight against financial crime. What does AI mean in the legal and regulatory context? Data analysis technology is not new, but it is being newly applied to areas previously entrusted to human judgement, notably compliance.
According to the Financial Conduct Authority (FCA), UK banks spend around £5bn every year combating financial crime – £1bn more than the country spends on prisons. It was only a matter of time before banks would try to make their fraud and money laundering detection processes speedier, cheaper and more effective, and this is where AI can help. Speaking at the FinTech Innovation in Anti-Money Laundering (AML) and Digital ID regional event in London last year, Rob Gruppetta, Head of the Financial Crime Department at the FCA, was confident that AI can be particularly useful in monitoring a bank’s transactions to detect suspicious activity in real time, an area where there is significant potential for human error. The Financial Stability Board (FSB) also published a report last year on the impact of AI on financial services which identified potential benefits and risks to be monitored in the near future.
The financial services industry is increasingly turning to machine-based data analysis for its compliance processes. HSBC has recently partnered with UK start-up Quantexa and Silicon Valley-based Ayasdi to use their AI technology as part of a drive to automate compliance and process transactional data to identify potential money laundering activity. Other banks, from Singapore-based OCBC to Denmark’s Danske Bank, are rapidly following suit and innovating in fraud-detecting AI. Defence experts BAE Systems now provide financial crime technology solutions which allow banks and the National Crime Agency to share information, thus facilitating both compliance and law enforcement.
Beyond the perceived benefits that AI can bring, regulators such as the FCA and the FSB are rightly asking whether there are any drawbacks. Notably, can – and will – AI replace human-based compliance systems? Gruppetta’s speech suggests the FCA does not believe this to be the case. Rather, the FCA thinks AI will complement, but not replace, human judgement, refining the decision-making model over time. The FSB’s report highlights the importance of appropriate risk management and oversight of AI, including adherence to data privacy requirements and attention to cybersecurity. However, while AI may not fully replace human intervention, it may require tailor-made supervision by a new specialist regulatory body, as suggested by evidence given to the parliamentary Science and Technology Committee’s ‘Algorithms in decision-making’ inquiry.