Regulators must define a “minimum set of data” to be shared between banks that would still “serve the purpose” of fraud detection and prevention, said Michele Gentile, head of group correspondent banking, APAC, UniCredit, at Sibos today.
Identifying new ways to fight fraud is critical. Financial criminals are now wielding cutting-edge technology, such as generative AI (GenAI), and exploiting real-time payments, at an estimated cost to the global economy of $485 billion in 2023 alone.
On the second day of Sibos, a session examined the power of collaborative data sharing in fighting fraud. The panel – comprising Sergio Antonio Dalla Riva, head of GTB product development solutions, Intesa Sanpaolo; Ute Kohl, managing director, head of cash management, Deutsche Bank; and Gentile – considered the cutting edge of financial defenses, with a focus on Swift’s Federated Learning AI initiative. Launched at last year’s Sibos, the initiative brings together banks and technology providers to explore how data collaboration, powered by AI, can significantly reduce fraud while addressing key issues around data governance and privacy.
One year on, moderator Amit Mahajan, head of innovation AI, Swift, guided a discussion on the progress made and the next steps in building an even more robust banking network to combat financial criminals.
Lateral agreements vs. regulatory obligations
To demonstrate how crucial it is for regulators to define a minimum level of fraud data sharing, Gentile gave the example of money mules. In such a scenario, he explained, verification of payee is reinforced if the issuer can go to the acquirer and ask for more information on the account – or vice versa.
Under privacy law, however, “[the bank] is not obliged to tell us,” he said. “Now, this could be overcome by lateral agreements, but there are over 10,000 banks on Swift, so this would be a nightmare.”
This thought experiment laid the foundations for Gentile’s call for regulators to make such exchanges mandatory – and to define a minimum level of data that must be shared while still serving the purpose of fraud detection.
Moderator Mahajan noted that Swift is currently working with “synthetic data” to test how this would work in practice. He said: “We will not start with real data because rigor needs to be behind it before we are confident on the direction.” He added that it is “not just what but how” fraud information is shared that is critical.
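As a rough illustration of what testing on synthetic rather than real data could look like, the Python sketch below fabricates payment records with a simple injected fraud pattern that detection logic could be exercised against. The field names, distributions, and fraud rate are assumptions made for illustration and do not reflect Swift’s actual test setup.

```python
# Illustrative only: generate synthetic payment records with a known fraud
# pattern injected, so detection logic can be tested without real customer
# data. All fields and distributions are assumptions, not Swift's schema.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)
n = 10_000

df = pd.DataFrame({
    "amount": rng.lognormal(mean=6.0, sigma=1.2, size=n).round(2),  # payment value
    "hour": rng.integers(0, 24, size=n),                            # time of day
    "new_beneficiary": rng.random(size=n) < 0.15,                   # first payment to this payee?
    "cross_border": rng.random(size=n) < 0.30,
})

# Inject a simple mule-like pattern: a small fraction of payments become
# large, off-hours transfers to new beneficiaries, labelled as fraud.
fraud = rng.random(size=n) < 0.02
df.loc[fraud, "amount"] *= 10
df.loc[fraud, "hour"] = rng.integers(0, 5, size=fraud.sum())
df.loc[fraud, "new_beneficiary"] = True
df["is_fraud"] = fraud

print(df.head())
print(f"Synthetic fraud rate: {df['is_fraud'].mean():.2%}")
```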
The pitfalls of data sharing
Despite the sheer potential of a systematic approach to fraud data exchange, the panel acknowledged some hesitancy within the financial community. This was primarily attributed to privacy concerns.
Dalla Riva said: “First of all is our aim to protect customers.”
Gentile agreed: “The skeleton in the room is the concern that this data falls into the wrong hands…privacy is a key element.” However, he stressed the need for a compromise.
Once the primary aim of protecting customers’ data is achieved, Dalla Riva said, positive action must follow: “Hesitation is causing fragmentation and the fraudsters are laughing.” He called on regulators to make the “sharing of relevant information between different [payment service providers] mandatory.”
But this may only be the start. Kohl pointed out that the financial community’s hesitation could be alleviated by the fact that “fraud patterns and ideas can be exchanged without actually sharing the underlying data.”
Talk of shared solutions developed into calls for intra- and inter-sector co-operation – especially with local authorities. Dalla Riva highlighted that “central authorities manage different kinds of data” and examine it in ways that differ from how banks do. Therefore, “collaboration would mean a new way of acting and analysing.” He labeled fraud a “mighty, criminal financial force,” and argued that in order “to combat it, we must co-operate.”
Kohl said, simply: “Yes. A holistic approach would be wonderful.”
AI: A role for ‘immature’ technologies?
The panel then explored the role that artificial intelligence (AI) will play in fighting financial fraud. While AI is often touted as a panacea for the industry’s ills, Mahajan asked how wise it is to place so much stock in a relatively immature, risky, and underdeveloped innovation.
“If AI is not clever, it could go in the wrong direction,” Kohl responded. “AI can only be as clever as you make it. Real people need to conduct sanity checks along the way.”
Dalla Riva agreed: “Machine learning and neural networks are constantly changing. Those who develop the algorithms hold the keys to its safety.”
Kohl added: “No, AI is not at Star Wars level yet, but it is enhancing. It’s here, it’s getting quicker, and it’s getting better. All use cases are having a big impact. We should not trust it totally, but we are on the right path.”
Gentile underlined the utility of federated machine learning – a way to train AI models without any party seeing or moving the underlying data, which unlocks information that can feed new applications. He stressed the need for “consent among the banking community” in such areas: “If we don’t agree there is no point. We need a shared solution.”
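For readers unfamiliar with the approach, the sketch below illustrates the federated-averaging idea in its simplest form: each bank trains a fraud model locally on data that never leaves its premises, and only the resulting model weights are shared and averaged centrally. This is a minimal illustration under assumed data and model choices, not a representation of Swift’s actual initiative.

```python
# Minimal federated-averaging (FedAvg-style) sketch: each "bank" fits a local
# logistic-regression fraud model on private data; only the model weights are
# shared and averaged. Purely illustrative; not Swift's design.
import numpy as np

rng = np.random.default_rng(0)

def local_gradient_step(weights, X, y, lr=0.1):
    """One gradient step of logistic regression on a bank's private data."""
    preds = 1.0 / (1.0 + np.exp(-X @ weights))        # sigmoid fraud scores
    grad = X.T @ (preds - y) / len(y)                 # log-loss gradient
    return weights - lr * grad

# Three banks, each holding its own (synthetic) transaction features and labels.
banks = [
    (rng.normal(size=(500, 4)), rng.integers(0, 2, size=500))
    for _ in range(3)
]

global_weights = np.zeros(4)
for communication_round in range(20):
    local_updates = []
    for X, y in banks:                                # runs inside each bank
        w = global_weights.copy()
        for _ in range(5):                            # a few local training steps
            w = local_gradient_step(w, X, y)
        local_updates.append(w)                       # only weights leave the bank
    global_weights = np.mean(local_updates, axis=0)   # central aggregation

print("Aggregated fraud-model weights:", np.round(global_weights, 3))
```

The point of the pattern is that raw transactions never cross institutional boundaries; only model parameters do, which is what makes the privacy compromise Gentile describes workable in principle.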
Asked for final comments, Gentile opined: “Two things are at play here: technology and law. We had $500 billion scams last year. We need to be bold. These guys don’t ask twice. We need to move fast, or we will lose the game.”