Someone came up with the idea of running a series of events called Dinner with Skinner … no idea who would have thought that up … and so I run and moderate many discussions with fintech firms and bankers around topics chosen by sponsors. The latest one looked at a key question: how can AI minimise fraud? It brought together a group of payments, fraud and risk folks, and a few people at the end of the chain, as in retailers.
It was a fascinating discussion, and my takeaway is that AI is a double-edged sword. On the positive side, it can bring far more analytics to every customer interaction, augment staff engagement and flag possible suspicious activity in real time. On the negative side, it has given the criminal fraternity a whole new set of tools, from deepfakes to pig-butchering scams (full explanation in the postscript).
Anyways, here are a few notes from the evening, and thanks to those attending for giving me their insights.
How can AI minimise fraud?
- Predictive AI based on machine learning is already absolutely essential for detecting fraud in the volume of electronic payments flowing through modern payment systems.
- AI systems are the only ones capable of making sense of the volume of data passing through the system – indeed, the greater the volume of data on which AI can be trained, the better its overall performance will be.
- Meanwhile, AI is a slam dunk for those perpetrating fraud, with its ability to “spin a web of lies” more effectively than ever before.
- On the “defending” side, we face more challenges: establishing the trust that enables people to “buy in” to our use of data, protecting that data from breaches while in our custody, and complying with laws and regulations.
- This requires investment in working with lawmakers and regulators to ensure we have fit-for-purpose laws and safeguards that enable us to deploy AI effectively to protect customers.
- A key part of this is that removing people from operational or process-related activities reduces operational risk in the ecosystem and minimises the probability of insider collusion.
- AI could reduce operational overheads, for example by capturing notes from a conversation between a customer and a call centre agent when fraud is reported, pulling out information that a person may not think to record, and cross-referencing other reports to spot trends or pick up vulnerable customers.
- Because you have greater transparency and richer data collection, you can implement large-scale pattern recognition, predictive analytics and automated verification, which gives you the ability to detect the use of AI itself, applying the same pattern analysis to areas like identifying false documentation or synthetic data.
- Payment and transaction monitoring systems based on machine learning will keep getting better, as more types of information, data sources and contextual signals can be gathered, stored and modelled.
- Some current fraud protections will come under serious threat, from customer service – both human and automated – to voice authentication, liveness checks and so on. This will create a greater need to evolve ways of differentiating genuine human activity from machine-generated responses and interactions.
- Disruption: using AI to automate red-team activity, where we can use our data to flood criminal networks with spurious data or false leads.
- Targeted attacks by defenders on certain types of fraud, such as automated interaction with false social media posts to “test” a series of potentially fake ads, or identifying common links between them.
- Assisting those having conversations with potential victims about complex scenarios or situations (guided conversation).
- AI will be able to see trends in data and predict behaviour for both valid customers and potential fraudsters. This will have the benefit of reducing false positives, allowing humans to focus on genuine interactions and have more meaningful discussions with good data to support them.
- Integrating AI with social media and using it for pattern tracking could enable a bank to spot a long conversation in which a fraud is being built up, such as pig-butchering or “hi mom” frauds. The pattern could be spotted automatically over the length of a conversation that might have taken place over months (a toy sketch of this idea follows this list).
- The challenge with this is that the messaging apps have end-to-end encryption, so we, as service providers, can’t read the messages. This means such a tool would need to be deployed directly into the messaging app client to work. Would people accept that?
- AI could also be used to look for vulnerabilities in systems, code and outdated software.
- There are risks, however, such as bad data and data bias, and the environmental cost of processing such large amounts of data.
- The emergence of (population-scale) generative AI tools – text, image/video, voice – will form the basis of a new set of threats and opportunities in fraud.
- AI used to commit fraud versus AI used to defend against fraud: who wins? Ultimately, it’s a data problem. Having good data – real-time, reliable and so on – is a critical prerequisite for AI. Can the industry do more to share data to fight fraud rings and identify patterns?
- Advanced machine learning will move fraud analytics from rules-based engines to live, transaction-specific models built on vastly more data and intelligence. It’s machines versus machines (see the rules-versus-model sketch after this list).
- AI will rapidly accelerate the move to a machine-vs-machine world of fraud, and it needs to be considered in any fraud strategy, wherever you sit in the financial value chain.
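To make that rules-versus-models point concrete, here is a minimal sketch in Python using scikit-learn on synthetic data. Everything in it – the features, the rule thresholds and the toy process that generates the “fraud” labels – is an illustrative assumption for the example, not anything described at the dinner or used by any real bank.

```python
# Minimal sketch: a fixed rules-based check versus a learned,
# transaction-specific fraud score. Synthetic data; all features,
# thresholds and the fraud-generating rule are invented for the example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 20_000

# Synthetic transactions: [amount, hour_of_day, new_payee_flag]
X = np.column_stack([
    rng.lognormal(3.5, 1.2, n),   # payment amount
    rng.integers(0, 24, n),       # hour of day
    rng.integers(0, 2, n),        # paying a new payee?
])
# Toy ground truth: fraud is likelier for large, very late,
# new-payee payments (purely illustrative).
risk = 0.0001 * X[:, 0] + 0.05 * (X[:, 1] >= 23) + 0.1 * X[:, 2]
y = (rng.random(n) < risk).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# 1) Rules-based engine: one fixed threshold for every customer.
def rule_flag(tx):
    amount, hour, new_payee = tx
    return amount > 500 and new_payee == 1

rule_preds = np.array([rule_flag(tx) for tx in X_test], dtype=float)

# 2) Learned model: scores each transaction from patterns in the data.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
model_scores = model.predict_proba(X_test)[:, 1]

print("rules AUC:", roc_auc_score(y_test, rule_preds))
print("model AUC:", roc_auc_score(y_test, model_scores))
```

The point is simply that the learned score adapts to patterns across all the signals at once, and improves as more data arrives, whereas the fixed rule only ever fires on the thresholds someone hard-coded.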
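And as a toy illustration of the conversation-tracking idea above: a real system would need access to the messages (which, as noted, end-to-end encryption prevents) and would use trained language models rather than keyword lists, but scoring a long message history for a scam build-up might look something like this. All phrases, weights and messages are invented for the example.

```python
# Toy illustration only: scoring a message history for the slow
# build-up typical of pig-butchering and "hi mom" scams.
# All signal phrases and weights here are invented for the example.
from dataclasses import dataclass

SCAM_SIGNALS = {
    "wrong number": 1.0,        # the classic opener
    "new number": 1.5,          # "hi mom, this is my new number"
    "crypto": 2.0,
    "investment": 2.0,
    "trading platform": 3.0,
    "guaranteed returns": 3.0,
    "send money": 3.0,
}

@dataclass
class Message:
    day: int    # days since the conversation started
    text: str

def scam_score(history: list[Message]) -> float:
    """Sum signal weights, boosted when they appear weeks into a
    long-running thread, i.e. the 'fattening up' pattern."""
    if not history:
        return 0.0
    score = 0.0
    for m in history:
        for phrase, weight in SCAM_SIGNALS.items():
            if phrase in m.text.lower():
                # Signals arriving late in the relationship are
                # more suspicious than the same words on day one.
                score += weight * (1 + m.day / 30)
    # Long-running one-to-one threads get an extra nudge.
    span = max(m.day for m in history)
    return score * (1.5 if span > 30 else 1.0)

history = [
    Message(0, "Hey Tom, let's catch up tomorrow?"),
    Message(1, "Oh sorry, wrong number! But you seem nice..."),
    Message(40, "My uncle runs a trading platform, guaranteed returns"),
    Message(45, "You should try a small crypto investment with me"),
]
print(f"scam score: {scam_score(history):.1f}")  # high => alert a human
```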
I was thinking afterwards that a lot of the discussion was about data sharing, but we didn’t really cover the consequences of sharing that data, particularly the thorny issue of trust:
- If I use AI to determine that someone is a bad actor and I share that with peers, who is liable, or even responsible, if my determination is incorrect and a peer acts on that information? What happens if this inadvertently takes someone out of the banking system?
- If I am given data and I don’t use it, am I responsible if I don’t pick up potential fraud? Does sharing that data absolve the sender of their responsibility?
There was also quite a bit of discussion around using AI on your phone to digitally alert people to potential fraud, without needing to speak to anyone. I think this significantly underplays human behaviour and takes on the issue from the perspective of people who are heavily involved in the industry.
It is hard to break the fraudster’s spell, not only at a consumer level but even with corporates. If we only interact digitally, how do we ensure the victim doesn’t just ignore these warnings? A human touch is often needed.
A lot of people aren’t comfortable with the amount of information that companies hold about them, and do not like the controls already put in place to help protect them, such as two-factor authentication, which becomes problematic if you lose your phone, change your number or lose your signal.
Postscript from NordVPN explaining the pig-butchering scam:
The scam starts with a text. It can be on social media sites, messaging apps like WhatsApp, or dating apps like Tinder.
Usually, the message sounds as if its intended recipient is someone else and the sender just got the number wrong – something like “Hey, Tom, let’s catch up tomorrow?” But when you tell them they have the wrong person, they keep texting you to build a connection. This is where the second stage of the scam begins.
Now, it’s all about gaining the victim’s trust. Scammers will build a close personal relationship that can go on for weeks or even months.
Then they casually start mentioning how much money they have – fast cars, fancy vacations, even private planes. All thanks to an investment app you’ve never heard of. Maybe you should invest in cryptocurrency as well? They can even give you investment advice.
This is where the third phase of the scam begins – the “fattening up”. The scammer will recommend a specific trading platform for you and them to try together.
To build trust, the scammers will suggest making small investments first. You’ll immediately see returns on your investments. The scammer might even encourage you to withdraw some money, to be sure the platform is legit. But, unfortunately, these are all fraudulent cryptocurrency trading platforms.