You’re out of town, and your bank has stranded you. Instead of assuming that you’ve simply traveled, the bank suspects someone is swindling you, so it locks your card. A good idea in theory, but in practice, kind of a pain.
Has this happened to you? You’re not alone — a 2015 analysis found that 15 percent of all cardholders had a transaction wrongly flagged as fraudulent over the previous year.
Credit card providers use algorithms to flag suspicious charges, but to those algorithms, innocent activity can look like fraud — hence that infuriating lock. Now MIT data scientists say a new approach could cut those false positives in half — a sign that machine learning could protect ordinary consumers from financial fraud.
Today’s algorithms watch for simple warning signs that could signal fraud, especially unusually expensive purchases.
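A warning sign of that sort can be as crude as a threshold check. Here’s a minimal sketch of the idea — the function, field names, and 5x multiplier are illustrative assumptions, not details from any real issuer’s system:

```python
# Illustrative rule-based check: flag any charge far above the
# cardholder's typical spending. The threshold is an assumption.
def is_suspicious(amount, typical_amount, multiplier=5.0):
    """Flag a transaction whose amount dwarfs the user's norm."""
    return amount > multiplier * typical_amount

print(is_suspicious(amount=40.0, typical_amount=35.0))   # → False: ordinary purchase
print(is_suspicious(amount=900.0, typical_amount=35.0))  # → True: unusually large charge
```

A rule this blunt catches some fraud, but it also fires on every legitimate splurge — which is exactly the false-positive problem the new work targets.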
A new paper, which the MIT researchers presented last week at the European Conference on Machine Learning, describes how they used a machine learning algorithm to examine a database of 900 million transactions. An unnamed multinational bank provided the transactions, some of which were marked as fraudulent.
The researchers’ algorithm, they found, was able to identify subtler relationships between variables by looking at detailed information that financial institutions now log, like the location, time-stamp, and nitty-gritty technical attributes of the terminal where the payment was processed. Consecutive purchases in different cities might not trip its alarm if the data shows that one was made in-person and the other online, for instance, or the algorithm might find suspicious relationships between certain types of payment terminals and a user’s financial history.
The result, according to the paper, was that this algorithm detected a slightly higher proportion of actual fraudulent transactions than existing algorithms, and — critically — reduced the number of false positives by 54 percent.
False positives for credit card fraud don’t just annoy cardholders and waste the resources of issuers — they also cut into the bottom line for merchants. According to the 2015 analysis, nearly a third of customers who experienced a false positive security alert abandoned the seller where the snafu took place.
If adopting a new algorithm means never getting stranded at an airport again, we’re all for it.