As a general rule, anyone who posts pictures of money and talks about investment opportunities is probably trouble. Those are the hallmarks of the "money flipper" scam, a criminal scheme that’s been troubling Instagram for years. The accounts boast a mysterious investment system, posting photos of cash and other luxury goods as proof that it works. Then, in a direct message, they’ll offer to cut followers in on the deal. Sometimes the offer is to split a money order; other times it’s for access to an empty debit card account. Either way, the scammer abruptly walks off with a few hundred dollars and the mark is left to pick up the tab.
It’s a simple scam, but it’s become remarkably popular on Instagram. A report released today by the threat intelligence firm ZeroFox found a total of 4,574 unique instances of the scam on Instagram since 2013, spread across 1,386 different accounts. That’s just a fraction of the 2 million posts scanned by ZeroFox, and an even smaller fraction of the 30 billion posts on the platform itself. Still, it suggests the scam has found a persistent niche on Instagram, and according to ZeroFox, it could present a long-term problem for any financial companies looking to use Instagram for more than just marketing.
In particular, the company’s researchers found that scammers seem to congregate around the official accounts for banks and other financial institutions. Within 48 hours of following a few dozen banks with a dummy account, ZeroFox researchers were followed by 23 different scam accounts. One of those accounts then sent the researchers a direct message asking for the bank card and PIN for a blank account, presumably in aid of a check-kiting scheme.
To find those accounts more systematically, the researchers built a machine learning classifier that reads account descriptions for trigger words like "money" and scans pictures for bank logos, receipts, or bundles of cash. The resulting model was able to predict scams with over 98 percent accuracy. Most notably, it found that overuse of hashtags was a strong indicator that a given account was a scammer. The model also found that military accounts were uniquely likely to be targeted by scammers, confirming a number of previous anecdotal reports.
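ZeroFox hasn’t published its model or feature set, but the text side of the approach is easy to picture. The sketch below is purely illustrative: the trigger words, the hashtag threshold, and the simple rule standing in for a trained classifier are all assumptions, not details from the report.

```python
import re

# Illustrative stand-ins -- ZeroFox has not disclosed its actual
# trigger words or decision thresholds.
TRIGGER_WORDS = {"money", "flip", "cash", "invest", "profit", "legit"}
HASHTAG_THRESHOLD = 10  # the report flags "overuse of hashtags" as a signal


def extract_features(caption: str) -> dict:
    """Turn a post caption into simple text features of the kind described."""
    words = re.findall(r"[a-z']+", caption.lower())
    hashtags = re.findall(r"#\w+", caption)
    return {
        "trigger_hits": sum(w in TRIGGER_WORDS for w in words),
        "hashtag_count": len(hashtags),
    }


def looks_like_flip_scam(caption: str) -> bool:
    """Crude rule-based stand-in for the trained classifier."""
    f = extract_features(caption)
    return f["trigger_hits"] >= 2 or f["hashtag_count"] > HASHTAG_THRESHOLD
```

A real system would feed features like these (plus image-recognition signals for logos and cash) into a trained model rather than hand-set rules, but the pipeline shape is the same: extract features, then score.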
Instagram has a number of systems in place to stop fraud, according to a Facebook representative, who spoke in general terms because the company did not have prepublication access to the report. "This kind of activity is pretty low volume on Instagram," said the representative. Many of Instagram’s anti-fraud systems use a machine learning approach similar to the techniques employed by ZeroFox, but "the challenge is doing it in a robust way so that it still works after bad actors change their approach a few times," the representative said.
Still, Instagram’s current systems had little success in taking down the scam posts identified by ZeroFox’s report. Forty-five days after the initial scan, researchers found only one in five scam posts had been taken down. Over the same period, three times as many new scam posts had been uploaded.
"With the scale of these social media networks, to be able to continuously update a model that deals with every single variation and nuance on a network of billions of users is almost impossible," says Philip Tully, one of the researchers responsible for the report. "Organizations that talk to their customers over social media have a responsibility to secure that platform, and unfortunately, it’s not as easy as controlling access. This is an open platform. You don’t control it. And all the things that make it valuable also make it risky."