Facebook’s fight against fake news, hoaxes, and spammers will never be over for good, but the company is doing its best to keep up the pressure on bad actors on its network. Today, it announced a number of new measures, including using machine learning to spot hoax articles that are copied and pasted by different accounts.
This isn’t the first time Facebook has announced it’s using AI to target misleading content, and it’s clear that the company exaggerates AI’s power to sort good content from bad. But it does seem to be taking small, sensible steps in applying the technology. Machine learning can’t automatically fact-check stories or make nuanced judgments about misleading headlines, but it can recognize easily identifiable signals that suggest an account is up to no good. Like, for example, spotting copies of stories that human fact-checkers have already identified as fake.
Per Facebook’s blog post today:
Machine learning helps us identify duplicates of debunked stories. For example, a fact-checker in France debunked the claim that you can save a person having a stroke by using a needle to prick their finger and draw blood. This allowed us to identify over 20 domains and over 1,400 links spreading that same claim.
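Facebook hasn't published how its duplicate detection works, but a common approach to spotting copied-and-pasted versions of a known claim is near-duplicate text matching. The sketch below is purely illustrative (the shingle size, threshold, and `is_duplicate` helper are invented for this example, not Facebook's system): it compares a candidate post against a debunked claim using word shingles and Jaccard similarity.

```python
# Illustrative sketch, NOT Facebook's actual system: flagging near-duplicates
# of a claim that fact-checkers have already debunked, using word shingles
# and Jaccard similarity.

def shingles(text, n=3):
    """Return the set of n-word shingles for a lowercased text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# The claim debunked by the French fact-checker, paraphrased from the post.
DEBUNKED = ("you can save a person having a stroke by pricking "
            "their finger with a needle to draw blood")

def is_duplicate(candidate, threshold=0.5):
    """Flag a candidate post as a likely copy of the debunked claim."""
    return jaccard(shingles(DEBUNKED), shingles(candidate)) >= threshold

copy = ("you can save a person having a stroke by pricking "
        "their finger with a needle")
unrelated = "a new study finds regular exercise lowers blood pressure"
```

At scale, a production system would use something like MinHash or locality-sensitive hashing rather than pairwise comparison, but the underlying idea — measuring overlap with already-debunked text — is the same.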
In an interview with BuzzFeed News, Facebook product manager Tessa Lyons went into more detail on this use of machine learning, saying its filters are now trying to predict which pages are “likely” to share bad content. This includes looking for page admins that live in one country but target users in another — a common way for spammers in Eastern European countries to make money. “These admins often have suspicious accounts that are not fake but are identified in our system as having suspicious activity,” Lyons said.
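The kind of signals Lyons describes can be thought of as inputs to a risk score. The toy scorer below is a hypothetical sketch: the feature names, weights, and threshold are all invented for illustration, and Facebook's real models are certainly far more complex.

```python
# Hypothetical sketch of signal-based page scoring; feature names and
# weights are invented for illustration and are not Facebook's.

def page_risk_score(page):
    """Score a page (given as a dict of signals); higher is more suspicious."""
    score = 0.0
    # Admins based in one country but targeting users in another,
    # a pattern the article associates with spammers.
    if page["admin_country"] != page["audience_country"]:
        score += 0.5
    # Admin accounts previously flagged for suspicious (not fake) activity.
    if page["admin_flagged_suspicious"]:
        score += 0.3
    # Page largely reshares links already rated false by fact-checkers.
    if page["share_of_debunked_links"] > 0.2:
        score += 0.4
    return score

def likely_bad(page, threshold=0.7):
    """Predict whether a page is 'likely' to share bad content."""
    return page_risk_score(page) >= threshold

suspicious_page = {
    "admin_country": "MK",
    "audience_country": "US",
    "admin_flagged_suspicious": True,
    "share_of_debunked_links": 0.0,
}
benign_page = {
    "admin_country": "US",
    "audience_country": "US",
    "admin_flagged_suspicious": False,
    "share_of_debunked_links": 0.0,
}
```

In practice such weights would be learned from labeled data rather than hand-set, which is also why, as Lyons concedes below, a system like this can misfire on legitimate pages that happen to trip the same signals.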
Lyons admitted that it was possible these automated systems could target legitimate sites by accident, but she said the company felt “pretty good” it was hitting the right marks.

Facebook says it takes action against pages and sites that spread fake news and hoaxes by “reducing their distribution and removing their ability to monetize.” That last part is vital, as most of these sites wouldn’t even exist without the promise of financial reward.
In its blog post, Facebook detailed a number of other new steps it’s taking. These include working with third-party fact-checkers in more countries (it now operates in 14 nations and “plans to scale to more ... by the end of the year”) and expanding a trial of fact-checking individual photos and videos presented out of context (for example, images from an older war presented as if they show a current-day conflict).
Ultimately, it’s schemes like these — which involve human fact-checkers — that are most effective in identifying fake news. But AI can still provide a useful backup.