
Facebook is patenting a tool that could help automate removal of fake news


A project in the works since 2015



As Facebook works on new tools to stop the spread of misinformation on its network, it’s seeking to patent technology that could be used for that purpose. This month the US Patent and Trademark Office published Facebook’s application for Patent 0350675: “systems and methods to identify objectionable content.” The application, which was filed in June 2015, describes a sophisticated system for identifying inappropriate text and images and removing them from the network.

As described in the application, the primary purpose of the tool is to improve the detection of pornography, hate speech, and bullying. But last month, Facebook CEO Mark Zuckerberg highlighted the need for “better technical systems to detect what people will flag as false before they do it themselves.” The application, published Thursday and still pending approval, offers some ideas for how such a system could work.

A Facebook spokeswoman said the company often seeks patents for technology that it never implements, and said this patent should not be taken as an indication of the company’s future plans. The spokeswoman declined to comment on whether it was now in use.

The system described in the application is largely consistent with Facebook’s own descriptions of how it currently handles objectionable content. But it also adds a layer of machine learning to make reporting bad posts more efficient, and to help the system learn common markers of objectionable content over time — tools that sound similar to the anticipatory flagging that Zuckerberg says is needed to combat fake news.


The move comes at a time when Facebook is under increasing public pressure to reduce the spread of propaganda through its network. The company has expressed commitment to making improvements, but has so far responded with caution to the idea that machine learning can separate fact from fiction. Facebook has more or less dismissed the idea of letting the workers who review pornography and bullying content assess the truthfulness of news articles, largely because of the difficulty in establishing clear standards.

There are financial incentives for the company’s caution. Facebook is a primary source of news and information for nearly half of Americans, but presents itself as a neutral platform. Making editorial judgment calls in the News Feed risks alienating users across the ideological spectrum, even if Facebook did it effectively. The company fired its editorial team responsible for identifying trending topics earlier this year amid concerns that team members brought a political bias to the project.

Historically, a piece of content has generally been removed from Facebook after a two-step process. A user reports the link as containing objectionable content, and that content is then reviewed by one or more people who work for Facebook. If moderators decide the content is objectionable, they remove it from Facebook.


The system that Facebook seeks to patent augments that process with machine learning, gathering signals about the likelihood that the reported content is in fact objectionable and assigning it a score. Items with higher scores would be expedited for faster review. And over time, the system would identify characteristics of reports that are deemed to be valid, helping it make smarter guesses about which content should likely be removed.
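To make that concrete, here is a rough sketch, in Python, of what such a scoring-and-triage loop might look like. The feature names, the choice of a simple logistic regression, and all the numbers are my own illustrative guesses; the application describes the general approach, not an implementation.

```python
# Hypothetical sketch: score incoming reports using the outcomes of past human
# reviews, so that reports most likely to be valid are reviewed first.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per past report:
# [reporter's historical accuracy, reporter verified (0/1), distinct reporters, reporter account age in years]
past_reports = np.array([
    [0.9, 1, 12, 5.0],
    [0.1, 0,  1, 0.1],
    [0.7, 0,  4, 2.0],
    [0.2, 0,  2, 0.3],
])
outcomes = np.array([1, 0, 1, 0])  # 1 = reviewers removed the content, 0 = report rejected

model = LogisticRegression().fit(past_reports, outcomes)

def priority(report: list[float]) -> float:
    """Estimated probability that the report is valid; higher scores are reviewed sooner."""
    return float(model.predict_proba([report])[0, 1])

# New reports are ordered for human review by descending score.
incoming = {"post-a": [0.8, 1, 9, 4.0], "post-b": [0.3, 0, 1, 0.2]}
review_queue = sorted(incoming, key=lambda cid: priority(incoming[cid]), reverse=True)
print(review_queue)  # post-a, the better-supported report, should come up first
```

The important part is the feedback loop: every human review becomes another training example, which is how a system like this would learn the common markers of objectionable content over time.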

Bullying, hate speech, and pornography are much easier to identify than false news stories. Bullying and hate speech typically involve a limited, if ever-evolving, set of words. Pornography involves naked body parts that today’s machine-vision systems are very good at identifying. (Sometimes a little too good, if you post a picture of yourself breastfeeding.)

But false news stories have shared characteristics, too. By false news, I’m referring to stories with a premise that is demonstrably false: the fake viral Facebook story alleging that an FBI agent investigating Hillary Clinton’s private email server had killed himself and his wife, for example, or the one that said Donald Trump won the popular vote, which became Google’s top answer to “who won the popular vote” in the United States despite being false.


The system described in Facebook’s patent application appears ripe for adaptation to the fake news problem. The application identifies several signals used to identify objectionable content that could be useful in separating good links from bad. Here are a few, followed below by a rough sketch of how they might be combined:

The record of the person reporting the content. If a user’s reports of objectionable content are generally validated, their reports are taken more seriously. Asking users to judge the accuracy of news is fraught with technical and philosophical problems, and by itself could generate more problems than it solves. (What happens when you set the Make America Great Again crowd loose on the New York Times’ Facebook page?) But Facebook should still explore the possibility that its best users could help identify the worst offenders in the false-news racket.

Profile verification. Facebook takes complaints from VIPs more seriously than it does from average people, because it wants as many celebrities hanging out there as possible. It seems possible that some of these verified users, who include many members of the media, would be really good at sniffing out hoaxes, and their reports would make for a powerful signal in identifying the worst offenders.

Number of people reporting the content as objectionable. On one hand, this seems like an obvious signal that something is wrong. On the other, it could be gamed by Facebook users for ideological reasons, and some surely would try. So it can’t be the only signal, but it could still be a signal: a story that is continuously being reported as false is worthy of more scrutiny from Facebook, if nothing else.

Account age. If you created your Facebook account the same day you report someone’s content as objectionable, the odds that you are harassing them are much higher than if you’ve been on the platform for years. One weakness of user-reported fake news is that it could be manipulated by malicious mobs; weighing account age when evaluating a report is one way to guard against that.
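Taken together, the signals above might be folded into a single report score along these lines. The weights are arbitrary placeholders of my own; nothing in the application specifies how signals are weighted or combined.

```python
# Hypothetical weighted blend of the four signals described above.
def report_score(reporter_accuracy: float, reporter_verified: bool,
                 num_reporters: int, account_age_years: float) -> float:
    """Higher scores mean the report is more likely valid; all weights are made up."""
    score = 0.0
    score += 0.4 * reporter_accuracy                    # record of the person reporting
    score += 0.2 * (1.0 if reporter_verified else 0.0)  # profile verification
    score += 0.3 * min(num_reporters / 10.0, 1.0)       # number of reporters, capped
    score += 0.1 * min(account_age_years / 5.0, 1.0)    # brand-new accounts count for less
    return score

# A report from a verified, historically accurate user with an established account...
print(report_score(0.9, True, 8, 4.0))   # ~0.88: expedited for review
# ...versus a lone report from a day-old account.
print(report_score(0.0, False, 1, 0.0))  # ~0.03: low priority
```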


Facebook would need to augment the system described in the patent with other signals common to fake news content farms. There’s the age of the “news” site in question: many of 2016’s biggest fake viral hits originated from sites born in the past year. Or Facebook could try to build something like Google’s PageRank, a system that evaluates the quality of shared links by analyzing how often those articles are linked to by other credible outlets.
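For what it’s worth, the PageRank idea is simple enough to sketch. The toy link graph and the bare-bones power iteration below are mine, purely for illustration; Facebook has announced nothing of the sort.

```python
# Illustrative sketch of a PageRank-style credibility score over a domain link graph.
def pagerank(links: dict[str, list[str]], damping: float = 0.85, iters: int = 50) -> dict[str, float]:
    """Bare-bones power iteration over a domain-to-domain link graph."""
    domains = list(links)
    rank = {d: 1.0 / len(domains) for d in domains}
    for _ in range(iters):
        new_rank = {d: (1 - damping) / len(domains) for d in domains}
        for src, outgoing in links.items():
            if not outgoing:  # dangling sites simply leak rank in this simplified version
                continue
            share = damping * rank[src] / len(outgoing)
            for dst in outgoing:
                new_rank[dst] += share
        rank = new_rank
    return rank

# Made-up toy graph: who links to whom among a few domains.
toy_graph = {
    "nytimes.com":       ["apnews.com", "reuters.com"],
    "apnews.com":        ["reuters.com"],
    "reuters.com":       ["apnews.com"],
    "hoax-farm.example": [],
}
print(pagerank(toy_graph))
```

In this toy graph, the site that no credible outlet links to ends up with the lowest score, which is exactly the kind of signal a system like this would be after.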

It remains unclear what role Facebook is willing to play in reducing the spread of misinformation, in large part because of how difficult it is to identify. “Identifying the ‘truth’ is complicated,” Mark Zuckerberg wrote last month. “While some hoaxes can be completely debunked, a greater amount of content, including from mainstream sources, often gets the basic idea right but some details wrong or omitted.” Moreover, most people believe the fake news they share is true, which undermines efforts to rely on them to report bad links.

It’s also easy to overstate the importance of reducing misinformation on Facebook. It would do nothing to eliminate the problem of confirmation bias, which leads us to ignore information we disagree with, or the filter bubbles we create on Facebook with our clicks. A Facebook where credible news organizations compete for space with inflammatory, partisan, but not-totally-factually-wrong pieces might still result in a user base where everyone is trapped in their respective echo chambers. Still, doing nothing about the problem reeks of defeatism. And as coverage of the fake news epidemic continues, Facebook’s reputation suffers.

Last month Zuckerberg laid out a series of steps the company would take to reduce the spread of misinformation. The steps included creating a better tool for reporting bad links, working with third-party fact-checking organizations, and potentially putting warning labels on links thought to contain misinformation. (A test of a feature that asks users to rate the accuracy of news was spotted in the wild on Tuesday.)

Even if Facebook does label a link as false, the company has said little about what the consequences will be. Today the company says it “penalizes” links that many users have reported as false, but the penalty appears to be minor in many cases: it hasn’t prevented fake stories from racking up tens of thousands of shares and millions of page views. Facebook’s patent application lists only one consequence for objectionable material: removal. On the one hand, removing links to stories would subject Facebook to significant, and possibly justified, criticism. On the other, it’s difficult to imagine how the company can shed the stigma of being a swamp of misinformation unless it does.