In Facebook’s protracted efforts to be remembered as something other than the largest misinformation megaphone in history, it’s employed a number of strategies, from spinning its own misleading PR narratives to actual UI changes. Today, it announced a new tactic: not only will posts containing misinformation be made less visible, but so will every post from the individual users who repeatedly share it.
For several years, the social giant has plugged away at fact-checking partnerships meant to disincentivize the spread of viral misinformation, using the results of those checks to label offending posts rather than remove them. In some cases, it’s taken small steps toward hiding content found to be false or polarizing: during the 2020 election, for instance, it stopped recommending political groups. Individual users, however, were free to post whatever they wanted with no consequences to speak of. No longer!
“Starting today, we will reduce the distribution of all posts in News Feed from an individual’s Facebook account if they repeatedly share content that has been rated by one of our fact-checking partners,” the company wrote in a press release. While demonstrably false posts are already demoted in the News Feed rankings, users who regularly share misinformation will now see all of their content pushed down the feed’s endless scroll.
It remains to be seen exactly what tangible impact this expanded enforcement will have. While individual Facebook users were previously immune to this sort of scrutiny, Instagram users were not, yet vaccine misinformation has proliferated on the photo-sharing app all the same, which doesn’t bode well for the new policy. No matter how sophisticated its systems, as I’ve argued before, Facebook is simply too large to monitor.