Facebook is expanding its playbook for tackling “coordinated inauthentic behavior” to cover similar campaigns that don’t involve impersonation. Reuters first reported the new policy, which Facebook reportedly says is in its early stages. Practically speaking, it lets moderators take broader action than banning individual accounts or posts that break hard rules, with the aim of stopping coordinated attempts to harass users or get them banned.
As Reuters points out, Facebook acknowledged in January that its coordinated inauthentic behavior (or CIB) rules had limits. “We have little policy around coordinated authentic harm,” an internal report published by BuzzFeed admitted, referring to the Stop the Steal campaign to overturn the US presidential election. “What do we do when a movement is authentic, coordinated through grassroots or authentic means, but is inherently harmful and violates the spirit of our policy?” The report suggested things like adding “friction” to the growth of movements like Stop the Steal — ultimately, it ended up banning the phrase from Facebook and Instagram.
Reuters cites two types of incidents where Facebook might target “groups of coordinated real accounts that systemically break its rules”: mass reporting, where people falsely report another user for policy violations, and brigading, or a coordinated campaign to target someone for harassment. Some of these incidents could involve state-sponsored groups that are similar to government-backed “troll farms” but use participants’ real Facebook accounts. Others could be independent — coordinated by political movements or groups of fans.
Facebook expanded on the news in a blog post, describing a category it dubbed “coordinated social harm” and pledging to crack down on it. As an example, it cited action against the conspiratorial Querdenken anti-vaccination movement in Germany, whose members it said “used authentic and duplicate accounts to post and amplify violating content,” including health misinformation. “While we aren’t banning all Querdenken content, we’re continuing to monitor the situation and will take action if we find additional violations to prevent abuse on our platform and protect people using our services,” the post said.
Update 2:10PM ET: Added further detail from a Facebook blog post.