
The Boogaloo movement has successfully hijacked social networks to spread

Facebook says further enforcement is planned — but some of the damage is already done

A member of the far-right militia, Boogaloo Bois, walks next to protestors demonstrating outside Charlotte Mecklenburg Police Department Metro Division 2 just outside of downtown Charlotte, North Carolina, on May 29, 2020
Photo by LOGAN CYRUS/AFP via Getty Images

Lately a question I have been asking myself is: how worried do we need to be about the Boogaloo groups?

The Boogaloo movement, if you’ve been sitting this one out so far, refers to a loosely knit group of right-wing extremists, some of whom advocate for a second civil war. (The name derives from the camp classic breakdancing movie Breakin’ 2: Electric Boogaloo. In the delightfully dry phrasing of Wikipedia authors, “2: Electric Boogaloo became a verbal template appended to a topic as a signal of pejorative parody.”)

While the use of the term in this way dates to at least 2012, it has gained new prominence after a series of violent incidents linked to its adherents. An Air Force staff sergeant was charged with the murder of a Federal Protective Service officer and a Santa Cruz sheriff’s sergeant earlier this month, and authorities have said they found paraphernalia connected with the movement in the suspect’s van. And in Las Vegas, three men were indicted “for allegedly conspiring to destroy government and private property during protests in that city on May 30, and for allegedly possessing Molotov cocktails,” the Wall Street Journal reported.

Also of note is where Boogaloo adherents are organizing themselves: social networks, most notably Facebook.

Earlier this month, Facebook removed Boogaloo groups from its recommendations. If you join a group discussing a Boogaloo-adjacent subject, including militias and Second Amendment rights, you should not see recommendations to join more explicitly Boogaloo groups. But in the Washington Post on Wednesday, Tonya Riley found that Boogaloo recommendations are still plentiful.

Riley writes (emphasis hers):

Researchers at the global nonprofit group Avaaz found nearly two dozen Facebook pages affiliated with the “boogaloo” movement, a generally anti-government and anti-law enforcement ideology. Despite the amorphous nature of the online movement, members have become a notable physical presence at both rallies against pandemic shutdowns and more recently Black Lives Matter protests against police brutality. 

Posts on pages flagged by Avaaz between May 28 and June 18 included explicit calls for armed violence as well as more borderline content such as anti-government memes employing euphemisms for violence. Some pages also shared misinformation about the protests, such as memes claiming police were placing bricks to cause riots and conspiracy theories about Bill Gates and George Soros. The majority of the pages were created within the past six months and had a collective following of tens of thousands of users. 

The fear is that, in an already volatile moment when Americans are confronting a newly resurgent pandemic, record unemployment, and a negligent federal government, extremist sentiments that take root on social networks could boil over into more widespread violence. The question is what steps those networks are prepared to take to stop that from happening.

Why are there so many Boogaloo groups on Facebook and elsewhere? (The movement has also been found to be active on Twitter, Discord, and Reddit, among other social sites.) One reason is that what qualifies as Boogaloo is unusually ambiguous. Craig Timberg, Elizabeth Dwoskin and Souad Mekhennet attempted a definition earlier this month in the Post:

These groups have displayed a flexible ideology, espousing gun rights in Richmond in January, opposition to government public health restrictions in several state capitals in March and April and, over the past week, resistance to police brutality against African Americans, though the goal in some cases may be mainly to distract attention from those causes, according to recent research.

Some far-right groups have purposefully sown confusion by impersonating left-wing activists, adding chaos to already turbulent days of protests in which local officials have blamed unnamed outsiders and left-wing groups for the mayhem.

If Boogaloo were a single, defined group with a designated leader and a stated commitment to violently overthrowing the American state, dealing with it would be pretty straightforward: you ban its social accounts and all the posts praising it.

Instead, though, Boogaloo is an unruly collection of ideas. Some, like Second Amendment rights and opposition to shelter-in-place orders, are within mainstream American political discourse. Others, such as militant white supremacy and violence against the state, are not. Still other elements, like some adherents’ support for Black Lives Matter, have confounded researchers over whether they are legitimate or simply meant to distract from the movement’s true aims.

At least two other issues have complicated the act of removing violent Boogaloo posts. One is that many of the posts are deeply ironic in the favored right-wing mode, making it difficult for moderators to determine which posts are credible threats and which are simply “a joke.” The other is that the terminology around Boogaloo has rapidly mutated to include more than 50 variants, with posts connected to the movement using sound-alike substitutes such as “big igloo” and “big luau.” (The latter is why some adherents are believed to wear Hawaiian shirts when participating in public demonstrations.) That’s not unique to Boogaloo, of course — street drug names transform in similar ways — but it has made the cat-and-mouse game more difficult.
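To see why sound-alike substitutes defeat simple moderation, consider a toy sketch of the problem. This is not Facebook’s actual system — the blocklist, threshold, and function names here are invented for illustration — but it shows how an exact-match blocklist misses a variant like “big igloo,” while even crude string-similarity matching catches it:

```python
from difflib import SequenceMatcher

# Hypothetical blocklist; real moderation systems are far more sophisticated.
BLOCKED_TERMS = ["boogaloo"]

def exact_match(post: str) -> bool:
    """Naive filter: flags only literal occurrences of blocked terms."""
    text = post.lower()
    return any(term in text for term in BLOCKED_TERMS)

def fuzzy_match(post: str, threshold: float = 0.6) -> bool:
    """Flags any word that looks sufficiently similar to a blocked term."""
    for word in post.lower().split():
        for term in BLOCKED_TERMS:
            if SequenceMatcher(None, word, term).ratio() >= threshold:
                return True
    return False

# "big igloo" evades the exact filter but "igloo" is similar
# enough to "boogaloo" to trip the fuzzy one.
print(exact_match("see you at the big igloo"))  # False
print(fuzzy_match("see you at the big igloo"))  # True
```

Even this sketch shows the limits of the approach: a more distant variant like “big luau” falls below the similarity threshold, so the vocabulary only has to drift a little further to evade detection again — which is roughly the cat-and-mouse game moderators describe.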

Still, Facebook has been gradually ratcheting up enforcement. If you claim allegiance to Boogaloo and attempt to commit violence, Facebook will boot you off the platform, and the company will remove any posts that praise you. And since May, if you post about Boogaloo with statements and images depicting armed violence, Facebook will remove those.

The Boogaloo adherents arrested in connection with the Santa Cruz and Las Vegas incidents have had their Facebook accounts terminated, along with the groups they belonged to.

These are all good steps — and yet, as the Post notes, Boogaloo content calling for armed violence can still be found readily on Facebook. Partly this is an internet problem: bringing the world online makes it easier for bad people to find each other and make common cause. And partly it’s a platform problem: recommendation algorithms recruit followers for the bad guys, sometimes even after those algorithms have been tuned to prevent those introductions from being made.

Facebook told me that it is taking a fresh look at its policies related to Boogaloo and that those policies would likely evolve to include additional enforcement. By now it’s clear to me that the company is worried about the escalation of the group’s violent ideology on and off its platforms. And that’s enough for me to be worried, too.

The Ratio

Today in news that could affect public perception of the big tech platforms.

🔼 Trending up: A new report from the European Commission shows Facebook assessed 95.7 percent of hate speech notifications in less than 24 hours, compared to 81.5 percent for YouTube and 76.6 percent for Twitter. (Facebook)

🔽 Trending down: Facebook has reportedly created a fact-checking loophole for climate science deniers. Now, staffers can overrule climate science moderators and make any climate disinformation ineligible for fact-checking by deeming it “opinion.” (Popular Information)

Governing

A new bipartisan Senate bill is taking aim at the liability protections enjoyed by platforms like Facebook and YouTube. The Platform Accountability and Consumer Transparency (PACT) Act would require online platforms to reveal their content moderation practices through a range of mandatory disclosures. The bill would also create a new avenue for holding these companies responsible for hosting illegal content by making changes to Section 230 of the Communications Decency Act. Here’s Makena Kelly at The Verge:

If approved, the bill would force large tech platforms to explain how they moderate content in a way that is easily accessible to users and release quarterly reports including disaggregated statistics on what content has been removed, demonetized, or had its reach algorithmically limited. Platforms would then be required to roll out a formal complaint system for users that processes reports and explains their moderation decisions within 14 days. Users would then be allowed to appeal those moderation decisions within a company’s internal reporting systems, something that already exists on platforms like Facebook.

Politicians on both sides of the aisle seem to hate Section 230. But they have vastly different reasons for why they think the law must change. (Adi Robertson / The Verge)

Joe Biden’s presidential campaign is calling on Facebook and Twitter to remove posts from President Donald Trump about voter fraud and a rigged election. Biden’s campaign said the President’s tweets “creates the misimpression that the tens of millions of Americans who will vote by mail may have their votes drowned out by fraud.” (Sarah Mucha and Donie O’Sullivan / CNN)

Trump’s campaign is considering pushing supporters toward smaller social networks that are less likely to regulate the president’s speech than the big social networks. Since few other platforms have the reach of Facebook and Twitter, it’s unclear what this strategy would accomplish. (Emily Glazer and Michael C. Bender / The Wall Street Journal)

President Donald Trump’s crackdown on Twitter may have the unintended consequence of hampering his administration’s efforts to market some of its signature efforts, from army recruiting to anti-vaping. Trump pushed to curb government advertising on Facebook, Twitter and Google last month as part of an executive order that also targeted Section 230 protections. (Nancy Scola / Politico)

Twitter permanently suspended the pro-Trump meme-maker “CarpeDonktum” for copyright violations. The move comes after Twitter added a “manipulated media” warning to one of his videos that Trump shared last week. (Will Sommer / Daily Beast)

Twitter also suspended the account of an activist group called Distributed Denial of Secrets for violating its policy about the distribution of hacked data. Last week, @DDoSecrets published a data dump with links to documents stolen from US law enforcement agencies. (Catalin Cimpanu / ZDNet)

India’s antitrust watchdog approved Facebook’s $5.7 billion deal with Reliance Jio. The announcement comes a week after the watchdog said it was reviewing the deal for potential misuses of user data. (Manish Singh / TechCrunch)

Inside the deal between Facebook and Reliance Jio, the company that brought the internet to millions of people in India through low prices. (Wayne Ma and Juro Osawa / The Information)

More companies are joining the ad boycott against Facebook to protest the platform’s handling of misinformation and hate speech. The list now includes Eddie Bauer, Magnolia Pictures, Ben & Jerry’s, Patagonia, the North Face, and REI. No word yet from the company’s 7 million other advertisers. (Tiffany Hsu / The New York Times)

Opponents of shelter-in-place rules are organizing protests on Facebook targeting public health officials. Some of the protests are taking place in-person, outside the homes of public health experts. (Jeff Horwitz / The Wall Street Journal)

Brazil’s Central Bank and antitrust regulator suspended WhatsApp’s payment features in the country, citing antitrust concerns. It’s a setback for Facebook, which introduced WhatsApp’s payments system in Brazil earlier this month. (Mario Sergio Lima and Kurt Wagner / Bloomberg)

The Boston City Council voted unanimously to ban the city government, including police, from using facial recognition technology. The law makes it illegal for Boston officials to “obtain, retain, possess, access, or use” facial recognition technology. It’s also now illegal for the city government to enter into contracts that permit the use of facial recognition technology. (Caroline Haskins and Ryan Mac / BuzzFeed)

A man in Michigan was wrongfully arrested due to a flawed algorithmic facial recognition match. It’s the first known case of its kind and points to the terrifying implications of algorithmic bias. (Kashmir Hill / The New York Times)

The Open Technology Fund, a small US organization devoted to protecting digital speech, has helped support many prominent encryption projects, including Signal and Tor. But after a Trump appointee abruptly fired the fund’s entire leadership team, current recipients say their funding is at risk. (Russell Brandom / The Verge)

Industry

Tech leaders have often pointed to a “pipeline problem” to explain away the lack of Black hiring and promotion. But the industry’s reliance on personal relationships to grant access and opportunity might be more to blame. This piece takes a deep look at why Black and Latino workers are kept out of the tech industry. Here are Sam Dean and Johana Bhuiyan from the Los Angeles Times:

The problem is not a lack of qualified candidates, but the companies’ unwillingness to open the door, said Bari Williams, the head of legal at Human Interest, a financial services startup.

Companies are reluctant to broaden the schools they recruit from to include historically Black colleges and universities, said Williams, who advocates for diversity in Silicon Valley. “It always comes down to some semblance of seeing it as lowering the bar,” she said. Williams, who used to work at StubHub and Facebook, said she’s seen candidates get passed over because they attended an HBCU.

Many tech companies also rely heavily on referrals from current employees, a system that is not unusual in business but which can reinforce the network effects. “Who do you typically refer? People that look and act and dress and speak and do the same things that you do,” Williams said.

Google will now automatically delete location and search history by default for new users. The changes to the default data settings are a significant expansion of the company’s privacy policies. Here’s Russell Brandom at The Verge:

Google’s auto-delete feature applies to search history (on web or in-app), location history, and voice commands collected through the Google Assistant or devices like Google Home. Google logs that data in its My Activity page, where users can see what data points have been collected and manually delete specific items. Historically, Google has retained that information indefinitely, but in 2019, the company rolled out a way to automatically delete data points after three months or 18 months, depending on the chosen setting.

Starting today, those settings will be on by default for new users.

Black and brown tech workers are sharing their experiences of racism on the job. The stories come as part of a survey of 68 tech workers from the Los Angeles Times. (Johana Bhuiyan, Sam Dean and Suhauna Hussain / Los Angeles Times)

Black Amazon employees are urging Jeff Bezos to match his statements about racial justice with actions to address the “systemic pattern of racial bias” that permeate the company. (Karen Weise / The New York Times)

The Black Lives Matter movement finally pushed Instagram into politics. The shift was intentional, helped in part by civil rights groups that are using the platform to mobilize followers. It has completely taken over my own Instagram experience, in the best way — I’ve been waiting for someone to write this story. (Emily Stewart and Shirin Ghaffary / Recode)

Facebook is testing a new project called Forecast, an app for making predictions about world events like COVID-19. Users can ask questions and then use in-app points to make predictions about what might happen in the future. (Sarah Perez / TechCrunch)

Facebook CTO Mike Schroepfer spoke with Protocol about the company’s commitment to creating more diverse teams, and how it has been trying to moderate content with employees working remotely. (Issie Lapowsky / Protocol)

Digital advertising on platforms like Facebook and Google is set to overtake spending on traditional media for the first time this year. It’s a historic shift in market share, accelerated by the coronavirus pandemic. (Alex Barker / Financial Times)

Instagram is expanding its TikTok competitor known as “Reels” to new markets, following its launch in Brazil. The new app is rolling out more broadly in France and Germany. (Sarah Perez / TechCrunch)

Inside The DayLife Army, a social media cult that convinces young people to relinquish their personal property and leave their lives behind in order to build a supposedly alternate society. (Emilie Friedlander and Joy Crane / OneZero)

Things to do

Stuff to occupy you online during the quarantine.

Watch Twitch Prime’s new Crown Channel. It has reality shows, comedy, and tournaments, but mostly I just wanted you all to laugh at the organizational hierarchy described here: “Twitch is owned by Amazon, and Amazon’s Prime subscription service has a section called Twitch Prime that offers benefits for Twitch viewers. The Crown Channel comes out of that group’s Live team, which is specifically focused on making live content with an eye toward Prime users.” And you thought Google’s messaging strategy was complicated.

Talk to us

Send us tips, comments, questions, and wild Boogaloo sightings: casey@theverge.com and zoe@theverge.com.