It’s been four months now since Facebook announced its intention to invest more heavily in private groups and messaging, and recently that effort has gotten a major marketing push. Walk through the Montgomery BART station in San Francisco and you’ll see ads for Facebook Groups plastering every wall, each emblazoned with the anodyne slogan “more together.”
In years past, such a launch might have been greeted with a collective shrug from the press. (The launch of Facebook Live in 2016 also involved a takeover of Montgomery Station, and passed with little fanfare — at least until a rash of violent live streams drew the press’s attention.) But the increased focus on groups this year has come with energetic scrutiny from journalists — a sign of how even seemingly mundane Facebook launches now meet with deep skepticism around the world.
And judging from the awful groups that journalists keep discovering, that skepticism is warranted. Last week, ProPublica found a group of Border Patrol agents joking about migrant deaths and making other racist and offensive comments. (The Intercept posted an archive of the group’s awful posts.) Over the weekend, CNN found another:
At least one other social media group with an apparent nexus to Customs and Border Protection has been discovered to contain vulgar and sexually explicit posts, according to screenshots shared by two sources familiar with the Facebook pages.
The secret Facebook group, “The Real CBP Nation,” which has around 1,000 members, is host to an image that mocks separating migrant families, multiple demeaning memes of Rep. Alexandria Ocasio-Cortez, a New York Democrat, and other derisive images of Asians and African Americans.
The same day, Le Monde found a group with 56,000 members devoted largely to making misogynist comments. (I’m relying on Google Translate here, so let me know if I get this wrong, French speakers.) The group actively solicited revenge porn before Facebook shut it down, according to the report.
And just today, a Twitter user who stumbled across a Facebook TV ad investigated one of the featured groups, and found a rash of ugly posts.
All this bad behavior is worrying some observers, Elizabeth Dwoskin reports in the Washington Post:
“Large private groups remain unmoderated black boxes where users can freely threaten vulnerable populations,” said Jonathan Greenblatt, chief executive of the Anti-Defamation League. “Without any AI or human moderators, it’s easy to orchestrate harassment campaigns — at minimum, this environment contributes to the normalization of bigotry and discrimination. As Facebook moves to more and more private communication, we’re concerned about this delinquency.”
Facebook Groups offer us yet another chance to think about the difference between internet problems and platform problems. There have always been online forums where awful people congregate — that’s an internet problem. It’s plausible that, in the absence of Facebook Groups, racist Border Patrol agents would have found another place to hang out online and spout bigotry.
But Facebook’s size and recommendation algorithms change that calculation. Its size enables connections between many Border Patrol agents who may not otherwise have met. And its recommendation algorithms work to introduce them to each other — just as new moms were introduced to anti-vaccine groups through recommendations, so are Border Patrol agents introduced to groups like Real CBP Nation.
These algorithms operate opaquely, and their recommendations can rarely be predicted in advance. No one knew Facebook would recommend that new moms join anti-vax groups — its algorithm simply made the suggestion, observed that new moms acted on it, and so made it more often.
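The feedback loop described here can be sketched in a few lines of Python. This is a hypothetical illustration, not Facebook’s actual system — the group names, counts, and ranking rule are all invented for the example — but it shows how a recommender that simply ranks groups by observed acceptance rate will amplify whatever users happen to click on, with no one deciding in advance what gets promoted:

```python
def rank_suggestions(shown_counts, accept_counts, groups):
    """Rank groups by acceptance rate: the share of times a suggestion
    to join the group was shown and the user actually joined.
    Groups that get accepted more often rise in the ranking,
    so they get suggested more -- a self-reinforcing loop."""
    def acceptance_rate(group):
        shown = shown_counts.get(group, 0)
        if shown == 0:
            return 0.0
        return accept_counts.get(group, 0) / shown
    return sorted(groups, key=acceptance_rate, reverse=True)

# Simulated feedback after one round of suggestions (invented numbers):
# each group was shown 100 times; users joined at different rates.
groups = ["gardening", "anti_vax", "local_news"]
shown = {"gardening": 100, "anti_vax": 100, "local_news": 100}
accepted = {"gardening": 5, "anti_vax": 30, "local_news": 10}

print(rank_suggestions(shown, accepted, groups))
# → ['anti_vax', 'local_news', 'gardening']
```

Nothing in this loop inspects what the groups are about; the ranking is driven entirely by click behavior, which is why the outcome can surprise even the system’s designers.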
Facebook can’t solve racism or misogyny. But it can examine more closely the way it unwittingly recruits allies for racists and misogynists. That’s a platform problem through and through — and in the early days of Facebook’s pivot to privacy, it doesn’t seem to be getting much better.
Is the General Data Protection Regulation making it harder to fight crime? Natalia Drozdiak explores the issue:
The WHOIS directory, which previously displayed both technical and personal data related to registered domain names, has been redacted to scrub out names, email addresses and other personal information due to Europe’s privacy law.
“Since May 2018, we have more and more cases of investigations that are just dropped or severely delayed because we can’t have direct access to WHOIS registration data information,” said Gregory Mounier, head of outreach and internet governance at Europol’s cybercrime center. “Overall you can say that the internet has become less safe because of an overly conservative interpretation of the GDPR by the ICANN community.”
Matt Apuzzo says a European effort to fight misinformation is off to a slow start:
The European Union launched an ambitious effort earlier this year to combat election interference: an early-warning system that would sound alarms about Russian propaganda. Despite high expectations, however, records show that the system has become a repository for a mishmash of information, produced no alerts and is already at risk of becoming defunct.
Indeed, even before the European Parliament elections this spring, an inside joke was circulating in Brussels about the Rapid Alert System: It’s not rapid. There are no alerts. And there’s no system.
Chris Hamby looks into fears that the Census could be corrupted by bad actors:
The government has ambitious plans to use new digital methods to collect data. But the Census Bureau has had to scale back testing of that technology because of inadequate funding — raising the risk of problems ranging from software glitches to cyberattacks.
Also new is the threat of online disinformation campaigns reminiscent of the 2016 presidential cycle. The heated political discourse about the citizenship question has supplied ample fuel, and researchers say they are already beginning to see coordinated online efforts to undermine public trust in the census and to sow chaos and confusion.
Mark Bergen and Kurt Wagner write about Facebook’s efforts to manage public opinion and contain the spread of misinformation. (The tools invented for these purposes do not seem to have fared particularly well, and Stormchaser was apparently retired at some point last year.)
Since 2016, Facebook employees have used Stormchaser to track many viral posts, including a popular conspiracy that the company listens to users through their phone’s microphone, according to three former employees. Other topics ranged from bitter protests (the #deleteFB movement) to ludicrous jokes (that Facebook Chief Executive Officer Mark Zuckerberg is an alien), according to one former employee. In some cases, like the copy-and-paste hoax, the social network took active steps to snuff them out. Staff prepared messages debunking assertions about Facebook, then ran them in front of users who shared the content, according to documents viewed by Bloomberg News and four people familiar with the matter. They asked not to be identified discussing private initiatives.
Many companies monitor social media to learn what customers are saying about them. But Facebook’s position is unique. It owns the platform it’s watching, an advantage that may help Facebook track and reach users more effectively than other firms. And Facebook has been saddled with so many real problems recently that sometimes misinformation can stick.
The White House is not inviting social media companies to the event on Thursday where it will complain that social media companies are censoring conservatives, Oliver Darcy reports:
The White House has not extended invitations to Facebook and Twitter to attend its social media summit on Thursday, people familiar with the matter said.
The people, who spoke to CNN Business on the condition of anonymity, suggested it was not surprising. They said they believe the summit would amount to a right-wing grievance session and was not aimed at seriously discussing some of the issues facing large technology companies.
Noam Cohen profiles California’s new bot-disclosure law, which requires that bots identify themselves as such:
California’s bot-disclosure law is more than a run-of-the-mill anti-fraud rule. By attempting to regulate a technology that thrives on social networks, the state will be testing society’s resolve to get our (virtual) house in order after more than two decades of a runaway Internet. We are in new terrain, where the microtargeting of audiences on social networks, the perception of false news stories as genuine, and the bot-led amplification of some voices and drowning-out of others have combined to create angry, ill-informed online communities that are suspicious of one another and of the government.
Regulating bots should be low-hanging fruit when it comes to improving the Internet. The California law doesn’t even ban them outright but, rather, insists that they identify themselves in a manner that is “clear, conspicuous, and reasonably designed.”
Drew Harwell reports that federal agents are using state DMV databases to create a powerful new infrastructure for surveillance:
Agents with the Federal Bureau of Investigation and Immigration and Customs Enforcement have turned state driver’s license databases into a facial-recognition gold mine, scanning through millions of Americans’ photos without their knowledge or consent, newly released documents show.
Thousands of facial-recognition requests, internal documents and emails over the past five years, obtained through public-records requests by researchers with Georgetown Law’s Center on Privacy and Technology and provided to The Washington Post, reveal that federal investigators have turned state departments of motor vehicles databases into the bedrock of an unprecedented surveillance infrastructure.
And speaking of surveillance, satellites are getting really good at it, Christopher Beam reports:
Every year, commercially available satellite images are becoming sharper and taken more frequently. In 2008, there were 150 Earth observation satellites in orbit; by now there are 768. Satellite companies don’t offer 24-hour real-time surveillance, but if the hype is to be believed, they’re getting close. Privacy advocates warn that innovation in satellite imagery is outpacing the US government’s (to say nothing of the rest of the world’s) ability to regulate the technology. Unless we impose stricter limits now, they say, one day everyone from ad companies to suspicious spouses to terrorist organizations will have access to tools previously reserved for government spy agencies. Which would mean that at any given moment, anyone could be watching anyone else.
Adam Satariano profiles Germany’s top antitrust official, Andreas Mundt, who argues that world-scale data collection is anti-competitive.
The companies have strongly fought against his argument. But it is gaining traction in antitrust circles, as Mr. Mundt, who has led Germany’s antitrust agency for almost a decade, urges officials in other nations to make the same point.
After the Facebook ruling, Mr. Mundt received calls from regulators and lawyers around the world to discuss the idea. He helped organize a meeting of fellow antitrust officials in Colombia, where they spent four days discussing tech regulation. Joseph Simons, chairman of the Federal Trade Commission, and Makan Delrahim, head of the Justice Department antitrust division, were among those attending.
Just as irony has been essential in the rise of right-wing extremism online, it’s proving useful to the revival of old conspiracy theories, Amanda Hess reports:
The internet’s biggest stars are using irony and nonchalance to refurbish old conspiracies for new audiences, recycling them into new forms that help them persist in the cultural imagination. Along the way, these vloggers are unlocking a new, casual mode of experiencing paranoia. They are mutating our relationship to belief itself: It is less about having convictions than it is about having fun.
Once-hot HQ Trivia appears to be entering its last days. Josh Constine reports:
Downloads per month are down 92% versus last June according to Sensor Tower. And now four sources confirm that HQ laid off staff members this week. One said about 20% of staff was let go, and another said six to seven employees were departing. That aligns with Digiday reporter Kerry Flynn’s tweet that 7 employees were let go, bringing HQ to fewer than 30 (shrinking from 35 to 28 staffers would be a 20% drop).
That will leave the company short-handed as it attempts to diversify revenue with the upcoming launch of monthly subscriptions.
Lucinda Southern reports on the Wall Street Journal’s effort to identify synthetic media:
To combat the growing threat of spreading misinformation ahead of the U.S. 2020 general election, The Wall Street Journal has formed a committee to help reporters navigate fake content.
Last September, the publisher assigned 21 of its staff from across its newsroom to form the committee. Each of them is on-call to answer reporters’ queries about whether a piece of content has been manipulated. The publisher has issued criteria to committee members which help them determine whether the content is fake or not. After each query from a reporter, members write up a report with details of what they learned.
The creator of a deepfake-making app called DeepNude took it offline, but not before the code spread all over the internet, James Vincent reports:
The Verge was able to find links that ostensibly offer downloads of DeepNude in a variety of places, including Telegram channels, message boards like 4chan, YouTube video descriptions, and even on the Microsoft-owned code repository GitHub.
The report from Motherboard found that the app was being sold on a Discord server (now removed) for $20. The anonymous sellers said they had improved the stability of the software, which was prone to crashing, and removed a feature that added watermarks to the fake images (supposedly to stop them from being used maliciously).
Oh no, I’m a Twitter celebrity! Here’s Jon Porter:
The UK’s Advertising Standards Authority has ruled that 30,000 is the magic number of followers that makes you a celebrity. The decision means that if you have such a following then you have to obey the same advertising rules as traditional celebrities like David Beckham or Stephen Fry, particularly when it comes to product endorsements.
The regulator came to the decision after an Instagram user with 32,000 followers, ThisMamaLife, posted an ad for Phenergan Night Time sleeping tablets. Although they disclosed that the post was an ad at the beginning of its description, the ASA ruled that their follower count made them a celebrity, and thus barred them from drug endorsements in the UK.
Julia Alexander reports on the phenomenon of scandal-prone influencers posting their apologies to secondary and tertiary accounts in an effort to minimize the attention they get:
By using their main channels to post apologies, those creators confront their issues head-on and show a willingness to accept responsibility for whatever happened. But other creators may not want their core fans to see them apologize. Posting on alternate platforms allows creators like Paul and Beahm to acknowledge an issue and say they’ve addressed it while largely sweeping things under the rug.
Here’s a disturbing and tragic story from Kim Suarez about a YouTube engineer:
Betai Koffi, a software engineer at YouTube and a San Francisco resident, perhaps consumed more LSD than he should have while on vacation in Bodega Bay — downing an extra couple hits, according to friends, after initially freaking out on his first two. And he is now charged with multiple counts of attempted murder, and suffered a life-threatening gunshot wound from police.
The 4th of July holiday brought 32-year-old Koffi and five of his friends to Bodega Bay to rent a house on the beach and enjoy the long weekend. As the Press Democrat reports, it wasn’t until around 8 p.m. Thursday when things got out of control. Around that time, Koffi had consumed two additional hits of LSD, after already appearing delusional after consuming two hits earlier in the day.
Sarah Manavis writes about the monks becoming famous on social media apps:
Searches for posts with the hashtags #monklife and #monklifestyle on Instagram yield over 20,000 results in all. The hashtag #monk has been used nearly 800,000 times. And while accounts such as Shayamal’s may not be at the level of social media’s most popular personalities, others have accrued hundreds of thousands of followers. A few have made it into the millions.
Of these, the most prominent monkfluencer is Jay Shetty, a former monk whose stated aim is to “make wisdom go viral”. Shetty’s Facebook page is one of the most popular on the platform, racking up over 24 million followers. His videos have been watched more than a billion times. His other channels, YouTube and Instagram, have an impressive 2.5 million and 3.9 million followers respectively, and in 2018, he had the single most-watched Facebook video of the entire year with over 363 million views.
And speaking of religious influencers, Mormons are doing great business on YouTube lately, Jordan Julian reports:
Of all of the girls, Marla Henry seems to be the most openly religious. Her channel, which she runs with her older sister, has racked up 1.4 million subscribers and in the description she provides a link to the LDS website. The channel is fashion-focused, a sort of virtual guidebook in stylish modest dressing.
In 2017, Allure published a story about the disproportionate number of popular Mormon beauty bloggers. Back in 2011, a Salon essay titled “Why I can’t stop reading Mormon housewife blogs” sought to understand why it seemed like so many bloggers were Mormon, peppered with witticisms about their homes that “look like Anthropologie catalogs” and “elaborate astronaut-themed birthday parties for their kids.” There are Reddit threads devoted to answering the same question. The most direct connection between Mormonism and blogging seems to be the longstanding value in the church of journaling and keeping written records.
Ashley Carman reports on Instagram’s latest anti-bullying initiatives. (Elsewhere, Adam Mosseri talks to Time about the launch.)
Instagram’s next big fix for online bullying is coming in the form of artificial intelligence-flagged comments and the ability for users to restrict accounts from publicly commenting on their posts.
The team is launching a test soon that’ll give users the power to essentially “shadow ban” a user from their account, meaning the account holder can “restrict” another user, which makes their comments visible only to themselves. It also hides when the account holder is active on Instagram or whether they’ve read a direct message.
Here is a user-generated map of every known US Customs & Border Protection facility in the southern border states:
It ranks each according to the potential for abusive conditions at the location. This map is intended to be a tool for politicians, journalists, doctors, and activists interested in inspecting these facilities. The locations in dark purple are confirmed concentration camps where significant human rights abuses have been documented. The red markers are locations which have a very high likelihood of having conditions which would designate them as concentration camps. Yellow items are facilities which have a significant chance of being concentration camps.
John Herrman laments the phenomenon of “half-deleting” an app — introducing ever more hurdles to using an app, while feeling bad about it all the while:
About two years ago, I turned off most of my phone’s notifications, including Twitter’s. I still checked. I started uninstalling the app on weekends, then keeping it uninstalled until I needed to post something for work. (The idea that Twitter is necessary or even helpful for work is, probably, the self-deception at the core of this problem.)
I constantly backslid. I started logging out after using the app and gave myself passwords I could never remember; eventually they ended up in my password manager. I started using Twitter’s mobile site, which I assumed would feel more deliberate, or worse, but that process disappeared into a subconscious routine as well.
Andrew Przybylski and Amy Orben say there is little evidence to support the idea that lots of screen time is bad for children:
Each year, teens and preteens rated their social media use and told us how satisfied they were with aspects of their life. We were interested in testing both whether changes in social media use over time actually preceded shifts in life satisfaction and whether such changes influenced subsequent social media use. In simple terms, are you more likely to “use” if you’re happy or sad?
What did we find? Well, mostly nothing! In more than half of the thousands of statistical models we tested, we found nothing more than random statistical noise. In the remainder, we did find some small trends over time – these were mostly clustered in data provided by teenage girls. Decreases in satisfaction with school, family, appearance and friends presaged increased social media use, and increases in social media use preceded decreases in satisfaction with school, family, and friends. You can see then how, if you were determined to extract a story, you could cook up one about teenage girls and unhappiness.
And finally ...
Talk to me
Send me tips, comments, questions, and your favorite Facebook groups: email@example.com.