Who shares fake news? Until today, conventional wisdom held that posting misinformation to Facebook and other platforms was largely a byproduct of ideology. The more conservative you were in 2016, the argument goes, the more likely you were to share hoaxes.
But a fascinating new paper published today in the journal Science Advances suggests that something else is afoot. I wrote about it today at The Verge:
11 percent of users older than 65 shared a hoax, while just 3 percent of users 18 to 29 did. Facebook users ages 65 and older shared more than twice as many fake news articles as the next-oldest age group of 45 to 65, and nearly seven times as many fake news articles as the youngest age group (18 to 29).
“When we bring up the age finding, a lot of people say, ‘oh yeah, that’s obvious,’” co-author Andrew Guess, a political scientist at Princeton University, told The Verge. “For me, what is pretty striking is that the relationship holds even when you control for party affiliation or ideology. The fact that it’s independent of these other traits is pretty surprising to me. It’s not just being driven by older people being more conservative.”
Why do older users share fake news more often? There are two competing theories, for which we still lack good evidence:
The first is that older people, who came to the internet later, lack the digital literacy skills of their younger counterparts. The second is that people experience cognitive decline as they age, making them likelier to fall for hoaxes.
Regardless of age, the digital literacy gap has previously been blamed for users’ willingness to share hoaxes. Last year, WhatsApp began developing a program to promote digital literacy in India — where many of its 200 million users are relatively new to the internet — after a series of murders that may have been prompted by viral forwarding in the app. That program is aimed at users of all ages.
At the same time, elderly Americans are prone to falling for so many scams that the Federal Bureau of Investigation has a page devoted to them. It seems likely that a multi-pronged approach to reducing the spread of fake news will be more effective than trying to solve for only one variable.
I’ll resist the temptation to quote my entire article, and instead ask you once again to read the full thing here.
Two more notes about this study, drawn from my conversation with a researcher who did not work on it: Matthew Gentzkow, who has studied Facebook’s efforts to slow the spread of fake news.
First: Gentzkow, a senior fellow at the Stanford Institute for Economic Policy Research, noted that this paper is unusual for recording actual Facebook user behavior, rather than self-reported survey data. Researchers were able to do this because — with users’ consent — they scraped user timeline posts to see which links they had actually shared. It’s one reason why the paper’s findings are more compelling than previous work on the subject.
Second: who exactly were these Facebook-obsessed seniors? Gentzkow noted that despite Facebook’s near-total penetration of the North American market, in 2016 it was still somewhat unusual to see hyperactive social media use among septuagenarians.
“What’s also true, very, very strongly true, is the likelihood of using social media declines with age,” Gentzkow told me. “Elderly people use social media at low rates. Who is the 70-year-old who is spending a lot of time on Facebook in October of 2016? That’s not your typical 70-year-old.” Gentzkow speculated that older Facebook users might have a disproportionately strong interest in partisan politics.
In any case, the study has some good news for Facebook and democracy. As researcher Andrew Guess explained to me, the narrower an explanation you can find for a problem, the easier it is to design an effective solution. Assuming that future studies prove out the idea that sharing fake news is primarily a consequence of old age — or the digital illiteracy that old age effectively serves as a proxy for — then we have a good starting point for a fix.
A day after Motherboard’s superb investigation into telecoms’ sale of our real-time location data, several senators called for an investigation:
“The American people have an absolute right to the privacy of their data, which is why I’m extraordinarily troubled by reports of this system of repackaging and reselling location data to unregulated third party services for potentially nefarious purposes. If true, this practice represents a legitimate threat to our personal and national security,” Senator Kamala Harris told Motherboard in a statement.
Remember Google-bombing? Catalin Cimpanu reports on its latest evolution, which exploits a sharing feature inside Google to make incorrect “knowledge panels” show up alongside search results:
While sharing search result page URLs for queries like “Who invented sliced bread” with an incorrect knowledge panel passes as an innocent prank, sharing malformed URLs for search queries like “Who’s responsible for 9/11” and highlighting results like Judaism can have serious consequences in today’s complicated political climate. Just imagine the damage you can do with manipulated Google URLs like these [1, 2, 3].
Link sharing is an important part of today’s web and the way in which Google appears to have structured its URL parameters allows threat actors a way to essentially edit search results, which is a dangerous issue.
Last May, I wrote about a new law moving through the Vietnamese legislature that among other things would require big tech companies like Facebook to store data locally and open an office in the country. That law took effect January 1st, and the country has already moved to put pressure on Facebook, Jon Russell reports.
This story is important because it shows that one outcome of regulating social networks can be the suppression of political dissent — which Vietnam has been rather transparent about.
The U.S. social network stands accused of allowing users in Vietnam to post “slanderous content, anti-government sentiment and libel and defamation of individuals, organisations and state agencies,” according to a report from state-controlled media Vietnam News. The content is said to have been flagged to Facebook which, reports say, has “delayed removing” it.
That violates the law which — passed last June — broadly forbids internet users from organizing with, or training, others for anti-state purposes, spreading false information, and undermining the nation state’s achievements or solidarity, according to reports at the time. It also requires foreign internet companies to operate a local office and store user information on Vietnamese soil. That’s something neither Google nor Facebook has complied with, despite the Vietnamese government’s recent claim that the former is investigating a local office launch.
A bizarre truth about internet protocol mapping — the technology that lets websites and others track devices to roughly where they are logging on to the internet — is that the science is so inexact that companies will sometimes map millions of addresses onto individual locations. Often these locations are residential addresses, wreaking havoc on the residents’ lives. Kashmir Hill, the foremost documentarian of this phenomenon, has a wild new story about a couple in South Africa:
The downside of this delay is that John and Ann continued to get visits over the last couple of years, as recently as last month when police showed up looking for a kidnapping victim. The upside is that John sought other help. He saw on Facebook that a classmate of his from Pretoria Boys High, the same English-speaking high school Elon Musk attended, was a computer science lecturer at the University of Pretoria. John sent him a message.
“I’m not the guru, this guy is,” the classmate responded, sending John contact information for Martin Olivier, a professor at the University. Within three days of John contacting him, Olivier discovered that MaxMind didn’t choose to put the target on John and Ann’s home on its own. It got help from the U.S. military.
Two notes here. One, Bezos announced the split on Twitter — a fairly amazing endorsement of that platform’s cultural relevance in 2019. (Think of all the other ways he could have done this!) Two, MacKenzie Bezos is going to have a lot of room to make tech and media investment. The world will be watching closely.
Kurt Wagner reports on a frankly bizarre-sounding deal between Twitter and basketball executives:
Twitter doesn’t stream NBA games, but soon it will stream parts of NBA games — just not the parts you usually watch on TV from the NBA’s traditional broadcast partners like Turner and ESPN.
Instead of streaming a full game with all the players, graphics, and announcers, starting in February Twitter will stream the second half of some NBA games — yes, only the second half — but the camera will focus on a single player.
Every large platform encourages the creation of a particular kind of content, whether it’s Google (“what time is the Super Bowl”), Facebook (“Pope endorses Donald Trump”), or YouTube (“Watch Peppa Pig DESTROY a feminist”). Mark Di Stefano writes about the continuing influence of another large platform — the Drudge Report — and how it has affected one Rupert Murdoch-owned tabloid:
The Sun has been targeting important parts of its news agenda to court the attention of right-wing aggregator the Drudge Report, with some staff concerned that it is beginning to change the way the British outlet presents stories on Drudge’s favourite UK targets, such as London mayor Sadiq Khan.
According to emails seen by BuzzFeed News and interviews with Sun staff, the Sun’s website editors have become increasingly insistent on creating what they have internally referred to as “Drudge-bait” in an effort to secure huge amounts of traffic from the aggregator.
Pranav Dixit examines the phenomenon of women in India joining Facebook under assumed names to escape the watchful, judgmental eyes of their communities:
For women living in these parts of the country, using social networks like Facebook comes with real risks of being socially outcast. While Facebook may have an image problem in most parts of the world for handling data carelessly, spreading fake news, and inciting violence and genocide, male leaders in these parts of India dislike it for an entirely different reason: It gives young women a platform to post pictures, put themselves out there, and meet young men.
Across rural India, young women are accessing Facebook under false identities, using the names of Bollywood actors or other made-up monikers, and sometimes even posing as men — violating Facebook’s policy against “pretending to be anything or anyone” — as they seek a place in modern digital life. (Facebook declined to comment on such apparent violations.) Their discretion doesn’t stem from an everyday eye for privacy but from a fear of the harsh social consequences of being outed as a woman who uses Facebook.
Some fairly significant changes are coming to Twitter in the next few weeks as part of a test, Julia Alexander reports. The company tells me I’ll get to be part of it, so stay tuned for impressions!
A green bubble will appear beside beta users’ names when they’re actively online and using the app, similar to Instagram’s status indicator, according to new screenshots released by Twitter and first reported by Engadget. Twitter hopes that by seeing someone is online, you’ll be more likely to respond to their tweets and start a conversation.
The other feature Twitter is introducing is “ice breaker” tweets, which are supposed to help start a conversation about a specific topic. Users will be able to post their own ice breakers for others to respond to; screenshots provided by Twitter show weekly television series like Scandal and events like CES as an example of how “ice breaker” tweets will appear.
I’m not quite sure that being able to post a single photo to multiple accounts simultaneously counts as a “regram,” as Josh Constine suggests here, but perhaps it’s a step in that direction. And I agree with him here on the risk of these efforts, which are invariably greenlighted based on their potential to boost sharing and screen time:
Simplifying publishing sounds obviously better, but it could also dilute the quality of Instagram. Luckily, the feed’s algorithm can simply demote generic content that doesn’t resonate with people. But if the feed becomes full of stale cross-posted promotional spam, it could send younger users fleeing toward the next generation of social apps trying to spice it up.
Sometimes a big tech platform will put something in front of your face constantly in hopes that it will become a hit by sheer statistical odds. Facebook Marketplace comes to mind, as does Snapchat Discover. It works less often than you might think — but as MG Siegler notes here, Netflix seems to have pulled this off perfectly with Bird Box:
In my case, I watched Bird Box because so many people were talking about it on Twitter. It was word-of-mouth, but not necessarily in the normal sense — again, the movie is decent, but not great and the chatter I saw was largely meme-based! — and yet there it was, staring me in the face when I opened Netflix. It was basically begging me to click-to-watch. And so I did. And so this shows yet again why Netflix is lightyears ahead of traditional Hollywood.
This isn’t the first attempt Netflix has had in this arena. During last year’s Super Bowl, they attempted to do the same basic thing with a commercial for The Cloverfield Paradox. It was clever because unlike so many other movie trailers shown during the event, the kicker was that you could watch the movie right now. Again, all of the friction had been compressed to time.
But ultimately, Netflix likely figured out two things from that experiment. First, you don’t even need to spend tens of millions of dollars to promote this type of movie. In fact, a simple in-app promotion likely reaches as many people as a Super Bowl ad — and it undoubtedly leads to much higher conversion! Second, for the virality to truly work, the movie has to be at least halfway decent. Cloverfield Paradox was not. Bird Box is.
And finally ...
The internet is positively littered with fake puppies, and you don’t have to be elderly to fall for one of these scams. Here’s dogged reporter Jane Lytvynenko:
There are a few ways to determine whether a website is legitimate, like doing a reverse image search on the pets or googling a phrase that doesn’t mention the breed of the animal to see if other sites are using the exact same language. But the most surefire way to prevent being scammed is to meet the future pet in person, Baker said. That’s what Dowden did when she tried to adopt again after losing her money. She said being scammed took an emotional toll on her, and she was embarrassed to tell her husband what happened.
To make sure an internet dog is real, it seems, you have to get off the internet.
Talk to me
Send me tips, comments, questions, and links likely to confuse the elderly: email@example.com