
Amazon’s sales platform may be too big to police



Recent investigations find that bad actors are running amok


Illustration by Alex Castro / The Verge

When a tech platform grows beyond a certain size, a now-familiar phenomenon begins to unfold. The creators no longer have a view of — or control over — all the day-to-day activity on the platform, allowing bad actors to manipulate it to their own ends.

On Twitter, this led to unchecked harassment and abuse, much of it focused on women and minorities. On Facebook, it led to Cambridge Analytica and Russian interference in the 2016 election. On YouTube, it led to a surge in extremist content, boosted by algorithmic recommendations.

Last week, thanks to a terrific investigation in the Wall Street Journal, we saw how size has blinded Amazon to a host of dangerous products that third-party sellers have made available. Alexandra Berzon, Shane Shifflett and Justin Scheck found 4,152 items for sale on Amazon that have been declared unsafe or banned by federal regulators, or have deceptive labels. This included items that Amazon has said it bans — and dozens of them were labeled “Amazon’s Choice,” which is based on an item’s ratings, pricing, and shipping time, but likely strikes many Amazon customers as a seal of approval.

The report’s findings include:

• 116 products were falsely listed as “FDA-approved” including four toys—the agency doesn’t approve toys—and 98 eyelash-growth serums that never undertook the drug-approval process to be marketed as approved. [...]

• 80 listings matched the description of infant sleeping wedges the FDA has warned can cause suffocation and Amazon has said it banned.

• 52 listings were marketed as supplements with brand names the FDA and Justice Department have identified as containing illegally imported prescription drugs. [...]

• The Journal analyzed 3,644 toy listings for federally required choking-hazard warnings. Regulators don’t provide databases of toys requiring the warning, so the Journal compared the Amazon listings with the same toys on Target.com and found that 2,324, or 64%, of the Amazon listings lacked the warnings found on the Target listings.

Why are so many bad products on the platform? According to a former employee interviewed by the Journal, Amazon’s desire to sell as many products as possible has historically been prioritized over safety. In perhaps the most striking anecdote in the report, a former employee says the company actively avoided testing some products for lead content:

At one point in 2013, some Amazon employees began scanning randomly selected third-party products in Amazon warehouses for lead content, say people familiar with the tests. Around 10% of the products tested failed, one says. The failed products were purged, but higher-level employees decided not to expand the testing, fearing it would be unmanageable if applied to the entire marketplace, the people familiar with the tests say. Amazon declined to comment on the episode.

“Amazon will always default to allowing more stuff to be available to the customer,” says Ms. Greer, a former Amazon employee quoted in the Journal’s report.

Amazon posted a huffy non-response to the Journal’s story, patting itself on the back for spending $400 million on safety programs in 2018 and calling its compliance programs “industry leading.”

That wasn’t enough for a group of Democratic senators, who wrote a letter to the company today calling for an investigation. (Something to add to the “burn book,” perhaps.) The same trio of Journal reporters reports:

In their letter, the senators said: “Unquestionably, Amazon is falling short of its commitment to keeping safe those consumers who use its massive platform ... We believe it is essential for consumers to fully understand the safety of products they bring into their homes.”

And as my colleague Josh Dzieza reported today, Amazon can struggle to protect even its own products from being hijacked by scammers. He writes at The Verge:

Amazon’s marketplace is so chaotic that not even Amazon itself is safe from getting hijacked. In addition to being a retail platform, Amazon sells its own house-brand goods under names like AmazonBasics, Rivet furniture, Happy Belly food, and hundreds of other labels. Sellers often complain that these brands represent unfair competition, and regulators in Europe and the United States have taken an interest in the matter. But other sellers appear to have found a way to use Amazon’s brands for their own ends. Amazon promotes them heavily, racking up thousands of reviews on listings that the company then abandons when it stops production or comes out with a new version. Enterprising sellers then hijack these pages to hawk their own wares.

Take this listing, formerly for an AmazonBasics HDMI cable. Amazon removed it and other listings after being contacted by The Verge, but before it was taken down, it was being used to sell two completely different alarm clocks: a “Warmhoming 2019 Updated Wooden Digital Alarm Clock with 7 Levels Adjustable Brightness, Display Time Date Week Temperature for Bedroom Office Home,” and a white wake-up light clock, which was out of stock. Strangely, that clock was listed as a second variety, color “Blackadaafgew,” yet the listing’s copy referred to binoculars that “can help you see a clear face from more than 650 feet away.” Many of the Amazon listings appear to undergo multiple hijackings.

The takeaway here is that some reviews on Amazon branded products don’t even refer to the actual product being sold. It’s an effect of the company allowing sellers to edit listings — a move intended to improve their accuracy, but one that in practice can result in deception.

To date, there has been little fallout from Amazon’s lapses in platform integrity. The Journal recounts some serious issues — including a man who died after purchasing a faulty helmet — but so far we haven’t seen the kind of large-scale abuse that could trigger outrage on the scale of Cambridge Analytica or YouTube radicalization.

Still, I’m struck by other ways in which Amazon’s experience mirrors Facebook’s. First, in both cases the top priority was growth, and lax security proved to be a powerful accelerant to that growth. Second, the initial response to journalists’ reports was to dismiss them. Third, those reports piqued the attention of lawmakers, raising the prospect of more serious intervention.

Amazon’s institutional tone-deafness makes me wonder whether the company is ready for what’s next. I suspect reporters will be paying more attention to Amazon’s platform problems over the next year, rather than less — and at this point it’s not at all clear that the company could guess what they might find.


On Tuesday I wrote about a study that attempted to demonstrate a “radicalization pipeline” on YouTube. The authors tracked commenters over time and showed that a significant group of them migrated from garden-variety conservatism to far-right channels over time. YouTube got in touch to say it doesn’t find the results very persuasive.

Among other things, the company questions the methodology by which the researchers created the three groupings of channels that constitute the study’s view of radicalization. The methodology also relied on a desktop recommendation feature that is no longer in use, YouTube said.

“While we welcome external research, this study doesn’t reflect changes as a result of our hate speech policy and recommendations updates and we strongly disagree with the methodology, data and, most importantly, the conclusions made in this new research,” a spokesman told me.

The company also cited research from Pew that indicated YouTube tends to push users to more popular content over time, rather than more partisan content. Of course, more partisan content is often quite popular.

The research I wrote about this week wasn’t the first suggestion that YouTube can push viewers to extremes. But other writing on the subject has largely relied on anecdotes. Here’s a case where I’d love to see more research, and soon. Let me know if you’ve seen any.


YouTube says an algorithm change that halved recommendations of false and extremist videos in a US trial is now being tested in the United Kingdom. The recommendations at issue are for what the company calls “borderline” content — videos that tiptoe up to breaking the rules without going over the line. Alex Hern reports in The Guardian:

YouTube is experimenting with an algorithm change to reduce the spread of what it calls “borderline content” in the UK, after a similar trial in the US resulted in a substantial drop in views.

According to the video sharing site’s chief executive, Susan Wojcicki, the move is intended to give quality content “more of a chance to shine” and has the effect of reducing views from recommendations by 50%.

Amazon-owned Ring has partnered with 400 police forces to build a surveillance network out of homeowners’ doorbell cameras. (Drew Harwell / Washington Post)

The Knight First Amendment Center at Columbia University has asked Rep. Alexandria Ocasio-Cortez (D-NY) to unblock her critics on Twitter. (Adi Robertson / The Verge)

Valve is the creator of Steam, the world’s largest video game distribution platform. The European Union has hit Valve with antitrust charges, which the company plans to fight. (Foo Yun Chee / Reuters)


YouTube reinstated a prominent European white nationalist after he appealed his removal. “White nationalist activist Martin Sellner and British YouTuber the Iconoclast” are back on the platform, Mark Di Stefano reports at BuzzFeed. It looks like a bad mistake, and YouTube has said almost nothing about why it took the accounts down — or why it restored them.

Martin Sellner is the face of the pan-European Generation Identity movement, which has been staging far-right, anti-immigration stunts for several years. Last year, he was one of three far-right activists banned from entering Britain because authorities deemed their presence “not conducive to the public good”.

His link with the suspected gunman in the Christchurch mosque shootings has also been under the spotlight recently. Before the massacre, Sellner had repeated contact with him and reportedly sent him a link to his YouTube channel. The suspect allegedly replied, “fantastisch”. According to the Guardian, Sellner has also used YouTube to upload German-language videos about the police investigation into his links with the accused shooter.

Speaking of YouTube, Infowars sneaked back onto the platform — and falsely reported that its ban had been lifted — before YouTube removed it again. (Matthew Gault / Vice)

In better news for YouTube, A-list celebrities like Will Smith and Jennifer Lopez are getting into vlogging. (Sophie Kleeman / Vice)

The average user now spends around 45 minutes a day on TikTok — more time than they spend on Facebook. (Ryan Holmes / Fast Company)

Michael Antonov, a co-founder of Facebook-owned Oculus, has been accused of groping a woman during the 2016 Game Developers Conference. He is no longer with the company.

Facebook is rolling out appointment booking, lead generation, and other business tools that it announced at F8 this year. (Sarah Perez / TechCrunch)

Facebook is testing an AI assistant for Minecraft that can help you stack blocks and do other tasks. (MIT Technology Review)

Facebook’s privacy cafe pop-up has arrived in London. (Mike Murphy / Quartz)

The Journalism Trust Initiative is an effort to develop industry-wide reporting standards. It’s now accepting feedback on its draft standards.

And finally ...

Teens Are Using Instagram To Cast Each Other In Fake Broadway Shows.

As someone who once harbored modest dreams of becoming a theater sensation, I was heartened to learn that today’s teens are giving one another a taste of this experience using Instagram. This is wholesome as hell:

Shane and Lila are just two of the self-appointed “casting directors” in this small yet booming Instagram community, made up mostly of other young teens. A search by BuzzFeed News uncovered more than 800 accounts using the hashtag #fakecasting so far in 2019. To cast a show, they will post on their accounts what Broadway musical is next in their “season.” To audition, followers will DM a video of themselves singing a song from the show. When the deadline comes, the casting director will post the cast list in a new Instagram post.

But that’s where it ends — there’s never an actual production. The people auditioning are just doing it for the love of the game...and the bragging rights of scoring a lead role, of course. It’s the virtual equivalent of the rush they get seeing their name on the cast list on the school bulletin board.

This is adorable and we’re ending today on this note!

Talk to me

Send me tips, comments, questions, and your advice for Amazon’s platform: