
Alexa told a child to do potentially lethal ‘challenge’

Amazon confirmed the incident and says it has fixed the “error” that pulled up a dangerous prompt.

Photo by Chris Welch / The Verge

Amazon’s Alexa told a child to touch a penny to the exposed prongs of a phone charger plugged into the wall, according to one parent who posted screenshots of their Alexa activity history showing the interaction (via Bleeping Computer). The device seemingly pulled the idea for the challenge from an article describing it as dangerous, citing news reports about an alleged challenge trending on TikTok.

According to Kristin Livdahl’s screenshot, the Echo responded to “tell me a challenge to do” with “Here’s something I found on the web. According to ourcommunitynow.com: The challenge is simple: plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.”

In a statement to the BBC, Amazon confirmed Alexa’s behavior, saying, “As soon as we became aware of this error, we took swift action to fix it.” Livdahl tweeted yesterday that asking for a challenge was no longer working. Testing the prompt ourselves today, we weren’t able to reproduce the result; asking Alexa for a challenge from the web either prompted no response or pulled from a very limited pool of suggestions.

Alexa’s response to requests for a challenge on December 28th, after Amazon fixed this particular answer.
Image: The Verge

Amazon isn’t the only company to run into issues trying to parse the web for content. In October, a user reported that Google displayed potentially dangerous advice in one of its featured snippets when searching “had a seizure now what” — the snippet pulled from the section of a webpage describing what not to do when someone is having a seizure. At the time, The Verge confirmed the user’s report, but the issue appears to have been fixed based on our tests today (no snippet appears when Googling “had a seizure now what”).

Users have reported other similar problems, though, including one user who said Google gave results for orthostatic hypotension when searching for orthostatic hypertension, and another who posted a screenshot of Google displaying terrible advice for consoling someone who’s grieving.

We’ve also seen warnings about dangerous behavior amplified until the problem looked bigger than it originally was — earlier this month, some US school districts closed after self-perpetuating reports about shooting threats being made on TikTok. It turned out that the social media firestorm was driven overwhelmingly by people talking about the threats rather than by any threats that may have actually existed. In the case of Alexa, an algorithm picked out the descriptive part of a warning and amplified it without the original context. While the parent was there to immediately intervene, it’s easy to imagine a situation where that isn’t the case or where the answer shared by Alexa isn’t so obviously dangerous.

Livdahl tweeted that she used the opportunity to “go through internet safety and not trusting things you read without research and verification” with her child.

Amazon didn’t immediately reply to The Verge’s request for comment.

Update December 28th, 4:37PM ET: Added additional information on Alexa’s response to the same query now.