
MIT has a new tool to combat online harassment: your friends


Image: MIT CSAIL

In light of Facebook, Twitter, Discord, and other social media platforms’ struggles to combat online harassment, MIT’s Computer Science and Artificial Intelligence Laboratory has developed a new tool that might help. It proposes that instead of relying on platform moderators, people rely on their friends.

The tool is called Squadbox, and it “friend-sources” moderation: users recruit friends as moderators to filter messages and support them when they’re harassed online. A blogger who wants a public email address for receiving tips, but doesn’t want hate mail from strangers, could set up a Squadbox account and enlist two of her co-workers as moderators, the MIT researchers suggest.

Her “squad” of two divvies up the work and keeps her inbox clean. The moderators also have tools at their disposal: whitelists for pre-approved senders whose mail goes straight through, and blacklists for senders whose mail is automatically rejected. Squadbox also scores each message’s toxicity to help moderators vet emails quickly. “This line of work helps provide a map for one hybrid solution to harassment that augments human support with tools in a meaningful way,” said Clifford Lampe, a professor of information at the University of Michigan, in a press release. Squadbox currently only works with email, but the team behind it hopes to eventually expand it to other social media platforms.
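To make that workflow concrete, here is a minimal, hypothetical sketch of the kind of triage logic the article describes: blacklisted senders are rejected, whitelisted senders go straight through, and everything else waits for a moderator along with a toxicity score. The names (`Message`, `triage`, `score_toxicity`) and the crude word-counting scorer are illustrative assumptions, not Squadbox’s actual code or whatever classifier the tool really uses.

```python
# Hypothetical sketch of the whitelist/blacklist/toxicity triage described above.
# All names here are illustrative, not Squadbox's real API.
from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    subject: str
    body: str


def score_toxicity(body: str) -> float:
    """Placeholder for a toxicity classifier; returns a score in [0, 1]."""
    hostile_words = {"idiot", "hate", "stupid"}
    words = body.lower().split()
    return min(1.0, 5 * sum(w in hostile_words for w in words) / max(len(words), 1))


def triage(msg: Message, whitelist: set[str], blacklist: set[str]) -> str:
    """Decide what happens to an incoming message."""
    if msg.sender in blacklist:
        return "reject"      # automatically dropped
    if msg.sender in whitelist:
        return "deliver"     # pre-approved sender goes straight to the inbox
    # Everything else sits in the moderation queue, tagged with a toxicity
    # score so a "squad" member can vet it quickly.
    return f"hold-for-moderator (toxicity={score_toxicity(msg.body):.2f})"


if __name__ == "__main__":
    whitelist = {"editor@example.com"}
    blacklist = {"troll@example.com"}
    tip = Message("reader@example.com", "Story tip", "You might want to look into this.")
    print(triage(tip, whitelist, blacklist))  # hold-for-moderator (toxicity=0.00)
```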

Moderators reading harassment could potentially face “psychological risks”

While the service might help Squadbox account holders, it’s far from a perfect solution. MIT noted that “the use of friends as moderators simplifies issues surrounding privacy and personalization but also presents challenges for relationship maintenance.” In the study, email account owners also felt guilty about leaning on their friends for so much help and grew reluctant to ask for more favors.

More importantly, moderating put stress on the volunteers. “Moderating is a lot of work,” one anonymous moderator said in the study. The study also mentioned that moderators reading harassment could potentially face “psychological risks,” which raises questions about how feasible a solution this is.

The MIT study also found that moderators eventually got tired of checking other people’s email and grew slower to respond. And even though most messages ended up in the trash as unwanted mail, few email addresses were blacklisted, likely because completely blacklisting a contact felt like too extreme a measure. The study concluded that fighting online harassment will take a lot more fine-tuning.