At the UN Generation Equality Forum in Paris on Thursday, Twitter, TikTok, Google, and Facebook committed to tackling online abuse and improving safety for women on their platforms. The pledge followed consultations with the World Wide Web Foundation (WWWF) over the past year aimed at examining online gender-based violence and abuse.
The WWWF said the consultations showed women want more control over who could reply or comment on their social media posts and more choice around what they see online, where, and when.
According to the WWWF, the companies have pledged to “build better ways for women to curate their safety online” by offering more granular settings, such as who can see, share, or comment on posts; simpler and more accessible language; easier navigation and access to safety tools; and by “reducing the burden on women by proactively reducing the amount of abuse they see.”
The way that last part is worded is a bit frustrating; it addresses the aftermath or the location of abuse, but not the person or people committing it. And just because women aren’t seeing the abuse on social media doesn’t mean the abuse has gone away. The platforms certainly bear some responsibility to make their online spaces safer, but until they get more proactive and less reactive, and go after the abusers, the onus will continue to fall on women and marginalized groups to report abuse and convince a social media platform that it’s worthwhile to address.
In addition to the “better curation” checklist, the companies pledged to improve their reporting systems by letting users track and manage their reports, and to establish additional ways for women to get help and support when they report abuse. They’ll also enable “greater capacity to address context and/or language,” which may allow more subtle forms of verbal abuse or threats to be incorporated into enforcement measures.
These all sound like excellent goals, but the release from the WWWF didn’t include any specifics about how each platform plans to achieve them. Nor did the companies themselves offer any comment as part of the news release, so we’ve reached out to all four for comment. Vijaya Gadde, head of legal, public policy, and trust & safety at Twitter, said in an emailed statement that keeping everyone who uses Twitter safe and free from abuse is its top priority.
“While we have made recent strides in giving people greater control to manage their safety, we know there is still much work to be done,” Gadde wrote, noting that women and underrepresented communities are disproportionately affected by abuse (which is pretty well known at this point). Gadde said abusive behavior “has no place on our service. It hurts those who are targeted and is detrimental to the health of the conversation and the role Twitter plays in the expression and exchange of ideas where people — no matter their views or perspectives — can be heard.”
Facebook’s global head of safety Antigone Davis said in an email that the company was looking forward to working with other tech companies to make the internet safer for women. “To keep women safe from abuse, exploitation, and harassment online and offline, we regularly update our policies, tools, and technology in consultation with experts around the world—including with over 200 women’s safety organizations,” Davis said in the statement.
Tara Wadhwa, director of policy for TikTok US, wrote a blog post outlining the company’s plans. “Over the coming months, we’ll begin to develop and test a number of potential product changes to our platform that address these priorities and help make TikTok an ever safer place for women,” Wadhwa wrote.
Google didn’t immediately reply to a request for comment on Thursday.
At this point, there doesn’t appear to be anything binding the companies to these “commitments” other than the prospect of public shaming if they fail to deliver. And unfortunately, that tends to be the best way to get social media platforms to respond to users’ problems.