Twitch chat could soon be a lot friendlier: today the streaming service is introducing a new automated tool designed to create a “positive and inclusive chat experience.” The moderation tool is called AutoMod, and according to Twitch it uses a combination of machine learning and natural language processing to block messages deemed inappropriate outright and to flag potentially troublesome ones for review by a human moderator. (Flagged comments are held in a publishing queue until they get the go-ahead.)
AutoMod will be an opt-in feature, so you won’t see it on every channel, and Twitch says that streamers will be able to adjust how aggressive the chat filtering is to suit their personal preferences. AutoMod joins a relatively robust set of moderation tools on Twitch, which let streamers do things like ban specific words, links, or phrases; employ community moderators; force users to view a set of rules before they can chat; or restrict the discussion to subscribed users only.
“For the first time ever, we’re empowering all of our creators to establish a reliable baseline for acceptable language and around-the-clock chat moderation,” says Twitch moderation lead Ryan Kennedy of AutoMod. Twitch says the feature has been in the works for several months; in fact, a trial run was employed during the Republican and Democratic National Conventions, which aired on the streaming service.
Currently the full version of AutoMod is only available in English, but the tool is also in beta for multiple languages, including Arabic, Czech, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, and Turkish.