Twitch has launched a tool that uses machine learning to detect users trying to rejoin chat channels from which they have been banned for abusive behaviour.
The gaming-focused livestreaming website said "bad actors" often created new accounts to continue to harass people.
But the new system would warn streamers and chat moderators if a user was a "likely" or "possible" ban evader.
It is part of Twitch's long-running efforts to reduce hate and harassment.
Hate speech
The company has been criticised over "hate raids", in which unscrupulous streamers send their followers or even automated bots to other channels to harass someone.
Often the victims belong to minority or marginalised groups.
Creators had demanded the Amazon-owned company do more to counter this kind of hate speech.
In September, Twitch announced "phone-verified chat", enabling streamers to require some or all users to verify a phone number before chatting.
And the same month, it began legal action against unidentified users allegedly involved in "chat-based attacks against marginalised streamers".
Final call
The new suspicious-user detection system is "powered by machine learning" and uses "a number of account signals" to detect ban evaders, Twitch said.
"Machine learning" describes computer systems that, in effect, "learn from experience".
The new system will be turned on by default, but moderators and creators can adjust its settings or turn it off.
Twitch said the system compares a range of factors, including the behaviour and account characteristics of users joining a chat channel, with those of banned accounts, and flags suspected ban evaders in one of two ways:
- likely: in which case their chat messages will be blocked
- possible: in which case their messages will still appear
"No machine learning will ever be 100% accurate," Twitch said, so chat moderators would make the "final call", but the tool "will learn from the actions you take - and the accuracy of its predictions should improve over time".