Twitter is to expand its Safety Mode feature, which temporarily blocks accounts that send harmful or abusive tweets.
The system will flag accounts that use hateful remarks or bombard people with uninvited comments, and block them for seven days.
Half of the platform's users in the UK, US, Canada, Australia, New Zealand and Ireland will now have access.
And they can now also use a companion feature called Proactive Safety Mode.
This will proactively identify potentially harmful replies and prompt people to consider enabling the mode.
The firm said it had added this based on feedback from some users in the initial trial, who wanted help identifying unwelcome interactions.
The Safety Mode feature can be turned on in settings, and the system assesses both the content of a tweet and the relationship between its author and the person replying. Accounts that the user follows or frequently interacts with will not be auto-blocked.
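That decision logic can be sketched in a few lines of Python. This is purely illustrative, based only on the behaviour described here: the function name, inputs and structure are assumptions, as Twitter has not published its implementation.

```python
# Illustrative sketch only: all names and logic are assumptions inferred from
# the behaviour described in the article, not Twitter's actual implementation.

BLOCK_DURATION_DAYS = 7  # per the article, blocks last seven days


def should_auto_block(reply_is_harmful: bool,
                      user_follows_replier: bool,
                      frequently_interacts: bool) -> bool:
    """Decide whether a replying account should be temporarily blocked."""
    # Accounts the user follows or frequently interacts with are exempt.
    if user_follows_replier or frequently_interacts:
        return False
    # Otherwise, block only replies whose content is assessed as harmful.
    return reply_is_harmful


# Example: a harmful reply from a stranger would trigger a seven-day block.
if should_auto_block(reply_is_harmful=True,
                     user_follows_replier=False,
                     frequently_interacts=False):
    print(f"Auto-blocking account for {BLOCK_DURATION_DAYS} days")
```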
The firm said it would collect more insights on how the feature is working and could incorporate further improvements.
Twitter has struggled to deal with abuse and harassment on its platform and now faces closer scrutiny from regulators.
In January, a French court ruled that Twitter must show exactly how it combats online attacks, while the UK is preparing legislation to force all social media sites to act swiftly on hate speech or face fines.
In response to its tweet announcing the expanded rollout, many users claimed that their accounts had been suspended for no reason.
Like all social media platforms, Twitter relies on a combination of automated and human moderation.
A 2020 report by New York University's Stern School of Business suggested Twitter had about 1,500 moderators worldwide.