The depressingly toxic nature of internet conversations is of increasing concern to many publishers. But now Google thinks it may have an answer – using computers to moderate comments. The search giant has developed something called Perspective, which it describes as a technology that uses machine learning to identify problematic comments.
The software has been developed by Jigsaw, a division of Google's parent company Alphabet, with a mission to tackle online threats such as extremism and cyberbullying.
The system learns by seeing how thousands of online conversations have been moderated, then scores new comments by assessing how "toxic" they are and whether similar language has led other people to leave conversations. The aim is to improve the quality of debate and make sure people aren't put off from joining in.
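In practice, developers reach Perspective through a web API that takes a piece of text and returns a toxicity score between 0 and 1. The snippet below is a minimal sketch of such a request in Python, assuming an API key stored in a hypothetical PERSPECTIVE_API_KEY environment variable; the endpoint and field names follow the publicly documented Comment Analyzer API, and a real moderation pipeline would add error handling and rate limiting.

```python
# Minimal sketch: scoring a comment's "toxicity" with the Perspective
# (Comment Analyzer) API. Assumes an API key is available in the
# PERSPECTIVE_API_KEY environment variable (a hypothetical setup).
import os
import json
import urllib.request

API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=" + os.environ["PERSPECTIVE_API_KEY"])

def toxicity_score(text: str) -> float:
    """Return a 0-1 score: the model's estimate that the text is toxic."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    # The summary score is the model's overall toxicity estimate.
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(toxicity_score("You are a wonderful person."))   # expect a low score
    print(toxicity_score("Nobody cares what you think."))  # expect a higher score
```

A site could use such a score either to flag comments for human moderators above one threshold, or to let readers filter what they see at a threshold of their own choosing.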
Jared Cohen of Jigsaw outlined three ways Perspective could be used: by websites to help moderate comments, by users wanting to choose the level of rudeness they see in the conversations they take part in, and by people wanting to restrain their own behaviour.
I was intrigued by this last example. He explained that the research had found that many aggressive comments came from people who were usually reasonable but were having a bad day. "If you start yelling in real life you get feedback – online it’s just putting something into a white box," he said.