Among the biggest problems in online discourse is finding the right tools to moderate comment sections. A new Google algorithm, used in a bot called "Perspective," can detect and filter out toxicity in internet comments.
The Perspective Bot Can Filter Comments
According to The Atlantic, civil conversation can be hard to come by these days in the comment sections of news sites. Intelligent observations are often drowned out by off-topic rants, attacks and obscenities. Some sites have abolished their comment sections entirely, while others hide them behind a link at the bottom of each article. Still others employ humans to moderate and screen comments, but this is time-consuming and costly.
Unfortunately, much of online discussion is increasingly taken over by hateful and derailing comments. These undesired comments have become so frequent that they are nearly impossible to moderate by hand. Google is turning to artificial intelligence to help solve this problem. Engineers at Jigsaw, Google's division focused on cybersecurity, developed Perspective, a bot built on an algorithm that sorts online comments by "toxicity," as rated by other users.
Perspective is already used by The New York Times and Wikipedia, among others, helping overwhelmed moderators automatically clean up their comment sections. On Thursday, Feb. 23, Google opened up access to Perspective, releasing it free and open so more people can use it. In addition to making the code publicly available, Google created a demo of the technology that anyone can try.
Those interested in trying out the Perspective bot can see on the demo site what happens when comments are filtered by toxicity, as determined by Perspective. Users can also write their own comments to see how Perspective rates them. The bot seems to be fairly good at filtering: obviously hateful messages are rated very toxic and filtered out.
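For developers who want to go beyond the demo, the sketch below shows roughly how a client might request a toxicity score from Perspective's publicly documented Comment Analyzer API. This is a minimal sketch, assuming you have requested an API key from Google; the placeholder key and the example comments are illustrative, not from the article.

```python
# A minimal sketch of scoring a comment's toxicity with the Perspective API.
# Assumes an API key obtained from Google; endpoint and field names follow
# the publicly documented Comment Analyzer API.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; request a real key from Google
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return Perspective's toxicity estimate for `text`, from 0.0 to 1.0."""
    body = json.dumps({
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }).encode("utf-8")
    request = urllib.request.Request(
        URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity_score("Thanks, this was a helpful article."))  # low score expected
print(toxicity_score("You are an idiot."))                    # high score expected
```

A score near 1.0 means the model considers the comment very likely to be perceived as toxic; sites decide for themselves what to do with that number.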
Future Development of the Perspective Bot
Perspective is expected to evolve and learn more in the future, but its current focus is solely on the toxicity of comments. The bot analyzes and filters the kinds of comments that may drive readers to leave a conversation, and it can train itself to identify abuse and hate speech. The machine-learning technology behind the bot has been trained on thousands of comments from The New York Times, The Guardian, The Economist and Wikipedia.
Most negative comments and feedback that use mean-sounding words or swears are given a high toxicity rating. However, Perspective is still under development, and there are many ways to fool the algorithm; the bot appears to have trouble with context. As more people give feedback, the algorithm is expected to become more sophisticated.
How Perspective Can Be Used
Websites that adopt the Perspective bot as a comment-filtering tool have two options for how it is used. The tool can group all toxic comments together for later review by a human moderator, or site managers can set Perspective to remove toxic comments directly, as the sketch below illustrates.
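As a rough illustration of those two modes, the following sketch applies toxicity thresholds to decide whether a comment is published, held for a human moderator, or removed outright. The threshold values and the `moderate` helper are hypothetical assumptions for this example (Perspective itself only returns scores), and `toxicity_score` is the function from the earlier sketch.

```python
# An illustrative sketch of the two moderation modes described above.
# The thresholds are hypothetical; a real site would tune these cutoffs
# and wire the decisions into its own comment system.
REVIEW_THRESHOLD = 0.7   # above this, hold the comment for a human moderator
REMOVE_THRESHOLD = 0.9   # above this, remove the comment automatically

def moderate(comment: str) -> str:
    score = toxicity_score(comment)  # from the earlier API sketch
    if score >= REMOVE_THRESHOLD:
        return "removed"             # mode 2: direct removal
    if score >= REVIEW_THRESHOLD:
        return "queued for review"   # mode 1: grouped for a human operator
    return "published"

for text in ["Great article, thanks!", "You people are all morons."]:
    print(f"{text!r} -> {moderate(text)}")
```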
According to WinBuzzer, Perspective uses machine-learning technology to tell the bad comments from the good and filter them accordingly. The tool can be a cost-effective solution and a real help in smoothing out comment sections by automatically flagging disruptive behavior.