Thursday, 23 February 2017

Google has an idea to put an end to comment "trolls": let an AI moderate them

IMDb, one of the most enduring sites on the Internet, recently announced the closure of its forums. Over the last few years, numerous publications have followed a similar strategy and closed their comment sections. The reason is almost always the same: the environment can turn very "toxic", and fixing it means staying on top of the conversation and investing resources. And even then, sometimes it is not enough.

Aware of this problem, Google announced today a possible solution: Perspective. That is the name they have given to a technology, still under development, that uses machine learning to process comments automatically and give each one a score from 0 to 100 according to its degree of "toxicity". Perspective is part of Google's Jigsaw project, it is completely free, and anyone who wants to can implement it on their website via an API.
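As a rough illustration, this is what requesting a score might look like from Python. It is a minimal sketch: the endpoint, payload shape, and response fields follow Perspective's public Comment Analyzer documentation as of this writing, and since the service is still under development they should be treated as assumptions (the API key is a placeholder you would obtain from Google).

    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder; request one from Google
    ANALYZE_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
                   "comments:analyze?key=" + API_KEY)

    def toxicity_score(text):
        """Ask Perspective to rate a comment; returns a 0-100 toxicity score."""
        payload = {
            "comment": {"text": text},
            "languages": ["en"],  # the model only handles English for now
            "requestedAttributes": {"TOXICITY": {}},
        }
        response = requests.post(ANALYZE_URL, json=payload)
        response.raise_for_status()
        # The API reports a probability between 0 and 1; scale it to 0-100.
        scores = response.json()["attributeScores"]
        return scores["TOXICITY"]["summaryScore"]["value"] * 100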

After Perspective produces that initial ranking of each comment, it is up to each publisher to decide what to do with it: some may take a stricter approach and only allow the best-rated comments to be published; others may choose to filter out only those with a very, very bad score. Someone might even choose to warn the commenter that their comment reads as offensive before it is posted. In short: the algorithm only classifies; what each publisher then does with that ranking is their own business.
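To make those three approaches concrete, here is a hypothetical sketch of policies a publisher might build on top of the score; the thresholds and function names are illustrative and not part of Perspective itself.

    def strict_policy(score):
        # only the best-rated comments get through automatically
        return "publish" if score < 10 else "reject"

    def lenient_policy(score):
        # leave out only the comments with a very, very bad score
        return "publish" if score < 90 else "reject"

    def warning_policy(score):
        # nudge the commenter before the comment is posted
        return "publish" if score < 50 else "warn_before_posting"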

Take the example of The New York Times, which has been testing a first version of Perspective: it receives more than 11,000 comments each day, all of which are checked by hand before being published. "We have collaborated with them to train a series of models that allow the moderators of the Times to sort comments faster, and we hope to work with the Times to help enable comments on more articles, every day," Google explains.
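A toy illustration of that "sort comments faster" idea, reusing the hypothetical toxicity_score helper from the earlier sketch: instead of reading thousands of comments in arrival order, moderators could work through them ranked by score.

    # a stand-in batch; in practice these come from the publisher's own system
    pending = ["Great piece, thanks!", "You people are all morons."]

    # cleanest first: moderators clear the easy approvals quickly and
    # spend their attention on the high-scoring tail
    for comment in sorted(pending, key=toxicity_score):
        print(round(toxicity_score(comment)), comment)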

For now, Perspective will only be available in English (meaning it only processes and scores comments in that language), but the idea is to expand it to other languages within two years. And this is just the beginning: Google has already said that it intends to develop similar algorithms that can better detect, for example, personal attacks or comments that stray from the main topic of the article where they are published.

How does Google know if a comment is toxic or not?
Unlike other systems, which simply block comments containing banned words from a blacklist, Perspective is able to detect, according to Google, comments that include no insults or flagged terms but can still be considered "toxic" and detrimental to the conversation.
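The difference is easy to show with a toy comparison; the blacklist and the example sentence below are invented, and the score lookup reuses the hypothetical helper from the first sketch.

    BLACKLIST = {"idiot", "moron", "stupid"}

    def blacklist_flags(text):
        # the old approach: flag only comments containing a banned word
        return any(word in BLACKLIST for word in text.lower().split())

    comment = "people like you should not be allowed near a keyboard"
    print(blacklist_flags(comment))   # False: not one banned word appears
    print(toxicity_score(comment))    # Perspective can still rate it as toxic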

Basically, the publisher "feeds" the algorithm the comments to be studied, and it compares them against hundreds of thousands of comments it already knows and that have been labeled as offensive. The algorithm is not static, but learns over time: not only does it improve at detecting toxic comments, it also takes into account any corrections it receives about comments identified as false positives.
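The public API also documents a feedback method for exactly this kind of correction; the sketch below assumes that comments:suggestscore endpoint and mirrors the analyze payload, so treat the details as assumptions rather than a definitive integration.

    SUGGEST_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
                   "comments:suggestscore?key=" + API_KEY)

    def report_false_positive(text):
        """Tell Perspective that a flagged comment was actually fine."""
        payload = {
            "comment": {"text": text},
            # scores are probabilities from 0 to 1; 0.0 means "not toxic"
            "attributeScores": {
                "TOXICITY": {"summaryScore": {"value": 0.0}}
            },
        }
        requests.post(SUGGEST_URL, json=payload).raise_for_status()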

The goal of making it freely available to everyone is to get more people using Perspective, so that the algorithm improves over time. Also, as we said before, the idea is that other models capable of detecting off-topic comments and personal attacks will soon be added. If someone calls you "frog face" in a comment, they may well do so with the intention of offending, even though the comment contains no swear words. The idea is that these algorithms can detect that too.

When a machine takes control of the conversation
Moderating comments is not a simple task, and anyone who has done it will agree with me. Not just because of the time it takes, but because of the decisions you have to make. Where does freedom of expression end and where do personal attacks begin? What counts as insult and disrespect and what does not? And in this case, what happens when we leave control of the conversation in the hands of a machine?

Google has insisted that it only provides the algorithm and a "score", and that what happens afterwards depends on each publisher. Moreover, the machine makes its decisions based on the "toxic" comments it has been taught. This raises another question: who decides what is toxic in order to teach it to Perspective? Google points out that it relies on hundreds of thousands of comments tagged by thousands of people.

In any case, the ball is now in the publishers' court, and it remains to be seen which of them decide to add Perspective to their systems. What is certain is that Google is very interested in them using it: offering a free solution to a problem many suffer from (and would possibly pay to solve) is no accident.

PS: If you want to know whether what you write is toxic according to Google, the official website has a section where you can try it out.
