The End of Internet ‘Trolls’ Might Be Near


Cyberbullies, Internet trolls, web monsters: there are many names for those who hide behind their screens and harass people online. Now Google is applying AI to detect them and, hopefully, put an end to cyberbullying.

The group behind the effort is Jigsaw, Google’s former in-house think tank, which is now working to apply that technology to “geopolitical issues”, according to a report published by WIRED.

The organisation has built “Conversation AI”, a tool intended to detect, and hopefully end, harassment online.

“The software is designed to use machine learning to automatically spot the language of abuse and harassment — with, Jigsaw engineers say, an accuracy far better than any keyword filter and far faster than any team of human moderators”, WIRED writes.

So how will the tool work?

“Conversation AI” analyses a piece of text for abusive or foul language and assigns it an “attack score” from 0 to 100. A score of 0 means no offensive language was detected, whereas a score of 100 means the text is almost certainly abusive.
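To illustrate the idea of a machine-learned 0–100 attack score, here is a minimal sketch of a text classifier. This is not Jigsaw’s actual system (which is trained on millions of labelled comments); it is a toy bag-of-words Naive Bayes scorer with an invented training set, included only to show how a score in that range can come out of a probabilistic model rather than a keyword filter.

```python
# Toy sketch of an "attack score" classifier. NOT Conversation AI:
# a minimal bag-of-words Naive Bayes model on a tiny invented corpus.
from collections import Counter
import math

# Hypothetical labelled examples (real systems use millions of comments).
ABUSIVE = ["you are an idiot", "shut up you fool", "what an idiot"]
BENIGN = ["thanks for the article", "great point well made", "i disagree politely"]

def _counts(docs):
    c = Counter()
    for d in docs:
        c.update(d.lower().split())
    return c

ABUSE_COUNTS, BENIGN_COUNTS = _counts(ABUSIVE), _counts(BENIGN)
VOCAB = set(ABUSE_COUNTS) | set(BENIGN_COUNTS)
ABUSE_TOTAL = sum(ABUSE_COUNTS.values())
BENIGN_TOTAL = sum(BENIGN_COUNTS.values())

def attack_score(text):
    """Return a 0-100 score: higher means more likely to be an attack."""
    log_abuse = log_benign = 0.0
    for word in text.lower().split():
        # Laplace smoothing so unseen words don't zero out a class.
        log_abuse += math.log((ABUSE_COUNTS[word] + 1) / (ABUSE_TOTAL + len(VOCAB)))
        log_benign += math.log((BENIGN_COUNTS[word] + 1) / (BENIGN_TOTAL + len(VOCAB)))
    # Turn the log-likelihood difference into a probability, then scale to 0-100.
    p_abuse = 1 / (1 + math.exp(log_benign - log_abuse))
    return round(100 * p_abuse)

print(attack_score("you are an idiot"))       # high score
print(attack_score("thanks for the article")) # low score
```

A real system replaces the toy corpus and Naive Bayes model with a large labelled dataset and a far more capable learned model, but the shape of the interface (text in, bounded numeric score out) is the same.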

“Jigsaw has now trained Conver­sation AI to spot toxic language with impressive accuracy. Feed a string of text into its Wikipedia harassment-detection engine and it can, with what Google describes as more than 92 percent certainty and a 10-percent false-positive rate, come up with a judgment that matches a human test panel as to whether that line represents an attack”, WIRED writes.

Abusive and offensive language is known to flourish on social media platforms such as Facebook, Twitter and Instagram. However, “Conversation AI” is starting off in a (hopefully) less abusive environment: the New York Times’ comment sections.

YouTube was also discussed, and Wikipedia has shown interest in the tool, but for now the focus will mainly be on the New York Times. Jigsaw’s Founder and President Jared Cohen told WIRED: “I want to use the best technology we have at our disposal to begin to take on trolling and other nefarious tactics that give hostile voices disproportionate weight to do everything we can to level the playing field.”

For anyone who has been struggling to handle cyberbullies, the good news is that “Conversation AI” will soon be easier to access and apply, as it is set to be released as open source, MobileMag writes.

In theory, any website that wishes to protect its users will be able to do so, thanks to artificial intelligence.

MobileMag says that what makes the tool so impressive is the advanced technology behind it, which can instantly flag offensive language, insults and profanity, automatically delete abusive comments, and even scold harassers.

“For those who are fond of bashing, hurling hate comments and saying bad things online, be careful, your days are over”.

The original report can be found at: https://www.wired.com/2016/09/inside-googles-internet-justice-league-ai-powered-war-trolls/

For the latest news and conversations about AI in business, follow us on Twitter, join our community on LinkedIn and like us on Facebook.
