Well, well, well. A subsidiary of Google called Jigsaw has partnered with the New York Times and the Wikimedia Foundation to produce Conversation AI, an artificial intelligence tool they say can detect harassment with 92 percent accuracy and a 10 percent false-positive rate.
Wikipedia and the Times will be the first to try out Google’s automated harassment detector on comment threads and article discussion pages. Wikimedia is still considering exactly how it will use the tool, while the Times plans to make Conversation AI the first pass of its website’s comments, blocking any abuse it detects until it can be moderated by a human. Jigsaw will also make its work open source, letting any web forum or social media platform adopt it to automatically flag insults, scold harassers, or even auto-delete toxic language, preventing an intended harassment victim from ever seeing the offending comment.
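The "first pass" workflow the Times describes can be sketched as a triage step: an automated scorer holds suspect comments for human review and lets the rest through. This is only an illustrative sketch; the `toxicity_score` function below is a toy keyword heuristic standing in for Conversation AI, whose actual model and API are not described here, and the 0.1 threshold is an arbitrary assumption.

```python
def toxicity_score(comment: str) -> float:
    """Toy stand-in scorer: fraction of words found on a small blocklist.
    A real deployment would call a trained classifier instead."""
    blocklist = {"idiot", "stupid", "hate"}
    words = comment.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in blocklist for w in words) / len(words)

def triage(comments, threshold=0.1):
    """Split comments into auto-published and held-for-human-review,
    mirroring a first-pass filter that blocks abuse until a human moderates it."""
    published, held = [], []
    for c in comments:
        (held if toxicity_score(c) >= threshold else published).append(c)
    return published, held

published, held = triage([
    "Great reporting, thanks!",
    "You idiot, this is stupid.",
])
# The first comment is published; the second is held for a moderator.
```

The key design point is that the automated pass never deletes outright in this mode: it only defers publication, so a human moderator still makes the final call on anything flagged.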
Nothing like closing the barn door after the horse is gone.
(See also Research:Detox on meta)