6/15/2017

How the New York Times Manages Online Comments

Beyond the human-machine collaboration itself, this piece also offers a look at the rigorous editorial workflow of a professional news organization, which is worth learning from.

C.G. Weissman, How Alphabet’s AI Robot Is Helping The New York Times Replace Its Public Editor, Fast Company, 2017/6/13.
Inside the New York Times’ towering building in Midtown Manhattan, just off Times Square, sit 14 journalists whose primary role is to read and to click. 
They are the Grey Lady’s comment moderators, and their goal is to maintain online civility in the hallowed digital pages of the 166-year-old newspaper. Civility, that is, for 10% of the Times’ articles–for only that percentage of stories on nytimes.com allow reader comments, given the size of the paper’s comment moderation team. As it is, the Times receives 12,000 comments each day....
Starting today, the New York Times will begin to increase the number of articles that feature a commenting section–including front page articles–and the moderators will now use the Jigsaw-built platform, Perspective. As the year goes on, the paper hopes to offer comments on 80% of its articles. That’s no small goal, given that the Times publishes about 200 articles a day. 
To achieve this lofty aim, the venerated newspaper has been leveraging Jigsaw’s artificial intelligence capabilities. Instead of bulking up the Times’ team of human moderators, Alphabet has been building a machine-learning algorithm that automatically scans for abusive and superfluous comments. Human moderators will use this new software, built specifically for them, to speed up their moderation processes–and the hope is that this program will make it dramatically faster for humans to comb through all those reader-submitted comments.... 
Thanks to this new system, the human moderating team will be able to access a platform that ranks a comment based on the probability that it would be rejected by a human moderator. If it’s deemed a 0, it’s very likely a good comment. If it’s given a 100, it may be the trolliest thing your eyes have ever seen. The New York Times’ moderators can use this system to gather comments within a certain number range–say, 0-10–and then approve or deny them en masse. If something is questionable–i.e., if the robot assigned it a high-ish number–the human moderators can take a more discerning look. This software also bolds questionable sentences so the human team can easily see exactly what’s controversial about a comment, according to the machine. This, says Etim, will make the moderators’ jobs “eight to 10 times more efficient.”
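The triage workflow described above can be sketched in a few lines. This is a hypothetical illustration, not the Times's or Jigsaw's actual code: each comment carries a 0-100 score estimating how likely a human moderator would be to reject it, low scorers can be approved in bulk, high scorers are flagged for a closer human look, and the thresholds (`approve_max`, `review_min`) are assumed values for the sketch.

```python
def triage_comments(scored_comments, approve_max=10, review_min=70):
    """Bucket (text, score) pairs by rejection-probability score.

    Low scores can be approved en masse, high scores get a discerning
    human look, and the middle queue is worked through range by range.
    Thresholds here are illustrative, not the Times's actual cutoffs.
    """
    buckets = {"bulk_approve": [], "needs_review": [], "queue": []}
    for text, score in scored_comments:
        if score <= approve_max:
            buckets["bulk_approve"].append(text)
        elif score >= review_min:
            buckets["needs_review"].append(text)
        else:
            buckets["queue"].append(text)
    return buckets

comments = [
    ("Great reporting, thank you.", 3),
    ("You people are all idiots.", 92),
    ("I'm not sure this argument holds up.", 45),
]
result = triage_comments(comments)
# result["bulk_approve"] == ["Great reporting, thank you."]
# result["needs_review"] == ["You people are all idiots."]
```

The efficiency gain comes from the first two buckets: a moderator acts on whole score ranges at once instead of reading every comment individually.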
