{"id":9549,"date":"2021-11-22T19:58:32","date_gmt":"2021-11-22T18:58:32","guid":{"rendered":"http:\/\/plus.maciejpiasecki.info\/index.php\/2021\/11\/22\/personalized-warnings-could-bring-down-hate-speech-on-twitter\/"},"modified":"2021-11-22T21:23:37","modified_gmt":"2021-11-22T20:23:37","slug":"personalized-warnings-could-bring-down-hate-speech-on-twitter","status":"publish","type":"post","link":"https:\/\/plus.maciejpiasecki.info\/index.php\/2021\/11\/22\/personalized-warnings-could-bring-down-hate-speech-on-twitter\/","title":{"rendered":"Personalized Warnings Could Bring Down Hate Speech On Twitter"},"content":{"rendered":"<p>New research has found that personalized warnings for Twitter users could help bring down the amount of hate speech on the platform. The findings come from a group of researchers at New York University\u2019s Center for Social Media and Politics.<br \/>\nThe researchers found that issuing a carefully worded disclaimer or warning could deter people from using hateful or abusive language. This could prove to be an effective tool for reducing violent rhetoric on platforms like Twitter. However, it\u2019s still early days.<br \/>\nMustafa Mikdat Yildirim, the lead author of the paper, explained how sending warnings could help alleviate the problem of hate speech.<br \/>\n\u201cEven though the impact of warnings is temporary, the research nonetheless provides a potential path forward for platforms seeking to reduce the use of hateful language by users,\u201d he said.<br \/>\nResearchers created test accounts with profile names like \u201chate speech warner\u201d before warning individuals<br \/>\nThe research began with the identification of accounts that were close to suspension for violating Twitter\u2019s hate speech rules.<br \/>\nResearchers sought candidates who had used at least one word from the \u201chateful language dictionaries\u201d over a one-week period. 
Additionally, these users had to follow at least one account that was recently suspended for violating Twitter\u2019s guidelines.<br \/>\nThe researchers then created test accounts with names like \u201chate speech warner\u201d and tweeted warnings at the flagged individuals. The wording of the tweets differed between the test accounts, but the core idea was the same \u2013 to discourage hate speech. The warnings would also inform individuals about the suspension of an account they follow.<br \/>\nThe researchers\u2019 test accounts had around 100 followers and no affiliation with any organization or NGO<br \/>\n\u201cThe user [@account] you follow was suspended, and I suspect that this was because of hateful language,\u201d one of the samples from the research reads. Other warnings include, \u201cIf you continue to use hate speech, you might get suspended temporarily\u201d and \u201cIf you continue to use hate speech, you might lose your posts, friends and followers, and not get your account back.\u201d<br \/>\nThe paper claims that each of the accounts created by the co-authors had around 100 followers. Moreover, none of these accounts had an affiliation with any organization, thus maintaining a neutral stance. The researchers point out that such warnings could have a bigger impact if they came from Twitter itself or from an NGO.<br \/>\nWe\u2019re still a long way from determining whether this method could bring about a decline in online hate speech. 
But this is a decent starting point for social media giants to expand on their existing frameworks to combat hate speech and misinformation.<br \/>\nThe post Personalized Warnings Could Bring Down Hate Speech On Twitter appeared first on Android Headlines.<br \/>\n<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/plus.maciejpiasecki.info\/wp-content\/uploads\/2021\/11\/Twitter-Logo-renewed-ah-db20.jpg\" width=\"1920\" height=\"1080\"><br \/>\nSource: androidheadlines.com<\/p>\n","protected":false},"excerpt":{"rendered":"<p>New research has found that personalized warnings for Twitter users could help bring down the amount of hate speech [&hellip;]<\/p>\n","protected":false},"author":29,"featured_media":9550,"comment_status":"false","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-9549","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-bez-kategorii"],"_links":{"self":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts\/9549","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/users\/29"}],"replies":[{"embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/comments?post=9549"}],"version-history":[{"count":1,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts\/9549\/revisions"}],"predecessor-version":[{"id":9551,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts\/9549\/revisions\/9551"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/media\/9550"}],"wp:attachment":[{"
href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/media?parent=9549"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/categories?post=9549"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/tags?post=9549"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}