
Is Social Media Companies' Use of AI Beneficial or Dangerous?


In 2020, 45 million people, or 66% of the total UK population, had an active social media presence. That pales in comparison to usage in countries in Asia and North America, where between 70% and 90% of the population have social media accounts. With these figures in mind, it is not surprising that social media companies such as Facebook and Twitter increasingly rely on AI to tackle the issues that arise on their platforms. But is this beneficial?

On the one hand, social media platforms such as Instagram have combined AI with users' own self-censorship to prevent abusive comments from being posted. Instagram trained its algorithm to assess whether certain words or phrases could be construed as violating its terms of service. Before a comment is submitted on a photo or video, if the algorithm detects a potential violation, it flags the comment to the poster and asks whether they really want to post it. Instagram claims the feature has encouraged many people to rescind their comments.
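The flow described above can be sketched in a few lines. This is a hypothetical illustration only: Instagram's actual model, features, and threshold are not public, and the names here (`score_toxicity`, `TOXICITY_THRESHOLD`, the keyword list) are invented stand-ins for a trained classifier.

```python
# Assumed threshold above which a comment is flagged back to its author.
TOXICITY_THRESHOLD = 0.3

# Toy stand-in for a trained classifier: real systems use machine-learned
# models over many signals, not a word list.
OFFENSIVE_TERMS = {"idiot", "loser", "trash"}

def score_toxicity(comment: str) -> float:
    """Return a crude 0-1 toxicity score for a comment."""
    words = comment.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in OFFENSIVE_TERMS)
    return hits / len(words)

def submit_comment(comment: str, confirm) -> bool:
    """Publish a comment, flagging risky ones back to the poster first.

    `confirm` is a callback standing in for the UI prompt that asks
    'Are you sure you want to post this?'; returning False rescinds it.
    """
    if score_toxicity(comment) >= TOXICITY_THRESHOLD:
        return confirm(comment)  # give the poster a chance to rescind
    return True  # publish without interruption
```

The key design point is that the algorithm never blocks the comment outright; it only interrupts the poster, which is why the feature works through self-censorship rather than enforcement.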

This feature follows on from an earlier AI tool that Instagram launched in 2017: an offensive comment filter that used machine learning to hide obvious abuse. The tool has since been refined using millions of data points generated by users when they have reported comments in the past. The same feature now asks users to notify Instagram if they feel the platform has flagged their comment as offensive by mistake.

Similarly, Facebook has started using AI to bring suicide prevention to its Live and Messenger applications, in addition to its official page guiding users on what to do when someone posts about suicide or self-injury. Facebook's AI was trained on data from anonymous historical Facebook posts and Facebook Live videos, with an underlying layer of pattern recognition to predict when someone may be expressing thoughts of suicide or self-harm. When the system red-flags a post or Facebook Live broadcast, using a predefined trigger value for the prediction output, the post is routed to Facebook's in-house reviewers, who make the final decision on contacting first responders.
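The threshold-and-route pattern described above can be sketched as follows. Everything here is illustrative: Facebook's real model, its score scale, and its trigger value are not public, and `predict_risk` is a toy stand-in for the pattern-recognition layer trained on historical posts.

```python
from collections import deque

TRIGGER_VALUE = 0.9      # assumed predefined trigger value for the prediction output
review_queue = deque()   # posts awaiting the in-house human reviewers

def predict_risk(post_text: str) -> float:
    """Toy stand-in for the trained model; returns a probability-like
    score in [0, 1] that the post expresses thoughts of self-harm."""
    distress_markers = ("can't go on", "goodbye forever", "end it all")
    return 0.95 if any(m in post_text.lower() for m in distress_markers) else 0.05

def screen_post(post_text: str) -> bool:
    """Red-flag posts whose score crosses the trigger value and route them
    to human reviewers, who decide whether to contact first responders."""
    if predict_risk(post_text) >= TRIGGER_VALUE:
        review_queue.append(post_text)  # a human makes the final call
        return True
    return False
```

Note that the AI here only triages: crossing the trigger value queues the post for review rather than triggering any automatic action, matching the human-in-the-loop design the article describes.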

Both Facebook's and Instagram's uses of AI can fairly be called positive developments; that much goes without question. People should be able to get the help they need without risking their mental health, and they should not have to face abuse on their own pages.

However, concerns have started to emerge regarding social media's use of AI, particularly as it relates to racial bias. Two new studies have found that AI trained to identify hate speech may actually be amplifying racial bias. One of the studies found that leading AI models for processing hate speech were far more likely to flag tweets as offensive or hateful when they were written by African Americans, and the effect increased when tweets were written in African American English (a common form of English spoken by black people in the US).

The second study found similar evidence of racial bias against black speech in five widely used academic data sets for studying hate speech, which together totalled around 155,800 Twitter posts. One reason given for this discrepancy is that what counts as offensive largely depends on social context: terms that are slurs in some settings may not be in others. But the algorithms, and the content moderators tasked with grading the training data that teaches those algorithms how to do their jobs, usually do not know the context of the comments under review.
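The kind of disparity the studies report can be measured by comparing a classifier's flag rate across dialect groups. The sketch below is a minimal, invented illustration of that audit idea; the studies themselves used real trained models and real labelled corpora, not the toy data or `classifier` callback shown here.

```python
def flag_rate(tweets, classifier) -> float:
    """Fraction of tweets the classifier marks as offensive."""
    if not tweets:
        return 0.0
    flagged = sum(1 for t in tweets if classifier(t))
    return flagged / len(tweets)

def audit_bias(groups: dict, classifier) -> dict:
    """Map each dialect group to its flag rate.

    A large gap between groups on comparable content suggests the
    classifier has absorbed bias from its training labels.
    """
    return {name: flag_rate(tweets, classifier) for name, tweets in groups.items()}
```

Because the audit only needs black-box access to the classifier, it can be run against any moderation model without knowing its internals, which is roughly how external researchers studied the commercial and academic systems.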

Then there is the impact the coronavirus pandemic has had on social media's use of AI. As social distancing and lockdowns meant fewer content moderators could come into the offices of many social media companies, those companies have had to turn to AI-powered content moderation. This has spawned complaints, particularly among Facebook users, that platforms are mistakenly blocking many legitimate posts and links related to the pandemic by flagging them as spam.

The problem seems to arise because the AI cannot quite tell the difference between a genuine post and one that would normally be marked as spam, a weakness that may have been exacerbated by there being fewer human content moderators in many social media offices to offset the error. Companies such as YouTube and Twitter have acknowledged this, and have promised to work on it as the pandemic continues for the foreseeable future.

AI has many benefits: it can make content moderation faster and potentially reduce the emotional burden on human moderators by offering a quick solution to a possibly traumatic problem. It can also help prevent more toxicity from spreading by prompting human self-censorship and encouraging the deletion of potentially toxic and abusive posts.

However, AI’s current inability to understand social context means that it often ends up making moderation duties harder for its human supervisors and can create blowback for social media companies, who have to perform U-turns to save face. In the age of coronavirus, this is a particularly tricky issue for such companies to have to face. 

