DTC 475, Week 4 Blog Post: Social Media Moderators

When it comes to social media, we have had a fairly smooth introduction to the nature of the content that we see. In the early days of social media, one would occasionally stumble across a spam post that was pornographic in nature; I also remember the video of Saddam Hussein’s hanging being posted in the early days of YouTube. Apart from these random occurrences, users of social media have been sheltered from graphic, pornographic, or generally disturbing content. Learning about the people who keep social media sites relatively safe, by moderating every piece of content that gets flagged, is disturbing.

The fact that one would have to witness graphic content on an extreme scale, day after day, is tragic in itself. The fact that American companies outsource their moderation needs to other countries is a whole other issue. What I am interested in is the alternatives to human-powered content moderation, and those alternatives are slim. One could say we should just take out moderation altogether. I think it is quite obvious why we wouldn’t take this approach, as we wouldn’t want Grandma Jo to see Graham kill Joe. The next best alternative would be user-generated moderation, i.e. deleting content solely based on users flagging anything they believe to be offensive. This doesn’t sound too bad; however, should it really be okay if one of your crazy Facebook friends from middle school starts flagging all things soccer because it’s a “communist sport”? The next best alternative is the current one: outside moderators who make decisions based on their perceived notions of morality, specifically morality as it is understood in the United States.
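To make that flagging problem concrete, here is a minimal sketch, my own illustration in Python rather than how any real platform actually works, of naive flag-count moderation. Any rule that removes content purely on flag volume can be gamed by a single motivated user:

```python
# A minimal sketch of naive flag-based moderation. Purely illustrative:
# no real platform works exactly like this. A post is removed once it
# collects enough flags, regardless of who flagged it or why.

FLAG_THRESHOLD = 5  # arbitrary cutoff, chosen just for this example

def should_remove(flags):
    """Remove a post once it has been flagged FLAG_THRESHOLD times."""
    return len(flags) >= FLAG_THRESHOLD

# One motivated user with a few throwaway accounts can clear the
# threshold on perfectly innocent content:
soccer_post_flags = ["crazy_friend_alt_%d" % i for i in range(5)]
print(should_remove(soccer_post_flags))  # True: the soccer post disappears
```

Even this toy version exposes the design flaw: the threshold is arbitrary, and nothing in a raw flag count distinguishes a genuine report from a grudge.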

I think it is interesting to ponder the last alternative I can draw up: AI moderation. On the surface this seems like the best option. Smart computers moderate social media sites as humans do now, preventing us from witnessing the evildoings of this world. The issue goes deeper, though. As computers’ computation speed and power grow exponentially, so too grows the possibility, and eventually the reality, that computers will be able to think as fast as or faster than the human brain. With this improved computation speed, computers will be able to pull, analyze, and interpret data far beyond the abilities of humans. So if one were to instruct a computer, or a set of computers, to moderate content, ensuring that offensive material is kept away from users, doesn’t that mean the computer has to have a sense of morality? If so, where does that morality come from? Is it inputted by a human, whose moral judgment may well be flawed? Or does the computer, based on its experiences (its analysis of information and data), create its own set of moral guidelines?
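One way to see where the morality question hides is another small sketch, again my own hypothetical Python illustration and not any real system. A model assigns content an “offensiveness” score, and a threshold decides what users get to see:

```python
# A minimal sketch of AI-style moderation, purely illustrative.
# A model scores content for "offensiveness" and a threshold decides
# what users get to see. Note where the human judgment hides: in the
# labeled examples behind score() and in the threshold someone picked.

OFFENSIVENESS_THRESHOLD = 0.8  # chosen by a person, not by the machine

def score(text):
    """Stand-in for a trained classifier. A real one would be a model
    trained on examples that humans labeled as offensive or acceptable,
    so its 'morality' is inherited from those labelers."""
    banned_terms = {"graphic", "gore"}  # toy proxy for learned behavior
    hits = sum(term in text.lower() for term in banned_terms)
    return hits / len(banned_terms)

def moderate(text):
    """Hide anything the model scores above the human-set threshold."""
    return "hidden" if score(text) >= OFFENSIVENESS_THRESHOLD else "visible"

print(moderate("graphic gore footage"))  # hidden
print(moderate("soccer highlights"))     # visible
```

Even in this toy version, every “moral” decision traces back to a human choice: who labeled the training examples, and who set the threshold. The machine never escapes the flawed judgment it was asked to replace.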