Tuesday, January 5, 2016

Gatekeepers and social media content

This morning, a friend on Facebook shared an interesting piece from Shurat HaDin, a Tel Aviv-based NPO that focuses on terror victims and Jewish/Israeli issues. Here is Shurat HaDin's Facebook page. Here is the YouTube link to the piece in question, "The Big Facebook Experiment."

Source: Shurat HaDin
In a nutshell, this is what happened with "the experiment": people at Shurat HaDin set up two Facebook pages (Communities). One was called "Stop Palestinians," the other was called "Stop Israelis." They then added updates to each page that basically called for violence against the group in question (ending with "Death to Palestine" and "Death to Israel," respectively). After this, they simultaneously reported both pages to Facebook for violating community standards. The pages were set up on December 28, 2015, and they were reported to Facebook on December 29, 2015. The "Stop Palestinians" page was suspended by Facebook on the same day it was reported. The "Stop Israelis" page? It remained active (garnering some 972 "likes") until today, January 5, 2016. Here's a cached copy of it which shows this.

Now, Shurat HaDin only revealed the experiment today on Facebook. It was soon being shared by thousands of people and was picked up by Israeli media as well. So, it's not much of a stretch to suggest that this was the reason behind Facebook's decision to finally suspend the "Stop Israelis" page. Shurat HaDin's overriding point/argument is that there is bias at work here on the part of Facebook, bias against Jews in general and Israel in particular. It's a pretty easy argument to make, given how the "Stop Palestinians" page was shut down almost immediately, while the "Stop Israelis" page was allowed to remain active until it became a PR problem.

That said, it occurs to me that the exact nature of the "who" making these decisions is important. Facebook is a big company, and given the number of active users, pages, and updates, there's simply no way that the same person or persons review every reported incident. Different people are making decisions on different incidents. Granted, there are guidelines in this regard. But there's a lot of wiggle room in there. It seems to me that both pages were obviously calling for violence or trying to justify violence as a means to an end; however, my point of view is not that of a hardcore activist/sympathizer for either "side." There is a political angle available here, where one might argue that the calls for violence were only with respect to defense against persecution and violence already being committed by the other "side."

It's a narrow road, no doubt, one that I think is in fact too narrow with respect to the community standards Facebook claims it is enforcing, but I can still see it.

Which again returns us to the identity of the actual decision makers at Facebook, the gatekeepers if you will. They are individuals, and as such they don't necessarily see things the same way as anyone else. Indeed, they don't all see things the same way within their own limited group, either. So one person charged with reviewing reported pages at Facebook may agree that a particular page is a problem, while another person at Facebook may not. It seems to me that there is some measure of uncertainty here. Of course, that uncertainty is somewhat mitigated over time, as a particular page may be reported again and again and again, until it is seen by a gatekeeper who agrees that it is a problem and suspends it.

Maybe that's what actually happened here. Maybe the "Stop Palestinians" page, when it was initially reported, was seen by the "right" person. Maybe if it had been seen by a different person at Facebook, it wouldn't have been suspended so soon. And the reverse for the "Stop Israelis" page: maybe the "right" person would have suspended it immediately.

Who can say?

But the larger issue remains with regard to gatekeepers on social media, on Facebook and Twitter, on comment threads for news stories, and on messageboards: someone is deciding what is acceptable and not acceptable on all of these things (apart from the rare completely unmoderated website). True enough, almost all of these vehicles have posted standards for comment and participation, but it is still a matter of human choice when it comes to deciding what comment/post/update/page crosses a line.

As a regular participant on one particular messageboard, I say that's okay, as far as it goes. The messageboard is a limited community, not a news site nor a social media platform like Facebook. If people don't like decisions made by moderators, they're free to go elsewhere on the internet to express their opinions.

Yet the large social media platforms have become an ingrained part of life for much of the population. And supposedly, they represent the "free and open exchange of ideas" that makes the internet such a great thing. Ditto for comment threads on news stories. Barring direct threats made against specific individuals and other comments that would be overtly criminal, should there be any purging going on here? I find myself leaning strongly towards saying "no," towards the view that these platforms are now too significant to permit the limiting of any sort of speech, unless that speech is clearly breaking laws. Because these platforms, despite being privately owned, are public communities. Any discussion I can legally have in Starbucks, on a college campus, or at a town hall-style meeting should be a discussion I can have on Facebook, on Twitter, or on a CNN comment thread. Anything I can say in the first group of places, I should be able to say in the second.

I think we, as a society, have to be very wary of anyone limiting information that we depend on or opinions that we may want to hear, regardless of the offered justification. If social media is going to continue to be a dominant force in our lives, we can't allow what we see on these platforms to be spoon-fed to us via subjective standards enforced by people who may or may not have their own agendas. It's yet another dangerous road we face going forward.

2 comments:

  1. One of the issues here is consistency. If it can be demonstrated that one type of decision is made consistently (or consistently enough), then there is a case to be made for systemic bias. I don't know enough to say either way, but I am guessing that the decision to run the "experiment" didn't come out of the blue, but out of some sort of prior research where bias was suspected. Again, I don't know enough to say either way.
    With regard to your general point about moderation, I am also not sure where I fall. There are arguments to be made in both directions. On the one hand, I understand your point about the free exchange of ideas. On the other hand, there is a case to be made for moderation. For example, unmoderated threads about the P/I conflict quickly deteriorate and lose any informative value. When the threads are moderated based on certain "community standards", there is obviously a need for some sort of consistency (there is a whole site dedicated to looking at the Guardian's "Comment is Free" section that partly deals with issues of problematic moderation).
