Sunday, November 26, 2017

The Internet as Public Space

I’ve noticed a trend among some people, mostly arts/creative types, of using the phrase formula “_____ as _____”, as in “writing as activism” or “sculpture as architecture”. I thought I’d follow suit, half in homage and half in irreverent mockery.

Anyway. I’ve been thinking a lot about the internet and censorship. Following the most recent presidential election, Facebook disclosed that many targeted adverts were purchased with money, and probably by individuals, connected to the Russian state. This was seen as pretty clear evidence that, at least on some superficial level, these adverts, propagated over various social media platforms (predominantly Facebook and Twitter), influenced how a portion of the country’s voters informed themselves. To exactly what extent and to what possible detriment this occurred is for someone with thicker glasses and a higher security clearance than mine to determine. What struck me about this story, however, was that Facebook resolved to “do something about it”. What exactly that thing is has yet to manifest.

Similarly, YouTube has been playing around with its monetization policy for advert revenue, and this has raised the hackles of several of the people I subscribe to on that platform. I think the poster child for this would be Cody’s Lab, a channel run by a bright, young, enthusiastic scientist who produces videos that are entertaining and informative. Basically, exactly the kind of content that YouTube would be interested in promoting. His channel was shut down temporarily because of complaints about animal cruelty. Not that he was actually being cruel to animals; there was just a preponderance of complaints, which led to the YouTube algorithm shutting his channel down. Content like this, or content with strong language, violent behavior, and things of that nature, generally gets demonetized almost immediately, and by the time the underpaid intern at Google gets around to reviewing the complaint, most of the advertising revenue that would have gone to the video maker is lost, because the frequency of views tends to taper off over time.

Obviously there is a place for pointing at policies like YouTube’s and saying that a particular algorithm or policy needs to be better. Same with Facebook. They surely could have done a better job of disclosing where the adverts were originating from. However, what I want to explore is the idea behind policies such as these. Basically, there has been a recent trend of calling for content to be restricted, taken down, or censored on social media platforms. This led me to wonder how First Amendment rights would or would not apply to these platforms. Certainly, there is a romantic view that here in the Divided States of America, we have fought and died to protect our right to do and say whatever it is we want. If you want to say you think Pooh bear is a sugar addict and that Christopher Robin and all his friends are just enabling his lifestyle instead of getting him the help he desperately needs before he eats himself into an early grave of cardiovascular disease and diabetic shock, that is your choice. It may be an unpopular opinion, but it is certainly one you are allowed to have. However, there are restrictions on the kind of free speech one is allowed to partake in. The SCOTUS case Schenck v. United States established that free speech, in a First Amendment kind of way, could be restricted if it presented a “clear and present danger” to society. The most belabored example of this is shouting fire in a crowded theatre. People think there’s a fire, even though one would think it would be pretty easy to see if there was a fire burning in a darkened theatre, and people get trampled because you lied. These rules seemed pretty straightforward and accepted in the age of their conception, but since then, much like our taxes, things have gotten much more complicated.

I suppose the first thing that ought to be done is a few functional clarifications. Firstly, no one owns the internet. If someone posts a selfie with a stupid superimposed cartoon filter of a dog face under the delusion that it makes them attractive, then sure, they may have taken that picture, but someone owns the cell tower it got sent to, someone owns the satellite it bounced off of, the transatlantic cable on the seafloor, the server it got stored on. And the people who own these things are typically not the person who posted that selfie. Furthermore, platforms such as Facebook are privately owned. They have the definitive right to determine what kind of content they want to host. You may not like it, but too bad. Go back to Myspace. In terms of usage, social media platforms may seem like public spaces where you gather all your (in my case) 7 friends and hang out like you would in a town square or a public park, but this is not the case. Someone owns that park, and it’s maintained with advert revenue, not your hard-earned tax dollars. Platforms are not public space.

However, it seems like that’s the whole reason why they exist: so they can be that space where you hang out with your friends and your uncle who always sends you invitations to play Candy Crush Saga. They want you and all your friends to sign up and hang out so they can post adverts, target posts at you, and take a bite out of your cookies, so they can make money off of you. This is their first concern. How much you enjoy the website and how it connects you to your friends or family or people you’re Facebook stalking is secondary. For the user, the connection and the people are primary, and we just put up with the advertisements because, come on, who’s actually going to use Myspace.

The issues that have arisen recently with censorship have come from content that certain people on these platforms argue fits the “clear and present danger” description. For instance, if an Islamic extremist militant group wants to recruit on Facebook, it’s pretty understandable that Facebook would want to do something to prevent that. However, with a less extreme example, this distinction becomes rather blurred. What if someone posts something saying climate change is a hoax? Some people may say that this is a “clear and present danger”, because it encourages your local congressman not to vote for legislation that would save us all from dying from rising sea levels, ocean acidification, a preponderance of pollutants being released into the atmosphere, and other things of that nature. But is it really a “clear and present danger”? If your local homeless person sat in a park and talked about how global warming is a lie told by those nasty tricksies chineses, would anyone try to get him arrested? Hopefully not, because that charge would in no way hold up. We’d simply chalk it up to a crazy person being a crazy person. What if someone posted about a white supremacist gathering on a social media platform? Should that be considered a “clear and present danger”? I don’t think anyone will deny that white supremacists have done some very awful things before. One could see this post with that history in mind and claim that this gathering constitutes an instance of “clear and present danger”, and that the platform ought to take it down. However, the person who posted this information hasn’t actually done anything clearly or presently dangerous yet, nor has he or she threatened to. But then what if the post were about white supremacist ideology instead of a gathering? Would it be any more or less clearly or presently dangerous?

It’s right about now that one begins to see this less as a simple dichotomy of allowing good speech and prohibiting bad speech. I think the development of censorship policy on these platforms will ultimately be determined by companies supporting and allowing the kind of speech they want, and censoring the kind of speech they don’t want. So in that way, good/bad is kind of irrelevant. But one would hope that what is allowed or censored somewhat aligns with good/bad. I think that in developing its policy and deciding what it wants and doesn’t want, a social media platform has to decide whether it wants to act like a public space or engage in the thicket of righteousness. I chose to use the word “righteousness” here because I believe this road will lead to battles not simply over speech that is good, but also over truth, meaning, and value. Righteousness is indeed a loaded word, so I shall take precaution to aim it carefully. One other thing to clarify is that these two choices are not mutually exclusive. Between them lies a wide swath of possibility.

If a social media platform chooses to tend toward acting like a public space, then it should implement policies in line with the standard of “clear and present danger”. So long as no one is making threats of harm or doing something like telling a habitually depressed teenager with a bad haircut and more zits than friends to go kill himself or herself, speech ought to be allowed. This would include things like supporting climate change denial. This would include white supremacist or neo-nazi organizing, so long as there is no definitive proof of violent behavior. This would include speech that one would deem hateful, racist, misogynistic, marginalizing, degrading, or disempowering, so long as it does not contain threats of violent behavior. Think about the speech that happens in public spaces all the time. People express racist views and slurs. People catcall women relentlessly and unfruitfully. I know a guy down at the park who knows just about everything there is to know about how aluminum foil hats stop brain-reading lasers from the government and how they’re all really lizards anyway. This is stuff we as a society should work to eliminate, but through positive exposure to diversity and teaching equality and respect for each other, as well as, apparently, civic education and herpetology.

You see, the thing about freedom of speech is that it’s not freedom from consequence. My friend with the curious headgear gets judgemental looks and gets ignored and brushed past for espousing what he believes in. That is his consequence. If you catcall a girl, that’s pretty much a 100% assurance that she will never even consider going out with you. That’s your consequence. People get on the internet, and they don’t think their speech should have consequences. This is in part because of the way social media is structured. It is convenient for people with similar views to find each other. My lizard-fearing friend may sit in the park all day and never once meet someone who will listen to him for more than ten seconds, but if he made a Facebook group, he’d probably find a group of people who would sit around and affirm his beliefs like the studio audience of the Ellen Show. When someone is surrounded by the people they have picked and chosen, they are somewhat more insulated from the kind of scrutiny and judgement that characterizes a public space.

If a social media platform decides to pursue a policy where it plunges into the thicket of righteousness, then, like a stag who proudly saunters into the brambles, it will find itself ensnared. Consider, again, the example of the climate change denier. If they were a reasonable human being who has looked at the sources available to them and simply decided that there wasn’t a preponderance of evidence that would allow them to scientifically conclude that climate change is a real phenomenon, most of us would probably understand where this individual is coming from. They’re not acting maliciously or out of ostrich ignorance; they’ve simply come to the conclusion that, to them, follows from the facts. If the social media platform decides this is something that ought to come down, it will likely be not for the reason of “clear and present danger”, but because it’s false, and because it’s more detrimental to society to allow it than to prohibit it. What if a kid sees it and grows up to be someone who writes environmental policy? In this case, I believe the standard of evidence that people find justifiable is truth. But I believe this standard is impossible to maintain. If someone posts that a pound of lemons contains more sugar than a pound of strawberries, would we expect someone at Facebook to go and chemically deconstruct the fruit in the break room? If someone posts about there being a true 9th planet which is not Pluto, would we expect Twitter to launch a spaceship and confirm it? The standard of truth is something that will never be upheld in a practical way, because there are so many things that are filled with doubt. Science as a procedure is very clear about this. There might be a giant space mind-reading laser run by a lizard government, and the only person who is safe is my friend in the foil hat, drinking from a Russell’s Teapot of I-told-you-so.
What if there’s a post about how eating an entire pizza is an acceptable form of self-care? Most of us would argue that no, that’s a dietary practice that ought to be prohibited, and we might even complain about it to the platform, saying that this post encourages eating habits that aren’t healthy and would lead to harm. It should be taken down because it’s not good for us. But then we’re in a position where it is up to the corporation to decide what is good for us, and their record on that isn’t one that tends to inspire confidence.

The problem is that people want the best of both worlds. They want to be able to say all the things they want to say, but anything they find offensive or objectionable they want taken away. Yes, I want to share my objectionable opinion with the hashtag #sorrynotsorry, but if someone else says something that offends me, they ought to be sequestered in Facebook jail. People want media platforms to be this weird hybrid of public and private, and there is an obvious need for laws restricting freedoms even in public spaces, and for free speech even in private spaces, but I don’t think we can Hannah Montana ourselves out of a best-of-both-worlds solution. In the coming months, years, and decades, there will be a struggle over whether these media platforms will take on the public role of allowing speech, even objectionable or offensive speech, to exist, or whether they will choose to shoulder the burden of selecting speech they think is good, true, or beneficial and proudly march into that thicket.