geo wrote: I think there are two issues here. 1) Should social media companies be held responsible for what is posted on their platforms? And 2) do social media companies even have a right to censor the speech of others?

The second question - whether social media companies have the right to censor the speech of others - is a yes. They are allowed to censor the speech of others, and they exercise that right.
Regarding the first question, there's a 1996 law on the books (Section 230) that shields internet services and their users from liability for content created by others. Ironically, Trump has called for this law to be revoked because he feels that he is always being attacked on social media.
Regarding the second question, social media companies do in fact try to monitor content. For example, Twitter regularly flags Trump's fact-challenged tweets, and Facebook has just taken the step of banning President Trump until he leaves office. I would say it is absolutely the company's right and responsibility to take such actions. If violence happens as a result of false information or incendiary words being propagated on its platform, the company would be perceived as partly responsible, and its bottom line might suffer accordingly. And since these are private companies, the First Amendment doesn't really come into play, does it?
What seems especially weird to me is that we have a POTUS who uses social media almost exclusively to communicate with the American people. As such, he is required to adhere to those companies' policies regarding content. This is unprecedented territory, a good example of trying to adapt to new technology where there aren't always easy answers.
The first question is the tricky one. At first glance, their fact-checking efforts and selective deletion of posts appear to be ample due diligence toward reining in misinformation. But that's attacking the tip of the iceberg.
The real danger from social media is the transmission rate of disinformation. Of course, it's far more complex than merely the transmission rate: human emotion and bias, sourcing, tribal culture, etc. are all variables that influence belief. But at the core, the rate at which disinformation spreads is the dominant variable, and it directly capitalizes on our base nature. If our confirmation-bias-seeking engines are fed disinformation at a rate 6x that of fact, even critical thinkers will be overwhelmed.
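To make the "6x" point concrete, here's a back-of-the-envelope sketch. The function and the uniform reshare rate are my own invention, purely illustrative: if each person exposed to a false story reshares it to 6 others while a true story is reshared to only 1, the exposure gap compounds with every generation of sharing.

```python
def total_exposure(reshare_rate, generations, seed=1):
    """Total people reached after `generations` rounds of resharing,
    assuming each newly exposed person reshares to `reshare_rate` others.
    (A toy model: real cascades are uneven and eventually saturate.)"""
    reached, current = seed, seed
    for _ in range(generations):
        current *= reshare_rate     # new people exposed this generation
        reached += current
    return reached

true_reach = total_exposure(reshare_rate=1, generations=10)   # grows linearly
false_reach = total_exposure(reshare_rate=6, generations=10)  # grows exponentially
print(true_reach, false_reach)
```

The exact numbers don't matter; the point is that a per-share advantage turns into an exponential advantage in total reach, which is why downstream corrections can't keep up.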
Social media platforms are finely tuned machines, specializing in "engagement". The stronger the engagement, the longer you stick around, and the more likely the platform is to pass an advertisement across your field of view.
Industry insiders can go into a lot more detail on this point, but it’s basically the heart and soul of social media growth. Engagement is king, and it's the sole focus of the algorithm that decides what to put in the feed next.
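A minimal sketch of what "engagement is the sole focus" means in practice. Everything here is hypothetical (the class, field names, and scoring weights are invented, and real ranking systems use vastly more signals), but the shape is the point: the feed is sorted by predicted engagement, and nothing about accuracy appears anywhere in the objective.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_dwell_seconds: float  # how long the model expects you to look
    predicted_reactions: float      # likes, shares, angry replies...

def engagement_score(post: Post) -> float:
    # The objective function optimizes attention only; truthfulness,
    # harm, and provocation are invisible to it (weights are made up).
    return post.predicted_dwell_seconds + 10.0 * post.predicted_reactions

def build_feed(posts: list[Post]) -> list[Post]:
    # "What to put in the feed next" = highest predicted engagement first.
    return sorted(posts, key=engagement_score, reverse=True)

feed = build_feed([
    Post("measured, factual report", 20.0, 1.0),
    Post("outrage-bait rumor", 30.0, 50.0),
])
print(feed[0].text)
```

Under this objective, the rumor wins the top slot every time: it isn't that the algorithm prefers falsehood, it's that falsehood is free to be engineered for maximum engagement while facts are not.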
Increasing engagement is done by maximizing the appearance of content that has "cognitive attraction". Sounds like a silly buzzword, and maybe it is. But if I were to say that maximizing the "eliciting of emotion" was the best way to increase engagement, I'd be ignoring the hours that people spend watching YouTube videos of their favorite gamer. Cognitive attraction includes the eliciting of emotion, but also includes a habit-forming, almost addictive pull. I watch videos of woodworking for hours.
“We can shape misinformation to be appealing, attention-grabbing and memorable more than what we can do with real information. Notice this does not need to be a conscious process: creators of fake news can deliberately tailor their content to be appealing, but it could also be that, amongst the misinformation websites, the ones that publish non-attractive content are rarely visited and they end up disappearing.” -https://www.nature.com/articles/s41599-019-0224-y
The result of maximizing engagement is that the spread of disinformation outpaces real information by orders of magnitude. It's a near-inevitable consequence, and fact-checking and banning alone won't scratch the surface. It's like swatting at mosquitos in the middle of the forest.
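The mosquito-swatting claim can be put in numbers. This sketch uses entirely made-up figures: posts multiply each hour while moderators can only remove a fixed number per hour, so the fraction of bad content ever caught shrinks toward zero as the horizon grows.

```python
def fraction_removed(initial_posts, growth_per_hour, removals_per_hour, hours):
    """Share of misinformation posts caught when removal capacity is flat
    but the post count grows multiplicatively (illustrative toy model)."""
    posts, removed = float(initial_posts), 0.0
    for _ in range(hours):
        posts *= growth_per_hour                 # new misinformation spreads
        caught = min(removals_per_hour, posts)   # moderators' fixed capacity
        removed += caught
        posts -= caught
    return removed / (removed + posts)

# Hypothetical numbers: 1000 seed posts, 50% hourly growth, 200 removals/hour.
print(fraction_removed(1000, 1.5, 200, 6))
print(fraction_removed(1000, 1.5, 200, 24))
```

Run it and the caught fraction collapses as the window lengthens: linear moderation against exponential spread is a losing race, which is the whole argument for changing the algorithm rather than hiring more swatters.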
So why should social media companies be liable for content posted to their sites?
In short, because that’s the only possible way to get the algorithm to change in a way that suppresses the spread of misinformation. Yes, it’s all about the algorithm, buzzword or not. Social media companies will never change how their engagement algorithm works unless they’re punished for the unwanted consequences. And the only way to do that is to hold them accountable for what’s posted to their sites. If there is another way, I would like to hear it.
It's a bad solution, but I don't see a solution that's any better. I think we need to revoke Section 230, as McConnell proposed.