The consequences of social media’s giant experiment - Brookings Institution

Facebook and Twitter have banned Donald Trump from their platforms. His flight to the right-wing alternative Parler has been effectively blocked by Amazon’s decision to cease hosting it on Amazon Web Services, while both Google and Apple have removed it from their app stores.

The actions of Facebook and Twitter are protected by Section 230 of the Communications Decency Act of 1996. This is the same Section 230 behind which social media companies have sheltered from liability for the dissemination of the hate, lies, and conspiracies that ultimately led to the assault on the U.S. Capitol on January 6.

These actions are better late than never, but the proverbial horse has left the barn. These editorial and business judgments do, however, demonstrate that the companies have ample ability to act conscientiously to protect the responsible use of their platforms.

Subsection (2) of Section 230 provides that a platform shall not be liable for, “Any action voluntarily taken in good faith to restrict access to or availability of material that any provider or user considers to be…excessively violent, harassing, or otherwise objectionable…” In other words, editorial decisions by social media companies are protected, as long as they are undertaken in good faith.

It is Subsection (1) that has insulated social media companies from the responsibility of making such editorial judgments. These 26 words are the heart of the issue: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” That single sentence creates the current conundrum. If you are insulated from the consequences of your actions and make a great deal of money by exploiting that insulation, then what is the incentive to act responsibly?

The companies have shut down at least one liar and blocked an alternative pathway for his rants. Facebook and Twitter (and Google’s YouTube subsidiary) are protected by subsection (2) when they make this editorial decision. Their actions to deal with content on the apps, promotion of the apps, and hosting of the apps have been decisive and warranted. The question is: why not until now?

The social media companies have put us in the middle of a huge and explosive lab experiment in which we see the toxic combination of digital technology, unmoderated content, lies, and hate. We now have the answer to what happens when these features and large profits are blended together in a connected world. The result has been not only corrosive to civil discourse, but also a danger to democratic systems and effective problem-solving.

Dealing with Donald Trump is a targeted problem that the companies have just addressed decisively. The social media companies assert, however, that they have no way to meaningfully police the information flowing on their platforms. It is hard to believe that the brilliant minds that produced the algorithms and artificial intelligence that power those platforms are incapable of producing better outcomes from what they have created. It is not technological incapacity that has kept them from exercising the responsibility we expect of all other media; it is a lack of will, driven by the desire for large-scale profits. The companies’ business model is built around holding a user’s attention so that they may display more paying messages. Delivering what the user wants to see, the more outrageous the better, holds that attention and rings the cash register.

Thus far, our political leaders have expressed concern about the effects of Section 230, but too often that activity has been performance for their base rather than progress toward a solution. As Congress takes up serious consideration of Section 230, here are a few ideas:

Social media companies are media, not technology

Mark Zuckerberg testified to Congress, “I consider us to be a technology company because the primary thing we do is have engineers who write code and build product and services for other people.” That software code, however, makes editorial decisions about which information to route to which people. That is a media decision. Social media companies make money by selling access to their users, just like ABC, CNN, or The New York Times.

There are well-established behavioral standards for media companies

The debate should be over whether and how those standards change because of user-generated content. The absolute absence of liability afforded by Section 230 has kept that debate from occurring.

Technology must be a part of the solution

When the companies hire thousands of human reviewers, it is more PR than protection. Asking humans to inspect the torrent of data constantly flowing through the algorithms is like watching a tsunami through a straw. The amazing power of computers created this situation; the amazing power of computers needs to be part of the solution.

It is time to quit acting in secret

When algorithms decide which incoming content to select and to whom it is sent, the machines are making a protected editorial decision. Unlike traditional media, whose editorial decisions are publicly announced in print or on screen and seen uniformly by everyone, the platforms’ determinations are secret: neither publicly announced nor uniformly available. An algorithmic editorial decision is discoverable only by accident; neither the source of the information nor even the fact of its distribution is visible. Requiring the platforms to provide an open API (application programming interface) to their inflow and outflow, with appropriate privacy protections, would not interfere with editorial decision-making. It would, however, allow third parties to build their own algorithms so that, like other media, the results of the editorial process are seen by all.
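To make the proposal concrete, here is a minimal sketch, in TypeScript, of what such an open, read-only interface could look like. Every name, type, and field in it is invented for illustration; no platform exposes anything like this today, and real privacy protections would require far more care than a comment can convey.

// Hypothetical, illustrative only: a read-only "transparency API" of the
// kind the article proposes. All names below are assumptions, not real
// endpoints offered by any platform.

// A piece of user-generated content entering the platform ("inflow").
interface InflowItem {
  contentId: string;      // identifier of the content item
  postedAt: string;       // ISO 8601 timestamp
  topicLabels: string[];  // platform-assigned classification; author withheld for privacy
}

// A record of the platform's editorial choice ("outflow"): which content
// its algorithm selected and how widely it was distributed.
interface OutflowRecord {
  contentId: string;        // which content the algorithm amplified
  distributedAt: string;    // when it was sent out
  audienceEstimate: number; // aggregate reach, not individual recipients
}

// The open interface third parties could build their own ranking
// algorithms and auditing tools against.
interface TransparencyApi {
  listInflow(sinceIso: string): Promise<InflowItem[]>;
  listOutflow(sinceIso: string): Promise<OutflowRecord[]>;
}

// Example third-party use: compare what entered the platform with what
// its algorithm chose to amplify, making the editorial process visible.
async function auditEditorialChoices(api: TransparencyApi, sinceIso: string): Promise<void> {
  const [inflow, outflow] = await Promise.all([
    api.listInflow(sinceIso),
    api.listOutflow(sinceIso),
  ]);
  const amplified = new Set(outflow.map((record) => record.contentId));
  const notDistributed = inflow.filter((item) => !amplified.has(item.contentId));
  console.log(`${amplified.size} items amplified; ${notDistributed.length} never distributed`);
}

The details matter less than the shape: read-only access to what comes in and what goes out, with identifying information stripped, would let outside parties reconstruct and scrutinize the editorial choices the algorithms are making.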

Expecting social media companies to exercise responsibility over their practices is not a First Amendment issue. It is not government control of, or choice over, the flow of information. It is, rather, the responsible exercise of free speech. Long ago it was determined that falsely shouting “FIRE!” in a crowded theater is not protected free speech. We must now determine what is the equivalent of “FIRE!” in the crowded digital theater.


Apple, Amazon, Facebook, and Google are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions in this piece are solely those of the author and are not influenced by any donation.
