In the early 1980s, I was Senior Counsel to the House of Representatives committee overseeing the country's media regulation and the FCC. The issue we grappled with most then was how few citizens could access electronic media to provide their views.
At the time, the electronic media landscape was dominated by broadcast networks and their affiliated stations, which were granted licenses to scarce spectrum to the exclusion of everyone else. Cable television was just coming on the scene, and we enacted a federal statute to encourage a diversity of electronic voices by making it more likely that cable program channels, and cable news channels in particular, would have the regulatory and financial conditions to develop.
Now that the internet has come of age, and with it the now-giant social media platforms, the issue today is not how few people have access to deliver their views through electronic media, but how to deal with irresponsible and objectionable speech.
This issue came into the spotlight more than ever last week, when President Trump lashed out with an executive order intended to punish Twitter for tagging two of his tweets with a "possibly false" content warning.
A debate ensued between the CEOs of the two most prominent social networks.
Twitter CEO Jack Dorsey took the position that all speakers must be held accountable to the company's guidelines. Thus, Twitter flagged a pair of Trump's tweets in which he made baseless claims that mail-in ballots enable large-scale fraud, and placed a more restrictive content warning on another of his tweets on the grounds that it "glorified violence."
Facebook CEO Mark Zuckerberg responded by arguing that his giant social media platform should not become the "arbiter of truth," as that would concentrate too much power in the hands of the large social media network, and entangle it in partisan political debates. Facebook allowed Trump's posts to remain online with no warning or censorship.
The debate is complicated by a legal statute, Section 230 of the Communications Decency Act of 1996, which exempts internet platforms from liability for third-party speech – speech that they did not generate directly. Therefore, no matter how false or defamatory a post on their platforms may be, the likes of Twitter and Facebook cannot be held legally liable. (There are exceptions for some specific types of content, like child pornography.)
Trump's executive order was intended to tie the Section 230 liability immunity to "good faith" content moderation or policing practices. However, it's extremely dubious that the president's executive order could legally overturn the Section 230 statutory protection. An earlier proposal from Republican Senator Josh Hawley of Missouri would require the Federal Trade Commission to make social media platforms prove that they are politically neutral in order to get Section 230 immunity — an even more constitutionally dubious proposition.
If Section 230 were to be repealed it would give social media platforms an enormous incentive to be overly restrictive toward controversial speech on their platforms, for fear that they would incur significant monetary damages.
With all this in mind, here is a modest proposal to provide a mechanism to redress false, wrongful or defamatory speech, without centering any censorship power within social media company management, and without repealing Section 230 and ushering in the much more restrictive speech crackdowns that would inevitably ensue.
The premise is that all views can be "aired," but all views are subject to AIRing: "Arbitration Independent Review."
Creating a special fast-track arbitration system for social media speech could provide recourse without Mark Zuckerberg becoming the arbiter he does not want to be, and without letting Jack Dorsey become the arbiter he seems comfortable being. Moreover, it avoids the Trumpian constitutional pitfalls of the government becoming the arbiter of appropriate speech.
Here's how the Arbitration Independent Review (AIR) process would work.
- Social media platforms of a certain size, in order to receive the liability protection of Section 230, would have to create robust and diverse lists of arbitrators. These arbitrators would have to represent the full spectrum of political, demographic, and occupational backgrounds. The qualifications or training necessary to serve as an arbitrator of this sort would have to be developed.
- A process would need to be established where those willing to serve as arbitrators would be paid for this important work.
- The arbitrators would be paid through a tax or fee to be imposed on the revenues of large social media firms in order to fund this process.
- The speaker and the disputing party would each choose an arbitrator, and just like in any other arbitration proceeding, those two arbitrators would pick a third independent arbitrator.
- All arbitrators, as a condition of serving for any AIR proceeding, would have to agree to select a third arbitrator within 24 hours or both would lose their status as arbitrators going forward. This is critical because a speedy determination is required for any independent response to be meaningful.
- All arbitration decisions would have to be rendered within three days' time with a written decision, no matter how short, indicating whether content needs to be flagged or removed.
- During the three-day period, the tweet or post in question would be labeled as "in the process of AIR-ing" and any sharing of the material would require showing that warning.
- Since an objectionable tweet by someone as controversial as President Trump would have many complaining parties coming from many different directions, each post or tweet would be subject to only a single AIR proceeding.
- With so many complaining parties, selection of an arbitrator from the complaint side would get complicated. One way to resolve that is for the social media company to choose a handful of individuals or groups representing the complaining side; request that they each select an arbitrator; and then randomly choose one of the proposed arbitrators as the selected arbitrator for that side.
- Since it would be impossible to arbitrate every complaint about any and all speech on platforms with billions of participants, only tweets or posts from accounts above a very high follower threshold, or read by a very high number of users, would qualify to be challenged on this basis. In other words, this would not create recourse for everything spoken on a social media platform, but it would provide recourse for social media speech with high enough awareness to become newsworthy.
- To avoid the issue of partisans challenging a speaker like President Trump on everything he tweets, each social media company would need to appoint an AIR Filter Board, consisting of a broad representative group of, say, nine participants, which would immediately review and filter any complaint submitted. At least three of these board participants would need to agree the complaint was worthy of submitting to AIR review for violating the social media company's standards related to hateful, violence-encouraging, defamatory, or false and misleading speech.
- If a speaker ended up with three adverse rulings within a 12-month period they would lose their account speaking privileges on that platform for 12 months.
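To make the mechanics concrete, the eligibility and enforcement rules above can be sketched in code. This is an illustrative sketch only: the follower threshold, board size, and all names here are assumptions for demonstration, since the op-ed deliberately leaves the exact numbers open.

```python
from dataclasses import dataclass

# Illustrative parameters; the proposal specifies only the board votes
# needed (three of nine) and the three-strikes rule, not the reach threshold.
FOLLOWER_THRESHOLD = 1_000_000   # assumed "very high threshold of followers"
BOARD_VOTES_NEEDED = 3           # Filter Board votes to forward a complaint
STRIKES_FOR_SUSPENSION = 3       # adverse rulings within 12 months

@dataclass
class Account:
    handle: str
    followers: int
    adverse_rulings_past_year: int = 0

def complaint_is_eligible(account: Account, board_votes: int) -> bool:
    """A complaint proceeds to AIR review only if the speaker's reach
    crosses the threshold AND at least three Filter Board members agree."""
    return (account.followers >= FOLLOWER_THRESHOLD
            and board_votes >= BOARD_VOTES_NEEDED)

def record_ruling(account: Account, adverse: bool) -> bool:
    """Apply an arbitration outcome; return True if the account should be
    suspended (three adverse rulings within a 12-month period)."""
    if adverse:
        account.adverse_rulings_past_year += 1
    return account.adverse_rulings_past_year >= STRIKES_FOR_SUSPENSION
```

For example, a complaint against a two-million-follower account with three board votes would proceed, while one with only two votes, or against a small account, would be filtered out before any arbitrators are selected.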
Why both Zuckerberg and Dorsey are wrong
The Zuckerberg approach, which represents a fairly absolutist view, is that the social media platforms should not define objectionable speech unless it is way beyond the bounds of anything that should be tolerated according to their standards. The Facebook CEO would argue that while there need to be some guidelines against hate speech and other easily identifiable forms of expression which should not be protected, any political discourse coming from any politician should not be subject to any form of policing by the platform itself.
Zuckerberg's view implies that the more controversial a viewpoint is, the more likely it will be met with many countering expressions pointing out what is wrong or false with the statements made, allowing onlookers to deduce the truth by weighing the competing arguments.
But Zuckerberg faced rebellion from employees who believed his approach was a fundamental abdication of Facebook's responsibility to moderate in "good faith" what can remain on its platform. Zuckerberg said on Friday he is rethinking his position.
The Dorsey view is that all speakers, even the President of the United States, must be held somewhat accountable. While speech should only be censored under extreme circumstances, tagging offending speech with some kind of content warning that indicates it is of questionable accuracy is indeed an important role for any social media company to play.
(Even the Twitter approach is insufficient and inconsistent when it comes to flagrantly inappropriate speech. This was shown very clearly when President Trump, in a tweet, resurfaced a completely debunked conspiracy theory that MSNBC host Joe Scarborough was somehow responsible for the murder of a staffer that happened some twenty years ago. The victim's widower pleaded with Twitter to take down the tweet as completely unfounded, and Twitter took no action.)
The situation is exacerbated by the social media platforms' advertising revenue models, because controversial and edgy speech helps draw people and increase engagement.
While Dorsey is correct that we need a way to make sure objectionable speech is identified, scrutinized and countered regardless of the speaker, Zuckerberg is right that these major social media platforms themselves will only concentrate their power further if they are the ultimate "arbiters of truth." To the extent we elevate any single person or company to that role, we will be returning to an even more restrictive period than the 1980s, when a handful of broadcasters controlled all electronic media.
Therefore, neither of these positions provides an answer on how to handle the current glaring situation, as we have entered a post-truth era where many speakers believe they are entitled to their own facts, and partisan divides are based on maintaining or creating one's own facts relative to any opposing partisan view.
The precedent for arbitration
Arbitration provides some basis for challenging disgraceful comments like the one Trump made about Scarborough without eviscerating Section 230's liability shield.
The arbitration approach also has some clear analogues to how regulation of scarce broadcast spectrum developed. When broadcasters were granted exclusive licenses to one of the few forms of electronic communications at the time, they were required to operate "in the public interest," which included following certain fairness obligations and airing local and public affairs programming.
As to a tax being imposed on social media platforms, this too can be analogized to when spectrum fees, or auction-based spectrum sales, were attached to the privilege of having a unique spectrum license. Given that Section 230 confers huge economic benefit on the social media platforms, there is certainly sufficient precedent for requiring social media platforms to pay a tax or fee to fund an arbitration scheme to further the public interest.
Some may argue arbitration does not always produce the best possible decision or outcome. However, arbitration has provided a widely used mechanism for resolving knotty disputes, and binding arbitration decisions are almost never reviewable by the courts, precisely to avoid the burden such review would place on the judicial branch.
By establishing a fast-tracked AIR-ing of contentious social media disputes, with arbitrators empowered to require content warnings, deletions, or account terminations, we would at least have a truly independent mechanism for policing the wild west of social media, while also putting grossly irresponsible speakers on notice that consequences can attach to their speech. Arbitration Independent Review is a far better path than what Trump, Dorsey or Zuckerberg proposes, and yet provides a solution for fully airing objectionable speech that embraces the policies all three have professed they would like to see.
Tom Rogers is the former Senior Counsel to the US House of Representatives Telecommunications Subcommittee, the first President of NBC Cable, the former CEO of TiVo, and is currently Executive Chairman of Engine Media and Chairman of Captify, both digital media companies. He is also a CNBC Contributor.
Op-ed: How to regulate social media when there is no good answer - CNBC
June 08, 2020