When the Taliban took Kabul earlier this month, gaining control of Afghanistan, accounts on Twitter, Facebook, and YouTube began to herald the takeover. Pictures posted by Taliban and pro-Taliban accounts showed signs of “safety.” There were videos of Taliban forces patrolling cities and stopping “looting,” and messages from the group’s leaders.
In response to the flood of pro-Taliban content, YouTube and Facebook barred the Taliban from their platforms, citing US sanctions policies. Cloudflare, an internet infrastructure company, appeared to drop Taliban sites. Twitter said it planned to remove individual pieces of Taliban content that advocated violence, according to Vox’s Recode.
Are these bans actually a good idea?
The logic of the crackdown is basically that the Taliban is bad, and bad things shouldn’t be on social media because they encourage more of the same. The argument has been pushed by certain hawkish extremism researchers and some journalists. As one of its proponents, the nonprofit Counter Extremism Project, put it in a statement to reporters, the Taliban represents the “worst of the worst in [online] terrorist material.”
“Giving the Taliban a platform and allowing it to remain online in any capacity poses significant risks to public safety and security,” its executive director, David Ibsen, said. But experts are skeptical that this knee-jerk reaction, probably well-intentioned, will actually help.
“I question what good banning the Taliban will do for Afghans,” Emerson T. Brooking, a fellow at the Atlantic Council’s DFRLab who specializes in disinformation and extremism, told me.
When Facebook-owned WhatsApp took down Taliban content last Tuesday, the Financial Times reported harmful ripple effects: the purge included a complaints helpline that Afghans had used to report looting.
“If the Taliban all of a sudden can’t use WhatsApp, you’re just isolating Afghans, making it harder for them to communicate in an already panicky situation,” Ashley Jackson, a former Red Cross and Oxfam aid worker in Afghanistan and the author of a book on the Taliban’s relationship with Afghan civilians, told the Financial Times in reference to WhatsApp’s decision. “Preventing communication between people and the Taliban doesn’t help Afghans, it is just grandstanding.”
In addition to cutting off communication avenues for Afghans, Brooking noted, pushing the Taliban further into digital isolation during a “delicate moment in which they seek international legitimacy” could simply push the problem out of view. Allowing them to stay on social media, he argued, keeps them within the global gaze and therefore subject to the pressure of the world’s judgment. “If they’re pushed offline, they might be inclined to do the most repressive things that we fear will happen,” Brooking said.
Even trickier is that the Taliban is also the governing body of Afghanistan. It has almost certainly violated the policies of most social media platforms and is guilty of obscene atrocities, but so are many nation-states. China runs concentration camps where Uyghur Muslims are brutalized. Saudi Arabia has disappeared countless dissidents and killed thousands of civilians with airstrikes in Yemen. Even the United States military, by its own estimate, killed 700 civilians in 2019 alone. You don’t have to draw a false equivalence to see the problem: any metric of brutality and violence that could justify banning the Taliban would, evenly applied, force tech platforms to at least consider banning these governments (and several others) if they were sincerely interested in equal application of their rules. (It becomes even thornier when a president, say in the US, is the one calling for violence.)
“The truth is that the big five social media companies are effectively the global FCC right now,” Eliza Campbell, director of the Middle East Institute’s Cyber Program, told me, referring to the federal agency partly tasked with setting standards for what is permissible on broadcast TV and radio. The companies have “all of this power to decide what is and what isn’t a government, what is and isn’t a legit government, and even who gets to be called a democracy,” Campbell said.
This isn’t the first time such questions have come up. Facebook has amassed so much power that its moderation policies often force it to make quasi-governmental decisions. Hezbollah and Hamas in particular have posed challenging questions for the companies.
Brooking pointed to Hamas specifically as a “parallel” case to the Taliban: Both have committed atrocities, but both are also governing bodies representing people in the territories they control. In 2017, Facebook took aggressive action against Hamas that, in Brooking’s view, “directly affected the freedom of Palestinian expression.”
There is a tendency to assume that the problems social media companies face can be solved with social media policies. But the question of the Taliban, like many others, is beyond the companies. What do you do when the one calling for violence is the government?
Jillian C. York, director of international freedom of expression at the Electronic Frontier Foundation, wrestles with this in her new book, Silicon Values. “Terrorism,” she writes, “was once viewed as the product of grievances, a tool, and one that could be used both by opposition forces and established regimes. But within a short period, that definition shifted within the US government to one that cast terrorism as purely an activity of sub-state actors.” In other words, to borrow from Max Weber, the state reinforces its monopoly on violence. Violence carried out by certain governments, no matter how gruesome and unjust, is somehow exempt from being called terrorism or treated like the violence of outside groups. The Taliban burst this bubble: they were the non-state actors who now govern the state.
To sidestep this conundrum, Facebook and YouTube have tried to use the law as justification for kicking the Taliban off their platforms. But even that is shaky: The Taliban is not on the US foreign terrorist organization list. Instead, Facebook and YouTube say that sanctions law compels them to ban the group. Yet Twitter isn’t instituting a blanket ban. Add in that Facebook has lied before, and it’s a bit hard to believe the companies are compelled to do anything.
The companies end up in hot water either way. If a platform complies with a government’s demands, it becomes an extension of state power. If it doesn’t, it is an anti-democratic institution accountable to no one. In practice, platforms straddle the line between these two outcomes.
“Facebook behaves as an extension of the US at times and against its interest at other times, depending on where the pressure and money come from,” York told me. In this specific case, “because of [Office of Foreign Assets Control] regulations,” which sanction the Taliban, “it’s actually much more of a direct extension of [state power].” Essentially, York said, in adhering to US rules, social media companies extend the reach of the United States far beyond its own borders.
Often, few notice this subtle control of information. The government, like most people with the power to pressure a social media platform, does not want ISIS posting. But other moments reveal a kind of techno-imperialism, in which the geopolitical goals of the US, which can be at odds with the safety of local populations, are carried out through platform enforcement. If only non-state actors are banned, while states that carry out gruesome and violent campaigns are not, it lends credence to the idea that state-sponsored violence is somehow more just. The sheer power and size of companies like Facebook and YouTube, and even the much smaller Twitter, create a situation in which there are seemingly no good answers within the status quo.
This was already evident in 2018. At the time, social media companies were facing public criticism for letting violent extremists take advantage of their platforms, and they invited reporters like me to their offices to tout their work to address the problem. At an on-background press briefing in a plush DC office, executives explained their commitment to keeping “violent and dangerous” groups off their platforms.
As a proxy, I asked an executive of a major technology company whether they would have banned the Black Panthers. The Panthers, after all, were a national security threat according to the government. The FBI director at the time, J. Edgar Hoover, had said that “the Black Panther party, without question, represents the greatest threat to the internal security of the country” among domestic “black extremist groups.” They were occasionally violent. They killed a suspected informant. They got into a provoked shootout with police officers (though this wasn’t exceptional at the time). Yes, the Black Panthers provided meals to children and were a force of political organizing in the service of the equality Black Americans were supposed to have won through civil rights legislation. But if the US government had said the Black Panthers were a threat to national security, and social media platforms had existed at the time, wouldn’t the Panthers be banned under your current policies? I asked.
I watched the gears turn in the executive’s head as they tried not to look like they were bullshitting me. They dodged the question, reiterated the policy of banning people who advocate violence, and promised to work with governments to determine which terror groups should be banned.
I asked the question again.
An executive continued to dance around it.
Part of the reason the executive didn’t elaborate is that they probably couldn’t. The rules that Facebook, Twitter, YouTube, and other social media companies use are often amorphous and contingent. What gets a person or group banned in one instance doesn’t in another. YouTube defended keeping up a video of the journalist James Foley being beheaded by ISIS as “newsworthy,” until it received substantial pushback and reversed course. The families of Syrian victims of ISIS likely wouldn’t have the same ability to mount a mainstream-media campaign to get similar content removed.
Each time something bad happens in the world, people quickly turn to these companies to ask whether and how the bad thing can be excised from their platforms. Sometimes that makes sense; in other moments, what’s to be done is less clear. The Taliban’s takeover of Afghanistan, and the subsequent calls to ban them from social media, is probably more the latter than the former.
“The right course of action to fix this,” Campbell told me, “would be to go back in time and bake more social responsibility into companies as they were being built.”
"social" - Google News
August 31, 2021 at 05:00PM
https://ift.tt/3t60lUX
Does Banning the Taliban From Social Media Actually Help Afghans? – Mother Jones - Mother Jones
"social" - Google News
https://ift.tt/38fmaXp
https://ift.tt/2WhuDnP
Bagikan Berita Ini
0 Response to "Does Banning the Taliban From Social Media Actually Help Afghans? – Mother Jones - Mother Jones"
Post a Comment