Social Media Giants Support Racial Justice. Their Products Undermine It. - The New York Times

Several weeks ago, as protests erupted across the nation in response to the police killing of George Floyd, Mark Zuckerberg wrote a long and heartfelt post on his Facebook page, denouncing racial bias and proclaiming that “black lives matter.” Mr. Zuckerberg, Facebook’s chief executive, also announced that the company would donate $10 million to racial justice organizations.

A similar show of support unfolded at Twitter, where the company changed its official Twitter bio to a Black Lives Matter tribute, and Jack Dorsey, the chief executive, pledged $3 million to an anti-racism organization started by Colin Kaepernick, the former N.F.L. quarterback.

YouTube joined the protests, too. Susan Wojcicki, its chief executive, wrote in a blog post that “we believe Black lives matter and we all need to do more to dismantle systemic racism.” YouTube also announced it would start a $100 million fund for black creators.

Pretty good for a bunch of supposedly heartless tech executives, right?

Well, sort of. The problem is that, while these shows of support were well intentioned, they didn’t address the way that these companies’ own products — Facebook, Twitter and YouTube — have been successfully weaponized by racists and partisan provocateurs, and are being used to undermine Black Lives Matter and other social justice movements. It’s as if the heads of McDonald’s, Burger King and Taco Bell all got together to fight obesity by donating to a vegan food co-op, rather than by lowering their calorie counts.

It’s hard to remember sometimes, but social media once functioned as a tool for the oppressed and marginalized. In Tahrir Square in Cairo, Ferguson, Mo., and Baltimore, activists used Twitter and Facebook to organize demonstrations and get their messages out.

But in recent years, a right-wing reactionary movement has turned the tide. Now, some of the loudest and most established voices on these platforms belong to conservative commentators and paid provocateurs whose aim is mocking and subverting social justice movements, rather than supporting them.

The result is a distorted view of the world that is at odds with actual public sentiment. A majority of Americans support Black Lives Matter, but you wouldn’t necessarily know it by scrolling through your social media feeds.


On Facebook, for example, the most popular post on the day of Mr. Zuckerberg’s Black Lives Matter pronouncement was an 18-minute video posted by the right-wing activist Candace Owens. In the video, Ms. Owens, who is black, railed against the protests, calling the idea of racially biased policing a “fake narrative” and deriding Mr. Floyd as a “horrible human being.” Her monologue, which was shared by right-wing media outlets — and which several people told me they had seen because Facebook’s algorithm recommended it to them — racked up nearly 100 million views.

Ms. Owens is a serial offender, known for spreading misinformation and stirring up partisan rancor. (Her Twitter account was suspended this year after she encouraged her followers to violate stay-at-home orders, and Facebook has applied fact-checking labels to several of her posts.) But she can still insult the victims of police killings with impunity to her nearly four million followers on Facebook. So can other high-profile conservative commentators like Terrence K. Williams, Ben Shapiro and the Hodgetwins, all of whom have had anti-Black Lives Matter posts go viral over the past several weeks.

In all, seven of the 10 most-shared Facebook posts containing the phrase “Black Lives Matter” over the past month were critical of the movement, according to data from CrowdTangle, a Facebook-owned data platform. (The sentiment on Instagram, which Facebook owns, has been more favorable, perhaps because its users skew younger and more liberal.)

Facebook declined to comment. On Thursday, it announced it would spend $200 million to support black-owned businesses and organizations, and add a “Lift Black Voices” section to its app to highlight stories from black people and share educational resources.

Twitter has been a supporter of Black Lives Matter for years — remember Mr. Dorsey’s trip to Ferguson? — but it, too, has a problem with racists and bigots using its platform to stir up unrest. Last month, the company discovered that a Twitter account claiming to represent a national antifa group was run by a group of white nationalists posing as left-wing radicals. (The account was suspended, but not before its tweets calling for violence were widely shared.) Twitter’s trending topics sidebar, which is often gamed by trolls looking to hijack online conversations, has filled up with inflammatory hashtags like #whitelivesmatter and #whiteoutwednesday, often as a result of coordinated campaigns by far-right extremists.

A Twitter spokesman, Brandon Borrman, said: “We’ve taken down hundreds of groups under our violent extremist group policy and continue to enforce our policies against hateful conduct every day across the world. From #BlackLivesMatter to #MeToo and #BringBackOurGirls, our company is motivated by the power of social movements to usher in meaningful societal change.”

YouTube, too, has struggled to square its corporate values with the way its products actually operate. The company has made strides in recent years to remove conspiracy theories and misinformation from its search results and recommendations, but it has yet to grapple fully with the way its boundary-pushing culture and laissez-faire policies contributed to racial division for years.


As of this week, for example, the most-viewed YouTube video about Black Lives Matter wasn’t footage of a protest or a police killing, but a four-year-old “social experiment” by the viral prankster and former Republican congressional candidate Joey Saladino, which has 14 million views. In the video, Mr. Saladino — whose other YouTube stunts have included drinking his own urine and wearing a Nazi costume to a Trump rally — holds up an “All Lives Matter” sign in a predominantly black neighborhood.

A YouTube spokeswoman, Andrea Faville, said that Mr. Saladino’s video had received fewer than 5 percent of its views this year, and that it was not being widely recommended by the company’s algorithms. Mr. Saladino recently reposted the video to Facebook, where it has gotten several million more views.

In some ways, social media has helped Black Lives Matter simply by making it possible for victims of police violence to be heard. Without Facebook, Twitter and YouTube, we might never have seen the video of Mr. Floyd’s killing, or known the names of Breonna Taylor, Ahmaud Arbery or other victims of police brutality. Many of the protests being held around the country are being organized in Facebook groups and Twitter threads, and social media has been helpful in creating more accountability for the police.

But these platforms aren’t just megaphones. They’re also global, real-time contests for attention, and many of the experienced players have gotten good at provoking controversy by adopting exaggerated views. They understand that if the whole world is condemning Mr. Floyd’s killing, a post saying he deserved it will stand out. If the data suggests that black people are disproportionately targeted by police violence, they know that there’s likely a market for a video saying that white people are the real victims.

The point isn’t that platforms should bar people like Mr. Saladino and Ms. Owens for criticizing Black Lives Matter. But in this moment of racial reckoning, these executives owe it to their employees, their users and society at large to examine the structural forces that empower racists on the internet, and to ask which features of their platforms are undermining the social justice movements they claim to support.

They don’t seem eager to do so. Recently, The Wall Street Journal reported that an internal Facebook study in 2016 found that 64 percent of the people who joined extremist groups on the platform did so because Facebook’s recommendations algorithms steered them there. Facebook could have responded to those findings by shutting off groups recommendations entirely, or pausing them until it could be certain the problem had been fixed. Instead, it buried the study and kept going.

As a result, Facebook groups continue to be useful for violent extremists. This week, two members of the far-right “boogaloo” movement, which wants to destabilize society and provoke a civil war, were charged in connection with the killing of a federal officer at a protest in Oakland, Calif. According to investigators, the suspects met and discussed their plans in a Facebook group. And although Facebook has said it would exclude boogaloo groups from recommendations, they’re still appearing in plenty of people’s feeds.


Rashad Robinson, the president of Color of Change, a civil rights group that advises tech companies on racial justice issues, told me in an interview this week that tech leaders needed to apply anti-racist principles to their own product designs, rather than simply expressing their support for Black Lives Matter.

“What I see, particularly from Facebook and Mark Zuckerberg, it’s kind of like ‘thoughts and prayers’ after something tragic happens with guns,” Mr. Robinson said. “It’s a lot of sympathy without having to do anything structural about it.”

There is plenty more Mr. Zuckerberg, Mr. Dorsey and Ms. Wojcicki could do. They could build teams of civil rights experts and empower them to root out racism on their platforms, including more subtle forms of racism that don’t involve using racial slurs or organized hate groups. They could dismantle the recommendations systems that give provocateurs and cranks free attention, or make changes to the way their platforms rank information. (Ranking it by how engaging it is, the way some platforms still do, tends to amplify misinformation and outrage-bait.) They could institute a “viral ceiling” on posts about sensitive topics, to make it harder for trolls to hijack the conversation.

I’m optimistic that some of these tech leaders will eventually be convinced — either by their employees of color or their own conscience — that truly supporting racial justice means that they need to build anti-racist products and services, and do the hard work of making sure their platforms are amplifying the right voices. But I’m worried that they will stop short of making real, structural changes, out of fear of being accused of partisan bias.

So is Mr. Robinson, the civil rights organizer. A few weeks ago, he chatted with Mr. Zuckerberg by phone about Facebook’s policies on race, elections and other topics. Afterward, he said he thought that while Mr. Zuckerberg and other tech leaders generally meant well, he didn’t think they truly understood how harmful their products could be.

“I don’t think they can truly mean ‘Black Lives Matter’ when they have systems that put black people at risk,” he said.
