
How Social Media's Obsession with Scale Supercharged Disinformation - Harvard Business Review


The attack on the U.S. Capitol building was the culmination of years of disinformation and conspiracy theories that had been weaponized on social media networks. Could that weaponization have been prevented? Perhaps. The dominant business model of these platforms, which emphasized scale over other considerations, made them particularly vulnerable both to disinformation networks and to the backlash those networks provoked: the loss of infrastructure support, as in the case of Parler, and the threat of regulatory crackdown, as in the case of Facebook and Twitter. While the scale-centric business model paid off in the short to medium term, its overlooked risks have brought these platforms to the reckoning they face today.

Over the last four years, disinformation has become a global watchword. After Russian meddling on social networks during the 2016 U.S. presidential election, experts expressed concerns that social media would continue to be weaponized — warnings that were often dismissed as hyperbolic.

But the January 6 siege on the U.S. Capitol building illustrates just how powerful a networked conspiracy can be when it’s amplified through social media. The attack was the culmination of years of disinformation from President Trump, which ramped up after Biden was declared the president-elect — and largely the product of social media companies’ inability to control the weaponization of their products.

Over the years, we’ve witnessed different approaches to weaponization take shape. While Russian meddling illustrated the potential for well-placed disinformation to spread across social media, the 2017 “Unite the Right” event in Charlottesville, Va., showed how a group of white supremacists could use social media to plan a violent rally. The Capitol siege had elements of both: it involved a wider ideological spectrum than Charlottesville, and participants had not simply coordinated over social media but had been brought together through it. The insurrectionists were united by their support for Donald Trump and their false belief that the election had been stolen from him. At the apex of the moment, Trump used social media to message the rabid crowd in real time from his mobile phone, at a safe remove.

This has raised fundamental questions about the future of the platforms where this all played out. Mainstream platforms like Facebook and Twitter are being forced to reckon with their moderation policies and are facing calls for regulation. And the conservative social media network Parler, which prides itself on its minimalist approach to content moderation, has lost all infrastructure support from Apple, Google, and Amazon Web Services over posts inciting violence, including planning and coordination around the Capitol attack. Without buy-in across infrastructure services, it can be difficult for apps and websites to stay online.

But in order to know what comes next, we need to ask: How did social media become a disinformation machine? And how do the business models of these tech companies explain how that happened?

Everything open will be exploited.

For more than a decade, the business model for today’s social media giants (Facebook, YouTube, and Twitter) has been to pursue scale. Great ideas, such as the video-sharing platform Vine, were left behind in this pursuit, while shareholder KPIs were pegged to expanding the user base. This approach has a significant weakness: When a platform’s growth depends on openness, it’s more vulnerable to malicious use. As we can now see, this open business model can leave companies exposed in ways that they are now being forced to reckon with.

There have been a few critical phases that led to this moment. Each, in its own way, illustrated how the open, scale-centric business model of social media platforms could be exploited.

Relatively early on, the focus on growth created the conditions for a shadow industry of fake followers and artificial engagement. According to insiders, this was well known, but social media companies avoided discussing the abuse of their products. Billions of advertising dollars were lost to fake impressions and clicks as more and more bad actors leveraged openness as a financial opportunity.

When online marketing was turned into a political tool, however, the field of bad actors expanded greatly, as did the possible damage they could do. The connection between social media and political events such as Brexit and Trump’s win became clear after Carole Cadwalladr broke the Cambridge Analytica scandal. The incident provided a case study in how data harvested from social media could be repurposed to target specific audiences with content that inflamed political tensions, fractured coalitions, planted junk news, and generally made chaos and confusion reign.

That development coincided with a similar assault on the sensibilities of social media users — the creation of the military fan fiction known as “QAnon” in 2017. Rising from the ashes of the Pizzagate conspiracy, which claimed Hillary Clinton was part of a child-exploitation network in D.C., a mysterious account named “Q” began posting cryptic missives on a message board known for memes, anime pornography, and white supremacist organizing. While wide-ranging, the core narrative of QAnon was that Trump was secretly engaged in a war with the “deep state” to arrest Clinton and stop a Democrat-run cabal of Satan-worshiping pedophiles engaged in large-scale human trafficking. For years, QAnon followers were told to “trust the plan.” (Yes, I know it sounds crazy, but the narrative pegged itself to the news cycle, and every twist and turn in the media that seemed to prevent Trump from carrying out his agenda provided additional fodder.)

With QAnon, the fringe moved to the mainstream, with Q discussion threads popping up on Facebook, Reddit, and Twitter. The platforms’ growth model meant that content and groups producing high engagement were rewarded with higher priority in recommendations. In other words, QAnon communities delivered exactly the kind of content that social networks prize, and they benefited accordingly. A few specific events, like the arrest of Jeffrey Epstein and the Las Vegas mass shooting, generated bursts of new interest in Q’s posts and in analysis of them. Q networks also incorporated the emergence of Covid-19, launching a hoax claiming the pandemic was a Democratic plot against Trump and organizing several protests to that end.
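To make that mechanism concrete, here is a deliberately simplified sketch of engagement-based ranking. It is purely illustrative: the names, weights, and scoring rule are invented for this example and do not describe any platform’s actual recommendation system. The structural point is that when the ranking objective counts only reactions, nothing in it measures accuracy or safety, so the most provocative post rises to the top.

```python
# Hypothetical illustration of engagement-based ranking; all names and
# weights here are invented for this sketch, not taken from any real platform.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Weight shares most heavily, since reshares drive further distribution.
    # The 1/2/3 weights are arbitrary illustrative values.
    return post.likes + 2 * post.comments + 3 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # The objective contains no term for accuracy or safety, so a
    # conspiratorial post that provokes reactions outranks a sober one.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("City council passes budget", likes=40, comments=5, shares=2),
    Post("Shocking 'deep state' revelation", likes=90, comments=60, shares=45),
])
for post in feed:
    print(post.title, "->", engagement_score(post))
```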

Belatedly, some tech companies responded. Facebook and Twitter took some action to remove Q networks from their products in the summer of 2020. Reddit did not face the same problem, because it acted early to remove Q forums, and the conspiracy theory never gained a strong foothold on the platform. But by the time Twitter and Facebook took action, Q communities had already planned for deplatforming, creating redundant networks on smaller platforms like Gab and Parler.

With the election of Joe Biden in November, the effects of these trends became clear. The outcome of the election was jarring to those who were saturated by these conspiracy theories. The feeling of being alienated politically, while also isolated during a pandemic, had fired up many Q followers to the point where Trump only needed to light the match on social media to spread election conspiracies like digital wildfire.

In every instance leading up to January 6, the moral duty was to reduce scale and to pay more attention to the quality of viral content. We have seen the cost of failing to do so.

Where we go from here.

In his book Antisocial Media, Siva Vaidhyanathan writes, “If a global advertising company leverages its vast array of dossiers on its two billion users to limit competition and invite antidemocratic forces to infest its channels with disinformation, democratic states should move to break it up and to limit what companies can learn and use about citizens.” In the wake of the attack on the Capitol, we’re seeing a growing interest in doing just that.

As we consider next steps as a society, we should keep in mind that emphasizing scale involves a trade-off with safety. Furthermore, failing to act on disinformation and viral conspiracy theories doesn’t mean they will eventually just go away; in fact, the opposite is true. Because social media moves the fringe to the mainstream by connecting people with similar interests, from the mundane to the utterly bizarre, tech companies must come up with a plan for content curation and community moderation that reflects a more human scale.

Tech companies, including start-ups wary of overreach, and VCs should begin to draw up model policies for regulators to consider, bearing in mind that openness and scale pose significant risks not only to profits, but to democracies.
