Berkeley Talks transcript: 'Social Dilemma' star on fighting the disinformation machine - UC Berkeley

Listen to Berkeley Talks episode #108: “’Social Dilemma’ star on fighting the disinformation machine.”

Geeta Anand: Thank you so much, Jeremy, and thank you Cal Performances for this invitation to engage in this really great conversation with Tristan. Tristan, it’s such a joy to have you here and to be beginning this conversation on arguably the most pressing issue of our times. The crisis you revealed to millions of people in The Social Dilemma has crushed journalism and its ability to serve its role in our democracy. We as journalists, journalism educators, citizens of this democracy, and human beings, appreciate your role in bringing a deeper understanding of the harms of social media to our attention. And we have so many questions.

The audience has submitted 99 questions even before this event today, and I’ve included some of their questions in the questions I’m going to ask you. And I want to say to our audience, please keep the questions coming. My colleagues here are going to gather them together and share them with me so I can ask them of Tristan at the end of this event. So, without further ado, let’s dive in, Tristan. And what I’d love for you to do is to take us into your Stanford classroom, where you studied the ethics of human persuasion, and your Google workplace, where you worked as a design ethicist, to help us understand your journey in understanding the dangers of social media.

Tristan Harris: Thank you so much, Geeta, it’s really an honor to be here with you in the Graduate School of Journalism and part of this series. Many people don’t know this, but the work that led me here was my first startup, called Apture, which was actually about deepening journalism and storytelling online. I worked on it with someone, I forget his name, who was part of the UC Berkeley Journalism School, and I did that when I was in my very early twenties. So, this is kind of bringing me full circle, and it really is an honor to be with you.

In terms of my background, as it’s talked about in the film, The Social Dilemma, I studied in a lab and was part of a class at Stanford called the Stanford Persuasive Technology Class that was connected to a lab, run by a professor named B.J. Fogg, who was trying to apply all of the lessons that we knew about the psychology of persuasion. Are there levers in the human mind that are universal? I have on my bookshelf books like Robert Cialdini’s Influence, Robert Greene’s The 48 Laws of Power. You could look at cult programming, you could look at marketing textbooks, you could look at hypnosis, you could look at neurolinguistic programming, you could look at pickup artistry. These are all disciplines about essentially manipulation and influence.

And in this class, I was with my friend Mike Krieger, who was a co-founder of Instagram, and many other future alumni who would go on to be within the ranks of the technology companies. And I want to say this very clearly, the idea of this lab and this class was not to grow mustaches and twirl them until we could diabolically rule the world, it was really actually, how could we apply persuasive technology for good? Now, that sounds creepy to some people, because if you know more about someone else’s mind than they know about themselves, isn’t that intrinsically kind of a suspicious, unethical relationship? But we’re living inside of power asymmetries all the time. I think it’s prevalent in our current cultural moment.

And just to give people a taste of the idea of using persuasive technology for good, the first thing the co-founder of Instagram and I worked on together in that class, and this is way before Instagram, by the way, and before iPhones, was a little project we called Send the Sunshine. The idea was that, due to seasonal affective disorder, which some people are experiencing right now if you’ve lived in a foggy or dreary place for too many days in a row, what if there was a persuasive technology that knew two friends, and it knew the zip codes of both of those friends? It could query a server and get the weather for both of those zip codes. And when one of those zip codes had seven days of bad weather, it could text the other friend and say, “Would you take a picture of the sunshine and send it to your friend over there?”
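[Illustration: the trigger logic described above, sketched in a few lines of Python. This is a minimal sketch of the idea, not code anyone actually shipped; fetch_recent_weather and send_text are hypothetical placeholders for a weather service and a text-messaging service.]

# Minimal sketch of the "Send the Sunshine" trigger described above.
# fetch_recent_weather() and send_text() are hypothetical placeholders, not real APIs.

from dataclasses import dataclass

@dataclass
class Friend:
    name: str
    phone: str
    zip_code: str

def fetch_recent_weather(zip_code: str) -> list[str]:
    """Placeholder: return the last seven days of conditions for a zip code."""
    raise NotImplementedError("Plug in a real weather service here.")

def send_text(phone: str, message: str) -> None:
    """Placeholder: send a text message via whatever SMS service is available."""
    raise NotImplementedError("Plug in a real SMS service here.")

def maybe_send_sunshine(a: Friend, b: Friend, bad_days: int = 7) -> None:
    """If one friend has had a week of gloomy weather, nudge the other to send a sunny photo."""
    for gloomy, sunny in ((a, b), (b, a)):
        recent = fetch_recent_weather(gloomy.zip_code)
        if len(recent) >= bad_days and all(day in {"rain", "fog", "overcast"} for day in recent[-bad_days:]):
            send_text(sunny.phone,
                      f"{gloomy.name} has had {bad_days} gray days in a row. "
                      "Would you take a picture of the sunshine and send it to them?")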

So, this is kind of a beautiful idea of persuasive technology, using and coordinating and orchestrating more beautiful, uplifting experiences. And this is one tiny little example. And this is, keep in mind, before the iPhone actually. And that’s using persuasive technology for good. But in the final class, we actually did these long projects on the future of persuasive technology. And I remember distinctly one of the groups basically saying, “What if in the future we have a profile of every mind on the planet and we know exactly what persuasive characteristics their mind responds to? Do they respond to the idea that the government can’t be trusted? Do they respond to the idea that these five of their friends, who they respect the most on these issues, if we used them and said they believe that this thing is true? Do they respond to appeals to authority?

“So, if I say Stanford University, or UC Berkeley, or Harvard said something, and therefore it’s more true. And if you knew those levers in the human mind, and you had that profile, and then you could tune any message to any person, well, this is a pretty scary idea.” And I remember feeling very uncomfortable. And this was in 2006. And I want to be very clear that the professor at Stanford, B.J. Fogg, who really ran this program and this inquiry, was concerned about the ethics of persuasive technology, stretching all the way back to the late 1990s. He presented to the FTC about the dangers and ethics of persuasive technology. But unfortunately, many of the alumni from that class went on to build some of these things.

In fact, Cambridge Analytica is essentially a replica of what I’ve just described. It is a profile of the persuasive characteristics of each mind. And that’s essentially the world we have today. And I think when we get trapped in debates about free speech in social media or things like this, we really are treating speech as if it’s this neutral transaction. You have a listener, you have a speaker, and they’re just talking to each other, as opposed to looking at degrees of asymmetry of power. If on one side, you have a supercomputer that has an avatar, a voodoo doll-like model of each person on earth, and can literally simulate three billion variations of a message, an advertisement, a photo, rearranging the keywords, and then knowing whether the person has high openness, high conscientiousness, whether they’ve already joined conspiracy theory groups, based on that, there’s a huge asymmetry of power, and that’s not free speech, that’s free manipulation.

And so, I think we need to really think about, and that’s why I appreciate you bringing it up at the beginning, using the lens of persuasive technology and persuasion as one of the distinctive features of where we find ourselves with social media today and all its harms.

Geeta Anand: Tell me about arriving at Google and working there, and what about that experience informed you and led you to where you are today in your thinking?

Tristan Harris: Yeah. Well, I landed at Google in 2011. They actually acquired a failed startup that I had been working on. That was the one that we presented with the Stanford Knight fellows, and then the UC Berkeley journalism program. I actually gave a lecture at a UC Berkeley journalism class back in 2006 or ’07 or something. And so, I landed at Google as a product manager. I had been CEO and a co-founder of the company, but I became a product manager and worked with the Gmail team. And I worked on that team for about a year, working on some future-looking features. And I got frustrated and disappointed that here I was in kind of the belly of the beast, where, personally speaking, I had been addicted to email.

I checked it way too often, I felt stressed out by email. It felt like a very overwhelming experience, information overload, all those things. And I thought, “If there’s anyone in the world who cares about information overload, distraction, and the addictiveness of email, I’m in the room with the designers who are making those decisions. There is no other room, this is the room of people.” And that room was the control room for influencing what a billion people, a billion users of Gmail at the time, were seeing, feeling, and doing. And this is really prevalent. You could go to any internet cafe before COVID, and half the laptops had Gmail open. So, they’re living in this kind of digital habitat.

And I became concerned and frustrated not just with Gmail, and I want to say, this is not like I was some kind of enlightened whistleblower, it was really just noticing that our daily experience and my daily experience of technology didn’t feel as fulfilling, it didn’t feel like it was empowering, it felt like it was just increasingly about preying on our psychological vulnerabilities and stealing our time. And many of my friends who were startup founders at the time, including the Instagram co-founders and the people who were competing with them at companies like Path, founded by Dave Morin, everyone was competing to figure out, “What kinds of new notifications or slot machines could I throw in front of your brain to get you coming back all the time?”

And noticing all of that, I got very uncomfortable. I went for a weekend to the Santa Cruz Mountains with my friend Aza Raskin, who would later become the co-founder of the Center for Humane Technology. Aza’s father was Jef Raskin, who started the Macintosh project at Apple. And we came back from the Santa Cruz weekend, and I thought, “There’s something fundamentally wrong with the tech industry’s direction, which is this attention economy, this race to capture human attention.” And that’s what led to the presentation that’s in the film, The Social Dilemma, a presentation I created and sent to 10 colleagues, which went viral at Google, reaching 10,000 people. And then that led to becoming a design ethicist and working on the issues of how you ethically influence two billion people’s thoughts and beliefs and choices, when you are inevitably making choices about what is engulfing their psychological experience on a daily basis.

Geeta Anand: I mean, the film, The Social Dilemma, was so influential. All of us as journalists, and the documentary journalists among us, we want to have impact. And that film had such impact. I know I’m repeating information you know already, but to think that 38 million households viewed it in the first month is nothing short of astonishing. And what I was wondering if you could tell us is what the impact of that film was on you and your work and what you’ve been doing since then.

Tristan Harris: Yeah. The film was unbelievable. I think it broke all records for Netflix for documentary films, which is just, no one anticipated this. And I want to make sure I’m clear, that’s really the result of, and the praise goes to, the film team, Jeff Orlowski, Larissa Rhodes and their whole team at Exposure Labs. That team had previously made two climate change films, one called Chasing Ice, the other called Chasing Coral, and Jeff, the director, was a classmate of mine at Stanford, and he had talked to me about it as, “This is kind of the extractive model, but instead of applying it to the planet, we’re applying it to the human mind and attention. We’re drilling for attention and it leads to a different kind of climate change, not of the outer environment, but of the inner and intersubjective environment.”

And so, I don’t think anybody who worked on the film for three years, and it was really a long slog, anticipated the impact. We knew we wanted it to come out before the election. One of the most amazing things about the film is that it has now been seen by a hundred million people in about 190 countries and in 30 languages. So, in terms of impact, it really just has blown us away and it resonates in so many places. I mean, in India and Brazil, places with huge misinformation problems of their own, things like this. So, we could never have anticipated it. One thing I’ll say is, especially in the U.S., I think what the film does is it creates a new shared common ground for why and how we lost all this common ground.

Because right now, we feel like we live in this dystopian fractured reality where no one sees the same reality, but the gift of the film, I think, if more people see it, is that it’s a shared reference point, it’s a shared place to stand on common ground about how and why we’ve lost common ground. And I think that’s really critical. We even heard from people who, during the presidential debates between Biden and Trump, had family members they couldn’t watch the debates with and feel peaceful, and instead of spending the 90 minutes getting angry at the presidential debates, they spent those 90 minutes watching The Social Dilemma. And they were actually able to bond as a family and agree that this was what had been tearing them apart.

So, at least, we can agree about that. And I think that gets to some of the later questions that I know we want to talk about around, how do you really get out of this when you have this mass scrambling of our reality? Because it’s not like there’s some magic button that Facebook or Twitter can press to just reassemble reality back together again and make sure we’re all seeing the same things, because we now have an epistemic fracture, we have a confirmation bias fracture. We have a trust fracture. We trust different sources. We have confirmation bias that’s looking to confirm what we already believe, based on a different foundation of things that we’re each looking for. And I think that the fractures go so deep that unless we have something that’s more cultural, that’s a shared touch point, we can’t get out of it.

Geeta Anand: That’s going to make me leap to one of the last questions I was going to ask you, which was someone I know telling me that he believes there’s only a 25% chance for humanity to survive, because we’ve lost the ability to have the dialogue that’s essential to solving the two huge problems we have today, which are polarization and apocalyptic climate change. So, in your opinion, are we doomed? Is it that bad? Is there hope?

Tristan Harris: Well, it’s funny, I was on a call recently with some mentors and advisers and people who practice mindfulness, and we were just talking about how the existential threat to humanity is just sleepiness, if we are all not aware of what is happening. If everybody was aware of the timelines for climate change, and the timelines for what has to happen by when, and everyone was literally internalizing that. Because one of the other books on my bookshelf I recommend for people is called States of Denial, and it’s about how denial as a psychological phenomenon is necessary for human beings to live with their day-to-day experience. So, we tend to be not really in touch with the realities that face us.

It’s one thing to know about climate change, it’s another thing to actually live fully grounded in what that means, and what choices I would make if I live inside of the timeline that extends from embodying that understanding. And so, I think that if we don’t have that shared understanding about these various problems, then, yeah, I think we’re obviously in deep trouble. I think, at every step, we have to ask, what would it take to reassemble that shared sense of reality? And that’s why the other thing that makes me hopeful is the film’s impact on the tech industry. The fact that so many people, I think, agree that this problem exists.

I will admit that there’s a different interpretation of the film on the left, progressive and Democratic side of the spectrum than on the Republican side, which is more concerned about censorship and manipulation of elections. On the left, they view it more as, this is why the right went crazy. And so, there’s still a fracturing of even what the film’s meaning is. But I do think that I feel hopeful, at least, with where we are and the common understanding that’s starting to get created. Obviously, I will caveat by saying that I think the events of Jan. 6 really made that come to life.

And just one inside story from the film. Tim Kendall, who came up with Facebook’s business model and was later the president of Pinterest (before Pinterest, he had brought the advertising business model to Facebook), has the line in the film where, when he’s asked by the director, “What are you worried about in the short term?” he says, “Civil war.” And I remember when that line was in the film, and many of you might remember it, he actually said that line way back in, I think it was November or October 2019, before COVID, before the polarization was as vividly visible. And there was a lot of pressure to take that line out of the film, because it felt like an overreach given the political state we were in.

But for those of us who are monitoring the extremist groups and the recommendation systems and the QAnon and all that, it did not at all feel like a far stretch. And it’s interesting how Jan. 6, I think, made it come real for a lot of people who had doubted that this is in fact where we were.

Geeta Anand: Just a question, putting on my reporter hat, let me just ask you about this famous statement you’ve made, which is that fake news spread six times faster than true news. And I’m just wondering where you got that information from and how it’s provable.

Tristan Harris: Yeah, that statistic specifically comes from a retrospective study from MIT. It’s Deb Roy’s lab, and his lab deserves a lot of credit for the amazing work that they do. It was looking back at fake news on Twitter and found retrospectively, from a sample set, I think it was from 2016, that fake news had spread six times faster than true news. But you don’t have to look at that as a one-off example, I think we can actually see it from an evolutionary perspective as a fundamental truth about how information in a viral information system works. So, what I mean is, if you imagine two organisms evolving in different directions, and one of those organisms on this side is constrained, it can only evolve and gain a new word or gain a new meaning for things that are true.

So, it’s very limited and constrained in what it can say and what it can speak to. So, that space of truth is very constrained in how it can evolve. And if you look at the other space, being able to say or manufacture any sentence, to claim that the entire election was rigged, to say anything you want, and reassemble words based on whatever works, whatever gets the most clicks, whatever gets more likes, more shares, the less constrained actor is simply going to out-compete the constrained actor. I mean, if one guy is fighting with his hands behind his back and the other one’s not, who’s going to win?

That’s why if you just zoom forward and you say, “What is that world going to look like when unrestricted information evolution competes against restricted, truth-based information?” This is the world we’re now, I think, visibly living in. And it’s very hard for people, I think, to recover from where we’ve gotten to. Now, again, I think if we realize that this happened, you have to rewind in your mind all of the biases, all of the scrambling of our meaning-making, and say, “How do we humbly try to figure out what’s true?” And this is what I was excited to talk about: I think one of the roles of journalism is to help build a culture of deeper sense-making, of better humility, of figuring out, how would we know that that’s true?

Are there any conditions in which that’s not true? Can I steel man the opponent’s argument? Can I actually make their arguments stronger than they’re doing, as opposed to saying, “Oh, they’re just flat-earthers and they’re crazy”? So, anyway, these are the kinds of things that I think we have to get into.

Geeta Anand: Let’s talk about our psychological vulnerability to emotionally salient information. Can you explain how all of this works in inflicting harm on social media?

Tristan Harris: Yeah. There’s another study by a lab at NYU, and by the way, some of these are on our website at ledger.humanetech.com. We have a page called the Ledger of Harms, and it’s a collection of research from the ecosystem about some of these various problems. I recommend people check that out. One of those studies was about how, for every word of negative moral-emotional language you add to a tweet, it increases the retweet rate by 17%. So, if you say, “It’s a disgrace, it’s an outrage. How can they possibly…” For each one of those words you add, it increases the retweetability, because obviously emotions are… this is actually Jaron Lanier’s point, negative emotions hit the brain more strongly than positive emotions. They also stay around longer.
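[Illustration: a rough sense of how that per-word effect compounds, assuming, as a simplification, that the 17% boost applies multiplicatively and independently per word rather than following the study’s actual model.]

# Toy illustration of the per-word effect described above, assuming the 17% boost
# compounds multiplicatively and independently per word (a simplification).

PER_WORD_BOOST = 1.17  # +17% expected retweet rate per moral-emotional word

def relative_retweet_rate(num_moral_emotional_words: int) -> float:
    """Expected retweet rate relative to a tweet with zero such words."""
    return PER_WORD_BOOST ** num_moral_emotional_words

for n in range(4):
    print(f"{n} words -> {relative_retweet_rate(n):.2f}x baseline")
# 0 -> 1.00x, 1 -> 1.17x, 2 -> 1.37x, 3 -> 1.60x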

If you think about how long it takes, when you’re mad, to dissipate the hormones through your system, do you immediately switch back to being fine or does it take a little bit of time for those negative emotions to leave? So, there’s a stickiness to the negative emotions. Another example, because the film talks about the impact on kids, is if you think about… This is true, by the way, of all human beings: if each of you had a photo you posted on Instagram, and you had 100 comments, and 99 of the comments about you were positive, but one of the comments was negative, where does your attention go?

Geeta Anand: I want to ask you just about journalism, because 20 years ago, people were always criticizing journalism for some of these same harmful behaviors, for example, the story of a car crash being the lead in a local news report, or even the story of a murder being on the front page, as opposed to a story about environmental policy. I mean, can you explain why journalism hasn’t had a comparably harmful effect?

Tristan Harris: Well, I think these problems pre-exist social media. So, I appreciate that obviously this is all coming up because partisan television and journalism, yellow journalism, are doing the car crashes. Think of late-night TV news here in California: someone died, there was a stabbing. These kinds of things have existed for a long time. But if you think of an attention economy, that’s why we came up with this phrase, the race to the bottom of the brainstem, because it’s game theoretic. If you have a bunch of journalists who are out there, and let’s say you have a bunch of values-driven journalists, and they actually wait to figure out what’s true.

They call several people to double-confirm the facts, they write the headline in the more calm, nuanced way, they don’t include the photo of the gruesome thing that happened. But then imagine that all those peaceful, values-aligned journalists are competing against this other journalist who shows up, who shouts the most outrageous things, who puts the photo right at the top, who exaggerates and is salacious. We’re only as good as who we’re competing with. And in that race, the race to the bottom of the brainstem means a race to the reptile brain, a race to whoever’s doing the worst thing. It’s the same as in sustainability: you have peaceful tribes and sustainable tribes, but they just get killed by the unsustainable, warlike, and extractive tribes.

So, one of the fundamental problems we have to face in many of our problems is just game theory: the least ethical actor will tend to outcompete the ethical actor in the short term, because by definition, having values, things that you’re willing to do and not willing to do, means constraining yourself, compared to someone who doesn’t have values, who’s just doing whatever works in a micro-incremental way. And so, when you add in technology, you now add in that technology is allowing you to split test a million things that could work. When I think about Donald Trump, I think about someone who was tweeting all the time to test literally which phrases, which words, which reactions would get the most response from his crowd.

I mean, forget Frank Luntz, the pollster, who’s someone I know; Donald Trump had Twitter to be able to get immediate polling data on literally everything that came out. In fact, I actually heard of someone who was in briefings with him at the White House, and he would be at the meeting, he would listen to everyone’s arguments, then he would silently leave and he would tweet, and based on the responses that would come back, he would know what was politically appetizing and what was not. And I think that’s what we have to really be watching out for: actors that are willing to say or do anything, and to figure out what works, not what’s good for us.

Geeta Anand: Let’s talk about social algorithms. I’m often asked about them and I explain based on what I’ve read. But you’ve been there. Is there a human behind there somewhere? Who is Oz? Is it Zuckerberg or Dorsey? Or what’s going on there? And how does it work? And how has it led us to this chaotic situation?

Tristan Harris: Yeah. Well, I think a lot of people, it’s so funny looking back eight, five, four years ago, people thought, “Well, technology’s just a neutral tool.” I mean, with Facebook, let me steel man their argument: I picked my friends, right? You didn’t pick my friends for me, I have my friends, I clicked on the articles I clicked on, I liked what I liked, so why are we blaming Facebook for the problems when each user is just clicking on their own things? What I hope The Social Dilemma really gave culture is this understanding of what’s behind the screen.

You see those three artificial intelligence agents, one played by Vincent Kartheiser, who’s the actor from Mad Men. And the idea of the AIs is they’re trying to scheme a little bit and figure out what would get Ben, the character in the film, the teenage boy, to come back. And so, they say, “Should we try the ex-girlfriend? Is that going to work? Let’s try the ex-girlfriend. That always tends to work, to get him back. Should we try the photos of the skateboarding fails?” So, they show him the skateboarding fail videos. And so, what we really have is this sort of machine that’s probing and testing, and it’s kind of mustache twirling in its own mechanical way, to figure out what’s the perfect thing to stimulate you.

So, every time you flick your finger on Instagram or on Facebook or on TikTok, essentially, you’ve set off a competition between three supercomputers pointed at your brain, that have this avatar, voodoo doll-like model of you. And the algorithms are the sort of computations that are run to figure out what would stimulate the nervous system of this voodoo doll. So, when I say voodoo doll, I mean here’s sort of a digital avatar of you, and based on all of your likes, comments, et cetera, what you clicked on, what videos you’ve watched, we’re adding more hair to the voodoo doll, then we add little pants to the voodoo doll, because each thing you’ve done, all the little data trails, are essentially making the voodoo doll a little bit more accurate over time.

And then what you can do is you can split test: say, if I showed Geeta this video versus that video, it can predict which video will actually keep you there for the next 10 minutes, and which one won’t. And then when you ask, “How does that lead to political polarization and a breakdown of truth?” Well, imagine a newsfeed where every time you flick your finger, it shows you something that’s personalized to you, that confirms your view of reality, versus this other newsfeed where every time you scroll it, it’s not personalized to you. Which one’s going to be better at keeping your attention? Well, the one that confirms your view of reality, and it gives you a more and more personalized Truman Show.
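[Illustration: a toy sketch of the split-test logic described above. The profile fields and scores are invented, and predict_watch_minutes stands in for whatever engagement model a real platform would train on behavioral data; the point is only the selection rule of ranking candidates by predicted time-on-site for a specific profile and serving the winner.]

# Toy sketch of the split-test / ranking logic described above.
# The profile fields and scores are invented for illustration; a real platform
# would learn a model from billions of behavioral data points.

from dataclasses import dataclass

@dataclass
class UserProfile:
    interests: set[str]      # topics inferred from likes, clicks, watch history
    outrage_affinity: float  # 0..1, how strongly outrage content has held this user before

def predict_watch_minutes(profile: UserProfile, topic: str, is_outrage: bool) -> float:
    """Hypothetical engagement model: personalized and outrage-laden content scores higher."""
    score = 1.0
    if topic in profile.interests:
        score += 4.0                          # confirms the user's existing view of reality
    if is_outrage:
        score += 5.0 * profile.outrage_affinity
    return score

def pick_next_item(profile: UserProfile, candidates: list[tuple[str, bool]]) -> tuple[str, bool]:
    """Serve whichever candidate the model predicts will keep this user the longest."""
    return max(candidates, key=lambda c: predict_watch_minutes(profile, c[0], c[1]))

user = UserProfile(interests={"politics"}, outrage_affinity=0.8)
candidates = [("politics", True), ("gardening", False), ("politics", False)]
print(pick_next_item(user, candidates))  # -> ('politics', True)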

And so, you take this shared fabric of reality and you put it through the Truman Show shredder, into three billion independent channels. And that’s why it feels like we’re each living, not just in our own reality, but our own history, because we’re 10 years into this process.

Geeta Anand: It’s been devastating for fact-based credible journalism. I mean, I’m going to just read you some of the statistics which you already know, but half of the people under the age of 30 now get most of their political news on social media. Ad revenue has dried up. It fell by 62% in the decade leading up to 2018, and 25% of the 9,000 news publications that were being published 15 years ago have died. Can you help us just understand the connection between what’s been going on in social media and just the devastating blow that the journalism industry has suffered?

Tristan Harris: Yeah. Well, I think that the big technology companies have unfortunately hollowed out the fourth estate progressively, because their goal… I actually remember, because I was in the room when, her name was, I think, Alison Rosenthal, she was the first head of business development at Facebook, and she was pitching the various news publishing websites to add the first Facebook button. At the time, it was the Share button, but it’s really the predecessor to the Like button. The idea was that you’re on a news site, you’re on The Economist, you’re on The Washington Post, and then you could hit Share, and it would share the article to Facebook.

And Facebook was actually convincing all these news publishers, including The Washington Post, because Don Graham was on their board in the beginning, to add this button, which was basically tricking all of the news industry into giving Facebook all this data. Because not only was it making Facebook a bigger and bigger source, sort of a starting place, the home webpage that you start at before you get to the news sites, but also, once they had the code on the journalists’ websites, they could track where everyone was going. So, even if you didn’t click it, they could know that, “Oh, that user that’s logged in, Geeta14536, that’s the same user that just showed up on that Washington Post page.”

So, they were building and assembling that voodoo doll, not just when you click on Facebook, but as you’re browsing around all these news publishers. They’re also putting little hairs on the voodoo doll, clothing on the voodoo doll, shirts on the voodoo doll. And all that led to Facebook getting better and better at building these predictive models and making money on Facebook. And then every time they send traffic to the publishers, they’re giving you a little penny, but really, all the revenue comes from the fact that users are staying on Facebook, on Google, on TikTok, et cetera.

And what that has done, I mean, obviously, there are multiple stages to journalism’s declining revenue, including first Craigslist and classifieds, then the move from print subscriptions to online, then to an entirely digital online presence with fewer and fewer people subscribing. And then, because fewer people are subscribing, you have to charge more per user, but then very few people pay that. And so, it’s this sort of autocatalytic feedback loop that makes it harder and harder for the fourth estate to fund itself. I do think we’re going to need some kind of mass reparations to fund and really revitalize the fourth estate.

I mean, if you don’t have a local newspaper covering what’s happening locally, there’s essentially no accountability. And I think that’s where we are. And this is what’s going on behind, I know, Australia and other countries and the threats to take their content off of Google.

Geeta Anand: I want to ask you about Australia. Let’s move to solutions. I mean, I know just Australia on its own cannot solve this problem, or even one approach can’t solve this problem because it’s so deep, but what do you think of Australia’s idea of requiring Facebook and Google to pay for news content they share, pay the news publications for the content they share? What do you think of that? And is that a possible part of a solution worldwide?

Tristan Harris: Well, certainly, the economics have to flow in the direction of, I mean, it’s a very extractive model, right? I mean, they essentially make all the money off of the publishers. Google will also make money with the ads that they’ll show on the publisher’s website, but then they make all this money when you’re just on Google, et cetera. And, yeah, I do think there needs to be an economic, sort of rev-share kind of model, but it’s very tricky to say how that will fully work. I mean, essentially what’s happened is, due to the increasing amounts of public pressure, both Facebook and Google have funded, I forget how much, but these big grant programs.

They’ll just say, “We’re putting a hundred million dollars into funding public news. We’re putting 10 million dollars.” But they’re always doing it in response to the outrage that there’s less and less journalism, not proactively. We need a model that is self-regenerative, not, “Hey, we destroy the environment, we do a scorched earth on the fourth estate, and then afterwards, we toss them a few pennies to see if they can regrow some new stuff again. But really, we’ve already emptied out the earth.” I’m not an expert in exactly how Australia is framing its laws, but I do think there needs to be economics that fundamentally strengthens the fourth estate, not one that preys on it and keeps the profits for the major companies.

Geeta Anand: I found the opinion piece you wrote for the Financial Times last year really interesting, because you put forward some solutions. And I was wondering if you could help us understand some of them. Like one of them, you talked about social media as an attention utility, and said that, just as phone companies and power companies have to get licenses, an attention utility should have to get a license, because this network is as large, if not larger, and as important, or more important, than the telephone or power or these vital services. Can you just explain a little about how that might happen, what your thinking was about how it would work?

Tristan Harris: Well, implementing it would be very difficult. I think it’s more just getting people… One of our major jobs we try to do in our work at the Center for Humane Technology is just offer frameworks for people to think in that are more generative than the kind of infinite black hole of free speech versus censorship, where you’re just never going to get anywhere, you’re just going to have the same conversation you’ve always had, and someone’s going to bring up the counterexample to the counterexample. That’s not very helpful. So, why would we even talk about something like attention utilities?

Well, fundamentally, we have to ask, what is the common resource, the common environmental resource, that is being mined or extracted? We only have so much of it. So, we have to realize the attention economy is finite. There’s only so much human attention. When you run an infinite-growth profit motive on top of a finite substrate, just like with the planet, if you have an economic system that demands infinite growth, so long as it’s paired with resource extraction from a finite planet, that’s intrinsically self-terminating. In the same way, if you have companies that make money the more attention they get, living on a finite substrate of human attention, and your business model is just sucking the attention out of people, that’s also self-terminating.

So, we have to have a self-protecting measure. You could say, “What are the national parks of the attention economy?” You could say, “What are the zoning laws?” You could say, “How do we extract? What are the rules for extraction?” Just like we don’t prevent people from cutting down any trees, but we want to make sure that it’s within the regenerative capacity of a forest. So, we don’t necessarily want to prevent people from monetizing some attention, but we need to make sure that it’s not taking out the regenerative capacity of society or trust. So, attention utilities is a framework where we realize that there is a commons, an attention commons, that we have to protect just like we protect national parks or our outer environment.

And right now, I mean, this is a very obvious metaphor for so many people, but we have a system that’s based on extracting and polluting and depleting that attention commons, and ruining the quality of attention that we have available for other things, whether it’s the quality of attention we have to pay to a future advertisement, or the quality of attention that we’re left to pay to our children, or to our democracy. Everyone feels overwhelmed, and that’s where this gets really problematic. But the main point of that article was just to say, we need to have some ground rules for protecting what is the commons, what is the resource that we’re all sharing.

You have different metaphors for this too. You have the FAA saying we have common airspace. We need to have common coordination rules about how we use the common airspace. So, anyway, we could go on for a long while, but that’s kind of the core idea.

Geeta Anand: I was also intrigued by the new business model that you proposed, which was a subscription-based model, and you drew the examples from Netflix, where people pay for a subscription, or BBC. Do you think that would work? Can you see it happening?

Tristan Harris: There’s two aspects here. So, the reason why it’s not happening, I think, should be very obvious to people, which is, it makes a lot more money to not charge people for products, but then to just over-manipulate them and ruin society on top. I mean, by the way, there’s no conscious intention by any one of the tech companies, or anyone I’ve ever met in the tech industry, to harm children, create depression, or create political polarization; this is so much by accident and through natural competition with other companies. We could move to a subscription model, but here’s the problem: obviously, there’s still going to be a competition for attention.

So, now you have Facebook competing for a subscription, you have YouTube competing for a subscription, you have Twitter competing for a subscription, but guess what, those business models still depend on you spending enough time on those services that you would want to pay that monthly fee. So, just like Netflix: the reason why, by the way, Netflix does the auto-play, “Five, four, three, two, one, here’s the next thing,” I think that’s still on, is because they also need to justify you paying that, whatever it is, $8 or $9 a month, and if you’re not using Netflix often, eventually, you’ll burn out and not subscribe.

So, I think we have to balance several concerns here. I think, instead of thinking about subscription, what we really want to make sure of is that society is the customer, not the product. Meaning that we can’t have it be the case that our behavior, our predictability, our manipulability is the product. And I think that the end of the film, The Social Dilemma, really nails this, when Justin Rosenstein, who’s the co-inventor of the Like button, says, “The fundamental problem is, just like with unregulated capitalism, you don’t have certain kinds of guardrails for what you’re protecting. So long as a whale is worth more dead than alive, we’re going to kill a bunch of whales. So long as a tree is worth more as two-by-fours than as a tree, we’re going to turn trees into two-by-fours.”

In this model, in the attention model, we’re more profitable as dead slabs of human behavior, predictable human behavior, because that means the business model is successful. And again, it doesn’t have to be this way, we can create boundaries and guardrails and national parks and zoning laws that say, “Hey, this is the kids’ section. Hey, this is politics. Let’s have a fairness doctrine. Hey, this is how we want our society to work.” But we need to think about it like a big urban plan that we’re designing consciously, not the kind of unregulated extraction we have now.

Geeta Anand: On the idea of just thinking about social impact, I was interested by an idea you put forward about a social impact assessment, similar to an environmental impact assessment. Can you talk about this idea and how it would work and how we might bring it about? Because it sounds like such a great idea.

Tristan Harris: Well, I think the principle that underlies something like a social impact assessment is that the greater the power, the greater the responsibility you have. If I’m just going to get up on a soapbox and speak to 10 people, yes, I should be a responsible person for maybe making noise on a crowded street, but really, I can’t harm that many people. But if I’m a broadcaster like the BBC, and I can defame or destroy someone’s reputation by getting the facts wrong, I have a responsibility to make sure that I don’t do that. Or if you think about it in terms of power asymmetries, before we get into social impact assessment, I want to make just one metaphor here, which is about how much the technology companies know about you, that you do not know about yourself.

So, the real thing going on here is, think about any business relationship where one party has a compromising degree of asymmetric knowledge about you that you do not know about yourself. Think about a lawyer. They know everything about the law, and you’ve shared all this vulnerable information about yourself. They can’t have a business model that says, “Yeah, who wants to pay me to get that information so you can manipulate Tristan?” That would be a ridiculous business model. Imagine a therapist who took everything they heard in the therapy room, everything they heard about a person’s weird fantasies and vulnerabilities and biggest doubts, fears, and anxieties, and said, “Oh, yeah, that stuff I learned in therapy. Who is going to pay me to perfectly manipulate Tristan?”

In both those cases, a lawyer is licensed to have that asymmetric position with the client. They have to operate within it: they can lose their license, they can never practice again, there are high reputational costs. Same thing with a therapist: you have a license, you can lose your license, there are high reputational costs. You think that lawyers or doctors or therapists have information about you? Technology companies have exponentially more information about you, that you do not know about yourself, that is compromising. They should be in a fiduciary relationship with you, a kind of licensed relationship, and they should lose that license if they are not appropriately protecting that asymmetric relationship. Because they’re in a position to massively harm others.

So, when you say social impact assessment, it’s basically: when I have the power to create harm, especially harm that is irreversible, with that kind of power we should be doing precautionary-principle assessments up front about what harm it could cause, just like if you’re upgrading your house or something like that, you have to do some kind of licensing or assessment. Now, the question is, what’s the balance between the rate of evolution of technology and what kind of regulatory slowdowns and frictions we want to add to the system?

Geeta Anand: Who could you see overseeing all of this? I know, in Europe, you suggested a directorate, in the U.S., there’s like the FCC or the FTC. Who would it be? And does it need to be the government? Or who could you see doing it?

Tristan Harris: Yeah. I mean, this is a very complex question because-

Geeta Anand: It is?

Tristan Harris: Yeah. What we’re really entering is a phase where I think people are recognizing, especially post the banning of Trump from Twitter, and the deplatforming of Parler by the major tech platforms, et cetera, that the digital infrastructure is the democratic infrastructure. It’s our society. We get paid or not through PayPal and Stripe or whatever. We sell things with these online Shopify stores and things like that. We email each other with digital products. We communicate and spread knowledge and broadcast to each other exponentially through YouTube, Facebook, et cetera.

So, the digital infrastructure is the democratic infrastructure. It’s what it means to participate in society. And I think this crept up on us. And the reason I’m saying this is, technology has eaten up every single aspect of children’s education, of growing up, of social relationships, of identity formation, of politics, of political discussion, of election advertising. And if you’re taking over a core function of a society, think about a company that wants to manufacture voting machines: you can’t just say, “Here’s some new company on the market, and there are no regulations to make sure they do it in a fair, transparent, and honest way.” You have to do it in a fair, transparent, and honest way. It needs to be regulated because you’re taking over such a core function of a society.

And that’s what we really need with technology, where we can’t just regulate social media, we have to ask, as technology eats up all these other social organs, when it eats up children’s mental health, does the FCC regulate that? Or is there some new regulator that regulates that? Or do we take the Department of Education, as I said in a congressional hearing two months ago, I mean, a year and a half ago, and give it a digital update? So, if you think about it, we have these institutions that already care about kids’ education, we have institutions that care about election advertising, we have institutions that care about communication standards. We used to protect Saturday morning cartoons.

We could have those existing institutions be digitally upgraded to make sure that, whether their jurisdiction is children’s television or children’s education, they’re basically monitoring for those harms and forcing those companies to gradually reduce those harms, or to do the impact assessments for those harms ahead of time. So, that’s one way you could scale it, if that makes sense. You have these existing institutions, we could give them a digital update and then have them have jurisdiction over the technologies that are taking over those core parts of society. But this is very complicated because they may not have the expertise to do it. And you’re going to need a whole bunch of new people with the expertise who understand how these systems work, especially as technology is continuing to evolve faster. A year from now, we’re going to have a different set of platforms than we have right now.

Geeta Anand: So, taking you to the most difficult question and the one that usually stops all discussion, and that’s the question of the First Amendment and freedom of speech. And I was just wondering, how much do you think it really limits solutions to this crisis in the U.S.? Do other countries have a better shot at addressing this crisis than we do because they have a different relationship with free speech?

Tristan Harris: That is, yeah, as you said, the big question. I think, when free speech encompasses everything, including the technology platform itself, your ability to make a technology platform, it’s sort of this blanket that we can wrap around anything, and it gives it just a free pass, a carte blanche to say, “Well, there’s nothing we can do about that.” And I think that’s just an inadequate moral framework. So, I do think that might mean that other countries get there faster. It’s so hard to keep track of everything because each country is gradually producing its own answers to these questions, and we’ve been in touch with several different governments, but I think you can’t use the language of free speech anymore.

I would challenge people, when you’re tempted to use the language of free speech, what’s a different way to describe the specific thing you’re talking about? For example, when I can broadcast and split test a message: I can right now go into a conspiracy theory group on Facebook, and I can get the user IDs for everyone in that conspiracy theory group. I get the QAnon user IDs. Then I can say, “Hey, Facebook, I’m going to create an advertising campaign using your lookalike models.” Lookalike models tell Facebook, “Hey, give me 10,000 users who have the same psychological attributes as those conspiracy users I just gave you.”

And Facebook happily gives me access to the very specific voodoo dolls that it has in its backyard, and it plucks them out for me, and it says, “Hey, these are the users that look just like those other conspiracy theory users.” Now, as an advertiser, I can split test 20 different conspiracy messages to that voodoo doll assembly that I just picked up. Is that speech, when I’m speaking to them by doing this entire process? No, we have to see that in terms of the degree of asymmetry of power. I knew something about them that they didn’t know about themselves. I was able to select an additional group of people that didn’t know they were being targeted. I was able to split test messages and whisper one thing into one person’s ear and whisper another thing into another person’s ear, and they don’t know that. That’s all asymmetry of power.
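[Illustration: the targeting-and-split-testing flow described above, sketched in Python. None of these calls are the real Facebook Marketing API; build_lookalike_audience and run_ad_variant are hypothetical stand-ins that only mirror the steps named: seed a lookalike audience from a group of user IDs, split test many message variants against it, and keep whichever performs best.]

# Sketch of the flow described above. These functions are hypothetical stand-ins,
# NOT the real Facebook Marketing API; the data is invented for illustration.

import random

def build_lookalike_audience(seed_user_ids: list[str], size: int) -> list[str]:
    """Hypothetical: ask a platform for `size` users who resemble the seed users."""
    return [f"lookalike_user_{i}" for i in range(size)]

def run_ad_variant(audience: list[str], message: str) -> float:
    """Hypothetical: serve one message variant to the audience and return an engagement rate."""
    return random.random()  # a real platform would return measured clicks or shares

seed_ids = ["group_member_1", "group_member_2"]          # invented IDs gathered from a group
audience = build_lookalike_audience(seed_ids, size=10_000)

variants = [f"Message framing #{i}" for i in range(20)]  # 20 split-tested variants
results = {msg: run_ad_variant(audience, msg) for msg in variants}

best_message = max(results, key=results.get)
print("Variant the advertiser would scale up:", best_message)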

And so, instead of saying, “Oh, well, I’m speaking to them as an advertiser, I’m speaking to them when I just broadcast something on Twitter,” we have to see things in terms of, what degree of asymmetric power is there? And is there a commensurate level of responsibility? I think the biggest question when it comes to this is what is the responsibility framework for having power? Think of a blue check mark on Twitter. You gain a kind of driver’s license for saying, “I’m a verified person on Twitter,” but maybe that should come with increased responsibilities. If I go to buy a pair of knives at a kitchen store, I don’t have to show an ID or get a background check or get training about how to use the kitchen knives, even though I could use them to harm someone.

But if I’m going to go buy a sophisticated semiautomatic weapon, ideally, there’s a background check, training required, and protocols in place to make sure that with great power comes greater responsibility. I think that’s the principle we need to retrieve when we think about this new age of speech, which is really just psychological influence with different degrees of asymmetric power.

Geeta Anand: Clearly, people and entities who have so much power need to be more responsible, but how do we make them more responsible? People who wrap themselves in the free speech blanket argue that these social media platforms will become more responsible if we as citizens demand that they become more responsible. Is that the solution?

Tristan Harris: No. Clearly not. And obviously, it’s too little too late, because if that’s what it takes, then, per your question about whether we still have time, the game is already over, because this stuff gets worse before it gets better. And you don’t want a world where they create more of the problem and pollution and then sell, at a profit, the cleanup of the pollution later, after the fact. So, no, we need to have a system that’s intrinsically safe, intrinsically regenerative, intrinsically constructive and harmonizing, not one where they continue to profit from the problem.

Geeta Anand: Apple’s taken a tough stance against Facebook, or so it looks to an ordinary citizen like me. They’re requiring users of Apple phones and laptops to actually consent to their digital information being extracted and used and sold. Is that helpful? And how helpful?

Tristan Harris: Before we go to that, I just want to make sure I answered your earlier question correctly, but I just want to make sure I remember.

Geeta Anand: Sure, sure, sure.

Tristan Harris: One of the things you asked about what is it going to take for the social media companies to take the responsibility for these problems? Right?

Geeta Anand: Yes.

Tristan Harris: And an important trend people should be aware of is that Facebook has been building the infrastructure to encrypt all of the conversations later this year. Meaning, the child and human trafficking groups, all that stuff that they used to have integrity teams for, where they had to pay people to check when that was happening; the QAnon and the crazy conspiracy groups, where they had to have integrity teams try to figure out where these things were being planned. That is all getting locked down into encrypted channels. So, they’re moving from, “We didn’t know,” to, “We can’t know.” With “We didn’t know,” there’s some responsibility you can take after the fact. That’s what everyone’s pressuring them to do post Jan. 6.

Where this is going is, “We can’t know.” If you think about why Zuckerberg and co are doing this, it is because it eliminates and absolves them of all responsibility. So, all those costs that are currently associated with monitoring and getting into these political debates of should they deplatform that? Or should they look at that? Well, suddenly, they’re just encrypting it all, because people were previously yelling about privacy and Cambridge Analytica, so they’re saying, “Oh, fine, you want us to do privacy? We’ll just encrypt everything, throw away the lock and key, and now whatever happens, it happens in the dark.” People should know that because they should know how unsafe these platforms are. And we should eventually move off of this infrastructure.

And I just say that because, when you talk about what it’s going to take for them to be responsible, we have to know how they’re eliminating our tools for making them responsible with changes like that. So, now to your second question about Apple and the changes that they’re making, I just wanted to make sure our audience caught up with that, but you’re basically talking about the changes they made in iOS 14, which require that you voluntarily consent when Facebook loads for the first time with the new iOS 14. It says, “This app wants to track you across applications, do you consent to them tracking you? Here’s what they would like to know.”

And most people, when given that choice, are probably going to say, “No,” and that actually takes the kind of micro-targeted advertising reality that we’re living in, where everyone has this perfect manipulation of each person, into more of a 1960s, 1970s billboard version of advertising, where we don’t have as much tracking. And as you said, Facebook has been pushing back against Apple, because it just drops the profitability per user when they can’t track you everywhere.

Geeta Anand: I think 85% of people in the test they did said no, they didn’t want their information shared.

Tristan Harris: Right.

Geeta Anand: So, what impact will that have on Facebook? And is that helpful? And how helpful?

Tristan Harris: I think it’s a really good example of Apple nudging the entire industry in a direction that is less and less about treating us as the product, and more about treating us as the customer. So, essentially, the ability to micro-target each of us is treating us as the product. And by taking that away, they’re almost like doing… I think of Apple as kind of the Federal Reserve or the central bank or regulator of the attention economy. And what they just did by making it hard to track people is they incentivized this non-extractive version of the business model just by a little bit, just a tiny nudge at the whole ecosystem in that direction. And it’s clearly threatening enough that Facebook is pulling out all the stops to try to fight back.

Tristan Harris: And if you saw Tim Cook’s speech to the EU last week, he actually said explicitly, “We cannot allow a social dilemma to become a social catastrophe,” which was a reference to the film. And so, I think this is an example of things moving in the right direction because of the broad-scale public awareness that this is not the reality or the world we can afford to live in.

Geeta Anand: Why is Apple doing this? I mean, is it because it recognizes the crisis? Or is there some… I mean, of course, I’m asking you to speculate, which is not what we usually do as journalists, but still, you’re in a better position to speculate than I am. So, do you have any insight into that?

Tristan Harris: It’s probably a combination of seeing the problem at a human level and saying, “This is a toxic system. Privacy is important,” and making these changes. You could cynically say that it’s good for their bottom line and good for business to knock out Facebook, and that’s where the kind of antitrust concerns against Apple are being raised. This is actually a good example of Russell Conjugation, where you can conjugate the meaning of how you perceive this: from an antitrust lens, Apple is using its market dominance to just wipe Facebook off the map, or you can take it as a good-faith move by human beings who see the pernicious problems of these business models and are trying to move the entire ecosystem in that direction.

I think what we need here to distinguish between those two is something like better democratic governance for decisions like that. Because this is the same thing, by the way, as what happened when Trump got deplatformed. There are two ways to fail. You could have a world where Jack Dorsey wakes up one morning and can autocratically, like a dictator, just decide to knock a person he doesn’t like off of a platform. That’s obviously a failure mode. You don’t want that to be the governance for how we make those decisions. But the other way to fail is to allow this unrestricted Frankenstein, where hate and authoritarianism and tyranny win in the competition with calm voices. That’s the other way to fail.

So, we can’t have that, but we need to have democratic decision making. We need to have some kind of accountable governance. So, when Apple makes a move to change the business models of the entire app store, when Twitter makes a move to change the deplatforming logic or the rules and enforcement, those should be more democratic. And so, I think we have to move the whole industry into having technology companies be more accountable to the public interest. The question is, what is that structure? Who decides? And who decides who decides?

Geeta Anand: On that note, I’m going to make some room for audience questions. I have many more questions I’d like to ask myself, but in the interest of democracy, and because we respect our audience, who are so smart and so interested, let me just begin with some of the questions, some of the 99 questions that were sent in even before this event started.

[Question and answer omitted because answer was inaudible]

I mean, since you’re in the space that you’re in, and a leader in pushing thinking about this crisis, what’s going on in the Biden administration? Are they thinking about something like a 9/11 commission? Are they deeply worried about this problem?

Tristan Harris: I have not been in touch directly with the Biden administration. I do know that these are active conversations that are going on there. I worry that the conversation is too narrow and is on issues like privacy and Section 230, instead of on the more systemic questions: How do we restore a social fabric? How do we restore trust? How do we restore media that is trustworthy? So, I don’t think that’s the conversation that has been happening yet, and I think that’s the conversation that needs to happen.

Geeta Anand: So, this is a narrower question, and this is from a member of the audience, saying, “What links does Tristan see between this problem,” so, this crisis, this polarization, this crisis in social media, and what happened on Jan. 6, or the rise of QAnon? So, how did this issue contribute?

Tristan Harris: Well, it’s perfectly aligned. Let me first say what I think it’s not: people too often jump to the idea that Facebook was just where the Jan. 6 events were organized, that it was just about Facebook being used to organize those things, as opposed to Facebook being part of an ongoing, 10-year process that put people into these narrower and narrower echo chambers. Matt Stoller actually just wrote a piece for his newsletter, BIG, in which he calculated that Facebook would have made about $2.9 billion from QAnon. He calculates it in his own way, so please don’t cite me on that; you can look at the way that he calculates it.

But what I mean by this is that Facebook recommended people into these extremist groups. So, for those who don’t know, there’s a great Wall Street Journal exposé from May of 2020, in which they reveal internal documents at Facebook where they actually said that, I think it was 64% of the extremist groups that were joined on Facebook were joined because Facebook had put up a panel that said, “Here are groups we recommend you join.” So, why did this happen? Just to quickly explain this to people: my colleague, Renee DiResta, who’s a rockstar and has studied this forever and deserves lots of credit, found that the Facebook group recommendation system puts people into these crazy conspiracy chains.

For example, as a new mom, she joined a Facebook group on creating baby food, organic do-it-yourself baby food, not buying the regular stuff. And what do you think was the most recommended Facebook group for people who were in a do-it-yourself baby food group? It was anti-vaccine conspiracy theory groups for moms, Moms Against Vaccines. And when you join that group, what did the Facebook system recommend? It recommended Pizzagate, Flat Earth, Chemtrails, QAnon. And so, the point is that once you get into one, Facebook says, “Oh, you’re like that kind of person, you might also like these other crazy things.”

And so, that has been going on for something like 10 years. So, when I look at Jan. 6, I see the results of a 10-year-long process that was pulling people into these crazier and narrower views of reality, where you get social affirmation, validation, meaning, purpose, community, from this very strange belief system about the world being run by a global pedophile elite. And I think that’s where we are now.
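
[Editor’s note: The group recommendation logic Tristan describes is roughly a co-membership recommender. The toy sketch below is illustrative only, not Facebook’s actual system; the group names and membership data are hypothetical. It shows how ranking candidate groups purely by how often their members overlap with a group you already joined pulls you toward whatever adjacent groups are most heavily co-joined, with no notion of whether they are benign or conspiratorial.]

```swift
// Hypothetical membership data: user -> groups they have joined.
let memberships: [String: Set<String>] = [
    "user1": ["DIY Baby Food", "Moms Against Vaccines"],
    "user2": ["DIY Baby Food", "Moms Against Vaccines", "QAnon"],
    "user3": ["Moms Against Vaccines", "QAnon", "Flat Earth"],
]

// Recommend groups for someone who just joined `joinedGroup`, ranked only by
// how many existing members of that group also belong to the candidate group.
func recommendGroups(for joinedGroup: String, topK: Int = 3) -> [String] {
    var coJoinCounts: [String: Int] = [:]
    for groups in memberships.values where groups.contains(joinedGroup) {
        for other in groups where other != joinedGroup {
            coJoinCounts[other, default: 0] += 1
        }
    }
    // Pure engagement ranking: the most co-joined group wins, whether it is
    // a soccer club or a conspiracy community.
    return coJoinCounts
        .sorted { $0.value > $1.value }
        .prefix(topK)
        .map { $0.key }
}

print(recommendGroups(for: "DIY Baby Food"))
// With this toy data: ["Moms Against Vaccines", "QAnon"]
```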

Geeta Anand: Yeah. Just parallel to the grooming process. I mean, it’s just terrifying.

Tristan Harris: It’s automated grooming. We don’t have to pay people to do this sort of by-hand, conversational grooming; we have an automated system that does it for us. I want to say one last thing about that. The reason this happened actually traces back to Mark Zuckerberg acting with a positive intention. You can look at the report from January 2018; he said, “Our new goal…” If you remember, they changed their mission statement from making the world more open and connected, that was the old mission statement, to the new one, bringing the world closer together. And the way he said they were going to do that is through Facebook groups, because Facebook groups provide community.

And he said, “So, what did we do? We actually built an AI that recommended groups for people to join,” and he’s quoted as saying, “It works. It actually increased the number of groups people join by more than 50%.” So, they thought they were doing this good thing by putting people into groups, because they had this narrative: these are like cancer support groups, these are mom support groups, these are soccer clubs. Not: these are crazy-town conspiracy theorists that are driving the breakdown of our shared society and shared reality, and of our ability to get past this pandemic.

Geeta Anand: I guess, I mean, unintended consequences seem to be responsible.

Tristan Harris: Which is why you’d have a social impact assessment if you’re building something like that. If you’re going to do something with an algorithm that’s going to impact that many people, you would have to have a deep understanding of what consequences, first-order, second-order, third-order consequences, you could be causing. With that great power comes great responsibility, and the need for godlike awareness.

Geeta Anand: I mean, I know just as an educator, if we were just to make decisions on our own, just talking to each other as professors, we would just make big mistakes. Engaging our community, and especially our students, is vital to opening up our blind spots. So, it seems like, I mean, that social impact assessment would provide that opportunity, insist on that consultative dialogue.

Tristan Harris: Yeah, and have a diversity of views from the people who are going to be most affected. Because one of the biggest problems now, one that is unfortunately not covered as much in the film, is all the marginalized groups and people who don’t have as much of a voice, who are actually most impacted but don’t have a voice in the decision making. You have places like Myanmar, which is covered in the film, where you had a genocide that was amplified by Facebook, and now you have Ethiopia. I’m trying not to be so negative, I’m sorry, but these are unfortunately realities. How many employees from Myanmar or Ethiopia did they have at Facebook while these things were going on?

And by the way, you can extrapolate this principle by asking: would Instagram be so toxic for teenage girls if the team that was running Instagram were mothers of teenage girls? They have skin in the game. So, they would see this problem, and they would say, “No, we have to do something about that.” It would become a high priority, as opposed to a low priority for the mostly white male engineers who run Instagram. If Facebook were run by people who had come out of the Soviet disinformation landscape, who had had that experience, they would have gotten on top of the Russian disinformation problem much earlier, or even, now, Chinese disinformation, or Saudi disinformation. There are many countries that are doing it.

But over and over again, we see that we need the diversity of views that represents the stakeholders who are most affected and the places where this could go wrong.

Geeta Anand: On children, someone is asking, “How can we educate our children to differentiate among the myriad sources they experience, when I struggle as an adult to figure it out?”

Tristan Harris: Yeah, it’s very hard. I feel like this is actually something where I wish there was just a massive, publicly funded new effort to explicitly say we need a new… not that we need to remake things, but people don’t trust even our publicly interested news and journalism institutions, unfortunately, because it’s all become partisan. There’s a lot of things I could say about this. People should have diverse information sources that they’re getting their information from: actively read the wisest people from the opposite political side of the one you usually read. I spend a lot of time looking at media from different sides and different sources.

I don’t think there’s a silver bullet here; I think we need much more common education. There are lots of groups that work on that kind of thing, by the way. There’s a training by a group called IREX, I-R-E-X, I believe, that has some good stuff for people around critical thinking and misinformation spotting.

Geeta Anand: I know Berkeley Journalism is expanding its undergraduate program, aiming to teach journalism skills to undergraduates, not necessarily to become journalists, but to be citizens in a digital age. So, being able to just understand how to verify sources and how to recognize if something is well-sourced and what is a credible source and those kinds of things. But that, of course, catches you much later. You’re 18 to 22 years old then. But let me ask you another question from the audience, “Misinformation is also spread by official news networks like Fox, not only through social media, how can networks be held accountable for spreading misinformation?”

I know we’re asking you to find all the solutions, but what are your thoughts about that? Because that comes to my mind all the time, as we’ve been focusing on social media, but as I turn on Fox just to see how they’re talking about a particular event, and I just wonder.

Tristan Harris: Tribalism has replaced epistemology on both sides. So, people are, in general, looking at information that their tribe affiliates with. And then, even when certain terms come up, like if someone says, “Wuhan lab hypothesis,” they say, “Oh, you must be a pro-Trump, right-wing, xenophobic person.” So, you can’t actually do the epistemology of how would we know? How would we not know? If someone says, “We are for masks,” or, “We are against masks,” they have basically declared, and painted themselves with, which tribe they’re a part of, as opposed to, “Well, we could actually have a conversation epistemologically about how do we know that masks work?” Et cetera.

Most people who think that flat-earthers are dumb can’t themselves prove how we know that the Earth is round. So, I think we need, in general, to remove ourselves from tribalist sources of information and the outrage economy. So, I would actually recommend everybody unfollow, not watch, and get everyone else they know to not look at things like Fox News, OANN, or MSNBC, any of the kind of extreme outrage media that is really not good for democracy anywhere. Because one of the subtle things about how technology has affected journalism is that it’s made all journalism have to cater to getting those clicks, because journalism organizations have started to measure the success of their news and journalism employees by how many clicks they get. And that’s also altered the character of how journalists, I think, produce information to accommodate those incentives.

And much like a child who gets used to receiving validation in terms of likes and comments, it changes the meaning of validation into, “Do I get likes for that?” And I think we’re all being tuned and incentivized, trapped in this kind of matrix of bad incentives, of shallow incentives. And it’s happening for journalists, it’s happening for teenagers, it’s happening for democracy, it’s happening for outraged conspiracy theorists, it is happening across the board.

Geeta Anand: Do you think it’s happening for our best journalism institutions? I mean, do you see that slide in, and I just won’t mention names, but just the main ones you think are fabulous, do you see that slide? Like, I mean, this example may be helpful and may not be, but in deciding to call Trump’s falsehoods a lie, does that push you into a little bit of tribalism, just that slight bit, that then makes you not believable to someone who’s a Trump supporter?

Tristan Harris: Yeah, I think it does. I think that’s what our challenge is: how do we communicate with utter humility, where we would say, “How would we know if that side is correct?” It’s funny, because I think when Trump came out promoting hydroxychloroquine, to even say anything positive about hydroxychloroquine meant you were part of the, if you’re in California, in the Berkeley sort of Bay Area complex, you’re part of the dumb Trump machine or something like this. Instead of saying, “Well, actually, there are some scientists and doctors now who think that there are some reasonable ways to use that treatment, but they would never say so publicly because they’re worried about getting tagged as being political and being, say, a right-winger, a Trump supporter, or something like that.”

And I think what we really need, and this is the thing that journalists can do, is to demonstrate good faith: “How would we know that it’s this? How would we know that it’s not this? Let’s go through that process.” And showing that process honestly, which, again, requires time; this is why we need a different digital ecosystem that gives us time. One of the nice things about podcasting or long Zoom conversations is that people actually have the time to slow down and actually do a process like that. And I think, including in this conversation, when we slow down and we actually break down each of the steps, people trust you, because you’re actually saying something that’s logically true, as opposed to saying the baseless idea that hydroxychloroquine does this, the baseless thing that the Wuhan lab hypothesis…

Even if it might be true that they’re not legitimate, saying “the baseless blah, blah, blah” essentially paints you with which tribe you’re on, and, I think, loses trust with a common audience. And I think people are yearning for a kind of fair and trustworthy media, and, like I said, I think that’s something that the 9/11 Commission for Restoring Trust should look at.

Geeta Anand: Definitely on a wide scale, including journalism, but maybe journalism organizations can take a step and just look at themselves also ahead of that. But anyway, we’re at time now, Tristan, and I just want to thank you so much on behalf of Cal Performances, Berkeley Journalism, the world, for just bringing your insight and your experience to us and trying to answer so many of these really, really difficult questions on solutions. I just could not be more grateful, we could not be more grateful for your presence here today.

Tristan Harris: Really appreciate it. And what I really hope is that this talk and our conversation inspire many more people to work on this problem, because that’s what we need. This is not something that some small group of people is going to go solve; it’s kind of like a decentralized immune system. And by each of us waking up and seeing these patterns of what needs to change, we become part of the antibodies that help culture become more immune to this kind of reality-dividing virus. That is the sort of other pandemic that we have. One last thing: I’ve been joking that it’s almost like we have the Zuckerberg Institute of Virology, and he was playing with these memetic viruses, and they jumped out of the lab, took over a little bit of the world, and shut down the global economy, because they actually shut down our ability to make sense of the world.

And to become immune to that pandemic, what we’re doing right now, and what I hope so many people listening to this do when they see this, is to become part of the process by which we figure out: How do we recover trust? How do we be good to each other instead of participating in the kind of outrage and canceling machine? And how do we really make sure we can constructively survive and make progress on the big problems that face us? So, thank you so much for having me. I really sincerely appreciate the time, and everyone tuning in.

Geeta Anand: Thank you again, and we hope to have more conversations with you in the future as we all work to combat this virus that has escaped from the lab.

Tristan Harris: Yeah, indeed. Thank you so much, and thank you again to the Graduate School of Journalism for inviting me, and everybody else, Jeremy Geffen and others.
