This week my guest is Sander van der Linden, Professor of Social Psychology at the University of Cambridge, where he has also directed the Social Decision-Making Lab since 2016.
In this episode we explore Sander's latest publication, Foolproof, in which he details the many ways in which humans fall prey to misinformation and the ways in which we can resist such persuasion. This primarily takes us on a tour of his work around "pre-bunking," an experience that increases one's resistance to misinformation by acting almost as a mental vaccine.
Learn more about Singularity: su.org
Music by: Amine el Filali
Sander van der Linden [00:00:00] The theory that underlies a lot of our practical work here is this idea of psychological inoculation, which follows the biomedical metaphor or analogy pretty much exactly. Just as the body needs lots of copies of potential invaders in order to mount an effective immune response, it really works the same with the human mind.
Steven Parton [00:00:32] Hello, everyone. My name is Steven Parton and you are listening to the Feedback Loop by Singularity. This week my guest is Sander van der Linden, a professor of social psychology at the University of Cambridge, where he has also directed the Social Decision-Making Lab since 2016. In this episode, we explore Sander's latest publication, Foolproof, in which he details the many ways in which humans fall prey to misinformation and the ways in which we can resist such persuasion. This primarily takes us on a tour of his work around pre-bunking, an experience that gives one an increased resistance to misinformation through a process that is very much like a mental vaccine. Unfortunately, some unreliable Internet in a snowstorm cut this conversation a bit shorter than usual, but luckily we were able to get through the main points of Sander's work before I lost electricity. So this will be a bit shorter and sweeter than usual. But without further ado, everyone, please welcome to the Feedback Loop, Sander van der Linden. Your book, Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity, has just come out, and I was wondering if you could give us a little insight into the background and motivation that led you to write this book, and why this was a subject you felt was important to invest your time in.
Sander van der Linden [00:02:02] Yeah. You know, I was always interested in writing a popular book. Part of the reason I love studying psychology is its relevance to people, and I want to tell people about the stuff we find in our experiments in a way that's relatable and accessible. So that's always been in the back of my mind. But I think what was a particular motivator for me was that, at first, I never felt like I had enough to say. I wanted to wait until we had a program of research over numerous years that really added up to something. And, having tried to define and monitor and fight misinformation for what feels like quite a long time now, I felt there were some interesting insights there for people to take away, both in terms of the evidence we've gathered on what works and what doesn't, how the brain processes misinformation, and how it spreads on social media. So I thought there was really something there that I could turn into an interesting story, along with some really practical, evidence-based tips that we were able to gather over the years. At the beginning, it was sort of like, okay, this is an interesting theory and some interesting experiments, but I don't want to base a book on something that's quite preliminary. So I wanted to wait and see where it went. And then we had these interesting collaborations with tech companies and public health agencies, and we could really test things on a larger scale. And that led to a lot of interesting insights, a lot of feedback.
And that kind of motivated me to tell that story, because I thought it would be interesting for people to know what's going on inside Google, or to get insights from these government agencies. What are people up to? Are they really trying to fight misinformation? So I thought I had some interesting insights that I wanted to relate to people, and that was the ultimate motivator for the book.
Steven Parton [00:04:07] Could you share an example or two of the research studies or collaborations that were particularly insightful to you, or that really exemplified what you're working on?
Sander van der Linden [00:04:22] Yeah. So one of the examples was that we had produced some games. One of the games is called Bad News, and it's sort of a deep-dive simulation into how nefarious actors dupe people online. It's meant to be slightly radical in the sense that it's not typical media literacy education, where people try to be a good investigative journalist. It's quite the opposite: you're exposing yourself to weakened doses that we control, but still, you're messing around with the dark arts of manipulation to try to build resistance by seeing how people are duped. We were building tests and quizzes to see if, after 20 minutes of playing this game, people are actually improving their ability to spot online misinformation. Scientifically, we took the same style, combining humor with the inoculation metaphor, or the idea of pre-bunking, which I'm not sure your listeners are familiar with. But the idea is basically that instead of debunking, you're going to pre-bunk, by preemptively exposing people to a weakened dose of the quote unquote virus so that people can build up mental antibodies. That's the gist behind the approach.
Steven Parton [00:05:44] Could you say more about that, in terms of what you mean by the idea of a virus, or inoculating against it? Just to tie it more to what you're really talking about here, so people have a firm understanding.
Sander van der Linden [00:05:57] Yeah, absolutely. So let me back up and explain the theory that underlies a lot of our practical work here: this idea of psychological inoculation, which follows the biomedical metaphor or analogy pretty much exactly. Vaccines introduce weakened or inactivated strains of a virus into the body, which then triggers the production of antibodies to confer resistance against future infection. It turns out you can do the same with misinformation: by preemptively exposing people to, and preemptively refuting, weakened doses of falsehoods or the techniques that are used to spread misinformation, people over time can build up cognitive antibodies, or mental immunity. And just as the body needs lots of copies of potential invaders in order to mount an effective immune response, it really works the same with the human mind. So the more examples, the more weakened doses, the better. And the weakened dose is important, because you don't want to dupe people with actual misinformation, right? You have to weaken it to the extent that you can actually refute it in a very persuasive way, or give people the ability to dismantle the techniques of manipulation in advance. When you do that, people can build up resistance. And the more microdoses or examples you can give, the better the mind becomes at identifying them across a whole range of examples. So not just the weakened dose that you've created for a specific falsehood or a specific conspiracy theory: once you give people enough examples, they can start to see the underlying tricks and identify them in new content. That's kind of where we follow the analogy, and I'm happy to talk about the psychological limitations of debunking and why we opted for this pre-bunking approach.
But then we got to the stage of actually doing this outside of the lab environment. Even our games are in a controlled environment; it's a choose-your-own-adventure game, but we control the scenarios, so it was still a pretty scientific setup. With the videos that we created, we were able to test them live on YouTube, and I think that was exciting for me, because up until this point, social media companies were like, okay, interesting, but we don't know how this works with our users, how people are going to respond to it, and whether it's going to be effective on our platform. And every time it's the same story, so we always have to do everything from scratch for every social media company. We've done that with WhatsApp, with Meta, with Google. Every time we go through the same sort of process, and I understand that; it is different every time. So we created these videos for YouTube, and the brilliant thing Google came up with was this: they said, well, how are you going to scale this? How are you going to get this to people who would otherwise be, quote unquote, vaccine hesitant? And they had a really good idea. They said, you know, YouTube has that annoying sort of ad space, right? So what if they just put it in the ad space, before people are exposed to potential misinformation? Now, I think I'm supposed to say that Google, or YouTube, doesn't release any information about its algorithms, so I can't speak to what its algorithm can or cannot do. But let us hypothetically assume that its algorithm could, with some probability, identify whether a video might contain some questionable information.
It could then automatically insert the pre-bunk in the ad space, and it would preempt the misinformation and prevent some of the difficulties you get with trying to debunk things after the fact. And we were actually able to test that live on YouTube with millions of people, and we found that it does help. The effects were not as impressive as in the lab, because it's live, people are distracted, it's on social media. But it still works, and that really was one of the best tests of this concept. Google made it possible. There was a lot of legal stuff, a lot of internal complications, so it was a very interesting experience, and I talk about it in the book, in terms of what I can say, and the lessons that we learned from it. So that was another real-world instance that I thought was interesting.
Steven Parton [00:10:19] So what's an example of a pre-bunked piece of content? What does the vaccine, the small dose that you're setting somebody up with, actually look like? Is it a 15-second clip that is slightly misleading, or one that actually instructs people on what fake news looks like, or is it kind of a priming exercise? What kind of thing are you really trying to show them in this pre-bunk experience?
Sander van der Linden [00:10:45] Yeah, yeah, that's a great question. Up until now it sounds pretty abstract, so let me give the listeners a concrete example. There are various ways that you can pre-bunk, and it depends on the level of risk that you're comfortable with. It's not like a nudge or a prime; those interventions just kind of ask or nudge people to be accurate when they're on social media, but they don't confer any skills in people. Our interventions are a bit more involved, in the sense that we want to preempt manipulation or falsehoods and actually give people ways to detect them. That requires a bit of time, and that's been a struggle since the beginning: how can you get a 20-minute simulation down to a 30-second video clip and still have it work? To come back to giving you a concrete example: we studied the kinds of techniques people use to mislead others online for years, and we documented them. We call them the six degrees of manipulation, and they include things like polarizing people with polarizing language in headlines, using emotions to fearmonger and cause outrage, building conspiracy theories, trolling people, trying to distort public perception with bot armies, and using fake experts. We have a whole documented list of these techniques. With the YouTube videos, what we did was, we didn't tell people what they need to believe; we didn't talk about any specific issues. The video starts, and it revolves around this idea of a false dilemma. As you probably know, a false dilemma is a situation where you present people with two options while in fact there are more. The reason that's interesting is because there are a lot of so-called YouTube gurus who try to radicalize people with extremist rhetoric.
And one of their most common strategies is to present people with false choices: either you join ISIS or you're not a good Muslim, or, you know, we can't really care about immigrants because we've got a homelessness problem in the United States. Right? So there are these sorts of false dichotomies floated around, and at first they're convincing to people. They're like, oh, yeah. But then when you think about it more, you realize, wait, is that really true? Can't you do both at the same time? And sure, there are some resource constraints in society, but most of it is misleading. Without talking about any of that, the YouTube video cuts straight to Star Wars. I'm not sure if you're a Star Wars fan, but a clip appears from Revenge of the Sith, and Obi-Wan is talking to Anakin Skywalker, who says something like, either you're with me or you're my enemy. And Obi-Wan replies that only a Sith deals in absolutes. Then the narrator explains why that's a false dichotomy and how it is used to mislead people online, and then comes the microdose, giving people examples of these types of things. Then we test people with real social media content, polarizing sorts of false dilemmas, and we find that using that completely inactivated kind of strain does help people spot these misleading techniques in real content on the YouTube platform, for example. So that's a concrete example. Now, you don't have to use Star Wars as the weakened dose. You could use something about immigration or education or climate or health care, and that can work fine. A lot of it is getting better now. I mean, we're working with Meta on pre-bunking climate misinformation on their platform now. But many years ago, a lot of these companies were not willing to say anything about any of these issues.
And that, as a scientist, makes your job harder.
Steven Parton [00:14:25] Is the idea, in essence, to tackle some of the inherent cognitive biases that humans have without directly trying to tell people what they should believe? In a sense, you're almost just trying to empower critical thought before you hand somebody over to something that might be attempting to usurp that critical thought.
Sander van der Linden [00:14:46] Exactly. And I think that's the thing most people across the political spectrum can probably get on board with, and that's also why I've embraced this approach. At the end of the day, we're just trying to empower people to discern repeated manipulation techniques so that people can make up their own minds about what they want to believe or not. And that's totally fine with me. Well, I guess not entirely, but I can live with the fact that if people can identify manipulation techniques and still decide they want to vote for someone, or still decide that they want to buy into something, then that's what it is. At least people are empowered to make that decision, and I think a lot of people will at least think about revising their beliefs once they can see through some of these tactics. And some of it is so well documented; some people don't know that this has been going on for a long time, since the 1800s. You have wonderful examples of paintings where cows are spewing out of people's mouths, which have to do with the first vaccine, which used cowpox to vaccinate people against smallpox. People were talking about how it changes your DNA and you're going to turn into a human-cow hybrid. And, you know, it's the same narratives you see now, the same trope, just 200 years apart. So all that we're doing is exposing these tropes and techniques, and it's a little more specific than critical thinking. I mean, sure, scientific literacy and numeracy are good, education is great, but the way that we've tried to do this is to simulate the types of attacks people might face and then inoculate them against those types of attacks in advance. So it's a little bit more specific, a little bit more about actually exposing people to some potentially threatening information rather than educating them about general facts.
But, not to present a false dichotomy, I think both are laudable goals. Yeah, that's our approach.
Steven Parton [00:16:43] You mentioned there that some of these tropes are almost timeless, in a sense, right? There's always been some form of social influence, some sense of conformity, some sense of persuasion taking place, just as a social species. You know, we love to gossip and navigate these circumstances. But is there something in your mind that makes the digital realm, or I guess the realm of misinformation in a digital space, particularly pernicious? Something that exacerbates it or makes it extra dangerous? Is there something in, like, the metrics of how things are wired that you see as a big problem?
Sander van der Linden [00:17:24] Now? I do think so. I mean, you know, of course, there's always been rumors and unverified stories and people have always dealt with politicians lying and things like that. And so that's, you know, obviously that's that's that has a very long history. But I do think that the way that the online environment has shifted the. Information landscape presents lots of difficulties for people that I think we haven't fully digested. So I think at a very basic level, people have what's called a truth bias and that that kind of makes sense most of the time. So the truth bias is basically that we think most information that's coming at us is true. And so that makes sense when you're in an environment where people are not constantly lying to you or where you don't see a lot of misinformation. And then you have the fact that the online environment has become super fragmented, which also presents difficulties not only for. How people process information, but also for pushing our corrections, because the corrections are not getting anywhere, because the the landscape is so fragment and people are in their own echo chambers that they're actually not seeing the corrections and they're not getting rid of people. And so trying to correct it becomes much more difficult because it's so diffuse. And then I think there's a lot of manipulation that happens that people are not aware of. And I think that is perhaps the biggest difference with what you would otherwise see off line. So take Dark Post, for example. A lot of election campaigns use dark posts with dark ads that appear on your timeline from people that you have nothing to do with, and they only appear on your timeline and not other people's timeline. And you don't necessarily know about micro-targeting, which are reliable weapons of mass persuasion in the book, which is really about. 
It's really about the fact that companies are scraping your digital footprints, and they can predict your gender, your political ideology, your sexuality, your personality, as long as they get enough likes. You know, if you like certain pages, it becomes relatively easy to predict the things that you're into and the things you might click on. The more data they have in the matrix, the better these models become. Some are not very accurate, and some are, based on how much data they can gather on you. But if you have an online presence, if you have cookies, if you're offering up data, then you can be targeted. And I think the problem is that people are not aware of it, not conscious of it. It's one thing to opt in to being targeted, let's say, with book recommendations, right? But if you're being targeted with misleading news ads without your knowledge, that's much more problematic, especially if it influences the way you might vote when you've not consented to it. So I think that's where things get much more problematic: actors can leverage these new tools to influence people in ways that we didn't have before, and for which we have no good safeguards at the moment.
Steven Parton [00:20:18] Yeah. Perhaps this sounds like a naive or even obvious question, but given your study of human judgment and decision making, do you feel that awareness of cognitive biases, awareness of the ways we might be manipulated, really does inoculate us in some sense? Because I feel like it's almost one of those things where facts are great, but they're really hard to make stick in someone's mind if they don't have emotional salience, or if they're not somehow useful for binding you to your social group. Like, you know what I mean? It's often very hard to make these things stick. So in your experience, does awareness of these things really help people go, oh, okay, that's a thing that is trying to manipulate me, I shouldn't pay attention to that; even though I agree with it, I'm going to pretend like it's not real? You know what I mean?
Sander van der Linden [00:21:10] Yeah, well, we know that, at a basic level, awareness of cognitive biases doesn't necessarily fix them. In the book, I give people an optical illusion and then tell them what the illusion is, but, you know, you're still looking at it and you're still seeing the illusion. I think it's similar here in some ways: knowledge does help, but it doesn't fully solve the problem, and you see that in the literature on debunking and fact checking. It has some positive effect, but then a lot of misinformation lingers, and people continue to retrieve false details from memory even when they acknowledge having seen a correction, which is what we call the continued influence of misinformation. That occurs because misinformation activates lots of associations. Human memory is kind of like a spider web, a network with lots of different links and nodes, and once misinformation integrates itself into your mental model of how something works, it becomes like a game of whack-a-mole: once you correct one falsehood, another one pops up somewhere else, because it's all interlinked. So trying to undo it is actually very difficult. One of the things researchers have found is that when you debunk something, you need to give people an alternative. Just saying something is false doesn't work, because if people tag something as false in their memory but don't have an alternative explanation of what's true instead, they're just going to revert back to what they initially thought. But that alternative explanation isn't fully enough in itself either. It also needs to be sticky; it needs to be simple. Science has to be nuanced and complex, whereas misinformation is often simple and has some psychological appeal. So how can you make science sticky and simple?
And then the last factor is that the correction really needs to go against what people want to be true, and that gets at what you were saying earlier: trying to frame corrections in a way that doesn't antagonize people is really difficult. For all of these reasons, I think it's often easier to pre-bunk than debunk, because you prevent some of this encoding, as it's called in the memory literature; you prevent this stuff from being integrated into people's knowledge or mental model in the first place. But pre-bunking also requires that people have some incentive to pay attention and participate, right? And where I think our approach differs is that it leverages, well, two ideas. One is that people have to come to terms with the fact that they might be susceptible to nefarious persuasion, and two, that they perceive some sort of manipulative intent. I think those are really big motivational factors. So part of the inoculation is based around ability: helping people actually become aware of their biases and how to spot manipulation. But the other part is really about motivation. You also have to give people the motivation, and the way to do that, we found in the persuasion literature, is that people start paying attention when they think other people are manipulating them. So this idea of perceived manipulative intent is really important, and that's often why we focus on disinformation and techniques, because that's where people become more concerned. They also need to understand that they're vulnerable. Lots of people are overconfident and think they're not going to fall for fake news, that it's not a problem for them. So some of our simulations try to elicit this effect, to actually show people: we can dupe you. It needs to trigger some level of threat and vulnerability.
And that's part of the inoculation process, and I think part of what helps it be a little more effective: it's not only that you give people these skills, but they also have some level of motivation. We do know that people lose that motivation over time, though, and the inoculation effect does wear off, especially when people come across contradictory information or get social cues that say something else. So we've started implementing what we call booster shots. Just as with regular vaccines, people need to be boosted, otherwise you might lose your immunity.
Steven Parton [00:25:26] This makes me think a little of conformity in general, in the psychological sense, and I guess of intent, of intentional malice. So I'm wondering: we have the experiments done by Asch, where participants are shown several lines of varying lengths and told to match them, and if their peers match lines that are obviously not the same, they will often conform and say that they agree, even though they don't. That makes me think about trending content, trending hashtags, trending ideologies or themes that kind of permeate the Internet. I'm wondering, is more of what you're seeing, in terms of misinformation or this kind of social influence, a result of things like that, which are not really intentional, just an attractor that people slowly gravitate toward over time because they feel it's the standard? Or is it more something that is intentional, more malice based, where you do have these kinds of gurus you mentioned earlier who are purposely trying to seed bad ideas into the informational ecosystem to persuade people or to exploit them financially?
Sander van der Linden [00:26:47] Yeah, I think it's both. We definitely see these kinds of gurus and high-level influencers intentionally spreading this stuff, and it's kind of like a multilevel marketing scheme: there are people on top cooking up the conspiracies, and then there are the people who are being duped, spreading them and suffering some of the consequences. But, to the other point, I think there's also something fundamental about the social incentives on social media that lead people to spread and endorse misinformation, and we've done some research on that. We looked at millions of posts on both Facebook and Twitter, from media accounts as well as congressional accounts, and what we find is that the stuff that goes viral is best predicted by the degree to which the language derogates the other side. So, for example, if you're a Republican, one of the most popular posts was something like, check out Joe Biden's latest brain freeze, and if you're a liberal, it was something nasty about Trump. The idea is that this sort of out-grouping receives a lot of engagement on social media. Negative emotional stuff does too: what we call moral-emotional words, like outrage and evil and pedophile and hate, get a lot of traction as well, but in particular it's this sort of dunking on the other side. This is what we call the perverse incentives of social media: what drives engagement are really just these negative incentives. And that leads to a vicious circle of people sharing the types of content that the algorithm rewards, and people feed into that even though, when asked explicitly in surveys, they state that they don't want that type of content in their feed.
And yeah, we just completed an experiment, which I talk about in the book, on whether it's worth changing some of these incentives. We do find that if you reward people, either socially or financially, if we start paying people, then they're much more likely to give an accurate answer. So that tells us something about the incentives on social media versus when we experimentally control the incentives, and it shows they could be changed. I think that could lead to a much better online conversation and potentially a reduced level of misinformation sharing.
Steven Parton [00:29:10] Yeah, I've been talking to people a lot about this lately, this idea that when we were evolving in smaller groups, there was a lot of evolutionary wiring that made us very sensitive to our social status, to whether we were doing something taboo, whether we were being seen as trustworthy or as a free rider or something like that. And it feels like in the online space, a lot of those checks and balances that we evolved no longer really exist, especially because it's anonymous and there are all these perverse incentives. So, like you said with that example, do you think there is a real possibility that if we shift some of the incentives online, we can maybe rebuild or reintegrate some of that early wiring that made us focus more on treating each other well as a way to maintain status or look good in the eyes of our peers?
Sander van der Linden [00:30:08] Yeah, certainly. I mean, that's definitely the thinking at the moment: it's all about changing these incentives, which would then hopefully elicit different kinds of opinions and behaviors online. It's interesting, because there are some studies that show what they call the "interesting if true" effect: sometimes people share stuff because they think it would be interesting if it were true, even though they might personally not believe it. It's a sort of social thing; they want to pass it on. And, you know, from an evolutionary perspective it's interesting, because deception is obviously also common in the animal kingdom. Prey use a lot of techniques to dupe predators: sometimes they feign death, or they change color. They try to exploit weaknesses in the environment to dupe predators. And I think in the same way, bad actors can now do that online; they can exploit our weaknesses and biases to get people to share misinformation. So changing those incentives might help deter some of that activity.