This week our guest is Renee DiResta, the technical research manager at the Stanford Internet Observatory, a cross-disciplinary team exploring the abuse of our current information technologies. Specifically, Renee investigates the spread of narratives and propaganda across social networks, including pseudoscience conspiracies, terrorist activity, and state-sponsored information warfare.
We focused heavily on these topics during our conversation, exploring how Renee's interest was first piqued by the anti-vaxxer community, how Redditors impacted the stock market with GameStop, Russia's involvement in the 2016 American presidential election, the potential consequences of a certified digital ID, and more.
Want to learn more about our podcasts and become a part of the community? Join here!
Host: Steven Parton // Music by: Amine el Filali
Renee DiResta, Steven Parton
Renee DiResta 00:00
The most interesting dynamic to me still, you know, years later, is the ways in which the internet has enabled networked activism. But more than that, also just kind of persistent crowd dynamics. And what that crowd is pointed at, and who points the crowd at something, is a thing that I'm very, very interested in. Because I think that gets to the real question, the underlying question of power, right? Where do you put all of these people's attention, all of these people's energy and effort? What do you point it at?
Steven Parton 00:42
Hello, everyone, you're listening to The Feedback Loop on Singularity Radio, where we keep you up to date on the latest technological trends and how they're impacting the transformation of consciousness and culture, from the individual to society at large. This week, our guest is Renee DiResta, the technical research manager at the Stanford Internet Observatory, a cross-disciplinary team that explores the abuses of our current information technologies. Specifically, Renee investigates the spread of narratives and propaganda across social networks, such as pseudoscience conspiracies, terrorist activity, and state-sponsored information warfare. We focused heavily on these topics during our conversation, exploring everything from how Renee's interest was first piqued by the anti-vaxxer community, to how Redditors impacted the stock market with GameStop, Russia's involvement in the 2016 American presidential election, and the potential benefits and consequences of certified digital IDs. Luckily, Renee's abundance of knowledge in these areas makes for a well-articulated and comprehensive journey. As usual, you can check the episode description for details on how to connect with more of Renee's work. You can also find in the episode description the link to su.org/podcast, where you can explore your options for joining our community of over 30,000 entrepreneurs, technologists, creatives, and changemakers of all kinds. But for now, let's go ahead and get started. Everyone, please welcome to The Feedback Loop, Renee DiResta. Well, I thought to start, one of the best things to do might be to get a little bit of background on you and your work. Specifically, I'm wondering if you could tell us a little bit more about the Stanford Internet Observatory and what you do there studying the abuse of information technologies.
Renee DiResta 02:36
Sure. So my background: I went to school for computer science and political science, you know, long ago now. When I graduated, I went to Wall Street; that was actually my first real job out of college. I was a derivatives trader, and I did that for a while. Then ultimately I got bored, in 2011. I had been through the financial crisis, I had been through the European debt crisis, and I felt like I had seen all the types of wild, exciting crashes that one person needed to see in a lifetime. So I decided I was going to go back into tech. But by this point, I wasn't as useful as an engineer, so I came out to the valley into venture capital. I took a VC job, and I worked with Tim O'Reilly at the firm O'Reilly AlphaTech Ventures; Bryce Roberts was there at the time. We did a lot of investing in startups where one of the underlying missions had to be doing something good for the world. So it was really interesting to come to the valley, but also to come to the valley in this environment that was very focused on doing good and thinking big. Actually, I went to Singularity University a couple of times for demo day, which is why this is sort of on my mind. We were always really interested in finding things that moved the needle in solving societal problems using tech, as opposed to here's yet another, you know, ad-based social network. And the experience was really remarkable. It was a great way to meet people in the valley and to understand the networks, the dynamics, the culture, the blind spots also. I did that for about three years and then left to start a company in supply chain logistics, because I felt that there needed to be some rethinking of how goods moved around the world; I felt like that was sort of an inefficient process.
And so I started a supply chain logistics company focused on container shipping. Meanwhile, as I was doing this, I had my first baby, and I got very interested in the anti-vaccine movement online. As a pro-vaccine parent, to be clear. But all of a sudden, I felt like I was being barraged with social media posts about anti-vaccine stuff, and I felt that that was, you know, bad, actually. So I started digging around the way that online network dynamics around anti-vaxxers were taking shape. That started as my night project; everyone in the valley has, like, a side hustle, right? That was mine. And so my side interest in just using data science and data visualization, which I was teaching myself more and more of, was really where my interest increasingly started to lie. What wound up happening was that I started doing work to try to understand the dynamics of anti-vaccine misinformation online, across platforms, and that became more and more of a passion area for me. What took me to the Stanford Internet Observatory was that I kept doing that work at night. And then, at a time in 2015, I think other people were beginning to realize that the same dynamics that conspiratorial groups were using to grow their numbers online were also being used by really bad groups, like terrorist organizations. ISIS was operating at the time, and I got this very unexpected phone call asking if I would participate in doing similar research: how do we understand the network dynamics of these messages, how do we understand how these messages spread, related to ISIS? And I said, well, I'm not a counterterrorism expert by any stretch, and they were like, that's not the point.
So I spent some time with a few other folks at the State Department looking at this question. And then I started to feel, as sometimes happens, that my side stuff was becoming more and more my primary area of interest. So that was, you know, kind of a long-winded way of saying that eventually I just wanted to do this full time. I went first to a company that was doing it, called Yonder, and then I went to the Stanford Internet Observatory when, all of a sudden, there was funding and academic interest. Sort of four years after I had started feeling like something was off, there was now a center that we could build and dedicate to studying it. So we work not only on understanding mis- and disinformation; that's part of it, but the framing, the abuse of current information technologies, has really come to mean a lot of different things. Some of what we try to understand is the dynamics of, you know, what happens to the world when end-to-end encryption becomes prevalent? What happens to the world when generative AI produces videos and images and text? How do we think about protecting the rights of individuals, the rights of children, the rights of people who live outside of the areas that Silicon Valley focuses on, whether that be with privacy-protecting technologies, or ways in which social media has a negative impact on their political environment or ecosystem, or ways in which exploitation occurs? So that's, in a nutshell, what we try to focus the center on: not only mis- and disinformation, though that is a very big part of it, but also these other information technologies, thinking again about the internet ecosystem as a holistic system, and, much like the old work on Wall Street, the ways in which different technologies temporarily shift or create problems in the information space, and then how this all fits together holistically.
Steven Parton 08:48
I feel like you're alluding to GameStop there a little bit.
Renee DiResta 08:52
I watched that, and it was really fascinating to me on so many different levels. I mean, I don't know if you want to talk about GameStop; I can definitely
Steven Parton 08:59
Yeah, we can take a quick segue, because that might actually be an interesting point. That is something that was almost, in a way, conspiratorial: this mass of people on an internet forum rallying together to create a real-world change. I mean, what was your take on that?
Renee DiResta 09:19
I thought it was fascinating. The most interesting dynamic to me still, you know, years later, is the ways in which the internet has enabled networked activism. But more than that, also just kind of persistent crowd dynamics. And what that crowd is pointed at, and who points the crowd at something, is a thing that I'm very, very interested in. Because I think that gets to the real question, the underlying question of power, right? Where do you put all of these people's attention, all of these people's energy and effort? What do you point it at? And so, seeing it pointed at Wall Street, you know, I remember Occupy; it was in the park adjacent to my office when I worked on Wall Street, actually. And I remember thinking that there was a lot of energy, and they were right about certain things, but there was no guidance. There was no momentum, there was no ask, there was nothing. There was anger, but that was it. And anger is justifiable and can be used to galvanize movements, but it was very aimless. They were very aimless. So what I thought was really interesting about watching this dynamic was the opportunity for certain influential voices to rise to the top in the community, to some extent democratically. But also, we have to be real: you never know who's in the internet crowd, right? That's one of the interesting dynamics. I was reading these early tweets and articles about it, thinking about it as, like, this populist, anti-Wall Street revolution, you know, fighting against the traders. And I was like, when I was a trader, we read Reddit. What are you talking about? Like, people on Wall Street don't know that Reddit exists? What is this nonsense?
Matt Levine, I think, was the one who did the best coverage articulating how ridiculous that particular aspect of it was. But I think, again, it does show the ways in which bottom-up networked activism can have a real transformative effect today on power structures. And the thing that I'm also very interested in seeing is the extent to which Wall Street, which actually has functional regulators, comes back and does something about it. I'm really interested in what the response to that is going to be: whether they're going to amend their definitions for what manipulative behavior is, how they're going to think about these dynamics going forward, to what extent they're going to try to get in there and figure out who some of those accounts might have been, because some of them were certainly other finance professionals as well. This was not just a, you know, plucky band of outsiders, despite the characterization. And then the other piece that I thought was really interesting was the immediate devolution into conspiracy when Robinhood had to halt trading in some of the names: the assumption that, like, the man had somehow gotten to them, as opposed to an understanding that margin calls are a thing, and brokerages have to have a certain amount of capital on hand and such. And so the willingness to read conspiracy into everything, I think, is one of the real problems in the information ecosystem today, and it was reflected in this particular instance as well.
Steven Parton 12:47
Yeah, I absolutely loved how you framed that in terms of aiming the masses, or weaponizing the crowd, so to speak. And specifically, you explore how that's done through misinformation, disinformation, propaganda, fake news. Could you maybe define those terms for us, and the ways that people might misunderstand them? Because I think one of the ways that this gets muddied is that people really don't know what the difference is between somebody who just posts something that they don't quite understand versus somebody who's actively manipulating people.
Renee DiResta 13:23
Yeah, so misinformation, academically, we use to mean information that is inadvertently spread and wrong. So false or misleading information, but that is spread inadvertently: the person who's sharing it believes it, and they're doing it out of a sense of altruism. So there's no underlying intent to deceive inherent in sharing misinformation. It's just something that's wrong, and I believe it, and so I share it with you because I want you to be aware of the thing in the article. There's no nefarious, you know, bad guy behind the scenes involved in this interaction. With disinformation, the real differentiator is intent. There is an intent to deceive, an intent to influence in a certain way, an intent to achieve some sort of objective, political or economic. So the information is seeded by a person who knows that what they're spreading is false, knows that what they're spreading is manipulative, but, again, is doing it with a particular goal in mind. Propaganda, I use to mean information with an agenda. It is a broad term, and the history of propaganda is absolutely fascinating. The term itself dates back to the Catholic Church fighting the Protestant Reformation, to the Congregation for the Propagation of the Faith. The idea is that you have to convey the information to the people; you have to propagate the correct ideas. And it was not always seen as a pejorative, interestingly; it wasn't really seen as a pejorative until around World War Two. So there's this whole vast history where very respectable figures in media and government speak quite openly, particularly in the early 1920s, about how it is actually the responsibility of government to properly inform the people, to align them around policy.
Because otherwise, society is ungovernable. And so there's this really fascinating literature; if you go back into it, it reads as very, very shocking today. They're articulating this idea that people have better things to do with their time than become experts in all the different policy areas that are going to impact them, and that it is the responsibility of government institutions and the media to convey to them the things that they need to know. We read that today very much as, you know, they're telling us lies, this is manufactured consent; propaganda is a way that powerful entities that control communication ecosystems abuse us, or mislead or manipulate us. But if you go back and read the early literature on it, it was framed so distinctly differently. Some of the work that I've been doing is trying to understand how we contextualize disinformation campaigns today in the context of the way that propaganda operations worked, both what we call white propaganda, attributed or overt propaganda, and then the rest of the spectrum: white, gray, black. On the darker sides of the spectrum, it becomes harder to tell who the message is coming from, and in the darkest phase, the message is actively misattributed to somebody else. So there's this spectrum of different types of propaganda operations that can be carried out, and the question is how we read what's happening on the internet today in the context of these historical dynamics, because propaganda didn't just come out of nowhere with the creation of Facebook. And so some of the work that we do at SIO is to understand the ways in which propaganda has been democratized, meaning that anyone can do this now. Per the point about steering the crowd: you don't have to be a ruler, you don't have to be a person who controls a major media channel. It is much, much, much more democratized.
And much more participatory, meaning ordinary people are very much involved in the sharing process, in the spreading process. And as that evolution has happened, what's interesting is that the technological manifestation is the evolution; propaganda itself is very old. So how does this very old phenomenon modernize in the age of new communication patterns and affordances? That's one of the things that I spend a lot of time looking at.
Steven Parton 17:52
As I'm hearing you say that, I'm thinking, in a sense, that propaganda really just feels like everybody having an opinion and just wanting to share it, right? It's hard to, in a way, separate out the fact that all of us are kind of always using language to manipulate people toward our point of view or perspective. In some sense, it feels like a pretty natural way that we interact; that's kind of our goal.
Renee DiResta 18:19
It's interesting you even say manipulate, because one of the things that propaganda is tied to very early on, again, even back in the 1920s, is persuasion and public relations, right? It's the idea that you should persuasively convey your point of view to win people over to your particular side of a political issue, or, you know, make them love your brand. There are a lot of different ways in which it manifests: advertising, for example. So there is this very obviously manipulative angle to it when it's done through these gray and black, covert, actively misattributed means; that's where someone is actively manipulating you. But there is then also this idea of propaganda as, quote unquote, white propaganda, which is overt communications from state media, for example, and which has another name in the academic literature: public diplomacy. Right? So the state is conveying its point of view to you, and the state has a point of view, and here it is putting it out there, and it's attributed to the state. The question then becomes, how do we process that dynamic in the broader information ecosystem of the day, where there's so much information coming at you constantly that, to some extent, what matters is how that information is curated for you? It's not even, are you seeing propaganda? It's, what is the makeup of the information hitting you? How are people targeting you with their messages, with their information, either organically or through paid communications? And then, in aggregate, what is the effect that this has on the individual, but then also on society?
Steven Parton 20:00
So this is maybe a perfect chance, then, to talk about how that invitation to explore terrorism matched up with the work you were doing at SIO. Maybe you can tell us a bit more about what specific data you were exploring around the Russian involvement in our democracy, how that related to the terrorism tactics, and how this really played out in your personal work?
Renee DiResta 20:25
Yeah, so October 2015 was when we were looking at these ISIS dynamics. And again, I was not the terrorism expert on the team; I was there mostly because of the ways in which groups began to use the affordances. By affordances, I mean technology that is produced for another purpose entirely, but when a tool is created, it can be turned into a weapon, depending on how it's used, just as a really simple example. So the fact that you could run an automated account on Twitter meant that there were lots of artists and really interesting novelty feeds; there was the Big Ben Twitter account that used to, like, bong on the hour and stuff, just these little whimsical creations. That was the nice side of Twitter automation. The bad side of Twitter automation was terrorist organizations realizing that they could completely own trending topics and various hashtags, really dominate what's called share of voice, by running automated accounts. So here, again, we had this affordance where we could run automated communication tools, and at the time, Twitter didn't really differentiate between an automated account and a human account in determining what inputs went into deciding that something was trending. And so trending was just constantly being manipulated by whoever ran the bigger botnet, which has really interesting implications if you think of it this way: trending is supposed to signal to the public that a lot of people care about a thing, and instead, what it's signaling is that the one guy with the biggest bot army cares about a thing, right? So you have these interesting questions around, like, disproportionate reach of someone's speech, if you will: this idea that with the right botnet, you can kind of commandeer the conversation.
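The dynamic Renee describes, trending computed from raw volume being dominated by whoever runs the bigger botnet, can be sketched in a few lines. This is an illustrative toy, not any platform's real trending algorithm; the account names, hashtags, and counts are invented.

```python
from collections import Counter

tweets = []
# 500 genuine users each tweet #news once
for i in range(500):
    tweets.append((f"user_{i}", "#news"))
# one operator's botnet: 50 automated accounts tweet #hijacked 20 times each
for i in range(50):
    for _ in range(20):
        tweets.append((f"bot_{i}", "#hijacked"))

def trending_by_volume(tweets):
    """Naive trending signal: rank hashtags by raw tweet count."""
    return Counter(tag for _, tag in tweets).most_common()

def trending_by_unique_accounts(tweets):
    """Sturdier signal: rank hashtags by distinct accounts using them."""
    seen = {(acct, tag) for acct, tag in tweets}
    return Counter(tag for _, tag in seen).most_common()

print(trending_by_volume(tweets)[0])           # the botnet's hashtag wins on volume
print(trending_by_unique_accounts(tweets)[0])  # the genuine hashtag wins on accounts
```

Counting distinct accounts rather than raw tweets is only one mitigation; real systems also weigh account age, velocity, and automation signals, but the toy shows why the raw-volume input was so easy to game.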
So as ISIS was doing that, they were using what we can point to in the history of propaganda as agents of influence: the idea that you're engaging with somebody who's operating on behalf of something else but isn't necessarily disclosing to you who they're operating on behalf of. With ISIS, because they were trying to legitimize themselves as a nation, if you will, the virtual caliphate, plus the real land that they were working to control, what you started to see was that their propaganda was very much about establishing themselves as a viable entity. They were extremely overt: the black flag was everywhere, they had an iconography, they had a language, they had names. So they were using the affordances of the social ecosystem to almost build themselves into being as this more cohesive, larger organization than they actually were. When we were doing that research, though, we were trying to map the contours and understand what that entity was doing, and one of the things that was happening was that we already knew that Russia was operating on social platforms. Adrian Chen had written this excellent article that I highly recommend, called "The Agency." It was the story of the Internet Research Agency, and he becomes a character in the story; it's an absolutely fantastic read. So the trolls from Olgino, the Russian troll farm, already existed, and anyone who was paying attention to manipulation on the internet knew about it. That includes, of course, the US government. So as we're having these conversations about ISIS, entities within the US government, within the Defense Department, and DARPA, and others, have already realized that Russia is operating on US social platforms in this covert, manipulative way as well, creating personas that look like Americans. So this is a little bit different than what ISIS chose to do with these affordances.
But it's the exact same ecosystem, the exact same tools; it's just that they're using them in a slightly different way. Instead of overtly putting themselves out there, as in, we are going to try to persuade you to our point of view while identifying who we are, what you start to see is, again, using those same tools: the ability to create a persona, the ability to use automation, the ability to reach directly to people, to infiltrate the groups and communities that they participate in, and to serve as agents of influence in this way that is very, very difficult to detect. And it's happening at the same time; the IRA is operating at the same time as ISIS, you just see it inflected slightly differently. The challenge was, there was a belief that there needed to be sort of a center within the US government that was going to be responsible for this, and what came out of it was this entity at the State Department called the Global Engagement Center. It's called the Global Engagement Center because it had to engage globally, because the recognition was that the US government could not counter-propagandize to people who wouldn't listen. You know, nobody gives a damn if the US government tweets at them, don't go join ISIS. Who cares, right? Nobody actually motivated to join ISIS cares about the US government State Department Twitter account. Thanks. So the question became: are there ways for us, as an international body, to address this organization that was spreading internationally, by trying to connect with and amplify the voices of people who could authentically counter that propaganda, by giving the people who were receptive to it another message, another way forward, another way to live?
And so it was really more about finding those other organizations, those partner organizations, to serve as counters, and then lifting up their message and helping them better reach the public, because, again, this was now a competition for attention: whose messages were going to be heard, whose botnet on Twitter was going to get seen. While that conversation about what do we do about this was happening, though, the thing we kept emphasizing, those of us who were more in the weeds technically, was that this was a system problem, and that you were not going to solve it by wiping out one adversary. Because what was confronting us was that the tool of social networks, the magic of human connection, was being co-opted and manipulated by people who wanted to use it for very deceptive and, you know, bad aims. So you couldn't design a counter-approach that focused on the one adversary of the day. You had to think bigger: how might the system be changed to diminish the bad while keeping the good? And this was just a really hard question, because we were saying, look, if ISIS can do this, and we know Russia is doing it, you'd better believe any nation-state is going to find this to be a phenomenally effective tool, and they're all going to get in on it. And since it's so easy to do, and my little anti-vaxxer friends are running their botnets, you'd better believe every domestic faction in America is going to get in on this, too. And then it happened, right? So now, what you've seen over the last few years is, I think, the rethinking of the space, not only focused on foreign adversaries, state actors and terrorist organizations, but also this recognition that it is democratized. Propaganda is democratized.
And what does that mean for us as a society going forward, when there is this kind of constant barrage of messaging, and various tactics to try to capture your attention? That's the information environment and social ecosystem that we're all living in.
Steven Parton 27:59
I think you mentioned at one point that when you were trying to create an ad for vaccination, to kind of counteract the anti-vaxxers, you were actually unable to find a way to target that demographic; it didn't actually exist. So, in a way, you were kind of prevented, I guess, from engaging in the most trending conversation.
Renee DiResta 28:23
Yeah, yeah, there's a really interesting reason for that. So Facebook's ad tool is very powerful if you're a small advocacy organization that needs to reach audiences and grow a social movement; many new, emerging political candidates use it, too. It's incredibly powerful, and you can do very granular targeting. And so what we were trying to do, me as an activist in 2015 in California, was find pro-vaccine parents to galvanize this counter-movement, because you can only do so much by resetting the technical playing field; you do also have to address the substance of the conversation. So we thought, okay, we're going to grow a parent-led, pro-vaccine counter-movement. And I was paying for my own ads; there was no, like, pharma shilling or whatever the hell they always accused you of. I was like, okay, so I have about $2,000 to dedicate to my cause here, so I'm going to run my ads. And then I'm going through the ad tool, and, interestingly, the only targeting remotely related to vaccines was all anti-vaccine targeting, and I couldn't figure out why Facebook would be making it so easy to reach anti-vaxxers. But the anti-vaxxers were doing this; they were running ads. I was seeing their ads constantly. I was being targeted, because I had indicated an interest in the vaccine conversation on the other side, but, you know, whatever; I kept getting these anti-vaccine ads served to me. And what I began to realize was that there were certain categories, like vaccines, that were in there, but then there were these random, small niche pages. Like, you could target ads to fans of Sherri Tenpenny, who's this kind of anti-vaccine quack doctor. I was like, why is that in here? Who the hell cares?
At the time, ProPublica was actually the one that figured out, from the outside, why this was happening: if a sufficient number of people entered a certain word in their interests or bios, the ad tool was just kind of pulling from that, and after it hit a certain threshold, it would serve that interest up as an option in the actual tool. Now, ProPublica was looking at this in the context of phenomenally racist and anti-Semitic terms, right? People on Facebook who held those views were listing things like that as their occupation, like, you know, "X hater." There was some really gross stuff, and that was making its way into the ad tool, because the tool was just pulling from profiles: when a sufficient number of people had indicated something, it became a targetable interest. And so this was where, again, the problem was that there was so much highly active anti-vaccine activism, while no pro-vaccine person who vaccinates their kids and goes on with their day is writing "pro-vaxxer" in their Facebook bio. It's just not a thing that you do, right? Flat earth was my other example of this: nobody is defining themselves as a "round-earther" in their Facebook bio. So there were these certain niche, pseudoscience-y areas where people who were very passionate about them, like, that was their thing, made it their identity on Facebook, and Facebook responded by keying off of that and turning it into a thing where you could actually target that identity with an ad. So that was what was happening. And after the ProPublica article came out, most of those categories went away.
But there was really no good way to run a pro-science advocacy campaign, short of targeting ads at doctors and public health professionals, getting them involved, and then having them reach out to their personal networks. And I was very disappointed in that state of affairs.
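The threshold mechanism described above, where a free-text phrase becomes an ad-targeting option once enough users list it, can be sketched roughly as follows. This is a hypothetical illustration with made-up data, not Facebook's actual (and non-public) pipeline, but it shows why a vocal minority identity becomes targetable while an unstated majority view never appears:

```python
from collections import Counter

def build_targetable_interests(user_bios, threshold):
    """Count free-text interest phrases across user bios and surface any
    phrase that crosses the threshold as an ad-targeting option.

    With no semantic or policy review, whatever a vocal community writes
    about itself (e.g. an anti-vaccine identity) becomes targetable,
    while views nobody bothers to state (e.g. 'pro-vaxxer') never do."""
    counts = Counter(phrase.strip().lower()
                     for bio in user_bios
                     for phrase in bio)
    return {phrase for phrase, n in counts.items() if n >= threshold}

bios = [
    ["anti-vaccine", "homeschool mom"],
    ["anti-vaccine", "natural health"],
    ["anti-vaccine"],
    ["gardening"],  # a typical user never states a 'pro-vaccine' identity
]
print(build_targetable_interests(bios, threshold=3))  # {'anti-vaccine'}
```

The asymmetry Renee describes falls straight out of the counting: the only vaccine-related phrase that clears the threshold is the one the activist community writes about itself.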
Steven Parton 32:23
Were there other aspects of the system, inherent ways that social media platforms especially were built, that made them susceptible to these forms of misinformation and disinformation, that enabled a lot of this behavior?
Renee DiResta 32:40
Again, the recommendation engine was the second biggest thing after the ads, and maybe the biggest organic thing by far. If you followed an account, it would recommend more accounts to you. On Facebook that took the form of groups; on Twitter it took the form of people. And interestingly, this was happening with the ISIS accounts too. There were so many similarities in how the technology didn't understand what it was recommending; it was just supposed to show you something related to your interest. Just for the audience: there's content-based filtering, which says you have this interest, so here are more things related to that interest. You follow an account in a particular topic area, it shows you other accounts in that topic area. That's how the ISIS dynamics took shape. What was happening with the anti-vaxxers and the recommendation engine on Facebook was collaborative filtering, which is: you have an interest, and there are people who are statistically similar to you, who have some degree of similar interests or location or actions, or who read the same things. There's this cloud of data points, and the system can assemble a similarity measure by which it says you are like this person. So even if you have never searched for anti-vaccine content, your interests may be similar enough. For me, I was a new parent, I joined a couple of parenting groups, I made my own baby food, I did some things that leaned toward the crunchier side of parenting. So Facebook flagged me as a crunchy parent and decided that, naturally, as a crunchy parent I was probably interested in anti-vaccine content. And that's not a bad guess, unfortunately, right?
And so if you have an algorithm optimized to show the most relevant, salient thing, again with no ethical sense of what that thing may be, then that similarity measure is correct. It's inconvenient, but it's correct. And so there were indications that maybe we didn't need to be proactively pushing this stuff to people. Saying that in 2018 got a very "whoa, who are we to decide, the slope is so slippery" reaction. And I remembered hearing "the slope is so slippery" when we were having the conversation about what to do about the ISIS Twitter accounts in 2015, because then there was this whole "one man's terrorist is another man's freedom fighter, and if we take down ISIS, what next?" If you say that today it sounds crazy, but that was where the conversation was in 2015. And it's because the platforms were growing larger and more powerful. Today, I think we've all recognized the impact of that power, and the difficulty of moderating platforms that big, because there's the possibility of bad moderation decisions, there's the possibility of deplatforming, and people have come to think of this as the new public square instead of just an app on their phone that they pull out when they're bored. In a relatively short period of time the ecosystem really exploded, and now we're trying to come up with norms, and to some extent regulation and guardrails, after the fact. That's very difficult to do when people have come to expect that things work a certain way, and that these affordances provide a certain degree of ever-reaching algorithmic suggestion and curation.
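The two recommendation approaches Renee distinguishes can be sketched in a few lines. This is a toy illustration with invented data, not any platform's actual system: content-based filtering recommends items sharing a topic with what you already follow, while collaborative filtering recommends whatever your statistically similar neighbors engage with, with no notion of what the items are:

```python
def content_based(followed, topic_of, all_accounts):
    """Recommend accounts in the same topic areas the user already follows."""
    topics = {topic_of[a] for a in followed}
    return [a for a in all_accounts
            if topic_of[a] in topics and a not in followed]

def collaborative(user, interactions):
    """Recommend items engaged with by the most-similar other users,
    with no understanding of what those items actually are."""
    mine = interactions[user]
    def sim(other):  # Jaccard overlap as a simple similarity measure
        theirs = interactions[other]
        return len(mine & theirs) / len(mine | theirs)
    neighbors = sorted((o for o in interactions if o != user and sim(o) > 0),
                       key=sim, reverse=True)
    recs = []
    for o in neighbors:
        recs += [item for item in interactions[o] - mine if item not in recs]
    return recs

# Content-based: following one account in a topic surfaces more of the topic.
topic_of = {"isis_acct_1": "extremism", "isis_acct_2": "extremism",
            "cooking_blog": "food"}
print(content_based({"isis_acct_1"}, topic_of, list(topic_of)))

# Collaborative: a 'crunchy' neighbor drags anti-vax content into the recs
# for a new parent who never searched for it.
interactions = {
    "new_parent":  {"parenting_group", "homemade_baby_food"},
    "crunchy_mom": {"parenting_group", "homemade_baby_food", "anti_vax_group"},
    "sports_fan":  {"fantasy_football"},
}
print(collaborative("new_parent", interactions))  # ['anti_vax_group']
```

The point of the sketch is that the collaborative recommendation is statistically "correct" given the overlap, exactly as described above; nothing in the math knows or cares what `anti_vax_group` is.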
Steven Parton 36:32
How much has changed in the past six years or so? Have you seen much change in how the suggestions work, how the algorithms work, or any kind of policies?
Renee DiResta 36:44
Yeah. In 2019, the anti-vaxxers came out of the recommendation engine, and that was because of a couple of very high-profile measles outbreaks, including one in which about 70 kids died. So there was finally a recognition that this was causing significant harm. Really, it was the definition of harm that changed. It used to be that harm referred to immediate incitement to violence, and the kind of content that would come down was content advocating an immediate harm. The platforms began to change their view of what constituted harm in that period. There were some things with health, right? With health it was the vaccines, the cancer quackery, things where if you serve that content up to people, they would potentially be harmed by it on a personal level, and then public health would be harmed at a societal level. Interestingly, Google Search already had the recognition that harm could mean other things besides imminent incitement to violence. Google Search came up with a framework called Your Money or Your Life, I think around 2013, maybe earlier; it's actually pretty old. They recognized that if you get a new cancer diagnosis and you type in your cancer, and the search results serve you a juice-fast website, that is probably not going to be very beneficial for your health. Your Money or Your Life was a recognition that there was a certain duty of care, if you will, a certain quality they had to return for that information, because people were searching for information to make potentially life-or-death, or financially meaningful, weighty decisions, and they needed to have good information. That didn't translate to the social platforms, because we were never supposed to use them to answer all of life's questions.
They were supposed to be places where you connected with your friends, shared your wedding pictures, and talked about cat names, right? They weren't supposed to be where you went when you needed information about your new cancer diagnosis. But as groups and communities formed there that were focused on those things, all of a sudden people were searching for that stuff, and then the platforms had to think through what it means to potentially harm a community when they're searching for those words: searching for MMR as a new parent and getting a litany of anti-vaccine groups instead of qualified medical professionals. So the determination that was made was, we're going to remove them from recommendation engines, but they're going to be allowed to remain on the platform. That's where they initially went with QAnon, too. There are certain bright-line areas, and then there are these areas that are not quite so bright, and that's where the public conversation is very much around whether people are being unfairly penalized for holding certain beliefs. That, I think, is the real, challenging, dicey question that faces us now. In some ways we've addressed a lot of the low-hanging fruit. Take the policies related to Russian trolls, going back to the quaint old days of Russian trolls: they're still there, they're just not as impactful as they were before, but that's largely because their infrastructure was dismantled, and now Facebook has integrity teams that go looking for them. The investigations teams are looking for these networks of fake accounts, and they come down not because of what they say; oftentimes they'll just grab messaging from some random hyper-partisan political site.
But they come down because they're inauthentic, they're fake, and that's a violation of Facebook's terms of service. So we now have a whole policy for handling these networks of fake foreign accounts that were a significant challenge in 2016; today there are policies that deal with them. Health misinformation, same thing: we now have a lot more in the way of policies dealing with it. That doesn't mean the problem is solved, because in a lot of cases the horse has left the barn; these communities of people have formed and the demand is there. But at least there's no new recruiting, if you will. Now we're at the areas where the lines are much less clear, and this is where you start to get questions like what should be done about QAnon. Facebook made the determination last year to remove all of the QAnon groups from the platform, and it was the dangerous organizations team that made that determination and took them down. The interesting challenge facing us now is that when you try to dismantle groups that have been cohesive for a very long time, they usually migrate. They go somewhere else, more underground, perhaps into more encrypted spaces. So the question becomes: is the trade-off of that dynamic, the creation of the perception that a viewpoint is being censored, and the pushing of those people into less visible spaces, worth it, so to speak, in the cost-benefit analysis of not facilitating additional recruiting or not making it as easy to organize? That's where the front line in the question about platforming, deplatforming, and censorship is today: how should platforms be thinking about that going forward?
Steven Parton 42:24
Where do you fall on that?
Renee DiResta 42:27
It's really tough. I mean, my personal preference... So again, for audiences not as immersed in how moderation policy works, there are three rough buckets of moderation. There's remove, which is what we've been talking about: you take the content down, you kill the account, you remove them from the platform. There's reduce, which is what I was describing: you remove it from the recommendation engine, you throttle its distribution, you make it harder to find. And then there's the last piece, inform, which is where you put up an interstitial or a fact-check.
Steven Parton 43:04
I'm wondering, though, do you have any data on the success of the inform approach, those little flags that let people know?
Renee DiResta 43:14
That's where I don't think we have very good data yet. There are some studies related to fact-checking. There's Brendan Nyhan's work on the backfire effect, but he himself has said that some of the way that finding was processed overstated it; it created the perception that the backfire effect means even alluding to the conspiracy in a fact-check reinforces it. People are moving away from that a little bit now. So there's more of an opportunity to study these things. I think we're going to hit the point where, now that the inform function was used so widely during the election, we actually do have a data set. It's somewhat within the platforms, but Facebook and others are trying to do a little more to help outside researchers, at SIO and other places, do more studies using this data. One of the real concerns is how you protect privacy while enabling this kind of research, so I think there will be some data sets made available to researchers who go through an application process to get them, and who can look at them and try to answer exactly that question: what did this do during the election? Did people click on the inform link? Was there a decline in shares when the interstitial was put up? If you have to click through to see a video, do people still click through?
Steven Parton 44:57
I was gonna say, sorry: my intuition would make me think it would work somewhat, right? Because a lot of the power of what we're talking about here is that you're really hijacking people's social insecurities. And if you were about to share something that was labeled "this could be a lie," I think it would make people pause a little before they reposted it, with this thought in the back of their mind: maybe I'm sharing information that might be a lie, and maybe that would look bad on me as somebody with integrity. Maybe it would slow things down, at least for the people on the fringe.
Renee DiResta 45:36
I think it will, right, for the undecideds. It will be interesting to segment that by whether the information is about health, something demonstrably false, versus political sharing, where there's so much of a signaling function that goes into it, where you're really just saying, "I'm on this team." There have been heavily edited videos that came out, like the one of Nancy Pelosi where they slowed it down and made it look like she was drunk. Do you remember this?
Steven Parton 46:06
No, I feel like I'm gonna look it up.
Renee DiResta 46:09
I know, this is the thing: I feel like I spend all my time looking at wild BS.
Steven Parton 46:12
You're like the perfect redditor, really.
Renee DiResta 46:15
I really am way too online. But there was a video that went viral of Nancy Pelosi, slowed down to make her look like she was slurring her speech, and it went out as "Drunk Nancy" or something like that. And when people ask, "why did you share that video?", sometimes in news coverage, sometimes just people asking on Twitter, they'll say, "Because I hate her. It doesn't matter if it's true. I hate her, so I shared it." So I'm curious to see if there are differences in dynamics between what you're saying, not wanting to be the person who shares factually inaccurate pandemic information, versus how that commitment to facts intersects with political identity and, yeah, sort of political tribalism.
Steven Parton 47:14
Or just the lure of being somebody who's doing something off the mainstream. People love it. That idea I had right there, my intuition, might be totally counterbalanced by the fact that people love the bad boys. They love the things that are maybe not acceptable: "I like that because it makes me unique."
Renee DiResta 47:35
There's a really interesting dynamic on the Russia stuff. People tracked down some of the folks the Russians were actually dealing with, because the Russians were in the DMs trying to get people to, for example, hire a Hillary Clinton impersonator for a protest and put her in a mock jail on a flatbed truck. That was one of the things that happened. They were DMing people to try to interfere in these protest movements and create big spectacles, or offering Black Lives Matter activists money: "Can we send you money for posters? How can we help your protest?" They were doing this for all the different issues and sides and groups they interfaced with. As some of this information came to light and became public, there was an interview with a woman whose first response was, "No, there's no way they were Russian," which is fair, honestly. That disbelief response, hearing that this even happens, seems like a wild claim. And then you see the realization hit as they show her the citations from the DOJ saying this happened. But then a lot of the attitude turns into, "Well, who cares if it was the Russians? The Russians are amplifying a point of view I already agree with, so who cares?" It was really seen as: it doesn't matter who the speaker is, as long as I agree with the message. So that's kind of an interesting dynamic as well.
Steven Parton 49:13
Yeah. Real quickly: what was the scale of the Russian infrastructure you talked about earlier, the one that collapsed and slowed them down? How many people do we think were actually involved in this coordinated effort?
Renee DiResta 49:29
So, interestingly, our understanding is that the troll factory operated with a couple thousand people, of whom a few hundred were on the American desk, if you will. The American desk was the highest order: you were the best, you had fluent English, you were good at making your memes, et cetera, because the Internet Research Agency was operating globally. They were running operations in Ukraine as well. So, like a marketing agency, they had different focuses. The question of impact is a really interesting one. It takes years to build up plausible accounts and personas; it's a very long game. One of the things we see is that Russia consistently puts in the effort: constantly creating accounts and building up their credibility until they're put to use for the operation they're needed for. Versus China, where we see state-attributed information operations talking about the Hong Kong protests, the Taiwan election, COVID, and Chinese dissidents they don't like, and they use really flimsy, garbage personas. They don't invest multiple years in building up a persona over time. A lot of times the accounts are relatively new, something like "Sarah1248678," versus the Russian personas, where there's a biography, there's a character, and they live that character. It's much more of an agent-of-influence model, because this is a persona the same way you would have a person serving under an alias in another country during the Cold War. You're effectively replicating that model of persona creation and the relationship-building that goes into it.
Right, these personas aren't just supposed to be broadcasters; they're supposed to form relationships with people, to look like them, speak like them, act like them, and that dynamic takes a much longer period of time. So, to make sure I answer your question: with Russia, the accounts they were using in 2016 and 2018 were created back in 2014 and 2015. It's a much longer game. They have since reconstituted other accounts; we do see newer accounts that are technically linked to Russian state-linked actors, but they don't have that entire infrastructure behind them. And they weren't really trying to hide their stuff in the old days, that's the thing: with the accounts that came down, once you had half the network, you could get the rest of the network.
Steven Parton 52:20
And this is going to be an impossible question, I'm sure, because if you could figure this out you'd probably be the most popular person in the world right now. In a perfect world, what is your solution going forward for this kind of stuff? What would you love to see happen that you think would assuage some of the issues, or at least increase the integrity or efficacy of the system?
Renee DiResta 52:46
Yeah, I stopped talking about solutions, which made me slightly unpopular on this question. That's because I actually started thinking about it more as managing a chronic condition: something for which there is no cure, but with which you can still live quite well, assuming you get the condition down to a manageable degree of control. I don't think you're ever going to kick every manipulator off the internet, or every foreign troll, or every domestic operator who runs manipulation campaigns. Where we are is, I'd like to see more thoughtful product design. Within tech platforms we're starting to see more red teaming, where they think through: when we release this feature, how will it be abused? Not will it be abused, but how will it be abused? I always think that kind of work is like security research, the same idea that you'll have someone pen-test your networks or think through all the ways you have security vulnerabilities. We can be doing that in the information space as well: when we add new affordances, how does the new affordance change the state of play across the internet ecosystem for those who wish to manipulate it? We can think about that proactively, as opposed to releasing it and then throwing up our hands and saying, "Who could have thought?" I also like the idea of friction, the injection of friction into the information ecosystem, and this is possibly an unpopular point of view too. But to defend it, I think the problem a lot of the time is not the content, not the message; it's the amplification, it's the virality.
These things spread. Again, the idea that you can throttle a video and slap an inform link on it lets us inject a little friction at the moderation level, but there's friction at the user level too. You may have noticed that Twitter now pops up a little message if you hit the retweet button before you've actually clicked through and read the article: do you want to read the article first? I think we'll get some interesting data on what that experiment shows. Does it reduce shares? Do people actually go click through and read it at that point? So there are ways to incorporate friction to slow the speed of information a bit, in the hopes of increasing its accuracy, and that's an experiment I would like to see conducted. Tools like that, not being able to share until you've read, or getting a prompt when you try to share before you've read, or being made to copy the URL, are potentially interesting.

And the last thing, I think, is curation. The problem of finite attention means there's a constant need to create sensational headlines, to do things that capture attention. Right now the platforms generally rely on engagement as the determinant of what they show you first, what they rank at the top of your feed. I think there are opportunities to rethink how we curate information, to create perhaps a healthier information environment that uses signals other than whether this is the highest-engagement content to decide what we see. Putting some control in the hands of users would also be very helpful, so people have a little more control over what they're seeing, or at least understand why a post was surfaced. So those are the three things that I think can lead us toward a better information environment, without over-promising on the belief that it's going to fix everything.
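The read-before-sharing friction Renee describes can be sketched as a simple gate in the share path. This is a hypothetical client-side check for illustration, not Twitter's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """Tracks which article URLs the user has actually opened."""
    opened_articles: set = field(default_factory=set)

def attempt_reshare(session, article_url):
    """Return the action the UI should take when the user hits reshare.

    If the user never opened the article this session, interpose a
    friction prompt instead of sharing immediately; otherwise share."""
    if article_url not in session.opened_articles:
        return "prompt: want to read the article first?"
    return "share"

s = Session()
print(attempt_reshare(s, "https://example.com/story"))  # friction prompt
s.opened_articles.add("https://example.com/story")      # user clicks through
print(attempt_reshare(s, "https://example.com/story"))  # share
```

The design choice is that nothing is blocked; the share is merely delayed by one deliberate step, which is the "slow the speed of information" idea in its smallest form.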
Steven Parton 57:04
Yeah, I love your acceptance of the battle we're in here. I don't think those are unpopular opinions; I think those are great ideas. I really do like them, and I'm not being facetious. I think it's great that it owns the fact that this is, in a lot of ways, the human condition, and just something we're going to have to learn how to respond to.
Renee DiResta 57:27
Well, in my limited free time I'm actually most interested in old propaganda books, old books on what this looked like then and what we did about it, so that we can think through this as the next iteration, as opposed to a brand-new problem. What are the ways in which the technological affordances have made it different, potentially worse? And how can we rethink those specific affordances, as opposed to operating under the misguided belief that we're going to remove all false information or all propaganda from the internet, which we will not?
Steven Parton 58:05
Yeah, I'll save the Edward Bernays conversation for another time. I want to get one more question real quick from you and then let you go, because I know we're running pretty long here, but this one's kind of for me: do you support the idea of a digital ID online at all?
Renee DiResta 58:23
That is such a question. I've talked about this a lot with Balaji, actually. I'm really intrigued by it, so I'd have to say yes, I do. Not as a mandatory thing, but as what the blue check is supposed to be, in a sense, or what it originally was before it became a marker of "this is an impressive person you should pay attention to," when it was really just "this is who this person says they are." That idea, this is who the person says they are, I think is really fascinating. We don't need homogeneity across the internet; there are going to be platforms that allow for varying degrees of who you are. I remember hearing moot speak back in the day: it's not who you share with, it's who you share as. I think that's a really interesting idea, that there are certain spaces where you share differently, because it's a different facet of your personality or identity. One of the things I really like about Reddit is persistent pseudonymity, where it's not a throwaway alias you just recreate, because people can see if you're a brand-new account with nothing behind it, and it actually throttles you, right? You can't just go post. I think that makes sense. It facilitates good communication, and it facilitates better community that's not overrun with spammers and trolls and garbage accounts. So I think that was a good policy, and persistent pseudonymity is a really interesting idea, because it at least suggests that there's the same person behind the scenes attached to that identity. There will be certain spaces people gravitate to when they want to know they're engaging with real people, and I think this is going to become more important as we have generative AI.
What I mean by that is, as AI begins to generate text, these campaigns will become a lot harder to detect, because there won't be that repetitiveness, that combinatoric repetition of phrases, that you can see when you look at these campaigns today. It creates an environment where there can be low-key, constant chatter produced by machines that creates the perception that a lot of people think or feel a certain way; there will be a lot more of that generated. So the idea that you can participate in a space where you know you're engaging with real humans becomes a lot more important in the next five years, maybe. And I do think there's something to be said for having a marker that you are real. I don't think making it mandatory is where we want to be, but I do think creating that infrastructure is very valuable going forward.
Steven Parton 1:01:27
Yeah, I agree. I love the idea of it being non-mandatory, but if you had that option, you could see a space created where you could expect more honest conversation, because people would be held accountable for their ideas. To close out, I just want to give you the chance to tell us about anything you'd like to share with the audience. Is there anything you're working on, new research, the Plandemic stuff, anything?
Renee DiResta 1:01:54
Yeah, we're putting out our Election 2020 report. It's long, but it's in chapters, so you don't have to read it all. It looks at the narratives that shaped the 2020 election, beginning about two to three months before and going up through January 6. It gets into the narratives themselves, an understanding of what hashtags and other things were moving the conversation. It goes into the dynamics by which they were shared, so for people who want to understand how information hopped in very specific ways, like how a meme went viral, we walk through some specific, interesting case studies on things like Stop the Steal. And then, for the techies, it layers on how policy shapes propagation: how tech platforms actually changed their policies to respond to new and innovative challenges that emerged during the 2020 election. So it's really interesting stuff, and it'll be publicly available.
Steven Parton 1:03:04
Wonderful, we'll include that in the show notes. And that's going to be a publicly available report. Fantastic. Renee, I want to thank you so much. This was a fantastic conversation, so informative, and I really appreciate you taking the time.
Renee DiResta 1:03:18
Thank you. It's great to chat.
Steven Parton 1:03:20
And now we're going to take a moment for a short message about our membership for organizations, which you can find by going to su.org and clicking organizations in the menu.
Singularity Group was founded upon the belief that the world's biggest problems represent the world's biggest opportunities. Our mission remains unchanged, but our methods have evolved exponentially. Today, we're opening doors around the world as a digital-first organization. We invite future-thinking companies to join Singularity Group to learn about the breadth of exponential technologies, to empower your organization with an abundance mindset, and to grow networks that can create solutions to humanity's greatest challenges. With an unprecedented year behind us and many great challenges ahead, leaders across the globe are wrestling with the future: how to embrace change, stay ahead of trends, and build sustainable businesses. We help entrepreneurial leaders better understand how exponential technologies can be applied in their companies to advance their goals for people, planet, profit, and purpose. And it all starts with the mindset, the skill set, and the network. Together, let's discuss how membership can turn you and your leaders into exponential thinkers and prepare for an abundant future for all. Together, we can impact a billion people.