
Digital Trust & Safety

August 8, 2023
Lauren Wagner


This week our guest is investor and researcher, Lauren Wagner, who has extensive experience shaping the trust and safety protocols at some of the world’s most influential platforms and institutes.

In this episode, we explore the lessons Lauren has learned from her time at Cornell, Oxford, Meta, and Google, and how that's shaped her current approach to policy building. This takes us on a tour of the impact of free speech, community building, social media's impact on polarization, governmental regulation, and much, much more. Lauren provides a unique and candid insight into what it's like working at the crossroads of societal well-being and the tech industry.

Find out more about Lauren and her work via twitter.com/typewriters


Learn more about Singularity: su.org

Host: Steven Parton - LinkedIn / Twitter


The following transcription was created automatically. Please be aware that there may be spelling or grammatical errors.

Lauren Wagner [00:00:01] So this is, you know, a rapidly evolving industry. Companies are trying to keep pace with needs, and there are a variety of conflicting incentives. But at this point, we've gone long enough. We've had enough elections to know what the problems are, what needs to be addressed, and whether that comes from the government or that's a company self-regulating. We need more of a professionalized industry at this point. 

Steven Parton [00:00:39] Hello, everyone. My name is Steven Parton and you're listening to the Feedback Loop by Singularity. This week our guest is investor and researcher Lauren Wagner, who has extensive experience shaping the trust and safety protocols at some of the world's most influential platforms and institutes. In this episode we explore the lessons that Lauren has learned from her time at Cornell, Oxford, Meta, and Google, and how that's shaped her current approach to policy building. This takes us on a tour of the impact of free speech, community building, social media's impact on polarization, governmental regulation, and many, many more topics in this domain. Lauren provides a unique and candid insight into what it's like working at the crossroads of societal well-being and the tech industry. And I hope that you will enjoy this conversation as much as I did. So with that, please everyone, welcome to the Feedback Loop, Lauren Wagner. I want to start with what I would call your enigmatic but impressive history. So you did public health at Cornell. You did social science at Oxford. Then you went on to work with Google and Meta. And there's an obvious throughline here that one could assume, but I would love if you could give us your own explanation for what kind of motivated you down this trajectory. Was there something in you all along that you knew you wanted to pursue, or were there some epiphanies along the way that you started to latch on to and realized, oh, this is a thing that I really want to explore more of? In other words, what's driving you down this, you know, really impressive path in this domain?

Lauren Wagner [00:02:26] That's very kind of you to say. But yeah, so it's more the latter. So epiphanies, if you want to call them that, throughout my career experiences. I mean, I'm a pretty mission-driven person, and my parents are both physicians. I grew up around health care, so I always wanted to heal people, or be a healer, or be working in health care as a doctor, etc. And so that's what led me down the path of public health. And then simultaneously, I started doing research in the Department of Communications at Cornell to understand how the media influences health behaviors, specifically looking at anti-smoking television commercials. So if you were testing five different versions of a commercial, what would make people more likely to stop smoking? And I think that was really the genesis of this combination of interests, which is the persuasive power of technology and media applied towards a pro-social mission, which in this case was health care. I'd been interested in technology my whole life. My dad was very into computers. I would make him drag me to — there was this museum, the Sony museum of wonder in New York City, that had robots, and you could see all the new Sony products coming out. We'd go to the Nike store to get my foot digitally measured when I was seven, when I was buying sneakers — I forced my dad to take me to get my foot digitally measured. So there was always this fascination with, like, technology and what's new and what's emerging. And I think at that point, when I was in university, it became a little bit clear that I could combine these interests in some way. It was still unclear how. At Oxford that solidified more, in the sense that I found my own area of research around how to configure online social networks to influence certain behavioral outcomes — still primarily around health, but more in the wellbeing space — which is very much related to the work I do now combating misinformation.
So that's kind of the start of it. Happy to go into more detail. I ended up spending half my career in digital health and building digital health startups, and then moved over to Google and Meta, and to the work I'm doing today. Yeah.

Steven Parton [00:04:36] Do you see a lot of the techniques that were discussed along the way in your education as ones that are now being implemented by, you know, run-of-the-mill companies, by influencers and by other people? Like, are those health messaging techniques that were supposed to help people live healthier, better lives now ones that are maybe used more by people who have kind of exploitative monetary aims?

Lauren Wagner [00:05:01] Yeah, 100% agree. And looking back at the things that I was interested in at the time, they would have seemed like entirely disparate interests, but now they've really converged in terms of the career I've been able to build and the impact I'm able to have, or trying to have. So, a few threads I can pull on. One, as I mentioned, I was very interested in the persuasive power of technology and media, which kind of centered for me on propaganda and film. So I ran a cinema when I was in university, I studied the history of Soviet propaganda — Lenin invented the montage — so how you splice together different pieces of media to achieve a certain outcome in terms of your audience and make them understand certain ideas a specific way. That is kind of the genesis that led to where I am now. Another thread I can pull on is human-computer interaction. That's essentially what I was studying at Oxford. So how do you design online environments? How do you evaluate online environments? And a lot of these outcome metrics are things we look at when we talk about topics like misinformation. Does this person feel a sense of belongingness? Do they feel more lonely? Like, when you're building an online community, how does that impact the end user in terms of how they think about it or how they think about themselves? And what got me interested in that was kind of this combination of media and health care. At the time — this is again, like, 2007, 2008 — I saw a documentary film about people who were engaging in self-harm. And a point I found very interesting in that film was that most people learned about self-harm, or techniques to self-harm, through online discussion forums. And that was the thing. I'm like, that's incredible. Like, this whole movie is about people in the real world and their parents, and yet they're going online and understanding how to do this.
And so I went down a rabbit hole of health discussion forums, and that's what I was studying, in part, at Oxford. So all of these, as I said, seemingly disparate topics kind of converged. And that's a combination of both my personal interests, but also the market shifting in a way, and certain events happening politically, macro, etc., that led to me being able to do this work.

Steven Parton [00:07:23] Yeah. How do you reconcile that? I guess it's kind of a typical argument at this point, but in a sense, that idea of free speech versus the harm that results from viral ideas that carry a lot of negative valence, you know, things that harm people. How do you reconcile the fact that it's good for people to be able to talk about things like, you know, feeling down to the point where maybe they're suicidal — and maybe there's some comfort people find there — but at the same time, that might become a repository for people to share techniques or promote, you know, self-harm? Is there a balance there that you've found in your own work that you think could be beneficial to navigate those waters?

Lauren Wagner [00:08:07] It's a very challenging question. 

Steven Parton [00:08:08] It's a very challenging question. I realize that. 

Lauren Wagner [00:08:11] Yeah, that's what I'm trying to help address or solve, I guess, right now with the work I'm doing at the Berggruen Institute, which I'm happy to get into. But I think the lines are always moving. What I hear now in terms of people speaking about free speech is absolutist: people should be able to say whatever they want. But that doesn't always yield the best user experience. I think you have individuals speaking about their own experiences with online platforms, and it's not the same for everyone. So I wouldn't say there's a hard and fast line beyond what's legally acceptable. I mean, there are rules that have been put in place by regulators for what you can and cannot say, or behavior you can and cannot engage in. But beyond that, the lines are always moving. And so I think now what I'm most interested in is figuring out how to professionalize trust and safety, which is this area of tech that deals with combating online harms, and then also what tools do people need, or what training do people need, to be able to evaluate this in a rigorous way — where it's not just individuals saying, I think X, Y or Z, and we could actually have a transparent process or dialog around this.

Steven Parton [00:09:28] Do you think the impetus lies more on the websites, on the legislative bodies overseeing the websites, or on the cultural zeitgeist and kind of just the norms of our society — that, you know, if you're going to be on this platform, we expect you to behave in a certain way? Is there a certain weight that you would give to each of those? I don't want to say blame, but I guess just responsibility. Like, where does the responsibility lie, in your eyes, to kind of help move that conversation in a better direction?

Lauren Wagner [00:10:05] I think it's a combination of all of those. Ideally, the laws that are developed reflect the values of that society. So there's a question now of: is our government developing laws that broadly reflect the values of our society? And can they do that in a way that keeps pace with technology? That's an open question. I mean, ideally, you would have a perfect process there, and people would be able to draw the lines and kind of understand and reflect values, but it's difficult to do. Companies have their own motivations — obviously a profit incentive, delivering value to shareholders, but also providing a good and safe user experience. So there, too, I think people sometimes lose sight of maybe the latter: that yes, you're trying to make money, but you also want to retain users and grow your user base, and you need to create a safe environment in order to do that. So it's definitely a hard, hard problem.

Steven Parton [00:11:08] Yeah. Yeah. I realize I'm asking you some questions here that, if you could solve them, would be Nobel Prize-winning answers. But, you know, the speculation is fine. And you do have some unique insights, one of which was your experience at Meta — you know, also known as Facebook. Specifically, you were working on a data transparency and privacy team during a time when there was a lot of backlash against social media, and against Facebook in particular. What was it like for you being kind of inside this machine that from the outside was being faced with a lot of hostility and suspicion?

Lauren Wagner [00:11:54] Yeah. So just to give a little bit of context: I was working at Google at the time, when Meta recruited me to join the team to build products to combat misinformation at scale. This was — I joined in September 2019, so a little bit over a year before the US 2020 election. In my interview, I spoke about, you know, researchers I'd worked with at Cornell, referencing books, social science, my experience in go-to-market, etc., thinking that the team really understood what I was talking about, that we were all on the same page. And so I joined the team and pretty quickly realized that not everyone had the same background or training that I had. So I think understanding where people were coming from in terms of their perspective, and what they were trying to achieve, what their motivations were, was pretty interesting to see firsthand. So there were kind of the internal politics of these teams, which I would say were quite different from my time at Google and certainly from my time working for startups. And then that was coupled with, as you said, kind of the external environment and how the outside world perceived the work that we were doing, and trying to kind of meet the needs or achieve the goals of leadership while being on the ground and trying to set policy while everything's in motion. So — happy to dig into any of those. But yeah, pretty wild experience.

Steven Parton [00:13:29] Well, I guess I'm wondering if there were certain insights or perspectives that were kind of clashing there. Like, for me, I've had the fortune of interviewing some people who are, you know, in the public eye in a pretty severe way. And I've gotten to know some of them more closely. And then I would see things come through, you know, newspapers, through social media, etc., where I basically know that the public perspective is wrong, or that it's a lie. Right? And so there's this knowledge that you have in certain circumstances — because of you working for a company like Meta, or knowing certain people — where you get to see something that everyone else seems to perceive differently. And I guess I'm wondering if there was some of that going on for you. Do you think that Meta was, you know, truly trying to resolve these issues and making these really good-faith efforts, even as most people were accusing them of doing otherwise?

Lauren Wagner [00:14:30] Uh, yes — I'll phrase this delicately, because I think that my perspective is probably a bit different. Like, I can't say, you know, unilaterally that Meta had great motivations in doing this work, that everyone's trying their best and kind of that's the end of it, that they should be free from criticism. Taking a step back and understanding kind of the history of these teams, and the history of trust and safety at large platforms, I think is interesting. 2016 is really an inflection point, where leadership starts taking this quite seriously: that it's harming the brand, it's harming users in the sense that they don't feel that it's a safe place, or they feel that it's bad for society. Shareholders are upset, board members are upset. So in 2016 you see companies like Meta starting to take action and build these teams. But at the end of the day, they're not revenue-generating teams, right? So think about who goes to work at Meta to work on a non-revenue-generating team, where the targets are not clear, in a very competitive environment where the goal is to advance and have metrics and numbers to be able to kind of move up the ladder and grow your influence internally. So thinking about who goes to work on these teams, and what their experience is and what their expertise is — I think that was most surprising to me, coming in from the outside and having access to social scientists and this body of research and having worked at other large companies. So I wouldn't say they shouldn't be criticized. But from my point of view, I think the real work needs to be done on creating some sort of rigor or process in who is brought into these roles, who is given the power to make these critical decisions, and doing that in a way that is understandable both internally and externally.

Steven Parton [00:16:30] Yeah. Did you feel like there was maybe a lack in the number of people who kind of came from your background in public health and the social sciences? You know, this is something that I run into a lot in these podcasts — there's a lot of wishing that more philosophers and, like, social scientists and psychologists were in these companies, not to manipulate people to click on buttons, but to, like, have an honest conversation about impact. But did you see many people who shared your background in that regard, or was that kind of lacking?

Lauren Wagner [00:17:05] No, I certainly did not. And the folks who had this background — a lot of them were in UX research, which is more of an evaluative role within the company. Whereas my job was to figure out what products to build and then get those into the hands of the people who need them and fuel adoption. So in terms of product building, from inception and then go-to-market as well, there were not many people who understood social science, or understood the research that was really informing the product development that we were doing.

Steven Parton [00:17:38] And that was problematic to you?

Lauren Wagner [00:17:41] Yeah, entirely. Like, I mean, it's not anyone's fault, but I think that more work and thought has to be put behind training for these roles. Like, when I left Oxford in 2011 — and I studied the social sciences of the Internet — data science was not a job. I could not go and apply for a data science job. So this is, you know, a rapidly evolving industry. Companies are trying to keep pace with needs, and there are a variety of conflicting incentives. But at this point, we've gone long enough. We've had enough elections to know what the problems are, what needs to be addressed, and whether that comes from the government or that's a company self-regulating, we need more of a professionalized industry at this point.

Steven Parton [00:18:26] Yeah. Well, you know, potentially to defend that a little bit — I definitely was not trying to attack them there, per se — but I believe a paper just came out last week. Maybe it was the New York Times who announced that something like 200 million Facebook users were researched, basically. And they found —

Lauren Wagner [00:18:45] That was my project. 

Steven Parton [00:18:46] Yes. Okay. So the report basically was that the algorithm didn't change beliefs or increase polarization. Right? Is that accurate?

Lauren Wagner [00:18:56] Um, yeah. I mean, there are going to be many, many papers that are published, so you may see some conflicting findings. I think that the fact that we were able to do this in the first place is pretty incredible — and to ship this data with a privacy layer, to the extent that, you know, external researchers can analyze it in a way that doesn't put users at risk. And then there's also the question of replication. So other researchers have to be able to access the data so that they can replicate studies; that's how academic research works. So it's quite a large undertaking. But yes, if we're looking at the US 2020 election, some of the findings did show no effect — that it doesn't have as much of an effect as people thought. But these conversations about filter bubbles and Facebook's impact on people's beliefs and political beliefs and voting, etc., etc. — it really depends on the population you're studying. And I feel like for any research conclusion that someone draws, someone can offer a counterpoint and say, but in this geography, or for this person, it actually did the opposite. And so it's great to have, you know, volumes of data so that we can study this in a bigger way and draw more meaningful conclusions. But I feel like the conversations are very, very circular at times, which is actually one of the reasons why I ended up leaving Meta. But yeah.

Steven Parton [00:20:21] Yeah, fair enough. Do you — I guess, do you feel like that is the case, though? I mean, in your own personal opinion, this first bit of data aside, do you feel like social media has played a role in changing beliefs and increasing polarization?

Lauren Wagner [00:20:40] I mean, I don't want to speak to the research, because I'm not a social scientist day to day anymore. But from folks I've spoken to, there do seem to be limited effects from different types of media, whether it's social media, television, etc. So I would encourage people to just read the study. I don't want to call it one way or another, but for me, the real focus is on platform transparency. And if this kind of work is going to be a way for people to feel more comfortable with social media or other emerging technologies, or to figure out ways to better regulate technology — like, I want to focus on those problems and how we get that set up, rather than the specific conclusions from studies, if that makes sense.

Steven Parton [00:21:29] Absolutely. Yeah. Well, let's jump to your current fellowship, then. I mean, you said that this was kind of one of the focuses of that work. I guess: where do you see things now in terms of making sure that trusting relationship exists with platforms and we have that transparency? How far do you feel like we have to go?

Lauren Wagner [00:21:58] That's a really hard question. 

Steven Parton [00:22:01] I mean, what are your current, I guess, complaints with maybe where we are now? Maybe that would be easier. Like, are there things we're doing at this moment that you're like, this is still problematic?

Lauren Wagner [00:22:13] Yeah. I mean, I think a lot of the solutions that have been proposed are steps platforms might take, primarily on the self-regulation front. It's an evolving process, right? So one piece I already spoke about was the professionalization of trust and safety. You mentioned: what do people maybe not understand about what's going on at the large platforms? I think there was a sense, when I was at Meta around the 2020 election, that a lot of the guidance came from senior leadership — like the C-suite had very concrete ideas about what should or should not be allowed, and that kind of came from the top; this idea of, you know, decrees sort of coming down. And I can say that that was not the case when I was there. At times it was even like: someone in the C-suite gave a speech, and there was one line about free speech, and policy folks would say, okay, let's index on this. It's like, did you validate that? It was one line, and you don't even know who wrote the speech. Yeah. And these ideas would literally be translated into content policy around what is and is not allowed. So one misunderstanding, maybe, is that oftentimes it's kind of middle folks and below who are coming up with these ideas, and then it's run up the ladder to leadership, and then they approve or disapprove. So it's not this kind of master plan about what should be happening. It's very iterative, very dynamic, and often not the most senior experts who are coming up with these ideas and then having them implemented ultimately. So that's one piece. I think transparency is a big topic nowadays, especially with this kind of study — this idea that, well, if platforms made data available to researchers, they would be able to hold them accountable for potential issues or harms or whatever.
My understanding is that there are not many computational social scientists in the world who are able to analyze this data effectively. So even if you made it available, and you added privacy layers and ensured that it wouldn't put anyone at risk or that adversarial attacks couldn't get at it, whatever it is — who's analyzing the data? And then it's a question of, okay, how do you train the right people to be able to analyze the data? Oftentimes, even the professors leading these massive studies don't have the analytical skills, and they have postdocs or graduate students doing the analysis. So these are quite complex issues, and there isn't really a silver bullet. And you get a lot of loud voices saying, like, if only this happened, then we'd be able to do X, Y, Z. But it's a bit more complex than that.

Steven Parton [00:25:13] Yeah, certainly. Do you think there's any value in kind of nipping it in the bud, I guess, by maybe stopping the flow of data in the first place? Like, do you feel like maybe we allow too much collection of users' data?

Lauren Wagner [00:25:31] I mean, there are rules about how long you can keep data. There's GDPR. So there are rules being put in place for that, so I can't speak to that. But I will say that personalized online experiences — like, all of these things require data. And now, you know, large language models are being trained on data. So I don't know. It's a hard question to answer.

Steven Parton [00:25:54] Well, maybe let's jump in that direction a little bit more, then. You know, with Responsible Innovation Labs, you're building, I think, what is arguably the first AI protocols for startups and investors. What does that look like, exactly?

Lauren Wagner [00:26:09] Yeah. So there's been a lot of guidance put out by industry, civil society, the federal government, etc., about what responsible AI means to them and their constituents. And that hasn't really been translated for early-stage companies and the investors who fund them. As we know, early-stage startups are resource-constrained and they're moving really fast. They're trying to, you know, prioritize everything, or just get to a place where this is a viable, sustainable company. And so to ask those folks, on top of that, "we also want you to develop AI responsibly and use these frameworks" — frameworks that maybe aren't purpose-built for them to work with under these constraints — I think is challenging. So at Responsible Innovation Labs, that's what we're working on building.

Steven Parton [00:27:01] Can you say more about, like, what those protocols look like? Can you dive into any of the details thus far?

Lauren Wagner [00:27:09] Yeah. I mean, it's not released yet, so I don't want to say too much. But I will say that the NIST framework is something that was put out by the federal government, which I think is quite instructive. I mean, once you do enough research into this world, there are themes that emerge. We spoke about transparency before, but transparency with responsible AI really is the foundation for all of this — you need it to be able to apply any of the other principles or risk-identification strategies. So if you're not documenting what your company is doing, you don't have model cards, you're not documenting the data you analyze, etc., it's going to be very hard to do anything related to responsible AI. So we're indexing on that as a foundation, and then providing additional layers or steps that you might take as a company. Things around risk forecasting, and benefit forecasting as well: identify the key benefits of your technology, what you should be doing as a startup, how do you augment the benefits and mitigate the risks. There are different techniques that have been proposed as risk-mitigation strategies — the White House put out, I think two weeks ago, voluntary commitments from large AI companies, and some of those are things like mandated red teaming, not releasing model weights, things like that. So we're trying to see, out of the universe of possibilities, what we can grab on to and adapt for startups.

Steven Parton [00:28:42] Yeah. How do you think we're handling that thus far? I mean, do you feel like AI is as existential as some people make it out to be? Or — you know, there are also the Andrew Ngs of the world who can't see how AI could possibly go wrong. How do you fall on that spectrum, from "this is definitely going to kill us" to "this is the perfect tech for utopia"?

Lauren Wagner [00:29:08] I think, having experienced what I have over the past ten years with the rise of social media — and we've seen the problems of social media — technology is created by humans and will be imbued with the values that technologists put into it. So you have to be mindful of that. But I do believe that a lot of the issues that emanate from technology can be solved with technology. So it's a little bit of a middle-of-the-road response. But I think having these kinds of multi-sector approaches and coalition building, and getting the right people in the room, can hopefully get ahead of some of the issues we might see, especially around bias and issues of discrimination. Like, these are real problems that need to be addressed today, and you just need to make sure you have the right people who are able to do that work.

Steven Parton [00:30:08] Do you worry sometimes that we focus a bit too much, maybe, on the tech as a scapegoat, and not enough on, like, the social circumstances — just, like, pure socioeconomics, xenophobia, stress, things like this that are acting kind of behind the scenes and are the source of the bad behavior online and in technology?

Lauren Wagner [00:30:30] Can you give an example? 

Steven Parton [00:30:31] Well, let's just say, you know, there's a big issue right now — we talked about things like polarization, right, with culture wars, with election issues and whatnot. And some of that could stem more from the fact that people don't have good access to meaning in their lives. People might have economic disadvantages that make them feel like they're unable to find a path forward in their life reliably, so they have to jump on a political team and kind of fight for it to win in order to feel like their future is more guaranteed. And then you just put that person in front of a technology, and yeah, maybe the technology is amplifying it, but at base what you really have is a society that's not serving its citizens very well. I guess my question is: do you think maybe people spend so much time talking about the technology that we're kind of being blind to some of the social science issues that lie behind the scenes?

Lauren Wagner [00:31:30] Oh, 100%. And so when I did this research way back in 2010, 2011, I mentioned that some of the outcome metrics I was evaluating were loneliness, feelings of belongingness, feelings of isolation. I mean, there were certain kinds of scales or assessments that I was borrowing to figure out what people thought about their online social networks. Flash forward 13 years, and you see organizations like — my college roommate runs an organization called Moonshot, which uses digital tools to combat violent extremism. She was just on PBS talking about how, when you look at folks who are at risk of becoming extremists, when they do online interventions and speak to these folks, something they realize is there are high incidences of loneliness, and engaging with this content makes them feel more belonging. So, yeah, it's all the same. I mean, people are people, and it seems, as you mentioned, that technology augments or opens up new pathways to connection that at times can be problematic. But we are starting to see the same themes emerge again and again.

Steven Parton [00:32:40] Yeah. What are your thoughts on deepfakes, and kind of the way that AI is going to change social capital and the trust between individuals? Less so about polarization or something really existential, but just the inability to know what's real, so to speak. 

Lauren Wagner [00:33:03] Yeah, I'm quite worried about that. I think that over time things will shake out, hopefully in a more sustainable way, where there are technological developments where you could do watermarking and have, you know, accurate information provenance. We are not there yet. And so with an upcoming election, I mean, there was the Nancy Pelosi deepfake thing in 2020. I don't know if you remember this, but she appeared to be drunk, and that was something that we had to deal with and build a deepfake policy around, which was one of the first, if not the first, that was ever created. And so just going through that experience, and amplifying it times a million now that this is widely available with Stable Diffusion, it's really worrisome. And so no matter what topic you're addressing, whether it's politics in the upcoming election, or, I mean, I work a bit with Thorn on combating child sexual abuse material, that's a minefield. And then I think what you're seeing now is, you know, people want to foster trust with new technology, etc. And even with these White House commitments that came out a few weeks ago, one of the mandates is that companies apply the latest in watermarking technology. Like, do we have that? I don't think we do. My sense is that it does not work very well right now. So what is the solution? It's like putting a Band-Aid on a potentially extremely big problem. And so my focus is, okay, let's identify what tools are available and what the affordances and drawbacks of those are so that we can move forward. So, yeah, very worried. 

Steven Parton [00:34:57] Yeah, understandably so. Me too. If you had a, let's say, a clear path forward where you could enact a policy that would address some of the platform trust and safety issues, or maybe the upcoming issues with AI, is there a policy recommendation or something that you would put forth that you think could help assuage or inhibit some of these issues from, you know, becoming the worst version of themselves? 

Lauren Wagner [00:35:26] It's hard, because we talk about auditing and transparency, and that isn't always accurate or useful. But I think at this point it's kind of the best that we have. So I'm part of an organization called the Integrity Institute, which is kind of an open source trust and safety nonprofit think tank. Essentially, it's a lot of former platform folks. And so, for example, they look at the transparency reports that platforms put out, and essentially we have data scientists going through them with a fine-tooth comb and evaluating what they mean and whether they're accurate, etc. And there's a dialogue, essentially, not direct but indirect, between an entity like the Integrity Institute and the platforms, so that in some sense they're held accountable. These are incredibly specialized people who are doing this auditing, who are also very mission driven, because a lot of them are volunteers and they just really believe in this work. And so that's one example I could think of. How do you scale that and make it so that it's institutionalized? And you have a company like Meta, which is actually very transparent if you look across the platforms in terms of sharing these types of metrics. It's certainly not perfect, but what kinds of tools or software, SaaS, etc., would you have to implement at other companies with user-generated content to make that reporting possible? And then you have the auditing. So I think that there's something around compliance. Once these tools become available, and they're being built by startups now, once they become adopted by the enterprise, then you can have more of that auditing mechanism that can eventually result in accountability. 

Steven Parton [00:37:11] Yeah. Are there any, I guess, movements or individuals or ideas like this that are currently being put forth, or maybe even technologies, that you see as helpful? Is there anything emerging right now where you catch yourself being like, oh, there we go, I like this, this is a step in the right direction? 

Lauren Wagner [00:37:31] Yeah, absolutely. So I'll just mention that a major reason why I left Meta was that we were circling around a lot of the same problems: how do we share privacy-protected data and ensure that it's getting into the hands of the right people and not the wrong people who are going to misuse it? And I saw folks outside of the company building startups that addressed some of these issues that we were talking about. So I thought, okay, why don't I just go and invest in those people rather than feeling like I'm banging my head against the wall? And, I don't want to sound too Pollyanna-ish, but build the future I want to see. Like, I think this should exist. I think that this would unlock a lot of opportunity for us as a large company, so I assume it will unlock opportunity for others. I can just go work with them. So that's why I moved into a role investing in early stage startups, and the sector that I'm most excited about is this role of trust and safety software as a service. So a lot of the folks who have left the large platforms are now building workflows and layers where they're able to take in data from companies with user-generated content and provide employees with a workflow so they can evaluate it, act on it, develop policies, publish transparency reports, etc. And once you have this workflow that's unified across different kinds of companies, then a regulator can start evaluating it and say, okay, what do we do with this? What do we mandate? What do we require as an element of whatever transparency ends up being? Right now you have companies building their own bespoke tools, or using things that aren't really well suited for trust and safety, things like customer surveys, customer experience, ticketing tools, etc., not really purpose-built for combating online harms. So I'm pretty excited about the growth of that sector. 

Steven Parton [00:39:26] Yeah. Do you feel like there's something to be said for maybe, like, the Uber motto of "ask for forgiveness later" here, where you just have to have people who are going and putting in the hard work, like you said, volunteering, doing startups, kind of pushing the paradigm in a way that shows how beneficial it could be? And then maybe that shifts the norm enough that we get some political will to follow suit. 

Lauren Wagner [00:39:54] Yeah, 100%. And I think I learned this lesson eight years ago when I started at Meta, where I thought I was coming in and everyone was going to be an expert on X, Y, Z, and it turned out a lot of people were not experts on those kinds of things. So you have people who are very mission driven, are technically quite skilled, have worked across many different kinds of teams, who are coming up with these ideas. And yeah, not that it's a free-for-all where anyone can propose things, but I think this is a unique period where a lot of these key issues are being worked out, and so having channels where new voices can be elevated is just really powerful and beneficial and hopefully moves us closer to the place that we want to be in terms of technology benefiting society. 

Steven Parton [00:40:44] Yeah, I like that as a note to kind of segue here out of the conversation and leave you with a chance to give us any final thoughts: anything that maybe you wanted to say that we didn't touch on, anything that you would like to discuss or promote, anything at all. 

Lauren Wagner [00:41:04] So I invest in technical founders who have spent some time in trust and safety. I think these are really great people to invest in, in that I would bet on them building, you know, a more positive future than we've seen over the past ten years. You speak about Uber and this intense competitiveness. Yes, companies winning; yes, market dominance, sure. But you also have to think about the societal implications of your work. And I think folks who have had this experience, even for six months, it's not that you've had to have worked in this for ages, but just that you've chosen to expose yourself to it, have a realistic view of the problems folks are experiencing online, and are open to tackling those. I think those are going to be the next generation of great founders. 

