
Lessons From a Computational Social Scientist

January 9, 2023
Sandra Matz


This week my guest is computational social scientist and professor at Columbia University, Sandra Matz, who recently published her book, The Psychology of Technology: Social Science Research in the Age of Big Data.

In this episode, we explore many different ways in which technology and psychology are influencing one another in the modern era. This includes but isn’t limited to the influence of big data on psychological research, the battle between exploitation and exploration as fundamental dynamics in our digital lives, the ways in which algorithms shape our views of the world, and a whole lot more. Sandra delivers her expertise with candor and humor, making for a discussion that I hope you’ll all enjoy as much as I did.

Find out more about Sandra and purchase her book at sandramatz.com


Host: Steven Parton - LinkedIn / Twitter

Music by: Amine el Filali


The following transcription was created automatically. Please be aware that there may be spelling or grammatical errors.

Sandra Matz [00:00:00] The narrative that we have, and I think it's driven predominantly by Silicon Valley for whom this narrative is very convenient, is like, well, you can either have privacy and self-determination and all of the good stuff, or you can have this amazing service and convenience that we offer in our products. But you can't really have both. 

Steven Parton [00:00:32] Hello, everyone. My name is Steven Parton and you are listening to The Feedback Loop on Singularity Radio. This week my guest is computational social scientist and professor at Columbia University, Sandra Matz, who recently published her book The Psychology of Technology: Social Science Research in the Age of Big Data. In this episode, we explore many different ways in which technology and psychology are influencing one another in the modern era, which includes but isn't limited to the influence of big data on psychological research, the battle between exploitation and exploration as a fundamental dynamic in our digital lives, the ways in which algorithms shape our views of the world, and a whole lot more. Sandra delivers her expertise on these topics with candor and humor, and that made for an incredibly enjoyable conversation that I hope you all will enjoy as much as I did. So without further ado, everyone, please welcome to The Feedback Loop, Sandra Matz. Can you just tell us what a computational social scientist is, for people who might not be quite familiar with what you do? 

Sandra Matz [00:01:47] That's a great question. The way that I think about it is really just people who are interested in social science questions. That could be sociologists, economists, in my case, psychologists. We study human behavior. And the way that we typically used to do this was giving people questionnaires, or we invite them to the lab and have them press a bunch of buttons, right? So trying to extrapolate from that to the human experience was a bit of a stretch. Computational social scientists try to use computational tools to really study human behavior at scale. So in our case, we look at social media data, your credit card spending data, data that we can collect with your smartphones, and we're trying to use machine learning, AI, some of these techniques to really try to understand what people's everyday experiences look like. That's how I would describe it. 

Steven Parton [00:02:40] Yeah, perfect. Well, and how does that connect then to your latest book, The Psychology of Technology: Social Science Research in the Age of Big Data? What is this introduction of computation and big data bringing in, beyond just shifting you from, you know, the survey kind of studies to the stuff you're doing now? Like, what is that transition that is coming around? 

Sandra Matz [00:03:02] Yeah, it's a great question. The book has two parts to it, at least the way that I think of it. On the one hand, the book is trying to figure out, well, how is technology influencing our lives? How is it influencing our psychology, our well-being, the way that we interact with one another? In this case, we're essentially looking at the outcomes of technology. There's amazing research out there, people looking at the impact of social media on well-being, or to what extent technology has influenced how we work or how we create public policy. So that would be the what's-the-outcome-of-technology half. And the field that I guess I am more involved in, which is the other half of the book, is thinking of technology as a tool for science. There are all of these data traces out there that we leave with pretty much every step that we take online, right? You create a digital footprint just by using technology, and you might not always be aware of that. So we're really thinking about, well, how can we use technology as a new tool for the social sciences? And on some level, what fascinates me about this topic is that, if you think about psychology, we always used to say that psychology is a behavioral science. On some level that was true, because we tried to capture behavior by asking people about it or by just looking at how people behave in the lab in a relatively controlled setting. But it was always clear that we're not necessarily capturing people's daily experience, right? We're not really capturing what people's behavior looks like when they just go about their lives, observed in a more unobtrusive way. And I think that is what technology offers now. It's a way of really getting a very granular insight into what people are up to, but at scale. 
So instead of having this for one person. There's this amazing book that I love. They published it, it must have been sometime around 1949 or 1950. They just followed one boy, a seven-year-old in rural Kansas, around for an entire day with eight research assistants. From the moment that Raymond, I think that was the boy's name, woke up in the morning, they documented every single step along the way: him taking classes, him talking to his family. And they called it a lived experience. They were interested in people's lived experience, and it was amazing. They made an entire book out of less than 24 hours with eight research assistants. You can imagine it's an amazing piece of work, but it's also very limited, because it's one person. And also, how naturally would you behave if you have eight people following you around for an entire day? So amazing, but also kind of limited. Yeah. I think big data and technology allow us, in a way, to do that for a lot of people, in a way that is much more unobtrusive and much more natural. 

Steven Parton [00:06:20] Yeah, I mean, maybe it's a bold statement, but do you think psychology as a field is therefore becoming empirically better because of big data? A lot of people have issues with psychology as a soft science, and they point to the replication crisis and so forth, concerned that, you know, it's not as valid as something like physics or mathematics. Do you feel like big data, or technology in general, is starting to fix this problem? 

Sandra Matz [00:06:47] I mean, I'm clearly biased, since I'm doing this kind of research, but I would argue that it's true. I don't think it's replacing the science that we've done traditionally, though. It's not necessarily replacing experiments, not necessarily replacing survey research. There's a time and place for that kind of research, because what survey research oftentimes gives you is subjective experience; it gives you an insight into what the experience of a person looks like. And we don't necessarily get this with big data. Data is amazing at giving us this picture of someone's behavior, their preferences, whatever it is that they do. But we don't necessarily get very detailed insights into how it makes them feel, or what some of the causal mechanisms are: why does someone behave in the way that they behave? So the way that I usually like to talk about it is: if you can combine the two worlds, you get the benefits of big data, being able to observe millions of people and look at what their day-to-day looks like, and then you can still follow up with experiments and say, okay, now we're going to try and see, is this a causal effect? What are the mechanisms? What is the experience that people have when they go through some of these behaviors? 

Steven Parton [00:08:02] Yeah. And where is this data coming from? Because I think some people are concerned about things like privacy issues and whatnot. But a common issue that I end up discussing on the podcast is that most of the best social science data in the world, data that universities would dream of using to help better humanity, lives behind the paywalls and behind the, you know, corporate walled... 

Sandra Matz [00:08:25] Garden. 

Steven Parton [00:08:25] Sections of Facebook and Twitter and things like this. So where does this big data come from for the average researcher? I guess, you know, those companies have it. But what is the typical way that you would get this data? 

Sandra Matz [00:08:40] It's a great question. And I do think people should be concerned. Just to foreshadow: I think a lot about privacy, about how we should be using personal data, and whether we should potentially be regulating personal data. I think it's a little bit different for different researchers. I sometimes work with companies, which, as you said, just gives me access to data that I otherwise wouldn't have access to. But a lot of it comes down to the way that science is regulated, right? We have to go through an entire ethics approval process that says the way that we are using data is aligned with the core values that we have as scientists and is protecting the rights of the subjects that we study. So in my case, oftentimes at some point there is a consent stage. When we collect data, oftentimes that consent is very explicit, right? You can argue that companies have consent too, but their consent looks something like: we don't really want you to read this, just quickly click here, and then we're going to get access to everything that you have. In science, it's much more user friendly. We explain to a participant: here's exactly what data we're going to capture, here's exactly what we're going to do with it. It's going to be used for research purposes, and research purposes only. And then they also have the ability to revoke that right. So if you don't want us to use your data anymore, you can email us and we'll take it out. Some of it is public data, right? Some of it you can find online and you can scrape. And there have been arguments, even within the scientific community, about how that data should be considered. Is it public? Is it private? Because the people who put the data out there, were they thinking about the way that you're going to use that data? Maybe not, right? 
So you put something on Twitter and you essentially want to have a conversation, but do you think it's going to be used in an algorithm that predicts political orientation or sexual orientation? Maybe not. And that's a problem that's not really solved. Yeah. As a researcher, you can get IRB approval, the ethics review, for these public datasets relatively easily, because it's considered data that you didn't necessarily collect yourself. There, it's much more a question of your personal values: to what extent do you think this is a legitimate use? And then also, how do you communicate it? That's a question that I ask myself a lot. What is the value that I'm creating by doing this research? Am I actually giving something back to the people whose data I'm using? Do I try to communicate the findings that I get from my research back to the public? And I think that's still undecided within the research community: what's allowed and what's not allowed, and what some of the conditions are. It's just not there yet. 

Steven Parton [00:11:37] Yeah, I mean, to that point, I guess, how do you feel like the ethics review boards of universities are adapting to the changes that technology is bringing? Are they doing a mature job of updating their ethics reviews and their principles and ideas around these things? Or are they still lagging behind, in maybe a pretty severe way? 

Sandra Matz [00:12:02] I very much appreciate the IRB, I should say. I think the people there are generally doing an amazing job, and they're genuinely there to protect the people that we do research on. I do think it's extremely difficult, because the way that we've been trained, the way that I think IRBs have been trained, is to think about personally identifiable data. And that would mean, well, do data sets have names attached to them, or things like date of birth that would allow us to identify people? But now you can easily identify people without having that information attached to a data set, and I think that's something that is oftentimes not considered. So I have a hard time getting survey data through that has names attached to it, because that falls under personally identifiable information, but I can relatively easily get a data set of GPS records, which give me exact longitude and latitude. They don't have names attached to them, but as soon as I know where someone's phone is at night, I have a pretty good sense of who that person might be. So on some level, it still is identifiable. It's not that we're necessarily doing this, but it's identifiable on a very different level. And I think that's where IRBs are having a much harder time, because it's not the traditional "oh, there's a name, and I can immediately see how this would be relevant." 
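[Editor's note: the re-identification risk Sandra describes here is easy to illustrate in a few lines of code. This is a toy sketch, not anything from a real study; the coordinates, grid size, and night-time window are all made-up assumptions.]

```python
from collections import Counter

def likely_home(gps_fixes):
    """Guess a device owner's home location: the most common rounded
    coordinate among fixes recorded between midnight and 5 a.m.

    gps_fixes: list of (hour, lat, lon) tuples with no name attached.
    Returns a (lat, lon) grid cell, or None if there are no night fixes.
    """
    night = [(round(lat, 3), round(lon, 3))   # ~100 m grid cells
             for hour, lat, lon in gps_fixes
             if 0 <= hour < 5]
    if not night:
        return None
    return Counter(night).most_common(1)[0][0]

# "Anonymous" records still pin the phone to one address at night.
fixes = [(1, 40.8071, -73.9631), (2, 40.8072, -73.9629),
         (3, 40.8069, -73.9631), (13, 40.7536, -73.9832)]
```

Once you know the night-time grid cell, a public address directory is usually enough to attach a name, which is exactly why coordinates can be personally identifiable even without one.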

Steven Parton [00:13:23] Well, yeah, and it seems like there's a cultural aspect here that, you know, is different between countries, but also different between generations. Because the younger generation, I would say, in a lot of ways doesn't care who knows about them. In fact, they want everyone to know everything about them. I think of Bo Burnham, who in his Inside documentary says something like: do you think maybe it's a bad idea for everyone to just say everything they think all the time? That's kind of the access that we have, especially in that generation. And then we have European policies that are different around these things, and American policies that are different around these things. So, I mean, I don't know how it is. 

Sandra Matz [00:14:03] It's super interesting, because we also collaborate across the globe, right? As scientists, we oftentimes collaborate with people in Germany, people elsewhere in South America or Asia. And the rules that apply are totally different. GDPR, the General Data Protection Regulation in Europe, is the strictest when it comes to privacy protection. So certain things that I can do in the U.S., my collaborators in Europe can't do, and it is somewhat limiting when it comes to science. But then the question is also to what extent that should be the case. And to the point that you raised about the younger generation not caring as much about their data: you also see this a lot in older people, I should say. I teach MBAs and Executive MBAs, and I teach a class on the ethics of personal data. One of the most common comments that I get in the class is: well, I don't care about my personal data being out there, because I have nothing to hide. And I'm like, well, let's just unpack that statement for a second. First of all, it's a very privileged position to be in. If you don't have to worry about your personal data being out there, it just means that you're in a very good spot, and admittedly that's not the case for everybody. You're not having to worry about the government looking into it, you're not having to worry about companies discriminating against you in some way. That just means that you're probably in a good spot, and that's not true for everybody. So it's a very privileged point of view to say: I don't care because I don't have anything to hide. And the second thing is, you have no idea whether that's going to change tomorrow, right? Data is permanent and leadership isn't. It could be that all of this data gets collected and stored somewhere, and tomorrow there's a new government. I'm German, so I grew up in Germany. 
Like, you could imagine what this would have looked like back in the day, right? It's a terrible thought experiment, and this could still happen. Nobody says that tomorrow is going to look exactly the way that today looks. And I think certain companies are actually starting to acknowledge that. I know that Apple, for example, does something which I really like. They call it the evil Steve test; I guess by now it would be the evil Tim test. The thought experiment they go through with their engineers and their product teams is: if we had a new CEO tomorrow who had fundamentally different values from the values that we have right now, let's say they wanted to undermine democracy, would we still feel comfortable collecting the data that we're collecting today? Right. We might be collecting data today with the idea that this is going to create a lot of value for the customer, and that might be true, but you don't know what's going to happen tomorrow. And I think this notion of "I don't care about my privacy" is a fundamentally flawed one, because you probably do care about your privacy. I think it's coming from a false dichotomy, from the narrative that we have, and I think it's driven predominantly by Silicon Valley, for whom this narrative is very convenient: well, you can either have privacy and self-determination and all of the good stuff, or you can have this amazing service and convenience that we offer in our products, but you can't really have both. For us to be able to offer you these amazing services, you just have to give up your privacy and self-determination, and you just have to live with that. 
Now, I think most people substitute the question of "do I care about my privacy?" with the question of "am I willing to give up privacy for the perks that I get in return?" And for that question, I agree, most of the time the answer is probably yes. I'm getting something that is really amazing, and I might be willing to give up my privacy. But if you could have both, I think most people would be happier, of course. If I could tell you that you can use Facebook completely anonymously, that nobody's going to use your data, and that you still don't have to pay, which is admittedly really difficult to do, people would probably pick that. And because we still have this notion that it's either-or, we're not really having the right debates, in my mind. There are pretty amazing new technologies, like federated learning or differential privacy, where you actually don't have to send away all of your data. Apple's Siri is one of the examples. Instead of you sending all of your speech data to Apple and them training their speech recognition models on a central server, they send their models to your phone, because your phone is an incredibly powerful computer, right? It's much more powerful than the computers we used to launch rockets into space just a few decades ago. So instead of grabbing all of your data, and now also being responsible for protecting it, they can just send their models to your local phone and train there. They give you the same convenience, the same service, but in a way that's much more privacy-preserving. And I think that's the real conversation that you want to have, and it can only happen when people understand that they actually do care about their privacy, and they do care about making their own choices on some level. 
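[Editor's note: the federated learning idea Sandra mentions can be sketched in a few lines. Below is a minimal toy version of federated averaging for a one-parameter model; it is an illustration of the general technique, not Apple's actual implementation, and all names and numbers are made up.]

```python
def local_update(weights, local_data, lr=0.1, steps=10):
    """Train on one device's private data; only the weights leave the phone."""
    w = weights
    for _ in range(steps):
        # gradient of mean squared error for the constant predictor y = w
        grad = sum(2 * (w - y) for y in local_data) / len(local_data)
        w -= lr * grad
    return w

def federated_average(weights, device_datasets):
    """One round: each device trains locally, the server averages the results."""
    updates = [local_update(weights, data) for data in device_datasets]
    return sum(updates) / len(updates)

# Three phones with private data; the server never sees the raw values,
# only the averaged model weights.
phones = [[1.0, 2.0], [3.0], [2.0, 2.0]]
w = 0.0
for _ in range(50):
    w = federated_average(w, phones)
```

After enough rounds the shared model converges toward the average of the devices' local solutions, even though no raw data point ever left a phone. Real systems add secure aggregation and often differential-privacy noise on top of this basic loop.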

Steven Parton [00:19:17] And it also feels like there's the issue of not just the impact on yourself, but, given that this is going to feed data sets, the potential for something like a Cambridge Analytica or some kind of, I don't want to say propaganda necessarily, but, you know, marketing is influence; marketing is understanding consumer data. So depending on how much you want people to understand you and those around you, and how much power they have over your decision making, it also seems like that's a risk to consider as well. 

Sandra Matz [00:19:50] Very much so. I used to talk a lot about privacy, and I think that's in a way the starting point, the most fundamental part of it, but I don't think people care as much about that. I think the part that they care about is the self-determination and agency that you just mentioned, right? If you give away information, you give up a lot of power over your choices and your decisions. And that matters especially when you have this fundamental value of: I want to be the person who's in charge of my own destiny, of my own decisions, of the choices that I make. That's where it's much easier to get people to care. It's this level of choice, not necessarily the level of privacy. Yeah. 

Steven Parton [00:20:33] Do you think that there is a calcification or a standardization of behavior that we may be seeing from this data? Because one of the concerns that I have with tools like technology and social media is that, let's say something is trending and it gets more likes than another hobby or behavior of yours; you're more likely to maybe do that hobby or behavior because you want the likes. And as a social species, we want the social validation. We want to do more of what makes people like us. Do you think that maybe this is actually leading us towards becoming more like each other, because we're afraid of having the unpopular view online, or because the technology is just pushing us towards the same view? 

Sandra Matz [00:21:20] I think it could go in different ways. One is that we're all becoming much more similar to one another. But on the other hand, what you see is that people are just becoming a lot more polarized. The question I'm oftentimes asking, since I'm a personality psychologist and I care about individual differences, is to what extent all of the recommendations, whatever it is the recommendation algorithm is feeding us, shape us. So the question is, to what extent are we potentially becoming more similar to one another, but also, to what extent are we just locking people into these boxes and reinforcing what they're doing anyway? Right? If I'm extroverted, now all of the stuff that gets optimized for me is extroverted stuff, and maybe that just reinforces it, and I'm on a trajectory that is very difficult to break out of. Once the Facebook algorithm has figured out that that's what I'm all about, that that's what I'm interested in, it's just going to keep me on that trajectory. Same thing, in a way, with Google searches, right? They have a sense of who I am, and now they're optimizing for that. Here's the one thing that I would love to see, and I think there's an amazing opportunity that's certainly not being used, because right now it's all focused on what we call exploitation. Exploitation says: I understand what you want, we're going to show you more of that, and you just kind of see the stuff that you're interested in anyway. That's oftentimes really helpful, because there's no way for me to go to page 500 on Google to find what I want, right? It's very helpful to find the stuff on page one. But right now there's no way to shift to a different mode, and that is exploration. Exploration and exploitation are kind of two modes of learning about the world, right? Exploitation says: here's something that is familiar, that I know I like. 
It's low risk. But then sometimes you want to do something different, and I think there is an opportunity to embed this in some of these systems. So Google, instead of saying, I'm only going to give you the chance to see whatever is aligned with your profile anyway, I would love them to have an explore mode. Most of the time I want to see the stuff that I'm interested in, the same on Netflix, the same on Facebook, the same on all of the other platforms. But today I'm in kind of an explore mode, and what I would really want to see is what the Google searches look like for a 50-year-old Republican in Ohio. This is a world that I never get a chance to access, because I only see the world from my perspective. And now suddenly this would allow me not just to explore, but to explore in a very specific way that I can define. Because Google obviously knows what the search results look like for the 50-year-old Republican in Ohio; it's just that I don't see them. There's nothing that prevents Google from saying: you just tell me what you want to see right now, and I'm going to show you that. And if you think about it that way, it's an incredible opportunity, because we never really had this. If you think back to, like, my parents' generation, they never had an easy way to say, I want to see what the life of a 50-year-old Republican in Ohio looks like. They could never actually do that. Now I could say, I want to be anyone in the world, and I just want to see what their experience looks like. It's just not as profitable, because exploitation is what keeps people on the platform and what drives attention. But I think it should at least be an option. 
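[Editor's note: the exploitation-versus-exploration trade-off Sandra describes is a classic idea from reinforcement learning, often implemented as an epsilon-greedy policy. A rough illustration follows; the profile items and numbers are hypothetical, and real recommender systems are far more elaborate.]

```python
import random

def recommend(engagement_counts, epsilon=0.1, rng=random):
    """Pick an item to show a user.

    engagement_counts: dict mapping item -> past engagement score.
    With probability epsilon, explore: pick a uniformly random item.
    Otherwise, exploit: pick the item with the highest past engagement.
    """
    items = list(engagement_counts)
    if rng.random() < epsilon:
        return rng.choice(items)                   # explore mode
    return max(items, key=engagement_counts.get)   # exploit mode

# With epsilon=0, the user only ever sees their top item, which is the
# lock-in trajectory Sandra describes; raising epsilon is the "explore
# mode" she would like platforms to offer.
profile = {"extroverted posts": 40, "gardening": 3, "ohio politics": 1}
```

The design choice is exactly her point: platforms keep epsilon effectively at zero because exploitation maximizes engagement, while a user-controlled epsilon, or a targeted "show me someone else's feed" mode, would trade a little engagement for breadth.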

Steven Parton [00:24:38] Yeah. And, I mean, it feels like it would be better for everyone as well, because from my understanding, things like enriched environments, curiosity, and novelty tend to be great for learning and for well-being and for social development. So it feels like it would be a step in the right direction. 

Sandra Matz [00:24:55] Yeah, we could even try to figure out, well, what is it that you don't know but should know? I think there are so many opportunities where you could just use the flip side of what we're currently doing. 

Steven Parton [00:25:06] Yeah. I don't know if you've seen studies on this, but those pockets that you talked about, how we're being, not necessarily pushed towards one personality type, but pushed into the echo chambers, you know, the little Republican group, the Democrat group, the Antifa group, whatever. Are you seeing any data that suggests there is an increase in group membership with a smaller set of beliefs, like within these little pockets? 

Sandra Matz [00:25:35] I think there is, and it's still a debate that's going on. You have people from different parts of academia and industry publishing on this, and you can imagine that the industry papers usually say it's not a big deal, and the academic papers say it's a big deal. I do think on some level this is happening, right? The one thing that I should say is that it's not the only place where it's happening. So this debate of "is Facebook putting you into echo chambers": probably, on some level. But are we also putting ourselves into echo chambers? Yes, absolutely, right? If you look at the friends that you surround yourself with, they're most likely somewhat similar to you. So this problem of echo chambers is not new, and it's not unique to the online world, but I think it's certainly exaggerated there, and that's just because we're very constrained in what we see. I just have no way of seeing this; I can't even see what my friends are seeing. I kind of have a roughly similar newsfeed, but I still don't know exactly what they're seeing. And I certainly don't see all of the news of someone who is completely different ideologically, in age, whatever it is. I just don't have any access to that. And I think that is really dangerous, because we're not living in the same reality anymore, and it's really difficult to have conversations when we're not even talking about the same facts or the same pieces of information. I think that's what makes it dangerous. There's quite a bit of research on political sectarianism and polarization where you see that it's happening. And the sad truth is that, at least in the political space, it's oftentimes not even that we disagree so much on the things that we should be doing, or the policies, or the fundamental questions. It's just that we don't like the other side anymore. 
And I think that is part of it, because we're not living the same lives. That is true in the offline world, because we're not living in the same places and we're increasingly segregated in our physical spaces, and it's also true in online spaces. As Cass Sunstein thought about it: there's just no public town square anymore where we can have these debates and all get together. 

Steven Parton [00:27:50] Yeah, I feel like social science has to be one of the most interesting places to be right now, because of the way technology seems to be shaping behaviors. And, you know, whether or not you're seeing people express different objective versus subjective belief systems based on whether they can be seen online or if they're behind the closed doors of their room or, you know, I don't know. It feels like... 

Sandra Matz [00:28:13] There's so much because it's like it's so frustrating because on some level, like I'm interested in climate change, there's lots of questions that I'm really thinking more about and how do you change people's attitudes about climate if they are skeptical that climate change is happening? It's a concern. We should support it. And and if you think about it, it's really no cost to holding false beliefs. So there's just no like it's like a cost to the future next generation, not necessarily right now. And there's only benefits for most people. Right. So if you're and I'm not saying that everybody, if you like, conservative and you fall into it, if you're in a community that generally skeptical about climate science for ideological reasons, then there's only an upside, because the moment you're also skeptical and you post about it, that it oh, you're going to be embraced by the community. So the cost comes in from you going against the community. So there's an upside of holding false beliefs, but no downside. So one of the things that we've recently tried to explore a little bit more is to see if we could design an environment where holding false beliefs actually becomes costly. So we kind of use playing around with climate prediction markets. So essentially have people bet money on the outcome of in this case it's weather related events mostly. So we ask people, is this going to be the hottest July in the last 15 years? And they can bet real money. And depending on whether they're right or wrong, they essentially get rewarded. So now if you have someone who just ideologically motivated says, I don't think climate change is happening, questions like, what do they do? Do they bet against climate change? Understanding that they're probably going to lose quite a bit of money and or or do they better in line with climate change? And is that a way of changing attitudes? 
And you see this all the time, because a lot of investors who are conservative and publicly speak out against climate change are not investing in real estate in Florida. And you think, well, why is that the case? It's probably because they think it's not an ideal place to invest in, right, because of all these climate-related events. So I think you're absolutely right. It's about what the incentives are, and how the social structure of social media really rewards you for talking to your own tribe.

Steven Parton [00:30:33] Mm hmm. Yeah. Do you think we're going to have to update our social institutions to deal with technology? I mean, as somebody who's watching this stuff happen, do you think we're at a place where society is actually going to have to really update how we harmonize, how we create those town squares, how we interact? Because technology is changing the dynamics so drastically. Are we running on an outdated paradigm at this point?

Sandra Matz [00:31:00] I mean, it depends on what you mean by social institutions, because social institutions could be both online and offline. And I think the difficult thing is that you see it everywhere, right? You see it online, where people are just segregating, and you see it offline. There's an amazing book called The Big Sort, which essentially looks at residential segregation, where, especially in the political space, you now again have Democrats living together with Democrats and Republicans living together with Republicans. So I think this harmonization would have to happen across both offline and online. And on some level, and it's not necessarily a very popular opinion to hold, I wish there was more of this happening in universities. The extent to which you can actually have debates and say, let's just discuss it from both sides. You don't have to agree with the other side, but let's argue, let's each take up a side and see what your best argument against it is. I think we see that less and less, and it's harder and harder. And if we can't have that in intellectual institutions like universities, I think it's going to be a very hard sell to expect everybody else to have those conversations.

Steven Parton [00:32:20] Yeah, it feels like the struggle with those debates these days is that people are more interested in winning than learning. So it's kind of like what you were talking about with Google: people just want the right answer, they don't want the explore mode. So maybe this is one way it's kind of changing how we think about things.

Sandra Matz [00:32:35] Yeah, and I think it's this notion, which I always find interesting. I would consider myself pretty liberal, but one of the things that you see is that liberals are open and tolerant towards everything except what they think of as intolerance. And that makes it really hard to have a conversation with the other side, because the moment you think someone is not as tolerant as you are, the debate essentially gets shut down, and there's no way to have a deeper conversation and say, where is that coming from? How could we potentially change this? Because the moment you think, well, this is something my values cannot possibly ever stand for, we don't even have that conversation anymore. And that's what I miss. In a way, the best response is to shut down the argument from the other side with an even better argument, not by saying that's something you can't say and we can't talk about it.

Steven Parton [00:33:36] Yeah, letting the marketplace of ideas kind of take over. To bring it back, I guess, to your work and what you do: do you feel like there are things we can do to push people back in that direction in a healthy way? I'm thinking of things like Twitter and other platforms that have talked about introducing what they call speed bumps, little notifications that come up when you see a post that maybe has misinformation in it, or a button that asks, do you really want to share this? Something that cognitively makes you stop for a second and get out of your habitual, instinctual behavior. I wonder, are there things like this that maybe we can use around data and technology to help shape better social behavior?

Sandra Matz [00:34:27] Yeah, I mean, I think it's a super interesting question, because the one problem that you have with behavior is that people get used to things so quickly, right? You had this with cookies in the European Union. In the beginning it was, do you accept these cookies? And people were like, oh, I'm not sure, should I, should I not? And now everyone just blindly accepts, because it's annoying. So I could see how, on Twitter for example, when you have this thing popping up saying just be careful, in the beginning people go, oh yeah, and at some point I think people might just ignore it. So the question is, how do you change behavior in a way that's sustainable? And the way that I've been thinking about it, and I know a lot more psychologists are thinking about it this way, in a way undermines the discipline of psychology a little bit, because psychologists traditionally focus on how we change individual behavior and what the interventions are. But on some level, what you really want to do is make systemic changes that just make the best behavior the easiest. And in social media, I don't know exactly what this would look like. Now, you mentioned before that one of the reasons why we post all of this moral outrage, content that is very emotional, is that it usually gets a lot of likes and a lot of attention. So the social feedback mechanism, I think, is just really, really tricky, and it gets amplified by the algorithms. So one way would be to put a little bit of a cap on this: once a thousand people have shared something, maybe push it down a little bit, don't amplify it even more. Because that is what's happening: it's emotional,
a lot of people like it, the algorithm amplifies it, and now it just grows exponentially. So there could be ways to counter some of these tendencies. And then the "do you really want to post this?" prompt, I think that could be nice. I just don't know if people would also get used to it at some point and very quickly click it away.

Steven Parton [00:36:34] And it goes back to your earlier point, I think, a little bit, where you said, and I love this quote, I might steal it from you, that data is permanent but leaders change. Even if you implemented some of these things, you might regret having the ability to slow down certain news articles from spreading, because that ability seems potentially dangerous in the wrong hands. And somewhere, I'm pretty sure, I read you talking about how technology is basically changing how we think about morality. Can you unpack that a little bit, or how you think it's changing how we think about morality?

Sandra Matz [00:37:16] I think it changes a little bit how morality gets expressed. That's not my research, but there's a whole bunch of really cool research by Billy Brady, who's at Kellogg, or Jillian Jordan, who's at HBS, or Molly Crockett. There are so many good researchers, I could just bombard you with names. They're looking at how the structure of social media has changed the way that we express morality, and what the signal, the function, of moral outrage is. It's in a way a means for us to signal our values and our virtue. And it's a lot easier online than in the offline world, where a lot of it is much more direct, right? If I express moral outrage face to face, there's still another person on the other side, so there's a cost to potentially accusing someone of something, and especially to doing so in a relatively hostile and emotional way. That cost is very much removed in the online world. It's very easy to post something, it's very easy to jump on the bandwagon when everybody's morally outraged about something, and I'm now also going to become extreme to stand out and show that I'm even more virtuous than everybody else. So I think it has just made everything much more extreme, to the extent that some researchers actually call it online lynching. And that's in a way a problem, right? Because you do want to signal what you stand for, what your values are, what the moral foundations of how you think about the world are. But the online world has just made it a lot easier to share these thoughts, and has encouraged sharing them in an extreme way, because that's what gets shared, that's what gets picked up by the algorithms. So I think the positive function of morality that we oftentimes have just gets bastardized a little bit in the online world.
And I think that's a shame. 

Steven Parton [00:39:20] Yeah. Well, I'm going to ask you, I guess, before we get toward the end here: I know you mentioned in our emails that you're working on a new project called Mind Masters. Do you want to talk about that yet? Are you ready to talk about that?

Sandra Matz [00:39:33] Sure. So this is going to be my project for the next year, and I'm very excited. It's a pop science book that I'm writing, and it's very much related to the topics we've been talking about. It's essentially the question of what we can learn about people by observing their behavior, and what power that gives you. Once I understand what your values are, what your psychology is, your personality, whatever it is that I can predict, to what extent does that give me power over the choices that you make, the decisions that you make, for better or worse? On some level, you can imagine, you mentioned marketing: well, if I know that you're an extrovert, I can probably sell you more stuff, which is true, and that's some of my own research. But at the same time, and this is also some of my own research, I can use the same strategy to help you save more. Once I understand your motivations and what you care about, I can use exactly the same technique to say, well, just think about what saving means to you if you put some money to the side right now. If you're an agreeable person, it means you're making sure that your family and the people you love are safe in the future, because you're essentially building a safety net so you can protect them and look after them. Now, that's probably not what I would tell a person who's more competitive, right? A competitive person doesn't necessarily care about protecting loved ones; they're probably much more interested in outperforming someone else. So it's really thinking about what the motivation of different people is, and how we could potentially use technology in a way that taps into these motivations. If you like, the way that I think about it is that it's actually a very fundamental human quality, right? The way that we have personal relationships is very much tailored and personalized.
So I know, for example, what my friends like, who they are. The conversations that we have are all tailored to their preferences, my preferences, the way that we vibe with each other. And I think on some level this has gotten lost in the transition to the online world, because suddenly you were a number, and there was all this data, but that doesn't necessarily mean that I understand who you are. So there's this question of how we reap the benefits of understanding people while mitigating some of the challenges. And the way that I talk about it in the book essentially takes it back to my experience of growing up in a village. I grew up in a village of 500 people, very small, which meant that everybody knew everything about everyone else. And on some level that was amazing, because it meant they could give amazing advice, they connected me with opportunities when I needed them, and it felt like you were actually seen and understood. But then, obviously, on some level it also felt like, great, everybody knows everything about me, and they're kind of trying to manipulate me in different directions, or trying to set me up with this person, or making sure that I'm not going to do this and this and this. So there's this tension: if someone understands you, that can be amazingly helpful, or it can be really terrible, because it's manipulative. So the third part of the book essentially tries to say, well, if both of those are true, if it can be extremely helpful and extremely dangerous, how should we be navigating this? How can we set up structures and technologies in a way that allows us to get the benefits without necessarily the costs? And that again brings it back to the village, talking a little bit about how you could have smaller communities of people who share and manage their own data.
So there's this notion of data clubs, where, let's say, expecting mothers all share an interest in figuring out, what should I do to make sure that my baby is healthy and safe? And it's really difficult, because you get advice from all different kinds of directions, and it's oftentimes not based on a lot of data. So you could imagine expecting mothers pooling their data, but doing so in a way that essentially allows them to own the data, first of all, and then also have management, or whatever that looks like, look after the data with their best interest in mind. The same way that banks have fiduciary responsibilities, you could imagine that there's an entity, a data trust or data club, that has fiduciary responsibilities to the people who pool the data for a specific purpose. So that's, broadly speaking, the idea of the book: what can we learn, what does it mean, and how should we be dealing with it in a way that reaps the benefits?

Steven Parton [00:44:09] So if I understand you correctly, one of the ways we could go to make it more beneficial is reducing the number of people involved to a tighter circle of trust. Is that kind of the idea? Yeah.

Sandra Matz [00:44:21] Somebody even thought about calling it Circle of Trust. And that's. 

Steven Parton [00:44:25] Good, that means you've explained it well.

Sandra Matz [00:44:27] It worked. And yeah, so it's trying to see whether we have different ways of managing data. Because the problem is, regulation, I think, is somewhat needed, because you want to establish the ground rules, but regulation usually works on averages, right? For the average person, this is the best thing to do. But there's huge variation in what people want for their data, what questions they're asking, what they want to use it for. So these smaller communities essentially allow you to express, within a community, here's something that I have, and here's what I want from it. And you can team up with people from anywhere around the world, which is amazing, because they can be completely different from you but have the same interest. So one of the data clubs in the US is essentially drivers. Uber drivers can pool their data, and then they can collectively get insights from it. But it's their decision to give away the data, and they're also the direct beneficiaries of all of the data that they share. And I think that's what I'm trying to say: is there a way that we can more efficiently manage our data, in a way that allows us to say, here's what we need and here's what we're willing to give up?

Steven Parton [00:45:44] It sounds like you're letting each little group have its own ethics review board. They decide. 

Sandra Matz [00:45:49] Exactly. 

Steven Parton [00:45:50] They decide what's ethical and what's not.

Sandra Matz [00:45:52] I think that's very much like it. 

Steven Parton [00:45:54] One thing you mentioned there that I think would be interesting to myself and anyone listening is how much this information can be used to, you know, manipulate people or change decisions. I guess my question is, how much do you think this can change people's minds? A lot of people feel like, so what if you have my data? I know what I want, I know what I like. Is an ad, or talking a certain way in a speech for a presidential election or something like that, going to sway me at all? People like to think they have complete, perfect free will at all times, but obviously that's not the case. How much do you actually think this data can influence people's behavior?

Sandra Matz [00:46:34] Yeah, I think that's a great question. So first of all, it's probably not a brainwashing machine. I think Cambridge Analytica was an amazing marketing and PR company, just selling whatever it was they thought they were doing. So can you turn a Hillary voter into a Trump supporter just by sending them personalized ads? Probably not. You're not going to change the entire identity of a person with just a few personalized ads. But can you shift attitudes? I think that is certainly the case, and I have research in this space. Like I already mentioned, in marketing: one of the projects that we did was essentially predicting people's personality based on their Facebook likes and then sending them different ads. In this case, we looked at extroversion, so whether people were extroverted or introverted. We teamed up with a beauty retailer and ran a couple of studies, each with a different product, but within a study it was the same product. The goal was to have people click on an ad, go to the site and buy something, and we just had different ads. So we had ads where there were multiple people, a lot going on, people partying, and the ad copy would say something like, "Dance like no one's watching (but they totally are)," playing with the need of extroverts to be the center of attention. And then you had ones that were much more reserved and quiet, like one person at home doing a face mask, and the copy would say something like, "Beauty doesn't have to shout." So again, same product, and you're just marketing it to different motivations. And you saw that once you tailored to personality, you saw something like a 50% increase in the extent to which people purchased. Now, that's a very small manipulation, right? In this case, the way that we measured extroversion and introversion was actually relatively crude.
So there are far better models out there. And it was really just one ad, shown once, that looked exactly the same for everybody. So this notion that you can shift people in certain directions, I think that is true. And some of it is just about what is salient: if I put something in front of you all the time, the chances that at some point you become interested are just much higher. So it's not even that you're now doing something that you would otherwise never have done. It's just increasing the likelihood: here's the stuff that you get to see, and you're eventually going to do some of that stuff, versus me shutting out the things that you don't even get to see, because they don't fall into the profile that I've created of you. And I think it's these small changes that over time can really push you in a certain direction, because they also make you more extreme. You shift a little bit towards extroversion, you see more of this kind of content, and you shift even more. So I think that's the change that is possible; it's very unlikely that you completely flip someone's character.

Steven Parton [00:49:24] Yeah. But you can use the stimulus to kind of lead people down a trail towards different attitude changes. 

Sandra Matz [00:49:30] Yeah. Mm hmm. 

Steven Parton [00:49:31] Sandra, this is my favorite topic, and I wish we could keep talking, but we did schedule a limited amount of time here. So before we officially end this conversation, I want to give you a chance to lay out any last ideas or thoughts you want to give people. You can talk about the book, where to find it, anything you're working on, whatever you'd like to discuss.

Sandra Matz [00:49:51] So that's a great question. Well, it's a dangerous question, because if you give academics the stage and say, talk about whatever you want, they'll take the stage for another half an hour. I'm good. There's one thing that just kind of popped up in my mind as we were having this conversation, and I've written about this a little bit before: this notion of consumer control, which I think is such an interesting one. Because if you think of personal data, a lot of the regulations these days have placed very strong emphasis on the idea that the user should be in control. We just have to increase transparency and give users control, and they will know what to do with their data. It's their data, so it's their right to navigate that world. And the reason why I talked about these small communities is that I want you to have a say over what happens with your data, but I want you to have allies that help you navigate it properly, right? Because the way that I think about the GDPR, which is the data protection regulation in Europe, is that in a way it's throwing you into this raging sea, and we expect you to magically know how to navigate your data. And it's impossible, right? It's impossible to fully understand what companies could do with your data, even if you really wanted to. It would be a full-time job; even if you just wanted to get close, you would have to spend 24/7 trying to understand, for every data point that you generate, what's happening with it. And people don't. When you download an app on your phone, people don't even check whether the app wants to tap into the microphone, all of the pictures, every single intimate piece of data that's on the phone, right? So nobody is going to take the time to make sure they fully understand which data is collected and what is happening with it.
So that's why I feel like we can't just give people the right to control their own data, which right now is really more of a responsibility. I think we need this expert crew, an expert team of other people, because once you have a small community, you can also put people in place who know what they're doing. You can hire full-time people to look after your data in a way that is beneficial to you. That's why I love the small communities: it's not as averaged as regulation, but it also doesn't mean that you're suddenly all by yourself, trying to navigate a world in which you stand absolutely no chance.

Steven Parton [00:52:12] Yeah, I mean, I love these ideas. What I'm taking away is: have a small circle of trust, but also be an explorer. Go and explore. That's a great combination.

Sandra Matz [00:52:23] Yeah. I'm glad that that's what you took away. 
