
Techno-Optimism & the Equality Machine

This week our guest is law professor, author, and distinguished speaker, Orly Lobel, who recently published her latest of three books, The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future.

In this episode we explore the thread that weaves through Orly’s work, which often emphasizes the way economic markets, technology, and the human condition interact. Specifically, we take a deep dive into the limitations and future of intellectual property and monopolies; the way in which cynicism and unrealistic standards hold us back from seeing and implementing humanistic technological solutions; the way in which technology could bring more equality to our societies, and much more.

You can find more about Orly and purchase her book at orlylobel.com, or follow her at twitter.com/OrlyLobel


Host: Steven Parton - LinkedIn / Twitter

Music by: Amine el Filali


Orly Lobel [00:00:00] We've always had a history of human bias, and that's actually a really important part of The Equality Machine: we should not be measuring automation against some kind of ideal perfection, you know, having some system that is unbiased, but rather we should always be asking about comparative advantage.

Steven Parton [00:00:36] Hello everyone. My name is Steven Parton and you are listening to the Feedback Loop on Singularity Radio. This week my guest is law professor, author, and distinguished speaker Orly Lobel, who recently published her latest of three books, The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future. In this episode we explore the thread that weaves through Orly's work, which often emphasizes the way that economic markets, technology, and the human condition interact. Specifically, we take a deep dive into the limitations and future of intellectual property and monopolies, the way in which cynicism and unrealistic standards hold us back from seeing and implementing humanistic technological solutions, and the way in which technology could bring more equality to our societies. So without further ado, everyone, please welcome to the Feedback Loop, Orly Lobel. So I'm going to go ahead and pick up on what we were just talking about before we started recording, which was that all of your books kind of go together. I would love to know what story you're trying to tell with those three books, what ties them together with that single thread.

Orly Lobel [00:01:57] Yeah. All the books are about how we shape markets and shape our culture and our society through decisions that we make collectively and individually: our social norms and ethics, but also policy. And how we can actually make our markets better.

Steven Parton [00:02:22] And so was the latest book, The Equality Machine, planned as part of this? Because it feels like it has a much more technology-driven focus.

Orly Lobel [00:02:31] Right. So The Equality Machine is about technology and our digital future. I will say that Talent Wants to Be Free, my first book, and You Don't Own Me, my second book, do have a technology tilt in the sense that they are about intellectual property, innovation policy, market competition and competitiveness in general, about who invents and who owns innovation. But The Equality Machine, my new book, is definitely the most technology oriented, in that I look at the current developments in automation, at the progress that we've been making with artificial intelligence, with data collection and data mining. And I suggest that our conversations, the conversations we've been having as a public, as a community, in the media, and also by government, kind of what we see in policy, have been pretty flat and skewed in a lot of ways. So if you look at the trajectory and the public perception of where we're going with technology, there has recently been a very alarmist take on where we are and where we're going. So in the book, I show that we need to make sense of what we really fear, what is concerning, what the risks are. But we also very much need to emphasize the potential and the opportunities that are coming with automation, and a lot of times that's missed in the public conversation that we're having.

Steven Parton [00:04:33] Yeah, it seems like you're kind of saying that we're living between two polar extremes here, especially in terms of policy and culture. One is flat, and we do basically nothing; it's kind of an impotent approach to policy. And then the other side is pure panic: the AI is going to destroy us, social media is destroying our brains. Is this kind of capricious and very extreme approach to technology what you're rallying against here?

Orly Lobel [00:05:04] I think that's exactly right. But they're connected. So there is that impotence, where we see government and policymakers kind of not understanding what's going on and then saying, you know, we can't do anything. So very recently, I actually critiqued a long Federal Trade Commission report that was commissioned by Congress. Congress asked the FTC to look at how we can use AI to battle online harms. And you see this report that is nearly 90 pages long saying, oh, there are all these tools, which, you know, is happening, industry is adopting them. But then the conclusion is: it's too rudimentary, we don't know enough, it can cause harm, there's bias, we all know about AI bias. So how about we do nothing, Congress? And that's the recommendation. And then you see the press releases that are telling the public about the report, which are even worse; they go all the way. I actually show this in my writing: they, in fact, allude to kind of Hollywood images of the AI rising and killing its programmer, as if that explains why the FTC said do nothing. And so those two polar opposites, the alarmist and the kind of nonintervention, I actually argue are very much connected and interrelated, in the sense that we are letting a very, very small set of people, kind of the insiders, shape the direction of technology, and everybody kind of on the outside, our critics, are talking about the wrongs, about the fails, and not really having skin in the game. And really, the book, The Equality Machine, is all about how we all need to have skin in the game, because we all need to shape our future and have, you know, the vocabulary, the common sense, the ability to challenge conventional wisdom about where we are, where we're going, what the advantages are, and what the risks really are of different systems that are being adopted.
And only then are we actually going to have this rich conversation of, you know, there are certain things that are better than others. There are certain technologies that are safer, that are more accurate, that are more fair, and others, you know, maybe we should opt out of or prohibit or ban. But we can't have this kind of very flat idea of, oh, we fear everything, it's all biased, and we shouldn't use it. And then, you know, the train has left the station, as you know very well; it's happening anyway. So there's that paradox, exactly like you described, kind of that paradox of the two poles: there's so much going on, but we fear it. And then there's a small set of people and a very small set of places, of companies, and actually of regions, that are shaping our future.

Steven Parton [00:08:53] Yeah. And how do we, I guess, open up the doors to those kinds of walled gardens, so to speak? Because when I think about the technological landscape as it exists right now, I think there was that book, The Big Four or The Big Five or whatever it was, talking about Facebook, Google, Apple; you know, you have Twitter. And these companies are dominant. There's a handful of them, yes, but they basically have monopolies on the space. How do we take those important decisions that they're making, with their billions of dollars in revenue, and spread them out to the masses, so that there is more ability for other people to join the conversation? Do you know what I mean?

Orly Lobel [00:09:37] Yeah, no. I mean, those are really tough questions. And, you know, competition policy is something that is a frontier right now; antitrust law is kind of rethinking what makes these companies, these big five, these category kings. That's the kind of puzzle that has been bubbling among economists and people like me who study online platforms, whether it's social media or search engines or ride apps or kind of sharing-economy gig apps. It just turns out that in each category, these companies reach maybe not a monopoly, but at least a very dominant place in the space. And there is a lot that we can do there. So I mentioned my first book, Talent Wants to Be Free. It's actually all about how we need much more competition and labor mobility, job mobility, movement in between jobs. And there is a role for policy there. So, for example, we should not allow these draconian do-not-hire or do-not-poach agreements, non-competes, overreaching definitions of what proprietary information is, what trade secrecy is, innovation assignments that, you know, probably a lot of the listeners know about: signing away pre-innovation, before ever conceptualizing new ideas, signing away so much of your knowledge. So this is kind of the classic example: Zuckerberg starts Facebook and the Winklevosses claim his platform because he had informally worked for them. Actually, my second book, You Don't Own Me, is also about these kinds of dynamics with overreaching employment contracts. So that's a role for policy, but there's a lot more.
So in The Equality Machine I look at public options. It goes back to the question that we talked about before: if we're not really nudging, pushing, training more people to have the capacities to be in the game, to be part of the conversation, then these technologies stay very concentrated and are used in kind of very narrow ways. But we can spread them, you know; we need to scale successes, and we need to scale the kind of AI for good that each chapter in The Equality Machine is pointing to. So I go to examples in health, in advancements in hiring, in software for pay equity, in the online content moderation that we mentioned before, in search engines and digital personal assistants. The more people that have a voice in saying, this is what we actually want, this can be better, we don't want to accept the defaults that are given to us by Google or Amazon, we want something that is more open source, we want something that has many more choices, the more we will be seeing, and we are seeing, more options and kind of a more competitive and broader market in a lot of those contexts. But yeah, it's something that we're kind of constantly battling against: having too much concentration of power and understanding what shapes that.

Steven Parton [00:14:17] So do you think those IP laws are going to change in the near future, or are going to have to change, as we move towards that decentralized, open-source future, if we're going to bring people into the conversation and maybe address some of these issues?

Orly Lobel [00:14:35] Yeah, I think there's already a real reason and role for rethinking a lot of our intellectual property laws, even in the low-tech context. There are, I think, years of research showing that we've been over-expanding and overly entrenching knowledge and ideas through kind of policing the remixing and policing the uses of so much valuable human progress. And I can tell this story on the patent front and the copyright front and trade secrecy, which is an area where I've done a lot of research, and also trademark. There's a lot of empirical research that we've just been overusing these protections. It's like the tragedy of the anticommons, really: we're using the registration of marks and the sticks and the swords of the patent system to really kind of get rid of entrepreneurs, to slow down startups. And the people that have more of the resources and kind of the longevity and the stamina to bring even really frivolous cases that scare investors, that scare off collaborators, will use those sticks. So again, I can tell this story even without the leaps that we're making in technology, and we're going to get to those. But certainly now, when we do have in every single field automation that is becoming so good, we need to talk about how good this is. The Equality Machine is all about how we need to talk about how good a lot of this technology is getting. I think, again, Singularity University is really kind of exceptional in actually talking about this. But the public debates, the kind of mainstream debates that I've researched, are more about how bad it is, how unready it is, or they make the leap to how it's good in a scary way. So there's kind of that paradox, too.
But now that we have automated systems that can create music, that can create art, novels, that can solve all sorts of problems, we really need to rethink the idea behind intellectual property: incentivizing a single individual or corporation to invest in research and development and then get a monopoly. You know, the patent system is really a monopoly. The copyright system is a monopoly. It's a temporary monopoly of exclusivity over a body of information and knowledge for what is supposed to be a limited period, but has been extended, definitely with copyright; it's really not a limited period, because it goes over a lifetime from a human perspective. So it really is a moment that is ripe for rethinking all of these systems: how does innovation happen, what are our human incentives, what are our collective incentives to invent, to create, and how is technology reshaping that?

Steven Parton [00:18:52] Yeah, I love that, because I don't think, really until you started talking about this, that I made that connection between IP law and, let's say, a form of oppression. It doesn't provide equality, because it keeps people from being able to express themselves, to start businesses, to try to build a life for themselves outside of one of these monopolistic companies, for example. Are there other ways that you see IP law maybe holding people back, beyond the economics of things? Is there an equality aspect to IP that we haven't talked about here?

Orly Lobel [00:19:31] Well, I mean, when you look statistically at who gets patents registered in their names and who then profits from them, first of all, there's an inequality here between individuals, kind of the standalone image of the individual creator, versus the big corporations. And so that's part of it. And of course, the market is less competitive; we're going to see, even when an individual is the inventor, that the assignment goes to big corporations. But there's also an inequality in terms of gender and race in patenting and in other forms of intellectual property. There are multiple sources of these problems. So I talk in The Equality Machine about the ability to find out where you're valued, where your talent is most valued, and where your human capital is going to be utilized in the best way. And again, I think technology really helps with that. So for years, the talent market has been not only unequal, you know, having these biases and valuing certain profiles, certain demographics, but it's also been very concentrated, kind of bringing people in through word of mouth, not really allowing a lot of job hopping for certain people who don't know about other jobs, who maybe fear a breach of a non-compete. And digital platforms are changing that. We're also connected now with information about jobs that's really global. So, you know, kind of think about LinkedIn, LinkedIn 3.0, different kinds of platforms that let you know your worth. So one of the puzzles has been: why doesn't the market correct for pay gaps? And again, people don't really know their worth. There's this kind of information asymmetry that's always existed between talent and the companies that hire them.
And we've had these social norms that are infused with a taboo about talking about your salary or your compensation schemes, including what you're rewarded for, whether you're rewarded when you invent something within a company, when you have a patent to your name. And again, so much of this is changing. It's kind of an endogenous cycle where you see the platforms kind of crowdsourcing information; there are actual apps that are called Know Your Worth, and you can kind of find out if you're underpaid. And then we also are changing our norms because of that, and we're kind of more willing to job-hop, to demand more. We saw this, I guess, in the extreme with the Great Resignation; it's really the great reshuffling, of people really understanding, rethinking what they value in terms of their careers and their professional development. COVID has also accelerated our abilities to work remotely, to telecommute, to work globally, to travel the world while we're working. So all of this is enabled by technology, but it's also part of our norms: what do we value, and how do we share information, and how do we create? I'll say one more thing about this, which is that, again, we kind of have differences in social norms on whether we want to patent and to contain, to kind of draw circles, like walls, around the data that we collect, the information, the innovation that we have, versus sharing. And, you know, there are different communities that are much more about sharing. We see this in research institutions, in the norms there. So you have choices: you can present a leap in, like, an algorithm that you've been working on, you can present it in a scholarly setting or at a conference, or you can hide it. You can deem it proprietary; you can try to patent it, if it's patentable.
And, you know, I think that we have different social norms in different communities, even in different countries, different demographics. So there is some evidence that women, for example, have this kind of more altruistic, other-regarding orientation, wanting to collaborate more, and maybe, whatever innovation is there, they're happy to share it more freely and not be the kind of homo economicus of, I'll just share it if you sign an NDA and pay me some royalties for it. Yeah, there are lots of things that are kind of interesting in these corporate cultures and other institutional cultures and how power sharing happens. But all of this, I think, again, is being challenged by the speed of technological innovation and how technology is enabling so many more leaps in creativity and invention.

Steven Parton [00:26:30] So you were talking there about incentives, I feel, maybe being one of the big issues behind where technology is going. Do you feel like that's a big part of what's at play here, that these economic incentives are making people feel like they have to not share this information so they can make money off of it, so that they're hypercompetitive?

Orly Lobel [00:26:54] No, I don't think I would describe it that way. I think it's good to have economic incentives. The for-profit motivation is definitely a good motivation to have in the mix, and in competitive industries and markets, it's more of this push and pull where, rationally, of course, if you have a profit motivation, you should actually be hiring the best talent, and you should look for the overlooked talent to include. We've always had a history of human bias, and that's actually a really important part of The Equality Machine, where, you know, we talk so much about algorithmic bias. There are all these bestselling books: books called Automating Inequality, books about algorithmic bias and The New Jim Code, proving that algorithms are known to have racial and gender bias; the book called Weapons of Math Destruction, which again kind of describes to us all the biases and inaccuracies; plus kind of pairing that with this idea of surveillance capitalism, that data is extracted and used in harmful ways. And The Equality Machine is not denying that there are risks of algorithmic bias. There have definitely been fails, and there are bad ways to design an algorithm that just replicate and amplify past wrongs or existing inequities in our society. But there's a very important point that we need to kind of keep with us in every discussion when we're looking at systems of, let's say, bot radiologists, you know, automating x-ray screenings or identifying breast cancer with mammograms, or if we're automating the process of hiring and promotion and pay, looking at who's evaluating different talent for different purposes, or for education, for credit and loans.
With all of these processes, it's so important to remember that we're not, or we should not be, measuring automation against some kind of ideal perfection, you know, having some system that is unbiased, but rather we should always be asking about comparative advantage. So we've had disparities and biases and gender and race discrimination in our markets, in our employment settings, forever, really since whenever human history started. And, you know, here at the University of San Diego, I'm the director of the Center for Employment and Labor Policy, and I oversee all the employment law and policy programs and also serve as an expert witness in employment discrimination cases. It's so frustrating to see companies that try to introduce diversity, that try to kind of hold to this ideal of equality, but inevitably, we all hold biases as humans. So when we talk about the black box of algorithms, I ask in The Equality Machine, you know, which is better: the black box of humans, our tiny brains that are also algorithms, or when we have some check on a system we introduce, like résumé parsing or whatever it is? And we can look at the outcomes, we can improve it, we can have it learn from past mistakes. And we have kind of a digital paper trail that keeps track of whether we're doing better on diversity, on equality, on accuracy, on safety, and all the norms and values that we want to hold close. So always keep that question of comparative advantage, remembering that we always need to decide in some way. There is no kind of decision-free world.
There's no status quo that is free from problems, and there's a cost when we're not introducing automation into some systems. One of the things that really frustrates me is some of the reports. When you look at, for example, I mentioned radiology, and, you know, new systems that can screen for different diseases, for different cancers, you'll see a report saying, oh, it's only like 70% accurate, and so we shouldn't use it until it's 100% accurate. So we're holding this double standard for humans and machines. And also, even when it's compared to human radiologists, you see these reports of, well, we take two of the most expert human radiologists, and they are as good as, or maybe they slightly outperform, this new automated system. So the question that we need to ask from an equality perspective is: is it realistic that most people have access to radiologists, or to expert radiologists? And of course, the answer is no. So again, from a kind of equality perspective, thinking about the potential of digital technology, we need to ask, A, the comparative advantage question, and B, the kind of realistic question of cost and the ability to scale systems, to improve systems, to have access to systems around the world, globally, improving health care and digital literacy and participation in our markets and our economic systems. So all of these questions are very near and dear to my heart, and that's kind of what I try to work through in the book.

Steven Parton [00:34:36] Yeah. So do you feel like this cynicism that comes at technology, the kind that seizes on the fact that it's not perfect and that it has some sort of bias, and maybe even the way the culture wars and the politicization of all this plays into it, do you think that's what's core here? Rather than seeing the bright side, as you talk about, of what could happen, you know, this could be medical care for somebody who couldn't afford it at all, and 70% is better than nothing if that person can get access. Are we losing sight of all of these fantastic routes that we could take because we want to criticize that technology's not perfect and that it has these flaws, being completely blind to the fact that if we use it, we might open the door for lots of treatment or lots of help for people that currently doesn't exist?

Orly Lobel [00:35:38] Yeah, I think that's right. I think there's a cynicism, and there are kind of these cognitive fails. From a behavioral researcher's standpoint, we would call it a status quo bias and loss aversion, where losses loom larger than gains when we're thinking about change. So that is in play, I think, when we're thinking about introducing new systems and new processes. There's also very much a mindset, and maybe this is the cynicism in its focus, that I think is very individualistic and probably more American than what you see in other countries: this idea of anti-surveillance, anti-sharing of your individual information. So I show in The Equality Machine that oftentimes we have this privileging of privacy in ways that are really harmful for the common good. Again, not that privacy is not important, not that we shouldn't value certain instances of privacy, but we have to have a much richer understanding of how privacy over the years has served to exclude, to cover up wrongdoings by the powerful against people that are more vulnerable. So this assumption that automation and data collection are going to harm the more vulnerable in patterned ways is simply not true. I show at every step of the book that oftentimes it's really the people at the edges, the people who don't have access, that are being harmed, by not being included and counted. And we should count what matters. And so all of it is kind of this mindset of, you know, if we understand that we can use technology as a force for good, then we have a much better chance to actually carve the path forward in very productive and brighter ways. As you alluded to, it's the subtitle of the book: Harnessing Digital Technology for a Brighter, More Inclusive Future.
You know, paradoxically, when we're just kind of in this mindset of criticizing and being fearful, as we mentioned before, the technology is going to be used anyway, but it's going to be used in ways that most of us don't have a say in and, you know, don't understand.

Steven Parton [00:38:46] You spoke there a little bit about the difference between the American way and maybe the European way, for instance. And it makes me think of one of my first interviews, with a woman named Jenny Kleeman, who wrote a book called Sex Robots and Vegan Meat. And she was concerned about artificial wombs and the way automation was affecting women, because she saw it, I believe, and I don't want to speak for her, but it seemed like she strongly believed it would devalue women. But I believe, when I was reading your book, that you actually think maybe the opposite: that it might free women from this idea of women's work and kind of open up the landscape for them. Is that fair? And again, could you speak on that a bit?

Orly Lobel [00:39:29] Yeah, I think that's absolutely fair. There's a lot to unpack in your question, because I can talk also about a lot of cultural and kind of global differences. So I do actually look in the book to understand why Japan has been so much of a leader in robotics and automation, and at their kind of postwar culture. A lot of it is cultural histories of how we understand human-machine relationships and how we can embrace, rather than fear, some of these developments. So I look to Korea and Japan and Europe, and Europe is a whole other story I can talk about. But on your specific question, this is a very interesting set of questions about our biological differences, from a gender perspective, from sex. As you've seen in the book, I have a whole chapter about sex robots: what do we think about sex with robots, and how we think we know what we think about sex robots until we actually try or engage with these accelerating developments, in sex tech in general and then specifically in reproductive technology. So I think you're absolutely right that my stance is that any technology that frees us to reimagine ourselves separate from the biological constraints that we've been born with is an engine for equality. It is part of an equality machine in the sense that we've been so indoctrinated, whether it's nature or nurture, that we have, again, this very stagnant, very sticky divide in our society around the role of mothers. And of course, we see this in very alarming and really tragic ways right now in the United States, with wanting to control women's bodies, wanting to limit reproductive rights and choice. If we leverage technology to, again, give us the liberties, the freedoms, to have more options, to have more choices, I think that's, you know, it's scary in some ways; there's no denying that it can be misused and abused.
And there are a lot of ethical questions that we need to grapple with. It's a fascinating conversation, but that's what we need to be doing. We need to be reimagining and, again, with that kind of idea of skin in the game, understanding what the right boundaries are, and not just saying, oh, I don't want to think about that, that's too much, an artificial womb is either too sci-fi or kind of categorically unethical in some way. We really need to do much more to understand what we can envision: what do we have as part of our humanity that will allow versus constrain us in moving forward with some of these options?

Steven Parton [00:43:51] Would it be fair to say you think we need to stare into the abyss a bit more so that we can find the gold that's hidden there? Because that's kind of the vibe I'm getting from this: we're so afraid of looking into the darkness to find the good that we don't find the good, and we only see darkness. 

Orly Lobel [00:44:08] I love that. I love that framing. And I think that's always true, right? It's part of our humanity to live an examined life, to be part of a collective that looks at what we are, who we are, and really asks what we want. And so my argument throughout is that we've always had a lot of things that we value. You asked me before if it's money that corrupts, if it's individual incentives and the for-profit motive that corrupt. And I said, look, we have a lot of different competing values and competing wants and desires in our society. And that's a good thing. That's part of a democratic free society: we value community and we value individualism, we value privacy but we value safety. And you see some of these tensions. I talk about them in the context of COVID, for example. Yeah, you want to protect your privacy about your movements, but maybe there are moments where you have to give up your privacy to stop an infectious global pandemic from spreading. We value free speech, but we also value equality and inclusion, and those are in tension. And that's always been the reality. So as a policymaker, as a law professor, I teach these things. I feel very deeply that we always need to have some balancing act, some kind of richness. How do we hold all of these values and norms and future wants as one unified blueprint for our society? And I see technology as helping with all of that. So not disrupting it so much as allowing us to stare at the abyss, as you called it, in actually more sophisticated ways. One thing is that we can actually know so much more with technology about the root causes of inequities. We can see them. The data gives us so much more information. 
So I start the book by describing some new insights that come from just having machine learning applied to data scraped from platforms like eBay or Airbnb, and seeing it in an illuminating way. We kind of have a sense that some platform designs will create more gender equality or less, more racial bias or less. And we can see this, we can study this, in so much more consistent, robust ways than we've ever been able to before. So we can see it, we can study it, we can detect, and then we can actually improve. So, kind of putting together that call for staring at the abyss, which I love the way you describe it, with the fact that we now have much more of the capability to do that, with my call of, you know, don't always privilege privacy, because we need that data to actually combat things we don't like and to improve our algorithms to become more accurate. We don't recognize that tension in and of itself often enough. When you look at the EU policies on privacy, like the GDPR and the draft EU AI Act, and you look at the policies before Congress, there's a kind of double framing: we need to slow down automation because it's inaccurate and biased and we should always have a human in the loop, and also we need more privacy and data minimization. But there's a link between them: the more data we have, the more accurate our algorithms become. We need to recognize that, and we need to talk about that, too. 

Steven Parton [00:49:19] And it seems like in a world that is obviously becoming increasingly complex because of technology, if we don't have some data to track that complexity, we're probably going to get left behind. There's going to be a chasm that forms between our understanding and the complexity of the world, I would think. 

Orly Lobel [00:49:36] Exactly. But there's also so much good to celebrate. Again, one of the pleasures of researching this field and then writing The Equality Machine has been finding so much to celebrate: the advancements in AI that are tackling climate change and environmental issues, and health, which we talked about, and endangered species, and poverty alleviation, and agtech, you know, agricultural advances around the world. So there's something about, like you called it, cynicism, but maybe it's just human temperament, and sometimes the media: they want to report all the wrongs much more frequently than the goods. And I think this is a moment where people are thirsty to actually learn much more about all the good. And again, it's kind of a virtuous cycle, where the more we talk about the potential and the good, I think the more people will be inclined to be part of that. 

Steven Parton [00:50:58] Yeah, absolutely. Well, I know we're coming up on time here, and unfortunately we haven't been able to get into all that good that you talked about. There's so much in your work that there are a lot of topics and tangents we haven't been able to explore. But with these last few minutes that we have, do you have any closing thoughts or ideas you'd like to share, or anything you'd like people to know about the book? Anything at all you'd like to talk about? 

Orly Lobel [00:51:24] Well, I love your podcast and I love all the projects, so I'm eager to hear from listeners. The book is a blueprint and also a call for, you know, just more of a conversation. So you can connect with me, Orly Lobel, on all the social media platforms: LinkedIn and Instagram and Facebook and Twitter and the rest of them. And I look forward to hearing more thoughts and reactions. 

Steven Parton [00:52:06] Yeah, absolutely. Well, I really do appreciate your time, and thank you for joining us. 

Orly Lobel [00:52:11] Thank you. It's a pleasure. 

