This week our guest is Rob Reich, a professor of political science at Stanford University and co-author of the recently published book System Error: Where Big Tech Went Wrong and How We Can Reboot.
In this episode, we focus heavily on how the tech industry’s obsession with efficiency and optimization has often meant sacrificing our values and even democracy itself. This includes conversations about data privacy, the tension between recklessly fast innovation and mindful but slow progress, concerns over China, the job market, and much much more. Additionally, we discuss some optimistic and very actionable steps that individuals, universities, and businesses can take to help society reboot our failed relationship with Big Tech.
Find the book on Amazon, follow Rob at twitter.com/robreich, and stay tuned for Rob's upcoming class on these subjects at systemerrorbook.com.
Host: Steven Parton - LinkedIn / Twitter
Music by: Amine el Filali
Rob Reich [00:00:00] The question for me, to a technologist, is: is there a uniquely optimal point at which we get the right balance of values between privacy, innovation, personal safety, and national security? Or is it more likely that, in the face of ever-shifting social circumstances, we're groping for a tolerable balance among them? This is not an optimization problem. It's a social problem to solve, where it's very unlikely we ever have an optimal point.
Steven Parton [00:00:41] Hello everyone. My name is Steven Parton, and you're listening to the Feedback Loop on Singularity Radio. This week our guest is Rob Reich, a professor of political science at Stanford University and coauthor of the recently published book System Error: Where Big Tech Went Wrong and How We Can Reboot. In this episode, we focus heavily on how the tech industry's obsession with efficiency and optimization has meant sacrificing our values and potentially even democracy itself. This includes conversations about data privacy; the tension between recklessly fast innovation and mindful but slow progress; concerns over China and which of those two approaches to technology they'll be taking; the pipeline of jobs that goes from some of the most educated university students to either the government or the tech industry; and much, much more. Additionally, I'd like to add that I found this conversation especially enjoyable because Rob brought a lot of optimism and actionable solutions, both of which are things I feel we often lack when we're talking about some of these deep societal issues around technology. And with that being said, I'll let us get straight to some of these ideas. So, everyone, please welcome to the Feedback Loop, Rob Reich. Usually with authors, as you might know, and as listeners definitely know, I like to ask one question, which is: what was the motivation to write the book?
Rob Reich [00:02:16] Oh, yeah. Well, that's actually an easy one. It may be hard to answer that question in general, but for this particular project, it's pretty simple. I showed up as an assistant professor at Stanford a little more than 20 years ago and had nothing to do with technology; I do work in political philosophy. But living in Silicon Valley for the first time, I had a clear view into the extraordinary transformations that were being championed and pioneered not merely by the tech companies around campus, but by many of the faculty and students on campus. And then about ten years ago, computer science turned into the largest major on campus, both for men and for women. And there was what you would call a kind of brain drain of undergraduate talent voting with their feet, majoring in political science or the social sciences in much smaller numbers and choosing instead to major in computer science. It wasn't much of a mystery why that might be; I didn't need any big explanation about what was going on. But I also wanted to learn how it was that students reported to me that computer science is a hard major, not an easy major. Unlike premed courses, which were known among the students as weeding out the talented from the untalented, computer science somehow became known as a major that you could do independent of the training you'd had in high school. So if you'd never taken coding classes, this was a major that would still find a way to bring you into it, which seemed to me like a pretty extraordinary accomplishment. And I wanted to learn more about how the computer science department was carrying out this extraordinary project. And I got to know my now coauthor and collaborator, Mehran Sahami, who has been the lead teacher of the introductory course in computer science for more than a decade: CS106A, as it's known at Stanford.
And we started talking a bit and found that we had some pedagogical interests that aligned, and then brought our third collaborator into the mix at a certain point, when it became very obvious that some of the ethical and social considerations of big tech in particular were impossible to ignore. About five years ago, we cooked up the idea of teaching together. So the book grew out of an initial decision to collaborate on a class. We didn't intend at the start to write a book, but we found ourselves working together in a way that we thought had developed some really good material. We'd met with some success in teaching a large course that was meant as a kind of cultural intervention on campus, so that the extraordinary technical skills you can get at Stanford would be supplemented with ethical and public policy frameworks for thinking about the power of being a technologist, and, reciprocally, so that people who were philosophy majors or political science majors didn't view technology as a kind of object to interpret from afar, but gained some technical skills themselves and were in dialog with people who were at the forefront. And I'll say just one more thing. The kind of bet we made on the class was that we wanted this to be a class which had technical assignments, policy memo assignments, and a philosophy paper. To the best of our knowledge, it's the only class of its kind that integrates all three of those things into a single class, rather than the kind of thing in which you get a CS education or technical education and then take your ethical reasoning requirement by enrolling in a small seminar over in the philosophy department. We wanted everything to be bundled together and integrated, and that's what the book attempts as well. So the distinctive aspect of the book, I hope, is this intermixing of a technical, policy, and ethical framework.
Steven Parton [00:06:50] Yeah, I love that. I think in the book you mention that the programmers and engineers of today are equivalent to the political philosophers and economists of the mid-1900s. But those economists and philosophers kind of had a big-picture way of thinking about the world; they were integrated into this complex-systems, critical-thought kind of dynamic. Whereas computer science typically, and I actually have a computer science degree, so I'm familiar with it, is very much like: here's a problem, fix that problem, don't worry about the network effects too much, right? And is that where you think a lot of these problems might have been arising, that we are churning out problem solvers who don't have that big picture?
Rob Reich [00:07:35] That's a good way to put it. You know, tell me if this resonates with your own computer science training, but I think you captured it pretty well. In my own mind, one gets trained as a technologist to be a problem solver, and the secondary effects or tertiary effects or broader network effects aren't part of the initial objective function or problem statement. And so, you know, those can be solved for later, or in some broader social division of labor it's someone else's responsibility to figure that stuff out. And in the 1970s and eighties, a CS education didn't yet give you extraordinary power in the world, you know, being a 23-year-old founder of a company, or being able to simply be a coder and then roll out your code to millions or potentially even billions of users. Now, when being a computer scientist comes with an invitation very quickly to acquire social power through your technical skills, and then, of course, the money and status that quite frequently go along with it, well, it becomes, you know, almost a responsibility of the computer scientist to take on board these larger social questions. And the class was an effort to try to instill that framework or orientation at the earliest possible stages of the formation of the technologist's mindset. And then we also began teaching a version of this class to people in the tech industry, trying to reach out to people who are already more mature technologists, so to speak.
Steven Parton [00:09:18] Yeah. And I mean, it feels like that narrow problem-solving focus gets to the point that you and your coauthors were focusing on a lot in the book, which is the drive toward efficiency and optimization at all costs for that single small problem, without the larger awareness. What are some of the things that you and your coauthors saw, in terms of the things we might be blind to, that come from that sole focus on optimization?
Rob Reich [00:09:47] Yeah, perfect. So, I mean, as a philosopher, I don't think anything I'm about to say would be surprising or possibly even interesting to someone else who was a philosopher. But I was kind of astonished when I began to talk with people who were 18 or 20 years old, even some of the faculty in the computer science department, that the following seemed like an important insight, which is: efficiency and optimization are not first-order or primary goods. They're not intrinsically valuable. You only want to make something more efficient if the thing you're making more efficient is independently valuable as well. And I think in many cases the technologist is kind of taught that this extraordinary skill set of optimizing a solution to a problem, an objective function that gets optimized, is itself intrinsically valuable: the optimizers show up, and we're the problem solvers, let us get to work. And I want to ensure that technologists pay as much attention to the worthiness of the problem, or the objective function, as to the optimizing solution. And that includes then thinking about the second- and third-order effects. There are lots of examples of this in the book, but just to give a simplistic one: we install speed bumps in the roads that go by schoolhouses because, although it would be more efficient to transport people at higher speed from point A to point B, there are other considerations at stake. And so we sacrifice, as it were, some of the efficiency of going from point A to point B in order to get some of the safety value we think is important as well. That's another way of saying, in the technologist's language, that for the kinds of things you're solving for in the world, there are often multiple values at stake. And if your problem statement only incorporates one or two of the relevant values, optimizing for that value will
upset a larger social balance that's been, you know, sometimes implicitly but other times explicitly negotiated. We tell people in the jury box: don't go and deliberate as quickly as you possibly can so we can get on with it. We say take your time and deliberate. We've built in, as it were, a certain type of inefficiency in the name of ensuring that we get the wider balance of values that is so important. And, you know, the optimizing mindset is super powerful when deployed or directed at a narrow technical solution. But again, as you acquire social power, as your products or platforms are used by millions or billions of people, with all of these second- and third-order effects and the wider social values that come up in those questions, that optimizing mindset can go awry. And at some highest-order level, this is the kind of thing that really arrested my attention four or five years ago. I was in a conversation with a bunch of venture capitalists and, you know, kind of prominent technologists, and the dinner table conversation turned to: what would it mean if we could find a plot of land on earth that was devoted to the maximal progress of science and technology? And there was a conversation about where you would do it, an island perhaps, or how you would find a good plot of land, and what the citizenship requirements would be, who we would admit to this plot of land. And eventually I raised my hand and said, you know, I can't figure out what the governance arrangement is here. Is this a democracy we're talking about? And pretty quickly, people said, it can't be a democracy. Democracy holds progress back. The optimizing mindset, when directed at political affairs themselves, gives the technologist, you know, a not especially powerful reason to value democratic governance, because it slows things down and it's better to maximize the progress of science and technology.
That's a classic, to my mind, misunderstanding about how optimization fits within a broader social and political order. Democracy's value, in my view, is not to maximize or optimize anything in particular. It's a setting in which people who disagree with one another can be treated as free and equal and, through a constant negotiation process of legislation and elections, find ways to get a wider balance of values that's temporarily satisfactory to a number of people, with a constant possibility of updating and revising as you go along. There's no social output that one is maximizing.
Steven Parton [00:14:53] Hmm. How do you think we reconcile that dynamic? Because you're getting into a realm here where we're talking about tech businesses that potentially, in some ways, have more data on citizens than the U.S. government does, that have potentially more power, but that have shareholders who are incentivized toward profit. You know, arguably that's a byproduct of efficiency and optimization rather than these social impacts. And when we consider the fact that in many ways these platforms are now the town square through which we organize our governance, this feels like, at this moment, legally at least, a really irreconcilable struggle.
Rob Reich [00:15:36] When you put it that way, my inclination is to situate the current moment in a longer historical pattern of scientific discovery or technological innovation that then gets brought to the marketplace through some type of commercial enterprise. There's typically a sorting in the marketplace over a course of time and a consolidation around a few dominant players. Think back to the railroad era, for example, and Stanford, not coincidentally, founded by one of the great robber barons, the railroad barons of the first Gilded Age. And then these negative externalities or civic externalities become clearer, because we get a kind of congestion in the innovation opportunity itself, and startups are either acquired by the dominant players or snuffed out by them. And then the second-order effects become ever more visible. So for obvious cases in the current moment of big tech, we point to things like algorithmic bias and unfairness, privacy abuses through the basic ad tech model of many of the big players, automation that displaces and transforms the experience of human labor in the workplace, and then misinformation, disinformation, and hate speech on the social media networks. And those are just the obvious ones; there are lots of others. And insofar as you take a maximizing approach to profit making, it's rational behavior for these companies to continue doing many of the things they're doing, because they're not tasked by law with looking out for the civic externalities that their platforms or products have. And so, over the course of time, regulation typically comes into place to try to internalize some of these externalities, and then wider kinds of pressure get put on the companies, sometimes from within by their own employees and sometimes by civil society organizations or antitrust approaches, etc.
And, you know, our diagnosis is that we are in the early moments in the United States of a window for that type of reaction to the recent consolidation of power in big tech.
Steven Parton [00:18:01] Do you feel like that? You know, everyone thinks that their age is different, and I think we all try to argue that this age is, in fact, uniquely different. But there does seem to be something about the fact that... let's take the Overton window, for example. For listeners who might not know, it's basically the acceptable range of discourse that's allowed by society. It seems like social media in a lot of ways controls the Overton window and increasingly seems to narrow it. But that seems to be antithetical to the kind of policy shifts you're talking about. It seems like that could potentially undermine our ability to take that historical precedent and bring about change. Do you worry about that?
Rob Reich [00:18:47] Absolutely. And here I'll speak in the abstract, though looking historically I can give examples of this. Democracies tend, over the course of time, to be problem-solving entities or institutional arrangements. Democracy is an institutional design for social problem solving that's slow and inefficient and often broken. And at the current moment, of course, our U.S. institutions are deeply dysfunctional. So there's a genuine challenge there, and that actually might be what's distinct about our era: it's been a long time since our democratic institutions have been as broken and dysfunctional as they are today. But we get to take different tries at social problem solving through an array of different institutional mechanisms that, you know, begin to try to contain the problem, define the problem, and then make some modest progress, rather than the transformational progress that's the kind of technocratic vision of a small number of people. Anyway, that's all vague and abstract, so let me say something much more specific. You're right that the social media players these days are the platforms on which permissible social discourse is channeled and even constructed. And so it's not simply the power to down-rank or delete content that marks their power; it's the ways in which information is channeled and disseminated that gives the platforms their extraordinary power. Virality over veracity is one way to put it. And, you know, in the bigger picture, part of what I think about this is that the value of freedom of expression itself, the First Amendment and freedom of speech, is understandable, at least in part, in relation to the scarcity of information, or content in the current language. It was hard for many, many decades to speak or express yourself and then have it distributed and heard by hundreds or thousands or millions of people.
And really, the age of social media, the Internet age, is what's made the distribution and consumption of speech or content easy. We need search engines to locate relevant things online because there's an ocean of content. We need an algorithmic curation mechanism to surface the things that are relevant to us rather than irrelevant. We now live in an age of superabundant content. And despite that superabundance, which you might think opens up the Overton window, because you can in principle find a much wider array of information than you could in earlier eras, the painful irony, as you started with, is that it actually seems to narrow things. In part that's because of the opaque algorithmic curation, which we don't, as consumers or producers, understand all that well. And in part it's because the algorithms themselves are not, to use the technical language here, optimizing for civic health. They're optimizing for attention and engagement. And that has these second-order effects that have been quite damaging.
Steven Parton [00:22:18] Do you think we could see a shift in that? I'm sure we'll talk about surveillance later, but as you're talking, I'm thinking we kind of have a panopticon on the government, in a sense, just as they have one on us. We have all of these citizen journalists, basically, who are looking for every single mistake, every potential connection for conspiracy theories, any way to score points for their team, and they're generating this constant panopticon of information on the government. Do you think that could pressure us to make some of these policy shifts? Maybe we'll do something like put speed bumps on social media by limiting the number of shares you can have a day, to decrease that information saturation, or require government ID. Can you see that being part of the dynamic, or do you think that's not really the kind of pressure that's going to do it?
Rob Reich [00:23:18] Well, I think that'll be one piece of it. Some of the things you mentioned, putting speed bumps on the speed with which one can amplify information, are, let's call them, technical fixes on a larger problem. And through some combination of social pressure, maybe internal labor resistance within the companies themselves, and then external regulatory pressure, the companies might be motivated to do some of this stuff voluntarily, as it were, in reaction to these different countervailing pressures: internally to the company, install some speed bumps, say. But I think part of the more interesting and greater challenge is to think of this not just as a series of technical fixes but as a set of wider questions. So, for example, to bring up a kind of classic idea here: as happened in the era of telephone company consolidation, with the regulatory demand that your number be interoperable or portable to increase competition, could we have data interoperability, and even, you know, your social graph be exchangeable, so that people could choose whether to migrate from one currently dominant platform to another that might suit their own preferences more? We'd have a wider array of social networks that might be somewhat smaller in size, but where migrating from one to another is made much easier through what would be a regulatory and technical fix. So it wouldn't necessarily be a technical solution just within a company about speed bumps, but reconstituting the players, as it were, through some type of greater competitive dynamic.
Steven Parton [00:25:22] And that gets into, I feel like, more data privacy issues at that point.
Rob Reich [00:25:26] That's right. That's right.
Steven Parton [00:25:27] What are your thoughts on the current data privacy dynamic and maybe where you would like to see it go or where you're terrified that it is going?
Rob Reich [00:25:37] Yeah, well. It does seem to me that the typical business model for your, you know, startup and big tech company is ad tech driven: we take your data, we sell it to advertisers, and we build ever more powerful algorithmic models, in terms of their predictive accuracy, based upon these oceans of data that they collect. And I think at some point there has to be a reckoning with the set of value tradeoffs between privacy and the value of innovation. If we guaranteed privacy and didn't collect data, that would be bad for innovating around a set of other things, particularly, say, health care: health care data is super valuable when you aggregate it and then try to discover new things from it. Personal safety as well as, you know, national security are also rival values. When Apple decided that it was going to upload photos and scan them in the cloud in order to try to detect child pornography, it was trying to balance the value of safety for children against privacy for its users and their photos. And the classic iPhone case, you know, the FBI lawsuit after the San Bernardino terrorism, where the government wants access to the iPhone and Apple says it won't provide it in the name of privacy, illuminates these same value tradeoffs. And, you know, we all have in our back pockets an end-to-end encrypted messaging app, whether it's WhatsApp or Signal or anything else. Those are all in on one value, privacy, and neglect the presence of other values. So I don't think there's a technical solution. I'd like to go back to the optimization point. The question for me, to a technologist, is: is there a uniquely optimal point at which we get the right balance of values between privacy, innovation, personal safety, and national security? Or is it more likely that, in the face of ever-shifting social circumstances, we're groping for a tolerable balance among them? This is not an optimization problem.
It's a social problem to solve, where it's very unlikely we ever have an optimal point. So as a consequence of that, I want to know that technologists are thinking about things not merely with an optimization approach, but as a social question in which they're trying to strike a tolerable balance among genuinely valuable outcomes, you know, the things that we all care about. And I hope that a new generation of technologists, as well as a wider understanding of these questions among ordinary citizens and policymakers, brings about a change: to see technologies as introducing social problems for us to collectively grapple with.
Steven Parton [00:28:50] Yeah, let's zoom out from the West for a minute here, because that makes me think specifically of China and our relationship with them. As you're talking about this, that tension that we struggle with in the States for the sake of a good democracy, while it might be challenging, is a big part of making democracy work. But it does seem to me that there are going to be countries like China who tip the scales on the things you talked about and just say: let's go for data collection, completely get rid of privacy.
Rob Reich [00:29:24] That's right.
Steven Parton [00:29:25] It feels like we might struggle to hold true to our values in that sense if we know that doing so is going to make us lose, honestly, the hegemonic U.S. role over the globe and hand it over to China. I mean, what are your thoughts on that?
Rob Reich [00:29:41] I mean, in a certain respect, we call this in the book the "but China" objection, where any time someone introduces a seemingly reasonable regulatory approach, someone says: well, but China. If we slow down, this will have these geopolitical dynamics. And I understand the point, and I don't want to deny that there is indeed a geopolitical dimension to it, especially an arms race of a certain type, including these rival models of, you know, government surveillance. So my answer to that is to say something like: what we need geopolitically is a model of a robust and vibrant tech sector that has delivered products to a wide audience that are, call it, democracy supporting or democracy compatible, rather than tipping the scale in favor of autocratic or non-democratic regimes, for instance, that are super interested in government surveillance. To put it slightly differently, what we need geopolitically is a rival model for technological innovation that supports democratic values, so that there's something on offer. Just as in the Cold War, there was a kind of political model in which authoritarianism could deliver certain types of benefits, but also with certain costs, and then a model in which open societies, democratic societies, produce other kinds of benefits, sometimes with other costs involved. And rather than, as the saying often goes in Silicon Valley, America innovates and Europe regulates, and then over there is China on the rise, which puts democratic countries into competition with one another, we ought to have some broader understanding about a rival to China that's democracy supporting. That would then be a kind of beacon to non-democratic peoples, ordinary citizens, about what an alternative offering is, rather than everyone converging, as it were, on the surveillance-authoritarian model.
Steven Parton [00:32:02] Perhaps in the same sense that we see China now very much adopting capitalism, albeit in their own way.
Rob Reich [00:32:09] In their own right.
Steven Parton [00:32:10] They're they're still we made something look so appealing and worked so well, but they had to adopt it.
Rob Reich [00:32:15] Exactly.
Steven Parton [00:32:16] Yeah, that makes sense. Do you see ways in which we might also change democracy or government with technology? Like, are there certain ways that you think we could upgrade?
Rob Reich [00:32:28] Absolutely. I'm glad you asked that, because that's, in certain respects, one of the things I'm most excited about these days, you know, as a political philosopher. Living in the precincts of academic political philosophy or democratic theory, democratic theorists aren't typically up on the latest social challenges; we're still commenting on Plato as well as things more recent. And I think there's a small, growing, and important element of, call it, critical imagination, where rather than viewing technological innovation as simply a threat or a challenge to democracy, it's also an opportunity to try to reinvent democratic institutions so that they serve the interests of democracy even better. So let me give a concrete example of this. A colleague of mine, with whom I edited a book called Digital Technology and Democratic Theory, is a French democratic theorist named Hélène Landemore. She teaches at Yale, and she has a new book out called Open Democracy. Open Democracy's basic premise is that the very mechanisms of representative democracy are the problem with the performance of democratic societies today. Our societies are too large at scale, and the moments of active citizen participation are too infrequent: at the ballot box, and often nothing more than that. And digital tools and platforms offer us an opportunity to reinvent civic and democratic institutions themselves, to coordinate, harvest, and express civic voice with far greater frequency, and that might even, in certain circumstances, allow us to move beyond representative democracy itself. That's a really bold vision. But far shy of something as radical as that, we can look to the example of Audrey Tang in Taiwan as a way to think about how one can incorporate a technically savvy integration of a variety of different platforms and tools into the delivery of government services.
Steven Parton [00:34:45] Can you say more about them? I'm not familiar with Audrey and their work.
Rob Reich [00:34:49] Yeah. So Audrey Tang is something like the digital minister of Taiwan and has brought into government affairs a variety of different ways of trying to connect to citizen voice. She trained in Silicon Valley and, I think, worked at Apple at one point. During COVID, for example, there were much more transparent forms of digital health surveillance that were meant to mitigate the effects of COVID. They had extremely clever ways of handling electoral misinformation and disinformation that involved, you know, hiring people to do counter-meme programming on social networks rather than blocking the speech in some way. And basically, I think they are just more forward-thinking about the opportunities that digital tools present rather than the threats they also pose. One other example, just to point to it: Iceland a few years ago, and this is described by Landemore in her book and elsewhere, tried an experiment in holding a constitutional convention to revise their constitution. They threw the constitutional draft up online as a kind of wiki constitution, where anyone from across the world, who didn't even have to be a citizen of Iceland, could comment and update and revise it. Ultimately it had to be voted on by the citizens of Iceland, but the forms of participation were much larger and much more interesting. So again, I think we're in the early days of re-imagining how government itself, how democratic institutions, can be updated. I'll give you some really low-hanging fruit. Long ago, I was one of the early members of Teach for America; that's what I did just after I graduated college. You sign up for two years of teaching in a public school, with a little bit of summer training, before you become an ordinary public school teacher.
And part of the way that program worked then, and still works now, is that it's a kind of civic call to people to serve the country by trying to make a difference in public education and social mobility in the most disadvantaged school districts in the states. And I would love to think that there's an opportunity to do something similar. Here's a crazy idea: AI for the IRS. Imagine if talent graduating from Stanford and Berkeley and Washington and MIT, people with high skills, went and worked at the IRS for a couple of years in order to automate the auditing processes of the IRS. I'm no expert on this, but it seems like relatively low-hanging fruit. It doesn't require extraordinary technical skills, it's not signing up for a lifetime of work at the IRS, and it would have a huge civic benefit. This is the sort of thing where, instead of technical talent seeing government as the enemy or the problem because it regulates and it's slow and inefficient, you imagine having a chapter of your career, even a brief one, devoted to some form of civic or public service, a la Teach for America. We have Code for America now, too. But imagine this on steroids. I think that would be exciting.
Steven Parton [00:38:36] I mean, I love that. What do you think the resistance is to that? Is it because the government doesn't pay as well? Or is it, honestly, because maybe if you go to Stanford and you're very liberal, you don't have the mindset for something so orderly, or maybe seemingly conservative? You know what I mean? I feel like the liberal-minded students at Stanford might feel very, what's the word, stifled, I guess. Just trapped by the system. But it does seem like, why is there such a dearth of that kind of integration between tech and government? It seems like the most obvious.
Rob Reich [00:39:17] Right. I mean, partly the bet I'd make is that the Stanford undergraduate hasn't been presented with role models, through broader social discourse or at Stanford, who illuminate that such a career path is even possible. I mean, Teach for America was brought about because almost no one from a selective university was deciding at 21 years old, "I'd like to go become a public school teacher." It was an extremely rare thing. And what Wendy Kopp did, which was extraordinary, imagine this: she created this program in the early 1990s and said Teach for America will be successful when we receive as many applications as the people who want to work at Goldman Sachs or McKinsey, and they choose to do Teach for America over those things. And she actually accomplished that five or six years into Teach for America's existence, even though the pay of the day, when I went to Houston, Texas, was, I think, $19,000 a year to be a sixth-grade teacher. Part of what she did was provide a kind of civic call, with some role models, about how you could think about the pathway of your career. And right now the role models that students have at Stanford are Mark Zuckerberg or Evan Spiegel, the conventional icons of the day who are tech heroes. And that's fine. I'm not looking for people to say, "I don't want to go into the marketplace or into a startup company." I just want there to be an alternative pantheon of heroes. The book System Error starts with a contrast between a recent graduate who creates a startup company that automates the process of getting out of paying parking tickets, and the story of Aaron Swartz, who self-consciously, at an early age, imagined that the whole point of getting technical skills was to have a civic or political effect with them.
And Aaron Swartz was part of the team behind the Creative Commons licensing program, was among the main organizers of the net neutrality debates, and deployed his extraordinary technical skills for the sake of civic impact. I want your 19-year-old student to think Aaron Swartz is as interesting and viable a pathway as Evan Spiegel. Giving them more and more examples of that, and having those people celebrated more widely in the culture, I think would be great.
Steven Parton [00:42:07] Yeah, absolutely. Do you think that we can use technologies like blockchain, or are you looking into things like DAOs and whatnot, as viable paths forward to some of the things we're talking about here?
Rob Reich [00:42:20] I mean, I think that's the new frontier in certain respects. The class that was the main inspiration for the book, which I'm teaching again with my colleagues Jeremy and Mehran in January, we're just now updating to include a whole bunch of material on Web3, blockchain, crypto, DAOs. Decentralized technical architectures is the umbrella we're using for it. And I do think there is an opportunity there. In certain respects, what I'd say is we're at a kind of constitutional moment for blockchain, for these decentralized architectures. The very technical-slash-governance standards for this alternative architecture are being designed and developed and debated now. And to my mind, we need a kind of Federalist Papers approach to this. We need people to think this is not just a set of technical questions where you can make a buck by selling NFTs, but rather, if this really is an alternative architecture that's far more transparent and privacy-protecting and allows for a whole bunch of different governance arrangements, we should be debating the secondary effects of what our governance designs and decisions imply. That's the kind of angle or frame I want to bring to that question. I'll put it crudely: if we get a world in which blockchain and these kinds of decentralized ledgers are the architecture for a new type of experience online, and the only fruit we get from it is private property rights in our data and in digital tokens like NFTs, if what we get is Disney-branded avatars for the metaverse that I can sell to you and you can sell to somebody else, that will just be such a disappointment. If the commercial incentives for profit-making turn the blockchain into avatars and NFTs, that's just artificial scarcity in a world which doesn't require it.
The whole promise of the digital revolution was abundance, superabundance. And there's a danger that blockchain and everything else turns into artificial scarcity for the sake of making a cheap buck.
Steven Parton [00:44:57] Yeah. Do you think that these potential changes we might implement are things that are going to just happen organically? Or do you see a forced response, I guess you could say, being required to solve these issues? I'm thinking specifically of automation here. We might run into an issue where a conversation around something like basic income, or something like a cryptocurrency being used to handle this, might be required to deal with the fact that people just don't have as many jobs. Do you think these changes might not come along until they're forced? Is it going to be something organic? Or maybe you would just like to address the idea of automation.
Rob Reich [00:45:43] Yeah, well, we already see that the idea of basic income is gaining some steam. And there's, I think, a really important and interesting set of social experiments being run to try out different basic income arrangements at small scale, with the idea of learning, in a social-scientific way, which designs are better rather than worse, and maybe where they could then be scaled up and used more widely. And I'd add, that's garden-variety democratic experimentation at work. Far better that we do it that way and learn from these widely distributed experiments than think some technocrat could arrive on the scene, offer us the uniquely correct solution on UBI, and just enact it. That just seems like a recipe for disaster. I also think, at the same time, that we're going to face a set of choices about how to think about ownership of the automation itself, because it's going to further consolidate and accelerate the widening wealth gap, because the value of labor will become even lower in a world in which machines much more frequently carry out the labor. So this raises a broader set of questions at the level of thinking about the relative value of capital versus labor. I think it's not accidental that we're seeing a small renaissance in unionization and the labor movement, in part because we've had 30 years of almost exclusive focus on the capital side of the equation rather than the labor side, and automation is now bringing that neglect into focus. So UBI is just one manifestation of that. I think we'll also see a variety of other things, including union movements and various ways in which labor power will reassert itself, rather than thinking that there's a legislative solution through UBI alone.
Steven Parton [00:47:58] Yeah, it seems like what I'm hearing as a thread through a lot of this is that there needs to be almost a cultural shift, not just in priorities, but like what you were talking about with China, really: we create something that's so attractive and works so well that they decide to incorporate it. It feels like you're saying we need to do the same thing. We need to make celebrities, or paths to fame or renown, or make ideas like using blockchain attractive from a perspective of changing the world rather than just an NFT money-making perspective. It feels like a lot of what you're saying is, to reboot the system, to borrow from your book, we need to really focus on these cultural shifts and make them so attractive that they start to draw attention away from efficiency and towards values.
Rob Reich [00:48:52] That's right. And for what it's worth, I don't think this is fantastical or, in certain respects, even that difficult. It's a kind of frame shift for people, but one in which the arguments I want to make on behalf of it are quite persuasive and powerful. For example, what I'll often tell a 19-year-old is: you can go work in a startup company with your technical skills and maybe have a chance of making an extraordinary contribution to a product that could be used by millions or billions of people, and let's be charitable and think with all kinds of social benefits, not just money-making. But it's a low probability that the startup you join is going to yield that outcome. Or you could bring your technical talent to Sacramento and work on behalf of the fifth-largest economy in the world, in a place with a dearth of technical talent and understanding, and maybe make a contribution there. If you were probabilistically judging your likelihood of impact, which place seems more likely? I don't think there's an obvious answer to that question, but so few people try the Sacramento route that it seems, if you begin to build a pipeline there, the opportunity for impact is potentially huge. So again, I think about this at the level of simple sociology of the campus. How does recruitment work for startup companies and big tech? There's a well-oiled machine here in which the big companies get privileged access to the best students early on, and Sacramento never shows up on campus to recruit anybody. Those are, in certain respects, easily solved problems. That's not rocket science. And that's part of the small ways I'm trying to intervene on campus, and it's not just me; there's a variety of other people doing this too. One idea is public interest technology. Is this a phrase that you've heard before?
Steven Parton [00:50:58] I don't think so, not really.
Rob Reich [00:50:59] So you've heard the phrase public interest law. Back in the 1960s and seventies, there was an attempt to change the way legal education works, because when you got a JD, you basically had two career paths: you could go into a corporate law firm, familiar enough, or you could go work in the Department of Justice or the DA's office, in a public agency. And there was a decision to try to create public interest law, which meant clinics within law schools. So as you're getting your JD, you work in the East Palo Alto Community Law Clinic, you work in the Environmental Law Clinic, you work in the Civil Rights Clinic, and you're taking cases on behalf of the poor or the marginalized or the disadvantaged, you're arguing cases in court, you're writing briefs. And then you get a summer internship and a professional pathway as a civil rights lawyer in a public interest law firm or a civil rights organization. Now imagine if schools of engineering had engineering clinics, so that you could bring your technical skills, for a summer internship, to the East Palo Alto City Council, and then there were ways in which you could devote your talent, for some period of your professional life, to a public agency or a nonprofit organization. Building into the curricular and pedagogical infrastructure of a technical education a different set of pathways to diversify the talent pipeline: it has been done before in other places, and it is within our grasp to do here.
Steven Parton [00:52:37] Yeah. It doesn't even sound like there are that many hurdles. There just aren't open doors.
Rob Reich [00:52:41] Right. It's like a failure of imagination, and then the money to power the experiments, which will get refined along the way. And yeah, I don't think we've really tried, which is why I have some optimism at the end of the day about this. Like I say, the main message of the book is that a window of opportunity is opening for the first time, and we have a ten or twenty or thirty-year run right now at a much different set of social arrangements around the power of technology. I don't want to diminish the power of technology. I want to harness it for its great benefits and find ways in which it can be delivered to other things than startup companies and big tech.
Steven Parton [00:53:24] Yeah, I mean, that's one of the more optimistic takes I've heard in a long time. Just open some doors. That's inspiring. Well, as we wrap up, I know we're coming up close on time, but we haven't said too much specifically about rebooting the system. So to finish, maybe you have some closing thoughts, maybe about some solutions, what the average person can do, maybe about other ways that you see us steering this insane momentum that's coming down the line.
Rob Reich [00:53:54] Sure. All right, I'll give you three quick responses on that. Number one is just a version of where we came to, namely that there's a window of opportunity opening right now. We're leaving behind an era of leaving it to the technologists alone to fix the problems: techno-utopianism in the early days and techno-dystopianism in the past five years. Now I think we're going to have an array of different countervailing forces and opportunities for people to think about the great digital revolution and its extraordinary promises, and how to ensure it goes in a socially positive, democracy-supporting direction. There's no guarantee this happens, but that's the window that's just opening. As a consequence, number two: I think there is a rising appetite and interest in ordinary forms of policymaking and regulation. So, Lina Khan going off to the FTC and leading some antitrust efforts, and the kinds of discussions we see about privacy legislation in the United States. Once we have privacy legislation in five or ten or fifteen different states, California, Florida, Illinois, it will be in the interest of big tech companies to have federal privacy legislation rather than comply with fifty different privacy regimes across the fifty states. So I think the incentives are moving slowly in the right direction there. Similarly with automation: we already mentioned universal basic income and other things like a robot tax. There are other garden-variety tools whose power economists have always known. If you think the ad tech business model is part of the problem, then create a graduated digital tax on advertising, a tax on digital advertising revenue, in order to provide an ordinary incentive structure for shifting to different types of business models. I think that will be tried in various ways.
So: regulatory experiments and efforts, in certain respects pioneered by, or at least led by, the European Union. And third, which I think is where universities are particularly well situated, we need to accelerate the development of what we call in the book an ethic of responsibility among technologists. Or, to put it slightly differently, a set of professional norms that coordinate the responsible practice of being an AI researcher, or a developer or programmer in a company. It's not news to say that any big tech company has an AI ethics team and a framework somewhere on its website, but individualized frameworks don't amount to anything. What we need is a set of communal standards or community norms. What I often point to is that computer science is a young discipline, only created in the fifties and sixties, and, as we discussed earlier, it only came to real social power in the past 30 or 40 years, which is not a lot of time to develop a robust set of professional norms for what counts as the responsible practice of being a computer scientist. If you look to other fields, the law, biomedical research, there are indeed powerful professional norms that fall short of legislation and policy but that actually do coordinate the behavior of professionals. The example we give in the book, and it's relevant on campus, is that the other big scientific discovery that will determine the shape of the 21st century is CRISPR and genetic engineering. Jennifer Doudna, a professor up at Berkeley, was one of the co-discoverers of this extraordinary gene-editing technology. And, in her own telling, a few months in she woke up from a nightmare in which Hitler had appeared and said, basically, "Professor Doudna, I would really like access to this gene-editing technique."
And after this nightmare, she organized other high-status biomedical researchers to install a norm: a moratorium on using CRISPR on human embryos, or on humans, until people could gain a greater understanding of this multipurpose, extraordinarily powerful tool. And when a Chinese biomedical researcher did use CRISPR on humans, he was disinvited from professional conferences, he couldn't publish anywhere, and ultimately China actually put him in prison. I often ask people: can you think of any AI researcher who has done something that's not against the law, but that was against the norms of what counted as the responsible practice of AI science or research, and suffered some reputational cost for it? We're in the early days there as well. So stimulating that set of professional norms in AI is also just part of the developmental trajectory of a discipline or field that's as young as computer science. And high-status people are going to be the ones who lead the way on this, and there happen to be a bunch of them around Stanford and Silicon Valley. So that's part of my own, I don't want to say full-blown optimism, but there is a great opportunity there that is also some low-hanging fruit.
Steven Parton [00:59:35] Well, this was a surprisingly hopeful conversation, which I really appreciated and hadn't expected. Before I let you go, I want to give you a chance if there's anything you want to tell people about. Obviously we're going to promote the book, but maybe if you have any events, studies, or anything at all.
Rob Reich [00:59:51] The paperback comes out in September. And for this class we're teaching, from which the book was inspired, there's an opportunity for people who work at a tech company to enroll in an online version of that class starting in January. There's a website you can point people to: systemerrorbook.com will have the information.
Steven Parton [01:00:15] Wonderful. We'll put that in the show notes so everyone can find it. Rob, thank you so much for the conversation, man.