
How Humans and Machines Co-Evolve

June 5, 2023
ep
103
with
Edward A. Lee

description

This week our guest is Distinguished Professor in Electrical Engineering and Computer Sciences at UC Berkeley, Edward A. Lee, who has written extensively about the relationship between humans and technology in books such as Plato and the Nerd and The Coevolution.

In this episode, Edward lays out his argument against the status quo of "digital creationism," which holds that humans are the gods shaping technology, and proposes an alternative narrative where humans and technology are symbiotic entities navigating a very Darwinian relationship. This takes us on a tour of the many different facets of this relationship, including the pros and cons, the philosophical implications, the regulatory ramifications, and much more.

Find out more about Edward's work at his Berkeley website, or follow him at twitter.com/LeeEdwardA


**

Learn more about Singularity: su.org

Host: Steven Parton - LinkedIn / Twitter

Music by: Amine el Filali

transcript

The following transcription was created automatically. Please be aware that there may be spelling or grammatical errors.

Edward Lee [00:00:01] We're much less in control than we realize. And I think that unless we accept that, we're going to continue to have difficulty managing the process, regulating the process, and making sure that technology develops in ways that are beneficial to humans.

Steven Parton [00:00:33] Hello, everyone. My name is Steven Parton and you're listening to the Feedback Loop by Singularity. This week our guest is Distinguished Professor in Electrical Engineering and Computer Sciences at UC Berkeley, Edward A. Lee, who has written extensively about the relationship between humans and technology in his books Plato and the Nerd and The Coevolution. In this episode, Edward lays out his argument against the status quo of digital creationism, which states that humans are the gods shaping technology, and instead proposes an alternative narrative where humans and technology are symbiotic entities navigating a very Darwinian relationship. This takes us on a tour of the many facets of this coevolution, including the pros and cons, the philosophical implications, the impacts on the regulatory landscape, and much, much more. So without further ado, everyone, please welcome to the Feedback Loop, Edward Lee. So before we get into the perspective that you're supporting, and that I think will be the bulk of this conversation, I thought it might be useful to actually start with the perspective that I would say you're somewhat countering, which feels like that of digital creationism. Could you explain what that concept of digital creationism is and why it's potentially a detrimental framing for our relationship with technology?

Edward Lee [00:02:05] Sure. So a little bit of background might be helpful. I'm an engineer. I've been an engineer for my entire career, working on embedded software mostly, for about 40 years. And for the bulk of my career, I lived under this illusion that everything that I created as part of my engineering activities was my own product, that it was something that came out of my own cognitive mind. And I have come to realize more recently that everything that I've created is actually almost completely defined by the cultural context within which I was creating it. If I write a program in, you know, some programming language, the programming language and the tools around that programming language profoundly affect the outcome of my programming exercise. And the fact is that every programming exercise involves pulling in pieces of code from a lot of different sources and putting them together. And so I came to realize that, in fact, what I do as an engineer is more like acting as an agent of mutation in an evolutionary process, and that the product of my activities survives or doesn't survive for largely evolutionary reasons. And I think the problem with the view that I held before, which is that, you know, the outcome of what an engineer does is top-down intelligent design, the problem with that view, I think, is that it really distorts the idea that we as humans actually control the development of technology. It gives us an illusion, a false sense of confidence, that we're actually in control of the direction in which things are going. And I think that we're much less in control than we realize. And I think that unless we accept that, we're going to continue to have difficulty managing the process, regulating the process, and making sure that technology develops in ways that are beneficial to humans.

Steven Parton [00:04:47] Yeah. And if we are out of control, or let's say not as in control as we like to think, what does that relationship dynamic look like? You know, you talk about Darwinian coevolution between human and machine. What are some of the principal dynamics that, you know, we each partake in or bring to the table in that relationship?

Edward Lee [00:05:10] I think the catchphrase that I would use is that we are agents of mutation. And I think, you know, if we look at what has happened in the last year with generative AI, for example, the emergence of the large language models, which I think is really quite a dramatic event, in many ways it's quite striking. I know a lot of people who are top experts in the field of AI. I don't know anybody who isn't astonished by what has happened in the last year. Even the top experts are surprised to some degree. You know, the top experts in AI have the advantage that they are not surprised to be surprised. They kind of expected unexpected outcomes, but this was very much an unexpected outcome. And it's very hard to predict where it's going from here. And so I think the simplistic notion that, you know, a bunch of smart people at OpenAI figured out how to make an intelligent machine, and that it was the result of intelligent design decisions by humans, is just a misunderstanding of the process. I mean, there are certainly intelligent humans that were involved in this. But the outcome wasn't anticipated. It wasn't designed in. It's not top-down intelligent design. And so I think, if we're going to figure out, for example, how to regulate AI, we have to recognize that we can't just sort of try to pin the blame on the individuals involved in the process, because I don't think that's going to be an effective form of regulation.

Steven Parton [00:07:21] Yeah, well, aside from the surprise that I think many of us felt, and I'm sure you did as well, were there other aspects about the large language models' mainstream adoption and the power that they kind of brought to the table that might have changed any perspectives that you had? Or, you know, was there something specifically, I guess, that surprised you or impressed you with what we've seen?

Edward Lee [00:07:48] Yeah, there's one aspect that to me was really quite striking, which was the ability that the machines have to do reasoning, and particularly mathematical reasoning: to reason about numbers, to reason about equations, to be able to have an intelligent discussion about some mathematical result, to be able to solve mathematical problems. In some ways, you know, we think of computers as being very good at mathematics, right? They do arithmetic certainly extremely well. The amazing thing about the large language models is that they're not using the machine's ability to do arithmetic. So, you know, two years ago when we had GPT-2, you could ask questions about arithmetic and it would give you a confident answer that was often wrong. I would characterize it as being roughly the kind of answer you might expect from a four-year-old.

Edward Lee [00:08:59] And then, you know, GPT-3 and 3.5, which are the basis for ChatGPT, give answers that are sometimes wrong, but they're really much better. I would characterize the mistakes that they make as roughly equivalent to those of a smart high school student.

Edward Lee [00:09:17] GPT-4 also makes mistakes, but the mistakes that it makes are kind of like mistakes I might see from a really, really bright Berkeley graduate student. And that's progress in roughly, you know, one to two years, which is really quite astonishing. And the fact that this reasoning ability, the ability to reason about mathematical problems, emerges from this prediction machine that is the large language model, and that it's not making any direct use of the arithmetic capabilities of the machine that it's running on, that was a big surprise to me.

Edward Lee [00:10:00] And to me, it tells me a lot about, possibly, how the human brain works. We may have invented a machine here that can give us tremendous insights into how our own reasoning ability has emerged from the mechanics of the brain. And so that aspect of it was really a big surprise.

Steven Parton [00:10:27] Well, speaking of the benefit that we gain as well, you have this line in the blurb of your book Plato and the Nerd that I love, which says that complementarity and symbiosis are more likely than confrontation and annihilation. Do you feel like this is a really good sign of that symbiotic relationship that we're forming with our technology, that we're kind of finding this potentially mutually beneficial relationship where we, you know, act as maybe a selection process through our prompts, and in return we get this benefit of increased knowledge about ourselves as well? Does this feel like a good step in that direction of symbiosis?

Edward Lee [00:11:12] Possibly. I mean, certainly I hope so. I think that, you know, there are a lot of intellectuals who have been sort of busy in the media these days trying to find reasons for dismissing what we're seeing, and, you know, saying that machines don't truly understand like we humans do, as if we understood how humans understand, which I don't think anyone does. And, you know, they say, well, they're just stochastic parrots, they're just plagiarizing the content of the web. I think anyone who's worked with ChatGPT can see otherwise. I mean, yes, if you ask it a very simple question, it'll often give you something back that you could find almost verbatim in Wikipedia. But ask it anything nontrivial, and you're not going to find the response out there on the web. So it really isn't plagiarizing. And so I think that, you know, these kinds of dismissals are a defensive mechanism in some ways, a denial mechanism. But I think that, you know, technology has been for thousands of years an intellectual prosthetic for humans. When the Sumerians invented a writing system, and when, you know, scholars finally decoded some of the tablets that had been found in Sumerian writing, they were profoundly disappointed, because, you know, they expected to find wise philosophical thoughts or stories or something like that. And mostly what they found was bookkeeping. These tablets were records of things that they were having to keep track of. And so writing as a technology is a prosthetic. It complements human abilities and gives us mechanisms by which we can create societies that involve more than a few dozen people, which is very hard to do if you don't have a writing system. And, you know, we've seen other big transformations in intellectual prosthetics. You know, the printing press is also something that has had a tremendous impact on the cognitive mind of humans. I think the smartphone revolution has had a profound effect on the cognitive function of humans and society. While some effects are beneficial, some effects are not, right? And I think that large language models are almost certainly the next step in this. And I think we need to learn to use them as a cognitive prosthetic, to be able to do what we do better. And I don't think that's necessarily going to be that easy to do. We're going to trip and fall, and there are going to be mess-ups, and there are going to be abuses. I think they're inevitable. But I'm hoping that we can find a way to use this technology to beneficial ends.

Steven Parton [00:14:34] Yeah. I mean, you touched there at the end on the question I was wanting to get into, which is: do you think that symbiotic relationship, that historical arc that you're laying out there, is kind of a given, because it is so evolutionarily advantageous, at least in the long term? You know, if there are local valleys, it's going to inevitably go towards a peak. Or is this something that we really have to fight for, if we're going to rein in this symbiotic relationship, if we're going to harness it for good?

Edward Lee [00:15:07] I think we do have a lot of work to do. And I think that the risks are enormous. I don't feel like it's an existential threat in the usual sense that, you know, Elon Musk or Vladimir Putin, you know, like to say. I don't really buy into those kinds of doomsday scenarios. But I think that there is a real potential for the technology to be hugely transformative, and, you know, I feel like the 8 billion people on this planet are really in a pretty precarious situation, right? We're in a very precarious situation with climate change. And, you know, when you have 8 billion people vying for resources within this limited scope, if something goes wrong, the results could be just monstrously disastrous. My guess is that, you know, it's more the human agents that we should be afraid of than the AI agents. But, you know, any huge transformation in a culture at this scale, in this state of precariousness, is very worrisome. And so I think we really need to put our effort into trying to deeply understand what's happening, and try to figure out ways to steer it in the right direction.

Steven Parton [00:16:53] Yeah. Do you think, in that sense, that AI is going to help eradicate, or at least inhibit, the more irrational and capricious behaviors that we engage in, and make us ultimately better actors in the world? Or, you know, given that the AI right now, at least these large language models, are pulling data from the human world and basically learning to model our mistakes and our failings, will they end up repeating those failings and maybe even empowering them? How do you reconcile that dynamic?

Edward Lee [00:17:33] I think that both of those kinds of scenarios are very real, and they have actually shown up. You know, when Microsoft first released the chatbot known as Tay a few years ago, malicious players on the Twittersphere were able to train Tay very quickly to become a racist bigot. And, you know, the way these machines work, I think, has the potential to teach us a lot about humans, right? I mean, the same Twittersphere teaches humans to be racist bigots in very much the same way. And so, you know, it may be that by seeing how these machines work, we could understand a little bit better how humans work as well in these kinds of contexts. On the other hand, in the other direction, I was astonished when I read an article published about a week ago, I think, that described a study of using chatbots to provide medical advice. It took queries from, you know, these services where you email your doctor and ask some questions, and they set up a controlled trial where they had doctors answering questions and they had ChatGPT answering questions from patients. And then they had doctors evaluating the answers, and the doctors doing the evaluation didn't know which answers were coming from ChatGPT and which ones were coming from real human doctors.

Edward Lee [00:19:28] And ChatGPT outscored the human doctors in virtually every category: in accuracy and, most astonishingly, in empathy. The responses from ChatGPT, the doctors evaluated them as being more empathetic than the answers from human doctors. And that to me says, well, okay, with, you know, fine-tuning of the models, it can be possible to get them to respond in a certain way that can potentially mediate some of the nastier sides of human instinct. And, you know, so imagine, I mean, I would personally appreciate an AI, as long as it preserved my privacy, which might be challenging. But an AI that, you know, before I hit the send button on the email, or maybe when I hit the send button, gives me a chance to rethink, right? It could say, you know, maybe based on your previous conversations with this person, this is a little harsh, or something like that. Or, you know, there's a possibility that this email that you're about to send could be taken the wrong way, or something like that. I mean, I think that would be a fantastic use of these AIs. It could really help interpersonal relationships between humans to have such a thing. Because, I mean, I think that we all know that, you know, communication by email is much less effective than speaking to a person directly, and that a lot of things happen in email and in chat rooms that really wouldn't happen, the same people would not behave the same way, if they were standing face to face and talking to each other. And that, you know, suggests to me that there's a mismatch between that technology and the way that humans naturally interact. And when we have a mismatch in technology, one of the possible approaches is to improve the technology and make it a better match. And I think there is a real possibility for that here.

Steven Parton [00:21:47] You spoke there about the model being used in relationship to the doctor, and how accurate and even empathetic it was, and about the AI interjecting in the emailing process. And I think a lot of people right now fear that what you presented there is just a hair away from the human actually being replaced or unnecessary. Right? Why not just have the AI write the email? Why not just let the AI be the doctor? So in essence, what do you say to those people who feel like these symbiotic dynamics that you're alluding to here, that would be beneficial, are kind of the first step in phasing out the human?

Edward Lee [00:22:32] Yeah, I actually wrote about a scenario like that in my Coevolution book, where, you know, I talked about using these agents that are based on the technology that Google calls Duplex, where they can, you know, take a relatively small sample of your voice and then create a machine that can replicate your voice, sound just like you, and speak for you. And, you know, there are ways that you could use this to interact with people, and ways that could really go quite awry. You could find, you know, your Duplex talking to someone else's Duplex while you just sit home alone and binge watch something on TV. So there is real risk of that. I mean, you know, people joke about how you can ask ChatGPT to take, you know, a real quick, curt statement and expand it into a long email, and then at the receiving end, they could take the long email and ask ChatGPT to give them a quick summary of it. And I think these are, you know, phenomena that we have to watch out for. There might be beneficial uses; there might be risky uses. But yes, there is a risk of a kind of runaway phenomenon here, where the humans participate less in the process. I think that risk is real.

Steven Parton [00:24:17] And this is kind of a transhumanist phrasing here, but do you think it is part of our skin-bag bias to hold on to kind of that messy, nuanced version of us that might not say the right things in the email, and that maybe it's okay to have the AI summarize it? Like, do you have a moral stance or a thought on how far we can go in terms of what we concede to the machines?

Edward Lee [00:24:52] Well, I think that everything that we do should be oriented towards maximizing the value for humans, and I don't think we have any assurance that even humans will act that way. Okay? So I guess, you know, simplistically, to the extent that I'm willing to take a moral stance, I would say that actions that are clearly not beneficial to humans are things that I think we should discourage. But, yeah, in that sense, I'm being speciesist here. I mean, I talk in The Coevolution about whether we should be considering these machines to be living digital beings, being, you know, a new life form on the planet. And this is, you know, a concept that I originally heard from George Dyson, who's a historian of technology. And I think, you know, he's written quite eloquently about the point of view of the technology as being a new life form on the planet. And, you know, if you view it from that perspective, it could, you know, become something that has its own rights. But I'm willing to be speciesist and say, I think we should put our rights ahead of the rights of any other species. And that's just perhaps a reflection of my own speciesism.

Steven Parton [00:26:39] Yeah. I mean, I think that's pretty natural. Well, you spoke there about the living digital beings, and I believe at one point, and correct me if I'm wrong here, you even make an argument that something like Wikipedia could be seen as a living digital being. If I am correct on that, could you justify what you really mean by that? Why could you say that Wikipedia is a living digital being?

Edward Lee [00:27:05] Yeah, I think, you know, there's a lot of debate among philosophers and biologists and all kinds of people about what it means to be living. And there are disagreements about, you know, what constitutes life. Is a virus alive, for example? Well, you know, a virus is unable to reproduce on its own. It has to hijack the reproduction mechanism of another cell in order to replicate itself. And so, you know, is it a living thing, or is it just a chemical machine? And where do we draw the line between, you know, a chemical machine and a living thing? I think it's a very difficult line to draw. But look at the characteristics that we usually ascribe to living things: you know, they sit in an environment, they react to stimulus from the environment, and they grow over time. They reproduce, and as part of reproduction there's heredity, where, you know, the characteristics of the descendants are partially influenced by their ancestors. There's homeostasis, you know, internal conditions that are maintained steady in the presence of environmental changes. There's metabolism. You know, these are all things that we think of as being associated with being living, and every one of these things is also a property of Wikipedia. You know, Wikipedia has been reacting to its environment more or less continuously since, I think, around 2000, when it started up running on a single server. And that server no longer exists, just like, you know, many of the cells in our body that we were born with are no longer there. And it's grown considerably, and it continues to react to its environment. It does have homeostasis, right? It maintains internal conditions, things like the temperature regulation in the data centers where the machines are running. There's reproduction and heredity: there are a lot of wikis out there that are direct descendants of Wikipedia, and in fact they've acquired a lot of the same code from Wikipedia and adapted it, mutated it further into other forms. So there's heredity as well. Metabolism is a little weirder, right? Because what does Wikipedia eat? Well, it eats electricity. And, you know, today most of that electricity is generated by burning fossil fuels. So, you know, I guess at a level of indirection, it eats natural gas. But that's perhaps a bit of a stretch to think of it that way. The fact is that, you know, if you define life in such a way that you require it to be biological, then these things are clearly not living. But if you define life in terms of these processes that the biological machines happen to have, then the analogies become an awful lot stronger, and it becomes much more defensible to think of them as living things.

Steven Parton [00:30:40] Yeah. If we leave behind the biological, though, and we shift to the computational: I believe you're not super fond of viewing everything in the world as computational, correct?

Edward Lee [00:30:52] Yes, that's right. I think that one of the kind of more nuanced arguments that I make in the Coevolution book, and previously in Plato and the Nerd, is that there's a prevailing view among computer scientists about the universality of computation, that everything can ultimately be modeled as a computation. And I don't think that's true. I give rather extensive arguments about why that is probably not the case. I also give arguments that the hypothesis is actually not testable by experiment, and so we can't really consider it to be a scientific hypothesis. For that, we would need to construct experiments that try to support or refute it, and I have argued that that is actually not possible. And when you have a hypothesis like that, one that is untestable, ultimately it becomes a question of faith rather than a question about whether it's a reality. And I think that the prevailing view among many computer scientists, and by the way many physicists as well, that everything is ultimately computation is really better viewed as a faith than a scientific hypothesis. And I personally feel like it's a rather inefficient way to think of the world. And so, as models go, I don't find it particularly effective.

Steven Parton [00:32:41] Do you think we're just viewing it as computational because that's the latest technological advance? You know, we typically think in the metaphor of our latest technology. Are you just holding the door open, I guess, for the next metaphor that we'll use? Or do you actually have an alternative to the computational narrative, something that you think we can grasp onto now?

Edward Lee [00:33:06] Well, I think there's potential for a lot of confusion here about terminology. In some ways, philosophically, I'm very much a mechanist. I believe that everything that happens in the physical world is a consequence of physical processes, and that includes the cognitive mind. And so in some sense you can view everything that happens in the physical world as machines. But equating machines with computation is where I differ with many of my colleagues. You know, there are many machines that I don't think are reasonably modeled in the Church-Turing view of computation. They're not algorithmic processes in the sense that they consist of a sequence of discrete steps where each step is some logical action. I don't think that there's any reason to believe that they're operating on discrete data; there's no reason that the physical world should be constraining itself to only, you know, discrete possibilities. There's no physical evidence for that. There are lots of physicists who are trying to prove that ultimately the world is discrete, and as I stated before, similar to the hypothesis about computation, I think that hypothesis is also untestable by experiment. And so ultimately, you know, it becomes a question of faith, and a question of how useful the models are. And my faith, I would say, is that I'm very hesitant to assume that the natural world has tied its hands behind its back and, for reasons that we can't possibly explain, has limited itself to a discrete universe. To me, that's just really far-fetched, that there would have been such a limitation, and I can't think of any reason why there would have been.

Steven Parton [00:35:31] Yeah. Let's take a step into the world of speculation, I guess, if you're comfortable with that, and run this idea through a bit of a basic thought experiment. What happens when we attempt to upload consciousness? Is that something that you think we can do? Is the machine, I guess, a medium that could host this, in your mind, eventually?

Edward Lee [00:35:57] So I talk quite a bit about this in the Coevolution book, and I critique quite a number of thought leaders who just sort of assume that this is in principle possible. And they're basing it on a mechanistic view of the world. The assumption is that since consciousness, we assume, arises from the physical processes of biological matter in the brain, there's an assumption that you could replicate that in a computer. But there are a number of problems, some of which are really deeply technical. And I talk about a classic result due to Claude Shannon, who in 1948 proved that if you have a communication medium, a way of getting information from point A to point B, and that communication medium is in any way imperfect, then the amount of information that can be conveyed is limited. There could be a great deal more information in the source than you can possibly convey over that imperfect medium, and every medium of communication is imperfect. In fact, I have worked on, you know, really pretty deeply technical arguments that show that noiseless measurements are physically impossible to make, for example. And so if you understand Claude Shannon's theory here, he was the creator of what is now known as information theory, then the only way that we could upload our consciousness is if it is in fact a digital computational process. And again, I have shown that that's an untestable hypothesis. And so, you know, if someone tries to sell you a machine, and they promise that, yes, it will kill you, but your consciousness will now exist in the machine, and they demonstrate it for you on some poor victim, you will still have no evidence that it in fact worked, even if the machine starts to talk. Like, you know, who is it? Johnny Depp in Transcendence.
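For readers who want the quantitative version of the limit Lee is invoking, the standard statement (the Shannon-Hartley form of the noisy-channel result; the formula itself isn't stated in the episode) is:

```latex
% Capacity C of a noisy channel with bandwidth B (in Hz) and
% signal-to-noise power ratio S/N (Shannon, 1948):
C = B \log_2\!\left(1 + \frac{S}{N}\right) \ \text{bits per second}
```

Any noise at all makes S/N finite, so only a finite number of bits per second can cross the channel; that is the sense in which an imperfect measurement process could never be guaranteed to capture everything in the source.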

Edward Lee [00:38:38] Right. Right. 

Edward Lee [00:38:41] Even if the machine shows all overt notions of having acquired your personality and so forth, there's no evidence that it acquired your identity. And so you may, in fact, have ceased to exist, and there's no way to determine whether that has been the case. It's an untestable hypothesis. So I would be very reluctant to buy that machine myself.

Steven Parton: If we take the humans out of the equation, then, and look strictly at maybe the advancing forms of AI, do you think they'll attain some level of qualia? I mean, if we give them this living digital being label and status, and we potentially even give them rights, you know, as we were talking about before, is this because they've attained something akin to a conscious experience that we need to be very mindful of?

Edward Lee: So here we're getting into the realm of speculation, but I am willing to predict that in a very short period of time, you know, maybe within a year or two, we're going to have machines that are indistinguishable from conscious beings. And I think that the key step, which is currently largely missing but is coming very quickly, is the ability that the machines have to act in the physical world. So one of the things I talk about in the Coevolution book: while I was researching for this book, I learned a lot about the concept of embodied cognition, which is a thesis in psychology that was, I think, first really best described by Esther Thelen, who argued that the conscious mind isn't just a process going on inside the brain, but rather is the interaction of that process with its environment, the environment including the body. And psychologists and neuroscientists have identified a number of mechanisms, really central to all animals, that are a key part of this. It's a feedback mechanism; the term that's used in neuroscience is efference copy. You can think of it very simply: when your brain tells your body to do something, like, you know, wave your hand next to your face, the fact that it's telling your muscles to do this feeds back into your sensory system, so that your eyes kind of learn to expect to see motion in your peripheral vision. So when you wave your own hand next to your face, you don't suddenly panic because you're seeing something in your peripheral vision; you expected to see it. This feedback mechanism has been identified even in the simplest animals. Probably one of the best-studied organisms on the planet is a tiny worm called C. elegans. It has fewer than a thousand cells, and approximately one third of the cells are neurons, so it's on the order of 300 neurons. And neuroscientists have mapped out these neurons in quite a bit of detail, so we know more about the structure of this nervous system than that of probably any other organism on the planet. It has this efference copy mechanism, and that gives it the ability to distinguish self from non-self. So if, you know, the worm curls up and its tail touches itself, it doesn't panic and start moving. But on the other hand, if some external event touches it in the same way at the same place, it does panic and start moving. And this ability to distinguish self from non-self, I think, is at the core of consciousness. As soon as the machines are given hands, not just eyes and ears, but an ability to act in the physical world, and to act in a way that then reflects back into a sensory system, they have the potential to acquire some form of embodied cognition. And to a limited degree, they have this already, in the sense that, you know, they're able to act in what we might call the noosphere, right there on the Internet. They can act by producing stimuli to humans, and then they sense the responses. So that's a very limited form of this kind of feedback. But with self-driving cars, with robots, they're going to be acquiring much more direct abilities to act in the physical world, and to act in much more physical ways. And given that feedback mechanism, which will be an inevitable part of these machines, there is a real possibility of them being able to develop some form of embodied cognition.
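To make the efference-copy mechanism concrete, here is a minimal sketch in Python. It is purely illustrative: the function names and the trivial forward model are invented for this example, not taken from Lee's book or from any neuroscience software.

```python
# Minimal sketch of an efference-copy feedback loop (illustrative only).

def motor_command_to_prediction(command):
    """Forward model: predict the sensory consequence of our own action.

    Here the model is trivial: commanding a touch at a body location
    predicts feeling a touch at that same location."""
    return {"touch_at": command["target"]}

def classify_stimulus(sensation, prediction):
    """Compare an incoming sensation with the efference-copy prediction.

    A match means the stimulus was self-generated (no alarm);
    a mismatch means it came from the environment (react)."""
    if prediction is not None and sensation == prediction:
        return "self"      # e.g. the worm's own tail touching its body
    return "non-self"      # e.g. an external poke: trigger escape behavior

# The worm curls and touches itself: a command was issued, so a
# prediction exists and the touch is classified as self-generated.
command = {"target": "mid-body"}
prediction = motor_command_to_prediction(command)
print(classify_stimulus({"touch_at": "mid-body"}, prediction))  # self

# The identical touch with no motor command behind it reads as external.
print(classify_stimulus({"touch_at": "mid-body"}, None))        # non-self
```

The point of the sketch is the feedback path: the copy of the motor command is what lets the system classify an otherwise identical touch as self-generated or external, which is the self versus non-self distinction Lee describes in C. elegans.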

Steven Parton [00:44:33] So I have a bit of a multi-layered response to that, as we are kind of bringing all of this together. I'm thinking of memetics, and the idea that we're going to have, you know, these digital living beings that are able to quickly create all kinds of informational memes throughout the Internet. With embodied cognition, AI might be able to actually move through the world. And then I have this thought in the back of my head of you saying, you know, we're not really top-down controlling these mechanisms; we're kind of nudging, or in a symbiotic relationship with, them. So that brings me, I guess, naturally to this thought of: how do we do this responsibly? How do we guide this process when there is so much power in these technologies and we're not really fully in control, but we need to somehow move forward in a reasonable way? How do we do that? I mean, is it policy? Is it culture? Kind of, what do you think?

Edward Lee [00:45:43] I think that's a very hard question. And, you know, I guess the only real answer I can give you is the answer from the perspective of a scientist, which is: we need to try to understand the processes. We need to understand them better. And we need to really hesitate when we find that our, you know, previous assumptions are leading us to wrong conclusions about where this is headed. So, you know, you see people with simplistic answers out there. You know, I teach in the College of Engineering at Berkeley, right? And so one of the simplistic answers is, well, we should just include ethics components in all of our engineering classes, and the presumption is that as long as every engineer behaves ethically, nothing bad will happen. I personally believe that assumption is really far-fetched. And, you know, we have seen so many technologies that have led us in completely unexpected directions. I mean, I certainly don't want to say that we shouldn't encourage our engineers to behave ethically and to think about ethics; to me, that almost goes without saying. But is that a solution by itself? I don't think so.

Steven Parton [00:47:22] Do you therefore favor maybe more strongly enforced regulation? Or do you think this is kind of a case where we need to let the ecosystem play itself out? Like, what approach do you think is the best to kind of help rein that in?

Edward Lee [00:47:41] I think that regulation has to be part of it. I mean, we were talking earlier about homeostasis in an organism; we need homeostasis in a society too, right? We need feedback mechanisms such that we maintain stable conditions for the society. And these feedback mechanisms, you know, if you let anything powerful run open loop, it's kind of like the difference between, you know, an atomic electric power plant and an atomic bomb.

Edward Lee [00:48:17] Yeah. 

Edward Lee [00:48:17] Basically the same underlying physical processes, right? But one of them has a tight regulatory feedback loop and the other one doesn't. The same thing could happen with this technology. If we fail to put in regulatory feedback loops, it could behave like an atomic bomb.
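Lee's open-loop versus closed-loop distinction can be shown with a toy model. The sketch below is an illustration of the control-theory idea only, with an invented gain and a simple proportional feedback term, not anything from the episode:

```python
# Toy illustration: the same self-amplifying process run open loop
# versus with a negative-feedback (regulatory) term.

def step(x, gain, feedback=None, setpoint=1.0):
    """Advance the state one time step.

    Without feedback, the state grows geometrically (the 'atomic bomb');
    with feedback, deviations from the setpoint are damped
    (the 'power plant' with its tight regulatory loop)."""
    growth = gain * x
    if feedback is not None:
        growth -= feedback * (x - setpoint)  # push back toward the setpoint
    return x + growth

x_open = x_closed = 0.1
for _ in range(20):
    x_open = step(x_open, gain=0.5)                    # open loop: runaway
    x_closed = step(x_closed, gain=0.5, feedback=1.2)  # closed loop: settles
print(f"after 20 steps: open loop = {x_open:.1f}, closed loop = {x_closed:.2f}")
```

With these numbers the unregulated state grows by more than three orders of magnitude while the regulated one settles near its setpoint. The dynamics are a cartoon, but the structural point is Lee's: it is the feedback loop, not the underlying process, that makes the system safe.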

Steven Parton [00:48:39] Yeah. Well, we're getting close to our time here, and I want to kind of leave with a real honoring of your perspective and your thoughts on this. So as we come to a close, can you just kind of tell us what we really gain by switching our perspective, or what perspective you want us to switch to? We obviously talked about, you know, stepping away from the top-down thought and embracing this more symbiotic relationship. Could you kind of summarize, or, I guess, point us in the direction that you think would be beneficial for us as a society to start thinking in, so that we can maybe more responsibly, more reasonably navigate this transformation?

Edward Lee [00:49:21] Well, one of the missions, I think, that I'm on: I live and work in a highly technical culture, right? I teach in an engineering school. I work with engineers all the time. And I feel like what is happening with technology is complex enough that we can't just focus on the technical problems. We need to, in fact, step up our game and think societally, learn from our colleagues in the humanities and social sciences, who I think are much better at understanding humans and human societies than we are as engineers. And so one of my missions is to get much more multidisciplinary in our thinking. Rather than just, you know, sort of introduce a naive ethics component in courses, I think we should have all of our engineers, you know, study history, study sociology, study economics. Right? There are perspectives about how our society functions that are not just engineering perspectives, and we need more kind of Renaissance people who can integrate that kind of thinking. Too often, I see a tremendous arrogance among my fellow computer scientists, who dismiss all of the other disciplines as Mickey Mouse disciplines. I mean, I've heard them use that term, and they're speaking from pure ignorance, right? They don't get that many of these other disciplines are dealing with problems that in some ways are very much harder than the technical problems that we deal with, where, you know, you can often find a nice, simple path to a solution. That's not always the case in societal problems. And so we really need to be using our best minds, and using them together, cooperating, to try to figure out how to manage this beast.

Steven Parton [00:51:39] Yeah. Lovely. Well, any closing thoughts, Edward? You know, obviously you can tell us about your books or anything you're working on, but I want to just give you a chance here if there's anything you'd like to promote or close with that I didn't highlight for you.

Edward Lee [00:51:55] Well, not really. I'm not very good at promoting things. In fact, I guess the one thing that I might say is that I just made an arrangement with MIT Press to make both of my books, Plato and the Nerd and The Coevolution, open access.

Steven Parton [00:52:11] Oh, wonderful. 

Edward Lee [00:52:12] So, yeah, you know, hopefully that'll make them more widely available. I feel like these two books have become much more relevant with the developments of the last couple of years than perhaps they were when they were published, three years ago and five years ago. And I feel like they give a coherent philosophy, but it's not a usual philosophy, I think. And, you know, for many people, particularly technical people, there are some things in these books that are going to be hard to swallow. The non-universality of computation, for example, is something that most computer scientists will resist horribly. The fact that the physical world may not be digital is something that most physicists these days seem to be very hostile to. They've really drunk the Kool-Aid of this digital physics idea, and I counter it. I think the world is more complex than that.

the future delivered to your inbox

Subscribe to stay ahead of the curve (and your friends) with new episodes and exclusive content from the Singularity Podcast Network.
