
Principles of the Singularity

November 21, 2022
ep 80 with David Wood

description

This week our guest is David Wood, a long-time futurist and renowned transhumanist thinker. David has authored 10 books on the subject of our technological future, including his recently published book Singularity Principles: Anticipating and Managing Cataclysmically Disruptive Technologies.

In addition to exploring some of the principles and ideas from David’s latest publication, this episode takes a wide but succinct tour of the singularity. This includes (but is certainly not limited to) the rise of artificial general intelligence, and whether we should merge with AI or if it will be a conscious entity separate from humans. We also discuss the variety of challenges that could push us towards a negative Singularity, as well as the many opportunities that could propel us toward an abundant and thriving future.


Find more of David's work at deltawisdom.com or follow him at twitter.com/dw2.

transcript

The following transcription was created automatically. Please be aware that there may be spelling or grammatical errors.

David Wood [00:00:01] A.I. will have better general knowledge than us, better common sense than us, better ability to adapt to surprising developments. And when that happens, we humans will no longer be the smartest species on the planet. We will be in second place to a new sort of species, an A.I., which therefore will by default dominate decisions by virtue of its superior intelligence.

Steven Parton [00:00:41] Hello everyone. My name is Steven Parton and you are listening to the Feedback Loop on Singularity Radio. This week my guest is David Wood, a long-time futurist and renowned transhumanist thinker. David has authored ten books on the subject of our technological future, including his recently published book, Singularity Principles: Anticipating and Managing Cataclysmically Disruptive Technologies. In addition to exploring some of the principles and ideas from David's latest publication, this episode takes a wide but succinct tour of the singularity. This includes, but is certainly not limited to, the rise of artificial general intelligence, and whether we should merge with A.I. or if it will become a conscious entity separate from humans. We also discuss the variety of challenges that could push us towards a negative singularity, as well as the many opportunities that could propel us toward an abundant and thriving future. Ultimately, David seeks to add clarity and understanding to the concept of the singularity, and I think that he does quite a good job of doing just that in this episode. So without further ado, please welcome to the Feedback Loop, David Wood. I feel like anyone who has ever really dug into the singularity or transhumanism inevitably sees your name somewhere along the way. You've written prolifically, you've hosted a ton of events, you have a lot of wonderful thoughts on the subject. So I would love to start with just hearing about what attracted you to this world. What was it about the singularity and about transhumanism that made you think that this was something you wanted to dedicate so much of your life to?

David Wood [00:02:29] Well, if we take the claims of the Singularity community seriously, this is going to change everything. If we take the claims of the transhumanist community seriously again, we're going to see enormous changes in the human condition. And in both cases, the communities generally point out the transition could be terrible or it could be wonderful. And the choice of whether we end up terrible or wonderful depends in significant measure on what we humans do, whether we pay enough attention or whether we get distracted, whether we understand things profoundly, or whether we are content with a superficial understanding, whether we act as individuals relatively ineffectively, or whether we're able to build a larger momentum that can impact the course of history. So if these claims are correct, then it's the most important thing to do. 

Steven Parton [00:03:26] And is that what you're trying to do with your latest book, Singularity Principles: bring some unifying theory and clear away some of the confusion around the concepts?

David Wood [00:03:37] Exactly. So, unfortunately, there is a lot of popular chit chat about the singularity. It's an exciting thing. It's a funny thing. Many people make claims which are attempts to change the public conversation, but they are sometimes too flimsy, there isn't sufficient backing, and therefore there is a reaction to that. There is a critical movement which says that talk of the singularity is deluded, it's egotistical, it's the rapture of the nerds, it's a distraction from real present issues, real pressing issues of the present day. So there is a reaction against the singularity and against transhumanism, because both these movements have got what I have termed a shadow, which is a group of people who may not actually be core members of either community, but they are associated in the public mind with them, and they will say things that raise alarm bells, like taking the timescales too simplistically, saying that such and such a thing will be happening in a certain timescale, or downplaying the risks. Although I've emphasized that the community as a whole regards the outcome as still up for grabs, and that there could be a dark age emerging rather than a sustainable superabundance, there are others in the community who only give lip service to the risks of things going wrong. They'll say, yeah, this could go wrong, but don't worry, we'll use this technology to make it better. And hence it accumulates in this shadow, which causes confusion. So I'm trying to highlight what I consider to be the best arguments, the best conceptualization of both transhumanism and the singularity, to try and get more people to say, actually, there are some really solid and important discussions here, and they should be part of it as well.

Steven Parton [00:05:33] Yeah. And as we attempt to navigate whether we're going to go towards a positive singularity or a negative singularity, how much influence do you think these people have, the people we might call bad actors, who are maybe viewing the singularity as a religion, or being too rigid about it, or coming up with fantastical ideas? How much of a threat do you think they are in terms of pushing us towards the negative singularity?

David Wood [00:06:03] Well, it's part of a rich interplay of many forces. If we look at what could stop us getting to the positive singularity, there are many things. It could be as simple as global warming accelerating faster than people had generally expected. It could be something like nuclear war, triggered in part because we've deployed artificial intelligence in an unwise way, with partial oversight over some hair-trigger mechanisms. It could be because there is a decline in respect for democratic norms, and instead of having checks and balances in our system, we give would-be autocrats, would-be strongmen, too much power. And then they corrupt, as has happened in the past: they corrupt the political processes so that the right decisions are not made. But it's also dependent upon the narratives that are prevalent inside society. So I want society to be animated by a powerful narrative of a profoundly better future within our grasp, in which we humans don't just come up to the level of the best of past generations, but could have a very significantly improved life in multiple dimensions: super longevity, super intelligence, super happiness, sustainable superabundance. So that's possible. But that narrative does get confused if people think they've heard it already, and they think that when they explored it before, it was flimsy, it was superficial, it was, let's say, groundless. So I want to win that battle, the battle of ideas. I want to win the battle of narratives, so that more people are comfortable saying, hey, I am a singularitarian, or I am a transhumanist, and they're comfortable giving the arguments in favor of these outcomes.

Steven Parton [00:07:56] Yeah. Among all of that noise around the narrative, could you just give us a clear definition of how you view the singularity? There's a lot of different ways people define it, but what is your specific definition? 

David Wood [00:08:10] In the first place, it's the emergence of artificial intelligence which exceeds our reasoning powers in every aspect. It's not just better than us in narrow domains, such as playing games of skill, or navigating between A and B on a map, or many other individual things in which A.I. already exceeds our abilities. It will have better general knowledge than us, better common sense than us, better ability to adapt to surprising developments. And when that happens, we humans will no longer be the smartest species on the planet. We will be in second place to a new sort of species, an A.I., which therefore will by default dominate decisions by virtue of its superior intelligence. So this emergence will, first of all, in a way make it impossible to predict what's going to happen next. And here I refer to the analysis by the science fiction writer and computer science and mathematics professor Vernor Vinge, who said that when you have superintelligent beings, they're likely to have ambitions, likely to have motivations, which we can hardly glimpse. It would be like dogs trying to understand what humans are doing in the House of Representatives or Congress. They can only have a vague idea of what's going on: hey, these super-dogs, or whatever dogs think of us humans as, are doing some get-together thing, and we'll just have to wait and see the outcome. They can't really conceive of what we are doing, or of human nature. And likewise, we can't fully anticipate, but we can say it's going to be a seismic transition in the history of humankind, more drastic than anything that's happened before. So people point to the Industrial Revolution as changing huge aspects of our lives. They point earlier in history to the emergence of fire or of the wheel. But in some ways, the emergence of artificial general intelligence will be more consequential for the future of humanity than any of these previous jumps.

Steven Parton [00:10:30] And so do you think that AGI, artificial general intelligence, is inevitable? Because it feels like there is a paradox in human motivations here, in that we are endlessly curious and refuse to turn away from anything that could give us more power. But simultaneously, we run into what I think is commonly called the Neanderthal dilemma, which is that we don't want to be replaced by something more powerful than us. So we have these competing motivations between not wanting to be destroyed and wanting to be infinitely curious. How do you think we'll navigate that?

David Wood [00:11:10] Well, that's the key question. Yeah, that is such an important question. And I like that term, the Neanderthal dilemma. I don't think I've come across that before. One complication is that there may be some people who say, well, this is too dangerous, let's abstain, let's prevent any progress being made. But first of all, there are many other people who will be more curious, who will be less worried about risks, who will think, hey, we're going to control it. So such a policy would need to be adopted across the whole world. And the second complication here is that it's not clear what improvements in A.I. will actually trigger the emergence of this artificial general intelligence. Many people are saying, well, we're just going to make our existing systems a bit faster, a bit more efficient, a bit more capable, a bit more robust against the cases which currently they get wrong. But it may be that some of these apparently relatively innocent improvements will push us to a whole new level, a bit like when you keep on increasing the temperature of water: something surprising happens at 100 degrees. It has a phase transition. So a qualitative change emerges from what was previously a quantitative change. So it might be that some relatively innocent improvements in A.I. turn out to push us into being in second place. Now, is it inevitable? Some people say, and I do argue, that there are many forces which are accelerating the rate of improvement in A.I. These include the fact that A.I. is so commercially important these days: most of the most valuable companies in the world owe a significant part of their market dominance to their mastery of A.I. In every field of industry, in almost every field of life, the companies that can master and apply innovations in A.I. are likely to overtake, and leave behind, their less A.I.-savvy competitors, as in the phrase famously uttered by Marc Andreessen.
Software is eating the world, which meant that whichever your field, unless you were expert in software, you would lose out. Well, it's now software written by A.I. that is eating the world, both in terms of which companies are commercially successful, but also in terms of whether we humans will be around or not. So there's a strong commercial pressure. There are also strong geopolitical pressures. China, understandably, does not want to be dominated in the world by Western-created software. Their best-ever go player was resoundingly beaten in 2017 by AlphaGo, which had previously beaten the famous Korean player. The Chinese thought their young go champion was even better, but he was beaten comprehensively in a closed-door match in early 2017. The Chinese leadership said, right, we have to ramp up our investment in A.I., and we must be sure that our A.I. is world-leading, even the best in the world, by 2030. Many other countries have in various ways had the same kind of shock and paid the same kind of attention to improving the rate of increase in A.I. So there are many powerful factors that are leading us in that direction, but it could still go wrong. We could still have a decline in respect for science, which we're seeing in many parts of the world. Instead of a genuine understanding of science, we are in many places being led astray by fake science, or what philosophers of science used to call pseudoscience: people who say scientific-sounding things, but actually don't have real respect for science. And in many parts of the world, tribal allegiance often seems more important now. When you hear somebody talking about a possible cure for COVID, you want to find out, did Donald Trump advocate this? If he did, well, we're not going to look at the science; we know for sure we're going to oppose it. Or, depending on your views on Donald Trump, you might say, well, if he supports it, it's bound to be true.
In other words, our scientific decisions are being superseded by political or tribal decisions. So that part of our society may fail, and more generally, our respect for openness in society may fail, because many of the great breakthroughs in science have come from mavericks. They have come from heretics, in a sense, people who had different views from the orthodoxy, and we have tolerated them and we have encouraged them, and that has led us forward. But in many parts of the world now, there is less tolerance for openness. There is instead more support for populism: let the big, powerful man dominate, and what he says is good science wins. And if you are backing something else, then you will lose your research funding.

Steven Parton [00:16:19] With all that being said, what do you think about the regulatory landscape? I mean, you could certainly have the case where one of these anti-science populists becomes the world leader, and suddenly any regulations they put in place are very ignorant of the actual development of A.I. But at the same time, you're not going to stop a country like China, as you said, which wants to make sure that it's dominating the A.I. landscape. So how do you think we approach the development of A.I. in a way that has any sort of caution or regulation attached to it?

David Wood [00:16:58] Well, getting the right regulations is pretty fundamental. I'm not in favor of saying no regulations. I am in favor of the view that enlightened government, enlightened regulatory systems, can make a difference. We can point to numerous examples. We can point to how the ozone hole, the hole in the ozone layer, was fixed by government interventions, governments saying we have to avoid using these CFCs. At first, the big chemical companies said, no, there's no alternative, and by the way, they're not doing damage, they're innocent, and why does it matter if a small number of people get skin cancer from ultraviolet rays coming through the weakened ozone layer? But the governments of the world, including George H.W. Bush in America, with support from Margaret Thatcher, who used to be a chemist before she became a lawyer, before she became prime minister in Britain, said, no, we are going to set some framework and regulations here. And the Montreal Protocol was signed, which had a big role to play. So sometimes these regulations can be put in place. Now, you've pointed out that countries may be disinclined to respect them if they think it's a matter of life or death for them. So that's why we also need to improve the general level of understanding, not only the understanding of the wonderful, successful outcomes that could take place, but of what realistically are the risks involved. That's why we need to move away from just talking about the universe being converted into paperclips. That was a useful thought experiment, but we need to talk in ways that are more meaningful to ordinary decision makers. And that has happened in the past with other threats. So I'll refer to a terrible risk that was in existence in the 1980s. I remember it well; I was at university at the time.
More and more people were worried that there was an accumulation of nuclear weapons, not just the huge intercontinental ballistic missiles, but also the shorter-range missiles that were being put into Europe: the SS-20s from the Soviet Union, and the cruise missiles which were being put in place by NATO. And it looked like there was more and more risk of accidental nuclear war. How did we pull back from that brink? In part thanks to a futurist, Carl Sagan, a brilliant scientist, showing that there was a risk from these nuclear weapons that had not been understood. It was well understood that if nuclear bombs were exploded, lots of people would die from the heat and radiation. But he produced models, based on the work that he and his colleagues had done on the atmosphere of Venus and the atmosphere of Mars, of the nuclear winter. People are still not entirely sure whether it's a completely correct model; some say it won't be a nuclear winter, it'll be a nuclear fall, a nuclear autumn, instead. But it's certainly possible that the accumulation of dust in the stratosphere from a certain number of these nuclear bombs going off would blot out the sun, would terminate photosynthesis, and would lead humanity to a much greater loss of life than military planners had calculated from a possible limited exchange. So Ronald Reagan's people got worried and Gorbachev's people got worried. They both heard of this, and it provided the impetus for them to say, well, let's look again: could we do something differently? And that's what did happen. In part it's due to the unique personal chemistry of Ronald Reagan and Mikhail Gorbachev, who were both very special people in their own right. But in part it was because of the explanation by Carl Sagan and colleagues that there was a risk here which they could understand. It wasn't some abstract, theoretical thing; it was scenarios that were plausible and engaging and horrifying.
So we must do the same with explaining the risks from artificial superintelligence. 

Steven Parton [00:21:13] I find myself thinking, as you're talking about this, and I know this is golden age thinking and it's irrational, but it feels like in some ways that was a bygone era where people could have more reasonable conversations. And it feels like as we move towards the singularity, you know, as technology speeds up and new technologies come online like smartphones, like social media, in some ways we're creating a society that's less capable of making those kinds of decisions. We talked about the anti-science stuff, but, you know, we also live in this post-truth world where people live inside their echo chambers and everything's very politicized. Our understanding of a consensus reality is diminishing. Do you feel like we're in a bit of a struggle here, as technology changes things so rapidly, to keep our heads clear? You know, it feels like a lot of people suffer from future shock, where they're just literally unable to cope with how fast things are changing, and that actually makes our decision making worse. Do you feel like that's a thing that's taking place?

David Wood [00:22:22] It is, yeah. In some ways, our decision-making capability is eroding and going backwards. But at the same time, we have a better understanding of many things than ever before. So I claim that we're living in both the best time ever and also the most dangerous time ever. By many metrics, we are in an age of abundance compared to the past. On the other hand, we can point to the number of people with mental distress, committing suicide, deaths of despair. We can point to lots of people who lose any hope for economic progress; they think that they're being passed by, that they're left behind. And, yeah, there are stresses on the environment. Some parts of our impact on the environment are less than before. There is fascinating analysis by Andrew McAfee from MIT in his recent book More From Less of how many things that we have today depend on fewer ingredients and involve fewer acres of farmland, for example, yet we're growing more farm crops, more farm produce. So some aspects of our impact on the environment are better, but there are a number of ways in which we are dangerously close to tipping points. So we need to be able to candidly assess both the good trends and the bad trends. And yes, the bad trend is that we are losing the ability to have grown-up discussions in which we are able to disagree without feeling violated, without feeling some existential dread that these other guys are somehow getting ahead of us and will leave us behind. Democracy depends on both sides signing up to the democratic rule that when you lose in an election, you hand over. It's never very nice to hand over, but you believe you'll have a chance to get back in, because there are sufficient checks and balances. But if you believe, on the other hand, that the voting system has been rigged, if you believe that there has been terrible gerrymandering, then maybe your faith in that democratic process erodes.
And when some people's faith erodes, there's a nasty negative cycle in which trust erodes further. So we've got to fight hard to bring it back. And that includes finding oases of agreement, oases of tranquility, just like Carl Sagan managed to find some point of agreement between Ronald Reagan and Mikhail Gorbachev, despite many parts of their worldviews being deeply opposed to each other. After all, Ronald Reagan is on record from the 1950s and 1960s as being a staunch critic of the deceitfulness of Marxism-Leninism. You can find his speeches online from the early 1960s, and in some ways they're masterful, but they are certainly deeply opposed to anything the Soviet Union might do. But that was turned around. And so let's carve out similar oases of tranquility, which requires emotional intelligence. It requires us to proceed not just by battering each other with facts, which often doesn't change our minds. We need a better understanding of what does change people's minds. So it's a rounder appreciation of psychology; it's a bigger appreciation of emotional intelligence. And I think A.I. could help us here, too. Just as when I type something into my word processor today, it will put a red squiggly underline if I've misspelt a word, a green squiggly underline if I've got the grammar wrong, and a blue squiggly underline if maybe I should use less formal or more formal text. Increasingly, it's going to give us advice along the lines of: well, what you've said is factually correct, but how about expressing it like this? We sometimes do this for each other. We often give advice: well, you may be justified in writing that angry email, but why don't you express it like this, and you can turn a potentially bad situation into a positive friendship? Well, maybe A.I. can sit on our shoulders, or live in our ears, and give us advice on that.
If we can program it right, and if we can be assured that the algorithms are genuinely on our side, and not secretly on the side of somebody who wants to promote candidate X to be president, or to promote their share price above all other priorities.

Steven Parton [00:26:44] Yeah. I'm wondering, do you think that we will be resistant to taking that advice from A.I. when it starts to become that advanced? I love that analogy of the text editor correcting our spelling mistakes and our grammar and everything, but I wonder if that is innocuous enough that we don't think much about it, whereas something that feels more like an attitude adjustment, we might not be willing to accept. I guess this is kind of like my earlier question a bit, but how much do you think we're going to be willing to accept A.I. as a primary decision maker in our society, more so than ourselves?

David Wood [00:27:29] So the key point here is the A.I. needs to be explainable A.I. So when I get a red squiggly underline, I can click on it: I thought this word was spelt differently. Ah, it's advising me to spell it like this because the rest of this document is set in American English, not in British English, and in American English it's spelt with one 'l' instead of two 'l's. That makes sense; I'll accept the decision. But if it just says 'you're wrong', that's not really satisfying. And I think it's very dangerous if we accustom ourselves to just being told 'the A.I. is right'. We might start to trust it because it's got things right in the past. But it's like our satnav. Our satnav usually sends us in a sensible direction, and initially, when we are puzzled by it, we find out, oh, the reason it sends us around here is there's a shortcut I didn't even know existed, or there were some roadworks on the other route. So you start trusting it, and then one day it sends you in a very wrong direction because of a bug in its software. That happens less often these days, but still sometimes it happens. So we must insist on explainable A.I. We must also insist that the explanations are good. They must not just be fabrications or rationalizations. So we must have our A.I. being verifiable as far as possible. Because we humans often use justification: oh, why am I doing this? Oh, I'm doing this because of reason X, which we may actually consciously believe is the reason, but deep down there's another psychological factor which is leading us to do that thing. And A.I. might operate in the same way. So we need to emphasize explainability. One of the singularity principles I advocate is 'reject opacity', and another one is 'prioritize verifiability': that we can have confidence that when software is giving us advice, then, other things being equal, it's very likely to be good advice.

Steven Parton [00:29:28] Do you think, given that our black-box A.I.s seem to be the most accurate ones right now, that that might actually be the direction we go in the future? So this desire for transparency may not be compatible with the kind of power that we hope A.I. will unlock for us?

David Wood [00:29:48] So that is a risk, and that's why this is a key issue to elevate. We must keep pushing hard for explainable alternatives. And when it turns out that the non-explainable, black-box system is better, then we have to figure out, well, how else could we monitor it? We could have at least another piece of software that is verifying independently what it's proposing. So rather than saying, well, we have to take it or leave it, and therefore we have to take it, because otherwise we are going to lose out, we could have another piece of software, written differently, with different principles, that could verify whether the advice is plausible. If you go back to DeepMind's go-playing computer, there were two very famous moves in its match with Lee Sedol. One was in the second game, when it seemed to blunder, and most of the commentators wondered, hmm, it's gone wrong, hasn't it? And then they realized, oh, this is actually a very unusual move, which is pure genius. But then, I think in the fourth game, it made another strange move, a move that people thought looked like a blunder. But last time we thought it was a blunder, it was actually smarter than all of us, so probably this is another genius move. In this case, it did turn out to be a blunder, and that was the game which the human, Lee Sedol, won. And if we'd bet all of humanity on that second dubious move, that would have been the wrong decision. But at least some of the team knew it was a blunder, because they had other pieces of software doing an independent calculation of the game. So they knew, inside: oh, AlphaGo is hallucinating now, they said. They knew there was something wrong. And so I think we need that kind of multiple checks and balances, like we have with politics. You know, there should be no one-party government that is all-powerful. There should be checks and balances. So that's part of what we might do to avoid that risk.
But frankly, I don't think there's ever going to be a single answer that's going to ensure that A.I. is safe and beneficial. We're going to have to work hard. There are lots of things we need to put in place, just like government has many checks and balances which sometimes frustrate us because they slow things down, but on the whole it's actually much better. So we need to get the whole world leadership, in various ways, bought in to that principle of second or independent checks before we commit to anything irreversible.

Steven Parton [00:32:29] Yeah. And do you think that the A.I.s are ever going to become conscious? Is that something that you think is realistic as we head down this trajectory? Will AGI actually be able to have qualia, or some kind of conscious experience similar to humans?

David Wood [00:32:45] So I don't rule it out. In fact, I'm fairly sure that in due course we could have an A.I. that has broadly the same sort of qualia and feelings as we do. I don't think it's inevitable. I think there are probably architectures of A.I. that are very good at calculating, very good at figuring out what to do to advance various goals, without having the same inner feelings that we do have. But I need to keep an open mind on this, because I don't claim to understand qualia. I know that I have them; I don't know what they are. I'm prepared to accept that my introspection of my qualia is misleading, just as I can sometimes be misled by my eyes with optical illusions. Many of the confident claims we make about our own consciousness are probably similarly suspect, because we don't have a clear perception. So I'm prepared to admit there are things we don't understand about our consciousness. To me, the hard problem of consciousness is a real dilemma. And so I see two possible futures. I see a possible future in which an AGI gets cleverer than us and then explains to me: right, here's what consciousness is. Look, it's when this and this and this happens; this is what consciousness is, and this is what it is not. It could either conclude, well, its own architecture doesn't have consciousness, and we could see that, or it might point out, well, its architecture falls into that pattern, so when it claims to be conscious it actually is conscious. But that's looking ahead. I'm cautious about creating beings with sentience, because it may be that they will experience more awful pain than anything we know. If they are truly larger minds, they may have more terrible feelings than us if we get things wrong. So that's the reason I'm not going to rush into creating artificial sentience. But it could happen as an unexpected byproduct.
Just as people talk about convergent secondary goals or convergent instrumental goals: whatever ultimate goals you give an AI, it will probably figure out there are things that are useful for it to do, like acquiring more resources, preventing itself from being switched off, preventing its goal from being tampered with from outside, and so on. So in a similar way, regardless of what we ask an AI to do, it might turn out that it's more effective for it to have some of the same consciousness as we've got. For whatever reason we've got consciousness, there is probably an evolutionary reason for it. So there might be a similar reason for AIs being better if they have that consciousness. So that could happen without us intending it. 

Steven Parton [00:35:26] Yeah. Let's talk about that landscape a little bit, the reverberations of something as dramatic as an AI that could tell us what consciousness is. Obviously, when an AI or an AGI gets to the point where it's solving these fundamental questions about reality, society and the lives of individuals are going to be changed dramatically. What kind of consequences or world do you foresee coming to bear when that AGI starts to come online? What kind of societal changes do you think we're going to see humanity shift to? 

David Wood [00:36:06] So that AI, if it goes well, will transform every field of industry, so that all the goods we need for a high quality of life will be abundantly available at very low cost. We will solve the problem of clean energy, perhaps by nuclear fusion, which is already making steps towards being enabled, perhaps by having solar panels out in space and beaming down energy. So we'll have an abundance of clean energy. We will have an abundance of nutritious food, created probably by different agricultural means from today, possibly by cultivated meat, among other things. We will have an abundance of free education and an abundance of free health care. So we will be in a very different society, in which we will no longer need to work for a living. And that depresses and worries people, because their self-conception is tied up with the image of themselves working. But many people understandably look forward to retirement, and many people greatly enjoy their lives in retirement, so long as they are still physically active and fit, because they are able to pursue the things they really enjoy, whether it's studying something new: studying literature, studying music, studying mathematics. In my case, that's what I look forward to. I only glimpsed the mathematical universe in the four years I studied mathematics at Cambridge, and there's so much more I want to get my head around. There's exploring planets, there's exploring games, there's exploring virtual reality. So there's so much that could in principle be very fulfilling, be very inspiring. There would still be stress in such a world, but it wouldn't be the kind of overbearing stress that leaves people broken and completely dispirited. There would be challenges in it, but there would not be existential challenges. So that's the kind of world I envision, which is enabled by these superintelligent AIs, which would in some sense be like the angels or the deities of some of the religious narratives from the past. 

Steven Parton [00:38:19] Do you think that they would help glue together our disparate nations, or will we just be operating in different little silos? Because to me, to some extent, there has to be a grand unification of the species almost in that world for it to become that utopian. Otherwise you end up in a potential war between nations who want the power of the AGI, or something to that effect. So, I don't want to say a one world government per se, but do you see something like that taking place? When that abundant future comes to pass, do you think we'll merge into one society? 

David Wood [00:39:02] I believe there will be some level of international governance, but it needn't control every aspect of life. On the contrary, I believe strongly there will be support and encouragement for diversity. So there will be different societies that work around different principles, different goals, different cultures. But that must still be within an overall framework in which none of these cultures or groups threatens the overall stability. So it could be a bit like FIFA. FIFA is the international federation for the management of football. It's not perfect by any means, but regardless of whether you are coming from a democracy or from a state with no democratic votes, regardless of whether your culture is Islamic or secular or Christian, you only have 11 players on the pitch and you have all got to accept the same definition of the offside rule. Otherwise you couldn't cooperate. And when people bring in video assistant refereeing, they've all got to agree on the same rules for that. So FIFA makes these decisions, but it doesn't dictate other parts of people's lives. So when they go home off the football pitch, they can do things differently. So I envision some improvements in global coordination, with the United Nations, or if not the United Nations then another new body, gradually exercising more regulatory oversight, gradually ensuring that no entity within the world is risking the well-being of the entire planet by polluting too much, or by conducting dangerous geoengineering experiments or dangerous gain-of-function research on new pathogens, and so forth. So there should be regulations better than we've got today. The UN fails in many aspects because it's got countries in it that just don't agree on fundamentals. Perhaps we need to build up something different. Perhaps the G7 will evolve, and those who sign up to the agreed democratic norms will have more advantages than those who refuse to sign up; the latter will be subject to economic sanctions. 

Steven Parton [00:41:12] You've talked before about active transhumanism. What role do you think active transhumanism plays in steering us towards the singularity, towards a positive or a negative one? 

David Wood [00:41:25] So I used the term active transhumanism as a contrast to sideline-cheering transhumanism, the ones who say: yeah, isn't it great? Oh, look at this, this is amazing, exponential transformation, hip, hip, hooray. Which can sometimes turn into hero worship of companies or individuals. It's fine to cheer from the sidelines, but I think we all need to get involved, get our hands dirty. We need to get involved in politics. Sadly, it's a very messy thing, but unless we improve the political processes, we are likely to be governed by worse politicians. We need to get involved in economic considerations as well. So rather than just an ideology as to what the world could be like, we must have programs to actually improve our understanding and measure things, and then identify where we're falling short of where we'd like to be, and then identify actions. It means getting involved with people who aren't transhumanists, but who could be at least partners in one or more collaborations. 

Steven Parton [00:42:34] And when I think of transhumanism, I can't help but think of the smartphone as one of the first real transhumanist technologies. It's the first time, I think, we kind of enter this world where it really, truly feels like Star Trek, and we have this thing that we carry on us that gives us the world's answers. Now, you have one of the best resumes that probably exists for somebody who was there during the development of the smartphone. How do you think the smartphone has evolved, and what kind of impact has it had on society in terms of moving us towards a positive or a negative singularity? 

David Wood [00:43:14] Well, I've thought about this a lot, because I shed blood, sweat and tears almost literally for many years as we struggled to make the first successful smartphones in partnership with Nokia, Ericsson and Motorola, the first three investors in the company I co-founded, Symbian, which spun out of another UK manufacturer, Psion, who made these handheld computers. So this, which I've got in my hand, is a device from 1999, which I used to carry around in my pocket, but it's just a bit too heavy, and the hardware fails, even though the software still runs beautifully. So I forecast a lot, and I was in the business of encouraging people to believe that one day the whole world would use smartphones. And there was a lot of pushback. A lot of people said, we don't need mobile phones, or if we do need mobile phones, we just need dumb phones; we don't need phones with apps. And I would try and make the case that these things would be incredibly useful. So I did have a smartphone in my pocket since 1999, a very simple one from Ericsson, the prototype R380. And all these years, I've seen the smartphones getting better. But in 2001, I committed an argument to paper, which is still on my personal website, the deltawisdom.com site; there's an Insight tab there. And this article, which I wrote taking account of insight from many of my colleagues, said that although we have sold fewer than 1 million smartphones so far in 2001, we can look ahead to around about 2007, when we will sell 100 million smartphones, and it'll happen in these stages. And I set out what stages I thought would happen, and I argued in favor of an open operating system and what you would need to step up. And a lot of what I wrote in the article is profoundly correct. But in one aspect, I was profoundly wrong. I said at one point, the question of which operating system succeeds here is a billion-dollar question. And I was trying to persuade our investors to invest more in what we were doing. 
If we'd gone faster, we could have been more successful. Now, it wasn't a billion-dollar question. It turned out to be a trillion-dollar question, given that the companies who were successful with smartphones, Apple and Google, reached trillion-dollar valuations, in part because they were able to have their successful smartphone operating systems. So I did not anticipate quite how widely these devices would be used. I think it's the same with AI. It was my realization, in fact, around 2005, 2006, that the same path we were following for smartphone technology would likely trailblaze what would happen not in editing our silicon, but in editing our biology, that is biotech, and in editing our artificial intelligence: that they would go through the same slow, disappointing phase before a fast and furious phase that likely would keep on improving and improving. That's what led me to track down the other transhumanists in London. There was a small number at the time, meeting in a pub in Holborn called the Penderel's Oak, a venerable old pub. So I met them, and I was fairly soon convinced that having these discussions is even more important, dare I say it, than making successful smartphones. 

Steven Parton [00:46:44] And with the power that the smartphone has brought to society, it's also obviously brought a lot of issues as well. And I'm thinking specifically of things like surveillance capitalism, things that we'll end up seeing more of in the future, like facial recognition, the issues with Cambridge Analytica. Obviously, there's a lot of things that, on the road to the negative singularity, can take place. So along that line of thought, what are some of the obstacles or concerns that you have as we move towards the singularity? What are some of the biggest things that you think could maybe push us in a wrong direction? 

David Wood [00:47:30] So we could have carelessness with regard to our current technology, which would then lead to a terrible backlash, in which we would say, well, we're not going to use this at all. That would be a terrible shame. Consider the case of smartphones leading people sometimes to commit suicide: in a court case in the UK recently, it was determined that one of the reasons a 14-year-old girl, Molly Russell, had committed suicide is that she had spent too much time wrapped up in Instagram and other sites that had fed her negative information. So, sadly, this technology is helping people to commit suicide; it is depressing people. In other parts of the world, it is leading to genocide. There is a case in Myanmar, formerly known as Burma, in which there was a huge amount of viral, nasty information shared on Burmese-language Facebook about the Rohingya Muslim minority there. And many of the Buddhist majority were so incensed and so angered by these fake news stories and these exaggerations that they went on a rampage. And there was what you could call a genocide, in part because Facebook did not adequately monitor what was happening on their local-language site. So there are terrible things that are happening, and there could be more terrible things happening. And unless we manage it, there could be such a backlash that we're just not going to take advantage of this technology at all. So that's one scenario. And the other scenario is that we fail to manage it, and it drives more of us to become even madder, even more angry. So what happened in Myanmar might be a prelude to something worse happening here, in which we are maddened, we are exasperated, we are incensed by information which appears to show the other side, whoever the other side is, in a terrible light, and makes us do terrible things to try and stop them. And then the worst example is what's happening in Russia and Ukraine, where there is this horrible conflict. 
Thankfully, so far it hasn't gone nuclear. Thankfully, there seem to be constraints, even in Russia, against escalation to nuclear. But things could go haywire there, and things have gone haywire in the past with sites that are meant to be monitoring for incoming nuclear weapons; there have been cases in which it was only due to the common sense of human overseers that they didn't escalate the need to strike back against apparent incoming missiles. If the situation in Russia gets really chaotic, if Putin is pushed from power, or if he fears losing power, then social media whipping things up, by not operating completely well, could push us into an exchange of nuclear weapons, and goodness knows what the outcome would be. So that could stop us getting to the wonderful singularity and lead us into a wasteland, whether it's a nuclear winter, a nuclear fall, or just, I say just, hundreds of millions of people dying. 

Steven Parton [00:50:44] Yeah. Given that capriciousness of human nature, do you support the development of AI as something that takes place externally, beyond the human flesh? Or are you interested in the collaboration between human and AI entities? Do you think we should be integrating through brain-computer interfaces, maybe even mind uploading at some point, some kind of a symbiotic relationship between humans and machines? 

David Wood [00:51:13] We already do have a symbiotic relationship when we have smartphones in our hands, when we have earbuds in our ears, speaking potentially the voice of God, or at least the voice of Google, in our head: turn left, slow down, helping us to understand what somebody in another language may have said to us. We may have it in our glasses before long. That hasn't happened as quickly as many of us predicted, but in due course there will be Apple Glass or Microsoft Glass or something. So there is that element of improvement. I think there will also be improvements in brain-computer interfaces, especially for people who are paraplegic in one form or another, or disabled. We already have deep brain stimulation to help people who are afflicted by Parkinson's. So there are chips and electric currents in people's heads already. We will see more of that, probably going slowly. Just like Google Glass turned out to be not fit for purpose and the advent of smart glasses has taken much longer, I think there will be delays with having wireless chips in the heads of normal people. It's going to take a long time before we are going to consent to have things drilled into our heads. But in due course it may well happen. I don't think, though, that's going to mean we will be as fast and as capable as the pure silicon computers. I think it will help us speed up a bit, but I think the pure silicon computers, without the constraints of the skull and the human biology, will probably be calculating much faster than we do. So that leads us to the other option you mentioned, which is getting the mind out of the skull altogether, and getting the mind, with all our patterns of memory and consciousness, onto a computer. So I'm not in a hurry to do that, first because I don't think it's going to be here anytime soon. We're going to get AGI faster than this, and AGI may help us to do it, because it's going to be so complicated to collect all the information from the brain and put it in the right form. 
And I'm not convinced that will be me in a meaningful sense either. It may be something that looks like me, has my memories, and accepts itself as me, but I'm not sure whether one part of me will be left behind, which wouldn't be happy to be switched off. And I admit, again, I'm unsure here about philosophy of mind, the hard problem of consciousness. And I may change my mind about this once an AGI has explained it to me fully, and I have convinced myself that the AGI wasn't just telling a porky, that I actually follow the whole line of thinking, in which case maybe I will consent at some stage, 2060 perhaps, to be uploaded into a new infrastructure. 

Steven Parton [00:53:58] Yeah. Well, David, we are coming close to our time here, and I want to make sure that I respect our scheduled time. But before we go, obviously, I want to give you a chance to lay out any closing thoughts. Tell us maybe a little bit about The Singularity Principles, anything at all. This is your chance to do so. 

David Wood [00:54:18] Fine. Well, I write because I feel I have something special to say. I've read lots of books about AI, and learned lots from them, but I don't feel any of them truly takes account of the pace of change on the one hand, the potential for rapid acceleration in the next few years. Or, if they do take that into account, they generally have a naive view as to the human and economic and political factors which are constraining and influencing the development of AI and its deployment. So I try and cover that from my own perspective in my book, The Singularity Principles, and it's a book that I believe to be practical, because it says: here are principles that we should be advocating here and now. It's not just, let's wait until AGI is around and then apply the principles. We should be applying them to all sorts of technology today, including gain-of-function research on pathogens, including interventions to improve the climate, and so on. So that book's there. But I'm not just a writer. I think, as I've said earlier, we need to get involved in real discussions, concrete discussions, meaningful discussions. And so check out what London Futurists does: one source of online webinars where there are opportunities to test each other's ideas, to kick the tires of each other's ideas, and hopefully reach a higher consensus. And then I also believe we need to get involved in changing politics. I used to be the leader or co-leader of an organization called the UK Transhumanist Party. Well, that party still exists, but we have renamed it without the T word in it. So I no longer will knock on somebody's door and say, hey, I'm a transhumanist, would you vote for me? I now call this Future Surge, which is a new name, and it's still taking time to get used to it. But I encourage people to check out futuresurge.org and the projects that are taking place there, one of which is a project to revitalize the educational syllabus. 
That's vitalsyllabus.org, which tries to gather together links and pointers to the most pertinent information for people as they feel lost, frankly, in this tumultuous world. What are the skills that are most important, such as learning how to learn, emotional intelligence, agile development? And what are the skills in economics and technology, in AI, and indeed in the management of the singularity, which everybody needs to bear in mind? So that's a project of Future Surge with its own website, vitalsyllabus.org. 

Steven Parton [00:57:00] Perfect, we'll include all of that in the show notes, David. I wish we could talk for many more hours, because there are so many things that I would love to ask you, and I love your thoughtful responses. But thank you for the time that you were able to spend with us. 

David Wood [00:57:14] Well, you had a bunch of great questions, and I think I came up with some new answers in the midst of all of that that I haven't used before. So I'll have to watch the show and remind myself what innovations arose. 

Steven Parton [00:57:25] Perfect. That's what we aim for.