
Using Nature to Rethink Artificial Intelligence

November 7, 2022
ep
78
with
James Bridle

description

This week our guest is writer, artist and technologist, James Bridle. Famous for trapping a self-driving car with salt in the mountains of Greece, his artworks have been commissioned by numerous galleries and institutions and have been exhibited worldwide. His writing on literature, culture and networks has appeared in magazines and newspapers including Wired, the Atlantic, the New Statesman, the Guardian, and the Financial Times. He is the author of 'New Dark Age' (2018) and of the recently published 'Ways of Being' (2022). In this episode, we explore James’ books, with a particular emphasis on the lessons the natural world is teaching us about intelligence and how we can leverage that information to alter A.I.'s development towards something more humanistic and harmonious with the planet.

Find James’ work at jamesbridle.com, or get involved with his latest project at serverfarm.jamesbridle.com

**

Host: Steven Parton - LinkedIn / Twitter

Music by: Amine el Filali

transcript

The following transcription was created automatically. Please be aware that there may be spelling or grammatical errors.

James Bridle [00:00:01] Contemporary computation is so complex and so opaque to most people and unknowable in its totality to anyone that we lack a lot of the kind of cognitive skills and the social structures to live healthily within it. 

Steven Parton [00:00:34] Hello everyone. My name is Steven Parton and you are listening to the Feedback Loop on Singularity Radio. This week our guest is writer, artist and technologist James Bridle. James's artwork has been commissioned by numerous galleries and institutions throughout the world, and his writing on literature, culture and networks has appeared in magazines and newspapers, including Wired, The Atlantic, The Guardian and the Financial Times. In 2018, he authored the book New Dark Age, and just this year he published Ways of Being. In this episode, we explore James's books with a particular emphasis on the lessons the natural world is teaching us about intelligence and how we can leverage that information to alter AI's development towards something that's more humanistic and more harmonious with the planet. And now let's jump into it. Everyone, please welcome to the podcast, James Bridle. The best place to start, then, is with this series of books that you recently wrote, New Dark Age and Ways of Being. And one thing that I'm interested in is that they're both kind of critical of technology. They're quite challenging to the status quo, and from very unique perspectives. Could you just talk a little bit about what motivated you to explore maybe some of the failures of technology from these lenses? 

James Bridle [00:02:07] Certainly. I mean, I've been working in and around technology for 20 plus years now. My academic background, a very long time ago, was in computer science, and when I was studying I also studied a lot of the kind of stuff that's particularly prominent in the latest book, Ways of Being. And it's kind of weird to reflect on that, because I was studying it then at a time when it was really on the way out. It was very unpopular, it didn't seem to be going anywhere, which I didn't realize when I started studying it but became evident by the end. Um, and it's striking that actually not much has changed. The underlying ideas that are driving the kind of contemporary boom in A.I., much of the math, is still the same as it was then, but the massive change has been in data and processing, essentially, which are also symptoms of the same thing. You know, we've had 20 years of vast data accumulation by social media companies and by governments. That's a huge driver of the current capabilities of AI. And then you also have the increase in processing power, which comes at huge energy cost. So I think there's a kind of through line to some of the stuff I've been writing about more recently. And the first book came about... I've been making artwork and working around technology for a long time, and I always wanted to write a book about the Internet, and I thought of it as a book about how great the Internet is. But I started writing that book in 2017, sort of between Brexit and Trump's election, when it became, you know, unavoidable that it was impossible to write a book about how great the Internet was, because the Internet was so obviously contributing in quite important ways to these ruptures in culture, these things that really seemed to be damaging society in various ways. 
And having already studied a lot of the kind of weird effects of that, it felt necessary to write something that tried to account for those things in some way. And so New Dark Age is a book about the current state of technology. Really, not very much has changed in the last five years regarding what I wrote in that book, I don't think. It's written without trying to propose any kind of solutions; indeed, it argues that a lot of the framing of technological issues as being amenable to simple solutions is a big part of the problem. But the main thesis of that book is that contemporary computation is so complex and so opaque to most people, and unknowable in its totality to anyone, that we lack a lot of the kind of cognitive skills and the social structures to live healthily within it. And within that book I look at all kinds of things, everything from algorithmic systems, whether that's the thing that chooses what YouTube video you watch next, all the way up to systems used for predictive policing or judicial sentencing, looking at the kind of biases within them, the kind of emergent effects that trip us up or worse, and just the kind of culture that the Internet in general, and these particular systems within it, seem to be creating, that we're just not very good at talking about or addressing. And one of the chapters in that book was also focused on ecology and the climate, talking about the impact of not just computational systems, which do generate vast amounts of greenhouse gases and require huge amounts of energy and all kinds of things, but also the cognitive aspects of it, which is also a big part of what New Dark Age was. I kind of sketched out this view of what I called computational thinking, which is what happens when we see the whole world as being like a computer. 
And because of the, you know, complexity, but also power of the computational tools that we use all the time, we've sort of shaped our way of understanding the world through those technologies, which is powerful in certain aspects but hugely limiting in others. And writing that chapter scared me a little bit, because it made me do quite a lot of research into the current climate situation, more than even I, as a very interested person in this stuff, had done previously. And it also seemed to hit a nerve with readers. It's one of the ones that gets quoted back to me the most. And I think that's quite a key realization, because it really expresses the extent to which, among all the other things these technologies do, they tend to separate us from the earth, from the ground on which we actually stand. They're largely tools of abstraction. They create these kind of alternative and very limited models of the world in which we spend most of our time and thought, and lead us to pay less attention to the world around us. And I don't just mean that in a kind of old-man-shouting-at-clouds, kids-these-days way. I just think they have very fundamental impacts on the way that we think and how we imagine the world. And that desire for simple solutions to incredibly complex problems is one of them. And also just the general kind of malaise of the contemporary era, a kind of lack of trust in politics, even in community at large, our increased atomization, the growth of fundamentalism, extremism, conspiracy theories and just increased division, is partly a product of these kind of computational worldviews and partly a result of a lack of agency: knowing we live inside these complex systems that we certainly don't control and barely understand, and not knowing how to act meaningfully within them. 
And so in the last few years, you know, partly as a result of the reactions to that climate chapter, I've refocused my practice around the environment and around ecology, as I think probably the most important thing any of us could be working on, if we can, in the present moment, while at the same time trying to bring a little bit of what I know from my previous work, my previous focus, to bear on questions of ecology. And so in the new book there's a lot about ecology and the extraordinary abilities of the more-than-human world, everything that we share the world with that is not human and yet exhibits extraordinary agency and intelligence in various ways. But I come at it a little bit through the lens of technologies that we've created ourselves, and perhaps we'll talk a bit more about those ideas in a bit, and also try to think a little bit about how technology could be otherwise, with particular attention to the question of if we are to go forward into a better world, whatever that looks like. You know, so often it feels like either we are on a kind of accelerationist path, in which these technologies will continue to grow in power and complexity until we reach some point of collapse, or the alternative, kind of deep ecological view, is that we will just have to reject all of these technologies and essentially go back to the caves, in the sort of extreme formulation. And I'm always on the lookout for something slightly more hopeful and interesting than either of those proposals, but which will nevertheless require a kind of huge rethinking and adaptation on our part. And that's really what Ways of Being is about: it's about looking at the world through new eyes, but informed by a lot of what we do know, and trying to think of new kinds of structures and processes that we could generate together with everybody else that we share this planet with, in the hope of kind of more just and equitable futures. 

Steven Parton [00:10:18] Yeah. And beyond just being more harmonious with nature, are there specific lessons that you think we could adopt from nature? I think at one point you mentioned forests, fungi and octopuses, you know, and some of the ways they express their intelligence. Do you think there's wisdom there that we can use to kind of break us out of that more binary computational worldview that you feel is harming us so much? 

James Bridle [00:10:47] Yeah. I mean, in the book I cite many examples, and I'll be happy to tell some of those pretty good stories. But more broadly, the lesson really is that if we can decenter the human from some of our expectations of the world around us, stop seeing humanity, and particularly human intelligence, as being both unique and kind of higher or more special than all other ways of living in this world, then we have vast amounts to learn about how to live with the Earth and with other species in ways that are more sustainable. And that, you know, really does involve treating everybody else we share this planet with with far more care and respect than we mostly have done for the last few centuries, if not longer. And reimagining what our relationships would be, how we share space, how we make decisions, and simply how we think about other species, particularly, you know, trying to imagine them as collaborators and knowledge keepers for huge amounts of things that we do very badly on this planet, but for which they have evolved many marvelous capabilities for doing otherwise. 

Steven Parton [00:12:06] Yeah. Are there cases of, like, biomimicry that you've seen or thought about that you think would be really great if we implemented? 

James Bridle [00:12:15] Well, I mean, I don't think it's so much about biomimicry. But I do think about when we find examples in nature of creatures that seem to do things very well that we really struggle with. And so a really good example of this is slime molds, which are strange unicellular little critters somewhere between fungi and algae. They don't really fit into the established categories we have for other species. They spend part of their lifecycle as kind of individual little squiggly amoeba-like creatures, and sometimes just as kind of big sacs of cytoplasm with free-floating nuclei, as collaborative, cooperative agents. And they have some particularly extraordinary abilities. A few years ago, some researchers at the University of Tokyo made a sort of petri dish arrangement with oat flakes, which the slime mold really likes to eat, in the pattern of the population centers of the Greater Tokyo area. And they also used light, which slime molds don't like very much, to represent rivers and mountains, kind of difficult geographical obstacles. And then they put slime mold on this plate. And what slime mold tries to do, one of the things it seems to do, is find the most efficient routes between food sources. And within 24 hours, this slime mold had basically recreated, very clearly and obviously, the pattern of the Tokyo metro system: a very cleverly engineered, highly efficient network system for this particular geographical outline. And that's a neat trick, but it goes quite a lot further. 
Scientists subsequently discovered that they could set up an experiment to see how well slime mold could solve the traveling salesman problem, which is a very simple mathematical problem that's very, very hard to solve. It simply asks the question: if you have five or six cities to visit, what's the shortest route you can take to visit each of them only once? And this is incredibly hard for humans and for computers, because it's what's called exponentially difficult: if you have five cities, there are five times four times three times two times one possible routes, and you have to evaluate each one. There's no shortcut. But if you add one more city, then it's six times five times four times three times two times one. So this gets much, much harder very, very quickly, which is exactly the kind of problem that humans and computers absolutely hate. But slime molds don't seem to have a problem with it. They solve it in linear time. It doesn't get harder for them each time; they just keep solving it on the same kind of straight line of solving time, and we don't know how they do it. It's not a mechanism that we have access to, and yet they have solved a problem that humanity collectively spends billions of pounds on. I mean, imagine if you're a big logistics company. We simply don't know a mechanism to solve this efficiently, so they have some other ability, some other way of thinking, that does it better. Another of my favorites, you know, to illustrate it in a slightly different way, is that for the last few years I've been working with scientists in northern Greece. I live in Greece. And these scientists are researching a class of plants called hyperaccumulators. Hyperaccumulators occur all over the world in various forms, but what they have in common is that they're capable of living in particularly metal-rich soils, whatever that metal might be. 
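The factorial blow-up described above can be sketched in a few lines of Python. This is a toy illustration only: the four cities and the distance table are invented, and it simply tries every ordering, which is exactly why the approach collapses as cities are added.

```python
import itertools
import math

def route_length(route, dist):
    """Total length of a route that visits each city exactly once."""
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

def brute_force_tsp(cities, dist):
    """Try every possible ordering of the cities and keep the shortest.

    There are n! orderings to check, which is the exponential
    difficulty described in the conversation."""
    best = min(itertools.permutations(cities),
               key=lambda r: route_length(r, dist))
    return best, route_length(best, dist)

# Invented symmetric distance table for four cities.
dist = {
    'A': {'A': 0, 'B': 2, 'C': 9, 'D': 10},
    'B': {'A': 2, 'B': 0, 'C': 6, 'D': 4},
    'C': {'A': 9, 'B': 6, 'C': 0, 'D': 3},
    'D': {'A': 10, 'B': 4, 'C': 3, 'D': 0},
}

route, length = brute_force_tsp('ABCD', dist)
print(route, length)  # the shortest ordering found
# Five cities mean 120 routes; six mean 720 -- the growth is factorial.
print(math.factorial(5), math.factorial(6))  # 120 720
```

Adding a city multiplies the work rather than adding to it, which is why exact solutions at logistics scale are out of reach for this kind of enumeration.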
In Greece it's nickel, but they're found for all kinds of different substances around the world. And when a soil is very rich in metal, that's toxic to most kinds of plants, but it means certain plants evolve to deal with it. And the way they deal with it is that they are actually capable of drawing the metal up into their stems, into their leaves, and storing it there. The first big research on these kinds of plants was actually done in the nineties by mining companies, who thought that they could plant these plants on old industrial sites and they'd essentially clean the soil. That's called phytoremediation, and it actually does work really, really well. So it's, you know, something that plants can help us do without us adding additional damage or chemical load into the soils. And the researchers that I work with in Greece are actually doing something called phytomining, which is that they're planting these plants on just open fields, not old industrial sites, but where there is this rich nickel content already there. The plants grow, and then they harvest the plants, and then you can get the metal back out of the plants again. They can burn the plants, so there's a kind of circularity, plus energy generation. The plants' roots stay in the ground, which retains carbon dioxide. So there's this kind of beautiful arrangement, out of which we get the metals and the plants get continual growing cycles. And the plants that they're using there are really interesting. There are three different kinds. There's one that grows all around the Mediterranean. There's one that only grows in northern Greece and Albania. And there's one that only grows within 50 kilometers of the site. It's called Bornmuellera tymphaea; it's named after Mount Tymfi, the highest peak of this mountain range. And so they're called endemics: they grow only in these very local areas. 
And what that means is they've evolved a particular knowledge of that place over millions and millions of years. And they figured out how to do something very specific, which is to extract quite a difficult, complex, even toxic substance from the ground without damaging the earth, and in fact by making it more hospitable for other plants. And given the way in which we typically mine new materials, that is an extraordinary thing to see. The plants are capable of mining metals far more successfully, and in harmony with the earth, than we are. And so for me that's a really key example of a knowledge embedded in these creatures, developed by these creatures, that we have so much to learn from. 

Steven Parton [00:18:29] Yeah. And you're touching on something that I think is really interesting as we, you know, build and develop artificial intelligence, which is that our technological intelligence is kind of made in a vacuum, or in a lot of ways disconnected from the ecology of the planet. It's very much made, you know, with a narrow focus, with a very narrow set of data in most cases. And it's not really getting those millions of years of education that you talked about, that the plant has as it develops its relationship with its environment. Do you think that's part of what the issue here is, that a lot of our technology is just, I guess, isolated from the reality of the world and the human condition? 

James Bridle [00:19:17] I mean, yeah, not just the human condition, but a kind of global ecological condition. In the book, I describe the kind of AI that we're largely developing at present as corporate AI, because it is largely developed by large corporations, occasionally by governments. And this is a kind of self-fulfilling prophecy, because the way in which AI currently functions is, as I mentioned earlier, largely as a function of having vast amounts of data and vast amounts of processing power, both of which cost huge amounts of money. So the ability to develop this kind of AI in the present moment is very much limited to those with huge amounts of money in advance. And so, of course, it's going to follow those kinds of desires, which are largely profit driven. The success metric for these AIs is ultimately how much money they can make. And if you think of any organism that's evolved within a system in which its primary aim is to make more money, you're going to end up with a pretty grim and very specific type of intelligence. You can imagine that framework of digital corporate development as being like an ecological niche into which these forms of A.I. are fitting very, very neatly, but in a very, very narrow fashion, and with very, very little connection to everything else that really matters upon this planet. And, you know, for me, I got this sense of that very clearly from watching even the very basic examples of how AI is emerging into the world. It's important, I think, to make the distinction here between the A.I. that we actually have, which is largely just quite complex algorithms, machine learning, which is brilliant, but it's not intelligence. 
And a distinction between that and the kind of public idea of A.I., the kind of science fiction popular imagination, which is also very interesting and possibly more powerful as a driver of our imaginations than the reality of AI, because it's been with us for much longer and is much more powerful, but again tends towards the idea of something like the human. And that idea of something like the human is a continual problem we have with thinking about these things. But, you know, just looking at the places in which AI has appeared in the popular imagination as a result of these strange paths of its development over the last couple of decades, it's striking the extent to which most of those examples are to do with essentially beating humans at things that we either consider ourselves to be quite good at or particularly enjoy, like playing games, or, you know, replacing us and automating us out of work and livelihoods in various ways, because it is at heart largely competitive, because of its kind of profit impulse. The way most AI is developed is in the form of essentially competing against human benchmarks in various ways. So it breeds this quite voracious form of intelligence that's designed to extract specific forms of knowledge and then iterate upon them in order to beat us at the things that we enjoy, and ultimately to supplant us in various ways. But what I think is also very interesting about that is, quite often, if you stare at it long enough, it really reveals the extent to which, despite the fact we're constantly trying to replicate human intelligence in various ways, we're also demonstrating the ways in which this machine intelligence is deeply unlike the human. You know, one of the reasons that AI is successful at this is that it does come up with strategies that are not human-like. You know, it may be trained against humans or it may be sort of on its own, but it is doing something that is very unlike human intelligence. 
And that was another of the real realizations for me in writing Ways of Being: that there's something fascinating about a moment in which we are being forced to recognize, having been blind to it for so long, that other forms of intelligence than the human exist. If we're capable of creating a form of intelligence that is not like human intelligence, then more than one kind of intelligence exists. And if more than one kind of intelligence exists, then a potentially infinite number of forms, or ways of doing intelligence, exist. It's at that moment that we start to look around ourselves and realize: oh, we're surrounded by these different kinds of intelligence that we haven't given enough credence to, waiting there, just, you know, to be heard and listened to if we actually seek them out. 

Steven Parton [00:24:07] What specific lessons do you think are there for us to learn with something like machine intelligence, or the larger planetary forms of intelligence that maybe we've been blind to? Like, is there something you can put your finger on that is maybe a nuance of intelligence? 

James Bridle [00:24:25] The main realizations about intelligence that I had writing this book... And I did not come to this as a psychologist or a neuroscientist, you know; I came to it as an interested writer, an artist with a background in kind of computer science and the visual arts, just trying to get a handle on this. And so really what happened was a lot of my own preconceptions were challenged in doing so. The first of which is that there is really no good definition of intelligence. Intelligence is one of these things that we say all the time, and we mostly agree on what we're talking about in that particular moment, but it's actually very poorly defined. You can look at various qualities, you know; there's a bunch of things like memory and planning, tool use, certain ways of acting, certain kinds of mental, cognitive systems that we like to think of as intelligent behavior. And in any definition of intelligence, we usually take a kind of grab bag of those things and say, well, that's intelligence. But really, what we've always meant by intelligence is what humans do. So we already have this kind of blind spot to other ways of thinking, and realizing quite how stark that was was quite a realization, because it then allows you to look at other intelligences anew. And when you start to do that, you start to see that a lot of the other assumptions that go along with that don't really hold. And in particular, you know, my conception of intelligence now, as well as being something that is essentially more than human, is also to constantly be reminded that intelligence is embodied and it's relational. And what that means is it's something that doesn't just happen inside the head, and it doesn't just happen within individuals. 
One of my favorite examples of how bad we are at recognizing the intelligence of other beings is gibbons. And gibbons are brilliant, you know, obviously clever, highly evolved apes, very close to us in evolutionary history. And yet for years they presented a problem, which is that they consistently failed, and refused to participate in, one of the kind of standard tests of intelligence, which is tool use. And what, you know, behavioral scientists used to do is they'd put gibbons, along with a whole bunch of other apes, into an enclosure, and they'd give them a tool like a stick, just lying there on the ground, and place some food out of reach. And they'd wait and see if the animal used the stick to get the food. And the thing is, most apes do this fine: the chimpanzees and the orangutans and gorillas, who are sort of between us and the gibbon on this kind of semi-imaginary scale, but also a bunch of ones that we consider to be lower, like baboons and macaque monkeys and bears, things like that. And so gibbons presented this problem: why do they not fit within our understanding? And it was only when the experiment was redesigned, and the sticks were hung from the top of the gibbons' enclosure, that the gibbons immediately grasped the sticks and used them as we might expect them to do, because they simply didn't see them the same way when they were lying on the ground. Because gibbons are arboreal: they live most of their time in the trees. And so they have an intelligence that is oriented upwards, and they have a body pattern that is configured for making use of things that appear to them in a different way. They even have particularly long fingers, which make it very hard for them to pick up things off the ground, but very good at picking things off the trees. And so it was just in this moment, with the gibbon, that we changed our perspective in order to recognize how gibbons' intelligence might be configured differently. 
The gibbon also showed us that intelligence is embodied: it's not just about what happens in the head, but is actually about the entire pattern of the body, and therefore one's whole life experience and surroundings. And I say also that intelligence is relational, because it is something that appears between bodies, in situations. The brain is not a thing existing in a vat, which is also why most AIs are so incredibly limited: because having senses, but also particular actuators, ways of actually making or doing things in the world, changing the structure of the world around us, is also a huge part of our intelligence. I mean, anyone who's had the experience of going to a place and remembering something, or having a particular experience that relates to that place, knows that human intelligence is essentially embedded in the world around us. We make use of the world as part of our intelligence apparatus. You could say the same thing about mobile phones and computers: we outsource cognition to other devices, and it's only accessible to us when we're in relationship with those things. But you also see things like the way in which spiders store information and plans about the world in the form of their webs. The cognition extends beyond the body. All of which is to say that intelligence is not, as we've historically thought, something internal, unique to humans, just in this kind of single clump, but is part of our ongoing connection with the world. 

Steven Parton [00:29:47] Yeah. Do you think we're in a situation now where maybe we've engineered ourselves into a maladaptive environment, where our metaphorical stick is on the ground when it really should be up in the air? I mean, like, have we maybe thought about things in a way that has made the environment not conducive to us, because we didn't think, like we didn't with the gibbons? 

James Bridle [00:30:14] Yeah. Well, we've certainly created a situation, through our own narrow view of what we think is good for us, which has given us the illusion that we can make use of everything else on the planet, and that has created a maladaptive and maladjusted world, which is kind of returning to damage us. And I think particularly with regard to our computational tools, we've constructed them along very, very narrow lines, but they've become so powerful, we imagine them as the only way to think at all. And, you know, it's very striking to me, and I wrote about this in the book, that 99.999% of all computers in existence are of one type of computer, right? They're the universal Turing machine, this binary machine that was first described by Alan Turing in the 1930s, and that has an incredibly powerful but very, very limited structure. Turing himself said that the automatic machine was a limited machine that could only do whatever you told it to do. It's not adaptive, like life is adaptive, and is designed to operate only in very enclosed circumstances. And yet this tool has become the basis of pretty much all life on earth. It's what we use not only to access the world and to gather information, but to organize that information, to categorize that knowledge. It reproduces its own kind of binary, algorithmic form into our systems of thinking and knowledge. And that is relatively new. I mean, it's less than 100 years old at this point. And yet it has come to completely dominate human thinking because of its power, but a power that is showing its limitations in really powerful ways. And, you know, one of the things I write about in the book is that other systems are available, essentially. And it's really extraordinary. 
If you go back to Turing's very first couple of papers on what he called the automatic machine, which we call the Turing machine, the universal Turing machine that is the basis of almost all of our computers, Turing himself says that this is only one type of computer, and that another type of computer is possible. And he doesn't say very much about it. He calls it a choice machine, or an Oracle machine. But he says, in a really confusing and frankly deflationary way, just this about the Oracle machine: that whatever it is, it cannot be a machine. And then he just kind of leaves that hanging and goes on with all of his other computation work. But what the Oracle machine is, is a computer which doesn't work entirely internally, that doesn't just step through a set of preprogrammed conditions and try to solve one single problem, as all of our machines do, and really as we've kind of imagined the brain to be doing. Rather, it stops at certain points in its process and looks for input from the outside, from the greater world, and attempts to communicate with something outside of itself in order to better understand the broader system in which it's operating, to understand its ecosystem, and therefore to be capable of adjusting its programming, adjusting its course, based on the actual situation it finds itself in. And that is the basis for a lot of thinking in disciplines like cybernetics, which take these very narrow, rigid, fixed ideas of computation in quite interesting new directions. And these ideas are also present within disciplines like soft robotics, and within various forms of alternative AI and kind of biological computing systems and so on and so forth, that are starting to map out some alternatives to this very monolithic, narrow corporate AI I described. 
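The distinction described above can be caricatured in a few lines of code. This is only a loose sketch of the idea, not anything from Turing's papers: the "automatic" machine runs its preprogrammed steps in a closed loop, while the "choice" machine pauses after each step to consult something outside itself (here a stand-in function) and lets that answer adjust its course.

```python
def automatic_machine(program, state):
    """Closed computation: step through preprogrammed rules,
    with no reference to anything outside the machine."""
    for rule in program:
        state = rule(state)
    return state

def choice_machine(program, state, oracle):
    """After each step, pause and consult an outside 'oracle',
    standing in for a sensor, an environment, another being,
    and let its answer adjust the machine's course."""
    for rule in program:
        state = rule(state)
        state = oracle(state)  # input from outside the machine
    return state

# A toy program that keeps doubling a number.
program = [lambda s: s * 2] * 4

closed = automatic_machine(program, 1)  # runs to 16 regardless of the world
adjusted = choice_machine(program, 1, lambda s: min(s, 5))  # the 'world' caps it
print(closed, adjusted)  # 16 5
```

The point of the sketch is only the shape of the loop: one process is sealed, the other is in continual relationship with something beyond itself.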

Steven Parton [00:34:06] Yeah. And I feel like this kind of gets into a point you make in the book as well that I'd love for you to expand on if you could, which is that it's basically really bad to let AIs make decisions for us, that we just capitulate to whatever decision they come up with, without maybe that feedback loop that the oracle machine might provide. So could you talk maybe a little bit about why you think, you know, even something as mundane as a GPS giving us directions creates what I think you call the boredom and fear that kind of permeates society?

James Bridle [00:34:39] Yeah. I mean, it doesn't even take AI, really, to do this. There's something that happens when we are given assistance by something, particularly something fast and automated, that kind of intercedes between us and the world. This is often called automation bias, in fact, and it's been quite well studied by psychologists. But the kind of popular example is what park rangers in the UK and the US call Death by GPS, which is the growing and not insignificant number of cases of people who are found dead in their vehicles because they followed inaccurate GPS advice against all the evidence of their senses. So people driving into Death Valley or other kinds of inhospitable places and running out of gas, not carrying food, or just going so far off road, even to the point of driving into rivers and lakes, because this bright line on the screen tells them there's a road there, and they trust the machine over their senses. And that is terrible, and can also seem a little bit comedic, which is dangerous, because this isn't a product of stupidity. Automation bias occurs in everybody, including people who are super highly trained and essentially should know better. There's a very famous study of airline pilots in simulators who, despite having thousands, tens of thousands of hours of flight experience, and having that really highly trained pilot's knowledge of how a plane functions and what to do in every conceivable situation, and all of the checklists that have been developed, and so on.
These experiments showed very clearly that if those pilots are given the wrong instruction by an automated system that they trust, even, you know, within just the right time frame of that decision-making process, 99% of the time they will follow it. Essentially, and probably, because our brains are designed, or have at least evolved, to take on those sorts of cues, because our brains try to do as little work as possible. And so it's a very efficient kind of hack on our cognition to provide us with easy solutions at just the moment that we need them. And so much of contemporary technology has crept up so close to the skin that it can guide us and mold us in those particular moments so powerfully. Think of something like Pokémon Go, you know, which was, and I think it's probably still going on, but certainly at its height, this kind of incredibly powerful and occasionally incredibly fun game that got people out onto the streets and running around and doing all these kind of interesting things. We saw some of the weird crowd effects of that, where thousands of people would descend on a location. What was kind of less talked about was the fact that, you know, when that game was released, Nintendo and Niantic, the game companies, had pre-sold the locations of most of the gyms, these places you need to go to kind of recharge or whatever it is, to large brands, to Starbucks and McDonald's and others. And so people who were kind of running around playing this game were literally being physically walked into these corporate locations without having any idea that this was absolutely part of the programming of the game itself.
So this kind of control over people's behavior in multiple ways is so easy to do with our technologies that I think we have to be incredibly cautious of any technology that essentially removes our conscious choice and agency, whatever the intention behind it is. Our own kind of critical thinking around that will always matter far, far more for our health and safety, and the health and safety of everything around us. Because for me, the root of so much of the malaise that I mentioned earlier, and that you've just brought up, this kind of uncertainty and fear, shading into anger and hate, that characterizes so much of public discourse in the present moment, is a result of our awareness, however unconscious, of that real lack of agency. And that becomes really critical when it comes to things like dealing with the climate emergency, because we all know something terrible is happening, and yet most of us have no idea how to deal with it. And we're paralyzed by fear on the one hand, because the acknowledgment of the climate emergency brings on a form of trauma that most of us do not have the tools to even acknowledge, let alone work through, in the present. But it also strikes at the heart of this lack of agency that we possess, that we know so little about the world around us that we feel a lack of power over our own lives. And when you lack that sense of agency, the knowledge that your actions are meaningful and can make a difference even within the limited sphere of your own life, then our ability to enact political change on a much greater scale is zero. And so there's a real suppression of collective agency through this suppression of individual agency that happens through these technologies.

Steven Parton [00:40:16] Well, I was going to say, it feels like we're kind of trapped in a lot of ways now in these AI systems that are mostly being used, I feel like, for surveillance capitalism, to steer us away from thinking about those things that are more challenging. And we're happy to hand over that control because we don't want to think about those things, because, like you said, it's traumatic.

James Bridle [00:40:39] Yeah. It's always easier not to think about these things. They are hard to think about, and that's very clear. And that's something that is taken advantage of by a lot of these systems that then, you know, make lots of money by selling us the alternatives. And that can be a critique of capitalism, and I'm more than happy to make that critique, but it's also something to do with the ease of these systems being put into that particular use. Because I think it's also really important to emphasize that these tools can always be turned around and put to other uses. They contain really extraordinary potentialities for seeing the world differently, and not just the world through screens. For example, just to take a first story about the dual-use abilities, to use a military term, of most of these technologies: one of my many favorite stories in the book is the one about the senior NASA manager who got this phone call one day. This happened a few years ago, in kind of 2017, 2018. And he tells the story that he just sort of got this phone call slightly out of the blue, and he had to check to make sure it was real, from someone who claimed to be from the National Geospatial-Intelligence Agency, which is the kind of third U.S. intelligence agency after the CIA and the NSA. It's the one that runs the secret space program. And the person on the phone told this NASA manager that they had a couple of satellites going spare, and did NASA want them? And this NASA manager, after checking a few things, ended up in a kind of huge warehouse in upstate New York, where in this immense secret clean room were two space telescopes more advanced than the Hubble Space Telescope, which has been, for 20 or more years, the most advanced thing we've put into space.
This predates the James Webb, though I think these are definitely equivalent in technology to the James Webb as well, and were themselves 10 to 20 years old, surplus to the requirements of the surveillance industry. And those two satellites are now being refurbished by NASA, and I think one of them has launched or is launching very soon as WFIRST, the Wide Field Infrared Survey Telescope, which is going into space to look for signs of dark matter and the origins of the universe, and also to scan for exoplanets, new forms of life in outer space. And pretty much, I mean, no slight to NASA, they did a lot of work here, but the main thing they did was turn it around, to point it up instead of down. Like, these things were designed to be pointed at Earth, to spy on us in secret and, you know, maintain existing power relationships. And they've been flipped around to look at the world and the universe, and to increase our knowledge and agency within it. And so there is always, within these technologies, the potential for kind of total and radical change. And that's a really key thing to understand when talking about this. And for me, it deepens the argument against capitalism, because you can really separate, you know, the effects of these technologies, or the potential effects, from the particular things that they're mostly used for in the present. And then you have to start looking for other ways in which we can create them, because it's very obvious that simply the code alone, or the techniques we have in the present, are so easily captured by corporate forces, and that's why they mostly operate in the way they do. It's not enough just to say, well, you know, some of us will build them in a different way. You need to build quite robust societal, collective frameworks around them in order to think very deeply about what doing it in another way means.
One of my favorite examples of this is a group called Te Hiku Media in New Zealand, who for the last few years have been using machine learning to build translation systems between Maori, an Indigenous language and culture of New Zealand, and other languages, particularly English. They were just a radio station, I mean, a very wonderful radio station. They'd been broadcasting for 20, 30 years in Maori and across a range of dialects, so they had this huge corpus of recorded speech, and they wanted to index it. To index something like that using machine learning, you also need a corpus of tagged speech. So they needed some sentences in Maori with actually typed equivalents, so the system would know how the transcription should work, just to transcribe it. And what they did was they reached out to the community, they reached out to almost all the Maori language organizations in New Zealand, to gather this corpus of tagged speech, literally just asking people to record themselves reading particular sentences. And they got a huge response, and they were capable of building, in just a few months, a speech recognition engine that could outperform ones built by kind of large corporations. But what's also particularly key here is that they did it with the community, and it remains owned by their communities. One of the big arguments they've had subsequently is that large corporations have tried to either buy that technology directly or to pay some of the people who previously volunteered their time to do this again for their language systems. And the argument that they make is that this is a form of kind of ongoing colonialism, where, you know, the only reason that a large corporation wants a Maori speech recognition engine is essentially to sell things, including that language, back to the Maori themselves, rather than having a kind of community ownership over this system.
And they've even developed a protocol and a licensing for things like this that are made by Indigenous communities, which are currently being used by other Indigenous communities around the world, in order to kind of restrict the use of these things to the communities that make them themselves. And so there is not, within this, necessarily a big, obvious profit motive, particularly if you're not scaling at the size of a corporation. But there are very, very different models here for how we can usefully engage the processes of AI and machine learning for the benefit of the community, rather than purely as kind of profit-making things, and that involves changing the fundamental structure of how we go about building technology in the first place.
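The corpus of tagged speech described here, volunteer recordings paired with typed transcripts, is the raw material of any speech recognition engine. A minimal sketch of assembling such a corpus into training and development manifests might look like the following Python; the file paths, prompt sentences, and CSV layout are invented for illustration and are not the group's actual pipeline.

```python
import csv
import random
from pathlib import Path

def build_manifest(pairs, out_prefix, dev_fraction=0.1, seed=0):
    """Pair each recording with its typed transcript and split the
    corpus into training and development sets, the basic shape of
    the 'tagged speech' data a recognizer is trained on."""
    rows = [{"audio": str(audio), "transcript": text} for audio, text in pairs]
    random.Random(seed).shuffle(rows)          # reproducible shuffle
    n_dev = max(1, int(len(rows) * dev_fraction))
    splits = {"dev": rows[:n_dev], "train": rows[n_dev:]}
    for name, subset in splits.items():
        with open(f"{out_prefix}_{name}.csv", "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=["audio", "transcript"])
            writer.writeheader()
            writer.writerows(subset)
    return {name: len(subset) for name, subset in splits.items()}

# Hypothetical volunteer contributions: one prompt sentence per recording.
pairs = [(Path(f"recordings/speaker{i}.wav"), f"prompt sentence {i}")
         for i in range(20)]
print(build_manifest(pairs, "corpus"))  # {'dev': 2, 'train': 18}
```

The held-out development set is what lets the community measure the engine's accuracy on speech it was never trained on.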

Steven Parton [00:47:28] Yeah. I mean, I love that idea, but the place I always get stuck on is that in a world where it is driven by capitalism, where a lot of people do need the money, how do you incentivize people to do something, you know, for their community, or, you know, to basically turn the satellite around in the case of NASA, rather than take the exploitative, you know, consumerist path, when there are so many aspects working against us? It seems like in a lot of ways we really need to have this kind of feedback loop where there's a cultural shift at the same time as a technological shift, that then enables a cultural shift, that enables a better, you know, technological one.

James Bridle [00:48:15] Yeah, these things are tightly tied.

Steven Parton [00:48:16] Yeah. And I'm just... The hard part for me is it feels like the momentum is going against making these community decisions like, you know, they did in New Zealand, and is pushing most people's decision-making apparatus, you know, toward the negative incentives of capitalism.

James Bridle [00:48:35] Yeah. And recognizing, first and foremost, that there is a deliberate process being done to people, that it is part of a plan, however semi-unconscious. It's not like I think everyone doing it is deeply evil, but that is absolutely the track we're on. And in all these cases, any technological problem of sufficient scale is primarily a political one. It involves the community and culture that you're engaged with, and it requires thinking very hard about it in the real moment. I was thinking, while writing, about one of these kind of, you know, supposedly terrifying paradoxes that crop up in these kinds of arguments all the time, which is the trolley problem, which I'm sure some of your listeners are familiar with; anyone who follows machine ethics debates comes across it a lot. Briefly, the trolley problem is, you know, there is a runaway trolley car, or tram for Europeans, approaching a switch point, and on one side of the thing is, like, a little old lady, and on the other one is, like, six children or something. It's basically just, you know, how do you try and do the least harm within these kinds of automated systems? And it's brought up all the time as some kind of, like, show-stopping moral quandary that tells us that there are always difficult, hard decisions to be made within this kind of development. But the thing about the trolley problem is it ignores absolutely everything else that matters in this equation. It ignores the fact that, you know, a system was designed with these particular constraints. It ignores the fact that a bunch of other, you know, things have to come into play around the cultural design of a system in order to produce this in the first place. And the fact that a real-world tram scenario includes not just those people on the track but everyone else on the roads.
The fact that you've decided to have a public transport system over a car one, the fact that you've decided to make a system of brakes that's capable of failing in this kind of way. I could go on and on. But what it does is ignore the entire context of decisions that went into producing this particular moment. And that is our great failure in kind of technological ethics at the moment: we only see these kind of inflection points of where harm is done, rather than focusing on the far broader culture in which those situations are produced. And, you know, we see it happening now with all the discussions around self-driving cars, which are a kind of test case of A.I. systems, all the discussions around kind of pedestrian safety and so on, you know, its relationship to self-driving, which are really only hard questions if you think that self-driving cars are a thing that just obviously should exist without respect to the communities and, you know, people that surround them all the time. And so, yeah, there's a huge, huge gap between the way that we think about the particular instances of technology and the actual conditions of life that need to be brought to bear upon them. And money is the thing that mostly forces that constriction of viewpoints. But I like to think that it is potentially changing, and that certain awarenesses and changes of thought remain entirely possible, not least because of the ways in which our technology has at certain times created that greater awareness. You know, I'm thinking of things like the satellite programs giving us the Pale Blue Dot, you know, which changed environmental consciousness to such a degree, the shot from outer space. You know, I relate very closely my thoughts on the possibility of the strangeness of A.I.
intelligence, making us think more about the intelligence of other beings, to the fact that without the Internet, we wouldn't have been capable of recognizing the way in which forest networks operate. The first researchers who started to discover the kind of extraordinary networks that exist within forests, the relationships between tree roots and fungal mycelium that allow nutrients, and information as well, to kind of pass through the forest roots, were also some of the first people, in the 1970s and eighties, because they were working in large institutions, to be connected up to the Internet, which was the birth of a certain way of thinking about networks. They came to that work with a model of networks in their mind which didn't exist previously. And in fact, even the mathematics that was developed to describe the Internet, network theory, which was developed because previous mathematical topology didn't, you know, describe well enough the behavior of this very odd thing, the Internet, which had all of these different nodes with different weights, and you could take them out and put them in, and it didn't seem to change the overall kind of information transmission power of the network. That mathematics was then used to understand the ways in which the trees were communicating. The two are not identical, but they form a kind of powerful metaphor that we were only capable of developing because of our technology. We seem to have the need as a species to construct these ways of seeing, either internally, in our minds, or through the building of technologies, that then allow us to see the world afresh, to radically change our perspective on the world. And that potential remains always within us. We're capable of reimagining the world from some of the most surprising kind of beginning points when that necessity is upon us.
I don't necessarily hold high hopes that we're going to do it with any particular speed in the present moment, but it remains something useful for me to think about.
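The property of network theory alluded to above, that nodes can be removed without breaking the overall flow of information through the network, can be sketched with a small graph in Python. The graph below is invented for illustration; it stands in for any redundant network, routers or tree roots, where every node has more than one path to the rest.

```python
from collections import deque

def reachable(graph, start):
    """Breadth-first search: the set of nodes that information
    can reach from `start` along the network's edges."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

def remove_node(graph, gone):
    """The same network with one node (a failed router, a felled tree)
    taken out, along with all edges that touched it."""
    return {n: {m for m in nbrs if m != gone}
            for n, nbrs in graph.items() if n != gone}

# A small redundant network: no single node is the only route anywhere.
net = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d", "e"},
    "d": {"b", "c", "e"},
    "e": {"c", "d"},
}
print(sorted(reachable(net, "a")))                  # ['a', 'b', 'c', 'd', 'e']
print(sorted(reachable(remove_node(net, "c"), "a")))  # ['a', 'b', 'd', 'e']
```

Removing even the best-connected node still leaves every remaining node reachable, which is the robustness that older point-to-point topologies lacked.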

Steven Parton [00:54:44] Yeah. So as we come up towards the end of our time here, would it be fair to say that your hope, or maybe the stopgap that you see for this kind of exploitative momentum that intelligence and technology and capitalism have kind of brought us to, is that as technology progresses, it gives us new insights into things that slow that momentum, or undermine it, and help us see how we can do better?

James Bridle [00:55:17] I don't think technology alone is going to do any of those things. But if we are capable of sharing the knowledge of how to build these technologies in different ways, and the little stories like the contemporary examples I've given do give us some hope there, then we are capable, utterly, of changing the direction of them. And that will require, as I say, huge political changes, but the lessons are right there. We're really only just now starting to glimpse the abilities of other beings around us, and as we learn more and more about them, you know, whether it's simple things like the slime mold equations, or whether it's, you know, the forms of politics, enacted by other species, that I write about extensively in the book, there are real lessons for us. In some ways, it becomes untenable to stick to our existing processes when more and more alternatives become available to us and start to be tested out and actually practiced. I do think that there is a tendency, a hopefulness within us, that pushes us towards those. And they're going to be necessary, because we're not about to stop climate change, or significantly mitigate it, any time soon. And so alternative strategies become mandatory, essentially, in the situation that we're going to face. And however much we do in the next few decades to address the changes that are going to happen, that's going to have to involve looking around us for the strategies and knowledges that exist beyond the human and beyond our technology, so that, perhaps in partnership with them, we can survive at all in the next decades. And so maybe that doesn't sound a lot like hope, but I'm not that interested in hope or optimism. I'm interested particularly in agency, and what we're capable of imagining, capable of feeling ourselves capable of, and therefore capable of actually doing when the time becomes necessary.

Steven Parton [00:57:26] That's a lovely note to end on, James, but I do want to quickly offer you a chance to add any closing notes if you'd like. I know obviously we're going to put a link to the book in the show notes and tell everybody where they can find it. But if you have any closing thoughts or projects you're working on that you'd like to share or talk about, feel free.

James Bridle [00:57:47] Well, so, I mean, you know, having written that book, I'm now trying to follow some of its rules, essentially. You know, trying to take a few of those ideas from the book and say, well, if we actually started to work along these lines, what would that look like? And so a project I have going at the moment is a project called Server Farm, which is intentionally wide open at this point and potentially, almost certainly, decades long, because one of the things you find when you start to work with plants and animals is that they don't fit within, you know, kind of standard schedules or timelines. But Server Farm is a project essentially to reconstruct the entire architecture of computation, which, as we imagine it at the moment, and as I critique within the book, is this incredibly narrow, binary, and maladaptive, or rather non-adaptive, kind of process, and to bring in other species, animals, plants and microorganisms, to start to take on some of the, to maintain the metaphor of computation, input, processing, and output that we know from quite simple architectures. But to see what happens when, for example, some of that computation is being performed by slime mold, fungi or other kinds of microorganisms; when some of that memory storage is being performed by DNA changes within plants and seeds; when the output is a field of flowers whose, you know, placement is designed by the patterns of those microorganisms. This is a very large-scale, long-term project, as I said, but it's a place in which the kind of more-than-human relationships of equality and care and respect that I describe in the book might actually be practiced, where we see other species, and perhaps technologies as well, as being both persons, having their own kind of rights and responsibilities, and being things that we can't really know, which is something we haven't really talked about in this conversation.
It's very interesting to me: how do you work with systems that you don't fully understand and can't really know because they're so different to us? How do we do that in ways that manifest equality? And ultimately, having this access to cognitive systems that are composed of us, machines, and other species, what sort of questions can we ask? How can we reframe the things that we want to know, and how can we understand the answers? So it's a big, big project, and I don't know how it will go, but it's already leading to all kinds of fascinating questions about how we relate and how we think.

Steven Parton [01:00:20] Is there any way for people to kind of keep an eye on that, to get involved if they're interested?

James Bridle [01:00:26] Yeah, yeah. You can go to serverfarm.jamesbridle.com. You can see me talking about it. You can even read a weird little science fiction story about it.

Steven Parton [01:00:33] Perfect. Lovely. James, man, thank you so much for this well-articulated conversation and a lot to think about. I really appreciate your time.

James Bridle [01:00:42] Thanks so much for having me. It's been a pleasure.