
Why Storytelling Will Prevent AI Dominance

March 6, 2023
Episode 93, with Angus Fletcher

description

This week our guest is Professor of Story Science at Ohio State University’s Project Narrative, Angus Fletcher, who pulls on his background in literature and neuroscience to understand how brains and machines process story and narrative.

Angus has recently made some extremely bold claims, including putting forth a proof that “even a sentient, self-aware, and infinitely powerful computer could never innovate” because it can’t engage in narrative thought. In essence, computer AI cannot replicate human creativity, and all of our expectations around self-driving cars and ChatGPT come down to a human-guided prank that pretends to do something it’s not really doing. In this episode, I obviously push back on this idea that narrative limits computer AI, but Angus makes some strong counter-arguments.

Find out more about Angus and his work at angusfletcher.co


Apply for registration to our exclusive South By Southwest event on March 14th @ www.su.org/basecamp-sxsw

Apply for an Executive Program Scholarship at su.org/executive-program/ep-scholarship

Learn more about Singularity: su.org

Host: Steven Parton - LinkedIn / Twitter

Music by: Amine el Filali

transcript

The following transcription was created automatically. Please be aware that there may be spelling or grammatical errors.

Angus Fletcher [00:00:01] I mean, I think we've had this dream that has accelerated over the last 20, 30 years that somehow there's going to be this computer to come and save us from ourselves. And this is literally the medieval god, you know, this all-logical, omniscient creature that somehow fixes everything and creates heaven. It's not going to happen. We have to own our problems. We have to fix our problems. And a big part of that means we have to start talking with each other. 

Steven Parton [00:00:36] Hello, everyone. My name is Steven Parton and you are listening to the Feedback Loop by Singularity. Before we jump into today's episode, I am excited to share a bit of news. First, I'll be heading to South by Southwest in Austin on March 14th for an exclusive Singularity event at The Contemporary, a stunning modern art gallery that is in the heart of downtown Austin. This will include a full day of connections, discussions and inspiration, with coffee and snacks throughout the day and an open bar celebration at night. So if you're heading to South by and you're interested in joining me, having some discussions, and meeting our community of experts and changemakers, then you can go to su.org/basecamp-sxsw, which I will link in the episode description, so you can sign up for this free invite-only event. And just know it is not a marketing ploy when I say that space is genuinely limited, so if you are serious about joining, you probably want to sign up as soon as you can and get one of those reserved spots. And in other news, we have an exciting opportunity for those of you with a track record of leadership who are focused on positive impact. Specifically, we're excited to announce that for 2023, we're giving away a full-ride scholarship to each one of our five very renowned executive programs, where you can get all kinds of hands-on training and experience with the world's leading experts. You can find the link to that also in the episode description, and once more, time is of the essence here because the application deadline is on March 15th. And one final note: we will be taking a break next week while we attend South by Southwest in Austin, but we will return to our normal schedule the following Monday on March 20th. And now, this week's guest is Professor of Story Science at Ohio State University's Project Narrative, Angus Fletcher, who pulls on his background in literature and neuroscience to understand how brains and machines process story and narrative. Angus has recently made some extremely bold claims, including putting forth a proof that, quote, even a sentient, self-aware and infinitely powerful computer could never innovate, end quote, because, in essence, it can't engage in narrative thought. Angus is basically arguing that computer AI cannot replicate human creativity, and that all of our expectations around self-driving cars and ChatGPT come down largely to a human-guided prank that pretends to do something it's not really doing. In this episode, I obviously push back on these ideas about narrative limits on computer AI, but Angus makes some strong counterarguments. I'll leave it up to you to decide what you think. So on that note, please welcome to the Feedback Loop, Angus Fletcher. All right, man. Well, Angus, I think the best place to start with you, just because there are a lot of topics that we're going to get into that are a bit more complicated, is just to lay a little bit of a foundation by starting with the cliche: what is your background, what is the focus of your work, and what are you really trying to work on these days? 

Angus Fletcher [00:03:59] So I have a very unusual academic background. I am known now as a story scientist, but basically I got started out in neurophysiology, which basically meant that I was studying how one neuron talked to another neuron. And over time, I became very interested in the idea that neurons and brains in general were doing a lot of narrative work, or sort of planning and plotting and hypothesizing strategies: what could happen, what could I do, these kinds of things. And so I was like, I should go study narratives. And so I went and sort of pursued that, and I've had a kind of unusual career which has carried me around. Now I'm at Ohio State's Project Narrative. Before that, I was at USC and at Stanford, and I work with a lot of different groups. I work with people in Hollywood, and most recently I work with U.S. Special Operations. But it's all focused on narrative: how narratives work, particularly how narratives work in the brain, and why narratives have evolved to be such a powerful tool in human intelligence. So that's kind of my main background and one of my main interests. 

Steven Parton [00:04:59] Sure. And so I think with that, we're going to go straight to the big thing that you kind of put forth, which is that you think that computers, and specifically AI, lack the ability to ever really replicate narrative, or let's just say human creativity, and you even say, quote, even a sentient, self-aware or infinitely powerful computer could never innovate or plan to take over the world. So why is it that this idea of narrative that you focus on is not something that you feel can be expressed through the technology as we're trying to do today? 

Angus Fletcher [00:05:36] So first of all, I want to make it clear that we usually think of narrative as a form of communication. Mm hmm. But narrative is a lot of how our brain thinks. And the important thing about narrative is it's a mechanism, and it's a mechanism for going from cause to effect. And computers work through logic. So all a computer is, when you break it down, is a set of logic gates, specifically a set of NAND and NOR logic gates. And those logic gates, and logic in general, have been around for hundreds, really thousands of years, back to Aristotle's Organon, and they think in equations. And equations are just fundamentally different from narratives. They're just different mechanisms. The way I often explain it to people is it's like the difference between a hammer and a saw. It's just two totally different tools. And the human brain has evolved mechanisms to do both logic and narrative, and both of those mechanisms can be non-conscious. So sometimes when people say, oh, well, what about consciousness, what about emergent properties, these kinds of things, that's totally irrelevant, because most of the narrative thought in our head is non-conscious. So the main point of the argument is simply that these are two different mechanisms. They are two different tools. Both of them are incredibly powerful. Both of them are necessary for life. But computers have evolved to be good at one of them, and in fact are much, much better than humans at that one, while they just can't do the other thing at all. And that's why the future is really in human partnerships. It's not in autonomous AI or AGI or anything like that. 
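
To make the logic-gate point concrete, here is a minimal sketch (an editorial addition, not from the conversation itself): every Boolean operation a conventional computer performs can be composed from NAND gates alone.

```python
# A small illustration, assuming nothing beyond standard Python:
# every Boolean operation can be built out of NAND alone,
# which is the sense in which a computer "is" a set of logic gates.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            assert and_(a, b) == (a and b)
            assert or_(a, b) == (a or b)
    print("AND and OR reconstructed from NAND alone")
```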

Steven Parton [00:07:01] So one of the things that you've said that kind of separates this, I guess, logic and equation type of thought from narrative type of thought is this idea of thinking in action rather than thinking in equation. So what does that mean, that we think in action? And why is that something that an AI couldn't do? 

Angus Fletcher [00:07:23] So let's start with AI. Because, I mean, what an AI does, which is amazing, is it thinks in truth. The purpose of logic, as it was developed and devised, is to determine what is true. And so an example of something that is true is two plus two equals four, or two plus two is four. So true things are eternally true. They exist in the mathematical present tense. They don't change. There is no change in truth. Truth doesn't change over time. So it's important to understand that that's the power of how computers think. But that's also the limit of how computers think. Because if you want to think in terms of a cause and effect, the most fundamental thing about a cause and an effect is they cannot occur at the same period of time. A cause must precede the effect. And if you are thinking in the mathematical present tense, that means you cannot think in terms of cause and effect. What you have to do is you have to think cause equals effect. And this is what computers do. So a kind of classic computer thing is to say fire equals smoke, or smoke equals fire. And that's very useful, very useful in terms of signs and semiotics and whatnot. But when you actually start to get down to physics, there's a big difference between whether fire comes before smoke or smoke comes before fire. And computers can't do that. That doesn't make any sense to them. And this is why computers think in correlations. AI computers look for correlational patterns, and they can look for hundreds, thousands, millions of correlational patterns, which is one of the reasons that they're so good at connecting two apparently disconnected things. But they can't do that simple human thing, which is to say, oh, fire causes smoke, as opposed to smoke causes fire. 
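
As a small illustration of the correlation point (an editorial sketch, not Fletcher's example): a correlation statistic is symmetric, so the number it produces for fire and smoke is identical in both directions and encodes nothing about which one comes first.

```python
# A toy illustration (not from the episode): correlation is symmetric,
# so it cannot, on its own, say whether fire causes smoke or vice versa.
import random

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
fire = [random.random() for _ in range(1000)]             # hypothetical "fire intensity"
smoke = [f * 0.8 + random.gauss(0, 0.05) for f in fire]   # smoke generated *from* fire

# Both directions yield the same number: the statistic is blind to ordering.
print(round(pearson(fire, smoke), 3), round(pearson(smoke, fire), 3))
```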

Steven Parton [00:09:07] So the big challenge here that you're pointing to is that there's an inability for a machine that's computing in the present moment to consider a span of time. The span of time in which life takes place is not something that a computer can capture, because it has to compute in snapshots. Is that kind of what you're saying? 

Angus Fletcher [00:09:29] Yeah. So sometimes people say to me, well, Angus, don't you know that computers can do time? I mean, this is totally ridiculous. You know, obviously, mathematically you can have a mathematical curve, but to your point, it's not actually time. It's snapshots. Mm hmm. And, you know, the way to think about it would be the way to think about a kind of old-time film reel. When you watch a 35 millimeter film, it seems like things are moving on the screen. But, of course, in reality, that's a series of frozen images that are just moved to create the illusion of motion. And it's the same thing when a computer is playing chess. A computer is not thinking, oh, you know, here are all the possible future states, and this will happen and then this will happen, in these causal chains. It's just thinking of all possibilities simultaneously as probabilities. It's just a different way of thinking. And it's not a worse way of thinking. It allows you to do lots of things that a human brain could never do. But it also means that because it's a different way of thinking, again, like a hammer and a saw, there are these things that the human brain can do that the computer can't. 
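
A rough, simplified sketch of what game-tree search looks like (an editorial toy example; real chess engines are far more elaborate): the machine assigns static scores to frozen "snapshot" positions at the leaves and backs them up, rather than experiencing the game as an unfolding causal sequence.

```python
# A simplified sketch of game-tree search (not a real chess engine):
# the machine scores "snapshot" positions at the leaves and backs the
# scores up the tree; no position is modeled as causing another.

def minimax(node, maximizing: bool) -> float:
    # A leaf is just a frozen snapshot with a numeric evaluation.
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A hypothetical three-ply game tree: inner lists are choice points,
# numbers are static evaluations of the resulting positions.
tree = [
    [[3, 5], [2, 9]],
    [[0, 1], [7, 4]],
]
print(minimax(tree, maximizing=True))  # -> 5
```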

Steven Parton [00:10:33] Could it be argued that our brains are somewhat doing the same thing, though, operating in a way that kind of takes segments of input at a time? And even though we have a lot of parallel processing that kind of coalesces this into a seamless experience, there are things like the binding problem and whatnot, where we seem to be getting packets of sensory experience that do kind of have that snapshot feel. So how do you reconcile that? 

Angus Fletcher [00:11:02] Well, because the human brain is part computer. Hmm. So my argument isn't that, like, human brains are ontologically different from computers. My argument is that there are multiple mechanisms of intelligence, and the human brain is like a Swiss army knife, and it contains a lot of them. And the computer just takes one of those mechanisms and expands it. So to give you the kind of evolutionary basis of this. Yeah. Let's go back 520 million years to the evolution of the earliest neurons. What are those neurons doing? What is their function? They have two primary functions. The first is vision. Vision is about inducting information from the environment and processing it to find patterns. It's computational. Early vision neurons are on-off sensors: the light is there, the light isn't there. And our visual cortex is ultimately one of the most powerful computers on earth. It can do extraordinary feats of pattern recognition and real-time processing, all kinds of amazing stuff. But that was only half of what things needed to survive. The other thing that they needed to do was to generate action. Why did they need to generate action? Well, because things were trying to eat you. And so if something is trying to eat you, it's all very well to see the thing coming towards you, but you also have to be able to move unpredictably. So you have to be able to generate new and original actions. And this is one of the reasons why a lot of our brain just kind of pulses spontaneously. A lot of our brain is engaged in spontaneous action generation. This is why we get anxious and worried as humans, because there's all these different things kind of going on in our brain that are sparking and creating tension and so on and so forth. That's also why we're creative. I mean, that tension, that anxiety, usually fuels creativity and so on and so forth. So that early primitive action generation mechanism then became part of the human motor cortex and fed into what's often called the default mode network, a kind of center of imagination which thinks in stories. Most of the time you imagine yourself role playing and doing things like that. And so it's a different tool that evolved for different functions. And if one of them was better than the other, the human brain over these 500 million years would have gone in that direction. But instead, what happened is it kept both of them around. And that's because vision and computation are very, very good in high-data, stable environments, and action generation is very, very good in low- and no-data environments that are volatile and uncertain, because you don't know what to do, so you've got to try something, and that something that you try has to be new. So these are just totally different mechanisms of thought that evolved to keep us alive. And the miracle of AI is it's taken one of those and just turbocharged it. And that means that when you have a stable, data-rich environment, of which logistics is a good example, you know, I mean, there's a lot of them that computer people can point to, computers are enormously powerful. But the moment you get volatility and uncertainty, AI gets very fragile and it starts to do very odd things. And that's why you're never going to have a self-driving car. You're never going to have a computer doctor. You're never going to have any of these things, because they bring with them this inability to deal with the very kinds of situations that human brains are comfortable dealing with normally, which is low-data situations. 

Steven Parton [00:14:34] So let's focus in on one thing you talked about right there, which is the self-driving car. Now, you say we're never going to have a self-driving car, but obviously that's one of the things that we can point to right now as, like, our most sci-fi-like thing that we currently have in the world. And it's not fully autonomous, of course, but it's come quite a long way. Is your claim, I guess, that if you take away some of the rule sets that we give it as humans, such as, like, the lines in the road that give it, I guess, a data set to learn the rules from, that it will simply not be able to work? I mean, why is it that that breaks this thing that we currently see functioning quite well? 

Angus Fletcher [00:15:17] Well, it's not functioning quite well. And actually, I was very disturbed to be watching the Super Bowl and see all these ads for, like, self-driving Chevys and stuff like that, which are going to kill people. So there's a lot of problems with self-driving cars, as cool as they seem. The first is that there is kind of a disinformation program behind how these things work. They are very heavily, as you pointed out, hardwired. You know, in the early days people just thought, oh, I'll just kind of drive this car around and it'll absorb information from the environment, because, as you know, that's how machine learning works: you feed it tons and tons of information and then it trains itself. And so people tried to do that, and, you know, there were a lot of problems. And so human engineers came in and they started hardwiring it. But the problem is you can't hardwire these things for all these different situations. And so basically you end up with actually fairly dumb vehicles that have these kind of decision trees that are hardwired by human engineers. And they do things like, when they run out of a decision tree, they just stop. You know, I mean, that's probably better than doing other things, right, if you don't know what to do. But that's still going to cause a lot of problems if you stop inappropriately. And you start thinking we're going to have thousands of these things driving around, you know, that are hardwired by human engineers. Human engineers cannot anticipate every situation that a car is going to encounter, whereas the human brain is very good, when an unexpected situation occurs, at figuring out how to deal with it. Now, that doesn't mean that humans are perfect; humans make lots of mistakes. But the reason that humans have been around for so long is that we are able to deal with those situations and we are able to learn from them. And the way we do it is through a simple version of the scientific method. Again, this isn't magic, right? You hypothesize that something might work. You tell yourself a mini narrative, a hypothetical story, a science fiction, if you will, of the near future. And then you test that: does it work when I start trying it? If it doesn't, you quickly revise the narrative. And the human brain has the ability to run multiple narratives together. So one of the kinds of trainings, for example, we do with special operatives is we teach them, how can you get your brain to be running ten or 20 or 30 different futures in parallel? Because if you can do that, then all of a sudden you have this enormous flexibility in the space. And, you know, that also allows you, when one of those narratives breaks, or all of them break, to quickly come up with a new one that you hadn't had before. And self-driving cars, they just don't have that capacity. And so it's just kind of a boondoggle. And I understand why people are really excited by it. It seems fun, and obviously companies want to take money to do it because the money's there. But I just think that money would be more profitably spent doing things like, for example, training engineers, you know, because engineers solve problems. AI does not. 

Steven Parton [00:18:07] Well, touching on the quote that I pulled earlier in the conversation, where you basically say that, you know, no matter how powerful this AI gets, it's not going to happen. I'm going to lean into that a bit more here and ask: do you think there's room for an AI to create multiple simulations, check their probability for, you know, the likelihood of a desired outcome, and choose from amongst those in the same way you train Special Forces? Like, if processing power gets advanced enough, couldn't we maybe also create the simulations and call those narratives, and then kind of implement the ones that we like? 

Angus Fletcher [00:18:46] Well, we couldn't call them narratives, because they would be a different kind of mechanism. But you're right, we could, like in chess, call them probable outcomes or something like that, and I don't want to get into semantic distinctions. The key is that there's a totally different mechanism through which computers generate original thought. So it's not that computers can't be creative, it's just that computers are creative in a totally different way than humans. And that form of creativity just has different upsides and downsides. So how are computers creative? Computers are creative by being random. So what they do is they generate a whole bunch of random things and then they usually filter that. The way that we would talk about it technically is they do divergent thinking and then convergent thinking. So they spam out a huge variety of ideas, we could call this brainstorming, they random-access a whole bunch of things, right? And then they converge. This is fundamentally different from the way that a scientific process works in the human brain. First of all, humans are not random. We know this because if you ask a human to come up with a random number, it won't be very random. And if you keep asking humans to come up with random numbers, the numbers get worse and worse and worse, right? They're just like, six, six, seven, you know. It's because the human brain has not evolved to do that, but the human brain is nevertheless very creative. So clearly the brain is being creative in a nonrandom way. And you will see this, obviously, if you work with creatives: creatives have their own method. And what's interesting about those methods is that they're distinct to each creative. So Van Gogh doesn't paint the same way as Picasso. Steve Jobs didn't innovate in the same way as Nikola Tesla. They had their own methods. Well, then people say, well, that's incoherent. How can something, you know, not be random but also not be logical? What's going on? It's narrative. What is happening is they are telling themselves a different kind of series of causal sequences, and that's allowing them to tell these narratives and be very precise about it. So when you're a human driver, you're not imagining a billion random things that might happen; you're imagining a small number of hypothetical narratives. And when something unexpected happens, you use that to then spring off and rapidly imagine more narratives off of that. That's a low-data process, for one thing. Narrative is low data. How do we know that narrative is low data? Well, because when you like a story, you like it because of specific details about it. Whenever you're in a beginning creative writing class, the first thing you're always told is be more specific, you know? Because the more specific you can be, the more it captures the imagination and drives the audience. And the way that storytelling works with humans is, if you're telling me a compelling story, I'm actually taking those details and then imagining ahead of you, which is creating suspense. A computer cannot feel suspense, because it's not imagining ahead, you see. So these are just different processes. And, you know, I mean, sometimes people say to me, what if you could have a computer that imagined every single possibility? And I would say to that, well, look, we've now totally got into the realm of science fiction, right? Because that is mechanically impossible; we already know this from mathematically modeling very simple physics equations. If you could have that computer, it would be larger than the universe, you know? And would it work? Yes, of course it would work. But you've just jumped to your own conclusion, right? And so, you know, I mean, to me, when we're talking about now, in the 21st century, these are just odd conversations to be having, because computers are already so much more powerful than the human brain at doing certain things. If they could do the things that humans can do, computers would already be doing them better. So it's just obviously, empirically the case that they're not going to do those things. So why, instead of trying to make a hammer into a saw, why don't we focus on all the things we could be hammering? You know, there's a lot of things computer AI could do really, really, really well. It's just not going to drive cars. It's not going to be a doctor. It's not going to write literature. ChatGPT is another example of a kind of just disastrous overreach. AI is great at logistics. I mean, you know, you can go through, probably better than I can, all the things that AI is good at. But it should focus on those things and be amazing at them, and then let humans be good at the things humans are good at. 
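
A toy sketch of the divergent-then-convergent pattern described above (an editorial example, not Fletcher's and not any production system): spray out a large number of random candidates, then filter them down with a scoring function.

```python
# A toy generate-and-filter loop: a sketch of "divergent then convergent"
# machine creativity as described above, not how any real system is built.
import random

def random_candidate(length: int = 8) -> str:
    # Divergent step: spam out random strings of letters.
    return "".join(random.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(length))

def score(candidate: str, target: str = "narrative") -> int:
    # Convergent step: a crude filter, here just counting letters shared with a target.
    return sum(1 for c, t in zip(candidate, target) if c == t)

random.seed(1)
candidates = [random_candidate() for _ in range(10_000)]   # divergent thinking
best = max(candidates, key=score)                           # convergent thinking
print(best, score(best))
```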

Steven Parton [00:23:00] Yeah, let's touch on ChatGPT, because I know that there's somewhere, I think on your website, where you very explicitly call it a prank and say that it's tricking people, I think is your wording. What's going on there? While the world kind of revels in the joys and the revelatory power of this new technology, you seem to be very skeptical. Why is that? 

Angus Fletcher [00:23:27] Well, to credit myself, I've been skeptical about it for many years now, ever since I was first brought in to work on NLPs. And so, you know, to me, this is like this weird, surreal experience, like the people who predicted that crypto was a bubble, and then somehow people kept investing in crypto, and at a certain point so many people had invested in it that you're like, wait, is it real? And then, of course, eventually it bursts. It's just important to understand what is going on with these NLPs. It's a very, very basic mechanism, which is just mass mix and match. It's just taking huge numbers of texts and randomly mixing and matching between them. So what you're basically getting is the most effective plagiarist in history on a semantic level, but it's not producing ideas, it doesn't understand ideas, and it doesn't understand, more importantly, the narrative components of the texts. And the narrative components are why things work. So narrative is basically causes and effects. And so a narrative says to you, this is why this works. The narrative is a rule of nature or a rule of society. And these machines are completely blind to all that. They can only just mix and match the semantics. So they're not actually engaging in the thought. And the simple way to understand this, I guess, if you want to kind of get technical about it, is action verbs. So action verbs are the simplest example of narrative, and famously, natural language processors just cannot handle action verbs properly. Why? Well, because an action verb takes place in time. So the way that a natural language processor has to deal with an action verb is it has to break it into a proposition. So, for example, Aristotle runs has to exist in the mind of the NLP as Aristotle is running. So an action is turned into a quality. It's the same thing that happens in medieval logic, and it produces medieval science and all the insanity of medieval science. Well, why is this a problem? Well, when you turn an action into a quality, you're first of all eliminating its origin. It's not something that starts at a particular moment in time; it's just something that is eternal. It just is. You're also eliminating the source of the action. It's no longer that Aristotle or his legs or his muscles or something like that is causing the running; it's just that the running exists. And so you immediately just bolt into magical thinking. And so what's happening with all of these texts is they're engaging in the same thought process that produced medieval science, that produced the literary criticism of the mid-twentieth century. This is another one of my favorite topics, if anyone's taken a literature class in the last 20 or 30 years. There's this thing called deconstruction, which essentially proves that no literary text actually means anything, and you can actually read Hamlet as not being about Hamlet, because, and this is of course nuts to a human, it is logically true, because the logic can't perceive all the things that are going on there. So basically, these machines are just not competent to understand what is a very, very important feature of texts. And they make the mistake of thinking that the texts contain reality, whereas actually texts are a memorial tool invented by humans to prompt processes in the human brain which do not exist in the text itself. 
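
For illustration only, here is a toy word-level Markov chain that does purely statistical "mix and match" of existing text with no model of cause and effect. This is an editorial, deliberately crude sketch of the recombination idea Fletcher describes; modern large language models are vastly more sophisticated than this.

```python
# A toy word-level Markov chain: purely statistical recombination of
# existing text, with no representation of cause and effect.
import random
from collections import defaultdict

corpus = "the fire causes the smoke and the smoke follows the fire".split()

# Count which word tends to follow which.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

random.seed(3)
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(transitions.get(word, corpus))
    output.append(word)
print(" ".join(output))
```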

Steven Parton [00:26:44] So what that makes me think of, though, is the idea of the output and whether or not it really matters how you get there. And I guess my point with that is, is it so bad if ChatGPT or these self-driving cars are not doing it in a way that is authentically human but still get the same results? And in a sense, I guess I'm saying, if they can pass the Turing test and if they can appear as thoughtful as us, if I can write a prompt into ChatGPT, tell it to give me a story, and it gives me one, does it really undermine the value that we're getting from it? I mean, does this maybe warrant still investing in this approach, you know, even though it's not the most authentic approach? 

Angus Fletcher [00:27:33] So I don't care about authenticity. I'm a radical pragmatist. So if computers were able to achieve the same output through a different mechanism, I would be completely comfortable with that. You know, that's not a problem for me. I'm not some kind of, you know, romantic essentialist. What I think we should say is that, first of all, the Turing test is a little bit bogus. I mean, it made sense at the time, but obviously what is happening with these computers is they've figured out how to hack the underlying intention behind the test. That's the kind of first thing I think we should say. Second of all, the main point here is, can computers learn? Because if the computer is not learning, it's not going to get better. All it's going to do is spam out nonsense that humans have to wade through. And for the computer to learn, it has to understand when what it's doing is working or not working. So what's important here is that these computers are amazing at text generation, but they have zero capacity at narrative reading. So they cannot produce a text and then tell you whether or not it is narratively coherent, because they lack that capacity; they can't understand how to get better at it. If they had that capacity at even a minimal level, they could over time get better and better and better and better. But because they don't have that capacity, all that's happening is they're spamming out huge amounts of nonsense, and then humans are put in this terrible position. So first of all, the only reason these computers seem even vaguely capable is because they're being fed the work of literally millions of human beings who have given their text to these machines to plagiarize. So it's all based on this foundation of human labor; that's the only reason there's any competency there at all. So it's important to realize that when the computer gives you this work for free, the machine is actually a tiny thing sitting at the top of a huge pyramid of human labor. Second of all, they then spam out stuff which varies from the banal to the dangerous to the insane. And so you therefore have to have this huge number of humans who can read as fast as the machines can produce, to constantly screen out what they're bombarding us with. That's just a losing proposition for humans. Why would you rather have a machine bombard you with nonsense that you have to screen, as opposed to training an 18-year-old to write for you? Because the 18-year-old is going to learn. They're going to understand how narrative progresses and they're going to get better over time. By the time that 18-year-old is 24 or 30, they're going to be a much better writer than they were when they were 18. And you know what? Instead of spamming you with a huge amount of nonsense, they're going to give you less and less stuff, and it's going to be better and better. So your whole system is going to be more efficient. So this isn't an argument about authenticity. This is an argument about effectiveness and efficiency and common sense. And again, if you want to have fun with ChatGPT, there's nothing wrong with that. If you're lonely on a Friday night and you want to have a random conversation with it, I'm not arguing that it's like some crime against humanity to have a conversation with it. But I'm just telling you that it's never going to replace a therapist. It's never going to replace your friend. It's never going to write a TV script. It's never going to say anything intelligent. And to the extent that we are able to perceive narrative in it, it's simply because we are seeing something that it is not even aware it's producing. So it's a case of our brain adding something that the machine itself can't register and therefore can't learn to get better at. 

Steven Parton [00:30:56] Yeah. If we could, can we go back in time a little bit, to maybe help understand our relationship with machines by understanding our relationship with animals? Is there, or I should say, are there any animals, I guess, that think in the narrative way that you're talking about here? Or is this a strictly human thing that requires complex, abstract language? 

Angus Fletcher [00:31:18] No, no, it doesn't require language at all. It's not linguistic, and it's not complex. And this is another important thing: language and narrative are different. Because we think of narrative as communication, and because the famous examples of effective narratives are, you know, Shakespeare or Hemingway or something, we think that language equals narrative. But most of the time, when your brain is making plans and plots, it's not making them in language. You're just thinking, I'm going to do this and I'm going to do that, and you're not even putting those words to it. It's just in your mind, you know, that kind of sequence of actions. If you're a pianist or a surgeon, your hand is capable of making very complicated, creative finger movements without your brain even consciously knowing what it's doing. It's all happening in these lower kind of motor regions of your brain, and you simply don't put that into language. That's why, for example, if you're a great baseball player or a great dancer and you try to explain to someone how you hit the ball, you can't explain it to them, because it's not in language in your head. You can say, I hold the bat here, I do that, and these kinds of things, you know, and you basically have to mimic it for them. So language and narrative are totally different. That's the first thing I want to put out there. And abstraction and narrative are different; narrative is about the specific and the particular. We think that narrative is about abstraction because of Joseph Campbell and the myth of archetypes and the hero's journey and these other things. I'm probably going to get a lot of hate mail because of this, because I'm insulting Joseph Campbell, but he's also a fraud, and that's also junk science. Narrative is specific. What this means is that almost any animal that can move has some capacity for narrative. This also means that narrative is frequently non-conscious, even in the human brain. So, you know, does a dog think in narrative? Absolutely. Does a squid think in narrative? Probably. I mean, it's hard for me to know, right? But almost certainly it does, because it's capable of creative motor movements, you know. So we have reason to suspect that this is a common animal thing. It certainly dates back long before our species to, you know, earlier versions of hominids, and you can see it in earlier primates and so on and so forth, because they're able to make plans. Anything that's able to make a plan is displaying the ability to have a narrative. Crows can make plans, and crows are quite biologically ancient, so you can assume so. So, yeah, it doesn't require any of these kinds of sophisticated things, you know. And, you know, if we're honest about it, logic is itself very ancient. I mean, I think there's this myth nowadays that it's somehow this amazing new thing, but as you and I have already talked about, it comes from Aristotle, and it comes from these logic circuits in the human brain, which are themselves hundreds of millions of years old. So these are very old tools. And what's amazing about AI is it's figured out a new way to use an old tool. 

Steven Parton [00:34:08] Hmm. Do you think, as we progress into the future, our relationship with narrative in our brain, internally or even externally, is changing due to our relationship with machines? I guess my question there is whether, you know, the neuroplasticity or evolutionary adaptiveness of our species might be making us more binary or logical at the loss of the more narrative-driven kind of behavior. Do you think that's something that can occur, or makes any sense? 

Angus Fletcher [00:34:42] It makes a lot of sense. And I don't mean to be alarmist about this, because the human brain isn't like a giant blank slate. You can't suddenly convert one region of the brain to another, rewire the brain, things like that, you know what I mean? But it's definitely clear that we're not exercising the narrative portions of our brain as much as we used to, because, you know, we're so used to using these kinds of computational machines, and at work everyone's so used to thinking in data and spreadsheets and so on and so forth. And that's one of the reasons why I got involved with U.S. Special Operations, because they were very fixated on the idea, I think correctly, that when they received recruits, those recruits had had years and years of training in IQ tests and memorization and purely computational tasks, and they were totally inept at functioning in the actual world, you know, because they hadn't exercised these parts of their brains. You see this with college students all the time. College students can get amazing test scores, and then the moment they graduate, they're panicked, they're anxious, they're freaking out, you know, like, how do I do stuff? How do I do basic things? How do I take care of myself? We know, and we've known for 30 years or so, that school is making kids less creative. And I think a lot of times we think of creativity purely as an artistic thing. Creativity means the ability to solve new, open-ended problems. And so what this means is that creativity is essential for innovation, for leadership, for resilience, for just basically, you know, how do I deal with something I haven't dealt with before? I've got to figure it out, you know? And so the problem with our school system is we have gotten so good at training up this one part of our brain that we've neglected, to your point, the narrative part. And meanwhile, the part of our brain that we've trained up, computers can do for us. So it's just this bizarre situation in which we're training ourselves to get better at something that we already have something better than us at, and we're not training up the part of our brain that there is no substitute for. And you see this in the rise of anxiety, anger, people having problems dealing with stuff. You know, there are so many companies, I mean, I get brought in all the time, because these companies are just drenched in design thinking and ideation and all these logic-based techniques, and they're going round and round in circles and not having any new ideas. And it's like, well, you know, that's because you're out of touch with your primal intelligence, you know, with how your brain kind of evolved to be. 

Steven Parton [00:36:52] So the famous anthropologist Robin Dunbar has talked about, you know, how the human species was made for groups of 150. And people like Yuval Harari have talked about the fact that the reason we got to societies as large as they are now is because we had narratives, mostly religious narratives, that kind of glued our species together, but larger narratives as well that helped us trust each other even if we didn't know who somebody was. Do you see the current world we're entering becoming an issue because it's kind of fragmented and isolating due to technology, kind of pushing people into separate reality tunnels or narrative structures? And we're losing that kind of cohesive umbrella narrative that helps bring kind of, like, harmony to a society? 

Angus Fletcher [00:37:43] Let me make even more enemies. I think Harari is totally wrong. Look, we've always had a multiplicity of narratives. If you go back to the Greek myths, the whole point of the Greek myths is they were always being retold. Aeschylus tells stories differently than Euripides, who tells them differently than Sophocles, who tells them differently than Homer. Narrative is a living tool, because we constantly have to evolve our plots and plans. Life is constantly changing. The whole point of narrative is that it's flexible, not archetypal, not eternal, not universal. This idea that we had these universal cultural narratives which glued us all together is total nonsense. This is a fiction of the British Empire that came about at the end of the 19th century, which is again responsible for Joseph Campbell and this idea of the great myth and that kind of stuff. No, the problem that we're having is actually that people are holding fewer narratives. So what's happening is we're having an atomization in which each of us has our own narrative, as opposed to having a kind of collective set of multiple narratives. And this comes to the fundamental problem that we talked about at the beginning. Computers deal in truth. Narrative is different from truth. Narrative is not truth. A narrative might be falsifiable, a narrative might be useful, those are two things a narrative might be, but a narrative is never true. And when you start believing that a narrative is true, you become intolerant, closed-minded, fixed, angry, fearful. And what we need to do is open people's minds to the reality that there are many different stories to be told. There are many different stories that our lives can take. When I think back on my own past, I can think of myself in multiple ways. I can think about my future in multiple ways. We want to be open to that multiplicity and share that multiplicity. Conspiracy theories are obviously dangerous because people think that they're true. Conspiracy theories would be fine if each was one possibility among a million that you entertain, right? It's not dangerous if you think, well, it's possible that this conspiracy is true. The problem comes when you conflate the narrative with truth. So I don't think that the atomization thing, in terms of the atomizing of narratives, is the problem. I think the atomization problem is that we've lost trust with each other, and we've actually gotten to this point where we can no longer have a conversation and realize that actually, you know, you are valuable to me because you think differently. I mean, that's the essence of narrative, right? It's not that you and I get along because we're the same; you and I get along because we're different. And that means we can solve different problems. You're going to bring unique perspectives to my life, and I'm going to bring unique perspectives to your life. And it's not important that you agree with my view of AI, right? It's good that you have your own view of AI, and we have conversations about it. And then when we find a problem, we have different tools and different resources. And so, you know, human beings have always been diverse. Diversity is our friend. The idea that narratives exist to kind of create modern myths is nonsense, at odds with reality and also with sanity. And you know, what's more important is the capacity of individual humans to hold multiple narratives in their brain at the same time and realize that none of them are true. 

Steven Parton [00:40:30] Let's talk about that idea of holding multiple narratives in your brain at the same time, and maybe give you one more chance to put your foot in some controversy. You talk about your Darwinian approach to literature, and that makes me think of Dawkins and his idea of the meme, you know, a self-replicating idea that moves from mind to mind, kind of like a virus being hosted by human thought. What are your thoughts, I guess, on memes in general? And do you think that the technology is exacerbating the ability for, let's call them modern myths or, you know, standardized narratives, to spread and infect people more easily, and kind of strip people of that multiplicity of narratives and instead force them to kind of acquiesce to or adopt one narrative only? 

Angus Fletcher [00:41:22] Yeah. So, I mean, these are very big and complicated questions. Let me start by saying that I think memes are a cute idea, and I've laughed at a lot of memes on the Internet, as I hope other people have. But memes are not like narratives, and they bear no real relationship to genes. I mean, this is a totally wacky idea; only someone who didn't understand narrative could think that a meme is, like, the fundamental, you know, kind of archetypal basis of how stories work. It's also important to understand that that whole model of transmission is a data-driven model of transmission, in which the idea is that humans are computers which passively download memes and other information from their environment and then use it to make decisions and so on and so forth. And as you and I have talked about, part of the human brain does that, but a huge part of the human brain is just generating stuff. And, you know, human beings are always just making their own stories, creating new stories, generating new stuff, you know? And so this is one of the reasons why people in North Korea are never actually going to be brainwashed. We live in fear of this in our society. I mean, the left has this fear that, like, somehow, you know, there's all these people on the right who have been brainwashed by, you know, right-wing politicians. And then the right also has this idea that, oh, my God, there's all these idiots on the left who've been brainwashed by left-wing politicians. No, that's not how people work. You know, we have a huge degree of autonomy and creativity. The main problem we're experiencing in society is we don't get together and share and listen to our stories anymore. I mean, technology has just created this kind of weird situation in which we get immediate gratification from technology. We feel like little gods. I mean, this was kind of Steve Jobs's key insight, I think, when he built the Macintosh. I mean, if you ever go back and use an early Macintosh, it's, like, a terrible computer. It has, like, no memory, it has no functionality whatsoever, but it has this mouse which allows you to move things around the screen, and you're like, oh, my goodness, I'm a god, I don't have to make any effort at all, you know? And it has this Finder function which allows you to recall everything, and you're like, oh my goodness, I can know everything. It's all these little things that give you these immediate emotional jolts, you know, kind of like Internet clickbait. And I think that we've become so used to that kind of cycle of immediate gratification, which technology gives us about trivial things, that then actually having a conversation with a human being requires, like, real work, you know? And this human doesn't think everything that I think, and I can't just download my ideas onto them. And then we get frustrated. And that's kind of the problem: life is difficult, but in a good way, because when you and I struggle, you know, we then come up with ideas that neither of us would have had alone, and we build things that neither of us would have built alone. But we have to make that commitment to the other person. To get totally philosophical here, it's like a relationship, it's like a romantic relationship, right? You know, you have to invest in that relationship, and you have to realize that over time it's going to cause you a lot of problems. But if you go through those problems, you get growth, you know? So that to me is the main thing that's missing. And, you know, I think that memes and, you know, this idea of disinformation and propaganda, and this idea that we have to, like, make sure all the facts are correct and this kind of stuff, I think that's all a diversion from the real thing, which is that we just need to be able to sit down, listen, care for people, have empathy, I mean, just basic things like that, you know. And those aren't the kind of mass-produced instant answers which I think we increasingly want in our technology culture. 

Steven Parton [00:44:30] Yeah. I'm going to take a bit of a sidestep here, a little bit, to your book Wonderworks, which explored, I think the subtitle is, The 25 Most Powerful Inventions in the History of Literature. Could you maybe give us a bit of a framing in terms of what the most recent one is, and maybe even what the first one was? And we can get this picture of kind of how literature has evolved and maybe is going to evolve. 

Angus Fletcher [00:44:57] Yeah. So I'll give you a couple quick examples. I mean, probably the earliest effect of literature is to create wonder. It's a sense of awe, and you get that sense of awe by a very simple little device, which is known as the stretch. So the stretch can be to just take something blue and make it bluer, or to take someone who's courageous and make them more courageous; a plot twist is another good example of a stretch. And, you know, this is just associated with a kind of mini spiritual experience in the brain. We're just like, wow, you know? And, you know, a lot of us just read books for a sense of wonder, for a sense that there's just more there. And if you like science fiction, if you like fantasy or those kinds of books, those really tap into that very, very heavily. A much more recent invention than wonder would be detective fiction. And detective fiction is great at creating scientists, because what happens in detective fiction is you get a few kind of random pieces of information, and then you start to make guesses, and then you see, ah, my guess is correct. And so I am a huge fan of detective fiction, if it's done well, as a way of kind of training young minds. And, you know, the main point that I was sort of trying to make in the book, I guess maybe there's two main points. I mean, one is that who knows what's going to be invented by future generations, because the whole point is we can't predict it. If we could predict it, it would be logical and it would be a deductive science, as opposed to a kind of narrative, empirical science. But the other point of the book is that we are trained in school to think of stories and literature as a way of getting other people to do what we want them to do. So we're trained like, if I tell you a good story, then you'll buy my product, or you'll elect my candidate, you know, or you'll have the opinions that I want you to have. And I can't tell you the number of young writers I work with who are like, if I could just write a story to convince people that climate change was bad, I would change the world, you know? And, you know, my point there is that, beyond everything else, that's not really literature's primary function. Its primary function is to change the stories you tell yourself. Literature is an opportunity to change the way that you yourself think. And so what I go through in the book is I basically say, you know, if you want to be braver, here are stories you can use to be braver. If you want to be more optimistic, here are stories you can use. If you want to process grief, if you want to be more curious, if you want to think more like a scientist. So I just go through the book and basically talk about how, instead of doing what we do in school now, which is just feed everyone the same books and then interpret them through the same method, which is some version of close reading where we write, like, a five-paragraph essay about them, how about we tailor what people read in the same way that we would tailor medicine or other things we would put into our body and our brain, based on the needs and desires that people have? And how about we explain to them the mechanisms by which they work? 
And how about by doing that we show you how you can grow your brain and your potential in an intentional way that is actually rooted in a specific invention of the literature itself, as opposed to some kind of universal interpretive method that anyone can apply to any text any time. 

Steven Parton [00:47:55] I'm going to bring both our worlds together here and ask: do you think that that is a key way, then, to kind of navigate the rapid change of technology that we're experiencing right now? That we need more kind of access to customized narrative to help us understand our relation to the world? 

Angus Fletcher [00:48:12] Yeah. So first of all, I'm a huge fan of near-term science fiction. I think that's a great source of creativity. I don't think we can ever have enough clever near-term science fiction, where people are really thinking deeply and writing these stories. I mean, we wouldn't have the space program without near-term science fiction. So much of science really just comes from people imagining it before it happens. I also think that, as we've talked about, one of the things that has been lost in our technological world is certain basic functions: the ability to communicate with other people and also the ability to process our own emotions. We have a very hard time processing our own emotions. So we generally just do things like mindfulness, which is, you know, okay as a short-term thing, but which doesn't actually help you do the deep work of going through your memories, processing your grief, processing trauma, processing all sorts of things, shame, embarrassment, whatever. And I think the wonderful thing about literature is we've all read a book at some point which has helped us grieve. You know, it has stimulated a memory and it has helped us process that memory. And this is just stuff you have to do as a human: you have to work through your past and your memories and your narratives. So there's this huge function, I think, for literature in a technological world. And the more technological we get, the more logical we get, the more we're still going to have to find ways to deal with the fact that we are emotional, we are narrative, and these are not bad things. You know, there's a lot of intelligence in emotion; we can maybe talk about that in another episode if you want. But I mean, you know, there's a lot of intelligence in all these kinds of things, and literature can kind of help us access our full intelligence and not just kind of make ourselves into, like, mini computers. 

Steven Parton [00:49:47] Yeah. Well, looking forward, then, let's say your proof claim is 100% valid, everything is as you say it is. What are the implications? What does that mean in terms of where you think we should invest our time and energy as a society? What does that really mean as we navigate what is right now looking like a pretty insane, revelatory moment for AI and technology?

Angus Fletcher [00:50:15] Well, the first thing it means is that it's not going to solve our problems for us. I mean, I think we've had this dream, one that has accelerated over the last 20 or 30 years, that somehow there's going to be this computer that comes and saves us from ourselves. And this is literally the medieval god, you know, this all-logical, omniscient creature that somehow fixes everything and creates heaven. It's not going to happen. We have to own our problems. We have to fix our problems. And a big part of that means we have to start talking with each other, because it's only communities and groups that can solve problems. Again, we live in this world where we think, I'm really smart, and if I just give everyone enough data I can convince them to do what I want them to do. That's not how humans work. So, you know, we have to start having hard conversations with people. It also means we need to invest a lot more in human education. Our educational system now is really failing people. It's not preparing them for life, and it's not because there's something malevolent about it. A lot of times when people complain about education systems, they think there's some brainwashing happening, or that it's people on the left or the right pushing party agendas. But that's not the problem. The problem is it's not helping kids solve their own problems. It's not building resilience. It's not building problem solving. We want to invest in all of those things, because those are our future scientists. Those are our future engineers. Those are our future doctors. Those are our future writers, who'll invent new literary inventions. We want to invest in all those kinds of things alongside technology. The final thing I'll say, and this is where usually people who have agreed with me the whole way unplug and walk away in indignation, but I'm going to say it anyway: the narrative elements of the human brain are mechanical. What that means is you could build a machine to do narrative. It's just not a computer. And so if we really, really value narrative, and we think that it's going to be very helpful to us, we should also start investing in alternate forms of intelligence, of non-human intelligence. You know, I can't really call them artificial intelligence on this podcast, because artificial intelligence means computational artificial intelligence, but they would be other mechanical forms of intelligence. And I've sketched out some of the ways in which that could work. It's not an unsolvable problem. So if people are really interested in having a narrative machine, you could go ahead and build that. You could invest in that, and it could help you solve problems that human brains couldn't solve.

Steven Parton [00:52:32] Yeah, I can't let you drop that bomb on us and just walk away. So, I mean, can you give us a bit of a hint of what kind of machine that is? Are we talking something like synthetic biology, or are we talking something more, you know, analog? Like, what kind of machine that's not a computer or an AI gets us that?

Angus Fletcher [00:52:54] Look, the first thing to say is we don't actually have the hardware capacity to really do this yet, because one of the things that's important about the way the human brain works is that it doesn't run on a continuous flow of electrons. That's one of the things that allows it to work without design. I've talked about this elsewhere; if people are curious, there are a bunch of science papers on this stuff. But first of all, you'd need to be able to build a machine where each of its individual units was self-powered, so that it could make connections that are electron-independent, just like in our brain. You know, one neuron connects physically, not electronically, to the next. So that would be an example of the kind of hardware thing you'd have to do. And the main thing is, we'd probably want to look to the human neuron for inspiration, but then go out to engineers and say, here's what the function is, let's go and find a different way to do it. I've talked to people at MIT, I've talked to people at DARPA, and there are conversations underway about this kind of stuff. But, you know, again, it's like when ENIAC was built. ENIAC was a mostly useless piece of technology. It was enormous, this huge installation in downtown Philadelphia or wherever, and the military had to constantly reprogram the thing every time they wanted to use it. They basically used it to figure out a couple of artillery trajectories and then maybe, potentially, model a hydrogen bomb. It took the invention of the transistor and all sorts of huge hardware breakthroughs to allow for what we're seeing now with software. And so the big pitch I'm basically making here is: let's start investing again in new kinds of hardware. We have this total bias in the engineering community that the smart people go into software, that that's where the real intelligence is, that's where the real future is. I'm telling you, there are new forms of hardware to be built that aren't just wearables. There's actual big, big hardware, things that are non-computational and operate in completely different mechanical ways, just like the combustion engine is different from a computer. They're just very different pieces of hardware. And I would encourage young people to see that there's a lot of opportunity out there to build new mechanisms. And again, if you want to build new mechanisms, remember that a mechanism is another word for a narrative. So go and get yourself some narratives, get yourself some near-term sci-fi, and start thinking more in story.

Steven Parton [00:55:23] I love it, Angus. We're coming up on time, so any closing thoughts before we close this out?

Angus Fletcher [00:55:29] No, I'm honestly just thrilled that you humored me through this entire podcast. And I would say to people that if they're interested in reading more of my stuff, it's published in academic journals. You can find it, and it's peer reviewed. It may be wrong, but it has at least generated enough support to get out there. And the main thing that I want to communicate to people is: be imaginative. Get away from this idea that data is somehow going to solve all our problems for us. There's a huge role for imagination in the future, as an engineer, as a scientist, as anyone. And at the moment, that's a human capacity. So don't give up the gift. Don't give up the opportunity.
