
Using Technology To Thrive in Chaos

August 29, 2022
Episode 69
with David Weinberger

description

This week our guest is author and technologist David Weinberger, who has spent years lecturing at Harvard as well as acting as a fellow and senior researcher at the renowned Berkman Klein Center for Internet & Society. Just prior to COVID, David released his latest book, Everyday Chaos: Technology, Complexity, and How We’re Thriving in a New World of Possibility.

In this episode, David and I explore some of the key ideas he focused on in Everyday Chaos. This includes looking at the ways in which we have historically used reductionist thinking to make generalizations for society, products, and technology, and how the latest technologies like the internet and machine learning are revealing how much more we can thrive when we embrace chaos and customization. This means letting individuals and data tell us what people want by exploring all the possibilities rather than attempting to predict and shape outcomes beforehand.


Find out more about David at his website weinberger.org and buy his book at everydaychaosbook.com

transcript

The following transcription was created automatically. Please be aware that there may be spelling or grammatical errors.

David Weinberger [00:00:00] But it turns out the Internet shows us over and over and over again that we cannot predict what we're going to want, what other people are going to want, what's going to be important, how things are going to be put together. And thus, we cannot predict what we should put up, how we should categorize it, how we should connect it, all that sort of stuff. We get a much richer world if we hold back from anticipating, but a world in which we're not anticipating is a fundamentally different world. 

Steven Parton [00:00:38] Hello everyone. My name is Steven Parton and you're listening to the Feedback Loop on Singularity Radio. This week our guest is author and technologist David Weinberger, who has spent years lecturing at Harvard, as well as acting as a fellow and senior researcher at the renowned Berkman Klein Center for Internet & Society. Just prior to COVID, David released his latest book, Everyday Chaos: Technology, Complexity, and How We're Thriving in a New World of Possibility. In this episode, David and I explore some of the key ideas that he focused on in Everyday Chaos. This includes looking at the ways in which we have historically used reductionist thinking to make generalizations for society, for products, for technology, and how the latest technologies like the Internet and machine learning are revealing how much more we can actually thrive if we embrace chaos and customization. This means letting individuals and data tell us what people want by exploring all the possibilities, rather than attempting to predict and shape the outcomes beforehand. And now, with that being said, let's get into it. Everyone, please welcome to the Feedback Loop, David Weinberger. My favorite place to start with anyone who's written a book is: what was it that motivated you to write this book? What made you think that this was a story worth telling? 

David Weinberger [00:02:11] There's an unintended but obvious theme or thread to what I've been writing for the past 30 years, which has been about the way technology shapes our ideas. The fact that it does in some sense shape them, well, "shapes" is a very loose sort of phrase, and how it shapes them is a really hard question. But nevertheless, our thinking about who we are and how we live in the world and what the world is, those sorts of things, seems to track our technology, taken in broad strokes. So I've been writing primarily about the Internet's effect on everything. Not on everything, but it's broad enough. In particular, I've been an enthusiast of the Internet from its beginning, even while recognizing the horrible things that it does. I tend to write a little bit less about that, in part because, as a writer, that's very well covered territory, and in part because, while it's very important that the negatives be covered, it does have the tendency in the sort of global or national conversation to blot out the positive, or at least the really interesting ways in which the Internet has already changed how we think about a whole bunch of things. And I've consistently noticed that what interests me is the way in which the Internet in particular has broken up our models, given us models of things that are broken up into lots of little pieces rather than centrally controlled, or things that start out as wholes that we can just take for granted. The Internet has given us a different picture of what things are and how they relate. The title of one of my books, from 2001, is Small Pieces Loosely Joined. And that seems to be a pattern that guides how I see everything. For almost every book that I've written, for different publishers, the first draft of the cover art 
is small circles or small squares that are loosely joined, even though that's the title of only one of the books and not the rest of them. So that theme seems to be apparent even in the other books as well. But over the past, I don't know, five or six years, I've become very interested in machine learning. And for the same sorts of reasons. What is it that we are learning about ourselves from machine learning? When I say that, I'm not thinking so much in terms of, for example, what machine learning tells us about human neurological structures, if it tells us anything at all, but rather how our encounter with machine learning is teaching us about how the world is put together and what it means to know things or to explain things and the like, all of which are really human, thought-level structures. So I wrote the book reluctantly, because I thought I was not going to write more books; my prior book was in some ways an argument against writing books. But being a hypocrite, you know, gets you over a whole bunch of problems really quickly. So I did. And the structure of Everyday Chaos, roughly, is, I don't know, half or a third about how the Internet has been changing our ideas about how the future works. And then the rest of it is pretty much about how machine learning picks up on that. Machine learning is giving us, in a sense, a model for understanding and appropriating the sort of chaos that we got used to living in thanks to the Internet. And so there seemed, for me, a natural connection between the two. That's probably the longest answer you've ever gotten. 

Steven Parton [00:06:54] Not at all, believe me. Not even close. 

David Weinberger [00:06:57] But I can go on. 

Steven Parton [00:06:58] Well, what I'd like to hear, if we could kind of continue this journey, is: what was that old picture, that old way of thinking? What were the older models that we've been operating on that you're starting to see a shift away from? Before we get too future-oriented, I'd love to talk about the previous or present paradigm. 

David Weinberger [00:07:18] Good, I think that's very helpful. I should preface this by saying that my long-ago background is in philosophy. I have a Ph.D. in philosophy from 1979 and I taught until 1986, so technically that's a long time ago. I'm not a qualified academic philosopher now, even if I was then, about which there could be some argument. But that will explain why I'm about to answer this question the way I am: what is the background out of which we are emerging, what is the context? And I think it's a very old one in the West. Everything that I've written or talked about assumes I'm talking about the West, only because that's the only thing I know anything about. That culture, the one we've all grown up in and which is still present, has lots and lots of things to say about it, of course. But that culture, and our current culture, is faced with an enormous problem, which is that the world is very, very big and our brains, or our embodied brains if one prefers, as I do, are very, very small. And so we have had to deal with that. One of our most fundamental, basic strategies, in the West and elsewhere but very much in the West, has been to try to reduce what there is to know by focusing on what things have in common, whether these are laws that explain how the world works, laws which, ever since Newton, we take to apply everywhere the same in the entire universe. That's pretty much new with Newton. So we find laws that apply to every case very helpful, and I'm not arguing against that. 
We have been on a search since the Greeks, maybe epitomized by Aristotle, for the essence of things, which assumes that there is an essence. For the Greeks, this essence is what a thing really is, what's essential about it, as opposed to what is merely accidental, what we don't care about: it's still this thing even if the skin of the apple is green instead of red, or whatever. So if you're Aristotle, you're trying to find a definition for each thing. And then the human task is to discover these definitions and how they are organized. Because the assumption also was that the natural world consists of objects that have essences that we humans, as rational animals, can understand, or that we humans as creations of God can understand, because He has literally given us this ability. And these things are also arranged in a perfect and knowable and logical order, and we can know that order as well. For Aristotle, it was a taxonomic sort of order, in which things are in categories, and the categories are in categories, in this hierarchy, and it's all quite beautiful. For thousands of years we were in pursuit of this, for God would not have created a chaotic world; He created a world that expressed beauty, a beautifully, perfectly organized world. And for us to understand it, and we have to be able to understand it, because we are those creatures the essence of which is to understand their world, we can discover those definitions, we can discover that order, all those things. This ignores particularities, individuals. An individual apple in the scheme, or an individual duck-billed platypus, which doesn't fit in the order very well, which is probably why I thought of it, or an individual person, or anything, is not understood by what makes it a unique individual. That's the accidental stuff; that doesn't matter. It's understood by the definition it shares with all others. 
And that is a really good way of making sense of a wildly particular world, and our brains are up to that task. I know this because we keep doing it, and, you know, we argue about the order, but we have assumed, all the way up at least through the 18th century, that there was this order. And if there wasn't, then there was nothing to know. What good is it to be the rational animal if there's literally nothing to know, if it's just a wild pile of particulars? So that's the context, wildly oversimplified of course, into which many things have emerged. One of those things, and in some ways I think culturally the most decisive one, which is a stupid thing to say because, well, how am I measuring that, but in any case I'll stick with it for now, has been the Internet. The Internet has shown that that order has many uses and expresses much truth, but particulars are really, really worth our attention, and they escape those categories. I'm sorry, I'm almost done. Actually, I think I'm almost done. And it turns out that the order we have devised in the past in the West mirrors, and I think not in an accidental way, the sort of order that we imposed when we were organizing physical objects. You put all the apples in one bin, the apple bin. If it's a library, which is a really good example, and I used to work in libraries a few years ago, you have shelves, and you get to put a book in one spot on one shelf, so you have to decide: okay, what is the major subject this book is about? Now, in the library world a book can have more than one topic, which is really great, but you have to put it on one shelf. So if it's, you know, a history of military music or military cooking or whatever, you have to decide: does it go on the military shelf, the music shelf, the history shelf? You don't get a choice. 
And so our philosophical, our conceptual way of organizing things mirrors that limitation, which is terrible: it strips meaning out of things. The whole hunt for essence strips meaning out of things. With digitization, and with the connection of digitized things via the Internet, we no longer have to put things in single categories. And you get this riot of meaning and riot of possibility. And riot is in many ways not a metaphor. So. 

Steven Parton [00:14:11] Yeah, well, I mean, what I'm hearing is that we pretty much had to be overtly specific, very reductionist-minded, very prescriptive, so that we could navigate the chaos of the world as we got to this point. And maybe we needed to do that to get to where we are. But now that we are at this point, we're starting to realize that we're coming up against what, I don't know, chaos theory, or the riot that you're talking about. Would that be accurate to say? Like, now that computer modeling is advancing to where it is, and the Internet is showing us what it is, we're realizing that that very overt specificity needs to start embracing some kind of chaos theory. 

David Weinberger [00:14:54] Yes, I think that's exactly right. I mean, we have been on the Internet unconstrained, by and large. And in some cases, besides the horrible cases of not being constrained, we have been pretty literally unconstrained in terms of organizing things into multiple categories. This is why tagging in the mid-2000s was such a big deal. Tagging is an activity by which you take a link and you share it with others, but you provide tags, as many tags as you want, and there's no particular taxonomy: you can tag according to what matters to you, but maybe nobody else, or you can tag so it will be found by others. We don't think about tagging as a separate activity anymore, but tagging, and the idea behind tagging, shows up everywhere. We went from carefully curating collections of what we think is worthy, as a library must because of physical constraints, as the spice rack in your kitchen does because of economic but also physical constraints, there's just so much room, and then we lost all of those constraints. We discovered really pretty early on in the Internet that we could put up as many things as we want and not have to engage in this reductive activity of pulling things back. We don't even have to tag them; sometimes we do, but we don't have to. And the technology developing at that time was able to find them no matter how we were looking for them. And that technology has gotten more and more powerful. What we take as basic search technology is wildly improbable even from the standpoint of 20 years ago. We don't have to organize things in order to find them. And so, rather than filtering on the way in, which is what we used to do in order to keep the collection manageable, and which required anticipating what people other than us would want, it turns out we don't have to do that. In fact, we can't do that. 
So we put up everything. It is easier and less expensive to put up everything than it is to organize it and filter it. And then we give people tools for filtering on the way out. And so, not just in this one type of use case of posting collections, we don't have to try to anticipate what our users, what the world, is going to find interesting, which is essentially unpredictable. I mean, some things are predictable, but overall it's not. I'll give you a quick example, and this goes back to 1963, so it's very much pre-Internet. Lee Harvey Oswald bought a gun from the Sears catalog. Sears was Amazon, except not online, of course. And he killed President Kennedy. Nobody beforehand could have known the significance of that one ad in the gigantic Sears catalog of the time. You could not anticipate that it would be important and worth preserving. Turns out it really, really was. And we see this now all the time on the Internet. We can't tell what's worth preserving, but we're preserving everything, with "everything" being an overstatement. We don't filter on the way in. And this helps break apart the notion that the way to succeed in a task, and I don't just mean economically, is to anticipate what's going to happen and what you're going to need, to prepare for it, and then hope that the thing you anticipated is the thing that happens. That has served us since Paleolithic times, literally since Paleolithic times, and it's still an important part of our strategies now, of course. But it turns out the Internet shows us over and over and over again that we cannot predict what we're going to want, what other people are going to want, what's going to be important, how things are going to be put together. And thus we cannot predict what we should put up, how we should categorize it, how we should connect it, all that sort of stuff. 
We get a much richer world if we don't do that, if we hold back from anticipating. But a world in which we're not anticipating is a fundamentally different world. 
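The "filter on the way out" idea David describes can be sketched in a few lines of code. This is a toy illustration, not anything from the book: items go into the collection with free-form tags and no fixed taxonomy, and filtering happens only at query time.

```python
# A toy sketch of "filter on the way out": put everything up, tag it (or
# don't) with any free-form tags, and let readers filter at query time.

items = []  # the unfiltered collection: everything goes in

def post(title, tags=()):
    """Add an item with any number of free-form tags; no fixed taxonomy."""
    items.append({"title": title, "tags": set(tags)})

def find(*wanted):
    """Filter on the way out: return titles matching all requested tags."""
    return [i["title"] for i in items if set(wanted) <= i["tags"]]

post("History of military cooking", ["history", "military", "food"])
post("Sourdough starter notes", ["food", "howto"])
post("Untagged home video")  # nothing forces us to categorize at all

print(find("food"))              # items tagged food, in posting order
print(find("military", "food"))  # items in both categories at once
```

Unlike a book on a shelf, the same item lives in as many "categories" as it has tags, and an untagged item still sits in the collection waiting to be found some other way.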

Steven Parton [00:19:41] And in that sense, would you say that you're advocating, or you support, the idea of just basically collecting as much data, within reason, as possible? The issue that I'm hearing is that maybe we thought something was going to happen, so we only collected data on that one thing, but that kind of became a self-confirming bias. Maybe what we could do instead is collect all the data that seems possible and then let it tell us what the right approach is. 

David Weinberger [00:20:08] So, yes to everything, except I'm concerned about the word data. 

Steven Parton [00:20:14] That's fair enough. Fair enough. 

David Weinberger [00:20:16] Because there are implications there: privacy and surveillance and so many bad uses of data. So take away "data" and replace it with "stuff," the sort of thing that we put up on the Internet, I think. 

Steven Parton [00:20:33] The ethics of data collection, let's call it that. 

David Weinberger [00:20:35] Yeah, but it turns out that there's certainly unethical data collection, and it's easy to think of examples of that. But the issue is that when you bring lots of data together, data that by itself is ethical can quickly become unethical; people can do bad things with it. So I want to put that in a little box for the moment, in part because of that problem, but also because in many ways, philosophically, I'm a phenomenologist. I'm more interested in how things appear in our experience. And our experience on the Internet is not primarily of data. We're not staring at spreadsheets or columns of numbers; we're staring at videos, posts, things that people compose and create. They already have human meaning. Well, so does data, but you know what I mean. And there's terrible, terrible stuff, and what I just said about data also applies to the stuff on the Internet: you can derive information from it that we don't want people to have, and it can be used for evil, for doxxing people, for example. Nevertheless, we are in a world in which every day we directly see the value of not filtering on the way in, of enabling semi-permanent public connections, links among things, of allowing two-way, multi-way participation, not simply publish-and-broadcast. And we see companies as well as individuals succeeding by following a path of what I would call unanticipation, a word that is really not going to catch on, but I think it's pretty accurate. Can I give you some examples of that? Yeah, please do. And then you have machine learning, which I think is giving us a model for understanding the chaos, in the chaos-theory sense, that is the Internet. 

Steven Parton [00:22:50] Yeah. I mean, please build on that. I would love to hear how we're taking advantage of that chaos. Like, what are some of the tools or ways that we're using it to our benefit? 

David Weinberger [00:23:03] Much of what many of us like about the Internet, and I have no idea how many, I assume everyone, because I assume everyone's like me, possibly in error. But, you know, I think many of us use the Internet, we do stuff on it, we rely upon it for work, etc., etc. But I think many of us also just sort of like hanging out there. And in part, I assume, that's because we don't know what's going to happen. We don't know, at the end of the day, what our browser history is going to show us, or even how we got there. How did I end up here? I'm not even interested in the types of grass used on a golf course; I don't play golf. How did I... oh, I remember how it was. And then maybe you can track your way back: it had something to do with climate change or something. You don't know where you're going to end up, and that is a good thing. I mean, sometimes you fritter away time, but that's usually okay too. You're exploring. You never know what you're going to be curious about; you can't tell what you're going to be interested in. Interests are not things you simply have; they're the ways the world snares you, catches you. You can't know everything you're interested in. So this opens things up; it's part of the delight. But it relies upon an environment that permits that, that isn't telling us: oh, here are the boxes, you're interested in climate change and lawns, here's the box for that, and you'd better stay in it. It really isn't like that. In terms of unanticipation, I do want to say something about that. I think we see that as the fundamental, well, not quite the value of the Internet, but the liberating value of the Internet: that it is, maybe by mistake, designed to enable us to thrive in an unplanned environment, unplanned and multiply organized, simultaneously organized in multiple different ways. 
Because an object within the little web that I traverse is linked one way for me; for you, it's linked out in different ways. So it's deeply multi-dimensional in that sense. In business, there's the minimum viable product idea from the early 2000s, which has become a mainstay of how startups and others start up, where you launch a product with the minimum set of features that you think will appeal to people, as opposed to the Henry Ford approach when designing the Model T: spending eight months in a lab with a handful of engineers and absolutely nailing what the market wanted, with virtually no changes in it for 19 years. Fifteen million cars sold. We don't do that anymore. Generally we are agile and light. Take Dropbox: okay, what matters? The key feature is keeping a backup copy without anybody even noticing it, it's so flawless. And it did that, and it worked. And then, once you've done that, you see what people want, how they're using it, what they're not using; you talk with them, or listen to what they're saying to one another, about other features. And you end up eventually with a full-blown product. But that's a way of succeeding by not anticipating, by holding back as much as you can from anticipating. And once you start looking for things on the Internet that way, you will find tons and tons of them. Another quick example is the rise of APIs, application programming interfaces, which allow any developer in the world to make use of the services that your technology provides, often for free, either to tweak the user experience, because their users want something different, or to integrate that product into some other product, which may be a big important one or could be just a homegrown one for them. And that's okay. 
The creators of the API, the company, say Slack, which has a great API, or Dropbox as well, explicitly say: look, we cannot anticipate all the ways people will use this tool, so we're going to give you a gate by which you can get at the functionality that you need and build your own thing with it, make your own experience with it. Mods on games: this goes way back to the early eighties, when people were changing, and I did this myself, changing the graphics in Wolfenstein to something else. That's a really minimal case, but fundamentally it's having open access to the game from the game maker, allowing you to make your own levels, your own characters, changing the rules, changing the visuals, the graphics, etc. That's generally just win-win for game makers, because people unleash value in it, and they get a game that they want to play even if nobody else wants to. So this holding back from anticipating is actually pretty characteristic of the Internet. 
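The point about APIs and game mods, that the maker exposes functionality rather than anticipating every use, can be sketched as a hook system. This is a made-up toy, not Slack's or Dropbox's actual API: the "game" publishes a couple of extension points, and a modder plugs in graphics and behavior the maker never predicted.

```python
# A toy extension-point sketch (invented for illustration, not a real API):
# the maker exposes hooks; third parties attach unanticipated behavior.

class Game:
    def __init__(self):
        self.sprites = {"guard": "soldier.png"}  # defaults shipped by the maker
        self._score_hooks = []                   # maker doesn't know who listens

    def register_score_hook(self, fn):
        """Public extension point: called whenever the player scores."""
        self._score_hooks.append(fn)

    def score(self, points):
        for fn in self._score_hooks:             # unanticipated uses run here
            fn(points)

game = Game()

# A Wolfenstein-style graphics mod: swap an asset the maker made swappable.
game.sprites["guard"] = "smurf.png"

# A homegrown add-on: keep a running total the base game never tracked.
totals = []
game.register_score_hook(lambda points: totals.append(points))
game.score(10)
game.score(25)
```

The design choice is the whole point: the maker's only job was to leave the gate open, and everything attached to it came from someone else's unanticipated needs.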

Steven Parton [00:28:26] And it seems like these APIs, these mods, and what you're talking about with the Internet here, these are all little portals to the chaos that just let more chaos flood into the system, to see what individuals or the masses can manifest with it. 

David Weinberger [00:28:41] Yes, chaos in particular in the sense that you're using it, which is not the one in which people are running wild in the streets, which can be horrible, but in the sense of chaos theory. I'm going to do a bad job of explaining chaos theory. But: there's tons of stuff, and the stuff has many, many, many interrelationships, and there are so many dependencies among these relationships that it can be difficult or impossible to predict what's going to happen. The well-known butterfly effect is an example of this, where a little movement by a butterfly in theory causes a tornado thousands of miles away. The idea is that, yes, it's a very small motion, but because the web of things is attached in so many ways, so tightly, it can pick up energy along the way and have a much larger effect. On the Web, one example of that is a viral video, which nobody predicts. Everybody wants their stuff to go viral; very little of it does; nobody knows what makes something go viral. But some does, and you end up with, I don't know how many, hundreds of millions of ice bucket challenges five or six years ago, which is a crazy idea, sort of a dumb idea: let's pour a bucket of ice over our heads and raise money for charity. A good charity. It raised, I think, $150 million or something. I never get a fact right; this is a general disclaimer. 

Steven Parton [00:30:24] Ballpark's good enough. 

David Weinberger [00:30:25] And you know that at every other charity and commercial enterprise there is some dumbass CEO calling in the marketing chief and demanding: I want one of those. See what happened? They put ice over their heads. Give me one of those. We need that, for whatever. And this marketing manager is undoubtedly smart enough to try to explain, or at least point out, that the thing about viral videos is nobody knows why they happen. So: we'll make it, but I can't guarantee you that it's going to go viral. And the boss... 

Steven Parton [00:30:59] The boss is kind of in the old mindset of predictive, prescriptive reductionism. 

David Weinberger [00:31:03] Yes. Yeah. And they have the Internet confused with a broadcast medium or something. Although this has been true since before the Internet was a thing: when Las Vegas came up with "What happens in Vegas stays in Vegas," or the famous "I Heart New York," a hugely important symbol, I am certain the same thing happened. And the whole point is, you're pointing at that example because it's an exception; so few survive like that. It's always been a problem. Now it's worse, because of this idea that we can control and manage. Another facet of chaos theory is that the initial state of the system is very important to what happens when there's an event. Go back to the morning: you open up your browser, if you're old, or the apps on your phone if you're not, and you don't know where you're going to end up. Well, that's because the initial state is so complex. It's different every day: different links, different emails and messages and all that sort of stuff. It's intensely complex, and that's part of it being a chaotic environment: you cannot predict where you're going to be, you know, two hours later. 
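The sensitivity to initial conditions David is describing is easy to demonstrate with the logistic map, a standard textbook example of chaos (my example, not his): two starting states that differ by one part in a billion quickly end up nowhere near each other.

```python
# The logistic map x -> r*x*(1-x) with r = 4 is chaotic: nearby starting
# points diverge exponentially, the butterfly effect in one line of math.

def divergence(x0, y0, r=4.0, steps=60):
    """Largest gap seen between two trajectories started at x0 and y0."""
    x, y, gap = x0, y0, 0.0
    for _ in range(steps):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        gap = max(gap, abs(x - y))
    return gap

# Initial states differ by 0.000000001, yet the trajectories fully separate.
print(divergence(0.2, 0.200000001))
```

The tiny initial gap roughly doubles each step, so within a few dozen iterations the two trajectories are as far apart as any two random states, which is why knowing today's state only approximately tells you almost nothing about where you'll be "two hours later."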

Steven Parton [00:32:29] This is starting to make me think of, and maybe this is a bit of a heavy-handed term, but self-actualization. I didn't connect this with your work when I first read it, but I'm liking this idea that as we embrace chaos, what we're really doing is giving ourselves more possibilities, more ways in which we might find that thing that speaks to us, something that's engaging, something that gets us passionate, and then that pulls us in that direction and we create the best version of ourselves, or the best product, or whatever, because now we're operating in a very natural way, rather than trying to force ourselves into paths that aren't good fits for us. Does that resonate with what you're saying? 

David Weinberger [00:33:13] It does, at least in part because I think one of the takeaways from the Internet is trying to. So we have traditionally that not to some extent so in the recent past anyway and I would say in the past Enlightenment, past 400 years, we've thought about the future as a series of patterns heading out into the broader landscape. The future consists of lots and lots of possibilities, but as the future comes closer, that is as we move towards it more and more, as more and more of those possibilities fall away and you're left with just one, which is becomes reality is the actual. And so our task we have thought to succeed has been, whether in business or anything else, has been to anticipate what is the best path for us, and then do everything we can to narrow it down the future down to that one, surviving one. And in business it's more explicit than it is in your personal life, where we feel less control, I think understandably. And that model is. Assumes a type of predictive capability that we don't have never had. And it gets confirmed by our congratulating or self congratulating those who make it through as a proof point, when in fact their negative proof points, the fact that they are the exceptions who succeeded. Meanwhile, that's not really not a great model because it assumes something that we can't assume. There's a tagline in Everyday Chaos that has not been picked up by anybody. So at the end of the section about the Internet and how it has us thinking about the future, unanticipated and that sort of thing, is to point out that the different ways in which we can see on anticipation happening on the Internet and I've pointed to just in this conversation to minimum viable products and to APIs, but there's a bunch of other stuff. 
Also open access and open source, where you put things out, you don't restrict them, you don't know how they're going to be used. It's a different sort of imperative for the future. Rather than, okay, let's narrow it down to the possibility we want, people are literally succeeding, personally and in business, by trying to make more possibilities. And so that's the tagline: make more future. It was supposed to be the tagline, and it's shorthand for make more possibilities, where possibilities are not figments, not mere logical possibilities. It turns out I could be on Mars right now eating an apple and just dreaming this; that's a baloney possibility. These are possibilities that people can actually take up and do things with, as with an API, where they can also have services available so that more things are possible. We all have access to tons more research and information, and that makes more things actually possible. 

Steven Parton [00:36:33] And there's a counterintuitive essence in here too, right? In the sense that part of the point of making more possibilities is so that you can use things like A/B testing or APIs or machine learning to home in on the best possible version of those possibilities? 

David Weinberger [00:36:53] Yes. Yeah, that's a very good point, and one that I do not make. I think we'll probably talk about A/B testing in a moment, so let's come back to that, because I think it's a really interesting point. So, I got interested in machine learning for a few reasons, some of them not particularly interesting. I was co-directing the Harvard Library Innovation Lab, which deals with tons and tons of data; Harvard has one of the largest book and bibliographic collections, so there's lots and lots of data and so many interesting things you could do with it. But I got interested in it primarily because, at least in some cases, we don't know how it comes up with its results. We can't figure it out. It's just beyond us how it's doing that. But we have evidence that those results are useful; they're right in some sense. And that makes one think: we're using it in cases where plain old computing isn't good enough, so maybe it's capturing something about the world that plain old computing doesn't. Plain old computing works on models of how a domain works. A spreadsheet is a good example. A spreadsheet is a type of programming, a very easy form of it. In a spreadsheet, say you build one for your business, you can see what effects shifting costs will have; you get to ask, what if? You make a model in which you figure out what the relevant factors are, and those become the columns: number of salespeople, quarterly revenues, et cetera. Generally not the license plates of your employees; that might be in an HR database. You put in the factors that you think are going to have an effect, and then you put in the formulas that connect them. And that's essentially what programming overall does. 
You figure out what the rules are, what data matters, and what the connections between them are. And because it's being done by a human mind, a spreadsheet can be very complex, but it's still relatively simple; you can, I think, always figure out what the connections are and how they're working. I'm sure some of your listeners can think of a spreadsheet where that's not possible, but that's very unusual. With machine learning, you're just giving it numbers. The numbers are in buckets, the same sorts of buckets you would use in the spreadsheet. Let me switch to a health metaphor: health care. If it's hospital records, then out of the 100,000 or million or however many medical records you have, one bucket will maybe be gender, another will be the weight of the people, heartbeat, et cetera, tons and tons of these metrics. The machine learning system knows there's a bucket here and a bucket there, but it has no idea what the connection between them is, even when we humans know, or at least think we know, what the connection is. We think there's some connection between lung capacity and COVID or pneumonia, lung disease. But we don't tell it that. Between fever and certain types of illness: it has the fever numbers, it maybe has a list of the diseases as data, but it doesn't know that there's a connection. Which is crazy, because your intuition would be, no, make it smarter, at least tell it what we know. There are arguments about that now, by the way, and I think reasonable arguments. 
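The contrast David draws here, between a spreadsheet-style model where a human writes the formula and a learner that is only handed unlabeled columns of numbers, can be sketched in a few lines. The formula, the data, and all the numbers below are invented for illustration; they are not from the conversation or from any real medical records.

```python
from statistics import mean

# Spreadsheet-style model: a human decides that fever predicts risk
# and writes the rule down explicitly (hand-chosen, hypothetical formula).
def explicit_risk(fever_c: float) -> float:
    return 0.1 + 0.2 * max(0.0, fever_c - 37.0)

# Learner-style approach: given two "buckets" of numbers with no stated
# connection, estimate the linear relation between them from the data alone.
def fit_line(xs, ys):
    mx, my = mean(xs), mean(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx  # (slope, intercept)

fevers = [37.0, 38.0, 39.0, 40.0]   # toy observations
risks  = [0.10, 0.30, 0.50, 0.70]   # toy observations
slope, intercept = fit_line(fevers, risks)
```

The point of the toy: the second function was never told that fevers and risks are connected, yet it recovers the same 0.2-per-degree relation that the human baked into the first one, which is the "crazy" move David describes.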
But if you don't do that, you can include information that you don't think has anything to do with it, because maybe our model of how this disease works, or of the causes of cancer, is incomplete. We've been shown repeatedly that we don't know what all the causes of cancer are; we have to keep expanding our model. So you give it data that may not seem to be connected, but maybe it is in the world, and not necessarily even causally. It can be probabilistic; it can just be secondary causes. And some of the correlations are going to be bogus. There's a famous one: the number of deaths in swimming pools charts very closely to the number of movies that Nicolas Cage makes in a year. It didn't hold up over time, but it was quite accurate year after year. That's bogus; that's correlation without meaning. But some correlations have meaning, even if they're not causal. The system will find probabilistic correlations that let it predict with some degree of reliability, and it tells us what it thinks that reliability is, the relation among things. And so the thing that was, and still is, deeply interesting to me is that this may be a better representation of the world than the programmatic one, where we're stuck with how we humans think things go together. Now, machine learning is also limited by lots of human things, like what data we think is worth collecting, which is an intensely human, often implicit decision that can exhibit great bias and do harm to groups of people. But it's less restricted than "okay, I know how this works." Sometimes we do. Sometimes we don't. 

Steven Parton [00:43:54] Well, it seems like machine learning helps us get outside of the biases that we know definitely exist for humans. We have certain associative expectations that might make us blind to something we completely overlook. But then the issue becomes that to get that outside, non-human perspective, we have to hand things over to a black box, the famous AI black box, where, like you said, we don't understand, we don't know how the calculations are being made; it's just finding ways to figure out connections between data. Do you think that we're ready to start taking guidance from a black box? Do you think we'll keep accepting this notion that we don't know how it works, but it works, so let's just move forward with it? 

David Weinberger [00:44:44] Yeah, yes. And we already do, and we frequently do, from boxes that may not be fully black, maybe not fully uninterpretable, which I think is the technical term; most of us would call it inexplicable. The difference between explainability and interpretability doesn't matter for now anyway. So we do already. I accept routing from Google Maps having no idea how it's putting it together, because generally it works. Sometimes it fails spectacularly, and I do wonder how the hell it sent me miles out of my way, and I'm curious about it, but still, it works enough of the time that I rely upon it. Weather forecasts may or may not come from an explicable algorithm; it may be a black box. Don't care. It works. Weather forecasts have gotten way more accurate. Not perfect, but way more accurate than they were. If we get used to getting medical diagnoses that are more accurate, that look further ahead and give greater early warning, and that show themselves empirically to do so, well, suppose my doctor says, okay, I think you ought to do something not too drastic, but you really ought to become a vegetarian. Which, I already am one, so it's easy for me, but that's a pretty big lifestyle change, and say I object to it. I say, why? The doctor says, well, the AI says that eating meat increases by 32% the chance you're going to come down with this or that cancer. And I say, well, why? That doesn't make any sense. And the doctor says, we don't know, but we've been doing this for five years, and it's about a 32% increase from eating meat, so it's up to you. I just thought I'd stop eating meat. 

Steven Parton [00:46:52] I'd hedge my bets. Yeah. 

David Weinberger [00:46:54] Yeah. Maybe you don't want to, you know, but people do that. And the second thing is, we do this all the time, and have forever, for non-AI, human applications: we don't understand why, but it works, so we do it. We didn't know why aspirin worked until something like 1950, and it had been used since the Egyptians. There's a long history of it being used without understanding how it works. Nobody understood how it worked; they do now, but we didn't for a long time. But it worked. I take medicine now, and I have no idea how it works. Somebody does, probably, but human metabolism is insanely complex. There's an argument that it's the most complex thing in the universe, because it includes the brain, which is also arguably the second most complex thing in the universe. So we're used to black boxes. Or I'll give you another example. If you don't get a job that you applied for, or you don't get into a college that you applied for and you think you really should have, or if a judge sentences you and you think, well, that's crazy, that's unfair, there may or may not be an appeal process, but the one thing the judge is not going to tell you is why they gave you that sentence. And if you say, why did Joe Schmo next to me in high school get into this college and I didn't, you can't get an answer from the college either. They have some stock answer, but they're never going to tell you. And if the college did a review because they were so upset by this, which they're not and they wouldn't, what it would eventually come down to is some person who read through your materials in a hurry, won't remember them even on rereading, and will say, oh, I think I must have been thinking that we didn't need another hockey player, or whatever. This is a black box. It's one of the things that machine learning teaches us. 
Now, before anybody who cares about this correctly jumps down my throat, I need to say: there is a tremendous amount of work being done to try to make systems more explicable. And there are ways of evaluating the fairness of systems, and even of finding how they went wrong, without knowing exactly how they work; that's the second point. It's really important work, and a lot of progress is being made there. But there's no guarantee, that I know of, that we are going to always be able to discover how a system made its judgment. There's argument about what I'm about to say as well, but it seems a very plausible possibility that AI will continue to get more and more complex, outpacing the tools that we put in place to make it understandable. There's a very long conversation there; maybe regulation says we don't want that, or maybe there are approaches that will always give us something of what we need, because explanations are tools. You don't have to understand everything, and this was true before machine learning as well; we use tools to do things. We may be able to get enough tools to lower the panic about machine learning working in ways that we can't understand. My hope, and I don't want to call it more than a hope, is that what we take away from our encounter with machine learning is the notion that the world is really, really complex, and that we should always continue to support science in its quest. Much, but not all, of science is about discovering universal laws that apply to every situation, or domain-specific laws. But we should never again let that blind us to the equivalent importance of the complexity of individuals, which we hid from for thousands of years because we were not equipped to deal with it. We couldn't possibly. We don't know where confetti is going to fall. Fortunately, who cares? We just can't do it, so who cares? 
We can't predict outbreaks of diseases, or who's going to get COVID. Even right now, people who are fully vaxxed and careful are getting it, and they don't know how. We don't know how. It's unpredictable, and we're okay with that, because there are too many particulars, like who you passed on a subway. 

Steven Parton [00:51:38] Right. 

David Weinberger [00:51:39] And so forth. But we're also okay with it because we don't have an alternative. And now, I hope, machine learning is getting us to recognize that we live in a kingdom of accidents, a densely interconnected, chaotic system in which many things are unpredictable and many are predictable only probabilistically. There are general rules that apply in many realms, and apply better in some than in others. But essentially, life, our world, the universe, is a series of overwhelming particularities, all affecting every other piece, literally every other piece, all at once, forever. 

Steven Parton [00:52:19] Yeah. And we should embrace that. 

David Weinberger [00:52:21] As we should, because it's real. That's the truth. Yeah. And we can see it now that we have a technology that takes advantage of that to some extent and reveals it to us. 

Steven Parton [00:52:32] Yeah, I love that optimistic note, David, and I know we're coming up on time, so I'll just segue here to the last question, really, which is: do you have any closing thoughts? Anything you want to talk about that we haven't touched on? Where do you think this is going? Anything at all. The floor is yours. 

David Weinberger [00:52:50] So on the one hand, I obviously cannot now say, "and I know where it is going," because I don't think anyone knows. I have a hope for how it's going, but only in the sense of what we may be learning from this encounter. And most of it has to do with accepting human limits, which I think is a very healthy thing for us to do. I would like to believe, though I have little confidence, that our encounter with machine learning, which is teaching us about the importance of particulars as opposed to looking only at the general as a source of truth, will teach us about the importance of differences, because particulars are particulars only because they are different from other things, and the search for generalizations looks beyond those differences. If we recognize that we're in a world of particulars, then we recognize that we're in a world of differences. And I would like to believe that that will extend to the political and social realm, where we become more engaged with and appreciative of the differences among us. But that's a big leap, and I don't know. Right now I generally refer to myself, or think of myself, as a depressed optimist, and I'm like a clinically depressed optimist at this point. This is, you know, a really tough and critical time for us. 

Steven Parton [00:54:24] Yeah. 

David Weinberger [00:54:27] So I don't know. I don't know what will happen. I am hopeful that there are lessons, including about the nature of being moral and good, that we will take from our encounter with machine learning, whether we want to encounter it or not. We'll see. Or maybe we won't, you know. 

Steven Parton [00:54:51] Fair enough. David, I appreciate your time. And this was some nice optimism, you know, and like you said, in an otherwise not so optimistic circumstance. So I appreciate your time. 

David Weinberger [00:55:02] Yeah, well, good. I figure that pessimism doesn't need another voice. I mean, that's really what it comes down to. 

Steven Parton [00:55:09] Yeah. All right. Well, thank you so much. 

David Weinberger [00:55:12] Thank you. Great talking with you.