This week our guest is author David Auerbach, who was a software engineer at Microsoft and Google during their rise to become the dominant companies they are today.
In this episode, David and I discuss his latest publication, Meganets: How Digital Forces Beyond Our Control Commandeer Our Daily Lives and Inner Realities. David explains how, despite what many tend to think, the realm of digital technology we all occupy is beyond the control of any of us, even the major companies who are creating the technology. He suggests that, similar to the weather, it has become a complex system that is difficult if not impossible to predict and control. This takes us on a tour of the many consequences and benefits of this paradigm, including the loss of individuality, the impacts of ChatGPT, the loss of a shared reality, regulatory possibilities, and more.
Learn more about Singularity: su.org
Music by: Amine el Filali
David Auerbach [00:00:01] Until we start treating these mechanisms in, I guess, a more ecological way, treating them more like, you know, economies or like weather systems, we aren't going to be able to figure out how to make interventions that are actually effective. Instead we're just doing things that either don't work or annoy people, or both. Usually both.
Steven Parton [00:00:42] Hello, everyone. My name is Steven Parton and you're listening to the Feedback Loop by Singularity. This week our guest is author David Auerbach, who was a software engineer at Microsoft and Google during their rise to become the dominant companies that they are today. In this episode, David and I discuss his latest publication, Meganets: How Digital Forces Beyond Our Control Commandeer Our Daily Lives and Inner Realities. David explains how, despite what many tend to think, the realm of digital technology we all occupy is beyond the control of any of us, even the major companies who are creating the technology. He suggests it's become a complex system that is difficult, if not impossible, to predict and therefore impossible to control, similar to the weather. This takes us on a tour of the many consequences and benefits of this paradigm, including the loss of individuality, the impacts of ChatGPT, the loss of a shared reality, regulatory possibilities, and much more. So without further ado, everyone, please welcome to the Feedback Loop, David Auerbach. All right. Well, I'm going to start with the obvious question here then and just ask you what a meganet is. And perhaps more importantly, why did you decide to dedicate so much of your time and energy as a thinker to write about the concept?
David Auerbach [00:02:11] Yeah, and I think just as easily you could ask, why invent a new word? Because there are so many new words floating around out there. Why did I think that there needed to be a new one? And I'd say that the key aspect of it is that I didn't think we had a word that really reflected the equal importance, not just of the large-scale networks we have, but also of the constant effect that the hundreds of millions of users of those networks have in shaping the behavior, weights, whatever, of those networks. Compared to when I worked at Microsoft and Google, when we felt that we had more or less total control over our systems, I don't know that we predicted it, and yet it's undeniable that so much of what tech companies confront these days are systems that are running of their own accord. Not because, you know, bad actors are messing with them, but just because these systems so rely on the data that users put into them on a near-constant basis, data that alters the, whether you want to call it weights or algorithms or training data, such that this voluminous mountain of data is itself sort of a semi-autonomous entity that you just don't have control over, as you would traditional algorithms or traditional software, where it's like, okay, we give you what we want and you like it and you complain about it and we fix it, you know, piece by piece. So the word takes into account the core notion of feedback, which I inherited from the cyberneticists. I don't talk about it explicitly in the book, but it is very much influenced by Norbert Wiener and Ross Ashby and the idea that these systems are constantly responding, constantly engaging in sort of iterative self-modifications that the designers and administrators of the systems only have, you know, indirect and partial and coarse-grained control over.
And the reason why I thought this was so important was because it seemed like so many of the debates at all levels really were oriented around an increasingly archaic conception of technology as being something that could be controlled at a fine-grained level, even within tech companies. I think that, you know, executives really preferred to think that, yes, they could do whatever they want, even if they were choosing not to, when in actuality these systems are considerably more out of our control than I think anyone would really like to think. And until we start treating these mechanisms in, I guess, a more ecological way, treating them more like, you know, economies or like weather systems, we aren't going to be able to figure out how to make interventions that are actually effective. Instead we're just doing things that either don't work or annoy people, or both. Usually both.
Steven Parton [00:06:02] Yeah. Would you say the big issue here is that you have two forces at play? You have the fact that once you put it all out into the world, you have 8 billion people who can do whatever they want with it, and it's really hard to predict what all those people are going to do with it. And then on the other end, you have these automated tools that, once you create them, the algorithms are kind of running without supervision in a sense, and that allows for this dynamic between the two that's very unpredictable. Is that kind of at the core?
David Auerbach [00:06:34] It's both of those, and the fact that they're necessary. To build these systems, there's no possibility for any sort of human administration to take place at that scale. They've just gotten too big. So it's not as though we've made this choice to outsource it to machines. It's that even if you're bringing in human moderators, human reviewers, which companies are doing, there's no way you're going to get through all the content. You're seeing this, obviously, with the problems Facebook faces, but you're seeing this with AI now, too. The AI companies are pretty secretive about what their training data is, and I think it's because if they release it, the question is going to be, who vets this? Well, at a very coarse-grained level, you're not putting in blatantly ridiculous sources. But when they say, oh, we trained it on Wikipedia, did someone go through Wikipedia and say, oh, that's wrong there? No one's going to claim that Wikipedia is entirely accurate, right? So are the AI companies going through and looking at it and saying, okay, we've isolated the good bits of Wikipedia and we're using them as training data? No, of course not. That's impossible. So that's what you're up against: you've gotten to a scale where, you know, who's big enough to watch the watchmen? Well, you can only build another system. So you get into the circular dependency where you're looking to AI to solve the problems that these meganets, and the AI versions of them, are creating, which is that you can't reliably administer them. You've set up the circular dependency there where it's like, okay, well, how are you going to make sure it's doing the right thing?
Steven Parton [00:08:23] Are there any gears or levers or dials that we can reliably point to, to give us some way to nudge the meganet? Like, are there any leverage points that we have a good grasp on?
David Auerbach [00:08:36] I mean, you can always make coarse-level changes, you know, and here are the examples I like to cite. TikTok, I think, relaxed their recommendation algorithms when they were having trouble with recommending too many pro-anorexia videos to teen girls. And that gets into the idea that engagement has become the sort of default goal. Tech companies don't really want to be in the business of telling people what they should be seeing, so the default answer is, well, we'll just give people what they want. As it turns out, giving people what they want is not always a good thing. But the thing is, you don't want to be prescriptive. So I think TikTok just sort of relaxed it and said, okay, we'll just be more heterogeneous. That sort of coarse-grained, non-targeted intervention I think is manageable, and that's what I recommend in the book, as far as, if you want to at least put some sand in the gears of these systems, to at least prevent them from spiraling out of control in such a fast-paced way. The other example: Facebook, in the run-up to the 2020 election, banned all political advertising. And I always cite that because it's like, okay, does this seem like the action of a company that was actually able to very carefully filter out what was true and what was misinformation? It doesn't seem like it. It seems like this was the action of a company that said, okay, forget it, we're just going to wash our hands of the whole thing. They also limited link forwarding in Messenger so you can only forward to five people at a time, which is interesting in itself, in that that's truly private discourse, and they're basically just saying, we're going to limit the fan-out here so that it can't spread to more than five people at a time.
And I'm not necessarily endorsing those specific measures, but I am saying that that's the sort of measure that you're actually going to get some traction with, as opposed to saying, oh, let's get rid of misinformation on this topic, or even, let's identify misinformation and put little tags on it. I feel those are pretty much nonstarters.
Steven Parton [00:10:47] Right. Do you think some of the issue comes from what I would call a perverse incentive, right? Of hijacking people's attention for engagement, for advertising. Do you think if we shifted to something like a subscriber model, or had different aims for these meganets in the beginning, that we could have the ways they manifest be more harmonious to the human condition, I guess?
David Auerbach [00:11:12] Wow, that's a great hypothetical, because, yeah, I mean, you're basically talking about going way back in time and then having a huge divergence. It's weird. I personally tend to be a little fatalistic about it. I bring this up when I talk about FarmVille, which, I don't know, you probably remember FarmVille. It was an addictive viral game where your crops die if you don't, like, beg your friends to play and all that. And the thing is, with these things, if Zynga hadn't invented them, someone else would have. That's not to remove responsibility from people. But there is a part of me that tends to lean towards thinking that, human nature being what it is and virality being what it is, the only way to have gotten onto a really different track would have been to administer an authoritarian degree of control that I couldn't imagine happening. And you can even say, I think information about China can be a little hard to find, but it doesn't really seem like China has created an environment that is all so different. They're also playing catch-up. They're building out similar systems, and they have an army of censors and monitors to try to find the people who are saying bad things, and they still can't keep on top of it either. So, that's not to answer your question definitively, but I do feel like in some ways the glut of information was just going to prove to be too much for us to handle, and it would evolve into something along the lines of the shape that we're in today. Just like, you know, I do think that the centralized mass media model, despite the continued prominence of a number of institutional outlets, you know, CNN or Fox or whoever, I do think it is still gradually losing traction.
And that will continue, just because of the nature of these things and the fact that you can get these spontaneously self-organizing groups of geographically disparate people. You're never going to be able to batch up attention into such centralized forms ever again, barring, you know, again, some hyper-authoritarian intervention.
Steven Parton [00:14:05] Does that worry you in terms of access to a consensus reality? You know, not having a common ground between people seems like a kind of big social problem for our society.
David Auerbach [00:14:19] I think we're already there. I think we've been going there, and I think it's only going to get worse. The better aspect will be, I think, that people will start just ignoring realities that are opposed to theirs rather than fighting with them, so things will actually seem peaceful. It's just that you won't even be aware of the fact that you're not even speaking the same language. I think one of the reasons for the development of highly jargonish languages within online subcultures is to make it very easy to identify and cluster with other people. And this is on all sides. This is, yeah, whether it's the terms of wokeness, but also there's a vocabulary that is very specific to right-wing circles, to, you know, singularity circles. There's an entire vocabulary I could use here that it would be pointless for me to use in other interviews. You know, I could talk about rule versus act utilitarianism on this show, and it is entirely possible that a good chunk of your audience actually will know what I mean by that. I don't know. But I wouldn't necessarily bring that up in a lot of the other interviews that I've done. And if you look 50 years ago, that definitely wasn't the case. There was much more of a shared vocabulary. So when you talk about reality, I would say, yeah, it's extending to the level of just the language being used. And Lord knows, just in doing interviews for this book, I've been amazed at just how much that's the case. It's interesting, because before we started, I was talking about bias, and that might have actually been the wrong word, if it gets down just to the level of vocabulary and perceptions of the world. And I think one of the things I did in this book was I did try to remove myself from any particular context as much as I could.
And I've been in probably a wider variety of them than a lot of people, because I was in academia, I was in, like, DC think tanks, I was at software companies. And yeah, people have very much created shared sets of assumptions that they never think to question. And the issue is, when the rubber hits the road, when society actually has to take an action that means different things to different people, what do you do about this? I think COVID was definitely one wake-up call on that. But I think it happens in a much subtler way too.
Steven Parton [00:17:01] So is that one of the negative consequences, then, of the meganets? Is it that they kind of build these isolating pockets by virtue of algorithmically pushing people into little echo chambers?
David Auerbach [00:17:17] The term I've been using is narrative bunkers, because it's not just echo chambers. You can have disagreements within them. It's just that the prioritizations and the shared assumptions vary so much. I think that a lot of the problem comes also from the jarring juxtaposition of scale. It used to be that you dealt with the wider world only intermittently every day. You know, you'd get the news, but everything else was local. That's not true anymore. So you're constantly being reminded of how big the world is and how connected you are to it, while having to ignore the fact that huge chunks of it have completely different assumptions about how it operates. And that sort of jarring dissonance, I do think, is producing a lot of the noise and confusion we see today. Because, on the one hand, you can't go completely fatalistic and say, oh, nothing is meaningful. Some people are doing that. On the other hand, obviously people are saying, well, okay, here's a person that did something I don't like, and they're responsible for what's wrong with the world. That doesn't work either. We evolved to perceive things in more or less small, up-front, personal terms. And, you know, mental plasticity is an amazing thing, and the human brain could work out a way to do it, but it's a big challenge, if only because our bias towards centering the unit of agency on the individual person is very strong. And I think that more and more, to the extent that it ever was true, seeing individuals as the movers and shakers in this world is also becoming a more and more antiquated idea, because individual agency, I think, does require some idea of a hierarchical society and elites.
And to the extent that that is getting leveled by these mechanisms, which I genuinely do believe it is, because the power is devolving, it becomes harder to say, oh, here's an influencer, at least an influencer past a certain point. There will still be more and less important people, but the critical mass required to really move something is going to require, I think, groups of people joining together. And I think that's why I point out that even someone like Elon Musk is as much someone who's following trends as instigating them, although such people prefer not to see it that way. But I think if you look at the two big personalities, look at Donald Trump or Elon Musk, I think you'll find that ironically they were more flexible in sort of what they would say, and that anyone who sticks to their guns on this or that tends to die out as their consensus, as their backing, as their group or whatever, sort of evolves and shifts.
Steven Parton [00:20:55] Yeah. I mean, I often think about the fact that hashtags are incredible in the sense that when a hashtag goes viral, you know what every human on the planet is basically going to be talking about that day. And it's quite remarkable that, at the same time, the whole species is focused on this one conversation, because that's what you're supposed to talk about that day. And I know in the past you've talked about, I think it was in Bitwise, the nuance being lost for the individual. So do you think people are becoming more standardized because of this? That when these meganets deliver us the daily trend, we all kind of coalesce or move towards that trend, and in that way we all kind of lose some of our individuality?
David Auerbach [00:21:41] Well, very categorized and regimented, in that you're now expected to list yourself as a bunch of labels, whether they're hashtags or identifiers. It's not that you're presenting yourself as yourself. You're presenting yourself with a laundry list of computer-friendly terms by which you can be slotted and sorted. And the natural tendency of these systems is to reinforce that. Well, if you see that I like jazz, you're not going to group me with people that like, you know, country music. Okay, so that's going to become more and more something that's sort of pasted onto my personality. Now, my sense is that that also led Facebook to decide that I was black. It did, for like several years. And when I was looking it over, I was like, oh, maybe it's because of all these jazz artists. But these things are projected onto us as much as we take them onto ourselves, and they do become self-reified. You have to actually actively work to reject them, and you don't even know when they've been assigned to you. And it's not as though we don't pre-judge others in day-to-day life. It's just that now it's being reified, built up, and prescribed algorithmically, nonstop. And yeah, I think it has a certain reinforcement mechanism, and it has a certain reductionistic aspect, whereby these labels... and we can take much more loaded labels. I tried to pick music as a fairly innocuous one, though people certainly find ways to argue about that.
But, you know, all you need to say is, are you gender critical or are you a trans rights activist? You have already said all you need to say on that issue. You've already limited your audience, and it gets to the point where it's like, well, what's the point of speaking beyond that? Well, a lot of it is to shore up one's own sense of group identity.
Steven Parton [00:24:19] What are the implications of this, I guess, in terms of power dynamics, then? Because I think of The Social Dilemma, you know, and they had this idea of the voodoo dolls in that documentary, that idea of voodoo dolls being built for each person based on these labels you're giving yourself. And if the meganets are automated and they have all this information about you, it seems like there is a gross imbalance in the amount of power that the people who have this data hold over the people who are being labeled.
David Auerbach [00:24:52] Yeah, well, I mean, the good thing is that there's so much data that they can't administer it on a person-by-person basis. There are cases, there's a case I talk about in China where a corrupt administrator labeled someone as, I think, under quarantine when they weren't. And that actually turned into a big scandal, and the person was, I think, disciplined and sent to jail or something. So there's big pushback on that. Ironically, I don't think that's power tech companies are particularly keen to have, because they want money. I genuinely don't think they actually want to have power over people. The difficulty is that this power sort of accumulates in clusters in an organic way, and they're watching people get stigmatized right out from under them. And, you know, it turns into chaos. So can you create systems in which that sort of stigmatizing and gang-like mentality is less likely? Well, I think if you take non-targeted steps to try to break up really ossified ideological affinities and clusters, you might be able to help that a bit. But there's no question that as the gaps between groups grow, they become less human. They become less human to you, and you care more about the people that you interact with and with whom you share a common vocabulary. And it does seem like the whole idea of an objective view, a view from nowhere, has become less popular. But I do wonder how much of it is that this is simply becoming hard to even imagine, because there's just so much data. You know, we're now producing more data every day than we did in the entire history of the world up to the year 2000.
Steven Parton [00:27:01] Yeah. Do you think that that is having some adverse impacts on us? Because I think of a concept in psychology called learned helplessness, where if you're constantly exposed to an adverse stimulus and you can't get away from it, you develop learned helplessness. And this is kind of like a gateway to psychopathology, like depression and complex PTSD. And it feels a little bit like this overwhelming amount of data that you're talking about here could kind of lead to that, where it's just like, there's so much change happening so quickly, there's so much data, and I feel helpless. I feel like I have no control over any of it. Do you worry about that?
David Auerbach [00:27:39] I think it's paradoxical, because I think there are self-correcting aspects to it. You find people who think the same way that you do and are very allied with you, and in some ways you can find a sense of belonging that is greater than what you would find in a friend group locally, because you can migrate towards people who really share a lot of interests with you. And I mean, like, you know, if I look at my Facebook wall, there are people who will get a certain frame of reference, and I just don't think I could find enough people like that in the immediate vicinity. So you have this increased, sort of hyper sense of belonging, fighting against the fact that you're more aware of your group being an island in a sea of mutual incomprehension. And that gets back to what I was saying about my thought that these little clusters will become more the units of agency themselves, and we'll have more mechanisms for groups of people to act as units. Because numbers count at this point. One voice in the wilderness, I think, matters less than it used to. To be contrarian, you need, you know, 50 contrarians instead of just one, or you need a bot army and need to engineer that or something, just to get enough volume and noise into the system to start making a difference. So I think what's happening is everything's sort of moving up a level: instead of the individual, you're having these subgroups, and they're going to start acting more as one, and you'll at least feel a great kinship with the other people in your group. So it's a weird combination, I think, of alienation on the one hand, but also incredible belongingness on the other.
And I think, yeah, well, there's pluses and minuses to it.
Steven Parton [00:29:48] I was going to say, in addition to maybe this ability to find affinity groups, what are some of the other pros? We've touched a lot here on the consequences, the negatives, but what are some of the other pros that you might see in the meganet culture and the meganet dynamic?
David Auerbach [00:30:05] Yeah, I mean, we talk about bad things spreading virally, but the ability for information sharing has tons of good effects on its own: the ability to build up, you know, collective knowledge in a more concrete way than we have before. You know, I don't think ChatGPT is conscious or anything, but just the mere fact that we've been able to dump a huge amount of human data into it and out comes something that can sort of speak with the default voice of humanity, that's a remarkable thing. And there are positive aspects of that, even if there are also negative and very conformist aspects to it as well. The book certainly focuses, I think, on the problems, but that's because I think it is presenting us many problems. At the same time, the ability to look up and synthesize huge amounts of data is definitely going to lead to knowledge work that has not been possible before. The question, I think, is whether it's going to get drowned out by all the garbage that's being created at the same time. You can pull back and look at it as, okay, we've effectively gone from informational and publication scarcity, in that basically there was a very tight limit on what information could be created, published, and distributed because of the sheer limits of physicality, to what we now have, which is more or less informational abundance, where the bar to publishing something and getting it out there, or at least making it available to anybody, is pretty much zero. What are the positive and negative effects of that? Because I think that's really the turning point. And the good is, well, everything is out there. The bad is that everything is out there and it's too much for us to deal with. So the problem has shifted.
So in effect, I think the whole landscape, the whole set of problems, has been shifted. You know, I'm old enough to remember a time when it very much felt like there was a monopoly on information. And for all the fights that we now have about how, oh, our viewpoint is being censored, I mean, it's not that there aren't grounds for complaint, but you still have much more opportunity to get out there and at least have it be available. Many of the complaints are like, well, okay, I'm not being ranked highly enough. Well, okay, at least you're still there. It's not that the complaint isn't legitimate, it's just that it's a very different landscape than when you literally could be shut out, when you could basically be made into nothing. And one of the things I point out: Substack, in its early days, the sort of elite consensus was that, oh, only apostates and right-wing wackos publish on Substack, you shouldn't go there. And that was gradually chipped away as more and more people went on Substack. And I will say, in publicity meetings for this book, I was a little surprised at some of the suggestions that were given to me that I'm very sure would never have been made to me five or ten years ago, as far as, oh, here's where you could promote your book. And this is, you know, an old-school New York publishing company. I'm not saying anything bad about it.
It's just, I think, a representative trend that any attempt to preserve a narrative monopoly, you can do it, but it's much harder now. And in some ways that's good. I think sometimes it's good that we don't have control; sometimes it's better for no one to have control than for anyone to have control. But it produces a certain amount of chaos, and it produces certain problems that we're having great trouble coming to grips with. Because it seems like people are in agreement that total anarchy just doesn't make for a very pleasant experience. And you don't have to be in favor of actually censoring people to say, okay, we should build systems in which people can have a somewhat more pleasant experience. At least I haven't seen many people holding up 4chan as some ideal of Internet discourse.
Steven Parton [00:35:25] No, not at all. Do you think, though, that something like ChatGPT could potentially undermine its own benefit by creating so much chaos? And I guess by that I mean: you have the feedback loop of these meganets, and you have the large language models pulling their data from the pool of knowledge that's out there, but that pool of knowledge is increasingly created by the large language models themselves, in ways that aren't accurate, right? They're just producing whatever natural language looks good without it actually being correct. Eventually it kind of corrupts itself, right? Is that a possibility?
David Auerbach [00:36:09] Oh, yes, absolutely. From the very beginning I was thinking about this, because here's a New York Times writer saying, oh, ChatGPT says it wants to launch nukes and release the nuclear codes or whatever. And I was like, oh, great. So basically we're going to write a bunch of articles about this, which get fed back in, and the next iteration is going to talk about that much more when we ask it about it. So yeah, this is going to be a big issue. But at the same time, and I forget who I was talking with about this, I was saying that, on the other hand, it could actually be perversely good, because if you lower the average quality of content on the internet enough, people will lose enough faith that the problem sort of corrects itself: oh, everything is now GPT-generated garbage. Okay, well, great, I'll just ignore all of it. So the pendulum could actually end up flipping back toward some more authorized, hierarchical form. I don't know. The thing is, it's difficult to extrapolate, because I don't think it's necessarily a one-way direction, except in terms of the sheer size. How we react to it, I think, will continue to swing back and forth; these things will have unintended effects, and there will be a counter-pull in the other direction. So, you know, the LLMs are going to be useful for generating content in very, very strict contexts, and I think that's manageable. There are going to be enough contexts where the facts are going to be too much of a problem and people will edit it. But in terms of auto-generating articles and auto-generating crud, there's a lot of that out there already, and there's going to be more of it.
Steven Parton [00:38:06] Well, it makes me think of ways to potentially certify or validate that data is coming from a reliable source, right? And this is something that you've talked about with Aadhaar, I believe it's pronounced, India's ID system. I'm wondering, do you see something like a digital ID as a step forward that might help improve the fidelity of a meganet?
David Auerbach [00:38:31] I feel like there are two different issues there. One is authenticating sources of content, but just because we've authenticated the source doesn't mean the content is any good, as we've found out, because there are plenty of very reputable sources that are still garbage. And then, on the other hand, there's the issue of, okay, how do you establish the provenance of content? That doesn't necessarily have to be tied to an individual person; you just have to have some sort of chain of trust established, where it's like, okay, well, this video was recorded, and here's some guarantee that this video has not been doctored at some point along the way by AI, and some guarantee that it was actually recorded on real hardware. That second problem I think can be dealt with. It will take some time to get it into place, but I think it's going to become necessary, because you're going to need some way to tell whether a video has been tampered with or not, and I think that's going to be difficult to do unless you've got some external validation system. As far as digital ID goes, I do think we're heading in that direction just for the sake of simplicity. I have very mixed feelings about it, because it means centralizing identity around a single identifier, and I think it would ultimately have to be governmental, even if right now it tends to be more private. That's why I say India is sort of a peek into the future. I mean, why wouldn't you want to unify driver's licenses, Social Security numbers, and link all these things together in a more robust way? Well, for privacy reasons, for things like that.
But those arguments, I think, are unfortunately going to fall by the wayside, because when you have all these multiplying systems, you're going to want to bring them together somehow. So I do think we're going in that direction, and it will have unintended consequences. It will provide authentication mechanisms for saying, okay, I take responsibility for this and that, and it opens up a whole Pandora's box of its own, which is: great, now we've got a single identifier around which you can coalesce all sorts of stuff, and that's what's happening in India. It may offer something positive with regard to authentication, but it also offers a lot of dangers, because you're effectively supercharging the idea that our individual identities are going to coalesce and take on this enormous importance, such that we are effectively carrying around our digital shadow selves from the day we're born, and increasingly it's one shadow self that is just our reflection, and we're stuck with it. Although, and this is something I keep meaning to write about, though it's more soft speculation: in the same way that we've seen the explosion of genders over the last 20 years or so, I don't see why people won't start claiming to have multiple selves, too. You know, why can't I be two people? I think that's going to catch on at some point in the next decade or two. That's my prediction.
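[Editor's note: David's "chain of trust" for content provenance can be sketched in a few lines of code. This is purely an illustration of the idea he describes, not his or anyone's actual system; the record format and function names are invented here, and real provenance efforts such as C2PA rely on hardware-backed digital signatures rather than the bare hashes used below.]

```python
import hashlib
import json

def record_step(chain, actor, content_bytes):
    """Append a provenance record whose hash covers both the content
    and the previous record, so later tampering is detectable."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    record = {
        "actor": actor,
        "content_hash": hashlib.sha256(content_bytes).hexdigest(),
        "prev_hash": prev_hash,
    }
    # The record's own hash seals it to everything that came before.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return chain

def verify_chain(chain, final_content_bytes):
    """Check every link, then check the final content against the last record."""
    prev = "genesis"
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["hash"] != expected or record["prev_hash"] != prev:
            return False
        prev = record["hash"]
    final_hash = hashlib.sha256(final_content_bytes).hexdigest()
    return chain[-1]["content_hash"] == final_hash

chain = []
record_step(chain, "camera-firmware", b"raw footage")
record_step(chain, "editor-app", b"edited footage")

print(verify_chain(chain, b"edited footage"))    # True: chain intact, content matches
print(verify_chain(chain, b"doctored footage"))  # False: content no longer matches
```

The point is only that each editing step vouches for the one before it; any doctored frame or spoofed record breaks the chain, which is the property David says an external validation system would need.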
Steven Parton [00:42:20] Well, in terms of where we're going in the future, and as we come to a close here in the conversation, what are some of the solutions and ideas that you have around how we handle the problems of the meganets? Specifically, maybe you can speak to your thoughts on regulation and how much of a role that should play.
David Auerbach [00:42:42] I mean, there are different types of regulation. If the EU forces Apple to use USB-C instead of Lightning, yeah, I'm on board with that; I'm really sick of Lightning cables. So I'm okay with that sort of standardization. If you're talking about auditing AIs and acting as some sort of registry for those things, that I think is a lot more of a nonstarter, because there's this assumption going in that the companies know what they're doing and are just motivated in the wrong direction by profit incentives. And I'm not saying that's not a factor, but with most of the things people are complaining about, you know, when Facebook got in trouble for its Swedish neo-Nazi content, I really don't think there was anyone saying, but we're making so much money off of it. They would love to get rid of stuff like that; they just don't. So if anything, I think a regulatory apparatus can easily act as a sort of fig leaf for them to say, well, not our problem, we let the regulators look into it. And I don't see much in the way of governance. What are you going to do, put the equivalent of an "allow all cookies" banner on these things? That's what I think, yeah.
Steven Parton [00:44:24] Is there a tangible solution that you do see, though? Is there something, some approach, that you would love to push people toward?
David Auerbach [00:44:30] There is, and there's room for even government participation in it. It just doesn't fall under regulation so much as partnership. And again, it's things like basically throwing spanners into the works of these systems and slowing them down, balancing out the ability for content to go viral. Instead of reinforcing virally exploding content, you'd seek a kind of gentler homeostasis. There are a number of directions you could go, like having some kind of turn-taking mechanism to balance out the fact that the loudest people in various discussion spaces are often the ones that should be the quietest. But generally, non-targeted approaches in which you aren't trying to filter on content: that I think is the important thing, both for civil liberties concerns and also because I think it's just not practically feasible. It amazes me when people seem to think that we could just get rid of the bad data: A, because just identifying it is difficult, and B, because a lot of people seem to think that everybody agrees with them on what "bad" is, and it never ceases to amaze me. There is some data, a small amount, where you could get, say, 95% consensus that everybody should know about it: oh, major earthquake. Hopefully there's a consensus about that, maybe not about what should be done about it, but at least we can report that it happened. But beyond that? Didn't you see what happened with Covid? There was so much contention, and no one shifted their views. There's a significant part of the population that differed from the so-called mainstream narrative and never changed.
Steven Parton [00:46:45] They just hid behind their bunkers, their narrative bunkers.
David Auerbach [00:46:48] But in general, my suggestions, and I get into these in the last chapter of the book, orient around non-targeted ways of breaking up ossified groups, slowing down virality, slowing down the networks more generally, and also, where possible, actually lowering faith in random third-party data streams, so that you can't just assume that, oh, this marketing data I bought from such-and-such a company is actually accurate. But I think there needs to be some experimentation and discussion there, and the big point of the book was to get that conversation started, instead of people banging their heads against a wall in ways that I don't think are really going to help us with anything.
Steven Parton [00:47:48] Well, speaking of conversation starters, we'll come to the conversation closers here. Do you have any final thoughts, David? Anything we haven't touched on that you might want to leave our listeners with as a parting gift?
David Auerbach [00:48:02] Well, since this is a Singularity podcast: I genuinely do think that the sheer scale we're at is actually going to lead us to new forms of human organization and larger-scale cognition. The difficulty, I think, is that individual rationality doesn't have as large a role to play as it did in the past. I don't think it's guaranteed that when you put a lot of individual rationalities together, what you get out is a single rationality. You get something else, something that an individual may not even be capable of understanding. So, to repeat what I said earlier, I think the unit of agency is going up, and if we actually want to get any grasp on what's going on, it's worth looking at how these individual organisms coordinate together and how they tend to act as unities. I think that's a big challenge, but that's where you're going to get more understanding than from trying to solve the alignment problem or what have you.