
Futureproof in the Age of Automation

September 12, 2022
Episode 71 with Kevin Roose

description

This week our guest is Kevin Roose, a tech columnist for The New York Times and author of several books, including his latest, Futureproof: 9 Rules for Humans in the Age of Automation.

In this episode, Kevin and I discuss how the world’s tendency towards automation is leading to the end of many professions, as well as a world of individuals whose behavior is shaped by algorithms. We explore how humans have promoted technology from an assistant that helps us meet our goals into a boss that controls our filter on reality by controlling what information we see. This includes things like recommendation engines, what we see in our news feeds, what ideas we’re exposed to most frequently, and a whole lot more. Kevin proposes that to be more employable and free from these behavioral constraints, we should focus on being social, surprising, and scarce.

Find more of Kevin's work at kevinroose.com, or follow him on Twitter at twitter.com/kevinroose

**

Host: Steven Parton - LinkedIn / Twitter

Music by: Amine el Filali

transcript

The following transcription was created automatically. Please be aware that there may be spelling or grammatical errors.

Kevin Roose [00:00:00] As machines get more and more humanlike, a lot of humans are getting more and more mechanistic. We are outsourcing some of our agency and our free will to the machines that are around us every day. 

Steven Parton [00:00:28] Hello everyone, my name is Steven Parton and you are listening to the Feedback Loop on Singularity Radio. This week our guest is Kevin Roose, a tech columnist for The New York Times and author of several books, including his latest, Futureproof: 9 Rules for Humans in the Age of Automation. In this episode, Kevin and I discuss how the world's tendency towards automation is leading towards the end of many professions, as well as a world of individuals whose behavior is shaped by algorithms. We explore how humans have promoted technology from an assistant that helps us to meet our goals into something that is now more like a boss that controls our filter on reality by controlling what information we see. This includes things like recommendation engines, what we see in our newsfeed, what celebrities and ideas we're exposed to most frequently, and a whole lot more. Kevin proposes that to be more employable and to be free from these behavioral constraints, we should focus on being more social, surprising, and scarce. So everyone, please welcome to the Feedback Loop, Kevin Roose. Well, obviously, one of the motivations for wanting to chat with you was your book, Futureproof: 9 Rules for Humans in the Age of Automation, which came out March of last year. And you say in your bio that one of the things that prompted you to write your book was that you personally felt worried that you, as an individual, weren't ready for the world of AI and automation and algorithms that we were creating. Can you unpack that concern a little bit and the motivation that brought you to the book? 

Kevin Roose [00:02:11] Yeah, this started for me... the idea for this book first came to me almost a decade ago. I was working as a junior reporter and I was covering Wall Street and corporate earnings. And, you know, like a lot of entry-level business reporters, four times a year companies would put out their earnings reports and I would scan them and write them up. And it was like the most boring part of my job, but it was part of my job. And then one day I got an email from this company that had basically developed an AI that would do what I did. It would take structured data, like a corporate earnings report, strip out all the newsworthy parts, plug it into a news story, and publish it. And so my first thought was, this is great, because it means I don't have to do this anymore. My second thought was, well, am I out of a job? Like, could this replace me? And of course, that's not all I did; that was just part of my job. But it really got me thinking. I think those of us in the kind of creative slash knowledge professions had kind of consoled ourselves by saying that automation and AI were a risk for other people, like factory workers and delivery drivers, the sort of blue-collar and service professions. We thought, oh, well, those are clearly the jobs that are going to be eliminated first, and so I can kind of rest easy. But that's not true, and the more I looked into it, the less true it was. And so I really started to get scared, not just about my own career and future, but about the sort of blissful ignorance that many of my fellow journalists and knowledge workers and creatives had when it came to what was going on in the field of AI. 
And so I decided to write this book, Futureproof, not just to kind of correct the record about what AI is doing and can do to replace work in those professions, but more so on how we should react: what steps we can take to prepare ourselves, to make ourselves less susceptible to displacement, to do the kinds of things that AI can't do or struggles with. So, you know, we can give ourselves a little bit more security by making ourselves less replaceable. 
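The kind of automated earnings write-up Kevin describes can be sketched in a few lines of template code. This is a hypothetical illustration of the general technique (structured data filled into boilerplate prose), not the actual system he encountered; all field names and the wording are made up:

```python
# A minimal sketch of template-based news generation from structured data.
# The report fields and the sentence template are invented for illustration.

def earnings_story(report: dict) -> str:
    """Fill a boilerplate earnings story from a structured earnings report."""
    direction = "rose" if report["eps"] > report["eps_prior"] else "fell"
    return (
        f"{report['company']} reported quarterly earnings of "
        f"${report['eps']:.2f} per share, which {direction} from "
        f"${report['eps_prior']:.2f} a year earlier. Revenue came in at "
        f"${report['revenue_m']:,} million."
    )

report = {
    "company": "Acme Corp",
    "eps": 1.42,
    "eps_prior": 1.10,
    "revenue_m": 5200,
}
print(earnings_story(report))
```

A real system adds natural-language variation and fact checks on top, but the core is this: scan the structured filing, pick out the newsworthy numbers, slot them into a story.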

Steven Parton [00:05:01] Yeah. I mean, what are the things that you would say machines can't do? Because I was like you, thinking for the longest time that writers and artists were going to be safe. But then GPT-3 comes on the scene, and you see it spit out novels and articles and art that rivals, you know, in seconds what very talented artists make in months, and now I'm not so sure about where the safety is in any occupation. 

Kevin Roose [00:05:31] Totally. I've been playing around with DALL·E 2, which is the, you know, image-generating sort of spin-off of GPT-3. And it's incredible. And every designer, photographer, illustrator I know is panicking, because they're like, this thing is good, it's very fast, and it's really cheap. And, you know, how do I sort of adjust my playbook now, in this world where this thing exists? And I wrote this book before GPT-3... I mean, I guess GPT-2 was out, GPT-3 was coming out, and DALL·E had not yet been released when I was writing this book. So I thought maybe we have like five or ten years to get ready for this, as, you know, creatives who work with text and images. And it turns out we had like 18 months. So I went to a lot of sources when I was writing this book. You know, people who work in AI, people who run startups, who invest in startups, people who are sort of on the frontier of AI research. And I asked them: what can't AI do? Because I thought that's where we should focus our energy, right? Like, on the stuff that is very hard for machines to do. So what is that? And they basically broke it down into three buckets for me, and I call them surprising, social, and scarce. So surprising tasks are things that are not regular and rule-based. AI, you know, is very good at chess. It's very good at Go. It's very good at structured tasks that are repetitive and regular. They have the same rules every time, and you can iterate; you can run a model a million times and it gets a fraction of a percentage better each time, until it surpasses human-level performance. And so, you know, there are jobs that are more or less like that, and those are not very safe. 
But if what you do is surprising, if it varies a lot from day to day, if it's not rule-based, if it involves lots of chaos and, you know, unintuitive tasks and reactions... the example in the book is a kindergarten teacher. That is not a job that is going to be easily automated, because that is just pure chaos. And so that's the first bucket. The second bucket is what I call social jobs. And these are jobs that are not primarily concerned with making things; they are concerned with making people feel things. So the output is not a good, it's not a service, it's an emotion. So if you are a social worker, a therapist, you know, a hospice chaplain... even something like a barista turns out to be a social job, because what people are paying for when they go to a coffee shop is not the coffee. Everyone has a coffee maker at home. What they're paying for is the sense of connection, the smile to start the day, you know, the friendly greeting. So that turns out to be a pretty resilient kind of job, not because computers can't do those things, but because we want humans to do those things. We want a human barista to greet us in the morning, and we're willing to pay for that. And the third category of sort of safe, hard-to-automate work is what I call scarce work, which is not to say that there are only a few of these jobs. I actually think there are a lot of them, but they involve sort of rare skills, combinations of skills, high-stakes situations with low fault tolerance, things that don't occur very often. But when they do, you really want someone who knows their stuff. So the example I used in the book is a 911 operator. That's a job that we have the technology to automate. You could run a totally automated dispatch that would route your call, but we've sort of decided as a society that when you call 911, it's too important to entrust to a machine. 
You need someone on the other end of the line to pick up who knows exactly what they're doing, knows how to serve you, can make sense of, you know, whatever's happening, and get you the help that you need. So those three categories, surprising, social, and scarce, are sort of the basis for what I would consider a future-proof set of jobs. 

Steven Parton [00:10:02] Yeah. Do you feel at this point that they're still as futureproof as you thought when you wrote the book? Are there possibilities that maybe we will make more social bots that appease that sense of a social outlet that we're looking for at the coffee shop, or that we'll make them more adaptable and spontaneous? You know, we could just add a randomizer to keep it spontaneous. Like, you know, how much of this stuff do you think will stay futureproof in the long term? 

Kevin Roose [00:10:33] Yeah. I mean, I don't think there's any such thing as a sort of robot-proof job. Like, nothing is totally, you know, immune to automation. But every job contains tasks and routines and elements that are more automatable and less automatable. So take my job, for example. I write articles, I'm hosting a podcast, I send a lot of emails, I have meetings, I talk to sources. And some of that stuff could be automated, maybe even the article-writing part. But I'm banking on the fact that there are these other parts of my job that are more surprising, social, and scarce, that do sort of require more of my humanity. And that as the writing-articles part of my job gets smaller and more heavily mechanized, those other parts will still be there. 

Steven Parton [00:11:37] Yeah. And do you think that these future-proof jobs will represent a substantial portion of jobs? Or do you think that, you know, we'll create what I think Harari called the useless class, this mass unemployed group of people who simply can't learn the skills fast enough and don't have maybe the emotional intelligence or the social skills, what have you, to step into these future-proof positions? 

Kevin Roose [00:12:04] I think it's going to require some work to get us there. I think right now we have sort of a suboptimal situation where, as machines get more and more humanlike, a lot of humans are getting more and more mechanistic. We are becoming more predictable. We are outsourcing some of our agency and our free will to things like recommendation engines, and, you know, just sort of turning over parts of our choice architecture to the machines that are around us every day. You know, I am not immune to this, too. Like, I let Spotify make my playlists for me. I, you know, let Gmail do the little autocomplete thing sometimes. We are making ourselves easier to replace. And so part of why I wrote the book is to tell people: this is coming. And rather than sort of surrendering to it or giving up, you actually need to start moving in the other direction. It's not that everyone needs to go to coding boot camp and become a machine learning engineer. Like, that's not going to help, because a lot of the programming jobs, as we've seen with things like, you know, GitHub Copilot and OpenAI's Codex, those jobs are becoming more automated, too. What we need to do is to really sort of amp up our attention to building human skills, the things that machines can't do. 

Steven Parton [00:13:37] Yeah, if I can, I'm going to push back on that just a little bit, just to explore the conversation. But, you know, sometimes I wonder, when we talk about human versus machine-like skills... in some ways, when I think of machine skills, I think of, you know, logic and reason and, in a good way, being less emotional. And when I think of human, I can think of capriciousness, emotion, anger, violence, you know, succumbing to animalistic instincts in a way. Is there maybe some benefit, then, towards embracing some of the more automated or logical behaviors of machines, and putting aside some of the things that might have been the reason for war or greed or things like that, that, you know, are there in our animal past? 

Kevin Roose [00:14:25] Yeah, I sort of... I don't actually disagree on that. I think that there are areas of life where our human frailty, you know, whether it's in the form of cognitive bias or just outright bias toward different groups or different factions, makes us prone to bad decision-making with large consequences. And one of my favorite examples of this is in the criminal justice system. And, you know, there are a lot of people, especially on the left, who are very opposed to using algorithms for things like criminal sentencing, because, you know, you could have a biased algorithm that learns to replicate data from biased sentencing guidelines in the past, and that would disproportionately affect certain groups. And I look at that and I think, well, yeah, sure, the algorithm is biased and we should work to make them better. But also, humans are really biased. Like judges and, you know, juries, they are so fallible. And I think about these famous studies where they look at the sentencing decisions made by judges before their lunch and after their lunch, and they're much more lenient after lunch than they are before lunch. So that's a total piece of human frailty that I think we could sort of, you know, eliminate by turning to something more like an algorithmic sentencing decision tree. But obviously, you have to do things like that very carefully, or people are just not going to trust that they're actually less biased than the biased humans who used to do that job. 

Steven Parton [00:16:11] Yeah. I mean, do you think that that's something that requires us to find a way to look into the black box, you know, to see what these algorithms are doing? Since, for the most part, with machine learning, we are clueless as to how the decisions are made. Or do you think that we'll just, you know, push forward and accept the fact that we don't really know how the decisions are being made, but it's accurate enough of the time that we'll just trust it anyway? 

Kevin Roose [00:16:37] No, I think explainability is a big piece of how society is going to wrestle with, you know, AI and automated decision-making. I think that the less we understand about how these algorithms work, the more we're prone to mistrust them, to impute, you know, bad motives to their creators. It doesn't mean that we all need to go, like, debug the models ourselves by hand. But we should at least know: here is why this particular thing is being recommended to me. Or, here's why this sentence is being handed down. Or, here's why this loan is being approved or denied, or something like that. 

Steven Parton [00:17:25] So do you think it would be a big benefit for us to start bringing transparency into how these algorithms work? Like, let's take a really mundane example, like the Spotify thing. You know, it would seemingly be really beneficial for me to have a clear understanding of why they recommended the music to me that they did, because then it gives me the power to kind of game it, in a good way, but also to, like, know what proclivities I might have. Like, oh, I didn't know that was a genre that I was into, and now I know the name of it. Do you think that we should start kind of pushing towards that transparent policy for these things? 

Kevin Roose [00:18:03] Totally. I mean, I think that's a huge piece of getting people more comfortable with automated decision-making in their lives. I'm thinking about, like... there's a sort of phenomenon on TikTok right now where there's this code that has developed among the young people who use TikTok. They're sort of intuiting that the algorithm, you know, up-ranks or down-ranks content based on certain words. So instead of saying, like, "porn," they'll say "corn," or instead of saying "OnlyFans," they'll say "accounting," or something like that. And so they have this whole sort of folklore that's developed around the algorithm, because no one knows, because TikTok's not saying, here are the words that will get you demoted on the For You page. It's all a black box. And so in the absence of concrete information, people are going to make up theories, and some of them will be right and some of them will be wrong. But that's the kind of behavior that we see when companies don't do a good job of explaining how things work. 

Steven Parton [00:19:14] So what do you think about the ways that these algorithms are, like, grabbing people's attention? Is that part of why you like this idea of being spontaneous and unpredictable, so that you can kind of escape the snares that algorithms use to hook people? Is that a big concern of yours in terms of, like, attention? 

Kevin Roose [00:19:34] Yeah, it is. I mean, I think it's useful maybe to distinguish between some of the subtypes of algorithms, because, you know, we're surrounded by algorithmic influence and recommendations every day. And there's a researcher who has talked about the difference between "read your mind" algorithms and "change your mind" algorithms. So there are many recommender systems, for example, that are basically just trying to predict what you'll like based on what you've liked in the past. And that can be benign, and it can be helpful, but it can also be corrupted. Because, you know, if I'm Spotify, I want to recommend you the songs that are going to increase your time spent in the app and, you know, minimize the payments that I owe to copyright holders of music. If I'm Netflix, maybe I have an incentive to steer you toward my original content, even if my algorithm is telling me that you would probably like better this other show that we didn't make. So there's a kind of corruption of preferences that happens at the level of the algorithm, that I think really does filter down into who we are. This story didn't make the book, but while I was writing it, I had this experience where I was using one of those, like, online clothing delivery companies, where you type in, you know, here are my measurements, and I like these five kinds of outfits and I don't like these five kinds of outfits. And I was doing this for a while, and it would send me boxes of clothes and I'd put them on. And one day I was looking at myself in the mirror, wearing this bomber jacket that I'd gotten from this thing, and I was like, I hate this. Why am I wearing this? I hate how I look in this. I hate everything about this. But I had trusted the algorithm and its sense of my style over my own actual preferences. Like, the machine's preferences had overwritten mine in my brain. 
And I think that's happening on a scale that's kind of hard to fathom and kind of blurring the lines between like who we want to be and who the machines want us to be. And I think that's really dangerous. 
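The "read your mind" recommender Kevin describes can be reduced to a toy sketch: rank candidate items purely by how well their tags overlap with the user's like history. All song names and tags below are invented for illustration; real systems use learned embeddings rather than hand-written tags, but the shape of the logic is the same:

```python
# A toy "read your mind" recommender: score candidates by tag overlap
# with past likes. Items and tags are made up for illustration.

from collections import Counter

def recommend(liked: dict, candidates: dict, top_n: int = 2) -> list:
    """Rank candidate items by how often their tags appear in the like history."""
    # Build a taste profile: how many liked items carry each tag.
    taste = Counter(tag for tags in liked.values() for tag in tags)

    def score(item):
        # Unseen tags score 0, since Counter returns 0 for missing keys.
        return sum(taste[tag] for tag in candidates[item])

    return sorted(candidates, key=score, reverse=True)[:top_n]

liked = {
    "Song A": {"indie", "acoustic"},
    "Song B": {"indie", "electronic"},
}
candidates = {
    "Song C": {"indie", "acoustic"},  # strong overlap with past likes
    "Song D": {"metal"},              # no overlap
    "Song E": {"electronic"},         # partial overlap
}
print(recommend(liked, candidates))  # → ['Song C', 'Song E']
```

The "change your mind" corruption he describes would be one extra term in `score`, say, a bonus for low licensing cost or for in-house content, so the ranking quietly reflects the platform's interests rather than only the listener's history.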

Steven Parton [00:21:58] Yeah. And I mean, one of your rules, I suppose, is, I believe, demoting your devices. And in that sense, do you think that it therefore becomes kind of important to step away from, I guess, the machine world, in a sense, so that you can kind of self-reflect on who you are and develop that sense of self outside of the influence of algorithms and recommendation engines? 

Kevin Roose [00:22:21] I think, yes, I do. I do think that some, you know, moderation and forbearance is warranted there. I'm not a total Luddite. I don't think it's realistic to say, don't use a smartphone and don't use Spotify and don't use YouTube and whatever. I just don't think that would help, frankly. And I think that's an unrealistic ask to make of people. So the chapter in the book is called Demote Your Devices. It's not called Get Rid of Your Devices, because what I'm really arguing for is sort of a rebalancing. You know, I think for a lot of us, when we first got smartphones, you know, your first BlackBerry, your first iPhone, they were kind of our assistants. Like, you know, Steve Jobs had his famous line about computers being like bicycles for the mind, right? They were tools that we could use to get things done, to communicate with people, to, you know, work on the go, whatever we were doing. And then at some point, they got a promotion and they became our bosses, right? They tell us what to pay attention to, you know, who to communicate with, which celebrities' Instagram feeds should show up in front of us. Like, really setting the agenda of our lives in a way that I don't think a lot of us sort of consciously agreed to. And so what I'm calling for is not total abstinence. It's just taking back some of the control that we've ceded to our devices over the years, in a way that I think could help a lot of us detach what we like from what these machines want of us. 

Steven Parton [00:24:05] Is this something that was influenced by your coach, Catherine, at one point? 

Kevin Roose [00:24:12] Yeah, I did. I did take myself to phone rehab with the help of a professional phone detox coach, Catherine Price, who's become a good friend. And she's really amazing. And so she put me through a 30-day boot camp where I basically, you know, sort of systematically weaned myself away from my phone. Not totally; it's not an abstinence-based program, but it did have a sort of radical effect on my awareness of my phone, just making the sort of unconscious conscious. So one of the things that she advised me to do right when I started is to just put a rubber band around my phone. And I've kept that, and I've found it incredibly helpful. Because it doesn't prevent you from using your phone; it just makes it kind of a little more annoying, because there's, like, this rubber band, and if you want to scroll, you have to kind of push it out of the way or take it off. So it's just a little speed bump that forces consciousness. It's like, oh, now I'm using my phone. Do I want to be using my phone? What else should I be doing instead of using my phone? So little things like that can really kind of help us reset our relationships. 

Steven Parton [00:25:28] Yeah. Are you a fan of more of those kinds of speed bumps being implemented into the world of technology? Because I think of things like Twitter, for instance, and some of the conversations I've had in the past with other guests revolved around this idea of, you know, reducing the number of shares that a person can make in a day. So you're limited to three, so you have to think about, is this something that I want to share? Or, you know, the notifications that pop up and say, hey, this comes from an unreliable source, are you sure you want to share it? You know, we're going to let everyone know that you're sharing unreliable data. Do you think these kinds of speed bumps should be implemented more broadly in the tech space? 

Kevin Roose [00:26:11] Absolutely. I'm a big fan of friction, which is not a popular word in the tech industry. I think that, you know, one thing that tech companies have found is that when they introduce not huge amounts of friction, but just little speed bumps, people tend to use these things in a more thoughtful way. So Twitter, you know, sort of famously did this thing where, if you want to share an article but you haven't read it yet, it sort of says, hey, do you want to read this article before you share it? And they found that it radically reduced the amount of, you know, false and misleading information that people were sharing, just because of that prompt. It didn't stop them from sharing. It just put one little interstitial, a box, between the act of clicking share and the act of sharing. And that was enough to make people sort of slightly more conscious of what they were doing and give them a chance to kind of recalibrate or rethink their decision. So, yeah, I'm a big fan of those products. I think that more of the products in our lives should be like 10% harder to use. 
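That interstitial is, at its core, a single conditional inserted into the share path. Here is a hypothetical sketch of the pattern, not Twitter's actual implementation; the function names and return strings are invented, and the `confirm` callback stands in for showing the prompt to the user:

```python
# A sketch of a share-flow "speed bump": if the user hasn't opened the
# article, ask once before sharing. Names and strings are hypothetical.

def share_with_friction(user_opened_link: bool, confirm) -> str:
    """Insert one confirmation step between clicking share and sharing."""
    if not user_opened_link:
        # The interstitial: prompt, but never block the share outright.
        if confirm("Want to read the article before sharing it?"):
            return "opened article first"
        return "shared (after prompt)"
    return "shared"

# Simulated user choices: the lambda returns True for "yes, open it first".
print(share_with_friction(True, lambda q: False))   # → shared
print(share_with_friction(False, lambda q: False))  # → shared (after prompt)
print(share_with_friction(False, lambda q: True))   # → opened article first
```

The key design point matches what Kevin describes: the friction is a prompt, not a gate. Every path still lets the user share; the conditional only makes the unread-share path one tap longer.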

Steven Parton [00:27:26] Yeah, I mean, it feels like that's kind of the switch between the unconscious action and the conscious action. You're just trying to buy that person a bit of distance and time to reflect on what they're doing, instead of just doing it habitually. 

Kevin Roose [00:27:39] Totally. And we do that all the time in our lives. Like, you know, you look both ways before you cross the street. We don't think of decision-making as something that should be totally seamless. But tech companies, obviously, their incentives are all toward frictionless design, because they want you to click more and scroll more and watch more. And, like, you know, if TikTok put a thing on the For You page where, after ten videos, it would say, are you sure you don't have anything else to do that's more important than scrolling mindlessly through your TikTok feed? Like, that would probably really hurt their usage. So that's why they don't do it. But I think companies like Apple, that have sort of more control at the device level... you know, I think the screen time nudges are a good development. I think that product things like that have had a real impact on making people more aware of what they're doing. 

Steven Parton [00:28:38] What do you think about, like, the models that drive this? I mean, is that maybe one of the key issues here, the incentives? And I think specifically of social media, and the fact that, as a free platform, you know, cliché as it is, you are the product. Your attention is the product. The advertising model is there to hook you. But if we switched to something like subscription, and it wasn't ad-based, then we might have a change in incentives. So I guess, just more broadly, what are your thoughts on the incentives around, I guess, the future that we don't want, and maybe how we could get around it? 

Kevin Roose [00:29:17] Yeah. The sort of ad... I mean, I'm a little less opposed to ads conceptually than some people. I don't think they're, like, intrinsically evil. I do think that, you know, we value what we pay for. And so, you know, what I've found... before I was at the Times, I, you know, worked for a couple of places that got most of their traffic from social media. And so I found that the incentives as a writer were unspoken. Like, no one was telling me, go out there and write clickbait that will get a ton of traffic on social media. But I kind of knew, since that's where my bread is buttered, that's what I should go do. And so I did that. And one of the best parts of moving to the Times for me was just this sense of freedom from that kind of rat race. Because, you know, we're not measured on page views. It's mostly a subscription business at this point. We do have ads, obviously, but the majority of revenue comes through subscriptions. And, you know, you don't get totally free from warped incentives with that. I mean, people talk about, you know, subscriber lock-in, and, you know, if you're making your subscribers mad, that's not good either. But I do feel like it has freed me from that particular type of sort of lizard-brain incentive: just write the thing that's going to get the most clicks, right? So I think incentives matter a lot. Also, I think that there's been some good writing and thinking about, kind of, how do we measure a kind of aspirational incentive? So, you know, part of what we now know has been very destructive about platforms like Facebook is that they were optimizing for engagement. And their sort of internal logic there is like, well, people are showing us what they want by what they click on and what they watch. 
And so clearly, you know, if they're clicking on something, it means they want to watch it, and so we should optimize for that. And what I would say to that is, well, maybe. I mean, maybe that's what I want to watch, but maybe I have a higher goal that conflicts with that. Like, you know, if I eat fries and ice cream, on one level, yes, that means I wanted to eat fries and ice cream. But on another level, if I stop and think about it, I might say, no, I don't actually want to eat fries and ice cream. I want to eat salad, or I want to eat chicken, or something more nutritious. 

Steven Parton [00:32:15] Yeah. So eventually I'd like to eat something else. 

Kevin Roose [00:32:18] Right, exactly. So I think we need to figure out the incentives. But also, I think these companies, these platforms, need better ways of measuring what actually matters to people, and not just using revealed preferences as a proxy for actual preferences. 

Steven Parton [00:32:35] Yeah. And what do you think in terms of public policy when it comes to this stuff? Like, how much would you like to see policy help make the world futureproof, versus maybe, like, grassroots cultural shifts? You know, do you think the companies are going to do it? Do you think it's going to be up to the masses to demand it? Do you think the government needs to lead it? 

Kevin Roose [00:32:56] I don't think the government needs to lead it, nor do I think they're particularly likely to lead it. And I don't mean that in a snarky way, but we've just seen throughout history that, you know, first technology changes, and then government catches up. And I think if you look around the world, the role that government has played, and can play, in helping people through periods of technological change is to really solidify a social safety net and an economic safety net for people. So, for example, in Sweden, there are these job councils, where if you are laid off as a factory worker because your factory automates, you know, 5,000 jobs, you don't just go totally, like, unemployed and out of luck. You can retrain yourself through these job councils that, you know, companies and the public sector all pay into. They help retrain workers, they help find them new jobs. It's a really robust safety net. And so, as a result, they haven't had, you know, mass unemployment there. Something similar exists in Japan for, you know, workers in certain manufacturing jobs. So there are countries... and, you know, I would include things like national health insurance in that, where, you know, in America, if you're automated out of a job, it's a really life-altering event, because not only do you lose your job, you lose your health care. You can really be in trouble pretty quickly. But in, you know, a number of other countries with national health systems and, you know, socialized health insurance, you don't have the same kind of double whammy of losing your job and your access to health care. 

Steven Parton [00:34:55] And so with all that being said, like, are you feeling optimistic or pessimistic about the trajectory that we are on? Are we bolstering our defenses against automation well, or are we on a downward spiral that's going to come to a grinding crash very shortly? 

Kevin Roose [00:35:14] Yes. Yes. You know, so in the book, I say I'm a sub-optimist, because I am actually quite optimistic about the technology itself. I think that, you know, GPT-3 and DALL·E and whatever, you know, Google's cooking up, they all can do wonderful things for us. I think the part that I am less optimistic about is the humans, frankly. You know, we've seen that the people who are sort of not just, you know, building the AI, but also, you know, implementing it, they're not necessarily working in the interest of their people. They are, you know, often CEOs of big companies who just want to cut as many jobs as possible, as quickly as possible. They're not thinking about this as sort of a holistic problem. They're just like, how can I use technology to shave my costs so that my margins are, you know, better next quarter? So I think the technology can be amazing and the outcomes can still be poor, if the people in charge of deploying and overseeing the technology are not doing a good job of being responsible. 

Steven Parton [00:36:37] Yeah, fair enough. Well, Kevin, I know we're coming up pretty close to time here to get you to your next appointment, but I want to give you a chance to give any closing thoughts or maybe point people towards any projects you're working on or anything you'd like to talk about. 

Kevin Roose [00:36:52] Well, thank you so much for having me on. This was so much fun. The book is Futureproof. You can buy it anywhere books are sold, as they say. And yeah, I'm hosting a podcast that'll start later this fall at the Times with my friend Casey Newton. And that will cover AI and, you know, the metaverse and web3 and all these concepts that are starting to become very important in our world. So I hope folks will tune in to that. 

Steven Parton [00:37:25] Is it named, or is there anywhere people can find anything about it yet, or should they just keep an eye out? 

Kevin Roose [00:37:32] The name is still under wraps. But yeah, it should launch in the next month or two, and it'll be on The New York Times. 

Steven Parton [00:37:37] Perfect. Well, we'll send links to all the relevant things. And again, Kevin, thanks for your time and I appreciate it. 

Kevin Roose [00:37:43] Thanks so much for having me. This was a lot of fun.