
AI & Engineering Emotion

This week I'm honored to be joined by MIT Professor Rosalind Picard, who not only founded the field of affective computing, but is easily considered one of the most impactful inventors alive.

In this episode, we explore affective computing and its many impacts on society. This takes us on a tour through concepts as wide-ranging as manipulating emotions, treating health challenges, surveillance, social robots, and more.

You can follow Rosalind at twitter.com/rosalindpicard, or check out one of her very successful affective tech companies: @Empatica or @Affectiva


Host: Steven Parton - LinkedIn / Twitter

Music by: Amine el Filali

Transcription

Rosalind Picard [00:00:01] And so I see human dignity as a super important thing that binds us all together. And if any of this technology were used to diminish that, I would think that was a crime. That's wrong. We should be using technology to build better lives for people. 

Steven Parton [00:00:34] Hello, everyone. My name is Steven Parton and you are listening to the Feedback Loop on Singularity Radio. This week I'm honored to be joined by MIT professor Rosalind Picard, who not only founded the field of affective computing but is easily considered one of the most impactful inventors alive. In this episode we explore affective computing and its many impacts on society, which takes us on a tour through concepts as wide ranging as manipulating emotions, treating health challenges, surveillance, social robots, and a whole lot more. And with that, let's jump into it. Everyone, please welcome to the Feedback Loop, Rosalind Picard. Well, then, I think the most natural place to start with you is to just make sure everybody is on the same page. And that means one simple question: What is affective computing? 

Rosalind Picard [00:01:31] Affective computing is spelled with an A, although in the early days Nicholas Negroponte told me that it was nicely confused with effective computing with an E, and I hope that is still the case. I define the term to mean computing that relates to, arises from, or deliberately influences emotion. 

Steven Parton [00:01:51] And on a more tangible level, what are some of the, I guess, aspects of emotion or affect that you are able to capture in affective computing? 

Rosalind Picard [00:02:01] Yeah. The first problem I saw was that computers need emotional intelligence. People were getting mad at computers. They were cursing at them, they were kicking them, they were furrowing their brows and shaking their fists. And I heard in Texas one guy shot his computer through the monitor and the hard drive, and a chef in New York picked it up and threw it in the deep fat fryer. People were so mad. And computers could be taught to see that what they just did was annoying, but they hadn't been taught that. And our lab, the Media Lab, was working on perceptual computing. And I thought, you know, one of the things it needs to perceive and know how to respond to is the customer's experience, right? The user's emotions. So the first focus of our work in affective computing was to give the computer the ability to recognize something that you were expressing and then respond more intelligently to that. And then that branched out. We were starting work on wearables at the time too, and we started monitoring physiology and behavior to see if we could understand more about your affective state changes with that data as well, always taking a multi-modal and contextualized approach, because the context matters too. 

Steven Parton [00:03:18] And you kind of mentioned there about the timing of things. I mean, I think it's fair to say you've been credited with starting the field since maybe '95 or '97, depending on the paper you're looking at or the publication of your book. In that time, I'd say about 25 years now, how have things evolved in the field? Are you seeing, with things like AI, a rapid increase in what affective computing is capable of? 

Rosalind Picard [00:03:47] Yes, things have moved quite fast. I think the emphasis in computer vision and deep learning in particular has accelerated facial expression recognition and more gestural and body movement analyses, and of course textual analysis, sentiment, speech, you know, interpreting how something is said. And mining lots of data online also for improving, as we know with, you know, ChatGPT and GPT-3 and things like that. Much more large language modeling is giving, you know, a bit more sophistication to those affective dialogs. Right now, though, machines still have no feelings. They don't think, they don't know, they don't really learn. You know, it's just our language that is wrong when it comes to a lot of how we describe them. 

Steven Parton [00:04:44] Do you think that's going to change? 

Rosalind Picard [00:04:47] Oh, good question. I think we're getting better and better at simulating the appearance of these human-like traits, or traits that we attribute to other animals. You know, there's the old saying in lie detection: if you want to catch a liar, keep them talking. And with a machine, you know, for short assessments you can fool a lot of people; for really long assessments, you'll probably suss out the problems. 

Steven Parton [00:05:18] Mm hmm. So you think in shorter interactions, and in the near term, it might be more convincing. But as we get into more kind of robust experiences and relationships with technology, it might fall apart a bit until we maybe make some major advancements. 

Rosalind Picard [00:05:34] Yeah, and it depends what you mean by fall apart. Like, do we really want to misrepresent these machines as having all of these abilities in the same way we do? You know, there may be a place for that, but in general, I'm not a fan of not telling the truth. Right? I like to tell the truth. I like to be upfront and honest. And when and if we use deception in a study, it's under an IRB with careful independent scrutiny of how we're doing it, and then always with debriefing people afterwards. 

Steven Parton [00:06:06] Right. Are there any aspects of affective computing that maybe are really common in our day-to-day lives that a lot of people just don't realize? Is there a lot happening behind the scenes that maybe the average person just doesn't know is happening? 

Rosalind Picard [00:06:23] There is. If you are calling a call center and you're an angry customer, or the occasional happy customer, there is vocal affect analysis. Certainly in voice assistants, for people who use Siri, Alexa, and similar, they have huge teams paying attention to the affective state of the caller, of the speaker. There are, increasingly, deployments with facial expression analysis, a bit controversial because you can imagine, you know, there can be misuse of that as well. Actually, there could be misuse of all of these kinds of technology. So there are, you know, attempts to tell if people are being authentic, and there's a lot out there. There's also a lot of wearable affective data that is now understood to be critical for health, really important for mental health, for neurological disorders like epilepsy or MS, for lots of different things. And as we learn about those connections between our affective system and every organ in the body, we recognize that modeling and understanding how our affective state changes can help us better manage disease and, I think, prevent a lot of illness, too. 

Steven Parton [00:07:53] Yeah. Are these devices that you're talking about often ones that are very targeted towards particular illnesses or diseases, or is the regular old Fitbit or Garmin that somebody has able to really do much to provide that same kind of data? You know, does the oxygen tracker and the galvanic response really tell us enough to kind of have that same impact, or does it need to be more specialized? 

Rosalind Picard [00:08:20] You need to be smart about how you use these things. And it depends what you're trying to conclude from it. If you're trying to just look at one output data point from these things, I'd be really skeptical of whatever people are claiming with that. Also, if you're doing studies with them, which we do a lot at Empatica. And full disclosure, I'm a co-founder and shareholder of Empatica. It's Italian for empathetic, spelled with an E, and at Empatica we partner with people doing studies in industry and academia, lots of different kinds of research studies. And what we find is that you still need to do a lot of extra work. You can't just read one value from the wearable. You need to contextualize the values. You need to look at them in the context of time: what's typical for a person's rhythms? You need to look at signal and noise, which can be different things depending upon, you know, what you're focused on. And there's a lot of smarts you have to have about it. Also, I highly recommend, and I'm not the only one, getting the raw data if you're doing real scientific analyses. JP Onnela at Harvard blogged recently about his experience downloading heart rate variability data from an extremely well-known consumer wearable that a lot of people are using in studies. He downloaded the old data, and then he went and downloaded new data. And then, just to have it all in one place, he re-downloaded the same old data, and to his shock, the heart rate variability had changed significantly. Wow. In fact, it completely changes the results of the study, which is horrifying, right? Like, how can this happen? This is heart rate variability. It's a very well-known thing. Well, actually, there are multiple ways to measure it. But, you know, the underlying signal is the photoplethysmograph, you know, and you extract the inter-beat intervals and then you do frequency analysis or other kinds of processing of it. Well, that company, very well known for wearables, did not give the raw data. And so when they changed their algorithm, his study results changed. 
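Her point is that a vendor-computed heart rate variability (HRV) number can silently change when the vendor updates its algorithm, whereas metrics recomputed from the raw inter-beat intervals stay reproducible. As a rough illustration only, here is a minimal Python sketch of two standard time-domain HRV metrics computed from inter-beat intervals; it is not Empatica's or any vendor's actual pipeline, and the interval values are made up.

```python
# Minimal sketch: recomputing time-domain HRV yourself from raw inter-beat intervals,
# so the numbers don't shift when a vendor changes its proprietary algorithm.
import numpy as np

def hrv_time_domain(ibi_seconds):
    """Return (SDNN, RMSSD) in milliseconds from a sequence of inter-beat intervals."""
    ibi_ms = np.asarray(ibi_seconds, dtype=float) * 1000.0  # seconds -> milliseconds
    sdnn = np.std(ibi_ms, ddof=1)            # overall variability of the intervals
    diffs = np.diff(ibi_ms)                  # successive beat-to-beat differences
    rmssd = np.sqrt(np.mean(diffs ** 2))     # short-term variability
    return sdnn, rmssd

# Hypothetical inter-beat intervals (seconds) extracted from a raw PPG waveform.
ibi = [0.82, 0.85, 0.80, 0.88, 0.84, 0.86, 0.81, 0.87]
sdnn, rmssd = hrv_time_domain(ibi)
print(f"SDNN: {sdnn:.1f} ms, RMSSD: {rmssd:.1f} ms")
```

If the raw intervals are archived, anyone can rerun this and get the same numbers years later, which is exactly the guarantee the derived vendor output in the story above did not provide.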

Steven Parton [00:10:26] Wow. I wonder how many studies were conducted using just that output data? 

Rosalind Picard [00:10:32] Yeah, it's a problem. If you're going to do a long-term study where you're really trying to understand the cycles and rhythms and patterns and get the context of what's going on in a person's health and life, then get the raw data. 

Steven Parton [00:10:45] Mm hmm. Yeah. I mean, at Singularity, a lot of what we try to do is also empower, you know, entrepreneurs and innovators who are kind of working more independently in this regard. And it makes me wonder, is there room for people to take these devices and maybe figure out some ways to kind of life hack or optimize or do health diagnosing without having years of scientific training and machinery that's going to tell them how to break everything down. 

Rosalind Picard [00:11:16] Yeah. I'm one of those people who's been measuring my data for decades and building wearables, and I have lost count of how many wearables I have floating around my office here. We've been learning so much, and I continue to learn a lot. We've had the blessing of working with neurologists who do surgery, and they go deep in the brain with these patients with epilepsy. You know, they bore a hole in your skull; these are not the things that they let people like me sign up for, unless I suddenly get a really bad case of epilepsy. But then they go in and they can directly stimulate the regions of the brain that they're interested in, obviously for treating the seizures. I'm interested because these are the centers of emotion and memory and attention and stress and anxiety and pain, things that we want to help people, you know, better manage. So as they get that data, we get the wearable data, and we start to learn just how complex, but also how much more interesting and specific, these interactions are than we thought. Like, for example, you mentioned galvanic response, which is the old term for what we now call electrodermal activity. I think that's what you're referring to. Mm hmm. And we used to think that just meant general arousal, right? Like, if you're really excited, your skin conductance, which is one way to measure electrodermal activity, is high. If you're stressed, if you're trying to hide a lie, this is one of the key signals used in lie detection: that sympathetic nervous system response kicks in, fight or flight, your palms get sweaty. And so we used to think this was like a singular arousal level. Now we know from the direct brain stimulation, and from more fine-tuned studies of different kinds of anxiety, that actually it's a patterned signal. You can get different responses on the wrists, in the palm, on the left, on the right, and they mean something. And they're different in different stages of sleep and a whole lot of other really interesting conditions. So what we originally thought was just like a sweat response when you were stressed turns out to carry all this fascinating neurological information. 

Steven Parton [00:13:25] So let me see if I understand correctly. Are you saying that basically, rather than just looking at whether you're parasympathetic or sympathetic, you're actually looking at how, you know, different parts of the body may indicate different aspects of stress? Is that what you're saying? 

Rosalind Picard [00:13:40] Yes, exactly. And the first time I saw something like this, I'm like, what is going on? Our sensors must be broken. I saw a kid who was wearing two sensors, skin conductance and motion and temperature, on his left and right wrists. And one side went through the roof and the other side wasn't responsive. So I thought, well, either the non-responsive one is broken and flat, or the through-the-roof one is, you know, like a short circuit or something. So I was debugging, and I'm an electrical engineer by training, and none of my proper engineering explanations worked. So I gave up and I resorted to the kind of debugging that I'd never done before. I picked up the telephone, called the student at home on vacation and said, hi, sorry to bug you on vacation, you know, any idea what happened? And I gave him the date and time and the signal. And this was his little brother's data, who had autism. And he had asked me if he could borrow sensors because his little brother was non-speaking, and our sensors could help him see what might be causing stress in his little brother's life. And he wanted to try to reduce that stress. And so I was looking at the boy's stress, and it looked pretty low, except for this one weird place. Like, how can you be stressed on one side of your body and not the other? So he checks his diary and he comes back and he says, that was when my little brother had a grand mal seizure. 

Steven Parton [00:15:05] Mm. 

Rosalind Picard [00:15:06] And I'd heard of a grand mal seizure, but I didn't really know what it was. I started quickly researching and learned these are the most dangerous kind of seizure. It's also called a generalized tonic-clonic seizure. It's the kind that people are probably most familiar with, where the person shakes, loses consciousness, hits the floor, can really get injured hitting their head. And it can happen to anybody with a brain. You know, you could be perfectly healthy today and suddenly tomorrow you're having a grand mal seizure. Epilepsy affects one in 26 people in America. It's a very common disorder that people often don't tell other people they have. So I learned a lot more about this and called up the top doctor at Children's Hospital Boston to get the answer to my mystery about the one-sided signal. 

Steven Parton [00:15:57] Yeah. Wow. This might be a bit of a tangent, but that makes me think about the relationship between maybe the higher processing and the lower processing in the brain. And one of the things that makes me think of, and this is I guess one of my biggest concerns or fascinations with what you're doing, is: if we understand people's emotions and we can tap into them, it feels like we have a lot of power over them, right? Because if you can get the amygdala active, if you can get that fight-or-flight sympathetic response going, it seems that you can inhibit some of those frontal cortex functions, some of that more long-term planning and strategy. Do you worry, then, about this emotional information being used in a negative way, kind of being used to hijack people's attention, to make certain things more salient, to put people in a fight-or-flight response so they're less critical thinkers? 

Rosalind Picard [00:16:53] Absolutely. Absolutely. Or just put them in the aroused, attentive response, right? I mean, every advertiser and marketing person, that's, you know, what they concentrate on: attention. And then how do we make sure we associate that with a brand? Right. You want to engage them, then you want them to associate that with the brand. And that is a huge area. And actually, as a teacher, I think about that a lot too, except that it's not the brand. I want to get that attention and engagement turned on and I want it associated with the learning goals. I want them to have a great learning experience. Yeah, you want them revved up, but you want them in a sweet spot. You don't want them so revved up that they're scattered, you know. Or, like you said at the start, like, I'm in a good spot, I've got my coffee right here, it's 3 hours earlier where you are. You know, you probably know your daily trajectory and you're, you know, doing your own emotion regulation, too, to get in that sweet spot. 

Steven Parton [00:17:53] I'm in my flow channel. 

Rosalind Picard [00:17:55] Yeah, it's a great place to be. That's a real thing. And so that's one place where the wearable devices are really interesting, you know, for people to learn about: what do I look like in the conditions that precede me getting into the state I want to be in, usually something like a flow state, and then how do I get better control over that, so that when I want to get in that state quickly, here's what I do. 

Steven Parton [00:18:22] Right. Well, on that note, what about, I guess, the data side or the regulatory side of this? Because as we're kind of discussing here, it is really powerful. It's a really powerful tool. Understanding where you are emotionally, whether you're the individual or whether you're an advertiser or whether you're a teacher, really can profoundly impact decision making and, you know, your ability to kind of, I guess, be self-determined. So is there much happening in terms of saying, hey, maybe certain companies shouldn't be allowed to have access to this data, or anything like that? 

Rosalind Picard [00:18:55] There are a lot of things happening in Europe, and I think we should have more things happening in the US. Recently, I did a workshop with my students in my affective computing class, which I converted into an affective computing and ethics class, because I thought it's important to teach the ethics intertwined with building an affective computing system, everything from your IRB approval to thinking about what could go wrong, how might people misuse what you build, right up front while you're designing it, so that you don't get so attached to it, you know, when it's not going to be a good thing, right? Helping people learn to question all that from the design phases. And we had Jeremias Adams-Prassl from Oxford come and do a workshop on the proposed EU regulations, which include, and I'm oversimplifying this, but think of it as, like, banning all employer processing of affective data or health data, sort of in one extreme case, especially any data that could be identified with you. And there are a lot of challenges with this, but let me just give one teaser. It used to be, like in the early days of Fitbit, people thought, oh, it's just a stupid accelerometer, right? It just knows somebody's shaking and moving, right? And if it has this trajectory, they call it a step, right? And people who wanted to get their steps up would, like, stick it on a little machine that did this, and they would suddenly have, you know, 10,000 steps and look really good for their health insurer. Well, we learned in our lab, and we've published this work, that a sensitive accelerometer, not only in a smartwatch but also in your phone sitting in your pocket right now, can obviously pick up big movements, but while you're holding still and you think it's not getting any movement, it can pick up your heart rate and respiration. And in fact, it can do it with a signature that can be used to identify you. 
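The technique she gestures at, recovering heart rate from a resting accelerometer, can be sketched in a few lines: band-pass filter the signal around the cardiac frequency band and count the resulting peaks. The snippet below is an illustration on synthetic data under assumed parameters (a 100 Hz sample rate and a 0.7 to 3 Hz band), not her lab's published method or any vendor's firmware.

```python
# Minimal sketch: when the wearer is still, the tiny ballistic motion of each heartbeat
# shows up in a sensitive accelerometer; band-pass filtering around the cardiac band
# plus peak counting yields a heart-rate estimate. Synthetic data for illustration only.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 100.0                                    # assumed accelerometer sample rate (Hz)
t = np.arange(0, 30, 1 / fs)                  # 30 seconds of "holding still"
heart_rate_hz = 72 / 60.0                     # ground truth used to synthesize the data
accel = (0.002 * np.sin(2 * np.pi * heart_rate_hz * t)   # faint cardiac component
         + 0.0005 * np.random.randn(t.size))             # sensor noise

# Band-pass roughly 0.7-3.0 Hz (about 42-180 beats per minute).
b, a = butter(3, [0.7 / (fs / 2), 3.0 / (fs / 2)], btype="band")
cardiac = filtfilt(b, a, accel)

# Count peaks, enforcing a refractory period of about 0.4 s between beats.
peaks, _ = find_peaks(cardiac, distance=int(0.4 * fs))
bpm = 60.0 * peaks.size / (t[-1] - t[0])
print(f"Estimated heart rate: {bpm:.0f} bpm")
```

On this synthetic trace the estimate lands near the 72 beats per minute used to generate it; real accelerometer data is far noisier, but the point of her example stands: even "boring" motion data can carry physiological, and therefore potentially identifying, information.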

Steven Parton [00:21:00] Wow. 

Rosalind Picard [00:21:02] So even this non-identifying accelerometer data, right? We think accelerometer data doesn't carry identity, right? Well, wrong. You know, we didn't know that up front, but now we know it. And the more we learn about our data, and as soon as you triangulate and put multiple pieces together, you can quickly identify people. I just learned who my birth parents were this year, late in my life, you know, through triangulating with some non-identifying information. 

Steven Parton [00:21:32] Yeah. Do you feel like that approach to, I guess, data and even science is going to have to be updated? Because I recently talked to a woman, Sandra Matz. She was basically talking about how, with her work, we're kind of realizing that every bit of data can now be used to identify somebody, because there's just so much data mining that can be done. Eventually you can find your way back to somebody, because if you know where their phone is, if you know where their phone is at night, if you're tracking their GPS, you probably know where they sleep and their address and these kinds of things. And it feels like maybe as time goes on we'll become more comfortable with certain amounts of data being collected about us and not worry as much. So we might feel really resistant now, but in a few years it might be like, yeah, whatever, have all my stress data, you can have it. So I guess, long story short, do you think we're going to see in the coming years a lot of changes in terms of what we consider to be private information or how we ethically manage this data? 

Rosalind Picard [00:22:39] I hope we see changes. Yeah, I think when it comes to the good that can be done with it, there's a reason to get a lot of it and share it with algorithms and experts who care about you, who can help you do good things with it. I worry about places where there are people who don't have your best interests in mind, where they might use it against you. We remember, for example, not that long ago, when Hitler decided to get rid of people who had disabilities. One of my autistic friends, she preferred to be called autistic, not a person with autism, she used to call her blog Ballastexistenz, the German word for the ballast you would throw overboard, the stuff that, the rejects of society. And she would have been one of the rejects of society, because she had lots of different medical conditions. So they would have said, I don't want you as part of our idealistic society. By the way, she taught me so many amazing things, one of the greatest teachers I could have ever had. And what we see is that these data could give people insight into characteristics of each of us that could be used to judge us. And I don't think it's the data that's harmful. I think it's when it gets in the hands of a bad actor who says, you know, people with this stress profile, I think our society would be better off without them, or, you know, this is a profile of epilepsy, or this is a profile of MS, or this is a profile of diabetes, or this is a profile of depression, and we don't want these people. And that's where there's potential for real harm. And I think, you know, we as a society, well, maybe people have different views on this. My view is that everybody has not just value, but infinite value. People are truly amazing, special, wonderful. And I don't care if you're very disabled physically or cognitively or affectively, you're equally wonderful to somebody who has all of those abilities flourishing. And so I see human dignity as a super important thing that bonds us all together. And if any of this technology were used to diminish that, I would think that was a crime. That's wrong. We should be using technology to build better lives for people. I like Hugh Herr here and his Human 2.0, 3.0. You know, there's no disability, there's only bad technology; there's only, we don't have the technology to help me do that thing yet. So we can build physical prostheses, cognitive prostheses, affective prostheses, tools that help, you know, prevent or limit or get rid of disability, or convert disability to superpowers. 

Steven Parton [00:25:37] Yeah, absolutely. Do you feel like, on that note, we are, I mean, maybe you personally, are you gaining a greater appreciation for, I guess, the spectrum of the human condition through your work? And do you feel like even as a field, you know, as scientists, we're making a lot of gains in our understanding about what it means to be human because of this kind of data? 

Rosalind Picard [00:25:58] Yeah, definitely we are. You know, the more I learn about people and how we work, the more I realize I don't know. For starters, it's incredibly humbling. You think you understand something? You know, back to the EDA signal, right? We thought we understood this, and it's like, oh, no, no. When I published the first paper on this, the response was, you know, this could be a little controversial, would you be open to, you know, a special invitation for people to criticize it? And I'm like, yeah, bring it on. Now, I know that upends more than 100 years of thinking about these data, this signal. And, you know, it means a lot of the studies out there are wrong, that their conclusions are wrong. It also helps explain why a lot of them looked like they weren't replicating or got different results. Now we know a lot more. We can actually figure out what's going on a lot better. But do we understand it all now? No, it's still a problem. So the more data we get, the more we learn, the more good I hope we can do. Unfortunately, the more harm some other people can do with just about all kinds of new insights. And we need to kind of engage in conversations like we're having here, engage with society, and make sure that we're bringing along people's knowledge and hearts and humanity with this. Because ultimately, I don't think it's the technology that goes bad. I think it's the people using it in ways that are bad. It doesn't mean that it's all neutral, though. I do think those of us creating it have a responsibility to make it really easy to do the good things and make it as hard as possible to do any misuse, and certainly try to anticipate those misuses and prevent them. 

Steven Parton [00:27:43] Well, not to be cynical here, but a little tongue in cheek: do you feel humans are emotionally intelligent enough to kind of navigate these waters? We're getting into really complicated domains. And are we ready to have our programmers, who maybe don't even have the philosophical or ethical background, creating this technology? 

Rosalind Picard [00:28:06] Well, they're creating it whether we're ready or not. 

Steven Parton [00:28:09] Fair enough. 

Rosalind Picard [00:28:10] You know, we'd sit around on a Friday, like, I'd be running my simulations, my programs, programming, whatever, just whatever I was curious to do, you know, playing in the lab. I'm just going to try some crazy new thing. We want to see what we can do. We are makers. We are creators. We are risk takers. It's our version of Everest climbing, because it's like, can we do it? You know, we're attracted to, you know, how we work, right? We understand how we work by building it. So these are driving, powerful forces. I've never seen, I mean, maybe it's out there, but I haven't seen engineers trying to do evil with this. You know, most people are really trying to do good. However, most of us haven't thought creatively beyond the cool problem we're trying to solve as to how it might hurt people. So, for example, when social networks were set up, you know, they're like, oh, it's so cool, you can connect people. They weren't thinking about how the kid is going to feel on a Friday night or a Saturday morning when they see that their friends were at a party posting all the pictures, and they weren't invited, they weren't told. Right? You know, sometimes maybe you would hear about that later, but it wouldn't live permanently in front of you on the feed, right? So they weren't thinking through the feelings, the social dynamics, the way that lots of these little negative hits bring somebody down, right? And that would require real emotional intelligence and emotional imagination, projection, understanding of social interaction. And really, if you had a more diverse team that had non-engineers on it, right, that thinks through these kinds of scenarios beforehand, maybe that could have been prevented. Maybe that could have been thought about, and something more clever built that, you know, made that kind of pain less likely. 

Steven Parton [00:30:05] Yeah, and maybe not, you know. I mean, part of the whole idea of singularity is that you can't see past the horizon. And some of this stuff is just, you know, truly so complicated and complex that the second, third, or fourth order effects are really hard to track and imagine. 

Rosalind Picard [00:30:21] It is absolutely right that we can't see past certain horizons. Right. We can build technology to get us higher and further toward those horizons. But yeah, I think we have to recognize that we always see as through a glass darkly. We don't see it all, and I don't think we will ever see it all in this world, in this life. I think we have to recognize we have some limits there, and that's humbling. 

Steven Parton [00:30:50] You've also done some work with Sherry Turkle, I believe, who wrote a book I really enjoyed called Alone Together. And you guys, I believe, did work on social robots. So with that in mind, do you think that we're capable of creating true connections and relationships with technology as we move forward? Do you think these things like ChatGPT and the, I guess, emotional understanding that we're coming to are going to enable us to provide some true benefits and meaning in this domain? 

Rosalind Picard [00:31:24] "True" is the interesting word here, right? What do you mean by true, if somebody perceives that they really got a benefit? I mean, in the early days of ELIZA, the Rogerian chat system, super simple AI, people sometimes claimed they got a benefit from chatting with it. Right? It led them in the dialog to think about something that maybe they needed to think about, and they perceived that was beneficial. So by today's standards, that was not a true AI, right? But it was a true experience and it was a true benefit. We can also get true harm from some of these systems. I was chatting with ChatGPT, and I won't claim harm here, but it stated authoritatively things that were made to look right but that were factually wrong, completely wrong. In fact, you could easily look them up online. It could have. It's kind of amazing how stupid it was, that it wouldn't look up online some basic, simple stuff. But, you know, you can't think of everything; the programmers have done some amazing stuff there. And in this particular case, you know, you ask it things like a favorite author's key works or something, and instead of, you know, going to Google Scholar and listing them, it fabricates key works, right? And makes them look like they're real, and they're written by fake authors and they have fake titles that sound like real titles. And it's just astonishing. So, you know, there's room for improvement, right? I think what it's really good at is synthesizing things that are different, you know, like DALL-E. It's really a great tool when you want to synthesize a lot of new possibilities to consider. Like in Scrabble, you know, they always teach you, like, keep moving your pieces around; you'll be more likely to spot the word when you've got it. And these tools help you move more pieces around. They help you see more images, see more combinations of words, see possibilities. That speeds up your ability to do that. And I think there is real value and real good in that. There's also harm if people take it as the authoritative, truthful data that it looks like it's presenting. 

Steven Parton [00:33:47] But do you think that we'll actually build relationships with these technologies as they advance? 

Rosalind Picard [00:33:51] Yeah, I think we can. I mean, we've demonstrated we can build relationships with AIs, even stupid little cartoony ones that, say, apologize, whose voice is some engineer's idea of natural sounding. Right? They're very cartoony and fake. And yet we have crafted interactions with them that people have for several minutes a day over months. Tim Bickmore did this work on relational agents here at MIT as part of his doctoral work, and then continued that work, building out really effective systems helping lots of people, for example with health information and health behavior change. You know, like for months during a pregnancy, or months after being discharged from the hospital. It's a short-term relationship, like months, you know, to achieve something. But they're very effective. They are enough of a relationship that they get you to come back and have some accountability, feel cared for, exchange some information, get some new information, get a reminder of something you're supposed to do, report whether or not you're doing it or need help. You know, those kinds of things we often call a working alliance, like a coach, counselor, therapist, somebody you would check in with. What they are not yet able to do is understand you like a real human being would, or contextualize what you're doing outside of the situation that it's very narrowly scoped to handle. You know, if they pretend to be able to do all that, then they are just not there yet. They'll get better and better at pretending at larger things as we give them more context. But they don't have the knowledge or the feelings or the caring that a real human being does, even though they may have more knowledge in some narrow area. Right? Like we were saying the other day, you know, I may have two or three anecdotes that fit a particular need; the system might have a million. Right. But still, it's got to figure out which ones matter right now. And that's still hard. That's still something that humans are usually better at. 

Steven Parton [00:35:59] Bearing that in mind, knowing that the machines that we do interact with kind of lack some of that, I guess, emotional maturity, and we are spending so much of our time exploring our relationships through things like this. You know, I'm looking at you right now over Zoom, and there's probably some things with body language or just little subtle cues that I'm not picking up on. Do you think, considering how much of our relationships are navigated through this digital medium, that we are maybe losing some emotional maturity as humans? Are we maybe, you know, taking some sandpaper and rubbing away at some of the finer edges of our understanding? 

Rosalind Picard [00:36:38] Oh, what a sad thought. Yeah. Probably, during the pandemic. I was talking with one of my friends last night; she's a professor in another part of MIT. She was bemoaning the fact that over decades of being at MIT, she'd only seen maybe, you know, less than a half dozen students go through this particular end-of-term kind of thing, you know, where they can't take the final, and on and on. And just last semester it was, like, double the number she'd seen in her whole career. And I've seen it too, students from Harvard and MIT coming here really more fragile than ever. And I wonder, since one of the key components of mental health and resilience is social, right? It's having that friend. You know, you're sitting in the class and something goes bad, and you look at the person next to you and they look like they've got it even worse, and you kind of share an empathetic glance and they return it. And, you know, it's all those little moments of what's happening when you're not scheduled for that official Zoom lecture or whatever, right? It's all that other stuff. And that is constantly shaping us, right? And gosh, you know, is it the Japanese who have the expression, you never step in the same river twice? The water is always different, shaping you. It's almost like we've removed the river, right? 

Steven Parton [00:38:00] Well, with that in mind, I mean, are you optimistic about where we're going? Like, how do you feel about the future that we're going into, with the way we are maybe learning about our emotions through wearables, or optimizing our health, or the relationships being, you know, mediated through technology? Like, how do you feel about how things are unfolding in general? 

Rosalind Picard [00:38:22] I think we've taken a major hit with the pandemic. I worry that this generation, these students, are more fragile than ever. I worry about this generation also that's kind of grown up on social media. Not all of them have succumbed to that, but many who have are more fragile. They don't have as many of these how-to-handle-people skills; a lot of them don't. They may be great at handling social media, but not at handling a lot of the complexity of life. I think there's a lot people are doing to try to handle, you know, reduce their anxiety; they have very high anxiety. One key thing is meditation. That does help with anxiety. The problem, though, is it also tends to reduce motivation, which can, you know, mean they're not going to be the go-getters, right? They're not going after change in the world. So, you know, I am a little worried about what we've let happen. I'm still an optimist. I was told the other day that the grapevines that have been most stressed, you know, with drought, wind, and all that stuff, can produce the best grapes. So maybe, you know, maybe this trauma can be turned into something much better. I am an optimist about that, but usually it takes work to do that. And I think as a society we have to commit to, you know, loving one another, even when people are acting in ways that are really, you know, not up to what we expect, and helping out, you know, picking up the slack, helping each other. 

Steven Parton [00:39:57] And this is kind of a big question, but in the realm of infinite possibilities, with a magic wand, is there a particular issue in the realm of computing or, you know, society, the social sciences, right now that you wish you could just kind of fix? If you could do anything, you would love to see this regulation pass, or for us to interact with technology in a different way. Like, have you laid in bed at night and thought, if we could just do this one thing? 

Rosalind Picard [00:40:29] No, it's not just one thing. If we could change people's hearts instead of writing laws, I think that would be better. You know, write the songs of the nation, not the laws, right? Change the culture, change the hearts. Get people to care more about people first, and not as a slogan or whatever. HCI people would argue, you know, are we human-computer or computer-human? You know, human in the center, human first. That's not the point, right? The point is, you know, are you trying to build a better future for people, or are you just trying to get things on your resume that are AI related and technology related, rather than to build a better future for people? I love the African proverb, right? If you want to go fast, go alone. But if you want to go far, go together. We really do need to build teams that have very different kinds of people, who look not only at what we can do, but what we should do, and they bring in lots of different perspectives on the kinds of impact of our ideas. Maybe they talk us down from some of them and help us maybe go a little slower, but a little further, together. 

Steven Parton [00:41:45] Yeah, I love that. I don't want to ask you any more questions to take away from that final note, but I will, before we go, offer you just a chance to give us any final words. I mean, that was beautifully said, but do you have anything at all that you'd like to tell people about that you're working on, or something you'd like to just share with people, a closing sentiment? Anything at all? 

Rosalind Picard [00:42:09] I mean, I'm really excited about the way that technology is helping get people together. For example, people with epilepsy who've been scared to tell somebody that they have it, you know, are now encouraged to tell people, add them to your care app, make sure you're not going to be alone when you have a seizure. Make sure that you use an alert app. You know, full disclosure, as I said, Empatica's one is FDA cleared. But, you know, make sure you're not alone, whatever tool you use. Make sure that you reach out to people and find out what they're struggling with and offer to be there for somebody. And that way, you know, everybody can be a hero, right, to be there to help another person. And I think when we look at when AI and technology are most effective, it's when they're used with people to help people do things better. And they don't always help people do things better; sometimes they help us do things worse. So even if you're not an AI or technology person, you know, we technology people need you to be a part of the conversation going forward. We need you to help envision what could happen, what would be terrible, what we don't want to have happen, what would be better to have happen, and give us that feedback so that we help shape the kind of future that everybody's excited about being a part of. 
