This week my guest is Rama Chellappa, a pioneering AI researcher and distinguished professor at Johns Hopkins University who recently published his book, Can We Trust AI?
In this episode we try to tackle the question posed by Rama’s book from many angles, exploring topics such as the transparency of AI with regard to the black box issue, the accountability issues for when programmers create algorithms that end lives, the reliability of self-driving cars to navigate an ever-changing environment, and much more. Ultimately, Rama puts forth an argument that is often overlooked: in a world of chaos where things will go wrong and nothing is perfect, are we holding AI to a standard that goes above what we even ask of our human peers?
Use code HAI30 for 30% off Can We Trust AI? when you order from Hopkins Press. Order here: https://bit.ly/3tSZ6K6
Host: Steven Parton - LinkedIn / Twitter
Music by: Amine el Filali
Rama Chellappa [00:00:01] All technology has to be looked at carefully, but we need to exploit its advantages. And I do think it will make things better.
Steven Parton [00:00:23] Hello everyone. My name is Steven Parton and you are listening to the Feedback Loop on Singularity Radio. This week my guest is Rama Chellappa, a pioneering AI researcher and distinguished professor at Johns Hopkins University who recently published his book, Can We Trust AI? In this episode, we attempt to tackle the question posed by Rama's book from many different angles, exploring topics such as the transparency of AI with regard to the black box issue, the accountability issues for when programmers create algorithms that end lives, the reliability of self-driving cars to navigate an ever-changing environment, and much, much more. Ultimately, Rama puts forth an argument that is certainly hard to counter. In essence, in a world of chaos where things will go wrong and nothing is perfect, are we currently holding AI to a standard that goes above what we even ask of our human peers? And are we being too preemptive in our judgment of AI, not yet realizing that we are still in the early stages of development? So let's go ahead and jump into these topics and explore these questions more deeply. Everyone, please welcome to the Feedback Loop, Rama Chellappa. All right, Rama. Well, I know that just in the last few days, actually, your recent book, Can We Trust AI?, came out, and I'm wondering, to start, if you can just tell us about the motivation to write that book.
Rama Chellappa [00:01:53] Yeah. You know, I first took a class on AI in spring 1978, when I was a graduate student at Purdue. In those days it was still fashionable to do established fields like pattern recognition and image processing, where, you know, AI had its origins in the 1956 meeting at Dartmouth. Interestingly, by 1978 AI was kind of in a winter, because we had gone through the first round of rule-based systems and so on, and it didn't deliver what people thought it could do. So we were learning it in a graduate course, but it was very fascinating. In those days we were interested in theorem proving, in playing games — you know, the idea that a computer could play checkers or backgammon or chess better than you, all of that. And the idea of, you know, smart cars and things like that, which we had seen in a TV series — you know, the talking car. Not many kids these days know about this. There was a show.
Steven Parton [00:02:59] Yes, right. Right.
Rama Chellappa [00:03:00] Yeah, exactly — the talking car. So anyway, I then took another class in spring 1980, so I have been involved in AI ever since, but more especially in computer vision, which was still in its nascent form in the 1980s. Then it became more formal. So I have been involved in this and I have seen the ups and downs — certainly the winter in the mid-eighties — and since 2012 a lot of data-driven methods have become popular, known as deep learning-based methods. Before, we were big on domain knowledge and all that; now we are big on data — data, data everywhere. But along with that there are, you know, concerns about AI. Of course, there are all these Hollywood movies that show the robot coming to take over your house, your neighborhood, your country, and all that. And some of the questions are real: do we know it will work everywhere? Is it robust enough? It may make some wrong decisions for certain subgroups of people, and there are other concerns. So I thought I would write a book that kind of intertwines my work, you know, in these areas, along with addressing some of those issues and, potentially, how AI can be helpful. It's a qualified affirmative answer to the question, can we trust AI? Johns Hopkins University Press was very helpful, and my coauthor, Eric Niiler, was extremely helpful in coming up with the ideas and how to structure these arguments. I don't know if you had a chance to browse through it. Yeah. So what I wanted people to understand is that AI is here to help — that's how I feel. Is it a perfect technology? No technology is perfect. You know, even a thing like the cell phone — people were very worried that if you used it too much you might get cancer and things like that, if you remember that, right? Yeah.
I don't know where that research is now. So all technology has to be looked at carefully, but we need to exploit its advantages. And I do think it will make things better.
Steven Parton [00:05:27] Yeah. And when you ask the question, then, can we trust AI, and you say things like AI is here to help — are we at the point yet where we're really leaving that kind of decision making up to AI, or are we still kind of asking the question, can we trust humans? In other words, my question is: how much is the AI still bound to the limitations and the behavioral value systems of the programmer?
Rama Chellappa [00:05:58] Yeah, that's a great question. You know, there are four things we talk about when we ask, can we trust AI. Do you follow cricket? No? Okay. All right. Let me give you an example. Years ago, the umpires made the decision and you couldn't question it. They say you are out, you're out — and you are not even supposed to throw your bat and all the tantrums that we see in baseball, because cricket, as you know, is a gentleman's game, right? I would say it is also a gentle ladies' game; there are also women who play. Now, a few years ago they brought in something known as DRS, the Decision Review System. There is a ball tracker, and there is a microphone right where the batsman is, to hear whether the ball hit the bat — because sometimes the crowd is so noisy that the umpire cannot hear it, and the catcher may say, oh, I got him, he's out. Now they can go back and listen to the microphone. So the umpire can still make a decision, and it can be appealed; all of these technologies come in, it shows the replay, and there are cases where that decision is overruled or not. So to me, DRS is helping the umpire, and sometimes it can overrule the umpire, because umpires can make incorrect decisions. So there is a technology that can sit next to you and kind of, you know, look at the data and say, I think this is what is going on — and the humans have to be the final arbiters of the decisions. I won't take the DRS technology so far that the AI always overrules the umpire; I won't do that, because domain knowledge is why people with experience are important, right? Doctors, for example. So human-AI interaction is where the next progress has to be, so we can appreciate what the AI is proposing and whether it makes sense in our current context.
And then we jointly make the decision. That's the way to go.
Steven Parton [00:08:08] So do you think that paradigm will dominate long into the future — humans and AIs working together — or only until something like artificial general intelligence, when AIs become so intelligent that they're basically superintelligent?
Rama Chellappa [00:08:27] You know, as a technology person, as a person who guides the design of algorithms and so on. I'm always fascinated about that potential. But let's just take a simple example as an autonomous car. You think we're going to have 100% totally autonomous car that works everywhere in the world?
Steven Parton [00:08:48] I don't know.
Rama Chellappa [00:08:49] What works in Phoenix may not work in Hong Kong or in London — in London they drive the wrong way, I think, like in India. So all I'm saying is, for this to have general intelligence, the performance has to be good under all circumstances. Humans have common sense reasoning, which we are kind of pre-wired with. There are certain conditions that even as kids we are not comfortable with — we sense them, and so on. That common sense reasoning is a big thing, because it has evolved in us over how many generations, and sometimes we can't even explain how we react; we just react, because of this tendency to survive, to get out of danger. To me, general AI is a concept that we should strive towards, like we want the 100% autonomous car — but in trying to get the 100% autonomous car, we have made significant progress. The lane-changing warnings. The car warns you before you hit the car in front of you. When you reverse, the cameras come on; they can even put a box around a human walking across. If you change lanes without giving the turn signal, it brings you back, because it knows you can't do that. Those are very important features, which came about because we wanted the fully autonomous car, so that we can cook a pizza in the car and, you know, still drive. So that is what I would say: the goal of trying to get general AI is a good goal, and in trying to achieve it we will make enough progress. Even if you get 90% of the way there, it is going to significantly help us with what we have to do. As they say, right, if you shoot for the stars, you land on the moon — and we did land on the moon; I don't know if there was a goal to go to the stars. So that's how I see these things.
But for people who like to predict where the technology is going, there is this idea, this hope, of an all-purpose, all-intelligent AI, and that immediately raises the question of, you know, what it would be. I mean, I like reading about those things, but I'm a pragmatic person. I have to build algorithms. Mm hmm. You know, if I tried to build all-purpose general intelligence, I think I would be retiring and it still wouldn't be here. So, yeah, we have to take smart steps.
Steven Parton [00:11:36] Yeah. And along the way, what do you think about the transparency of these algorithms that we're talking about? Because one of the big concerns, obviously, is the black box issue. So, I mean, do you think we're going to see an increase in transparency, or do you think we'll just trust AI more and more to be making the right decision, despite the black box?
Rama Chellappa [00:11:57] Right. Everybody uses the TV. Mm hmm. A good number of people don't know how it works, but they're quite happy, because they turn it on and they see the sports, they see the news, they see the comedy scenes, you know? But AI is not that simple, because the TV doesn't make decisions about what you should eat — maybe one of these days it will, I don't know. All I'm saying is, interpretability is extremely important. Transparency is very important. This black box idea came with deep learning in particular, but I like to say AI was interpretable before deep learning. Think about decision trees, even k-nearest neighbor, or Bayesian hierarchical models, where we explicitly model the causal relationships between things, put probabilities on them, and do inference — they're all interpretable, right? Now, when you come to deep learning, we have some ways of figuring out which layer of the deep network is contributing to the decision, so we can understand that. More importantly, if you give me an algorithm, I can go and probe it, find where it creates problems, and then fix it — things like adversarial training and other kinds of methods. So let me ask you this question, because people ask me this. You know, an algorithm is an algorithm: for the same input, it gives the same output. It cannot give a different output, right? It's a set of rules. Mm hmm. If you give a human the same conditions, depending on their mood, on how good their day was — yeah, they may not give the same answer. But do we know how to measure how humans make decisions? This is not to dodge the question, but it is a question that we have to ask: what are you comparing with? Hmm? You have an algorithm, which may or may not be transparent, and a human making a decision. I can probe the algorithm; with humans, I have no idea.
So I would like us to understand that an algorithm is a set of complicated rules, and that we should try to probe it. Transparency here means we should be able to say when it has issues. For example, take all the drugs that are approved by the FDA. You know, when you listen to their commercials, sometimes the side effects they list sound worse than the original disease. Yeah, but they have to be transparent. Mm hmm. And still there is no guarantee — there will be a certain number of people who are affected by the drug in an unintended way, because you can't test every drug on 8 billion people, right? Soon the world is going to be 8 billion; you can't test them all. So you do your best, and then you add cautions.
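Rama's two claims here — that pre-deep-learning models are interpretable because every rule can be read, and that an algorithm is deterministic, giving the same output for the same input — can be sketched in a few lines. This is a toy illustration with hypothetical rules and thresholds, not anything from the book:

```python
# A hand-written rule-based classifier for a toy loan-approval task.
# Every branch is explicit, so the model is interpretable by inspection.
# (Hypothetical feature names and thresholds, for illustration only.)

def approve_loan(income: float, debt_ratio: float, years_employed: float) -> bool:
    """Each rule can be read, audited, and probed individually."""
    if income < 30_000:
        return False                      # rule 1: income floor
    if debt_ratio > 0.45:
        return False                      # rule 2: debt-to-income cap
    if years_employed < 1 and income < 60_000:
        return False                      # rule 3: stability check
    return True

# An algorithm is deterministic: the same input always yields the same output.
sample = (55_000, 0.30, 3.0)
assert approve_loan(*sample) == approve_loan(*sample)
print(approve_loan(*sample))   # True: passes all three rules
```

A human loan officer given the same application twice might decide differently depending on mood; this function, as Rama notes of algorithms generally, cannot.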
Steven Parton [00:15:08] So it sounds like you're basically suggesting that we just accept that life has chaos and limitations, and that applies to everything, including AI — so we shouldn't be unfairly harsh in our judgment of AI.
Rama Chellappa [00:15:20] That's right. For example, every year several thousand people die in car accidents because of various issues. But the moment an autonomous car swerves to avoid some, you know, cat — no, I mean, I like all animals, I don't want to single out any animal — and something happens, or it hits a lamppost, my God, everybody gets worked up, right? So I think that's where we have to be. The technology is getting better — technology does get better, and so on. We have to give some time for the technology to be fully developed, and it will continue to develop and get better.
Steven Parton [00:16:11] And on the subject, I guess, of self-driving cars — you've talked a lot about that, and obviously computer vision was, you know, a huge part of the self-driving car movement. Can you talk a little bit about the future of self-driving cars — where that's at, and where you think it's going?
Rama Chellappa [00:16:29] Yeah. I think the idea for the self-driving car came about with one of my mentors, Professor Azriel Rosenfeld. I still remember, in the early eighties, he was kind of imagining: what if we just put some cameras in a car and made the car go around the campus? It was called the Autonomous Land Vehicle project, and it started in 1985. I started getting interested in the early nineties in unmanned ground vehicles — bulky vehicles with big sensors and so on. I just want to give a little bit of history, you know. In those days we would have inertial units in the car, because we thought if we know where the car is exactly, then we can combine that with the video images and do better processing — that's called sensor fusion. So, you know, now we know exactly where we are, right? This is how it started, and we had bulky radars and all that stuff. And now, I think, you know, Tesla is everywhere. Tesla has done it using just video information and a bit more sensing, I believe, that lays out 3D information. Still, I don't own a Tesla, and people are surprised, because I've been working on this for some time. I think it will get better; it will have more functionalities. But what is probably going to happen is you will have a heterogeneous situation: some Teslas with automatic skills, and then somebody like me who likes to drive the old-fashioned way. How are we going to interact on the road? Now what you can think is: the machine is the Tesla, and my car is human, because it's driven by a human. So we are coming back to the same old problem of human-AI interaction. You should not view a Tesla, when it is fully automated, as just a car — it's an AI system; it has its own way of doing things. And then I am driving my gas-powered car, which is not automated; I am a human. So the same situation comes up. But the interesting thing here is that everything is instantaneous.
If I am driving at 65 miles an hour on the freeway, and a Tesla is going at 70, and there are some other Teslas around me, it's a very highly dynamic thing — how do we all interact with each other? At a global level, looking down, everything looks very stable and safe; locally, it's like Brownian motion — well, not quite, right? You can't do Brownian motion on freeways; the cops will give you a ticket. You've got to follow the rules. But between these two kinds of agents — the AI is the Tesla, and I am the human — dynamically, how do we change across lanes and make sure we are safe? There was a project back in the mid-eighties called PATH, with Caltrans and UC Berkeley, where the idea was to put sensors on the road and have those sensors help the cars form a convoy and follow each other, and so on. I think now we are spending more on the sensors on the car, because it's a lot of work to dig up I-95, you know, and I-495 and the 405 to put the sensors in. So we're relying on the sensors on the car, but every car has to know where it is with respect to everything else in the world. Ideally, they could all go like the race cars — you see that? They look like they are separated by just two feet, and they're all going at like 200 miles an hour. I say, oh my God. Of course, when things don't go right, it's catastrophic. But you see how closely those humans are driving. So if we really become very automated, the cars could go just, you know, six inches apart — wow, that would be a joyride, but maybe not for me. Can this be done gradually over a period of time, or do we one day all say, that's it, it's all AI cars now? That transition — think about the people, how many of them are signing up today, what the issues and challenges are, and so on. So I think it's eventually going to get better.
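A back-of-the-envelope calculation shows why the six-inch platooning Rama describes needs machine reaction times: during the reaction delay alone, before braking even begins, a follower covers speed times reaction time. The reaction-time figures below are illustrative assumptions, not numbers from the episode:

```python
# Gap a following car burns through during the reaction delay alone,
# before any braking happens. Assumed reaction times: ~1.5 s for a
# typical human driver, ~50 ms for a sensing-to-actuation loop.

def reaction_gap_m(speed_mph: float, reaction_s: float) -> float:
    """Distance traveled (meters) before braking even begins."""
    mps = speed_mph * 0.44704           # mph -> m/s
    return mps * reaction_s

human = reaction_gap_m(65, 1.5)         # human driver at freeway speed
machine = reaction_gap_m(65, 0.05)      # automated control loop

print(round(human, 1), round(machine, 2))   # 43.6 1.45
```

At 65 mph a human's reaction delay alone eats up roughly 44 meters, while a 50 ms control loop eats about a meter and a half, which is why sub-foot separations are conceivable only when every car in the convoy is automated.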
Maybe there will be some applications like, you know — I've seen even trucks, 18-wheelers, automated; I've seen demos of that. And then they say it cannot go inside the city; it's going to stop somewhere at an off-ramp. I say, fine. Then you grab everything from that truck, put it in little trucks, and those go inside. That looks like a good engineering solution, because point-to-point is where we wanted automation anyway. So, yeah.
Steven Parton [00:21:34] I was going to say, it seems like it would be a hard transition to make, to get to that fully automated path, just because if you have a person driving a car, having an AI car come up to you just six inches away — I don't know how you can make that switch without switching the whole thing at once. It seems like a recipe for disaster if you don't move everybody to AI at once.
Rama Chellappa [00:22:00] Right, yeah. It's like saying we all speak English today and tomorrow we all start speaking French, right? Yes. Yeah, that's right. But it's a good thing to strive towards, because in trying, you are going to improve the networking technology, you're going to improve the processor technology so this can be real time, and you will understand how tens and hundreds of dynamically moving things interact so that everybody, you know, gets to a place safely. I mean, those are all beautiful problems. You might have heard about swarms — drones flying the way, you know, biology does it. Haven't you seen hundreds of birds beautifully flying, and suddenly the leader decides to take a left and they all go left, or they all go right? Biology has figured it out. But it's a little more complicated for us yet.
Steven Parton [00:22:52] You just made me think of something with that statement — biology has figured it out. Do you think that the direction AI is going should be a replication of the human brain and how it functions, or do you think it's going to take an entirely different route? You know what I mean? Are we going to replicate human intelligence, or is it going to be a new form of intelligence?
Rama Chellappa [00:23:16] Oh, this is a great question. It has been discussed for 60, 70 years. In fact, there's an old book I read — I forget who wrote it — that said if you had to design an electronic system like the human brain (they were talking about vacuum tubes), it would be as big as the Empire State Building, and it would require Niagara Falls to cool it. Right. Of course, our brain is so efficient — how many watts does it expend? — and it makes all these decisions. There is a whole field of bio-inspired computing, you know, neuromorphic computing, which has been developed in parallel to regular CPU- and GPU-based computing, and I have many distinguished colleagues in my department at Johns Hopkins who have been pursuing this. It's, again, a noble goal. What it will do is help us optimize the power consumption of these GPU clusters. If you look at data centers — you know, if you come to Ashburn, a neighborhood in Virginia, there are just huge buildings that sprang up in the last five years. What are these buildings? Data centers. They consume a lot of power, right? This is actually raising Green AI as a discipline now. If that kind of power had to be spent by our brain, I think our brain would have melted into fluid a long time ago. So there is a proof of existence that complicated decisions can be made by integrating sensing with computing with reasoning in the most efficient way. Now, should we try to replicate that? Somebody would say yes; somebody would say be inspired by it. I want to be inspired by it, because we can do experiments on how we perceive things and how we compute things, and so forth. Some would argue the neural network's hierarchical architecture itself is somewhat brain-like — not exactly, but hierarchy is important in visual processing.
So we have these deep networks, and nonlinearity is very important — the processing in the neurons is nonlinear. So we have ReLU, we have sigmoid nonlinear functions in our networks. My advisor, again, maintained that signal processing would help computer vision only if it stopped being linear and Gaussian, and so on. So those are the good inspirations we derive, and the study of the brain is a lifetime endeavor. So definitely I would go with bio-inspired computing — definitely — because of great advances: less power, more generalization, being able to handle surprises and such things.
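The point about nonlinearity can be made concrete: stacking purely linear layers collapses to a single linear map, so depth adds nothing until a ReLU or sigmoid is inserted between them. A toy one-dimensional sketch, with arbitrary coefficients chosen only for illustration:

```python
# Why nonlinearity matters: two linear "layers" composed together are
# still one linear function, while a ReLU between them breaks that.

def relu(x: float) -> float:
    return max(0.0, x)

def linear_stack(x: float) -> float:
    # two linear layers: y = 2x + 1, then z = 3y - 4
    return 3 * (2 * x + 1) - 4          # collapses to the single map 6x - 1

def relu_stack(x: float) -> float:
    # same layers with a ReLU in between: no single line matches this
    return 3 * relu(2 * x + 1) - 4

# The linear stack is exactly one linear function...
assert all(linear_stack(x) == 6 * x - 1 for x in (-2, 0, 3))
# ...while the ReLU stack bends at x = -0.5.
print(relu_stack(-2.0), relu_stack(3.0))  # -4.0 17.0
```

The same collapse happens in any framework: without the nonlinear activations Rama mentions, a hundred-layer network is mathematically equivalent to one matrix multiply.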
Steven Parton [00:26:26] And while we're on the topic of biology — a lot of what you talk about obviously deals with the impact of artificial intelligence on the medical industry, on diagnosis, and things like this. Could you maybe talk a little bit about what's happening in that realm? Because I know it's a very robust realm, but I would just love anything that you can share.
Rama Chellappa [00:26:46] Oh, this is what I like to say: you need 1,001 AIs for medicine. Look at how medicine is practiced. Is there one doctor for everything? Well, there is the family practice or internal medicine doctor, but then you have all the specialties. So AI may have to be structured like that. I don't think we have to mimic it exactly, but we may have to look at it that way, for a simple reason: every disease has its domain knowledge — that's why we have the specialties — and every disease has certain kinds of data that go with it, that people look at. There is also common data, like your vitals, more high-level things that are common to everything, and some inferences can be drawn from that, right? But once you become specific to a particular disease, it has to be different data, different domain knowledge, a different way of diagnosing it. That's why I tell my doctor friends, I think one AI is not going to cut it — really, a thousand. 1,001 — when I said it, I liked that number. I don't know if there are that many specialties, I'm not sure. Obviously, what works for one type of cancer is not going to work for another type; even within cancer there are differences. So where we are right now, domain knowledge is still there. Anybody who tells you you don't need doctors — that's not going to happen. Now, the doctors are going to be helped by mining their data: electronic health records, diagnostic images, previous visits, everything becoming more and more quantitative. At Hopkins we have what is called PMAP, the Precision Medicine Analytics Platform, where all the data can be kept, and then you can run your algorithms on it. As you know, medical data is very heterogeneous, right? Lab reports come with numbers, then the images come, and then you have the conversations — natural-language conversations between the doctor and the patient.
So multimodal AI is becoming very important. The other issue we have — let's take pathology, for example. Every lab has different procedures. Many labs — every microscope is different, every camera is different, in how they collect data, the quality of the data, and all of that. So in pathology, a sample from one lab will look different from a sample from another lab: the tissue is the same thing, but the process is slightly different. So if I train using pathology lab A, will it work in pathology lab B? The samples won't be universal. That is something we are looking at. The last point I want to make in medicine is the interactions among the doctor, the patient, and the AI. Imagine the AI sitting next to the doctor. Hmm. Okay, you might have seen the movie A Beautiful Mind — there is one guy who shows up, who seems to haunt Nash. The AI is going to be like that. So now you have to understand the interactions among the three. A patient may trust you as a doctor when you say, based on this, this is what we have to do. But now you say, you know, my AI buddy is seeing something in the patient's body. Well, your AI buddy didn't go to medical school, didn't do residency. So the AI has to provide confidence, so the doctor can say, yeah, I know what this system is saying and I'm comfortable with it — I think this is a good decision. Those are the things that are going to come into play.
Steven Parton [00:30:34] What about the ethics and accountability in the medical space? Because I studied computer science at university, and one of the cases we discussed was a form that basically determined how much radiation was given to a cancer patient. There was a bug in the code for the form: if you highlighted and deleted, rather than just hitting backspace to delete, it would leave the previous numbers there, so whatever you entered would be appended to the old number. Instead of 30 units of radiation, the patient would get 3030 units of radiation. Who is accountable in such a situation? You know, in the realm of engineering, if you build a bridge, there's a lot you have to go through to make sure you can build that bridge. But there are a lot of programmers in the medical field who can just write code and ship software, and there's no accountability, no licensing. How do we navigate that ethics and accountability as this becomes more advanced?
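The bug Steven describes can be simulated in a few lines. This is hypothetical field-handling logic reconstructed from his description, not the actual system's code:

```python
# Sketch of the dose-entry bug: hitting backspace cleared the field,
# but highlight-and-delete silently failed, so the new digits were
# appended to the stale ones.

def read_dose(field: str, keystrokes: str, cleared: bool) -> int:
    """Return the dose the machine would deliver from the on-screen field."""
    if not cleared:
        field = field + keystrokes      # bug path: stale "30" survives
    else:
        field = keystrokes              # intended path: field was emptied
    return int(field)

# Operator intends 30 units; the previous entry was also "30".
assert read_dose("30", "30", cleared=True) == 30       # backspace: correct
assert read_dose("30", "30", cleared=False) == 3030    # highlight-delete: 100x overdose
```

The failure is invisible to the operator if the display refreshes with only the new keystrokes, which is what makes this class of input-handling bug so dangerous in safety-critical software.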
Rama Chellappa [00:31:38] It has to be regulated in some sense. Accountability is very important, and technology that is not good for everybody it's applied to is not going to be accepted; there will be consequences. Now, the particular case you mentioned probably happened before AI came along, because it seems to be a programming issue, a software issue. You know what happened with the most recent Boeing — and Boeing is the best builder of aircraft. What happened? There was a sensor issue, and the pilots did not know how to handle it, and a few of the MAX planes crashed, right? In this day and age, despite Boeing having built these things going back to the sixties and seventies — I think that's when the first 737 came, or something like that — you can't take it for granted. All I'm saying is, it's a complex system, like at Boeing, but Boeing is accountable. The NTSB and the FAA came and said, come on, guys — they grounded it, and Boeing took a big, big hit. There was some conversation about how this happened; from what I can remember, the FAA had actually been letting Boeing come up with its own certifications and so forth. So I think, you know, we have to look at whether that was the right thing to do. But it was fixed. I'm actually taking a 737 back from California to Washington, D.C., and I won't think much about this — if the plane jerks a little bit, maybe it will come to mind. So the example you gave was before AI, and with AI coming in, there are more of those things that can happen. So there has to be oversight — you know, the FDA has approved this much software for AI in medicine; I hope somebody is really looking at it. But look at it this way, Steven: even the non-AI prescription medications, like we talked about before — do they help everybody? No. And that's not a bad guy doing something.
Even with medicine, for some people the reactions are different, right? So what happens there? Immediately you record it, and you make sure — look at all the advertisements: they say, if you are having this problem, that problem, that problem, please do not use this. That's accountability, because they have done the testing. We have to do the same. Like I told you before: if it rains, don't use my vehicle detection — I declare it upfront. That way I know how the user is going to use it. If I don't, and the rain comes, the user comes back saying it's not working — what should I do? So I say, okay, I will declare upfront: don't use it when this happens or that happens. That builds trust, right? The only issue comes when vendors just throw out software updates. For example, we get updates for our software all the time; sometimes an update will mess up something you have, and they may not tell you — the updates just keep coming. I don't know, sometimes these updates are good or bad; after a while I don't even want to respond, and I ask somebody, and they say, oh, that's okay — because if you skip the update, maybe some kind of virus comes and wipes out your laptop. So you have to trust the brand name and so forth. To cut the story short, accountability is extremely important. But there is a limit: one cannot span the entire space of things that could go wrong. You cover most things, but if there is a failure, there must be a process in place to immediately grab it, fix it, and put the fix out.
Steven Parton [00:35:46] Yeah. And this feels like it gets to one of the big topics of the book as well, which is your point that we're still in the very early phases of AI, and our regulatory systems simply haven't caught up yet. So can you maybe talk about some of the concerns you have — where things need to be more regulated, or some of the regulations that you would like to see happen in the future?
Rama Chellappa [00:36:11] Yeah, sure. I'm also a professor in the School of Medicine at Hopkins, in the biomedical engineering department. Anything that involves AI and medicine has to go through the rigorous evaluations that the FDA does for regular medicines — except this is software. So they have to have the right type of people, people who understand it. And this is becoming an issue not only in health but also in the Department of Defense: getting the right workforce, a knowledgeable workforce. For example, if I had a company building a system for the U.S. Navy or Air Force, I'd say, hey, this is a very intelligent system; it's going to do X, Y, and Z. Somebody on the receiving side should be able to check it out and put a stamp of approval on it. Right? So that's important. So regulation and accountability are all kind of coming together. In health it's obvious: AI has to be regulated just like the FDA regulates medicines. Likewise with smart cars — there are Tesla and other cars, and sometimes something happens and the NTSB gets involved; they want to understand why it's happening. A car has to be certified that it is safe to drive. And most cars, as we know, are accepted — they don't just fall apart; the tires don't come off and so on. So we definitely need people who are knowledgeable about the functioning of AI systems, including under adversarial conditions. That is another issue we talk about: can you hack a system and make it do the wrong thing? It wants to go left when it's only safe to go right? Sure. We are working on those problems. So that's where we really need regulations: in every aspect of AI that has something to do with our day-to-day lives.
Now, those regulations will be different for different segments — it's not one set of rules. If you really want one set of rules, it becomes very general. For example: "it should work everywhere." Now tell me, how do I define that? Right? Okay, that's what you want: I want the Tesla to work in Bombay — Mumbai — and Hong Kong and London and Phoenix and Indiana. Okay, what does that mean? You have to provide constraints for it. Sometimes, if you are risk-averse, you make this lowest-common-denominator kind of argument. Right? Keep it very, very simple. But then you're not going to fully realize the benefits. So now companies that develop AI have ethicists. It's a big field — a lot of people from bioethics are contributing to AI ethics. We have the Berman Institute at Hopkins, where a lot of research looks at AI ethics issues: ethical data collection, ethical loss functions — because we use them to train AI — when do we tell people it won't work, when it works as a good-faith argument, and so on. But regulations — if you look at our politics, regulations are considered to be bad. Of the two major parties, one kind of looks like it wants to regulate and make things safe; the other party says that kills business, that small business cannot handle even simple paperwork. So we have to face that too. Okay. So.
Steven Parton [00:40:07] Yeah, are you concerned about the intersection there of government and technology? It makes me think of governments using AI for facial recognition, psychological profile building, surveillance capitalism, et cetera. And I think a lot of people are concerned — in the sense that Cambridge Analytica seemed to manipulate public opinion — that these tools are going to be used against us. Are you concerned about things like this with facial recognition and surveillance capitalism?
Rama Chellappa [00:40:41] Definitely I'm concerned, because, as I say, a technology that doesn't work for all is not going to be good. And we do know from 2018 — you may be familiar with the MIT Media Lab work — when they tried gender classification, it didn't work well for dark-skinned males and females, and it worked well for lighter-skinned people. That raised a lot of concern, and immediately companies had to react: some pulled their software, some improved their software, and so on. We build systems too, and we were concerned: is this happening to us? But we have a method to probe our system to see where the bias is and how much there is — I have a couple of papers on this; I'm happy to send them to you. We saw that our system, for example, showed bias with respect to skin tone and bias with respect to gender. So we then developed methods using adversarial training and knowledge distillation to mitigate the bias. The typical approach is to throw everything out and start afresh, but there's no guarantee that when you do that you'll end up with a network as high-performing as the one you had before. So we said, let's take the best-performing system and reduce its bias. And we are able to do that. You give me a system; I will know what it may be sensitive to. Then I can go and probe: how sensitive are you to this factor? How sensitive are you to that factor? Ethnicity, skin tone, age, gender. And then I can improve it. Am I going to have zero bias? I can tell you: if you don't want to make any errors, then you don't make any decisions. That is how decision theory works — if I'm afraid to make errors, I can have zero bias, but it may not be a performing system. So how much bias do you want to accommodate? At least if I come and tell you my system has this much bias, then when it makes a decision, the final human adjudicator knows — take it or leave it — and makes his or her own decision.
If it is done this way, people's concerns will be assuaged. They'll say, okay, somebody is on it. And this is happening — not just my group; other groups are working on this. Fairness in AI — using optimization approaches, adversarial training, and so on — is a very active area. There's even a conference, called FAccT, I think, that addresses these sorts of things. And I like to answer people who ask me these questions with a question: with AI, I can quantify the bias. Do you think we have metrics for quantifying human bias?
Steven Parton [00:43:47] No.
Rama Chellappa [00:43:49] Again, this is not to point fingers. As they say, you shouldn't answer a question with a question — but I did answer your question to some extent, so I think I can ask one. We don't know.
Steven Parton [00:44:02] I mean, it's a good point. Like you're saying, a lot of the issues that we're concerned about with AI are the same issues that we're concerned about with humans. But at least in the case of AI, we usually have data to understand it, whereas with humans it's a true black box and we could just be lied to very easily. So, I mean, it's a great point.
Rama Chellappa [00:44:23] I'm going to pick up on what you just said. My catchphrase is: AI does not lie like humans do. People say AI lies just as humans do — no, that's not true. AI is not like humans; humans may lie, though.
Steven Parton [00:44:41] Well, Rama, we're coming up on our time here, and I want to respect yours — I know you have a lot going on. But I'd like to give you a moment to share some closing thoughts, maybe tell us a little more about the book, or anything else you'd like to tell people.
Rama Chellappa [00:44:54] Yeah, sure. I would like to encourage readers to read the book cover to cover, and hopefully the answers provide some assurance about how AI can better their lives. Hopefully we have given them some comforting arguments so that they don't have fears about new technology. Every new technology brings in certain concerns — when the steam engine was introduced, people said the cows along the track might not give milk anymore. There are lots of stories like that. So they should not be too concerned, and the best way not to be too concerned is to know about it. Right? If you don't know about it, then you're going to believe what people say and get worried. Just go and pick up a book. It doesn't have to be my book — I hope it is my book — but go pick up a book. There are a lot of fantastic books, depending on your level. If you studied computer science in graduate school, the book by Stuart Russell and Peter Norvig, the fourth edition, is an amazing book. I use it for my class. It's a great book, a beautiful book — they have updated the earlier editions with deep learning and other things. That's what I would tell everybody: just read, be informed. Knowledge is power. Knowledge dispels ignorance; it dispels the myths. What worries me is students saying, who reads books these days? I say, oh, don't tell me that — I'm a professor; you have to read books. I love books. I tell them, books are my great friends: I talk to them and they talk back to me. So I think knowledge is what dispels these fears, however much I can say. They should read chapter five — we are very optimistic about how AI can be used for disaster relief and for climate-change-related issues. Climate change is real. And even in chapter two, we have a great example from one of my colleagues' work — exactly how she's able to help with sepsis cases and so forth.
So you're seeing more and more of these examples. AI is going to be your friend; it's going to be there to help you and so on. I think it's going to be okay. So that's what I'd say.
Steven Parton [00:47:18] I love it. I love that. Optimistic note. Rama, again, thank you so much for your time.
Rama Chellappa [00:47:22] Yeah, you are welcome, Steve, and have a great day.