
13 - Beena Ammanath, Deloitte: How do you build trustworthy AI?

Would you trust an AI that told you when a plane engine was going to fail? What if the information gathered by that AI lost you your job?


On this week’s episode, we discuss ethics, bias, fairness, and how building trustworthy AI isn’t a one-size-fits-all endeavor with Beena Ammanath, executive director of the Deloitte AI Institute.


Read on for the transcript!


Monte 0:07


Hi, I’m Monte Zweben, CEO of Splice Machine. You’re listening to ML Minutes, where cutting-edge thought leaders discuss Machine Learning in one minute or less.


This week, our guest is Beena Ammanath. Beena is the executive director of the Deloitte AI Institute and founder of the nonprofit Humans for AI. Beena thrives on envisioning and architecting how data, artificial intelligence and technology in general can make our world a better, easier place to live for all humans. Welcome, Beena.


Beena 0:42

Thank you. Thank you for having me, Monte.


Monte 0:44


Thank you for coming. And we're very excited about our talk today. Tell us about your journey, though: how did you get to where you are now?


Beena 0:53


Yes, that's a great question. I think I've had a very traditional computer science path. I studied computer science, did my bachelor's and master's, and then did an MBA in finance. I started out as a SQL developer, a hands-on programmer, and grew through the ranks of the data journey, from DBA to database developer. Then came the phase of BI and data warehousing, and now we are in the era of big data, machine learning, and artificial intelligence. My career has always been anchored in data, but I was very curious about the different industries where data is being used, how it is being used, and the nuances around it. So my career spans from telecom, to industrial, to services, to hardware, to financial services. It's been a fascinating journey so far.


Monte 1:54


Awesome. Well, tell us about your work with Deloitte. What's the problem you're trying to solve with the trustworthy AI framework?


Beena 2:03


Yes, that is one of the pressing challenges in front of AI today: how do you solve for AI ethics? Ethics tends to be a very buzzword-y topic that goes down this rabbit hole of bias and fairness, which I do think is very important. But remember that ethics is a much broader topic, depending on the industry you're in and the use case you're trying to solve. If your AI is predicting when a factory floor machine might fail, fairness and bias are not as crucial as the reliability and robustness of the algorithm, right? So what the trustworthy AI framework does is provide the principles for how to think about ethics, and then how to implement them across technology, process, and people in large organizations. I've been part of several ethics discussions, and my frustration has been: how do we solve for it? We really need to solve for it for AI to succeed.


Monte 3:07


Excellent. I have a question about ethics for AI, and that question is: how is AI ethics different from ethics for human experts?


Beena 3:19


That's a great question, and I think you've articulated it so well, Monte. When we encode human intelligence into machines, into automated systems, our ethics don't necessarily carry over. And ethics is also very individualistic and very personal. Let's stick to biases, which is an easy topic to understand: we all have our biases. That's what makes us human, and that's what makes us interesting. We need those biases. But if my biases are encoded in the AI solutions that I build, that's not a good thing, because they get amplified at a scale that I, as an individual, could never reach. So that's the challenge: how do you encode human ethics into systems in a way that is unbiased, or that is relevant across all of humanity?
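A minimal toy sketch of this amplification effect: the data below is entirely synthetic, but it shows how a bias encoded in one person's historical labels gets repeated by a trained model on every future decision it makes.

```python
# Toy sketch: a single reviewer's bias, encoded in historical labels,
# is reproduced by the model at scale. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
skill = rng.normal(0, 1, n)     # the signal that should drive the decision
group = rng.integers(0, 2, n)   # an attribute that should not matter

# Biased historical labels: the reviewer quietly penalized group 1.
approved = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), approved)

# The model now applies that one person's bias to every future case:
for g in (0, 1):
    p = model.predict_proba([[0.0, g]])[0, 1]
    print(f"group {g}, identical qualifications -> approval probability {p:.2f}")
```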


Monte 4:28


That's great. Can you give an example of an AI ethics problem that you've solved with your team?


Beena 4:38


Yes, of course. Here is an interesting one, and it's why I say we need to think about ethics a little more broadly than just bias and fairness. This was a solution we were building for an aircraft engine manufacturer, and it was really about being able to predict when a jet engine might fail: look at all the historical data, look at the service records, and predict when that engine might fail. As we went through the data discovery phase, we saw that we could not only identify how much time was left before an engine failed, but also see how the pilot was flying the plane, even though that was not the original intent. We could see whether the pilot was flying the way they were taught, or whether they were hitting thrust sooner and harder, which caused engine deterioration and, in turn, engine failure. So this became a problem where the data could impact the performance review of the pilot. And that's where you need to get the FAA and additional authorities involved: is this data shareable? What should we do with it? You might be trying to solve a specific problem, but once you start looking at the data, you might find additional correlations. And what do you do with that?
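To make the predictive-maintenance setup concrete, here is a minimal sketch of a remaining-useful-life model trained on synthetic per-flight features. The feature names, including the thrust-rate proxy for pilot behavior, are illustrative assumptions, not details from the actual engagement.

```python
# Minimal sketch, assuming synthetic per-flight features: train a regressor
# to estimate remaining useful life (RUL) from service-record data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2_000

# Hypothetical sensor features pulled from service records.
exhaust_gas_temp = rng.normal(650, 30, n)   # degrees C
vibration = rng.normal(1.2, 0.3, n)         # arbitrary units
thrust_rate = rng.normal(0.5, 0.15, n)      # proxy for how hard thrust is applied

# Synthetic ground truth: hotter, noisier engines flown more aggressively
# fail sooner. Note thrust_rate doubles as a window onto pilot behavior,
# which is exactly the ethical side channel described above.
rul_hours = np.clip(
    5_000 - 4 * (exhaust_gas_temp - 650) - 800 * vibration
    - 2_000 * thrust_rate + rng.normal(0, 150, n),
    0, None,
)

X = np.column_stack([exhaust_gas_temp, vibration, thrust_rate])
X_train, X_test, y_train, y_test = train_test_split(X, rul_hours, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print(f"MAE: {mean_absolute_error(y_test, model.predict(X_test)):.0f} flight hours")
```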


Monte 6:08


That's very interesting. So what you revealed was that just by solving one problem, diagnosing the remaining useful life of an engine, or predicting an outage, you were actually shining a light on the performance of the pilot, which had a human impact on their performance reviews. That's a really intriguing overlap. One of the things we've been looking into here on our podcast is the set of tools that people are using for their work. Many of our guests are developing machine learning systems, but you're looking at AI ethics. I'm wondering, are there any specific tools that your team has been looking at or deploying to help with this particular area?


Beena 7:01


Yes, absolutely. The challenge with AI ethics is that it's not going to be a single tool; that's the whole challenge with the topic. It has to be more nuanced than putting it all into one bucket. The ethical challenges from a healthcare perspective will be very different from the ethical challenges around jet engine prediction. Even within healthcare, the ethical challenges for hospital management versus patient care are going to be different. We think about solving it across three dimensions: technology, process, and people. Technology is where the tools come in, and right here where we live there are hundreds of startups trying to solve this problem. But I think it needs to be more nuanced: you need to think about the domain and the context where the solution is used, and have the relevant tools. So we evaluate those tools and recommend them depending on the industry and the sector. The second dimension is process, which is where we really look at the governance and the controls that need to be put in place. And you can have all of that in place, but if you don't educate the workforce to raise the alarm, to have that ethical mindset, and to really align on what AI ethics means for the company, it won't work. So those are the three dimensions we think through.


Monte 8:26


Great. So technology, process, and people are all important elements of controlling the impact of AI on an organization. In your efforts pursuing trustworthy AI and delving deep into AI ethics, what's one specific challenge that you faced along the way that you might share with our listeners?


Beena 8:57


Yes, the biggest challenge, as I see it, is the misunderstanding that you can solve for ethics as one size fits all. Unless and until we get more into the details and think about ethics the same way we think about policies or regulations, which differ by industry, we won't get there. We have to be able to think about ethics from these domain-specific aspects. It tends to be a headline topic, and I truly believe we need to do a lot of work. We know how to build the technology; we just have to make sure we think about not only all the good ways it can be used, but also all the ways it could go wrong, and put in the right guardrails. So I think getting more nuanced about ethics is the biggest challenge I see broadly, and it's what is actually stopping us from making progress.


Monte 10:06


Wonderful. Let's dig in a little bit to some of the other examples; you gave a great example of AI ethics in the context of a diagnostic problem for an engine. How about moving to the healthcare examples? Can you give us an example of an AI ethics problem in healthcare with regard to patient outcomes?


Beena 10:28


Yes, wherever there is a human involved, there is always a challenge around bias and fairness. You and I both know that machine learning is based on historical data, and historical data tends to be biased in itself; we've seen that repeated again and again. So in healthcare, there are assumptions being made. We've seen examples where the kind of healthcare services you get is sometimes matched to your zip code, matched to your demographic, and not as relevant to what you might be experiencing as an individual. There's a recent article I just read arguing that we need to get more focused on listening to the patient, which can actually help remove the human bias of looking at a person and forming an opinion. So one of the biggest challenges in healthcare, especially from a patient care perspective, is bias, and then also the need to drive transparency: how is an AI system arriving at the recommended treatment? What's the path to get there?
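On the transparency question, a minimal sketch, assuming an inherently interpretable model: with logistic regression you can at least surface which inputs pushed a treatment recommendation one way or the other. The features and data below are invented for illustration.

```python
# Minimal transparency sketch: an interpretable model whose coefficients
# show which inputs drive a recommendation. All features are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 1_000
features = ["age", "blood_pressure", "reported_pain"]  # invented inputs
X = np.column_stack([
    rng.normal(55, 12, n),      # age in years
    rng.normal(120, 15, n),     # systolic blood pressure
    rng.integers(0, 10, n),     # patient-reported pain score
])
# Synthetic label: whether a more aggressive treatment was recommended.
y = (0.03 * X[:, 0] + 0.02 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 1, n)) > 5.5

model = LogisticRegression(max_iter=1_000).fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>15}: {coef:+.3f}")  # sign/size = direction/strength of input
```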


Monte 11:41


Excellent. Do you have any tips for how we can avoid bias in a patient-outcome type of application, and how we can utilize patient-reported outcomes in that listening mode?


Beena 11:57


Absolutely. This is where having our healthcare population educated about AI is going to be super helpful. And the tip is, obviously, you need to start with a trustworthy AI framework; it's all about putting trust at the center of every solution. To be successful at ethics, you have to be able to build systems that are trustworthy, and address it using certain tools that we have. One of the things we also do when building an AI solution is proactively think about the ways the algorithm could go wrong. Will this work for every demographic? We actually have an assessment tool that can evaluate what biases can creep in. It's a set of questions you answer, and at the end of it, it helps you identify the gaps you haven't thought about, and then put in those guardrails.
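A minimal sketch of the kind of gap such an assessment aims to surface: compare a model's positive-recommendation rate across demographic groups, a simple demographic-parity-style audit. The groups and outputs below are invented.

```python
# Minimal demographic-parity-style audit: flag large gaps in the model's
# positive-recommendation rate across groups. All data is invented.
import pandas as pd

# Hypothetical model outputs joined with patient demographics.
results = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B", "C", "C", "C", "C"],
    "recommended": [1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1],
})

rates = results.groupby("group")["recommended"].mean()
print(rates)

# A wide spread between groups is a flag to investigate before deployment.
print(f"max gap between groups: {rates.max() - rates.min():.2f}")
```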


Monte 13:00


Excellent. So this assessment tool helps the practitioner think about ethics, and, as you said, it helps that practitioner, who may be quite technical, put trust at the center of any application of AI. Absolutely. One general question I have for you is: what do you see coming next in AI?


Beena 13:22


Oh, you know, I really believe that last year AI adoption simply accelerated. With the need for remote work and digital transformation, it was no longer a debate; it was a necessity. And I truly believe that in 2021 and beyond, we're going to see ethics being operationalized and implemented. We're going to move beyond just the headlines to the real world. You're going to see more relevant ethics-related services, and also real products coming to market that truly bring together the domain experts and the AI technology experts to build out these ethics tools. So we're going to see the rise of AI ethics in large organizations, and even in small companies, all coming to the forefront. Those discussions of how can my technology go wrong, and how can I prevent it, will happen early on, as opposed to only asking how can I drive value from my data.


Monte 14:33


Excellent. How can this technology go wrong? And how can I prevent it? Those are two excellent questions that every project should have at the center of its study before it goes live.


Beena 14:47


Absolutely.


Monte 14:49


Well, thank you, Beena. It's been a pleasure. Thank you so much for joining us today on ML Minutes.


Beena 14:54


Thank you so much, Monte, this was fun.

