In brief

  • As AI-based decisions have an increasing impact on human lives, a new imperative becomes clear: businesses now need to “raise” their AI systems so they act responsibly.
  • This requires addressing many of the same challenges faced in human education and growth.
  • Watch or read the interview with Kishore Durg, in which he explains why businesses need to embrace a new testing regime: a Teach and Test framework tailor-made for AI systems.


ROGER MAGOULAS: We understand that there is an AI imperative Accenture has been working on. Can you tell me a little about it?

KISHORE DURG: A lot of the AI systems being built for businesses are making decisions, and those decisions affect human lives. So the clear imperative is that these systems need to be transparent, responsive, aligned with societal values, and, essentially, making ethically sound decisions. The imperative for us is to ensure that these systems, as they are built, are on the right track. It's just like a kid who has to be taught right from wrong from a societal perspective, and you want to ensure they grow up to be strong adults who contribute to society. We look at these systems similarly. The imperative for us is: how do we ensure these systems are responsive and imbibe societal values?

RM: So, you're making a compelling case for that, but why should businesses care about this?

KD: Business executives today want to win customers' trust, because growth comes from customers trusting the business. Aligned to that, you need to ensure that your systems support that trust imperative. And that's exactly why businesses need to care. We've seen a lot of things go bad: conversational agents learning things they shouldn't be learning, autonomous vehicles going off-track, machine learning algorithms picking up the wrong behavior. If businesses are going to implement these AI systems, we believe they need to care, because customers trust businesses that have verifiable, explainable, trustworthy systems.

RM: So how does the Accenture Teach and Test framework raise responsible AI systems? “Raise” is a good term, given the analogy with kids.

KD: When you raise AI systems, just like kids, you need to teach them the right way. One of the things we need to worry about is that a lot of AI systems right now have gender and ethnic biases. The corpora of data used to train them are managed by humans, so when you use that same data to train these AI systems, you perpetuate your own biases into the system. And this can look different in different parts of the world. In the Teach phase, we try to neutralize these biases. Just like kids, the systems make mistakes as they learn new things, and when kids make mistakes, we teach them how to do it right. We also have the Test phase, where we monitor for behaviors that are not ethically right, and we address them. So it's a very simple concept of Teach and Test. It's just like bringing up your kids.
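
The interview keeps the Teach phase at the concept level. As a rough illustration of one common debiasing step (not Accenture's actual framework; all names here are hypothetical), the sketch below uses counterfactual data augmentation: every training example is paired with a gender-swapped copy so a text model cannot learn to key on gendered terms.

```python
# Illustrative sketch only; not Accenture's Teach and Test implementation.
# One common "Teach"-style step is counterfactual data augmentation: pair
# every training sentence with a gender-swapped copy so a text model
# cannot learn to key on gendered terms.

GENDER_SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "her", "hers": "his",
    "man": "woman", "woman": "man",
}

def swap_gendered_terms(sentence: str) -> str:
    """Return a lowercased copy of the sentence with gendered terms swapped.

    A real pipeline would use part-of-speech tagging ("her" can map to
    "him" or "his") and preserve casing; this is deliberately minimal.
    """
    return " ".join(GENDER_SWAPS.get(tok, tok) for tok in sentence.lower().split())

def augment_corpus(corpus):
    """Pair every (text, label) example with its gender-swapped twin."""
    augmented = []
    for text, label in corpus:
        augmented.append((text, label))
        augmented.append((swap_gendered_terms(text), label))
    return augmented

if __name__ == "__main__":
    corpus = [("he repaid his previous loan on time", 1)]
    for text, label in augment_corpus(corpus):
        print(label, text)
```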

RM: I was curious whether there's any reference to reinforcement learning; hearing you describe it, it sounds a little like it.

KD: It is very aligned with that, and we are trying to simplify it so that people can understand what it actually means. The algorithms behind how we debias these AI systems and address these biases are quite complicated. We use metamorphic testing for some of the algorithmic issues that are out there. So, in a simplified way, we're looking at how you raise kids: you need to ensure that the systems behave similarly.
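
Metamorphic testing deserves a quick gloss: instead of checking a model's output against a known correct answer (an oracle, which often doesn't exist), you check a relation that must hold between the outputs of related inputs. Below is a minimal sketch under assumed names; the toy score function has a deliberately planted bias so the test has something to catch.

```python
# Illustrative metamorphic test; `score` is a deliberately biased toy model,
# not anything from the interview. Metamorphic testing checks a relation
# between outputs of related inputs, so no known-correct answer (oracle)
# is needed for any single input.

def score(text: str) -> float:
    """Toy loan-risk scorer with a planted gender bias, so the test fires."""
    base = 0.7
    if "she" in text.split():
        base -= 0.2  # the planted bias the metamorphic relation should catch
    return base

def check_gender_invariance(text: str, swapped: str, tol: float = 1e-6) -> bool:
    """Metamorphic relation: a gender swap must not move the score."""
    return abs(score(text) - score(swapped)) <= tol

if __name__ == "__main__":
    original = "he repaid every previous loan on time"
    swapped = "she repaid every previous loan on time"
    print("relation holds:", check_gender_invariance(original, swapped))  # False
```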

RM: That's great. You know, a use case would probably help explain this.

KD: Sure. If you look at autonomous vehicles, you need to ensure these systems know whether there is a stop sign or not. You can't train for everything; it would take a few years to train the systems on every possible condition that is out there. And there are cases where you actually put them out for humans to test. They may not end up with the most likely alternative of what you would like, because there are unknown parameters that you would never have taken care of as you validated the systems. One of the concepts we use there is knowledge representation and qualitative reasoning. Bringing that together with machine learning is a way to address these autonomous systems. Essentially, that helps us understand the knowledge gaps and the reasoning on why a system took a decision the way it did, and it builds transparency into the decision making. That is something we have been working on.

Similarly, on the data part of the equation, we have been working with banks to develop virtual agents that are neutralized from gender and racial bias, so that the corpus of data used to train these agents is neutral and unbiased. And as they pick up and learn, we monitor their activities, because even a virtual agent can go rogue. That's a very simple way of looking at these systems: they need parenting, you need to raise them properly, and they need some governance. And that's the construct of responsible AI.
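
That monitoring idea from the Test phase can be made concrete. Here is a minimal sketch, assuming a hypothetical virtual agent whose replies pass through a policy filter before reaching the customer; names like monitor_reply and BLOCKED_TERMS are illustrative, not Accenture's tooling.

```python
# Illustrative "Test"-phase monitor; hypothetical, not Accenture's tooling.
# Every reply from a virtual agent is screened before it reaches the
# customer, so a drifting agent is caught before it "goes rogue".

import logging

logging.basicConfig(level=logging.WARNING)

# Stand-in policy list; a production system would use a trained classifier
# plus human review rather than a fixed term list.
BLOCKED_TERMS = {"blocked_term_a", "blocked_term_b"}

def violates_policy(reply: str) -> bool:
    """Minimal policy check: flag replies containing any blocked term."""
    lowered = reply.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def monitor_reply(user_msg: str, reply: str) -> str:
    """Pass safe replies through; quarantine and log flagged ones for review."""
    if violates_policy(reply):
        logging.warning("Flagged reply for human review: %r (prompt: %r)",
                        reply, user_msg)
        return "I'm sorry, I can't help with that."  # safe fallback
    return reply
```

The shape matters more than the details: every reply is screened, violations are logged for humans, and the customer only ever sees a safe fallback.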

Kishore Durg

Senior Managing Director – Accenture Technology Growth and Strategy Lead
