ROGER MAGOULAS: We are here to talk about AI and we understand that there
is an AI imperative you have been working on.
You can tell me a little about it.
KISHORE DURG: As you look at a lot of these AI systems being built, these systems
are being built for businesses that are making decisions, and it's impacting human lives.
So the clear imperative is that these systems need to be transparent, responsible,
aligned with the societal values that are out there, and essentially making the
right decisions ethically.
The imperative for us is to ensure that the systems, as they are built, are on the
right track.
It's just like, you know, a kid who has to be taught the difference between right
and wrong, the right values from a societal perspective, and you want to ensure
they grow up to be strong adults who contribute to society.
We look at the systems similarly.
The imperative for us is: how do we ensure these systems are responsible and imbibe
societal values?
ROGER MAGOULAS: So, you're making a compelling case for that, but why should businesses
care about this?
KISHORE DURG: 92% of business executives said they really want to win customers'
trust, because a lot of growth comes from customers trusting their businesses.
And aligned with that, if customers are to trust you, you need to ensure that your
systems support that trust imperative.
And that's exactly why businesses need to care, because we've seen a lot of
things that have gone bad.
There have been conversational agents learning things they shouldn't be learning,
there have been cases of autonomous vehicles going off track, and there have been
cases of machine-learning algorithms picking up the wrong behavior.
So, you know, if businesses are going to implement these AI systems, we believe
they need to care, because customers trust businesses that have verifiable,
explainable, trustworthy systems.
That's a big imperative for business.
ROGER MAGOULAS: So how does the Accenture Teach and Test framework raise responsible
AI systems?
"Raise" is a good term, given the analogy with kids.
KISHORE DURG: When you raise these AI systems, just like kids, you need to teach
them the right way.
One of the things we need to be worried about is that a lot of AI systems right now
have gender bias, racial bias, ethnic biases.
And a lot of the corpus of data used to train them is created by humans.
When you use that same data to train these AI systems, you're going to perpetuate
the biases that you have into the system.
Now, this could differ in different parts of the world, but essentially what we have
is a Teach phase in which we try to neutralize a lot of these biases.
You end up with a corpus of data that is neutral to the biases that are out there;
that is what we call the Teach phase.
And in the Test phase, just like kids, kids make mistakes as they learn new things;
they're going out of your house, learning a lot and picking up a lot of new things.
And when kids make mistakes, we teach them how to do it right.
Similarly, for the AI systems we have the Test phase: we monitor for behaviors
that are not ethically right, and we address them.
So, it's a very simple concept of Teach and Test.
It's just like bringing up your kids.
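One common way to implement the kind of Teach-phase neutralization described above is counterfactual data augmentation: for every training sentence, add a copy with gendered terms swapped, so the corpus no longer correlates gender with outcomes. This is a minimal sketch of that general technique, not Accenture's actual implementation; the word pairs and corpus are purely illustrative.

```python
# Illustrative word pairs; a real system would use a much larger lexicon
# and handle casing, morphology, and names.
GENDER_PAIRS = {"he": "she", "she": "he", "his": "her", "her": "his",
                "man": "woman", "woman": "man"}

def swap_gender(sentence: str) -> str:
    """Return the sentence with each gendered term replaced by its counterpart."""
    return " ".join(GENDER_PAIRS.get(w, w) for w in sentence.lower().split())

def neutralize(corpus: list[str]) -> list[str]:
    """Augment the corpus with a gender-swapped copy of every sentence,
    balancing gendered associations across the training data."""
    return corpus + [swap_gender(s) for s in corpus]

corpus = ["he is a good engineer", "she stayed home"]
balanced = neutralize(corpus)
```

Training on `balanced` rather than `corpus` means the model sees both genders in every context equally often, which is one simple way a corpus can be made "neutral to the biases that are out there."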
ROGER MAGOULAS: I was curious whether there's any reference to reinforcement learning;
hearing you describe it, it sounds a little like it.
KISHORE DURG: It is very aligned with that, and I'm trying to simplify it so that
people can understand what it actually means.
It's a very complicated algorithm in terms of how we debias these systems, how we address
these biases.
There is also metamorphic testing that we use for some of the algorithm issues that
are out there.
So, in a simplified way, just as you look at how you raise kids, you need to ensure
that the systems behave similarly.
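The metamorphic testing mentioned above can be sketched with a simple example. A metamorphic relation for bias testing says: changing only a protected attribute in the input must not change the model's decision. The `toy_model`, attribute names, and inputs below are hypothetical, chosen just to show the shape of such a test.

```python
def toy_model(applicant: dict) -> str:
    """Hypothetical loan-decision model; decides on income alone."""
    return "approve" if applicant["income"] >= 50_000 else "deny"

def metamorphic_bias_test(model, applicant: dict, attr: str, alt_value) -> bool:
    """Metamorphic relation: flipping `attr` to `alt_value` while holding
    everything else fixed must leave the model's decision unchanged."""
    variant = {**applicant, attr: alt_value}
    return model(applicant) == model(variant)

applicant = {"income": 60_000, "gender": "female"}
assert metamorphic_bias_test(toy_model, applicant, "gender", "male")
```

The appeal of metamorphic testing is that it needs no ground-truth labels: we don't have to know the "correct" decision, only that the decision should be invariant under the transformation.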
ROGER MAGOULAS: That's great.
You know, a use case would probably help explain this.
KISHORE DURG: Sure.
I mean, just take autonomous vehicles.
If you look at it, you know how you need to ensure that these systems know there's
a stop sign there.
It's not that you'll be able to train for everything; it would take years to train
the systems on every possible condition that's out there.
And there are cases where you actually put them out for humans to test, and obviously
they may not end up with the most likely alternative, in terms of what you would
like it to be, because there are unknown parameters that you would never have taken
care of as you validated the systems.
So one of the constructs we have there is around knowledge representation and
qualitative reasoning; bringing that together with machine learning is the way
to go to address these systems on the autonomous side.
And essentially, that will help us understand the knowledge gaps and the reasoning
behind why it took a decision the way it did, and it builds transparency into the
decision-making.
And that is something that we have been working on.
Similarly, on the data part of the equation, we have been working with banks to
develop virtual agents that are neutralized for gender bias, racial bias, and others,
so that the corpus of data used to train these agents is neutral and unbiased;
and as they pick up and learn, we monitor the activities that are out there.
So even a virtual agent can go rogue.
That's a very simple way of looking at these systems that are out there.
You need parenting.
You need to raise them properly, and it needs some governance.
And that's the construct of responsible AI.