
Taking a human-centered approach for building ethical and transparent AI

Michael Natusch, Prudential

Michael Natusch is the global head of AI at Prudential and the founder of the AI Center of Excellence at Prudential Corporation Asia. With over 20 years of experience in data analytics and machine learning, he enjoys using data and leading-edge statistical methods to tackle real-world problems – which today means applying machine learning and neural networks to large-scale, multi-structured data sets.

The Capgemini Research Institute spoke with Michael to understand more about creating ethical and transparent AI and the technological challenges involved.


ETHICS AND TRANSPARENCY IN AI AT PRUDENTIAL

What is the model you have deployed to scale AI at Prudential?

At Prudential, we have both a centralized and a localized model. I am a big believer that a centralized-only or a localized-only model would be doomed to fail. In the former, you would find people who build amazingly clever things that nobody ever wants to implement. And in the latter, you would find people who would spend literally all their time on minute process improvement without ever being able to truly reinvent the business and move beyond sub-optimization.

So, we want to have some centralized capability, as that brings efficiency in terms of being able to copy-paste approaches across different countries and the ability to hire AI experts. But, to supplement that, we need to have localized capability. If you only have one, it is not going to work very well.

How do you define ethics and transparency in AI at Prudential and what is driving action in the organization?

We do not have a working definition. Our position on AI and ethics is still evolving. We are still in the process of formulating what the company's position is and what that means in practice. We have a program of action and, by the end of this year, we hope to have clearer views of where we stand as a company on ethics, AI, data, and all the associated aspects of transparency, privacy, and compliance.

Why is it an important issue for Prudential?

There are three different strands that lead us to take this issue seriously. One is that there is an overarching conversation in society. For instance, our regulators are starting to look at it. The Monetary Authority of Singapore has published a paper called FEAT (Fairness, Ethics, Accountability and Transparency), which lays out some very basic principles. So, our vital stakeholders – our regulators and even our board members – have thoughts and questions.

The second strand comes from our business. We are trying to build something that either replaces or complements an existing process with an AI solution. So, people are asking – “how do you actually make a decision?” One aspect of the “how” is obviously accuracy. Are you making the right decision? What is your false positive rate? What is your true positive rate? Those kinds of questions. The second aspect is transparency. Can I, as an employee, understand it? If challenged by a regulator or a customer, can I justify the decision that has been made? The question that employees also need to ask themselves is, “am I making the right decision?” Even though the decision might be accurate and transparent, it might still be the wrong decision. And that has a legal and an ethical component to it. So, for instance, am I explicitly or implicitly discriminating against a particular demographic?
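To make the accuracy questions above concrete, here is a minimal sketch of computing the true positive rate and false positive rate of a binary decision model. The labels, predictions, and their interpretation are hypothetical placeholders, not Prudential data.

```python
# Minimal sketch: true/false positive rates for a binary decision model.
# Labels and predictions below are hypothetical placeholders.

def rates(y_true, y_pred):
    """Return (true_positive_rate, false_positive_rate) for 0/1 labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return tpr, fpr

# Example: 1 = "approve automatically", 0 = "refer to a human" (illustrative).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
print(rates(y_true, y_pred))  # -> (0.75, 0.25)
```

Computing these rates separately for each demographic group is one simple way to surface the implicit discrimination he mentions.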

The third and final strand is that we believe ethical and transparent AI will be a competitive differentiator for us in the marketplace. We have a unique opportunity to earn consumer trust and a short window of time in which to realize it. We should demonstrate to people that they can trust us – not just in the world of the 1990s or the early 2000s, but also going forward, because we will deal with their data in the right way. We will not take their privacy for granted, we will not misuse their personal data, and we will not infer things about them from the data we hold that they would consider inappropriate. By being cautious and doing the right thing by our customers, we hope to differentiate ourselves in the marketplace.

Have you ever experienced any ethical issues in AI systems that you have deployed?

We recently looked at facial recognition, in terms of the kind of diagnostic aspects we can read from a selfie. So, we started with some pre-trained models. And what came out clearly was that while the pre-trained model worked almost perfectly on some of our team members, it did not work at all on others. It did not take a genius to realize what was going on. For Caucasians, the model came out with the correct age, but people of South Asian origin tended to be estimated as older than they were, and people of East Asian ethnicity as significantly younger than they were. So, even with this sort of five-minute playing around – and without doing anything really sophisticated – you realize that you cannot just bluntly apply pre-trained models using an off-the-shelf algorithm. There must be a feedback loop in the middle. This is one simple example of that third strand in our own work – where we became aware of ethical issues and what we need to do to attack them head-on.
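A minimal sketch of the kind of five-minute audit described above might look as follows; `predict_age` and the sample records are hypothetical stand-ins for the pre-trained model and the team's selfies.

```python
# Minimal sketch: mean signed age error of a pre-trained model per group.
# `predict_age` and `records` are hypothetical stand-ins.
from collections import defaultdict

def audit_by_group(records, predict_age):
    """Mean (predicted - actual) age per self-reported group.

    A strongly positive mean means the model over-estimates ages for that
    group; a strongly negative mean means it under-estimates them.
    """
    errors = defaultdict(list)
    for photo, actual_age, group in records:
        errors[group].append(predict_age(photo) - actual_age)
    return {group: sum(errs) / len(errs) for group, errs in errors.items()}

# Illustrative output, mirroring the pattern described in the interview:
# {"caucasian": 0.4, "south_asian": 6.8, "east_asian": -7.1}
# Any group whose mean error sits far from zero signals that the model
# needs a feedback loop, re-training or calibration on representative
# data, before it can be deployed.
```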

ROLE OF DIVERSITY AND AN ETHICAL CODE OF CONDUCT

How important is the diversity of AI teams when identifying potential biases?

Diversity in every way – ethnic, gender, sexual orientation – is very important. It is not just about modeling accuracy, but also about asking the right questions and doing things that are culturally sensitive. I think diversity in everyday interactions is extremely important for an AI team, because you are not going to ask yourself the questions that somebody from a different background would come up with.

Does Prudential already have an ethical code of conduct and does AI feature in it?

There is, and it goes back quite a long time. What we are going through right now is translating it for the AI world. We are taking those principles, adapting them to AI, and extending them from an AI point of view. Hopefully, by the end of this year, we will get to a much more holistic, all-encompassing ethical framework that is applicable across everything we do.

In a low-scale, largely manual world, you can do things in a fairly slow, straightforward, manual manner. The ethical component is manageable because you can achieve it through training and very limited remedial actions. In a world that is dominated by AI, and where you work at scale, if you do something wrong, you do it wrong on a huge scale. Therefore, you need to be much more careful about ethics, transparency, privacy, and compliance. All of these need to be incorporated by design, right from the start. And that requires a very different way of working and thinking. So, purely from an ethical point of view, the way we choose products and run processes in an AI-dominated world must be very different.

ETHICS BY DESIGN

What does ethics by design mean in your business?

Ethics by design has three different aspects. One, it is about mindset. As much as we want to move fast, we cannot afford to break things. And that is a mindset thing. The second is about automated, continuous, software-enabled checks. Are we doing the right things? Is there something coming up that looks unusual? And that leads to the third piece, which is that, sooner or later, every model will misbehave. That is just a fact of life. So, based on the second step, you also need to have a level of human control. You have to have humans who every now and then look at what is coming out, re-think whether we are doing the right things, and then adjust the model.
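As a minimal sketch of what such a software-enabled check might look like, the snippet below flags a model for human review when its live output distribution drifts away from a reference window. The threshold and the scores are illustrative assumptions, not Prudential's actual checks.

```python
# Minimal sketch: flag a model for human review when its live scores
# drift from a reference window. Threshold and data are illustrative.
from statistics import mean, stdev

def needs_human_review(reference_scores, live_scores, z_threshold=3.0):
    """True when the live mean sits more than z_threshold reference
    standard deviations away from the reference mean."""
    mu, sigma = mean(reference_scores), stdev(reference_scores)
    z = abs(mean(live_scores) - mu) / (sigma or 1.0)
    return z > z_threshold

reference = [0.52, 0.48, 0.50, 0.49, 0.51, 0.47, 0.53]  # scores at sign-off
live = [0.71, 0.69, 0.74, 0.68]                          # recent live scores
if needs_human_review(reference, live):
    print("Output drifted: route recent decisions to a human reviewer.")
```

A check like this automates the "is something unusual coming up?" question and hands the judgment call back to a human, which is the third aspect he describes.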

ENSURING AWARENESS AND RESPONSIBILITY FOR ETHICS IN AI

How do you ensure that the relevant teams are aware of, and responsive to, ethics and transparency in AI?

We have some really smart and empathetic people in the AI Center of Excellence, so we have an understanding of the kinds of biases we need to watch out for. But what I am really hoping for is two things. One is validation, completeness, and additions from the overall process that I described earlier. The other is a checklist of things that need to be done less frequently, maybe just at the inception of a particular type of activity, and so on. Some of the checklists might be literal, whereas others might be more intangible. Those are the two things I am hoping to get out of this effort, which will help us supplement our own limited understanding of ethical issues and how to prevent them.

Where should the responsibility lie if some systems do not act the way they should?

The seat of responsibility will not shift. Ultimately, the people who are accountable for what is happening in Prudential are the chairman and the CEO of Prudential. Our shareholders would ask, “Why did you not prevent this?” So, that will not change. In the case of ethics, this is not something where responsibility lies with any particular individual in the company. It is a shared responsibility for all of us. My team and I are cogs in the wider machinery. We are not the only ones. There are other people who have their part to play as well. It is a shared activity in every sense.

TECHNOLOGICAL CHALLENGES IN ACHIEVING ETHICAL AI

What are the technological challenges with respect to achieving ethics in AI?

It is essentially about applying the right type of technology in the right manner and to the right problem. For this, I have a framework in my mind with two different axes. One is a volume axis – be it data points, the volume of transactions, or the volume of events. The other is a value axis – the cost of making the wrong decision. And so, if you look at that space of volume versus cost of making the wrong decision, there are two extreme points that you can immediately identify. One is extremely high volume, extremely low cost.

A good example of that is a Google search. With around 3.5 billion Google searches a day, what is the cost of Google showing you the wrong ad on one of those searches? It is virtually zero – the impact is minimal. And then there is the other extreme. For instance, you are in a hospital with a cancer patient, and you need to decide on the radiation dose for that patient's radiation therapy. Clearly, the volume is much lower, but the cost of making the wrong decision can be extremely high.

And then you have a kind of gray area in the middle. Everything that we do – the kinds of algorithms and technology we use, and the kinds of considerations we need to apply – depends on where you are on this chart. In the high-volume, low-impact scenario, there are no real ethical considerations because the impact is so low. At the other extreme, you need to think very hard about what to do, and regulators need to look very hard at what is happening there so that they protect the consumers, or whoever they are serving.
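Purely as an illustration of this two-axis triage, a minimal sketch follows. The cut-offs, the qualitative cost labels, and the oversight tiers are hypothetical; the point is only that oversight should scale with the cost of a wrong decision.

```python
# Minimal sketch of the volume-versus-value framework. Cut-offs and
# tiers are hypothetical; oversight scales with the cost of an error.

def oversight(daily_volume: int, cost_of_error: str) -> str:
    """cost_of_error is a qualitative label: 'low', 'medium', or 'high'."""
    if cost_of_error == "high":
        # e.g. radiation dosing: a human signs off on every decision.
        return "human sign-off on every decision, close regulatory review"
    if cost_of_error == "low":
        # e.g. ad ranking: automated checks are enough.
        return "lightweight automated checks"
    # The gray area: sample more decisions for human review as volume grows.
    sample = max(100, daily_volume // 10_000)
    return f"automated checks plus ~{sample} human-reviewed cases per day"

print(oversight(3_500_000_000, "low"))   # search ads
print(oversight(200, "high"))            # radiation therapy
print(oversight(50_000, "medium"))       # hypothetical mid-chart use case
```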