Decoding trust and ethics in AI for business outcomes

Anne-Laure Thibaud (Thieullent)
19 July 2021

Do you trust Artificial Intelligence (AI) to do what you intend it to do, and do your customers trust you to use AI responsibly? These two perspectives, one internal, one external, are central to your ability to succeed with AI.

So how do we succeed?

The answer is far from simple, but I do think that it has simple foundations, and that from those foundations strong solutions can be built.

To ensure AI can be trusted both internally and externally, organizations must demonstrate that ethics and accountability are embedded across the entire lifecycle, from design to operations.

Trust is a must, Ethics are forever

The European Commission’s digital chief, Margrethe Vestager, agrees with this perspective, saying: “On artificial intelligence, trust is a must, not a nice-to-have.”

The Commission’s Regulation on a European approach for Artificial Intelligence, released in April this year, also highlights the ethical foundations of trust. It emphasizes the need to base innovation on rules that ensure people’s safety and fundamental rights.

In our Data & AI community at Capgemini, we call this “human-centered AI”: AI solutions that ensure human ethical values are never undermined.

The responsibility of organizations

Waiting for regulations to tell you what to do isn’t enough. As the pace of technical advancement increases, so does the potential for AI to become an existential threat to your organization. If you can’t trust your machine learning models to do what they should, how do you know they won’t disrupt your business and decision-making processes in the near future? If your customers don’t trust you to use AI responsibly, why would they continue to do business with you?

To guide organizations, and to help them demonstrate to their customers that they are responsible users of AI, Capgemini has developed its Code of Ethics for AI, which includes seven key principles. An AI solution should:

1. Have carefully delimited impact
2. Be sustainable
3. Be fair
4. Be transparent and explainable
5. Be controllable, with clear accountability
6. Be robust and safe
7. Be respectful of privacy and data protection

A business case for trusted AI

From our AI and the Ethical Conundrum report, we know that 70% of customers expect organizations to provide AI interactions that are transparent and fair, and 45% say they would share a negative AI experience with family and friends and urge them not to engage with the organization. Failing to demonstrate the ethical use of AI to consumers carries a real risk of reputational damage and the associated revenue impact. Beyond these external risks, trusted AI provides the robust foundations needed to ensure AI delivers the expected, positive impact for your business, your customers and employees, and society as a whole.

An example of the sort of active trust organizations can develop is SAIA (Sustainable Artificial Intelligence Assistant), a tool developed by teams from Capgemini Invent. SAIA recognizes, analyzes, and corrects bias in different AI models, helping ensure that organizations do not unfairly discriminate against people on the basis of gender, race, or socioeconomic background when assessing credit risk.
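To make this concrete, here is a minimal sketch of one common fairness metric a tool like this might compute: the demographic parity gap, the difference in approval rates between two groups. This is not the SAIA implementation; the data, threshold, and function names are hypothetical illustrations.

```python
# Illustrative sketch only: a demographic-parity check for a credit model.
# NOT the SAIA implementation; it shows one common bias metric such a
# tool might compute. All data and thresholds below are hypothetical.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in approval rates between two groups.

    y_pred: binary model decisions (1 = approved, 0 = denied)
    group:  binary protected attribute (e.g. two demographic categories)
    """
    y = np.asarray(y_pred)
    g = np.asarray(group)
    return abs(y[g == 0].mean() - y[g == 1].mean())

# Hypothetical decisions for eight applicants, four in each group
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]

gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # hypothetical tolerance chosen by the organization
    print("Warning: possible disparate treatment, review the model")
```

A real tool would go much further, checking several metrics across many attributes and proposing corrections, but the principle is the same: make bias measurable so it can be governed.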

AI techniques such as Generative Adversarial Networks (GANs) can also help us respect the privacy of individuals while accelerating innovation. For instance, our Sogeti teams enabled a European health agency to accelerate its research by using their ADA (Artificial Data Amplifier) tool to produce synthetic data that accurately reflects real-world information.
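As an illustration of the underlying technique only (this is not the ADA tool), the sketch below shows the adversarial training loop at the heart of a GAN: a generator learns to produce records that a discriminator can no longer distinguish from real ones, and only the synthetic output is shared. All dimensions and data here are hypothetical placeholders.

```python
# Illustrative sketch only: a minimal GAN for synthetic tabular data.
# NOT the ADA tool; it only demonstrates the adversarial training idea.
import torch
import torch.nn as nn

n_features, latent_dim = 4, 8  # hypothetical table width and noise size

# Generator maps random noise to synthetic records
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                          nn.Linear(32, n_features))
# Discriminator scores how "real" a record looks (1 = real, 0 = synthetic)
discriminator = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                              nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

real_data = torch.randn(256, n_features)  # stand-in for the private dataset

for step in range(200):
    # 1) Train the discriminator to separate real from synthetic records
    fake = generator(torch.randn(64, latent_dim)).detach()
    real = real_data[torch.randint(0, len(real_data), (64,))]
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator
    fake = generator(torch.randn(64, latent_dim))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Synthetic records that mimic the real distribution without copying individuals
synthetic = generator(torch.randn(10, latent_dim)).detach()
```

Note that a GAN alone does not guarantee privacy; production tools typically combine it with safeguards such as differential privacy or leakage testing before synthetic data is released.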

Trusted AI is accelerating business outcomes

From reputational risk and regulatory obligations to moral duty, it’s clear that organizations need to be able to trust AI and to demonstrate they have mastered ethical AI: applying AI technologies the right way and for the right purpose to build and nurture trust with their customers, citizens, and partners.

Trust is about accelerating and ensuring outcomes: having confidence that your AI solutions will do what you need, and only that. For your customers and employees, their trust in you to use AI responsibly will rest on their confidence in your ethical foundations for AI. Trusted AI means accelerating towards an assured outcome, and deploying AI in a way that doesn’t risk reputational damage with your customers.

So, how do you build this trust in AI?

On July 12, I had the honor of joining incredible panelists: Sally Eaves, known as the ‘torchbearer for ethical tech’; Capgemini’s Chief Innovation Officer Pascal Brier; Francesca Rossi, AI scientist, IBM Fellow, and IBM AI Ethics Global Leader; and Sandrine Murcia, CEO and co-founder of a fantastic company called Cosmian. You can watch our discussion again here.

For those of you who don’t have time to watch, I will share my closing statement on how I think you can progress towards a trusted use of AI:

1. Build your own code of conduct, one that is in line with your values. Discuss it at board level, with your Data & AI teams, and with the teams that will use AI solutions – ethics and AI is not only an expert discussion! Obviously, once you have “your” code, you need to build simple but effective governance to apply it; otherwise it will remain a set of nice principles that are never applied.
2. Train your teams on your code: why it matters and how to apply it. Provide tools so they can ask the right questions right at the design phase, and equip them with best practices for project delivery, so they can build AI solutions within the right framework.
3. Set up bodies people can reach out to when they need help – it’s not an easy topic! On our side, we built what we call “flying squads” specifically on ethics and AI: a group of experts that every Data & AI project can consult when a question needs to be addressed.
4. Be very intentional about building diverse and inclusive teams, so that your Data & AI teams are representative of society as a whole. This helps you avoid “perspective blindness”, which sets in when everyone on your team thinks and looks the same (to understand perspective blindness, I recommend Matthew Syed’s excellent book Rebel Ideas).

Share with us your thoughts and challenges – and your progress! It’s not an easy topic, so it’s worth exchanging best practices to move the industry forward.