
AI Act in focus: How legal and strategic consulting jointly set new standards

Lars Bennek
Oct 03, 2024

CMS and Capgemini Invent: Joint consulting on digital regulation and transformation

By combining the expertise of the international law firm CMS and the leading strategy and technology consultancy Capgemini Invent, we provide comprehensive, seamless advice on all aspects of digital transformation. Together, we present the foundational elements of AI governance, AI governance frameworks and platforms, and the importance of AI regulatory compliance.

We would like to thank the authors Björn Herbers, Philipp Heinzke, David Rappenglück and Sara Kapur (all CMS) and Philipp Wagner, Oliver Stuke, Lars Bennek and Catharina Schröder (all Capgemini Invent).

The “Regulation on Harmonized Rules for Artificial Intelligence” (AI Act), adopted by the European Parliament and the Council of the European Union, came into force on August 1, 2024. This concludes a long path of tough negotiations that began in 2021 with the European Commission’s proposal for EU-wide regulation of AI. Because it applies directly in all 27 member states, the AI Act will have far-reaching impacts on providers, operators, and users of AI.

The AI Act follows a risk-based approach and distinguishes, among others, the following categories of AI systems and models:
  • Prohibited practices (Art. 5) are uses of AI deemed incompatible with the fundamental rights of the EU.
  • High-risk AI systems (Art. 6) fall into two groups: systems that are products, or safety components of products, subject to third-party conformity assessment, and systems used in the specific areas listed in Annex III. Providers of such AI systems face extensive compliance requirements throughout the system’s lifecycle.
  • Certain AI systems, such as those interacting with humans (e.g., chatbots), are subject to specific transparency obligations (Art. 50).
  • General Purpose AI (GPAI) models (Art. 51 ff.) are versatile AI models that can perform various tasks and be integrated into systems. Compliance obligations vary based on classification as “normal” GPAI models or those with systemic risk.

The AI Act’s obligations phase in after its entry into force: the provisions on prohibited practices apply after six months, the rules for GPAI models after 12 months, and those for high-risk AI systems after 24 to 36 months.

Violations of the provisions can result in fines of up to EUR 35 million or up to 7% of the previous year’s total worldwide turnover, whichever is higher. For other infringements, fines may reach EUR 7.5 million or up to 1% of the previous year’s total worldwide turnover.

Strategic and operational implementation through AI governance

Meeting the requirements of the AI Act calls for an overarching approach. With our comprehensive AI Governance Framework, we help organizations use AI responsibly and efficiently while minimizing risks. Processes and responsibilities must be defined and adhered to throughout the AI lifecycle, covering data, models, systems, and use cases, in line with technical, procedural, and regulatory requirements.

Formulating a long-term vision for AI governance and developing ethical guidelines within the organization lays the foundation for any AI strategy. This strategy must then be effectively conveyed through a comprehensive communication plan. Subsequently, roles and responsibilities related to AI projects can be identified and defined, and processes for the development, implementation, and monitoring of AI projects can be adjusted.

Creating a handbook with security standards, best practices, and guidelines for implementing AI is recommended. Given that the AI Act touches on areas such as data protection, copyright, and IT security law, it is advisable to continuously analyze regulatory requirements and translate them into technically measurable KPIs.
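
As an illustration of what such a translation might look like, the sketch below pairs a regulatory requirement with a machine-checkable KPI and a threshold. All names, thresholds, and the stubbed measurement are our own illustrative assumptions, not prescriptions of the AI Act:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ComplianceKpi:
    """Links a regulatory requirement to a measurable, automatable check."""
    requirement: str              # e.g., an article of the AI Act
    metric_name: str              # the technical KPI derived from it
    threshold: float              # acceptable upper bound, set by governance
    measure: Callable[[], float]  # hook into the monitoring infrastructure

    def is_compliant(self) -> bool:
        return self.measure() <= self.threshold

# Hypothetical example: Art. 10 data governance, tracked as the share of
# training records without documented provenance (all values are invented).
kpi = ComplianceKpi(
    requirement="AI Act Art. 10 (data and data governance)",
    metric_name="share_of_records_without_provenance",
    threshold=0.01,
    measure=lambda: 0.004,  # stub; in practice queried from a data catalog
)
print(kpi.requirement, "OK" if kpi.is_compliant() else "VIOLATION")
```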

For providers of high-risk AI, setting up a risk management system is mandatory (specific components to be explored in a subsequent blog post). Here, an AI governance framework is essential. To effectively scale AI deployment and optimize operational processes, it is crucial to take an inventory of all AI systems and subsequently automate processes across development, deployment, monitoring, and documentation stages.
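
One possible starting point for such an inventory is a structured registry entry per AI system. The sketch below is a minimal illustration; the fields, categories, and the sample entry are our assumptions, not a format mandated by the AI Act:

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited practice (Art. 5)"
    HIGH_RISK = "high-risk (Art. 6)"
    TRANSPARENCY = "transparency obligations (Art. 50)"
    GPAI = "general-purpose AI model (Art. 51 ff.)"
    MINIMAL = "no specific obligations"

@dataclass
class AiSystemRecord:
    """One entry in an organization-wide AI system inventory."""
    name: str
    owner: str                   # accountable organizational unit
    actor_role: str              # "provider" or "operator" under the AI Act
    risk_category: RiskCategory
    lifecycle_stage: str         # e.g., development / deployment / monitoring
    documentation_uri: str = ""  # link to technical documentation

# Hypothetical entry, anticipating the application example discussed below.
registry: list[AiSystemRecord] = [
    AiSystemRecord(
        name="benefits-claim-triage",
        owner="unit-administrative-services",
        actor_role="operator",
        risk_category=RiskCategory.HIGH_RISK,
        lifecycle_stage="deployment",
    ),
]
# Such a registry is the precondition for automating checks across the
# development, deployment, monitoring, and documentation stages.
```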

Establishing and monitoring metrics for quality, fairness, and robustness is a cornerstone of effective strategy. To foster knowledge among employees and mitigate biases against AI, continuous training and awareness initiatives should be integrated throughout the AI lifecycle in an iterative process, complemented by change management to ensure a seamless transition.
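
To make one such metric concrete, the sketch below computes the demographic parity difference, i.e., the gap in positive-prediction rates between two groups. It is only one of many possible fairness metrics, and the toy data and threshold interpretation are invented for illustration:

```python
def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups.

    predictions: iterable of 0/1 model outputs
    groups: group label ("A" or "B") for each prediction, same order
    """
    rates = {}
    for g in ("A", "B"):
        selected = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    return abs(rates["A"] - rates["B"])

# Invented toy data; a governance process would define the acceptable bound.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.50 -> flag for investigation
```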

Application example

To illustrate this approach, consider a fictional federal ministry intending to use AI for partial automation of administrative services.

Legal assessment

The first challenge in an implementation project is the legal assessment of whether and to what extent the AI Act applies to the intended AI integration. Only after clarifying fundamental legal questions can specific compliance obligations be determined, and the AI Act’s provisions be technically implemented.

Initially, it is necessary to assess whether the AI system falls into a risk category under the AI Act and what role the ministry plays with respect to the AI system. Depending on the specific use, an AI system for partial automation of administrative services could be classified as a high-risk AI system under the AI Act. Annex III of the AI Act lists specific “high-risk areas.” In the context of public administration, the following areas of AI use serve as examples:

  • As a safety component in administration
  • For assessing claims for public services and benefits
  • For law enforcement by judicial authorities

While providers primarily bear the compliance obligations for high-risk AI systems, operators also have certain responsibilities under the AI Act. Therefore, the following questions must be asked (a simplified decision sketch follows the list):

  • Is the ministry planning to develop and deploy an AI system? If so, the ministry acts as a provider.
  • Is the ministry using an existing AI system under its own responsibility? If so, the ministry acts as an operator.
  • Is the ministry planning to adapt an existing AI system significantly to meet its specific needs? If so, the ministry would transition from being solely an operator to also acting as a provider.
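
For illustration, the same decision logic can be captured in a simple, auditable form. The sketch below is our own simplification of the questions above and no substitute for a case-by-case legal assessment:

```python
def determine_roles(develops: bool,
                    uses_under_own_responsibility: bool,
                    substantially_modifies: bool) -> set[str]:
    """Map the three questions above to AI Act roles.

    Illustrative decision aid only; the actual classification under the
    AI Act requires a case-by-case legal assessment.
    """
    roles = set()
    if develops or substantially_modifies:
        roles.add("provider")
    if uses_under_own_responsibility:
        roles.add("operator")
    return roles

# The ministry significantly adapts an existing system and runs it itself:
print(determine_roles(develops=False,
                      uses_under_own_responsibility=True,
                      substantially_modifies=True))
# -> {'provider', 'operator'}
```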

After addressing the core questions, the next step involves specific planning to implement compliance obligations and establish governance structures. In this scenario, legal engineers, leveraging their interdisciplinary approach, translate legal requirements into concrete specifications and solution designs. They achieve this by identifying and developing technical and organizational measures that align with these obligations.

This approach encompasses a wide range of actions. For instance, the data governance requirements outlined in Art. 10 of the AI Act can be integrated directly into the architectural design. Techniques such as anonymization or privacy by design principles can be employed when handling training data, and differential privacy methods can further enhance data confidentiality. Furthermore, meticulous selection and evaluation of training data can significantly mitigate potential biases.
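
To make one of these techniques concrete, the sketch below adds Laplace noise to an aggregate statistic, which is the basic mechanism behind epsilon-differential privacy. The query, sensitivity, and privacy budget are illustrative assumptions:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse-transform sampling."""
    u = random.uniform(-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1: adding or removing one person's
    record changes the result by at most 1, so the noise scale is 1/epsilon.
    """
    return true_count + laplace_noise(scale=1.0 / epsilon)

# Hypothetical query: number of records in one slice of the training data.
print(private_count(true_count=1280, epsilon=1.0))
```

A smaller epsilon yields stronger privacy at the cost of noisier results; choosing that trade-off is itself a governance decision.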

To allow humans to carry out their oversight responsibilities, a sufficient level of explainability of the AI models is crucial, as required by Article 14 of the AI Act. Explainability methods are particularly important where AI is involved in decision-making processes. Additional explanations or visual outputs can help clarify AI outputs, facilitating human understanding and oversight.
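
One widely used, model-agnostic explainability technique is permutation feature importance: shuffle one input feature and measure how much the model's accuracy drops. The sketch below is a minimal, self-contained illustration with an invented toy model, not a method prescribed by the AI Act:

```python
import random

def permutation_importance(model, X, y, n_repeats=10):
    """Estimate each feature's importance as the average accuracy drop
    after randomly shuffling that feature's column (model-agnostic)."""
    def accuracy(data):
        return sum(model(row) == label for row, label in zip(data, y)) / len(y)

    baseline = accuracy(X)
    importances = []
    for col in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            shuffled = [row[col] for row in X]
            random.shuffle(shuffled)
            X_perm = [row[:col] + [v] + row[col + 1:]
                      for row, v in zip(X, shuffled)]
            drops.append(baseline - accuracy(X_perm))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy model: predicts 1 whenever the first feature exceeds 0.5.
model = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
print(permutation_importance(model, X, y))  # feature 0 matters; feature 1 does not
```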

Given the variety of requirements, the selection of the underlying AI model is a pivotal consideration. Models differ in how transparent they are about their training data, so careful selection depends heavily on the specific use case. An agile and interdisciplinary approach that considers both legal and technological aspects is essential to make an informed decision and achieve the desired project success.

In our application example, the complexity and multifaceted nature of the topics are evident. Only by first clarifying the legal questions surrounding AI actors and the type of AI system can the resulting obligations be identified and effectively integrated into AI governance.

Strategic approaches to AI governance and risk management

The use of AI offers enormous potential for value creation but also carries functional and legal risks. Legislation such as the AI Act creates a comprehensive regulatory framework, but it must be implemented in detail within organizations, taking technical, procedural, and human dimensions into account. Various projects have shown that AI governance only adds value if it keeps pace with the constant evolution of AI technology. After initial uncertainty, clear regulatory frameworks can catalyze the adoption of AI in boundary-pushing areas. This not only fosters technological solutions for known problems but also unveils new use cases enabled by emerging capabilities. To remain competitive, implementation guidelines should strike a balance: offering necessary support while maintaining flexibility from the outset. Keep an eye out for our forthcoming deeper exploration of the risk management requirements outlined in Article 9 of the AI Act.

Author

Lars Bennek

Senior Manager, AI Governance & Data Law, Capgemini Invent
Lars is a Senior Manager and Head of the Legal Engineering Team. He focuses on Lawful AI and AI Governance, translating regulatory requirements into technical designs, functional architectures, and processes. His academic background as an engineer, business lawyer, and business informatics specialist enables him to take a holistic view of the aforementioned topics.