The rise of autonomous AI agents and the challenges

Pascal Brier
Oct 2, 2024

It seems our predictions were spot-on: AI agents are being announced everywhere and have become the new business conversation topic (some hype, maybe?).

Indeed, the concept of multi-agent systems, where multiple AI agents interact and cooperate to achieve defined objectives, is very promising. No longer limited to simple task execution, AI agents are now evolving towards greater autonomy, capable of making decisions, learning from their environments, and performing complex actions without continuous human intervention.

But as we step into this future, I can’t help but ask myself: What will govern the interactions between AI agents when they become autonomous?

To understand this question, we can draw a parallel with human social behavior. As individuals, our interactions are shaped by character, social norms, cultural values, learned behaviors, and a myriad of other rules that are implicitly followed by all (at least in theory!). These mechanisms allow us to collaborate, make decisions, and solve conflicts when we disagree.

AI currently lacks this framework for navigating complex and unexpected situations. As an interesting example, my friend Brett Bonthron shared how his driverless taxi froze in place when faced with the chaos of a traffic accident in front of it: https://lnkd.in/eGTCuMgS
An unexpected situation that the average human would have navigated easily utterly confounded the vehicle's AI systems (funnily enough, Brett eventually had to exit his driverless taxi and call for a good old human-driven one).

In the future, what will happen when several AI agents run into each other and have to reach a clear outcome, but their assigned tasks are in contradiction? Who will go first? Who will have to step back and give priority to the other?
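To make the question concrete, here is a minimal sketch of one possible answer: a deterministic tie-breaking rule that two autonomous agents could each apply independently and still reach the same conclusion. The agent names, priority scheme, and tiebreak are entirely hypothetical illustrations, not something from the article:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Agent:
    name: str
    task_priority: int  # higher means a more urgent assigned task

def who_goes_first(a: Agent, b: Agent) -> Agent:
    """Resolve a head-on conflict: the higher task priority wins;
    on a tie, fall back to a deterministic rule (alphabetical name)
    so both agents compute the same winner without negotiating."""
    if a.task_priority != b.task_priority:
        return a if a.task_priority > b.task_priority else b
    return min((a, b), key=lambda agent: agent.name)

courier = Agent("courier-bot", task_priority=2)
cleaner = Agent("cleaning-bot", task_priority=1)
print(who_goes_first(courier, cleaner).name)  # courier-bot
```

The point of the sketch is that such a rule only works if every agent has agreed on it in advance, which is exactly the kind of shared convention humans absorb implicitly and AI agents currently lack.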

If you want to learn more about this, our colleague Jonathan Aston from our Capgemini Generative AI Lab recently posted a very interesting piece exploring how Game Theory may provide some of the answers:
https://lnkd.in/e_efTnY9

In the physical world, individuals essentially follow three main tracks to resolve such conflicts: we follow the rules of courtesy, we negotiate, or we go to war (figuratively or not). Will AI agents follow similar reasoning?
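As a toy illustration of how game theory frames such a standoff, here is a minimal sketch of the classic Hawk-Dove game, where two agents contesting a resource can either insist ("hawk") or yield ("dove"). The payoff values are hypothetical, chosen only to make mutual escalation costly:

```python
from itertools import product

# Hawk-Dove game: two agents contest a resource of value V;
# mutual escalation costs C (choosing C > V makes fighting unattractive).
V, C = 2, 6

def payoff(a: str, b: str) -> float:
    """Payoff to the first agent when it plays `a` and the other plays `b`."""
    if a == "hawk" and b == "hawk":
        return (V - C) / 2     # both escalate: split the value minus the fight cost
    if a == "hawk" and b == "dove":
        return V               # the escalator takes everything
    if a == "dove" and b == "hawk":
        return 0               # the yielder gets nothing
    return V / 2               # both yield: share peacefully

# Find the pure-strategy Nash equilibria: outcomes where neither agent
# gains by unilaterally switching its strategy.
strategies = ("hawk", "dove")
equilibria = [
    (a, b) for a, b in product(strategies, repeat=2)
    if all(payoff(other, b) <= payoff(a, b) for other in strategies)
    and all(payoff(other, a) <= payoff(b, a) for other in strategies)
]
print(equilibria)  # [('hawk', 'dove'), ('dove', 'hawk')]
```

The stable outcomes are precisely the asymmetric ones: one agent insists and the other yields. In other words, the mathematics itself predicts that some convention is needed to decide who plays which role, which is the open question for autonomous agents.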

Meet the author

Pascal Brier

Group Chief Innovation Officer, Capgemini
Pascal Brier was appointed Group Chief Innovation Officer and member of the Group Executive Committee on January 1st, 2021. In this position, Pascal oversees Technology, Innovation and Ventures for the Group. Pascal holds a Master's degree from EDHEC and was voted "EDHEC of the Year" in 2017.