
How can multi-agent systems communicate? Is game theory the answer? 

Jonathan Aston
Aug 28, 2024

Multi-agent systems in AI are those in which multiple autonomous agents work together to achieve a desired outcome.

An agent in this context can be as generic as an entity that acts on another entity’s behalf. In multi-agent AI systems, this can be an AI agent (a bot) acting to achieve the goals of the people who build and run the process. There are many challenges in setting up multi-agent systems, but at the heart of them all is a fundamental question: how can the agents communicate? 

Interactions between agents can be either collaborative or competitive, but the overall task of the system still needs to be achieved. We may well see very diverse and differentiated ecosystems of multi-agent systems, and communication toward a common goal could therefore be a big challenge. We are familiar with more linear multi-agent AI systems, where the agents work sequentially: agent A collects data, agent B analyzes it, and agent C communicates it. But what if we have a system where agents work in parallel? 

Enter game theory – a mathematical framework designed to analyze interactions among rational decision-makers. Game theory offers a practical and effective approach to solving the communication challenges in AI multi-agent systems. 

Understanding game theory from a philosophical viewpoint 

From a philosophical perspective, game theory explores the nature of rational decision-making and strategic interaction. It addresses fundamental questions about how individuals, with their own preferences and goals, can make decisions that consider the actions and responses of others. How do the decisions you make influence others and how do you change the decisions you make based on the decisions of others?  

Game theory can be traced back to writings as far back as Sun Tzu’s The Art of War, where he wrote: 

  • Knowing the other and knowing oneself means in one hundred battles, there is no danger. 
  • Not knowing the other and knowing oneself equates to one victory for one loss. 
  • Not knowing the other and not knowing oneself means certain defeat in every battle. 

Game theory’s place in multi-agent systems 

Game theory provides tools for analyzing scenarios where multiple agents make decisions that affect each other’s outcomes. It relies on agents working in parallel and then coming together to pool their learning/outcomes for the common aims of the system. These interactions are often strategic where the outcome for each participant depends on the actions of all. By modeling these interactions, game theory helps in anticipating the behaviour of agents and designing mechanisms to achieve desired collective outcomes. 

In the context of AI multi-agent systems, game theory can be leveraged in three key ways: 

  1. Facilitate coordination 
  2. Enhance communication
  3. Optimize collective outcomes

Why game theory is a good fit 

Strategic decision-making 

Agents in a multi-agent system often operate in environments where they must make decisions that account for the potential actions of others. Game theory provides a structured approach to model these decisions. This applies in both: 

  • Cooperative scenarios: Agents can use game-theoretic principles to form coalitions and share resources. 
  • Competitive scenarios: Agents can predict opponents’ moves and strategize accordingly. 
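As a sketch of what modeling such decisions can look like in code, the Python below encodes an illustrative 2x2 payoff matrix (a classic prisoner’s dilemma, not a payoff structure from this article) and finds the pure-strategy Nash equilibria by checking mutual best responses:

```python
# Rows: agent A's actions; columns: agent B's actions.
# Each cell holds (payoff_A, payoff_B). Payoffs are illustrative.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 4),
    ("defect", "cooperate"): (4, 0),
    ("defect", "defect"): (1, 1),
}
ACTIONS = ["cooperate", "defect"]

def pure_nash_equilibria(payoffs, actions):
    """Return action profiles where neither agent gains by deviating alone."""
    equilibria = []
    for a in actions:
        for b in actions:
            pa, pb = payoffs[(a, b)]
            a_best = all(payoffs[(alt, b)][0] <= pa for alt in actions)
            b_best = all(payoffs[(a, alt)][1] <= pb for alt in actions)
            if a_best and b_best:
                equilibria.append((a, b))
    return equilibria

print(pure_nash_equilibria(PAYOFFS, ACTIONS))  # prints [('defect', 'defect')]
```

The equilibrium here is mutual defection even though mutual cooperation pays both agents more, which is exactly why incentive design (discussed next) matters when we want agents to land on the collectively better outcome.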

Incentive Alignment 

One of the core aspects of game theory is the design of incentives. Each agent has individual incentives, but principles like mechanism design can be used to align them. Mechanism design is the science of designing the rules of a game to achieve a specific outcome even when each participant is self-interested, thus aligning all agents to a common goal.  
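A classic textbook illustration of mechanism design (not specific to this article’s demos) is the second-price, or Vickrey, auction: the rules are chosen so that bidding one’s true valuation is each self-interested agent’s best strategy, so the item goes to whoever values it most.

```python
def second_price_auction(bids):
    """bids: dict mapping agent name -> bid. Returns (winner, price_paid).

    The highest bidder wins but pays only the second-highest bid, which
    is what makes truthful bidding a dominant strategy for every agent.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]  # winner pays the second-highest bid
    return winner, price

print(second_price_auction({"A": 10, "B": 7, "C": 4}))  # prints ('A', 7)
```

The design choice is the pricing rule: because the winner’s payment does not depend on their own bid, no agent can profit by misreporting its valuation, and individual incentives line up with the designer’s goal.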

What are some examples of game theory in multi-agent systems? 

Cooperative robotics 

In robotics, teams of robots often need to collaborate to complete tasks, such as automated manufacturing in a factory. Game theory can facilitate effective communication and coordination among robots to achieve both local aims (such as making an item with a piece of machinery) and global aims (such as creating a whole product). For example, many agents working together in a smart factory could make operations far more efficient than the same agents working alone or without any collective, collaborative aims. 

Below, we show a demonstration of two smart factory agents (GPT-3.5 Turbo) collaborating and compromising to achieve a collective aim. A head of quality control and a head of production must decide where to spend their inspection time: more inspection ensures the best quality, but compromising on quality control can lead to higher production and therefore higher profit.

Two phenomena are interesting to observe in the demo. The first is the agents compromising and collaborating to reach a common outcome; the second is that repeated runs of the same prompt produce different responses and different agreed outcomes. What stays consistent, however, is the pattern of negotiation, collaboration, and compromise, showing how shared incentives can ensure that, whatever the final outcome, all agents work toward the same aim.  
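The demo itself is an LLM dialogue, but the kind of compromise the two agents converge on can be sketched as a toy bargaining problem. The utility functions below are illustrative assumptions standing in for the agents’ preferences, not the demo’s actual prompts or payoffs:

```python
def quality_utility(inspection_hours):
    # Quality control values inspection time, with diminishing returns
    # (illustrative assumption).
    return 10 * inspection_hours ** 0.5

def production_utility(inspection_hours, total_hours=8):
    # Hours not spent on inspection go to production (illustrative assumption).
    return 3 * (total_hours - inspection_hours)

def negotiate(total_hours=8, step=0.5):
    """Pick the inspection-time split that maximizes joint utility,
    i.e. the compromise both agents would accept under a shared incentive."""
    candidates = [i * step for i in range(int(total_hours / step) + 1)]
    return max(candidates,
               key=lambda h: quality_utility(h) + production_utility(h, total_hours))

print(negotiate())  # 3.0 hours of inspection under these illustrative utilities
```

The point of the sketch is the shape of the problem: neither agent gets its extreme preference (all hours on inspection, or none), and a shared objective makes the middle-ground split the rational agreement.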

Traffic management systems 

In urban environments, traffic lights, autonomous vehicles, and human drivers represent a complex multi-agent system. Game theory can be used to optimize traffic flow by modeling the interactions between these agents. For instance, adaptive traffic signal control can be framed as a game where each traffic light is an agent that decides its signaling strategy based on the traffic conditions and the strategies of neighboring lights. By using game-theoretic algorithms, the system can minimize congestion and reduce travel time while being fair to all road users. 

Below, we show a demonstration of this with two traffic light agents (GPT-3.5 Turbo), each arguing that the cars waiting at its light should be prioritized. We see them compromising and coming to agreements not only within an individual session but between sessions as well, acting in a tit-for-tat way and making agreements for the future. 

We can see an interaction occurring here whereby the obvious action of letting the traffic light with the most cars go first is not always taken. The traffic lights are negotiating and compromising between sessions to make the strategy fairer than that. You may ask whether this is necessary, since for the best traffic flow we should let the most cars through first. However, put yourself in the position of someone who lives on the street with fewer cars: would you prefer the most-cars-first approach, or would you want your traffic light to fight for your right to go first occasionally? 
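The turn-taking behaviour described above can be sketched as a simple scheduling rule. The rule below is an illustrative assumption standing in for the agents’ negotiated agreements, not the demo’s actual logic: each light remembers who went first last session and takes turns, rather than always deferring to the busier approach.

```python
def choose_priority(queue_a, queue_b, went_first_last):
    """Alternate priority between lights A and B across sessions,
    unless one queue is empty."""
    if queue_a == 0:
        return "B"
    if queue_b == 0:
        return "A"
    # Tit-for-tat turn-taking: whoever waited last time goes first now.
    return "B" if went_first_last == "A" else "A"

# Simulate four sessions with (cars at A, cars at B) queue lengths.
history = []
went_first = "B"
for qa, qb in [(5, 2), (6, 1), (0, 3), (4, 4)]:
    went_first = choose_priority(qa, qb, went_first)
    history.append(went_first)

print(history)  # prints ['A', 'B', 'B', 'A']
```

Note the second session: A has six cars waiting and B only one, yet B goes first because it waited last time. That is exactly the fairness-over-throughput trade-off the demo’s agents negotiate.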

Conclusion 

Game theory offers a robust framework for addressing the challenges of communication in AI multi-agent systems, whether collaborative or competitive. Key concepts such as strategic decision-making, incentive alignment, and robustness to manipulation make game theory a useful tool for those building multi-agent systems.  

As AI multi-agent systems continue to improve, the integration of game-theoretic principles could well help us set up these systems to achieve the outcomes we desire and deliver more sophisticated, efficient, and intelligent multi-agent systems. 

So, the next time you ponder the complexity of multiple AI agents interacting seamlessly, remember that game theory might just be the hidden force to enable the orchestration of their harmony. 

About Generative AI Lab:

We are the Generative AI Lab, expert partners who help you confidently visualize and pursue a better, sustainable, and trusted AI-enabled future. We do this by understanding, pre-empting, and harnessing emerging trends and technologies. Ultimately, we make possible trustworthy and reliable AI that triggers your imagination, enhances your productivity, and increases your efficiency. We will support you with the business challenges you know about and the emerging ones you will need to know about to succeed in the future.

One of our three key focus areas is multi-agent systems, alongside small language models (SLM) and hybridAI. This blog is part of a series of blogs, Points of View (POVs) and demos around multi-agency to start a conversation about how multi-agency will impact us in the future. For more information on the AI Lab and more of the work we have done visit this page: AI Lab.

  

Meet the author

Jonathan Aston

Data Scientist, AI Lab, Capgemini’s Insights & Data
Jonathan Aston specialized in behavioral ecology before transitioning to a career in data science. He has been actively engaged in the fields of data science and artificial intelligence (AI) since the mid-2010s. Jonathan possesses extensive experience in both the public and private sectors, where he has successfully delivered solutions to address critical business challenges. His expertise encompasses a range of well-known and custom statistical, AI, and machine learning techniques.