
Knowledge Graphs improve Gen AI
Validating results builds trust for organizations

Joakim Nilsson
18th July 2024

Generative AI can make recommendations that will transform decision-making for organizations – but how can people trust the answers Gen AI provides? Knowledge Graphs can play a vital role in ensuring the accuracy of Gen AI’s output, bolstering its reliability and effectiveness.

In Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, a supercomputer called Deep Thought is asked for the answer to “Life, the universe, and everything.” After 7.5 million years, Deep Thought responds “42.” Representatives from the civilization that built Deep Thought immediately ask how it arrived at the answer, but the computer cannot tell them. When Adams wrote this scene in the 1970s, he was (arguably) making a joke – but today, many people find themselves in this situation when interacting with generative AI (Gen AI).

Gen AI works by drawing upon millions of pieces of data – a volume no human could analyze effectively. Businesses are excited by its potential to deliver valuable insights and well-informed predictions – but if different Gen AI tools are asked the same question and give different answers, how can an organization decide which result to trust? How would a person fact-check the responses?

Addressing the shortcomings of unstructured, implicit data

The challenge stems from the Large Language Models (LLMs) that Gen AI relies upon. An LLM encodes massive amounts of data, but that knowledge is stored in an unstructured, implicit manner, which makes it difficult to investigate how a Gen AI tool arrived at its answer.

Since the release of ChatGPT in late 2022, Neo4j and Capgemini have been working, independently and in collaboration, to overcome this challenge by using Knowledge Graphs, which store complex, structured data and the relationships between data points. Instead of relying solely on LLMs to generate database queries directly, our solution incorporates a high-level interface that allows the LLM to interact seamlessly with a Knowledge Graph via database query templates. These templates serve as structured frameworks, guiding the LLM to fill in specific parameters based on the user's request, which simplifies the LLM's task by abstracting away complex logic. (See Figure 1.) This separation of concerns lets the LLM focus on natural language understanding and generation, while the query templates handle the technical aspects of database interaction – improving the overall accuracy and efficiency of retrieval.
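To make the idea of a query template concrete, here is a minimal sketch in Python using the official Neo4j driver. The Cypher statement, parameter names, and the Movie/Person schema are illustrative assumptions rather than the templates used in the project; the point is that the query logic stays fixed, and the LLM only supplies parameter values.

```python
from neo4j import GraphDatabase

# Illustrative query template: the Cypher logic is fixed; only the
# parameters ($title, $limit) are filled in from the LLM's output.
PEOPLE_FOR_MOVIE_TEMPLATE = """
MATCH (m:Movie {title: $title})<-[r:ACTED_IN|DIRECTED]-(p:Person)
RETURN p.name AS person, type(r) AS role
LIMIT $limit
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def run_template(template: str, **params) -> list:
    """Execute a fixed Cypher template with LLM-supplied parameter values."""
    with driver.session() as session:
        return [record.data() for record in session.run(template, **params)]

# Given the user's question, the LLM only has to produce something like
# {"title": "The Matrix", "limit": 10} rather than a full Cypher query.
print(run_template(PEOPLE_FOR_MOVIE_TEMPLATE, title="The Matrix", limit=10))
```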

In this example, the query template uses a vector search to locate the nodes in the Knowledge Graph that correspond to the entities in the user's question. The matched nodes are then used to retrieve their surrounding neighborhoods, or the shortest paths between them, within the graph. This contextualizes the retrieved information and supports a more comprehensive answer to the user's query. More information about this specific query template is available in this blog post.
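A hedged sketch of that pattern follows, assuming a Neo4j 5 vector index named `entity_embeddings` and an embedding of the user's question supplied by the caller; the index name, hop depth, and result limit are assumptions for illustration only.

```python
# Vector-search-plus-neighborhood retrieval, as described above.
# Index name, schema, and hop depth are illustrative assumptions.
VECTOR_NEIGHBORHOOD_TEMPLATE = """
CALL db.index.vector.queryNodes('entity_embeddings', $k, $question_embedding)
YIELD node, score
MATCH path = (node)-[*1..2]-(neighbor)
RETURN node, score, path
LIMIT 50
"""

def retrieve_context(session, question_embedding: list, k: int = 5) -> list:
    """Find the nodes closest to the question embedding, then pull their
    1-2 hop neighborhoods so the LLM sees the surrounding context."""
    return session.run(
        VECTOR_NEIGHBORHOOD_TEMPLATE,
        k=k,
        question_embedding=question_embedding,
    ).data()
```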

Tailored templates

Query templates can be tailored to discrete domains, such as tracing dependencies within supply chains or running aggregation operations for business intelligence, enabling organizations to address specific challenges. This targeted approach leverages the LLM's capabilities to generate insights that are not only relevant but deeply informed by the underlying data structures, helping enterprises efficiently transform raw data into actionable intelligence.
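As a purely hypothetical illustration of such domain tailoring, the templates below sketch a supply-chain dependency query and a business-intelligence aggregation; the labels, relationship types, and properties are assumed for the example, not taken from any real customer schema.

```python
# Hypothetical supply-chain template: everything a product depends on,
# up to a bounded depth, with the suppliers of each component.
UPSTREAM_DEPENDENCIES_TEMPLATE = """
MATCH (p:Product {sku: $sku})-[:DEPENDS_ON*1..4]->(part:Product)
OPTIONAL MATCH (part)-[:SUPPLIED_BY]->(s:Supplier)
RETURN part.name AS component, collect(DISTINCT s.name) AS suppliers
"""

# Hypothetical BI template: revenue per region for a date range the
# LLM extracts from the user's request.
REVENUE_BY_REGION_TEMPLATE = """
MATCH (o:Order)-[:PLACED_IN]->(r:Region)
WHERE o.date >= date($from) AND o.date < date($to)
RETURN r.name AS region, sum(o.value) AS revenue
ORDER BY revenue DESC
"""
```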

That said, the complexity of business requirements often exceeds what a single query template can accommodate when an LLM interfaces with a Knowledge Graph. Therefore, it’s essential to embrace an adaptive approach, providing a rich assortment of query templates that can be selectively deployed to match specific business scenarios. Leveraging the LLM’s capability to invoke functions, Gen AI can dynamically select and employ multiple query templates based on the context of the user’s request or the specific task at hand. This results in a more nuanced and flexible interaction with the database, and significantly amplifies the LLM’s ability to solve intricate business intelligence and analytics problems. (See Figure 2.)

The LLM-powered movie agent shown in Figure 2 uses several tools, orchestrated through carefully designed query templates, to interact with the Knowledge Graph:

  • The information tool retrieves data about movies or individuals, ensuring the agent has access to the latest and most relevant information.
  • The recommendation tool provides movie recommendations based on user preferences and input.
  • The memory tool stores information about user preferences in the Knowledge Graph, allowing for a personalized experience over multiple interactions.

More information on this movie agent project can be found on GitHub.
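The sketch below shows, in broad strokes, how tools of this kind could be declared with LangChain's tool-calling interface, reusing the `run_template` helper from the earlier sketch. The function names, signatures, and Cypher inside them are simplified assumptions rather than the GitHub project's actual code.

```python
from langchain_core.tools import tool

# Hypothetical Cypher templates backing each tool (schema is assumed).
INFORMATION_TEMPLATE = """
MATCH (m:Movie {title: $title})<-[:ACTED_IN]-(a:Person)
RETURN m.title AS title, m.year AS year, collect(a.name) AS cast
"""
RECOMMENDATION_TEMPLATE = """
MATCH (:Movie {title: $title})-[:IN_GENRE]->(g:Genre)<-[:IN_GENRE]-(rec:Movie)
RETURN rec.title AS recommendation, count(g) AS shared_genres
ORDER BY shared_genres DESC LIMIT 5
"""
MEMORY_TEMPLATE = """
MERGE (u:User {id: $user_id})
MERGE (u)-[:LIKES]->(:Preference {value: $preference})
"""

@tool
def information(title: str) -> list:
    """Retrieve up-to-date details about a movie from the Knowledge Graph."""
    return run_template(INFORMATION_TEMPLATE, title=title)

@tool
def recommendation(title: str) -> list:
    """Recommend movies that share genres with one the user liked."""
    return run_template(RECOMMENDATION_TEMPLATE, title=title)

@tool
def memory(user_id: str, preference: str) -> list:
    """Persist a stated user preference in the graph for later sessions."""
    return run_template(MEMORY_TEMPLATE, user_id=user_id, preference=preference)

# A function-calling LLM (for example via LangChain's bind_tools) can then
# decide, per request, which template to invoke:
# llm_with_tools = ChatOpenAI(model="gpt-4o").bind_tools([information, recommendation, memory])
```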

“We expect Knowledge Graphs to help Large Language Models embrace iterative processes to improve their output.”

Democratizing data and empowering business users

The Knowledge Graph acts as a bridge, translating user intent into specific, actionable queries the LLM can execute with increased accuracy and reliability. Because any user – regardless of technical knowledge – can inspect how the LLM arrived at its answers, people can validate the information sources themselves (a minimal sketch of this traceability follows the list below). Benefits include:

  • Results that are explainable, repeatable, and transparent. This can enhance trust in Gen AI in everything from research and discovery in life sciences to digital twins in sectors such as manufacturing, aerospace, and telecommunications.
  • Better-informed and more trusted business decisions.
  • More time for experts such as prompt engineers to concentrate on tasks that require their specialized skills.
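One assumed way to deliver that traceability, reusing the hypothetical `run_template` helper from the earlier sketch, is to return the executed template and its parameters alongside the answer, so any user or auditor can see exactly what was asked of the graph and re-run it.

```python
def answer_with_provenance(template: str, **params) -> dict:
    """Return the answer together with the exact query and parameters used,
    so a non-technical user can inspect and validate the sources."""
    results = run_template(template, **params)
    return {
        "answer": results,
        "provenance": {"query": template.strip(), "parameters": params},
    }
```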

As we look ahead, we expect Knowledge Graphs to help Large Language Models embrace iterative processes to improve their output. Our enthusiasm is shared by other experts in the field, including Andrew Ng of DeepLearning.AI, underscoring the widespread recognition of Knowledge Graphs' transformative potential. As we help create the future, it's clear the journey with these intelligent systems is only just beginning – and is moving much faster than Deep Thought ever did – so it's critical that people are given the means to fact-check generative AI as it evolves.

INNOVATION TAKEAWAYS:

TRUST IS IMPORTANT – Knowledge Graphs can boost confidence in the output from Gen AI systems – making it easier for people and organizations to embrace them.

TOOLS FOR THE TOOL – With Knowledge Graphs, Large Language Models can dynamically employ multiple query templates to match specific business scenarios, making interactions with Gen AI more nuanced.

DEMOCRATIZING DATA – By making it easier for everyone in an organization to interact with generative AI, Knowledge Graphs can free up experts to focus on tasks that require their specific skills.

Interesting read?

Capgemini’s Innovation publication, Data-powered Innovation Review | Wave 8, features contributions from leading experts at Capgemini and esteemed partners such as Dassault Systèmes, Neo4j, and The Open Group. Delve into topics ranging from virtual twins to climate tech, along with a compelling update from our ‘Gen Garage’ Labs highlighting how data fosters sustainability, diversity, and inclusivity. Embark on a voyage of innovation today. Find all previous Waves here.

Author

Joakim Nilsson

Knowledge Graph Lead, Insights & Data Sweden, Capgemini
Based in Malmö, Sweden, Joakim is part of the CTO office, where he drives the expansion of Knowledge Graphs in the region. He has been involved in Knowledge Graph projects as a consultant for both Capgemini and Neo4j. Joakim holds a master’s degree in mathematics and has been working with Knowledge Graphs since 2021.

Tomaz Bratanic

Senior GenAI Developer, Neo4j
Tomaz Bratanic has extensive experience with graphs, machine learning, and generative AI. He has written an in-depth book about using graph algorithms in practical examples. Nowadays, he focuses on generative AI and LLMs by contributing to popular frameworks like LangChain and LlamaIndex and writing blog posts about LLM-based applications.

Magnus Carlsson

VP & Head of CoE – Insights and Data, Sweden, Capgemini
All businesses and organizations can be run smarter and more efficiently with the help of data, analytics, and AI. Many of the most pressing challenges we face today can be solved using data. Magnus is passionate about solving real-world problems and developing new businesses based on data and the latest technology.