A Human-in-the-Loop, LLM-Centered Architecture for Knowledge-Graph Question Answering
Larissa Pusch, Tim Conrad, Alexandre Courtiol
Open MIND
Problems Identified (3)
LLM knowledge limitations: LLMs are limited in knowledge-intensive domains by hallucinations, outdated information, and limited explainability.
RAG multi-hop limitation: Text-based retrieval-augmented generation struggles with multi-hop reasoning.
KG query-language barrier: Knowledge graphs support precise and explainable querying but require users to know query languages.
Proposed Solutions (2)
Human-in-the-loop LLM KGQA: An interactive human-in-the-loop framework uses LLMs to generate and explain Cypher graph queries while users refine them through natural language.
KGQA benchmark evaluation: The work evaluates LLM query explanation and fault detection on a 90-query synthetic movie KG benchmark, plus smaller query-generation experiments on real-world KGs.
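The human-in-the-loop framework above can be sketched as a generate-explain-refine loop. This is a minimal illustration, not the authors' implementation: `call_llm` is a hypothetical stand-in for a real LLM API, stubbed here with canned responses so the sketch is self-contained and runnable.

```python
# Sketch of the human-in-the-loop KGQA loop: the LLM generates a Cypher
# query, explains it in plain language, and the user either accepts it
# or supplies natural-language feedback for another refinement round.

def call_llm(prompt: str) -> str:
    # Hypothetical LLM call; a real system would query a model API.
    # Stubbed with fixed responses so this sketch runs standalone.
    if prompt.startswith("Explain"):
        return "Returns the names of all people who acted in any movie."
    return "MATCH (p:Person)-[:ACTED_IN]->(m:Movie) RETURN p.name"

def generate_query(question: str) -> str:
    return call_llm(f"Generate a Cypher query for: {question}")

def explain_query(query: str) -> str:
    return call_llm(f"Explain what this Cypher query does:\n{query}")

def refine_query(query: str, feedback: str) -> str:
    return call_llm(f"Revise this Cypher query per feedback '{feedback}':\n{query}")

def interactive_kgqa(question: str, get_feedback) -> str:
    """Generate, explain, and iteratively refine a Cypher query.

    `get_feedback(query, explanation)` returns None when the user
    accepts the query, or a natural-language correction otherwise.
    """
    query = generate_query(question)
    while True:
        explanation = explain_query(query)
        feedback = get_feedback(query, explanation)
        if feedback is None:  # user accepts the query
            return query
        query = refine_query(query, feedback)
```

In a deployed version, `get_feedback` would be an interactive prompt and the accepted query would be executed against the Neo4j database, keeping the KG itself as the source of truth.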
Results (3)
Improved KG accessibility
Preserved factual accuracy and semantic rigor
Cross-domain performance insight
Research Domain
Knowledge-graph question answering with LLMs