
A Human-in-the-Loop, LLM-Centered Architecture for Knowledge-Graph Question Answering

Tags: 2026, system implementation, incremental, system

Larissa Pusch, Tim Conrad, Alexandre Courtiol

Open MIND

DOI: https://doi.org/10.48550/arxiv.2602.05512
OpenAlex: W7128063870
arXiv: 2602.05512
URLs Found: 1
Internal Citations: 0
Authors: 3
Abstract Quality: usable
GPT-5.5 Abstract Analysis

Problems Identified (3)

LLM knowledge limitations: LLMs are limited in knowledge-intensive domains by hallucinations, outdated information, and limited explainability.

RAG multi-hop limitation: Text-based retrieval-augmented generation struggles with multi-hop reasoning.

KG query-language barrier: Knowledge graphs support precise and explainable querying but require users to know query languages.

Proposed Solutions (2)

Human-in-the-loop LLM KGQA: An interactive human-in-the-loop framework uses LLMs to generate and explain Cypher graph queries while users refine them through natural language.

KGQA benchmark evaluation: The work evaluates LLM query explanation and fault detection on a 90-query synthetic movie-KG benchmark, plus smaller query-generation experiments on real-world KGs.
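The generate–explain–refine loop described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: `llm` is any prompt-to-text callable, and the function names (`generate_query`, `explain_query`, `refine_loop`) and prompt wording are hypothetical, not the paper's actual interfaces.

```python
# Minimal sketch of a human-in-the-loop KGQA loop. Assumes a generic
# `llm` callable (prompt -> text); all names and prompts here are
# illustrative, not the paper's implementation.

def generate_query(llm, question: str, schema: str) -> str:
    """Ask the LLM to translate a natural-language question into Cypher."""
    return llm(f"Schema:\n{schema}\n\nWrite a Cypher query for: {question}")

def explain_query(llm, query: str) -> str:
    """Ask the LLM to explain a Cypher query in plain language."""
    return llm(f"Explain step by step what this Cypher query returns:\n{query}")

def refine_loop(llm, question: str, schema: str, user_feedback) -> str:
    """Generate a query, show its explanation, and let the user refine it.

    `user_feedback` maps an explanation to either None (accept the query)
    or a natural-language correction folded into the next attempt.
    """
    query = generate_query(llm, question, schema)
    while True:
        explanation = explain_query(llm, query)
        feedback = user_feedback(explanation)
        if feedback is None:  # user accepts: this query is run against the KG
            return query
        question = f"{question}\nCorrection from user: {feedback}"
        query = generate_query(llm, question, schema)
```

The key design point this sketch captures is that the user never writes Cypher: they only read the LLM's explanation and answer in natural language, which is what lets the framework keep the KG's precision while removing the query-language barrier.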

Results (3)

Improved KG accessibility

Preserved factual accuracy and semantic rigor

Cross-domain performance insight

Research Domain

Knowledge-graph question answering with LLMs
