A Framework for Explaining LLM Reasoning with Knowledge Graphs
Moein Shirdel, Jarek Szlichta, Parke Godfrey, Joel Rorseth, Divesh Srivastava, Lukasz Golab
SSRN Electronic Journal
Problems Identified (3)
Opaque LLM reasoning: LLM decision processes and outputs are difficult to interpret and may contain factual inconsistencies.
Lack of external-knowledge alignment: Existing methods typically do not align LLM reasoning with external knowledge sources.
LLM reasoning gaps and hallucinations: LLM outputs can contain reasoning gaps, hallucinations, and missing facts that users need to identify.
Proposed Solutions (4)
KG-grounded LLM reasoning validation framework: AprèsCoT is a lightweight, model-agnostic framework that validates LLM reasoning by grounding it in an external knowledge graph.
Subgraph retrieval for query grounding: The framework retrieves a knowledge graph subgraph relevant to the input query.
LLM-output triple extraction: The framework converts LLM outputs into factual triples for comparison with structured knowledge.
Triple-to-KG matching: The framework aligns extracted triples with entities and relations in the retrieved knowledge graph subgraph.
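The validation pipeline described above — ground the query in a retrieved subgraph, extract triples from the LLM output, and match them against the subgraph — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the pipe-delimited triple format, the function names, and the toy subgraph are all assumptions made for the example; a real system would use an LLM or information-extraction model for the extraction step and entity/relation linking for matching.

```python
# Toy knowledge-graph subgraph as (head, relation, tail) triples,
# standing in for the subgraph retrieved for the input query.
kg_subgraph = {
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "field", "physics"),
    ("Warsaw", "capital_of", "Poland"),
}

def extract_triples(llm_output: str) -> list:
    """Parse pseudo-structured LLM output lines of the form
    'head | relation | tail' into triples (illustrative stand-in
    for a learned extraction step)."""
    triples = []
    for line in llm_output.strip().splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            triples.append(tuple(parts))
    return triples

def match_against_kg(triples, kg):
    """Label each extracted triple as supported (present in the
    subgraph) or unsupported (a candidate hallucination or gap)."""
    return {t: (t in kg) for t in triples}

llm_output = """
Marie Curie | born_in | Warsaw
Marie Curie | born_in | Paris
"""

report = match_against_kg(extract_triples(llm_output), kg_subgraph)
for triple, supported in report.items():
    print(triple, "->", "supported" if supported else "unsupported")
```

Exact set membership is used here only for simplicity; in practice the matching step must tolerate paraphrased entity and relation names, which is where the retrieval-versus-matching trade-off noted in the results arises.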
Results (3)
Traceable structured explanations:
Retrieval and matching trade-off analysis:
Surfaces reasoning failures:
Research Domain
LLM reasoning explainability with knowledge graphs