AdaMMG: Adaptive Multimodal Graph Model for Knowledge Graph Completion (KGC)
Weidong Zhao, Shuaishuai Li, Dong Wang, Haiyang Wang
Problems Identified (3)
single-modal KGC representations: Existing KGC methods rely solely on single-modal textual information, which limits the expressiveness of entity representations.
indiscriminate neighbor aggregation: Existing KGC methods aggregate all neighbors without distinguishing their importance, weakening structural modeling.
missing KG entities and relations: Knowledge graphs are typically incomplete; knowledge graph completion aims to infer the missing entities and relations.
Proposed Solutions (4)
multimodal attention graph KGC: The paper proposes a multimodal KGC model with attention-based graph reasoning.
BERT-ResNet multimodal fusion: The model fuses textual and visual features extracted by BERT and ResNet into unified multimodal representations.
GAT adaptive neighbor aggregation: The model uses a Graph Attention Network (GAT) to adaptively weight and aggregate information from important neighbors.
ConvKB triple scoring: The model uses ConvKB to score candidate triples (a minimal sketch of the full pipeline follows this list).
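The sketch below illustrates the first two components under stated assumptions: a gated fusion of pre-extracted BERT text features (768-d) and ResNet-50 visual features (2048-d), followed by a single-head graph attention layer that weights neighbors before aggregation. Module names, dimensions, and the gating scheme are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of multimodal fusion + adaptive neighbor aggregation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultimodalFusion(nn.Module):
    # Gated fusion of pre-extracted BERT text features (768-d) and
    # ResNet-50 visual features (2048-d) into one entity embedding.
    def __init__(self, text_dim=768, vis_dim=2048, out_dim=200):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, out_dim)
        self.vis_proj = nn.Linear(vis_dim, out_dim)
        self.gate = nn.Linear(2 * out_dim, out_dim)

    def forward(self, text_feat, vis_feat):
        t = torch.tanh(self.text_proj(text_feat))
        v = torch.tanh(self.vis_proj(vis_feat))
        g = torch.sigmoid(self.gate(torch.cat([t, v], dim=-1)))  # per-dimension modality gate
        return g * t + (1.0 - g) * v


class GATLayer(nn.Module):
    # Single-head graph attention: each neighbor is weighted by a learned
    # attention score before aggregation, instead of uniform averaging.
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)
        self.attn = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, h, edge_index):
        src, dst = edge_index                                   # directed edges src -> dst
        hw = self.W(h)
        e = F.leaky_relu(self.attn(torch.cat([hw[dst], hw[src]], dim=-1))).squeeze(-1)
        alpha = torch.zeros_like(e)
        for node in dst.unique():                               # softmax over each node's in-edges
            mask = dst == node
            alpha[mask] = F.softmax(e[mask], dim=0)
        out = torch.zeros_like(h)
        out.index_add_(0, dst, alpha.unsqueeze(-1) * hw[src])   # attention-weighted neighbor sum
        return F.elu(out)
```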
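For the decoder, a minimal ConvKB-style scorer is sketched below: the head, relation, and tail embeddings are stacked into a dim x 3 matrix and convolved with 1x3 filters, and the flattened feature maps are mapped to a plausibility score. The toy usage at the end reuses the hypothetical fusion and GAT modules from the sketch above; all shapes and values are illustrative.

```python
# Hypothetical ConvKB-style triple scorer and a toy end-to-end pass.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvKB(nn.Module):
    # Scores a triple (h, r, t): the three embeddings are stacked into a
    # dim x 3 matrix and convolved with 1x3 filters spanning all three columns.
    def __init__(self, dim, num_filters=32):
        super().__init__()
        self.conv = nn.Conv2d(1, num_filters, kernel_size=(1, 3))
        self.fc = nn.Linear(dim * num_filters, 1)

    def forward(self, h, r, t):
        x = torch.stack([h, r, t], dim=-1).unsqueeze(1)   # (B, 1, dim, 3)
        x = F.relu(self.conv(x)).flatten(1)               # (B, num_filters * dim)
        return self.fc(x).squeeze(-1)                     # plausibility score per triple


if __name__ == "__main__":
    # Toy pass: 4 entities with random pre-extracted features,
    # entity 3 aggregating neighbors 0-2, then scoring one triple.
    fusion, gat, scorer = MultimodalFusion(), GATLayer(200), ConvKB(200)
    ent = fusion(torch.randn(4, 768), torch.randn(4, 2048))   # multimodal entity embeddings
    ent = gat(ent, torch.tensor([[0, 1, 2], [3, 3, 3]]))      # adaptive neighbor aggregation
    rel = torch.randn(1, 200)                                 # one relation embedding
    print(scorer(ent[0:1], rel, ent[3:4]))                    # score for (e0, r, e3)
```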
Results (3)
baseline outperformance on KGC benchmarks: The proposed model outperforms baseline methods on standard KGC benchmarks.
improved accuracy and robustness: The model achieves improved accuracy and robustness in completion tasks.
attention filters structural information: The attention mechanism filters structural information by weighting neighbor contributions.
Research Domain
Knowledge graph completion; multimodal graph neural networks