Towards Quality and General Knowledge Representation Learning

  • Zhenwei Tang, King Abdullah University of Science and Technology (KAUST) (Creator)



Knowledge representation learning (KRL) has been a long-standing and challenging topic in artificial intelligence. Recent years have witnessed rapidly growing research interest in, and industrial applications of, KRL. However, two important aspects of KRL remain unsatisfactory in both academia and industry: the quality and the generalization capabilities of the learned representations. This thesis presents a set of methods aimed at learning high-quality distributed knowledge representations and further empowering the learned representations for more general reasoning tasks over knowledge bases. On the one hand, we identify the false-negative issue and the data-sparsity issue in the knowledge graph completion (KGC) task, both of which can limit the quality of the learned representations. Correspondingly, we design a ranking-based positive-unlabeled learning method along with an adversarial data augmentation strategy for KGC, and unify them seamlessly to improve the quality of the learned representations. On the other hand, although recent works remarkably expand the range of supported neural reasoning tasks by answering multi-hop logical queries, their generalization capabilities are still limited to inductive reasoning tasks that can only provide entity-level answers. In fact, abductive reasoning, which provides concept-level answers to queries, is also in great demand by online users and a wide range of downstream tasks. Therefore, we design a joint abductive and inductive knowledge representation learning and reasoning system by incorporating, representing, and operating on concepts. Extensive experimental results, along with case studies, demonstrate the effectiveness of our methods in improving the quality and generalization capabilities of the learned distributed knowledge representations.
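To make the KGC setting concrete, the following is a minimal sketch of how a distributed representation model scores and ranks candidate facts. It uses a TransE-style translational score on toy data; the entity names, embedding dimension, and scoring function are illustrative assumptions, not the thesis's actual model or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy knowledge graph embeddings (hypothetical data, for illustration only):
# each entity and relation is a learned vector in the same space.
dim = 8
entities = {name: rng.normal(size=dim) for name in ["Alice", "Bob", "Paris"]}
relations = {name: rng.normal(size=dim) for name in ["knows", "lives_in"]}

def score(head: str, relation: str, tail: str) -> float:
    """TransE-style plausibility score: -||h + r - t||.

    Higher (closer to zero) means the triple (head, relation, tail)
    is considered more plausible by the embeddings.
    """
    h, r, t = entities[head], relations[relation], entities[tail]
    return -float(np.linalg.norm(h + r - t))

def rank_tails(head: str, relation: str) -> list:
    """KGC as ranking: order all candidate tails for (head, relation, ?)."""
    return sorted(entities, key=lambda t: score(head, relation, t), reverse=True)

print(rank_tails("Alice", "knows"))
```

In this ranking view, unobserved triples are treated as negatives during training, which is exactly where the false-negative issue arises: a missing triple may simply be unlabeled rather than false, motivating a positive-unlabeled treatment.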
Date made available: 2022
Publisher: KAUST Research Repository