TY - GEN
T1 - Global Interpretation for Patient Similarity Learning
AU - Huai, Mengdi
AU - Miao, Chenglin
AU - Liu, Jinduo
AU - Wang, Di
AU - Chou, Jingyuan
AU - Zhang, Aidong
N1 - Generated from Scopus record by KAUST IRTS on 2022-09-15
PY - 2020/12/16
Y1 - 2020/12/16
N2 - As an important family of learning problems in the healthcare domain, patient similarity learning has received much attention in recent years. Patient similarity learning aims to measure the similarity between a pair of patients according to their historical clinical information, which helps to improve the clinical predictions for the patient of interest. Although patient similarity learning has achieved tremendous success in many real-world applications, the lack of transparency behind the behavior of the learned patient similarity model impedes users from trusting the predicted results, which hampers its further applications in the real world. To tackle this problem, in this paper, we investigate how to enable interpretation in patient similarity learning and propose a global interpretation method for patient similarity learning. Based on the proposed global interpretation method, we can identify a minimal subset of data features that is sufficient in itself to justify the global predictions made by the well-trained patient similarity model. The identified minimal sufficient feature subset can help us to better understand the overall behaviors of the learned model across different subpopulations of patients. We also conduct experiments on real-world datasets to evaluate the performance of the proposed global interpretation method.
UR - https://ieeexplore.ieee.org/document/9313255/
UR - http://www.scopus.com/inward/record.url?scp=85100357593&partnerID=8YFLogxK
U2 - 10.1109/BIBM49941.2020.9313255
DO - 10.1109/BIBM49941.2020.9313255
M3 - Conference contribution
SN - 9781728162157
SP - 589
EP - 594
BT - Proceedings - 2020 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2020
PB - Institute of Electrical and Electronics Engineers Inc.
ER -