Abstract
Conversational recommender systems (CRS) aim to capture users' current intentions and provide recommendations through real-time, multi-turn conversational interactions. As a human-machine interactive system, a CRS must deliver a good user experience, yet most CRS methods neglect its importance. In this paper, we propose two key points for CRS to improve the user experience: (1) Speaking like a human: humans speak in different styles according to the current dialogue context. (2) Identifying fine-grained intentions: even for the same utterance, different users have diverse fine-grained intentions, which are related to their inherent preferences. Based on these observations, we propose a novel CRS model, coined Customized Conversational Recommender System (CCRS), which customizes the CRS model for each user from three perspectives. For human-like dialogue services, we propose a multi-style dialogue response generator that selects a context-aware speaking style for utterance generation. To provide personalized recommendations, we extract the user's current fine-grained intentions from the dialogue context under the guidance of the user's inherent preferences. Finally, to customize the model parameters for each user, we train the model from a meta-learning perspective. Extensive experiments and a series of analyses show the superiority of our CCRS on both the recommendation and dialogue services.
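The per-user customization via meta-learning mentioned above can be pictured as a small adaptation loop run on each user's own dialogue data. The sketch below is a minimal, hypothetical illustration in PyTorch, assuming a MAML-style inner loop; the function name `adapt_to_user`, the `user_batches` iterable, and the hyperparameters are assumptions made for illustration and do not reproduce the paper's actual training procedure.

```python
import torch
from torch import nn

def adapt_to_user(model: nn.Module, user_batches, inner_lr: float = 1e-3, steps: int = 3):
    """Hypothetical MAML-style inner loop: adapt shared CRS parameters to one user."""
    # Clone the shared (meta-learned) parameters so the global model stays untouched.
    adapted = {name: p.clone() for name, p in model.named_parameters()}
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(steps):
        for context, target in user_batches:
            # Run the model with the user-specific parameter copy.
            logits = torch.func.functional_call(model, adapted, (context,))
            loss = loss_fn(logits, target)
            # First-order gradient step on the cloned parameters only.
            grads = torch.autograd.grad(loss, list(adapted.values()))
            adapted = {name: p - inner_lr * g
                       for (name, p), g in zip(adapted.items(), grads)}

    # User-customized parameters, to be used for this user's recommendations/responses.
    return adapted
```

In this reading, the meta-learned parameters act as a shared initialization, and a few gradient steps on a single user's history yield user-specific parameters for both the recommendation and response-generation components; the paper's actual objective and update rule may differ.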
| Original language | English (US) |
|---|---|
| Title of host publication | Machine Learning and Knowledge Discovery in Databases |
| Publisher | Springer International Publishing |
| Pages | 740-756 |
| Number of pages | 17 |
| ISBN (Print) | 9783031263897 |
| DOIs | |
| State | Published - Mar 17 2023 |
| Externally published | Yes |
Bibliographical note
KAUST Repository Item: Exported on 2023-04-05

Acknowledgements: This work is also supported by the National Natural Science Foundation of China under Grants No. 61976204, U1811461, and U1836206. Zhao Zhang is supported by the China Postdoctoral Science Foundation under Grant No. 2021M703273.