Yungi Kim and Taeri Kim's paper has been accepted in
Title: MONET: Modality-Embracing Graph Convolutional Network and Target-Aware Attention for Multimedia Recommendation
Authors: Yungi Kim*, Taeri Kim*, Won-Yong Shin, and Sang-Wook Kim
Abstract
In this paper, we focus on multimedia recommender systems that use graph convolutional networks (GCNs), where multimodal features are employed together with user-item interactions. Our study aims to exploit multimodal features more effectively in order to accurately capture users' preferences for items. To this end, we point out the following two limitations of existing GCN-based multimedia recommender systems: (L1) although the multimodal features of the items a user has interacted with can reveal her preferences for items, existing methods employ GCNs designed to focus only on capturing collaborative signals, so the multimodal features are insufficiently reflected in the final user/item embeddings; (L2) although a user decides whether she prefers a target item by considering its multimodal features, existing methods represent her with a single embedding regardless of the target item's multimodal features and then use that embedding to predict her preference for the target item. To address these issues, we propose a novel multimedia recommender system, named MONET, built on the following two core ideas: a modality-embracing GCN (MeGCN) and target-aware attention. Through extensive experiments on four real-world datasets, we demonstrate (i) the significant superiority of MONET over seven state-of-the-art competitors (up to 30.32% higher accuracy in terms of recall@20 than the best competitor) and (ii) the effectiveness of the two core ideas in MONET. All MONET code is available at: https://github.com/Kimyungi/MONET.
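For intuition on the first idea (L1), here is a minimal PyTorch sketch of what propagating multimodal features through the user-item graph alongside ID embeddings could look like. The names (`norm_adj`, `id_emb`, `modal_emb`), the LightGCN-style propagation, the layer averaging, and the sum fusion are all illustrative assumptions, not MONET's actual implementation; see the repository above for the real code.

```python
import torch

def propagate_layer(norm_adj, emb):
    """One LightGCN-style step: emb' = A_hat @ emb (no feature transform)."""
    return torch.sparse.mm(norm_adj, emb)

def modality_embracing_propagate(norm_adj, id_emb, modal_emb, n_layers=2):
    # Propagate both the ID embeddings (collaborative signal) and the
    # multimodal embeddings through the same normalized user-item graph,
    # so modal features are still present in the final representations.
    id_out, modal_out = id_emb, modal_emb
    id_acc, modal_acc = [id_emb], [modal_emb]
    for _ in range(n_layers):
        id_out = propagate_layer(norm_adj, id_out)
        modal_out = propagate_layer(norm_adj, modal_out)
        id_acc.append(id_out)
        modal_acc.append(modal_out)
    # Average the outputs of all layers, then fuse by summation
    # (an assumed fusion choice for this sketch).
    id_final = torch.stack(id_acc).mean(0)
    modal_final = torch.stack(modal_acc).mean(0)
    return id_final + modal_final
```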
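Likewise, for the second idea (L2), a hedged sketch of target-aware attention: the target item's multimodal embedding serves as the attention query over the items in the user's interaction history, so the resulting user representation changes depending on which item is being scored. The class and variable names and the scaled dot-product form are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TargetAwareAttention(nn.Module):
    """Sketch: build a user embedding conditioned on the target item."""

    def __init__(self, dim):
        super().__init__()
        self.query_proj = nn.Linear(dim, dim)  # projects the target item embedding
        self.key_proj = nn.Linear(dim, dim)    # projects interacted-item embeddings

    def forward(self, target_emb, history_embs):
        # target_emb: (batch, dim); history_embs: (batch, n_items, dim)
        q = self.query_proj(target_emb).unsqueeze(1)      # (batch, 1, dim)
        k = self.key_proj(history_embs)                   # (batch, n, dim)
        scores = (q * k).sum(-1) / k.size(-1) ** 0.5      # scaled dot product
        weights = F.softmax(scores, dim=-1)               # (batch, n)
        # Target-aware user embedding: attention-weighted sum of the history.
        return (weights.unsqueeze(-1) * history_embs).sum(1)

# Usage: the same user gets a different embedding per candidate item.
attn = TargetAwareAttention(dim=64)
history = torch.randn(8, 20, 64)      # 8 users, 20 interacted items each
target = torch.randn(8, 64)           # one candidate item per user
user_emb = attn(target, history)      # (8, 64), depends on the target
score = (user_emb * target).sum(-1)   # predicted preference per pair
```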