Dongho Jeong (정동호) and Taeri Kim (김태리)'s paper has been accepted in


Title: MELON: Learning Multi-Aspect Modality Preferences for Accurate Multimedia Recommendation
Authors: Dongho Jeong, Taeri Kim, Donghyeon Cho, and Sang-Wook Kim
Abstract
Existing multimedia recommender systems have made great efforts to predict user preferences for items by utilizing behavioral similarities between users and the modality features of items a user has interacted with. However, we identify two key limitations in existing methods regarding preferences for modality features: (L1) although preferences for modality features are an important aspect of users’ preferences, existing methods only leverage neighbors with similar interactions and do not consider neighbors who may have similar preferences for modality features while having different interactions; (L2) although the modality features of a user and an item may have a complex geometric relationship in the latent space, existing methods overlook this relationship and thus face challenges in capturing it precisely. To address these two limitations, we propose a novel multimedia recommendation framework, named MELON, which consists of two core modules and is built upon an existing multimedia recommendation backbone: (Idea 1) a Modality-cEntered embedding extraction module; (Idea 2) a reLatiOnship-ceNtered embedding extraction module. We validate the effectiveness of MELON through extensive experiments on four real-world datasets, showing 10.51% higher accuracy than the best competitor in terms of recall@10. The code and dataset of MELON will be made available upon acceptance.
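For readers unfamiliar with the evaluation metric mentioned above, recall@10 measures, for each user, the fraction of that user's held-out relevant items that appear among the top 10 recommended items. Below is a minimal, hedged sketch of this standard definition (the item IDs and lists are illustrative, not from the paper):

```python
def recall_at_k(ranked_items, relevant_items, k=10):
    """Fraction of a user's relevant items that appear in the top-k ranking."""
    if not relevant_items:
        return 0.0
    top_k = set(ranked_items[:k])          # top-k recommended items
    hits = len(top_k & set(relevant_items))  # relevant items retrieved
    return hits / len(relevant_items)

# Toy example: 2 of the user's 4 relevant items fall inside the top 10.
ranked = [5, 9, 1, 7, 3, 8, 2, 6, 4, 0, 11, 12]
relevant = [1, 4, 11, 12]
print(recall_at_k(ranked, relevant, k=10))  # → 0.5
```

The per-user scores are then averaged over all test users to give the dataset-level recall@10 that papers typically report.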

Update: