Hyunsik Yoo and Yeon-Chang Lee’s paper has been accepted in
Title: Disentangling Degree-related Biases and Interest for Out-of-Distribution Generalized Directed Network Embedding
Authors: Hyunsik Yoo, Yeon-Chang Lee, Kijung Shin, and Sang-Wook Kim
Abstract
The goal of directed network embedding is to represent the nodes in a given directed network as embeddings (i.e., low-dimensional vectors) that preserve the asymmetric relationships between nodes. While a number of directed network embedding methods have been proposed, we empirically show that the existing methods lack out-of-distribution generalization ability under degree-related distributional shifts. To mitigate this problem, we propose ODIN (Out-of-Distribution Generalized Directed Network Embedding), a new directed network embedding method in which we model multiple factors in the formation of directed edges. Then, for each node, ODIN learns multiple embeddings, each of which preserves its corresponding factor, by disentangling interest factors from biases related to the in- and out-degrees of nodes. Our experiments on four real-world directed networks demonstrate that disentangling multiple factors enables ODIN to yield out-of-distribution generalized embeddings that are consistently effective under various degrees of shift in degree distributions. Specifically, ODIN universally outperforms 9 state-of-the-art competitors in 2 link prediction (LP) tasks on 4 real-world datasets under both identical-distribution (ID) and non-ID settings.
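To illustrate the general idea of learning multiple, factor-specific embeddings per node, below is a minimal sketch (not the authors' code) that assumes three disentangled factors, an interest factor plus in-degree and out-degree bias factors, and an additive edge score. The class name, dimensions, and scoring function are illustrative assumptions, not ODIN's actual formulation.

```python
# Illustrative sketch only: factor-disentangled embeddings for directed edges.
# Assumes an "interest" factor plus in-/out-degree bias factors per node,
# combined additively into an edge score (these choices are assumptions).
import torch
import torch.nn as nn


class FactorDisentangledEmbedding(nn.Module):
    def __init__(self, num_nodes: int, dim: int = 64):
        super().__init__()
        # Separate embedding tables, one per hypothesized factor.
        self.interest_src = nn.Embedding(num_nodes, dim)  # interest as a source node
        self.interest_dst = nn.Embedding(num_nodes, dim)  # interest as a target node
        self.out_deg_bias = nn.Embedding(num_nodes, dim)  # bias tied to out-degree
        self.in_deg_bias = nn.Embedding(num_nodes, dim)   # bias tied to in-degree

    def score(self, src: torch.Tensor, dst: torch.Tensor) -> torch.Tensor:
        # A directed edge u -> v is scored as interest match plus degree-bias terms.
        interest = (self.interest_src(src) * self.interest_dst(dst)).sum(-1)
        bias = (self.out_deg_bias(src) * self.in_deg_bias(dst)).sum(-1)
        return interest + bias


# Usage: score a few candidate directed edges.
model = FactorDisentangledEmbedding(num_nodes=1000, dim=64)
src = torch.tensor([0, 1, 2])
dst = torch.tensor([3, 4, 5])
print(model.score(src, dst))  # one logit per (src, dst) pair
```

Keeping the degree-bias terms in separate embedding tables is what would, in principle, allow the interest embeddings to remain useful when the degree distribution shifts between training and test edges.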