Conference paper, 2021

SINr: Fast Computing of Sparse Interpretable Node Representations is not a Sin!

Abstract

While graph embedding aims at learning low-dimensional representations of nodes that encompass the graph topology, word embedding focuses on learning word vectors that encode the semantic properties of the vocabulary. The former finds applications in tasks such as link prediction and node classification, while the latter is routinely used in natural language processing. Most of the time, graph and word embeddings are treated as distinct tasks. However, word co-occurrence matrices, widely used to extract word embeddings, can be seen as graphs. Furthermore, most network embedding techniques rely either on a word embedding methodology (Word2vec) or on matrix factorization, which is also widely used for word embedding. These methods are usually computationally expensive, parameter-dependent, and the dimensions of the embedding space they produce are not interpretable. To circumvent these issues, we introduce the Lower Dimension Bipartite Graphs Framework (LDBGF), which takes advantage of the fact that all graphs can be described as bipartite graphs, even in the case of textual data. This underlying bipartite structure may be explicit, as in co-author networks. With LDBGF, however, we focus on uncovering latent bipartite structures, lying for instance in social or word co-occurrence networks, and especially structures that provide more concise and interpretable representations of the graph at hand. We further propose SINr, an efficient implementation of the LDBGF approach that extracts Sparse Interpretable Node Representations, using community structure to approximate the underlying bipartite structure. In the case of graph embedding, our near-linear time method is the fastest of our benchmark, is parameter-free, and provides state-of-the-art results on the classical link prediction task. We also show that low-dimensional vectors can be derived from SINr using singular value decomposition. In the case of word embedding, our approach proves very efficient on the classical similarity evaluation.
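The community-based construction sketched in the abstract can be illustrated with a short example: detect communities on the graph, then represent each node by the share of its edge weight falling into each community, which yields a sparse vector whose dimensions are interpretable as communities; a truncated SVD can then compress these vectors into dense low-dimensional embeddings. The sketch below is only an illustration of that idea, not the authors' SINr implementation; the function name community_node_vectors and the choice of NetworkX's Louvain algorithm are assumptions made here for the example.

import networkx as nx
from networkx.algorithms import community as nx_comm
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import svds

def community_node_vectors(G):
    """Sparse (n_nodes x n_communities) matrix of each node's edge mass per community."""
    # Any community detection algorithm could be plugged in here; Louvain is one option.
    communities = nx_comm.louvain_communities(G, seed=0)
    node_to_comm = {u: c for c, members in enumerate(communities) for u in members}
    nodes = list(G.nodes())
    index = {u: i for i, u in enumerate(nodes)}
    X = lil_matrix((len(nodes), len(communities)))
    for u in nodes:
        degree = G.degree(u, weight="weight")
        if degree == 0:
            continue
        for v, data in G[u].items():
            w = data.get("weight", 1.0)
            # Share of u's edge weight that points into v's community:
            # a sparse coordinate on an interpretable "community" dimension.
            X[index[u], node_to_comm[v]] += w / degree
    return X.tocsr(), nodes

# Toy usage: sparse interpretable vectors, then an optional truncated SVD
# to obtain dense low-dimensional embeddings, as mentioned in the abstract.
G = nx.karate_club_graph()
X, nodes = community_node_vectors(G)
k = min(5, min(X.shape) - 1)
U, s, _ = svds(X, k=k)
dense_vectors = U * s

Because modularity-based community detection such as Louvain runs in near-linear time in practice, the community step dominates the cost of such a pipeline, which is consistent with the efficiency claim in the abstract.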
Main file: SINr_fast_computing_of_Sparse_Interpretable_Node_Representations_is_not_a_sin.pdf (506.78 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03197434, version 1 (14-04-2021)

Identifiers

HAL Id: hal-03197434
DOI: 10.1007/978-3-030-74251-5_26

Cite

Thibault Prouteau, Victor Connes, Nicolas Dugué, Anthony Perez, Jean-Charles Lamirel, et al.. SINr: Fast Computing of Sparse Interpretable Node Representations is not a Sin!. Advances in Intelligent Data Analysis XIX, 19th International Symposium on Intelligent Data Analysis, IDA 2021, Apr 2021, Porto, Portugal. pp.325-337, ⟨10.1007/978-3-030-74251-5_26⟩. ⟨hal-03197434⟩