k-Hopped Link Prediction With Graph Embedding

Abstract

Graph embedding approaches aim to provide a low-dimensional latent representation of a graph with minimal reconstruction error. These approaches also attempt to capture local and global topological neighborhood information and the data distribution in the latent representation. The primary purpose of a graph's latent representation is to serve as input to straightforward machine learning models for graph prediction tasks such as link prediction, clustering, and visualization. Among these tasks, link prediction is critical, yet researchers mainly analyze embedding performance on the information of adjacent nodes. Although many embedding techniques claim to capture the hopped neighborhood in the embedding, little attention has been paid to how graph embeddings perform on hopped link prediction, which demonstrates how well an embedding captures a global view of the neighborhood. Our proposed framework constructs $k$-hopped graph topological and feature information to analyze six widely recognized graph embeddings, ARGE, ARVGE, Node2vec, Attri2Vec, GraphSage, and GCN, with $k$-hopped link prediction. We experiment with three graph datasets and show that $k$-hopped link prediction performance increases significantly for $1$-hopped graph information and steadily degrades after $1$ hop, demonstrating the importance of analyzing embedding performance with $k$-hopped link prediction.
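
As a rough illustration of the evaluation setting, the sketch below builds node pairs at an exact hop distance $k$ and scores a simple logistic-regression link predictor on them. This is a minimal sketch, not the paper's exact pipeline: the NetworkX karate-club graph, the random placeholder embeddings, the Hadamard edge features, and the negative-sampling heuristic are all illustrative assumptions. Any of the embeddings studied in the paper (Node2vec, GraphSage, GCN, ...) could be substituted for `emb`.

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score


def k_hop_pairs(G, k):
    """Node pairs whose shortest-path distance is exactly k hops."""
    pairs = []
    for u, dists in nx.all_pairs_shortest_path_length(G, cutoff=k):
        pairs.extend((u, v) for v, d in dists.items() if d == k and u < v)
    return pairs


def k_hop_link_auc(G, emb, k, seed=0):
    """AUC of a logistic-regression predictor on exactly-k-hop positive pairs."""
    rng = np.random.default_rng(seed)
    pos = k_hop_pairs(G, k)
    nodes = list(G.nodes())
    neg = []  # simple heuristic: random non-adjacent pairs as negatives
    while len(neg) < len(pos):
        u, v = rng.choice(nodes, size=2, replace=False)
        if not G.has_edge(u, v):
            neg.append((u, v))
    X = np.array([emb[u] * emb[v] for u, v in pos + neg])  # Hadamard edge features
    y = np.array([1] * len(pos) + [0] * len(neg))
    idx = rng.permutation(len(y))
    split = int(0.8 * len(y))
    clf = LogisticRegression(max_iter=1000).fit(X[idx[:split]], y[idx[:split]])
    return roc_auc_score(y[idx[split:]], clf.predict_proba(X[idx[split:]])[:, 1])


# Toy run with random placeholder embeddings; swap in real embeddings
# keyed by node id to reproduce the k-hopped analysis.
G = nx.karate_club_graph()
emb = {n: np.random.default_rng(n).normal(size=64) for n in G.nodes()}
for k in (1, 2, 3):
    print(f"{k}-hop link prediction AUC: {k_hop_link_auc(G, emb, k):.3f}")
```

With a trained embedding in place of the random vectors, sweeping $k$ in this way exposes the trend the abstract reports: strong performance at $1$ hop that degrades as the hop distance grows.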

Publication
In The 2023 World Congress in Computer Science, Computer Engineering, & Applied Computing
Tonni Das Jui
PhD student at Baylor University