Abstract: Continual learning (CL) has emerged as a crucial paradigm for learning from sequential data while retaining previous knowledge. In the realm of continual graph learning (CGL), where graphs evolve continually based on streaming data, unique challenges arise that require adaptive and efficient methods that also address catastrophic forgetting. The first challenge stems from the interdependencies between different graph data, in which previous graphs influence new data distributions. The second challenge is handling large graphs efficiently. To address these challenges, we propose an efficient continual graph learner (E-CGL) in this paper. We address the interdependence issue by demonstrating the effectiveness of replay strategies and introducing a combined sampling approach that accounts for both node importance and diversity. To improve efficiency, E-CGL leverages a simple yet effective multi-layer perceptron (MLP) model that shares weights with a graph neural network (GNN) during training, thereby accelerating computation by circumventing the expensive message-passing process. Our method achieves state-of-the-art results on four CGL datasets under two settings, while significantly lowering catastrophic forgetting to an average of -1.1%. Additionally, E-CGL accelerates training and inference times by an average of 15.83× and 4.89×, respectively, across the four datasets. These results indicate that E-CGL not only effectively manages correlations between different graph data during continual training but also enhances efficiency in large-scale CGL.
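To make the weight-sharing idea concrete, the following is a minimal sketch (not the authors' released implementation) of an MLP branch that reuses the linear transformations of a two-layer GCN, so the forward pass during continual training can skip neighbor aggregation entirely; class and method names here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedWeightGCN(nn.Module):
    """Two-layer GCN whose linear weights are also used by an MLP branch."""

    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)   # weight matrices shared by both branches
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def gnn_forward(self, x, adj_norm):
        # Standard GCN propagation: H = A_norm @ X @ W (message passing).
        h = F.relu(adj_norm @ self.lin1(x))
        return adj_norm @ self.lin2(h)

    def mlp_forward(self, x):
        # MLP branch: same weights, no aggregation over neighbors,
        # so its cost is independent of the number of edges.
        h = F.relu(self.lin1(x))
        return self.lin2(h)


if __name__ == "__main__":
    n, in_dim, hid_dim, out_dim = 100, 16, 32, 7
    x = torch.randn(n, in_dim)
    adj_norm = torch.eye(n)  # dummy normalized adjacency, for illustration only
    model = SharedWeightGCN(in_dim, hid_dim, out_dim)
    logits_gnn = model.gnn_forward(x, adj_norm)  # structure-aware path
    logits_mlp = model.mlp_forward(x)            # fast path that avoids message passing
    print(logits_gnn.shape, logits_mlp.shape)
```

Because both branches read the same parameters, gradients from the cheap MLP path update the weights that the GNN path later uses when graph structure is required.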