
On-line Access: 2025-08-15

Received: 2025-03-14

Revision Accepted: 2025-06-03


Frontiers of Information Technology & Electronic Engineering 

Accepted manuscript available online (unedited version)


E-CGL: an efficient continual graph learner


Author(s):  Jianhao GUO, Zixuan NI, Yun ZHU, Siliang TANG

Affiliation(s):  DCD Lab, College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China

Corresponding email(s):  guojianhao@zju.edu.cn, zixuan2i@zju.edu.cn, zhuyun_dcd@zju.edu.cn, siliang@zju.edu.cn

Key Words:  Graph neural networks (GNN); Continual learning (CL); Dynamic graphs; Continual graph learning (CGL); Graph acceleration



Jianhao GUO, Zixuan NI, Yun ZHU, Siliang TANG. E-CGL: an efficient continual graph learner[J]. Frontiers of Information Technology & Electronic Engineering, in press. https://doi.org/10.1631/FITEE.2500162

@article{FITEE.2500162,
title="E-CGL: an efficient continual graph learner",
author="Jianhao GUO and Zixuan NI and Yun ZHU and Siliang TANG",
journal="Frontiers of Information Technology & Electronic Engineering",
year="in press",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2500162"
}

%0 Journal Article
%T E-CGL: an efficient continual graph learner
%A Jianhao GUO
%A Zixuan NI
%A Yun ZHU
%A Siliang TANG
%J Frontiers of Information Technology & Electronic Engineering
%P
%@ 2095-9184
%D in press
%I Zhejiang University Press & Springer
doi="https://doi.org/10.1631/FITEE.2500162"

TY - JOUR
T1 - E-CGL: an efficient continual graph learner
A1 - Jianhao GUO
A1 - Zixuan NI
A1 - Yun ZHU
A1 - Siliang TANG
JO - Frontiers of Information Technology & Electronic Engineering
SP -
EP -
SN - 2095-9184
Y1 - in press
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2500162
ER -


Abstract: 
Continual learning (CL) has emerged as a crucial paradigm for learning from sequential data while retaining previously acquired knowledge. In continual graph learning (CGL), where graphs evolve continually with streaming data, unique challenges arise that demand methods that are both adaptive and efficient while still addressing catastrophic forgetting. The first challenge stems from the interdependencies between graph data, whereby previous graphs influence the distributions of new data. The second challenge lies in handling large graphs efficiently. To address these challenges, we propose an efficient continual graph learner (E-CGL). We tackle the interdependence issue by demonstrating the effectiveness of replay strategies and introducing a combined sampling approach that considers both node importance and diversity. To improve efficiency, E-CGL leverages a simple yet effective multi-layer perceptron (MLP) model that shares weights with a graph neural network (GNN) during training, thereby accelerating computation by circumventing the expensive message-passing process. Our method achieves state-of-the-art results on four CGL datasets under two settings while significantly lowering catastrophic forgetting to an average of -1.1%. Additionally, E-CGL accelerates training and inference by an average of 15.83× and 4.89×, respectively, across the four datasets. These results indicate that E-CGL not only effectively manages the correlations between graph data during continual training but also enhances efficiency in large-scale CGL.
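The two mechanisms summarized above can be illustrated with short sketches. The first is a minimal reading of the combined replay sampler, assuming node degree as a stand-in for importance and greedy farthest-point selection in feature space for diversity; these choices and all names below are illustrative assumptions, not the authors' implementation.

import torch

def replay_sample(feats, degrees, budget):
    # Hypothetical combined replay sampler: half the budget goes to
    # "important" nodes (approximated here by degree), the other half
    # to "diverse" nodes chosen greedily in feature space.
    assert feats.size(0) >= budget
    k_imp = budget // 2
    chosen = torch.topk(degrees, k_imp).indices.tolist()
    chosen_set = set(chosen)
    remaining = [i for i in range(feats.size(0)) if i not in chosen_set]
    # Distance from each remaining node to its nearest chosen node.
    dists = torch.cdist(feats[remaining], feats[chosen]).min(dim=1).values
    for _ in range(budget - k_imp):
        far = dists.argmax().item()  # farthest-point (most diverse) pick
        new = remaining.pop(far)
        chosen.append(new)
        dists = torch.cat([dists[:far], dists[far + 1:]])
        if remaining:
            d_new = torch.cdist(feats[remaining], feats[new].unsqueeze(0)).squeeze(1)
            dists = torch.minimum(dists, d_new)
    return torch.tensor(chosen)

The second sketch, again a hedged illustration rather than the released code, shows how a two-layer MLP can share its weight matrices with a GCN-style model, so that training updates flow through the cheap MLP path while graph-aware, message-passing inference remains available from the same parameters.

import torch.nn as nn
import torch.nn.functional as F

class SharedWeightNet(nn.Module):
    # The same two linear layers back both forward paths, so any
    # weight update made through the fast MLP path is immediately
    # reflected in the message-passing path.
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward_mlp(self, x):
        # Fast path: plain feature transform, no neighbor aggregation.
        return self.lin2(F.relu(self.lin1(x)))

    def forward_gnn(self, x, adj_norm):
        # Full path: GCN-style propagation with a normalized
        # adjacency matrix applied at each layer.
        h = F.relu(adj_norm @ self.lin1(x))
        return adj_norm @ self.lin2(h)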



