
On-line Access: 2021-03-05

Received: 2020-11-22

Revision Accepted: 2021-01-10


Journal of Zhejiang University SCIENCE C

http://doi.org/10.1631/FITEE.2000657


NGAT: attention in breadth and depth exploration for semi-supervised graph representation learning


Author(s):  Jianke HU, Yin ZHANG

Affiliation(s):  College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China

Corresponding email(s):   yinzh@zju.edu.cn

Key Words:  Graph learning, Semi-supervised learning, Node classification, Attention


Jianke HU, Yin ZHANG. NGAT: attention in breadth and depth exploration for semi-supervised graph representation learning[J]. Frontiers of Information Technology & Electronic Engineering. https://doi.org/10.1631/FITEE.2000657



Abstract: 
Recently, graph neural networks (GNNs) have achieved remarkable performance in representation learning on graph-structured data. However, as the number of network layers increases, GNNs based on the neighborhood aggregation strategy deteriorate due to oversmoothing, which is the major bottleneck to applying GNNs to real-world graphs. Many efforts have been made to improve the aggregation of feature information from directly connected nodes, i.e., breadth exploration. However, these models perform best with three or fewer layers, and their performance drops rapidly at greater depths. To alleviate oversmoothing, we propose a nested graph attention network (NGAT), which can work in a semi-supervised manner. In addition to breadth exploration, a k-layer NGAT uses a layer-wise aggregation strategy guided by the attention mechanism to selectively leverage feature information from the k-order neighborhood, i.e., depth exploration. Even with a 10-layer or deeper architecture, NGAT balances preserving locality (including root node features and local structure) with aggregating information from a large neighborhood. In experiments on standard node classification tasks, NGAT outperforms other recent models and achieves state-of-the-art performance.
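The layer-wise "depth exploration" described in the abstract — letting each node attend over the embeddings produced at different GNN depths rather than keeping only the last layer's output — can be illustrated with a minimal NumPy sketch. This is an assumed simplification for illustration, not the authors' NGAT implementation; the function and query-vector names are hypothetical, and in a real model the query vector would be learned.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layerwise_attention_readout(layer_states, q):
    """Combine per-layer node embeddings with attention over depth.

    layer_states: list of (num_nodes, dim) arrays, one per GNN layer;
                  entry 0 can hold the raw root-node features so that
                  locality is preserved even at large depth.
    q: (dim,) query vector that scores each layer's embedding per node.
    Returns a (num_nodes, dim) array of depth-attended node states.
    """
    H = np.stack(layer_states, axis=1)         # (N, k+1, dim)
    scores = H @ q                             # (N, k+1): one score per layer
    alpha = softmax(scores, axis=1)            # attention weights over depth
    return (alpha[..., None] * H).sum(axis=1)  # weighted sum across layers

# Toy usage: 4 nodes, embeddings from orders 0..2, dim 5.
rng = np.random.default_rng(0)
states = [rng.standard_normal((4, 5)) for _ in range(3)]
out = layerwise_attention_readout(states, rng.standard_normal(5))
print(out.shape)  # → (4, 5)
```

Because the attention weights are computed per node, a node near the graph's periphery can emphasize shallow (local) layers while a hub node draws on deeper, wider neighborhoods — the balance between locality and large-neighborhood aggregation that the abstract attributes to NGAT.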


Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - Journal of Zhejiang University-SCIENCE