On-line Access: 2024-02-29
Received: 2023-07-21
Revision Accepted: 2023-12-17
Xiali LI, Yanyin ZHANG, Licheng WU, Yandong CHEN, Junzhi YU. TibetanGoTinyNet: a lightweight U-Net-style network for zero learning of Tibetan Go[J]. Frontiers of Information Technology & Electronic Engineering, 2024. doi: 10.1631/FITEE.2300493
@article{title="TibetanGoTinyNet: a lightweight U-Net-style network for zero learning of Tibetan Go",
author="Xiali LI, Yanyin ZHANG, Licheng WU, Yandong CHEN, Junzhi YU",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="",
number="",
pages="",
year="2024",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2300493"
}
%0 Journal Article
%T TibetanGoTinyNet: a lightweight U-Net-style network for zero learning of Tibetan Go
%A Xiali LI
%A Yanyin ZHANG
%A Licheng WU
%A Yandong CHEN
%A Junzhi YU
%J Frontiers of Information Technology & Electronic Engineering
%P
%@ 2095-9184
%D 2024
%I Zhejiang University Press & Springer
%DOI 10.1631/FITEE.2300493
TY - JOUR
T1 - TibetanGoTinyNet: a lightweight U-Net-style network for zero learning of Tibetan Go
A1 - Xiali LI
A1 - Yanyin ZHANG
A1 - Licheng WU
A1 - Yandong CHEN
A1 - Junzhi YU
J0 - Frontiers of Information Technology & Electronic Engineering
SP -
EP -
SN - 2095-9184
Y1 - 2024
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2300493
ER -
Abstract: The game of Tibetan Go suffers from a scarcity of expert knowledge and published literature. We therefore studied zero learning for Tibetan Go under limited computing resources and propose TibetanGoTinyNet, a novel scale-invariant, U-Net-style, lightweight network with a two-headed output. Lightweight convolutional neural networks (CNNs) and a capsule structure are applied to the encoder and decoder of the network to reduce the computational burden and improve feature extraction. Several self-attention mechanisms are integrated into the network to capture the Tibetan Go board's spatial and global information and to select important channels. The training data were generated entirely from self-play games. TibetanGoTinyNet achieved 62%–78% winning rates against four other U-Net-style models, including Ghost-UNet and Res-UNet. It also achieved a 75% winning rate in ablation experiments on the attention mechanism with embedded positional information. When migrated from the 9×9 to the 11×11 board, the model saved about 33% of the training time while maintaining 45%–50% winning rates across different Monte Carlo tree search (MCTS) simulation counts. Code for our model is available at https://github.com/paulzyy/TibetanGoTinyNet.
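The zero-learning loop described above pairs a two-headed (policy + value) network with MCTS self-play. As a rough, hypothetical illustration of that search loop — not the paper's code — the sketch below implements an AlphaZero-style PUCT search in plain Python on a toy "race to 10" game (each player adds 1 or 2; whoever reaches 10 wins). The `network_stub` function stands in for TibetanGoTinyNet's two-headed output, returning uniform move priors (policy head) and a neutral value (value head); all names here are illustrative.

```python
import math

TARGET = 10  # toy game: first player to reach exactly 10 wins

def legal_moves(total):
    return [m for m in (1, 2) if total + m <= TARGET]

def network_stub(total, moves):
    """Stand-in for a two-headed policy/value net: uniform priors, neutral value."""
    priors = {m: 1.0 / len(moves) for m in moves}
    return priors, 0.0

class Node:
    def __init__(self, prior):
        self.prior = prior      # P(s, a) from the policy head
        self.visits = 0         # N(s, a)
        self.value_sum = 0.0    # W(s, a)
        self.children = {}      # move -> Node

    def q(self):                # mean action value Q(s, a)
        return self.value_sum / self.visits if self.visits else 0.0

def puct_select(node, c_puct=1.5):
    """Choose the child maximising Q + U (AlphaZero-style PUCT rule)."""
    total_visits = sum(ch.visits for ch in node.children.values())
    best, best_score = None, -float("inf")
    for move, child in node.children.items():
        u = c_puct * child.prior * math.sqrt(total_visits + 1) / (1 + child.visits)
        if child.q() + u > best_score:
            best, best_score = move, child.q() + u
    return best

def simulate(node, total):
    """One simulation; returns the value from the current player's perspective."""
    moves = legal_moves(total)
    if not moves:
        return -1.0  # previous player reached TARGET, so the player to move lost
    if not node.children:  # leaf: expand using the network's priors and value
        priors, value = network_stub(total, moves)
        for m in moves:
            node.children[m] = Node(priors[m])
        return value
    move = puct_select(node)
    child = node.children[move]
    value = -simulate(child, total + move)  # flip sign between players
    child.visits += 1
    child.value_sum += value
    return value

def best_move(total, simulations=400):
    """Run MCTS from `total` and return the most-visited root move."""
    root = Node(prior=1.0)
    for _ in range(simulations):
        simulate(root, total)
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

In the paper this role is played by MCTS guided by TibetanGoTinyNet's policy and value heads on the 9×9 (and later 11×11) Tibetan Go board, with self-play games providing the training data.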