On-line Access: 2024-09-06
Received: 2024-01-27
Revision Accepted: 2024-06-27
Lijian GAO, Qing ZHU, Yaxin SHEN, Qirong MAO, Yongzhao ZHAN. Prompting class distribution optimization dynamically for semi-supervised sound event detection[J]. Frontiers of Information Technology & Electronic Engineering, 2024. https://doi.org/10.1631/FITEE.2400061
@article{gao2024prompting,
title="Prompting class distribution optimization dynamically for semi-supervised sound event detection",
author="Lijian GAO, Qing ZHU, Yaxin SHEN, Qirong MAO, Yongzhao ZHAN",
journal="Frontiers of Information Technology & Electronic Engineering",
year="2024",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2400061"
}
%0 Journal Article
%T Prompting class distribution optimization dynamically for semi-supervised sound event detection
%A Lijian GAO
%A Qing ZHU
%A Yaxin SHEN
%A Qirong MAO
%A Yongzhao ZHAN
%J Frontiers of Information Technology & Electronic Engineering
%@ 2095-9184
%D 2024
%I Zhejiang University Press & Springer
%DOI 10.1631/FITEE.2400061
TY - JOUR
T1 - Prompting class distribution optimization dynamically for semi-supervised sound event detection
A1 - Lijian GAO
A1 - Qing ZHU
A1 - Yaxin SHEN
A1 - Qirong MAO
A1 - Yongzhao ZHAN
JO - Frontiers of Information Technology & Electronic Engineering
SN - 2095-9184
Y1 - 2024
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2400061
ER -
Abstract: Semi-supervised sound event detection (SSED) tasks typically leverage a large amount of unlabeled and synthetic data during training to improve model generalization and reduce overfitting on a limited set of labeled data. However, this generalization training is often hampered by noise introduced through pseudo-labels or domain knowledge gaps. To alleviate noise interference in class distribution learning, we propose an efficient semi-supervised class distribution learning method based on dynamic prompt tuning, named prompting class distribution optimization (PADO). Specifically, when modeling real labeled data, PADO dynamically incorporates independent learnable prompt tokens to explore prior knowledge about the true distribution. This prior knowledge then serves as prompt information that dynamically interacts with the posterior noisy class distribution. In this way, PADO optimizes the class distribution while preserving model generalization, leading to a significant improvement in the efficiency of class distribution learning. Compared with state-of-the-art (SOTA) methods on the DCASE 2019, 2020, and 2021 challenge SSED datasets, PADO demonstrates significant performance improvements. Furthermore, it can be readily extended to other benchmark models.
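The abstract describes learnable prompt tokens that capture a prior class distribution from labeled data and then interact with the noisy posterior from pseudo-labels. The paper does not specify the architecture here, so the following is only a minimal hypothetical sketch of that idea in PyTorch: the class name `PromptedClassDistribution`, the choice of cross-attention between prompts and frame features, the number of prompt tokens, and the additive correction of the noisy logits are all assumptions for illustration, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class PromptedClassDistribution(nn.Module):
    """Hypothetical sketch of PADO-style dynamic prompt tuning.

    Learnable prompt tokens attend to labeled acoustic features to
    extract prior class-distribution knowledge, which then corrects
    the noisy posterior logits derived from pseudo-labels.
    """

    def __init__(self, num_classes: int, embed_dim: int, num_prompts: int = 4):
        super().__init__()
        # Independent learnable prompt tokens (assumed shared across classes)
        self.prompts = nn.Parameter(torch.randn(num_prompts, embed_dim) * 0.02)
        # Prompts (queries) attend to frame features (keys/values)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=1, batch_first=True)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, frame_feats: torch.Tensor, noisy_logits: torch.Tensor) -> torch.Tensor:
        # frame_feats: (B, T, D) acoustic embeddings of labeled frames
        # noisy_logits: (B, T, C) posterior logits from pseudo-labels
        batch = frame_feats.size(0)
        prompts = self.prompts.unsqueeze(0).expand(batch, -1, -1)  # (B, P, D)
        # Prior knowledge extracted by letting prompts query the features
        prior, _ = self.attn(prompts, frame_feats, frame_feats)    # (B, P, D)
        prior_logits = self.head(prior.mean(dim=1, keepdim=True))  # (B, 1, C)
        # Prior dynamically interacts with (here: additively corrects)
        # the noisy per-frame class distribution
        return noisy_logits + prior_logits
```

Under these assumptions, the additive interaction is the simplest choice; a gated or attention-based fusion would fit the description equally well.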