CLC number: TP391

On-line Access: 2024-08-27

Received: 2023-10-17

Revision Accepted: 2024-05-08

Crosschecked: 2023-10-22

Citations:  BibTeX | RefMan | EndNote | GB/T7714

 ORCID:

Yuxin HUANG

https://orcid.org/0000-0003-1277-6212

Zhengtao YU

https://orcid.org/0000-0002-4012-461X

Frontiers of Information Technology & Electronic Engineering  2024 Vol.25 No.1 P.121-134

http://doi.org/10.1631/FITEE.2300296


Enhancing low-resource cross-lingual summarization from noisy data with fine-grained reinforcement learning


Author(s):  Yuxin HUANG, Huailing GU, Zhengtao YU, Yumeng GAO, Tong PAN, Jialong XU

Affiliation(s):  Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650504, China; Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming 650504, China

Corresponding email(s):   huangyuxin2004@163.com, ztyu@hotmail.com

Key Words:  Cross-lingual summarization, Low-resource language, Noisy data, Fine-grained reinforcement learning, Word correlation, Word missing degree


Yuxin HUANG, Huailing GU, Zhengtao YU, Yumeng GAO, Tong PAN, Jialong XU. Enhancing low-resource cross-lingual summarization from noisy data with fine-grained reinforcement learning[J]. Frontiers of Information Technology & Electronic Engineering, 2024, 25(1): 121-134.

@article{title="Enhancing low-resource cross-lingual summarization from noisy data with fine-grained reinforcement learning",
author="Yuxin HUANG, Huailing GU, Zhengtao YU, Yumeng GAO, Tong PAN, Jialong XU",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="25",
number="1",
pages="121-134",
year="2024",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2300296"
}

%0 Journal Article
%T Enhancing low-resource cross-lingual summarization from noisy data with fine-grained reinforcement learning
%A Yuxin HUANG
%A Huailing GU
%A Zhengtao YU
%A Yumeng GAO
%A Tong PAN
%A Jialong XU
%J Frontiers of Information Technology & Electronic Engineering
%V 25
%N 1
%P 121-134
%@ 2095-9184
%D 2024
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2300296

TY - JOUR
T1 - Enhancing low-resource cross-lingual summarization from noisy data with fine-grained reinforcement learning
A1 - Yuxin HUANG
A1 - Huailing GU
A1 - Zhengtao YU
A1 - Yumeng GAO
A1 - Tong PAN
A1 - Jialong XU
JO - Frontiers of Information Technology & Electronic Engineering
VL - 25
IS - 1
SP - 121
EP - 134
SN - 2095-9184
Y1 - 2024
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2300296
ER -


Abstract: 
Cross-lingual summarization (CLS) is the task of generating a summary in a target language from a document in a source language. Recently, end-to-end CLS models have achieved impressive results using large-scale, high-quality datasets typically constructed by translating monolingual summary corpora into CLS corpora. However, due to the limited performance of low-resource language translation models, translation noise can seriously degrade the performance of these models. In this paper, we propose a fine-grained reinforcement learning approach to address low-resource CLS based on noisy data. We introduce the source language summary as a gold signal to alleviate the impact of the translated noisy target summary. Specifically, we design a reinforcement reward by calculating the word correlation and word missing degree between the source language summary and the generated target language summary, and combine it with cross-entropy loss to optimize the CLS model. To validate the performance of our proposed model, we construct Chinese-Vietnamese and Vietnamese-Chinese CLS datasets. Experimental results show that our proposed model outperforms the baselines in terms of both the ROUGE score and BERTScore.
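The Python sketch below illustrates how such a word-level reward and mixed training objective could be wired up. It is a minimal sketch under stated assumptions: the bilingual lexicon, the exact correlation and missing-degree formulas, the self-critical baseline, and the weights alpha and gamma are placeholders for illustration, not the definitions used in the paper.

# Illustrative sketch only: lexicon, formulas, and weights are assumptions
# for demonstration, not the paper's exact definitions.

def word_correlation(src_tokens, gen_tokens, lexicon):
    """Fraction of generated target-language words that align (via a
    bilingual lexicon / word-alignment table) to some word in the
    source-language gold summary. Higher is better."""
    if not gen_tokens:
        return 0.0
    src_set = set(src_tokens)
    hits = sum(1 for g in gen_tokens if lexicon.get(g, set()) & src_set)
    return hits / len(gen_tokens)

def word_missing_degree(src_tokens, gen_tokens, lexicon):
    """Fraction of distinct source-language gold-summary words with no
    aligned counterpart in the generated summary. Lower is better."""
    src_set = set(src_tokens)
    if not src_set:
        return 0.0
    covered = set()
    for g in gen_tokens:
        covered |= lexicon.get(g, set())
    return len(src_set - covered) / len(src_set)

def fine_grained_reward(src_tokens, gen_tokens, lexicon, alpha=0.5):
    """Combine the two word-level signals into one scalar reward
    (the weighting alpha is a placeholder, not taken from the paper)."""
    corr = word_correlation(src_tokens, gen_tokens, lexicon)
    miss = word_missing_degree(src_tokens, gen_tokens, lexicon)
    return alpha * corr - (1.0 - alpha) * miss

def mixed_loss(ce_loss, sampled_reward, baseline_reward, sampled_log_prob,
               gamma=0.9):
    """Self-critical-style policy-gradient term mixed with cross-entropy,
    L = gamma * L_rl + (1 - gamma) * L_ce; a common formulation shown
    here only as an example."""
    rl_loss = -(sampled_reward - baseline_reward) * sampled_log_prob
    return gamma * rl_loss + (1.0 - gamma) * ce_loss

if __name__ == "__main__":
    # Toy Chinese->Vietnamese example with a hand-made lexicon mapping each
    # target-language (Vietnamese) word to the source-language (Chinese)
    # words it may translate.
    lexicon = {"mưa": {"雨"}, "lớn": {"大"}, "thành_phố": {"城市"}}
    src = ["城市", "大", "雨"]        # source-language gold summary tokens
    gen = ["mưa", "lớn", "hôm_nay"]   # generated target-language summary tokens

    print("word correlation:", round(word_correlation(src, gen, lexicon), 3))
    print("word missing degree:", round(word_missing_degree(src, gen, lexicon), 3))
    print("reward:", round(fine_grained_reward(src, gen, lexicon), 3))

In this toy run, two of the three generated Vietnamese words align to the Chinese gold summary (correlation about 0.667) and one gold word is left uncovered (missing degree about 0.333), giving a reward of about 0.167.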

Enhancing low-resource cross-lingual summarization from noisy data with fine-grained reinforcement learning

Yuxin HUANG1,2, Huailing GU1,2, Zhengtao YU1,2, Yumeng GAO1,2, Tong PAN1,2, Jialong XU1,2
1Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650504, China
2Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming 650504, China

Abstract: Cross-lingual summarization is the task of generating a target-language summary from a source-language document. Recently, end-to-end cross-lingual summarization models have achieved impressive results using large-scale, high-quality datasets, which are typically constructed by translating monolingual summarization corpora into cross-lingual summarization corpora. However, because translation models for low-resource languages have limited performance, translation noise can severely degrade model performance. We propose a fine-grained reinforcement learning approach to address low-resource cross-lingual summarization on noisy data. The source-language summary is introduced as a gold signal to alleviate the impact of the noisy translated target summary. Specifically, a reinforcement reward is designed by computing the word correlation and word missing degree between the source-language summary and the generated target-language summary, and it is combined with the cross-entropy loss to optimize the cross-lingual summarization model. To validate the performance of the proposed model, Chinese-Vietnamese and Vietnamese-Chinese cross-lingual summarization datasets are constructed. Experimental results show that the proposed model outperforms the baselines in terms of both the ROUGE score and BERTScore.

Key Words: Cross-lingual summarization; Low-resource language; Noisy data; Fine-grained reinforcement learning; Word correlation; Word missing degree

https://doi.org/10.1631/FITEE.2300296


