On-line Access: 2023-10-27
Received: 2022-11-23
Revision Accepted: 2023-10-27
Crosschecked: 2023-04-20
Shengyuan LIU, Ke CHEN, Tianlei HU, Yunqing MAO. Uncertainty-aware complementary label queries for active learning[J]. Frontiers of Information Technology & Electronic Engineering, in press. https://doi.org/10.1631/FITEE.2200589
Uncertainty-aware complementary label queries for active learning

1 Key Laboratory of Big Data Intelligent Computing of Zhejiang Province, Zhejiang University, Hangzhou 310027, China
2 State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou 310027, China
3 City Cloud Technology (China) Co., Ltd., Hangzhou 310000, China

Abstract: Many active learning methods assume that a learner can simply ask for full annotations of training data from annotators. These methods mainly try to reduce annotation cost by minimizing the number of annotations. However, for many real-world classification tasks, accurately annotating an instance remains expensive. To reduce the cost of a single annotation action, this paper addresses a new active learning paradigm called active learning with complementary labels (ALCL). An ALCL learner asks only yes/no questions about a specific class of an example. After receiving answers from annotators, the ALCL learner obtains a few supervised instances and many more training instances with complementary labels, which indicate only that the corresponding label is irrelevant to the instance. ALCL poses two challenging problems: how to select instances to query, and how to extract information from the complementary and ordinary labels. For the first problem, we propose an uncertainty-based sampling strategy under the active learning paradigm. For the second problem, we improve an existing ALCL method and adapt it to our sampling strategy. Experimental results on various datasets verify the effectiveness of our approach.
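The abstract's query loop can be sketched concretely. The paper's exact uncertainty criterion is not reproduced here; the sketch below assumes Shannon entropy of the model's predicted class probabilities as the uncertainty score, and simulates the yes/no question an ALCL annotator answers: "yes" yields an ordinary (fully supervised) label, while "no" yields a complementary label saying only which class the instance is *not*. The function names and the toy probabilities are illustrative, not from the paper.

```python
import numpy as np

def entropy(probs):
    """Shannon entropy of each row of predicted class probabilities."""
    eps = 1e-12  # guard against log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_queries(probs, batch_size):
    """Pick the most uncertain unlabeled instances (highest entropy)."""
    scores = entropy(probs)
    return np.argsort(scores)[-batch_size:]

def complementary_query(true_label, asked_class):
    """Simulate the yes/no question 'does this instance belong to asked_class?'.
    A 'yes' answer gives an ordinary label; a 'no' answer gives a
    complementary label (asked_class is NOT the true label)."""
    if true_label == asked_class:
        return ("ordinary", asked_class)
    return ("complementary", asked_class)

# Toy usage: 4 unlabeled instances, 3 classes.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.34, 0.33, 0.33],
                  [0.60, 0.30, 0.10],
                  [0.50, 0.45, 0.05]])
for i in select_queries(probs, batch_size=2):
    asked = int(np.argmax(probs[i]))  # ask about the model's top guess
    print(i, complementary_query(true_label=1, asked_class=asked))
```

Asking about the model's most likely class maximizes the chance of a "yes" (an ordinary label), while a "no" still rules out the strongest competing class, which is the cheap supervision signal ALCL exploits.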