
On-line Access: 2023-10-27

Received: 2022-11-23

Revision Accepted: 2023-10-27

Crosschecked: 2023-04-20


 ORCID:

Shengyuan LIU

https://orcid.org/0000-0002-2880-695X

Ke CHEN

https://orcid.org/0000-0002-3062-0900


Frontiers of Information Technology & Electronic Engineering 

Accepted manuscript available online (unedited version)


Uncertainty-aware complementary label queries for active learning


Author(s):  Shengyuan LIU, Ke CHEN, Tianlei HU, Yunqing MAO

Affiliation(s):  Key Lab of Intelligent Computing Based Big Data of Zhejiang Province, Zhejiang University, Hangzhou 310027, China; State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou 310027, China; City Cloud Technology (China) Co., Ltd., Hangzhou 310000, China

Corresponding email(s):  liushengyuan@zju.edu.cn, chenk@cs.zju.edu.cn, htl@zju.edu.cn, myq@citycloud.com.cn

Key Words:  Active learning; Image classification; Weakly supervised learning



Shengyuan LIU, Ke CHEN, Tianlei HU, Yunqing MAO. Uncertainty-aware complementary label queries for active learning[J]. Frontiers of Information Technology & Electronic Engineering, in press. https://doi.org/10.1631/FITEE.2200589

@article{title="Uncertainty-aware complementary label queries for active learning",
author="Shengyuan LIU, Ke CHEN, Tianlei HU, Yunqing MAO",
journal="Frontiers of Information Technology & Electronic Engineering",
year="in press",
publisher="Zhejiang University Press & Springer",
doi="https://doi.org/10.1631/FITEE.2200589"
}

%0 Journal Article
%T Uncertainty-aware complementary label queries for active learning
%A Shengyuan LIU
%A Ke CHEN
%A Tianlei HU
%A Yunqing MAO
%J Frontiers of Information Technology & Electronic Engineering
%P 1497-1503
%@ 2095-9184
%D in press
%I Zhejiang University Press & Springer
%R https://doi.org/10.1631/FITEE.2200589

TY - JOUR
T1 - Uncertainty-aware complementary label queries for active learning
A1 - Shengyuan LIU
A1 - Ke CHEN
A1 - Tianlei HU
A1 - Yunqing MAO
JO - Frontiers of Information Technology & Electronic Engineering
SP - 1497
EP - 1503
SN - 2095-9184
Y1 - in press
PB - Zhejiang University Press & Springer
DO - https://doi.org/10.1631/FITEE.2200589
ER -


Abstract: 
Many active learning methods assume that a learner can simply ask annotators for the full annotations of some training data. These methods mainly try to cut annotation costs by minimizing the number of annotation actions. Unfortunately, in many real-world classification tasks, exact annotation of instances remains expensive. To reduce the cost of a single annotation action, we tackle a novel active learning setting, named active learning with complementary labels (ALCL). ALCL learners ask only yes/no questions about specific classes. After receiving answers from annotators, ALCL learners obtain a few fully supervised instances and many more training instances with complementary labels, each of which specifies only one class to which the instance does not belong. ALCL poses two challenging issues: one is how to select instances to be queried, and the other is how to learn from these complementary labels together with ordinary, accurate labels. For the first issue, we propose an uncertainty-based sampling strategy under this novel setup. For the second issue, we upgrade a previous ALCL method to fit our sampling strategy. Experimental results on various datasets demonstrate the superiority of our approaches.
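The query loop described in the abstract can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the function names `uncertainty_ranking`, `complementary_query`, and `complementary_loss` are hypothetical, predictive entropy is assumed as the uncertainty measure, and the yes/no question is assumed to target the model's most probable class.

```python
import numpy as np

def uncertainty_ranking(probs):
    """Rank unlabeled instances by predictive entropy, most uncertain first."""
    probs = np.asarray(probs)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(-entropy)

def complementary_query(probs, idx, true_label):
    """Ask 'does instance idx belong to class c?' for the model's top class c.
    'Yes' yields an ordinary label; 'no' yields c as a complementary label."""
    c = int(np.argmax(probs[idx]))
    if c == true_label:
        return ("ordinary", c)
    return ("complementary", c)

def complementary_loss(probs, comp_labels):
    """Naive complementary-label loss: push down the predicted probability
    of the one class each instance is known NOT to belong to."""
    probs = np.asarray(probs)
    p_bar = probs[np.arange(len(comp_labels)), comp_labels]
    return float(np.mean(-np.log(1.0 - p_bar + 1e-12)))
```

A query on an uncertain instance thus costs a single yes/no answer, and even a "no" still contributes weak supervision through the complementary loss.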

Uncertainty-aware complementary label queries for active learning

Shengyuan LIU 1, Ke CHEN 2, Tianlei HU 1, Yunqing MAO 3
1 Key Lab of Intelligent Computing Based Big Data of Zhejiang Province, Zhejiang University, Hangzhou 310027, China
2 State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou 310027, China
3 City Cloud Technology (China) Co., Ltd., Hangzhou 310000, China

Abstract: Many active learning methods assume that learners can conveniently ask annotators for the complete annotations of training data. These methods mainly try to reduce annotation costs by minimizing the number of annotations. However, for many real-world classification tasks, precisely annotating instances is still very expensive. To reduce the cost of a single annotation action, this paper tackles a new active learning paradigm, called active learning with complementary labels (ALCL). An ALCL learner asks only yes/no questions about specific classes of a sample. After receiving the annotators' answers, the ALCL learner obtains a few supervised instances and many more training instances with complementary labels, each of which indicates only that the corresponding label does not apply to the instance. ALCL poses two challenging issues: how to select instances to be queried, and how to extract information from these complementary labels and ordinary labels. For the first issue, an uncertainty-based sampling strategy is proposed under this active learning paradigm. For the second issue, an existing ALCL method is improved and adapted to our sampling strategy. Experimental results on various datasets verify the effectiveness of the proposed methods.

Key words: Active learning; Image classification; Weakly supervised learning




Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - 2024 Journal of Zhejiang University-SCIENCE