
CLC number: TP391.4

On-line Access: 2024-08-27

Received: 2023-10-17

Revision Accepted: 2024-05-08

Crosschecked: 2023-02-20


 ORCID:

Chuyun SHEN

https://orcid.org/0009-0001-3622-1193

Wenhao LI

https://orcid.org/0000-0003-2985-1098

Xiangfeng WANG

https://orcid.org/0000-0003-1802-4425


Frontiers of Information Technology & Electronic Engineering  2023 Vol.24 No.9 P.1332-1348

http://doi.org/10.1631/FITEE.2200299


Interactive medical image segmentation with self-adaptive confidence calibration


Author(s):  Chuyun SHEN, Wenhao LI, Qisen XU, Bin HU, Bo JIN, Haibin CAI, Fengping ZHU, Yuxin LI, Xiangfeng WANG

Affiliation(s):  School of Computer Science and Technology, East China Normal University, Shanghai 200062, China; Huashan Hospital, Fudan University, Shanghai 200040, China; School of Software Engineering, East China Normal University, Shanghai 200062, China

Corresponding email(s):   cyshen@stu.ecnu.edu.cn, 52194501026@stu.ecnu.edu.cn, xfwang@cs.ecnu.edu.cn

Key Words:  Medical image segmentation, Interactive segmentation, Multi-agent reinforcement learning, Confidence learning, Semi-supervised learning


Chuyun SHEN, Wenhao LI, Qisen XU, Bin HU, Bo JIN, Haibin CAI, Fengping ZHU, Yuxin LI, Xiangfeng WANG. Interactive medical image segmentation with self-adaptive confidence calibration[J]. Frontiers of Information Technology & Electronic Engineering, 2023, 24(9): 1332-1348.

@article{shen2023interactive,
title="Interactive medical image segmentation with self-adaptive confidence calibration",
author="Chuyun SHEN, Wenhao LI, Qisen XU, Bin HU, Bo JIN, Haibin CAI, Fengping ZHU, Yuxin LI, Xiangfeng WANG",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="24",
number="9",
pages="1332-1348",
year="2023",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2200299"
}

%0 Journal Article
%T Interactive medical image segmentation with self-adaptive confidence calibration
%A Chuyun SHEN
%A Wenhao LI
%A Qisen XU
%A Bin HU
%A Bo JIN
%A Haibin CAI
%A Fengping ZHU
%A Yuxin LI
%A Xiangfeng WANG
%J Frontiers of Information Technology & Electronic Engineering
%V 24
%N 9
%P 1332-1348
%@ 2095-9184
%D 2023
%I Zhejiang University Press & Springer
%DOI 10.1631/FITEE.2200299

TY - JOUR
T1 - Interactive medical image segmentation with self-adaptive confidence calibration
A1 - Chuyun SHEN
A1 - Wenhao LI
A1 - Qisen XU
A1 - Bin HU
A1 - Bo JIN
A1 - Haibin CAI
A1 - Fengping ZHU
A1 - Yuxin LI
A1 - Xiangfeng WANG
JO - Frontiers of Information Technology & Electronic Engineering
VL - 24
IS - 9
SP - 1332
EP - 1348
SN - 2095-9184
Y1 - 2023
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2200299
ER -


Abstract: 
Interactive medical image segmentation based on human-in-the-loop machine learning is a novel paradigm that draws on human expert knowledge to assist medical image segmentation. However, existing methods often fall into what we call interactive misunderstanding, the essence of which is the dilemma in trading off short- and long-term interaction information. To better use the interaction information at various timescales, we propose an interactive segmentation framework, called interactive medical image segmentation with self-adaptive Confidence CAlibration (MECCA), which combines action-based confidence learning and multi-agent reinforcement learning. A novel confidence network is learned by predicting the alignment level of the action with short-term interaction information. A confidence-based reward-shaping mechanism is then proposed to explicitly incorporate confidence in the policy gradient calculation, thus directly correcting the model’s interactive misunderstanding. MECCA also enables user-friendly interactions by reducing the interaction intensity and difficulty via label generation and interaction guidance, respectively. Numerical experiments on different segmentation tasks show that MECCA can significantly improve short- and long-term interaction information utilization efficiency with remarkably fewer labeled samples. The demo video is available at https://bit.ly/mecca-demo-video.
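The confidence-based reward-shaping mechanism described above can be illustrated with a minimal sketch. This is not the paper's exact formulation: the function name, the centring of confidence at 0.5, and the weight `alpha` are assumptions for illustration only. The idea it captures is that actions the confidence network scores as likely misaligned with the user's short-term hints receive a negative shaping term, so the policy gradient pushes probability mass away from them.

```python
import numpy as np

def shaped_signal(env_reward, confidence, alpha=1.0):
    """Confidence-calibrated shaping of a per-voxel learning signal.

    Illustrative sketch only. `env_reward` is the environment reward for
    each refinement action; `confidence` is the confidence network's score
    in [0, 1]. Scores below 0.5 subtract from the reward, directly
    penalizing actions flagged as interactive misunderstanding.
    """
    env_reward = np.asarray(env_reward, dtype=float)
    confidence = np.asarray(confidence, dtype=float)
    return env_reward + alpha * (confidence - 0.5)

# Toy usage: two voxel-level actions with equal environment reward;
# the low-confidence one ends up with the smaller learning signal.
signal = shaped_signal([0.2, 0.2], [0.9, 0.1], alpha=1.0)
# signal[0] > signal[1]
```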

Interactive medical image segmentation framework with self-adaptive confidence calibration

Chuyun SHEN1, Wenhao LI1, Qisen XU1, Bin HU2, Bo JIN1, Haibin CAI3, Fengping ZHU2, Yuxin LI2, Xiangfeng WANG1
1School of Computer Science and Technology, East China Normal University, Shanghai 200062, China
2Huashan Hospital, Fudan University, Shanghai 200040, China
3School of Software Engineering, East China Normal University, Shanghai 200062, China
Abstract: Interactive medical image segmentation based on human-in-the-loop machine learning is a novel paradigm that introduces expert interaction information to guide the algorithm in completing image segmentation tasks. However, existing medical image segmentation models are prone to "interactive misunderstanding," i.e., they fail to properly weigh the importance of short-term and long-term interaction information. To make better use of interaction information at different timescales, this paper proposes MECCA, an interactive medical image segmentation framework with self-adaptive confidence calibration, which combines action-based confidence learning with multi-agent reinforcement learning and learns a novel confidence network by predicting the alignment level between segmentation actions and short-term interaction information. A confidence-based reward-shaping mechanism is then proposed to introduce confidence into the policy gradient calculation, thereby directly correcting the model's interactive misunderstanding. MECCA also achieves user-friendly interaction by reducing interaction intensity and difficulty through label generation and interaction guidance, respectively. Experimental results show that MECCA can significantly improve the utilization efficiency of short- and long-term interaction information across different segmentation tasks while requiring remarkably fewer labeled samples. The demo video is available at https://bit.ly/mecca-demo-video.

Key words: Medical image segmentation; Interactive segmentation; Multi-agent reinforcement learning; Confidence learning; Semi-supervised learning
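The abstract states that the confidence network is learned by predicting how well each segmentation action aligns with short-term interaction information. A minimal sketch of such a supervision target is below; the function name and the NaN convention for unhinted voxels are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def confidence_targets(actions, hints):
    """Supervision targets for a confidence network (illustrative sketch).

    `actions` are binary segmentation decisions; `hints` are short-term
    user corrections, with NaN marking voxels the user did not touch.
    The target is 1 where action and hint agree, 0 where they conflict,
    and NaN (no supervision) where no hint was given.
    """
    actions = np.asarray(actions, dtype=float)
    hints = np.asarray(hints, dtype=float)
    agree = (actions == hints).astype(float)
    return np.where(np.isnan(hints), np.nan, agree)

# Toy usage: three voxels — agreement, conflict, and no hint.
targets = confidence_targets([1, 0, 1], [1, 1, np.nan])
# targets -> [1.0, 0.0, nan]
```

Training the confidence network against such partial targets lets the short-term hints supervise only the voxels the expert actually touched, which is consistent with the semi-supervised setting the paper's keywords indicate.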




Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - 2024 Journal of Zhejiang University-SCIENCE