CLC number: TP391.4
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2023-02-20
Chuyun SHEN, Wenhao LI, Qisen XU, Bin HU, Bo JIN, Haibin CAI, Fengping ZHU, Yuxin LI, Xiangfeng WANG. Interactive medical image segmentation with self-adaptive confidence calibration[J]. Frontiers of Information Technology & Electronic Engineering, 2023, 24(9): 1332-1348.
Abstract: Interactive medical image segmentation based on human-in-the-loop machine learning is a novel paradigm that draws on human expert knowledge to assist medical image segmentation. However, existing methods often fall into what we call interactive misunderstanding, which stems from the dilemma of trading off short-term and long-term interaction information. To better use interaction information at different timescales, we propose an interactive segmentation framework, called interactive medical image segmentation with self-adaptive Confidence CAlibration (MECCA), which combines action-based confidence learning and multi-agent reinforcement learning. A novel confidence network is learned by predicting how well an action aligns with the short-term interaction information. A confidence-based reward-shaping mechanism is then proposed to explicitly incorporate this confidence into the policy-gradient calculation, thus directly correcting the model's interactive misunderstanding. MECCA also enables user-friendly interaction by reducing the interaction intensity and difficulty via label generation and interaction guidance, respectively. Numerical experiments on different segmentation tasks show that MECCA significantly improves the utilization efficiency of both short- and long-term interaction information while requiring remarkably fewer labeled samples. The demo video is available at https://bit.ly/mecca-demo-video.
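To make the confidence-based reward-shaping idea concrete, below is a minimal illustrative sketch in PyTorch, not the authors' implementation. It assumes a per-voxel policy over a 3D volume, a small 3D confidence network, and a hypothetical weighting hyperparameter lambda_conf; the architecture and reward definition are placeholders chosen for exposition.

```python
# Minimal sketch (not the MECCA implementation): a confidence network scores how
# well the current action map agrees with short-term user hints, and that score
# shapes the per-voxel reward used in a REINFORCE-style policy-gradient loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConfidenceNet(nn.Module):
    """Predicts a per-voxel confidence in [0, 1] for the current action map."""
    def __init__(self, in_channels=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, action_map):
        # Concatenate the image and the candidate action (segmentation update).
        return self.net(torch.cat([image, action_map], dim=1))

def shaped_policy_loss(log_probs, reward, confidence, lambda_conf=0.5):
    """REINFORCE-style loss with a confidence-shaped reward.

    All tensors have shape (B, 1, D, H, W). Voxels whose actions the confidence
    network distrusts receive a lower shaped reward, pushing the policy to
    revisit regions that disagree with the short-term interaction information.
    lambda_conf is a hypothetical weighting hyperparameter.
    """
    shaped_reward = reward + lambda_conf * (confidence - 0.5)
    return -(shaped_reward.detach() * log_probs).mean()

if __name__ == "__main__":
    B, D, H, W = 1, 8, 16, 16
    image = torch.randn(B, 1, D, H, W)
    action_map = torch.rand(B, 1, D, H, W)

    confidence = ConfidenceNet()(image, action_map)

    # Stand-in for per-voxel action log-probabilities from the policy network.
    logits = torch.randn(B, 1, D, H, W, requires_grad=True)
    log_probs = F.logsigmoid(logits)

    reward = torch.randn(B, 1, D, H, W)  # stand-in segmentation reward
    loss = shaped_policy_loss(log_probs, reward, confidence)
    loss.backward()
    print(float(loss))
```

In a full system, the confidence network itself would presumably be trained with a separate supervised objective that compares each action against labels derived from the short-term user hints, which is the role the abstract assigns to action-based confidence learning.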