
CLC number: TP391

On-line Access: 2025-07-28

Received: 2024-04-07

Revision Accepted: 2024-09-10

Crosschecked: 2025-07-30


 ORCID:

Yiman ZHU

https://orcid.org/0000-0002-7421-0188

Lu WANG

https://orcid.org/0000-0001-8147-1581


Frontiers of Information Technology & Electronic Engineering  2025 Vol.26 No.7 P.1083-1098

http://doi.org/10.1631/FITEE.2400261


A ground-based dataset and diffusion model for on-orbit low-light image enhancement


Author(s):  Yiman ZHU, Lu WANG, Jingyi YUAN, Yu GUO

Affiliation(s):  School of Automation, Nanjing University of Science and Technology, Nanjing 210000, China

Corresponding email(s):   yiman@njust.edu.cn, wanglu21@njust.edu.cn, jingyi@njust.edu.cn, guoyu@njust.edu.cn

Key Words:  Satellite capture, Low-light image enhancement (LLIE), Data collection, Diffusion model, Fused attention


Yiman ZHU, Lu WANG, Jingyi YUAN, Yu GUO. A ground-based dataset and diffusion model for on-orbit low-light image enhancement[J]. Frontiers of Information Technology & Electronic Engineering, 2025, 26(7): 1083-1098.

@article{zhu2025ground,
title="A ground-based dataset and diffusion model for on-orbit low-light image enhancement",
author="Yiman ZHU and Lu WANG and Jingyi YUAN and Yu GUO",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="26",
number="7",
pages="1083-1098",
year="2025",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2400261"
}



Abstract: 
On-orbit service is important for maintaining the sustainability of the space environment. A space-based visible camera is an economical and lightweight sensor for situational awareness during on-orbit service. However, it is easily affected by low-illumination environments. Recently, deep learning has achieved remarkable success in the enhancement of natural images, but it is seldom applied in space due to the data bottleneck. In this study, we propose the first dataset of BeiDou navigation satellites for on-orbit low-light image enhancement (LLIE). In our automatic data collection scheme, we focus on reducing the domain gap and improving the diversity of the dataset. We collect hardware-in-the-loop images on a robotic simulation testbed that imitates space lighting conditions. To evenly sample poses of different orientations and distances without collision, we propose a collision-free workspace and pose-stratified sampling. We then develop a novel diffusion model. To enhance image contrast without over-exposure or blurred details, we design fused attention guidance to highlight the structure and the dark regions. Finally, a comparison with previous methods indicates that our method achieves better on-orbit LLIE performance.
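The pose-stratified sampling described in the abstract can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the stratum counts, distance range, elevation limits, and the `in_collision_free_workspace` keep-out check are all hypothetical placeholders standing in for the paper's collision-free workspace.

```python
import math
import random

def in_collision_free_workspace(pose, min_clearance=0.5):
    """Placeholder collision check: accept poses outside a keep-out
    sphere around the target satellite at the origin (hypothetical)."""
    return math.dist(pose, (0.0, 0.0, 0.0)) > min_clearance

def sample_poses_stratified(n_dist_bins=4, n_az_bins=8, per_bin=2,
                            d_min=1.0, d_max=5.0, seed=0):
    """Partition the camera pose space into distance x azimuth strata
    and draw an equal number of collision-free poses from each stratum,
    so that orientations and distances are covered evenly."""
    rng = random.Random(seed)
    poses = []
    for i in range(n_dist_bins):
        # Bounds of the i-th distance stratum.
        d_lo = d_min + (d_max - d_min) * i / n_dist_bins
        d_hi = d_min + (d_max - d_min) * (i + 1) / n_dist_bins
        for j in range(n_az_bins):
            # Bounds of the j-th azimuth stratum.
            az_lo = 2 * math.pi * j / n_az_bins
            az_hi = 2 * math.pi * (j + 1) / n_az_bins
            drawn = 0
            while drawn < per_bin:
                d = rng.uniform(d_lo, d_hi)
                az = rng.uniform(az_lo, az_hi)
                el = rng.uniform(-math.pi / 4, math.pi / 4)
                # Spherical-to-Cartesian camera position around the target.
                pose = (d * math.cos(el) * math.cos(az),
                        d * math.cos(el) * math.sin(az),
                        d * math.sin(el))
                # Rejection-sample until the stratum's quota is met.
                if in_collision_free_workspace(pose):
                    poses.append(pose)
                    drawn += 1
    return poses
```

With the defaults this yields 4 × 8 × 2 = 64 poses, every one outside the keep-out sphere; because the quota is filled per stratum rather than globally, no distance/orientation region is under-represented.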




