CLC number: TP391

On-line Access: 2015-03-04

Received: 2014-06-18

Revision Accepted: 2014-10-30

Crosschecked: 2015-01-28

Cited: 0

Clicked: 6454


 ORCID:

Yang Chen

http://orcid.org/0000-0003-0927-000X


Frontiers of Information Technology & Electronic Engineering  2015 Vol.16 No.3 P.227-237

http://doi.org/10.1631/FITEE.1400217


Gradient-based compressive image fusion


Author(s):  Yang Chen, Zheng Qin

Affiliation(s):  Department of Computer Science & Technology, Tsinghua University, Beijing 100084, China

Corresponding email(s):   yang-chen07@mails.tsinghua.edu.cn, qingzh@mail.tsinghua.edu.cn

Key Words:  Compressive sensing (CS), Image fusion, Gradient-based image fusion, CS-based image fusion


Yang Chen, Zheng Qin. Gradient-based compressive image fusion[J]. Frontiers of Information Technology & Electronic Engineering, 2015, 16(3): 227-237.

@article{title="Gradient-based compressive image fusion",
author="Yang Chen, Zheng Qin",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="16",
number="3",
pages="227-237",
year="2015",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1400217"
}

%0 Journal Article
%T Gradient-based compressive image fusion
%A Yang Chen
%A Zheng Qin
%J Frontiers of Information Technology & Electronic Engineering
%V 16
%N 3
%P 227-237
%@ 2095-9184
%D 2015
%I Zhejiang University Press & Springer
%DOI 10.1631/FITEE.1400217

TY - JOUR
T1 - Gradient-based compressive image fusion
A1 - Yang Chen
A1 - Zheng Qin
JO - Frontiers of Information Technology & Electronic Engineering
VL - 16
IS - 3
SP - 227
EP - 237
SN - 2095-9184
Y1 - 2015
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.1400217
ER -


Abstract: 
We present a novel image fusion scheme based on image gradients and scrambled block Hadamard ensemble (SBHE) sampling for compressive sensing imaging. First, the source images are compressed by compressive sensing to ease transmission from the sensors. In the fusion phase, the image gradient is calculated to reflect the abundance of contour information. By combining the gradients of the source images, gradient-based weights are obtained, with which the fused compressive sensing coefficients are computed. Finally, an inverse transformation is applied to the fused coefficients to obtain the fused image. Information entropy (IE) and Xydeas's and Piella's metrics are applied as non-reference objective metrics to evaluate the fusion quality of the different fusion schemes. In addition, different image fusion application scenarios are used to explore the scenario adaptability of the proposed scheme. Simulation results demonstrate that the gradient-based scheme performs best in terms of both subjective judgment and objective metrics, and that the proposed fusion scheme can be applied in different fusion scenarios.
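To make the fusion rule concrete, the following Python fragment sketches the weighting and fusion steps under simple assumptions: the weight of each source is taken as the normalized total gradient magnitude of its measurement vector, and `gradient_weights` / `fuse_measurements` are hypothetical helper names rather than the authors' code.

```python
import numpy as np

def gradient_weights(y1, y2, eps=1e-12):
    # Hypothetical helper: use the total gradient magnitude of each
    # measurement vector as a proxy for its contour information
    # (an illustrative weighting, not necessarily the paper's exact formula).
    g1 = np.abs(np.gradient(y1)).sum()
    g2 = np.abs(np.gradient(y2)).sum()
    s = g1 + g2 + eps
    return g1 / s, g2 / s

def fuse_measurements(y1, y2):
    # Weighted fusion of the compressive measurements of two source images.
    w1, w2 = gradient_weights(y1, y2)
    return w1 * y1 + w2 * y2

# y1 and y2 would be the compressive measurements of the two source images;
# the fused vector is then handed to the sparse reconstruction solver.
```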

Using SBHE (scrambled block Hadamard ensemble) sampling and GPSR (gradient projection for sparse reconstruction), the authors analyze six weighted image fusion schemes and show that, among them, gradient-based weighting gives the best results in terms of both subjective and objective judgments. Experiments are conducted in two typical image fusion scenarios: (1) thermal and visible image fusion, and (2) multi-focus image fusion.
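For context, SBHE measures a randomly scrambled signal with a block Hadamard transform and keeps a random subset of the coefficients (Do et al., 2012). The sketch below is a toy illustration under assumed choices of block size and normalization, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import hadamard

def sbhe_measure(x, m, block_size=32, seed=0):
    # Toy SBHE measurement: scramble the samples, apply a block Hadamard
    # transform, and keep m randomly chosen coefficients.
    # Block size, normalization, and the RNG are illustrative choices.
    rng = np.random.default_rng(seed)
    n = x.size
    assert n % block_size == 0, "signal length must be a multiple of block_size"

    perm = rng.permutation(n)                       # random scrambling of samples
    blocks = x[perm].reshape(-1, block_size)
    H = hadamard(block_size) / np.sqrt(block_size)  # orthonormal Hadamard matrix
    coeffs = (blocks @ H.T).ravel()                 # block-diagonal transform
    keep = rng.choice(n, size=m, replace=False)     # random subsampling
    return coeffs[keep], (perm, keep)               # measurements + sampling info
```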

Gradient-based compressive image fusion

Objective: For multi-sensor image fusion, to realize gradient-based compressive sensing image fusion with a small transmission volume and low computational complexity.
Innovation: A gradient-based fusion rule (Fig. 1) is proposed for fusing the compressive sensing coefficients; the fused coefficients are then inverse-transformed to recover the image, improving the quality of compressive sensing fusion.
Method: First, the images captured by multiple sensors are decomposed by compressive sensing to raise the sensors' transmission rate. In the fusion stage, the compressive sensing coefficients are fused according to their gradients, and the fused coefficients are inverse-transformed to obtain the fused image. Application experiments in two fusion scenarios (Figs. 2-7, Tables 1-6) show that the proposed algorithm outperforms other traditional compressive sensing image fusion methods in terms of both human visual perception and objective fusion criteria.
Conclusion: An efficient gradient-based compressive sensing image fusion method is proposed for a variety of fusion scenarios, improving image fusion accuracy.

Key words: Compressive sensing; Image fusion; Gradient-based image fusion; Compressive sensing image fusion
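The last stage, recovering the fused image from the fused measurements, is an l1-regularized least-squares problem that the paper solves with GPSR (Figueiredo et al., 2007). As a rough stand-in, the sketch below uses plain ISTA on the same objective; the measurement operator A, the regularization weight tau, and the iteration count are illustrative.

```python
import numpy as np

def ista_l1(A, y, tau=0.1, n_iter=200):
    # Plain ISTA for  min_x 0.5*||y - A x||^2 + tau*||x||_1 .
    # A simple stand-in for GPSR (Figueiredo et al., 2007), which solves the
    # same l1-regularized least-squares problem; A, tau, and n_iter are
    # illustrative settings, not the paper's.
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - tau / L, 0.0)  # soft threshold
    return x
```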


Reference

[1]Amolins, K., Zhang, Y., Dare, P., 2007. Wavelet based image fusion techniques—an introduction, review and comparison. ISPRS J. Photogram. Remote Sens., 62(4):249-263.

[2]Byeungwoo, J., Landgrebe, D.A., 1999. Decision fusion approach for multitemporal classification. IEEE Trans. Geosci. Remote Sens., 37(3):1227-1233.

[3]Candès, E.J., Romberg, J., 2005. l1-Magic: Recovery of Sparse Signals via Convex Programming. Available from http://www.acm.caltech.edu/l1magic/

[4]Candès, E.J., Tao, T., 2006. Near-optimal signal recovery from random projections: universal encoding strategies. IEEE Trans. Inform. Theory, 52(12):5406-5425.

[5]Candès, E.J., Wakin, M.B., 2008. An introduction to compressive sampling. IEEE Signal Process. Mag., 25(2):21-30.

[6]Chen, R.Y., Li, S., Yang, R., et al., 2008. Multi-focus images fusion based on data assimilation and genetic algorithm. Proc. Int. Conf. on Computer Science and Software Engineering, p.249-252.

[7]Chen, S.S., Donoho, D.L., Saunders, M.A., 1998. Atomic decomposition by basis pursuit. SIAM J. Sci. Comput., 20(1):33-61.

[8]Ding, M., Wei, L., Wang, B.F., 2013. Research on fusion method for infrared and visible images via compressive sensing. Infrared Phys. Technol., 57:56-67.

[9]Do, T.T., Lu, G., Nguyen, N.H., et al., 2012. Fast and efficient compressive sensing using structurally random matrices. IEEE Trans. Signal Process., 60(1):139-154.

[10]Donoho, D.L., 2006. Compressed sensing. IEEE Trans. Inform. Theory, 52(4):1289-1306.

[11]Duarte, M.F., Davenpot, M.A., Takhar, D., et al., 2008. Single-pixel imaging via compressive sampling. IEEE Signal Process. Mag., 25(2):83-91.

[12]Figueiredo, M.A.T., Nowak, R.D., Wright, S.J., 2007. Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Topics Signal Process., 1(4):586-597.

[13]Han, J.J., Loffeld, O., Hartmann, K., et al., 2010. Multi image fusion based on compressive sensing. Proc. Int. Conf. on Audio Language and Image Processing, p.1463-1469.

[14]Jolliffe, I.T., 1986. Principal Component Analysis. Springer.

[15]Kang, B., Zhu, W.P., Yan, J., 2013. Fusion framework for multi-focus images based on compressed sensing. IET Image Process., 7(4):290-299.

[16]Li, S.T., Yang, B., 2008. Multifocus image fusion by combining curvelet and wavelet transform. Patt. Recog. Lett., 29(9):1295-1301.

[17]Li, S.T., Kwok, J.T.Y., Tsang, I.W., et al., 2004. Fusing images with different focuses using support vector machines. IEEE Trans. Neur. Netw., 15(6):1555-1561.

[18]Li, X., Qin, S.Y., 2011. Efficient fusion for infrared and visible images based on compressive sensing principle. IET Image Process., 5(2):141-147.

[19]Liu, Z., Tsukada, K., Hanasaki, K., et al., 2001. Image fusion by using steerable pyramid. Patt. Recog. Lett., 22(9):929-939.

[20]Luo, X.Y., Zhang, J., Yang, J.Y., et al., 2009. Image fusion in compressed sensing. Proc. 16th IEEE Int. Conf. on Image Processing, p.2205-2208.

[21]Pajares, G., de la Cruz, J.M., 2004. A wavelet-based image fusion tutorial. Patt. Recog., 37(9):1855-1872.

[22]Petrović, V.S., Xydeas, C.S., 2004. Gradient-based multiresolution image fusion. IEEE Trans. Image Process., 13(2):228-237.

[23]Piella, G., Heijmans, H., 2003. A new quality metric for image fusion. Proc. Int. Conf. on Image Processing, p.173-176.

[24]Qu, G.H., Zhang, D.L., Yan, P.F., 2002. Information measure for performance of image fusion. Electron. Lett., 38(7):313-315.

[25]Romberg, J., 2008. Imaging via compressive sampling. IEEE Signal Process. Mag., 25(2):14-20.

[26]Ross, A.A., Govindarajan, R., 2005. Feature level fusion of hand and face biometrics. Proc. SPIE, p.196-204.

[27]Shi, W.Z., Zhu, C.Q., Tian, Y., et al., 2005. Wavelet-based image fusion and quality assessment. Int. J. Appl. Earth Observ. Geoinform., 6(3-4):241-251.

[28]Smith, L.I., 2002. A Tutorial on Principal Components Analysis. Available from www.cs.otago.ac.nz/cosc453/student_tutorials/principal_components.pdf.

[29]Tropp, J., Gilbert, A.C., 2007. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inform. Theory, 53(12):4655-4666.

[30]Wan, T., Qin, Z.C., 2011. An application of compressive sensing for image fusion. Int. J. Comput. Math., 88(18):3-9.

[31]Wang, R., Du, L.F., 2014. Infrared and visible image fusion based on random projection and sparse representation. Int. J. Remote Sens., 35(5):1640-1652.

[32]Wang, Y., Yang, J.F., Yin, W., et al., 2008. A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Image Sci., 1(3):248-272.

[33]Xydeas, C.S., Petrović, V., 2000. Objective image fusion performance measure. Electron. Lett., 36(4):308-309.

[34]Yang, X.H., Jin, H.Y., Jiao, L.C., 2007. Adaptive image fusion algorithm for infrared and visible light images based on DT-CWT. J. Infrared Millim. Waves, 26(6):419-424 (in Chinese).

[35]Yang, Y., Han, C.Z., Kang, X., et al., 2007. An overview on pixel-level image fusion in remote sensing. Proc. IEEE Int. Conf. on Automation and Logistics, p.2339-2344.

[36]Zheng, Y.Z., Qin, Z., 2009. Region-based image fusion method using bidimensional empirical mode decomposition. J. Electron. Imag., 18(1):013008.
