
CLC number: TP391.7

On-line Access: 2010-11-04

Received: 2010-09-14

Revision Accepted: 2010-10-08

Crosschecked: 2010-09-14


Journal of Zhejiang University SCIENCE C 2010 Vol.11 No.11 P.850-859

http://doi.org/10.1631/jzus.C1001004


Salient object extraction for user-targeted video content association


Author(s):  Jia Li, Han-nan Yu, Yong-hong Tian, Tie-jun Huang, Wen Gao

Affiliation(s):  Key Lab of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; Graduate University of Chinese Academy of Sciences, Beijing 100049, China; National Engineering Lab for Video Technology (NELVT), School of EE & CS, Peking University, Beijing 100871, China

Corresponding email(s):   yhtian@pku.edu.cn

Key Words:  Salient object extraction, User-targeted video content association, Complementary saliency maps


Jia Li, Han-nan Yu, Yong-hong Tian, Tie-jun Huang, Wen Gao. Salient object extraction for user-targeted video content association[J]. Journal of Zhejiang University Science C, 2010, 11(11): 850-859.

@article{Li2010,
title="Salient object extraction for user-targeted video content association",
author="Jia Li, Han-nan Yu, Yong-hong Tian, Tie-jun Huang, Wen Gao",
journal="Journal of Zhejiang University Science C",
volume="11",
number="11",
pages="850-859",
year="2010",
publisher="Zhejiang University Press & Springer",
doi="10.1631/jzus.C1001004"
}

%0 Journal Article
%T Salient object extraction for user-targeted video content association
%A Jia Li
%A Han-nan Yu
%A Yong-hong Tian
%A Tie-jun Huang
%A Wen Gao
%J Journal of Zhejiang University SCIENCE C
%V 11
%N 11
%P 850-859
%@ 1869-1951
%D 2010
%I Zhejiang University Press & Springer
%R 10.1631/jzus.C1001004

TY - JOUR
T1 - Salient object extraction for user-targeted video content association
A1 - Jia Li
A1 - Han-nan Yu
A1 - Yong-hong Tian
A1 - Tie-jun Huang
A1 - Wen Gao
JO - Journal of Zhejiang University Science C
VL - 11
IS - 11
SP - 850
EP - 859
SN - 1869-1951
Y1 - 2010
PB - Zhejiang University Press & Springer
DO - 10.1631/jzus.C1001004
ER -


Abstract: 
The increasing amount of video on the Internet and in digital libraries highlights the necessity and importance of interactive video services, such as automatically associating additional materials (e.g., advertising logos and relevant selling information) with the video content so as to enrich the viewing experience. Toward this end, this paper presents a novel approach for user-targeted video content association (VCA). In this approach, salient objects are extracted automatically from the video stream using complementary saliency maps. Based on these salient objects, the VCA system can push related logo images to the users. Since the salient objects often correspond to important video content, the associated images can be considered content-related. Our VCA system also allows users to associate images with the preferred video content through simple interactions with a mouse or an infrared pen. Moreover, by learning each user's preference from feedback collected on the pulled or pushed images, the VCA system can provide user-targeted services. Experimental results show that our approach can extract salient objects effectively and efficiently. Subjective evaluations further show that our system can provide content-related and user-targeted VCA services in a less intrusive way.
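
To make the extraction step concrete, the sketch below fuses two generic, complementary saliency cues for a single video frame: the spectral-residual map of Hou and Zhang [9] and a simple center-surround contrast map. This is only an illustrative approximation of the idea of fusing complementary maps; the particular cues, the equal fusion weights, and the mean-plus-one-standard-deviation threshold are assumptions made for this sketch and are not the complementary saliency maps proposed in the paper.

import cv2
import numpy as np

def spectral_residual_saliency(gray):
    # Spectral-residual saliency (Hou and Zhang, 2007): the residual of the
    # log-amplitude spectrum is mapped back to the image domain.
    small = cv2.resize(gray, (64, 64)).astype(np.float32)
    fft = np.fft.fft2(small)
    log_amp = np.log(np.abs(fft) + 1e-8)
    phase = np.angle(fft)
    residual = log_amp - cv2.blur(log_amp, (3, 3))
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = cv2.GaussianBlur(sal, (9, 9), 2.5)
    return cv2.resize(sal, (gray.shape[1], gray.shape[0]))

def contrast_saliency(gray):
    # Center-surround contrast: deviation of each pixel from a heavily
    # blurred copy of the frame, a coarse "envelope" of salient regions.
    gray = gray.astype(np.float32)
    return np.abs(gray - cv2.GaussianBlur(gray, (31, 31), 0))

def normalize(m):
    # Rescale a map to [0, 1] so that neither cue dominates the fusion.
    return (m - m.min()) / (m.max() - m.min() + 1e-8)

def extract_salient_object(frame_bgr, w=0.5):
    # Fuse the two complementary maps with assumed equal weights, then keep
    # pixels whose fused saliency exceeds the mean plus one standard deviation.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    fused = w * normalize(spectral_residual_saliency(gray)) \
            + (1.0 - w) * normalize(contrast_saliency(gray))
    mask = (fused > fused.mean() + fused.std()).astype(np.uint8) * 255
    return fused, mask

In a full VCA pipeline, such a mask would localize the salient object in each frame so that matching logo images can be pushed or overlaid near it; how the mask is matched against a logo collection is outside the scope of this sketch.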


Reference

[1]Achanta, R., Hemami, S., Estrada, F., Susstrunk, S., 2009. Frequency-Tuned Salient Region Detection. IEEE Conf. on Computer Vision and Pattern Recognition, p.1597-1604.

[2]Allili, M.S., Ziou, D., 2007. Object of Interest Segmentation and Tracking by Using Feature Selection and Active Contours. IEEE Conf. on Computer Vision and Pattern Recognition, p.1-8.

[3]Brasnett, P., Bober, M., 2007. Proposed Improvements to Image Signature XM 31.0. MPEG Doc No. M14983.

[4]Chang, C.H., Hsieh, K.Y., Chung, M.C., Wu, J.L., 2008. Visa: Virtual Spotlighted Advertising. Proc. ACM Int. Conf. on Multimedia, p.837-840.

[5]Elazary, L., Itti, L., 2008. Interesting objects are visually salient. J. Vis., 8(3), Article No. 3.

[6]Friedland, G., Jantz, K., Rojas, R., 2005. SIOX: Simple Interactive Object Extraction in Still Images. IEEE Int. Symp. on Multimedia, p.7-14.

[7]Gao, W., Tian, Y.H., Huang, T.J., Yang, Q., 2010. Vlogging: a survey of video blogging technology on the web. ACM Comput. Surv., 42(4), Article No. 15.

[8]Guo, J.L., Mei, T., Liu, F.L., Hua, X.S., 2009. AdOn: An Intelligent Overlay Video Advertising System. SIGIR, p.628-629.

[9]Hou, X.D., Zhang, L.Q., 2007. Saliency Detection: a Spectral Residual Approach. IEEE Conf. on Computer Vision and Pattern Recognition, p.1-8.

[10]Hua, G., Liu, Z.C., Zhang, Z.Y., Wu, Y., 2006. Iterative local-global energy minimization for automatic extraction of objects of interest. IEEE Trans. Pattern Anal. Mach. Intell., 28(10):1701-1706.

[11]Itti, L., Koch, C., Niebur, E., 1998. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell., 20(11):1254-1259.

[12]Ko, B.C., Nam, J.Y., 2006. Automatic Object-of-Interest Segmentation from Natural Images. IEEE Int. Conf. on Pattern Recognition, p.45-48.

[13]Kwak, S.Y., Ko, B.C., Byun, H., 2005. Automatic salient-object extraction using the contrast map and salient points. LNCS, 3332:138-145.

[14]Lee, J.C., 2008. Hacking the Nintendo Wii remote. IEEE Perv. Comput., 7(3):39-45.

[15]Lee, J.T., Lee, H.D., Park, H.S., Song, Y.I., Rim, H.C., 2009. Finding Advertising Keywords on Video Scripts. SIGIR, p.686-687.

[16]Lekakos, G., Papakiriakopoulos, D., Chorianopoulos, K., 2001. An Integrated Approach to Interactive and Personalized TV Advertising. Workshop on Personalization in Future TV.

[17]Li, Y., Wan, K.W., Yan, X., Xu, C.S., 2005. Real Time Advertisement Insertion in Baseball Video Based on Advertisement Effect. Proc. ACM Int. Conf. on Multimedia, p.343-346.

[18]Liao, W.S., Chen, K.T., Hsu, W.H., 2008. Adimage: Video Advertising by Image Matching and Ad Scheduling Optimization. SIGIR, p.767-768.

[19]Liu, H.Y., Jiang, S.Q., Huang, Q.M., Xu, C.S., 2008. A Generic Virtual Content Insertion System Based on Visual Attention Analysis. Proc. ACM Int. Conf. on Multimedia, p.379-388.

[20]Liu, T., Sun, J., Zheng, N.N., Tang, X.O., Shum, H.Y., 2007. Learning to Detect a Salient Object. IEEE Conf. on Computer Vision and Pattern Recognition, p.1-8.

[21]Martin, D., Fowlkes, C., Tal, D., Malik, J., 2001. A Database of Human Segmented Natural Images and Its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics. IEEE ICCV, p.416-423.

[22]Mei, T., Hua, X.S., Yang, L.J., Li, S.P., 2007. VideoSense: Towards Effective Online Video Advertising. Proc. ACM Int. Conf. on Multimedia, p.1075-1084.

[23]Movahedi, V., Elder, J.H., 2010. Design and Perceptual Validation of Performance Measures for Salient Object Segmentation. IEEE Computer Society Workshop on Perceptual Organization in Computer Vision, p.49-56.

[24]Park, K.T., Moon, Y.S., 2007. Automatic Extraction of Salient Objects Using Feature Maps. Int. Conf. on Acoustics, Speech, and Signal Processing, p.617-620.

[25]Pinneli, S., Chandler, D.M., 2008. A Bayesian Approach to Predicting the Perceived Interest of Objects. 15th IEEE Int. Conf. on Image Processing, p.2584-2587.

[26]Srinivasan, S.H., Sawant, N., Wadhwa, S., 2007. Vadeo-Video Advertising System. Proc. ACM Int. Conf. on Multimedia, p.455-456.

[27]Thawani, A., Gopalan, S., Sridhar, V., 2004. Context Aware Personalized Ad Insertion in an Interactive TV Environment. Workshop on Personalization in Future TV.

[28]Walther, D., Koch, C., 2006. Modeling attention to salient proto-objects. Neur. Networks, 19(9):1395-1407.

[29]Wang, J.Q., Fang, Y.K., Lu, H.Q., 2008. Online Video Advertising Based on User’s Attention Relevancy Computing. IEEE Int. Conf. on Multimedia and Expo, p.1161-1164.

