
CLC number: TP391.9

On-line Access: 2019-10-08

Received: 2018-11-04

Revision Accepted: 2019-03-11

Crosschecked: 2019-08-23


 ORCID:

Hong-yu Wu

http://orcid.org/0000-0002-8127-3347


Frontiers of Information Technology & Electronic Engineering  2019 Vol.20 No.9 P.1165-1174

http://doi.org/10.1631/FITEE.1800693


Modeling yarn-level geometry from a single micro-image


Author(s):  Hong-yu Wu, Xiao-wu Chen, Chen-xu Zhang, Bin Zhou, Qin-ping Zhao

Affiliation(s):  State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, Beijing 100191, China

Corresponding email(s):   whyvrlab@buaa.edu.cn, chen@buaa.edu.cn, zhangchenxu528@buaa.edu.cn, zhoubin@buaa.edu.cn

Key Words:  Single micro-images, Yarn geometry, Cloth appearance



Hong-yu Wu, Xiao-wu Chen, Chen-xu Zhang, Bin Zhou, Qin-ping Zhao. Modeling yarn-level geometry from a single micro-image[J]. Frontiers of Information Technology & Electronic Engineering, 2019, 20(9): 1165-1174.

@article{title="Modeling yarn-level geometry from a single micro-image",
author="Hong-yu Wu, Xiao-wu Chen, Chen-xu Zhang, Bin Zhou, Qin-ping Zhao",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="20",
number="9",
pages="1165-1174",
year="2019",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1800693"
}

%0 Journal Article
%T Modeling yarn-level geometry from a single micro-image
%A Hong-yu Wu
%A Xiao-wu Chen
%A Chen-xu Zhang
%A Bin Zhou
%A Qin-ping Zhao
%J Frontiers of Information Technology & Electronic Engineering
%V 20
%N 9
%P 1165-1174
%@ 2095-9184
%D 2019
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.1800693

TY - JOUR
T1 - Modeling yarn-level geometry from a single micro-image
A1 - Hong-yu Wu
A1 - Xiao-wu Chen
A1 - Chen-xu Zhang
A1 - Bin Zhou
A1 - Qin-ping Zhao
JO - Frontiers of Information Technology & Electronic Engineering
VL - 20
IS - 9
SP - 1165
EP - 1174
SN - 2095-9184
Y1 - 2019
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.1800693
ER -


Abstract: 
Different types of cloth show distinctive appearances owing to their unique yarn-level geometric details. Despite its importance in applications such as cloth rendering and simulation, capturing yarn-level geometry is nontrivial: conventional methods require special hardware such as computed-tomography scanners. In this paper, we propose a novel method that produces the yarn-level geometry of real cloth from a single micro-image captured by a consumer digital camera with a macro lens. Given a single input image, our method estimates the large-scale yarn geometry from image shading, and recovers fine-scale fiber details via the proposed fiber tracing and generation algorithms. Experimental results indicate that our method can capture the detailed yarn-level geometry of a wide range of cloths and reproduce plausible cloth appearances.
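The shading-based estimation of large-scale yarn geometry can be illustrated with a toy sketch. Under a simple Lambertian assumption with near-frontal lighting, brighter pixels face the camera more directly, so pixel intensity can serve as a rough proxy for surface height, with smoothing retaining only the coarse yarn shape. The intensity-as-height proxy and the `height_from_shading` helper below are illustrative assumptions for exposition only, not the paper's actual shape-from-shading formulation.

```python
def height_from_shading(intensity, smooth_iters=2):
    """Estimate a coarse height field from a grayscale micro-image.

    intensity: 2D list of floats in [0, 1] (rows x cols).
    Treats brightness as a height proxy (a simplifying assumption),
    then applies repeated 3x3 box-filter smoothing so that only the
    large-scale yarn-shaped bumps survive.
    """
    h = [row[:] for row in intensity]
    rows, cols = len(h), len(h[0])
    for _ in range(smooth_iters):
        new = [[0.0] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                acc, n = 0.0, 0
                # Average over the 3x3 neighborhood, clipped at borders.
                for di in (-1, 0, 1):
                    for dj in (-1, 0, 1):
                        ii, jj = i + di, j + dj
                        if 0 <= ii < rows and 0 <= jj < cols:
                            acc += h[ii][jj]
                            n += 1
                new[i][j] = acc / n
        h = new
    return h


# A bright vertical stripe (a "yarn") stays higher than the dark
# background after smoothing.
img = [[1.0 if j == 2 else 0.0 for j in range(5)] for _ in range(5)]
height = height_from_shading(img)
```

In the paper's pipeline, this large-scale estimate would then be combined with per-fiber detail recovered by the fiber tracing and generation steps.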

Modeling yarn-level geometry of cloth from a single micro-image (Chinese abstract)

Abstract: Real-world cloths have different microscopic yarn structures, which give different cloths their varied appearances. Realistic cloth rendering has important applications in areas such as film production and e-commerce. To capture the microscopic yarn geometry of cloth, conventional methods require expensive and complex equipment such as micro-CT scanners; the acquisition process is time-consuming and laborious, making these methods hard to popularize. To reduce the complexity of yarn acquisition, this paper proposes a yarn acquisition and modeling method based on a single micro-image, requiring only an image captured by an ordinary consumer camera fitted with a macro lens. The method first obtains the large-scale geometry of the yarns from the shading of the single micro-image; it then recovers fiber details on the yarns via a fiber-tracing algorithm; finally, the two are combined to obtain the micro-scale geometry of the cloth yarns. Experimental results show that the method can efficiently capture the yarn geometry of various types of cloth.

Keywords: Single micro-image; 3D yarn geometry; Cloth appearance



