
CLC number: TP391

On-line Access: 2020-01-13

Received: 2019-07-30

Revision Accepted: 2019-12-08

Crosschecked: 2019-12-24


ORCID:

Lingyun Sun

http://orcid.org/0000-0002-5561-0493

Wei Xiang

http://orcid.org/0000-0003-2058-5379


Frontiers of Information Technology & Electronic Engineering  2019 Vol.20 No.12 P.1644-1656

DOI: 10.1631/FITEE.1900386


SmartPaint: a co-creative drawing system based on generative adversarial networks


Author(s):  Lingyun Sun, Pei Chen, Wei Xiang, Peng Chen, Wei-yue Gao, Ke-jun Zhang

Affiliation(s):  Key Laboratory of Design Intelligence and Digital Creativity of Zhejiang Province, Hangzhou 310027, China

Corresponding email(s):   sunly@zju.edu.cn, chenpei@zju.edu.cn, wxiang@zju.edu.cn

Key Words:  Co-creative drawing, Deep learning, Image generation


Lingyun Sun, Pei Chen, Wei Xiang, Peng Chen, Wei-yue Gao, Ke-jun Zhang. SmartPaint: a co-creative drawing system based on generative adversarial networks[J]. Frontiers of Information Technology & Electronic Engineering, 2019, 20(12): 1644-1656.

@article{title="SmartPaint: a co-creative drawing system based on generative adversarial networks",
author="Lingyun Sun, Pei Chen, Wei Xiang, Peng Chen, Wei-yue Gao, Ke-jun Zhang",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="20",
number="12",
pages="1644-1656",
year="2019",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1900386"
}

%0 Journal Article
%T SmartPaint: a co-creative drawing system based on generative adversarial networks
%A Lingyun Sun
%A Pei Chen
%A Wei Xiang
%A Peng Chen
%A Wei-yue Gao
%A Ke-jun Zhang
%J Frontiers of Information Technology & Electronic Engineering
%V 20
%N 12
%P 1644-1656
%@ 2095-9184
%D 2019
%I Zhejiang University Press & Springer
%DOI 10.1631/FITEE.1900386

TY - JOUR
T1 - SmartPaint: a co-creative drawing system based on generative adversarial networks
A1 - Lingyun Sun
A1 - Pei Chen
A1 - Wei Xiang
A1 - Peng Chen
A1 - Wei-yue Gao
A1 - Ke-jun Zhang
JO - Frontiers of Information Technology & Electronic Engineering
VL - 20
IS - 12
SP - 1644
EP - 1656
SN - 2095-9184
Y1 - 2019
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.1900386
ER -


Abstract: 
Artificial intelligence (AI) has played a significant role in imitating and producing large-scale designs such as e-commerce banners. However, it has been less successful at creative and collaborative design. Most people can express their ideas as rough sketches, but lack the professional skills to turn them into pleasing paintings. Existing AI approaches have failed to convert varied user sketches into artistically beautiful paintings while preserving their semantic concepts. To bridge this gap, we have developed SmartPaint, a co-creative drawing system based on generative adversarial networks (GANs) that enables a machine and a human being to collaborate on cartoon landscape painting. SmartPaint trains a GAN using triples of cartoon images, their corresponding semantic label maps, and edge detection maps. The machine can then simultaneously understand the cartoon style and semantics, along with the spatial relationships among the objects in the landscape images. The trained system receives a sketch as a semantic label map input and automatically synthesizes its edge map for stable handling of varied sketches. It then outputs a creative, refined painting in a style appropriate to the human's sketch. Experiments confirmed that the proposed SmartPaint system successfully generates high-quality cartoon paintings.
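The data-preparation step described in the abstract, pairing each cartoon image with its semantic label map and an automatically extracted edge map to form a training triple, can be sketched as follows. This is a minimal illustration only: it assumes float images in [0, 1] as NumPy arrays, substitutes a simple Sobel-style gradient threshold for the Canny detector the system actually uses, and the function names (`edge_map`, `make_training_triple`) are hypothetical, not from the paper.

```python
import numpy as np

def edge_map(gray, threshold=0.2):
    """Binary edge map from a grayscale image (floats in [0, 1]).

    A Sobel-style central-difference gradient stands in here for the
    full Canny detector used in the real pipeline.
    """
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]   # horizontal gradient
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]   # vertical gradient
    mag = np.sqrt(gx ** 2 + gy ** 2)
    if mag.max() == 0:
        return np.zeros(gray.shape, dtype=np.uint8)
    # Threshold relative to the strongest edge in the image
    return (mag > threshold * mag.max()).astype(np.uint8)

def make_training_triple(image_rgb, label_map):
    """Assemble one (image, semantic label map, edge map) triple."""
    gray = image_rgb.mean(axis=2)
    return image_rgb, label_map, edge_map(gray)
```

At inference time the system receives only the semantic label map (the user's sketch) and synthesizes the edge map itself; the triple above is what the GAN sees during training.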





Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn