CLC number: TP391
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2019-12-24
Lingyun Sun, Pei Chen, Wei Xiang, Peng Chen, Wei-yue Gao, Ke-jun Zhang. SmartPaint: a co-creative drawing system based on generative adversarial networks[J]. Frontiers of Information Technology & Electronic Engineering, in press. https://doi.org/10.1631/FITEE.1900386
Chinese title (translated): SmartPaint: a human-machine co-creative drawing system based on generative adversarial networks