CLC number: TP39
On-line Access: 2023-03-25
Received: 2022-03-15
Revision Accepted: 2023-03-25
Crosschecked: 2022-08-31
Yunnong CHEN, Yankun ZHEN, Chuning SHI, Jiazhi LI, Liuqing CHEN, Zejian LI, Lingyun SUN, Tingting ZHOU, Yanfang CHANG. UI layers merger: merging UI layers via visual learning and boundary prior[J]. Frontiers of Information Technology & Electronic Engineering, 2023, 24(3): 373-387.
@article{Chen2023UILM,
title="UI layers merger: merging UI layers via visual learning and boundary prior",
author="Yunnong CHEN, Yankun ZHEN, Chuning SHI, Jiazhi LI, Liuqing CHEN, Zejian LI, Lingyun SUN, Tingting ZHOU, Yanfang CHANG",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="24",
number="3",
pages="373-387",
year="2023",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2200099"
}
%0 Journal Article
%T UI layers merger: merging UI layers via visual learning and boundary prior
%A Yunnong CHEN
%A Yankun ZHEN
%A Chuning SHI
%A Jiazhi LI
%A Liuqing CHEN
%A Zejian LI
%A Lingyun SUN
%A Tingting ZHOU
%A Yanfang CHANG
%J Frontiers of Information Technology & Electronic Engineering
%V 24
%N 3
%P 373-387
%@ 2095-9184
%D 2023
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2200099
TY - JOUR
T1 - UI layers merger: merging UI layers via visual learning and boundary prior
A1 - Yunnong CHEN
A1 - Yankun ZHEN
A1 - Chuning SHI
A1 - Jiazhi LI
A1 - Liuqing CHEN
A1 - Zejian LI
A1 - Lingyun SUN
A1 - Tingting ZHOU
A1 - Yanfang CHANG
JO - Frontiers of Information Technology & Electronic Engineering
VL - 24
IS - 3
SP - 373
EP - 387
SN - 2095-9184
Y1 - 2023
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2200099
ER -
Abstract: With the fast-growing graphical user interface (GUI) development workload in the Internet industry, several studies have attempted to generate maintainable front-end code from GUI screenshots. Using user interface (UI) design drafts that contain UI metadata is a more suitable starting point for this task. However, fragmented layers inevitably appear in UI design drafts, which greatly reduces the quality of the generated code. None of the existing automated GUI techniques detects and merges fragmented layers to improve the accessibility of the generated code. In this paper, we propose UI layers merger (UILM), a vision-based method that automatically detects and merges fragmented layers into UI components. UILM consists of a merging area detector (MAD) and a layer merging algorithm. The MAD incorporates boundary prior knowledge to accurately detect the boundaries of UI components; the layer merging algorithm then searches for the associated layers within a component's boundary and merges them into a whole. We present a dynamic data augmentation approach to boost the performance of the MAD, and we construct a large-scale UI dataset for training the MAD and testing the performance of UILM. Experimental results show that the proposed method outperforms the best baseline in merging area detection and achieves decent layer merging accuracy. A user study on a real application further confirms the effectiveness of UILM.
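The layer-merging step described in the abstract — searching for the layers that fall within a detected component boundary and merging them into a whole — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function names, box representation, and the 0.9 containment threshold are all assumptions.

```python
# Hypothetical sketch of UILM's layer-merging step: given component
# boundaries predicted by a detector (e.g., the MAD) and the bounding boxes
# of individual layers in a design draft, collect the layers lying (mostly)
# inside each boundary and merge them into one component box.
# Boxes are (x1, y1, x2, y2); the names and threshold are illustrative.

def inside_ratio(layer, region):
    """Fraction of the layer's area that lies inside the region box."""
    lx1, ly1, lx2, ly2 = layer
    rx1, ry1, rx2, ry2 = region
    ix1, iy1 = max(lx1, rx1), max(ly1, ry1)
    ix2, iy2 = min(lx2, rx2), min(ly2, ry2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = max(1, (lx2 - lx1) * (ly2 - ly1))
    return inter / area

def merge_layers(layer_boxes, component_boxes, thresh=0.9):
    """For each detected component boundary, gather the associated layers
    (those mostly contained in it) and merge them into one bounding box."""
    merged = []
    for region in component_boxes:
        members = [b for b in layer_boxes if inside_ratio(b, region) >= thresh]
        if members:
            merged.append((
                min(b[0] for b in members), min(b[1] for b in members),
                max(b[2] for b in members), max(b[3] for b in members),
            ))
    return merged

# Two fragmented layers inside one detected boundary, one unrelated layer:
layers = [(10, 10, 30, 30), (25, 12, 60, 40), (200, 200, 220, 220)]
components = [(8, 8, 65, 45)]
print(merge_layers(layers, components))  # [(10, 10, 60, 40)]
```

In practice the merged result would replace the member layers in the design draft's layer tree; the containment threshold trades off against partially overlapping neighbors.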