
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2021-11-22
Yi Yang, Yueting Zhuang, Yunhe Pan. Multiple knowledge representation for big data artificial intelligence: framework, applications, and case studies[J]. Frontiers of Information Technology & Electronic Engineering, in press. https://doi.org/10.1631/FITEE.2100463
Multiple knowledge representation for big data artificial intelligence: framework, applications, and case studies
College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
Abstract: We propose a multiple knowledge representation (MKR) framework and discuss its significance and far-reaching implications for advancing big data artificial intelligence (AI) techniques across various domains. Traditional knowledge representations and modern deep-learning-based representations typically transform inputs into symbolic codes or vectors through specific mappings. For example, knowledge graphs focus on describing semantic relations among concepts, whereas deep neural networks act more like tools for perceiving raw signal inputs. Multiple knowledge representation is a more advanced AI representation framework with more complete intelligent functions, such as raw-signal perception, feature extraction and vectorization, knowledge symbolization, and logical reasoning. MKR has two advantages: (1) compared with existing deep-learning-dominated AI techniques, it offers stronger interpretability and better generalization; (2) integrating MKR into existing AI techniques enables different representations (e.g., raw-signal perception and symbolic encoding) to complement one another. We hope that research on and applications of MKR will drive the flourishing development of next-generation AI.
Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou
310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - 2026 Journal of Zhejiang University-SCIENCE


