Crosschecked: 2021-11-22
Yi Yang, Yueting Zhuang, Yunhe Pan. Multiple knowledge representation for big data artificial intelligence: framework, applications, and case studies[J]. Frontiers of Information Technology & Electronic Engineering, 2021, 22(12): 1551-1558.
@article{Yang2021MKR,
title="Multiple knowledge representation for big data artificial intelligence: framework, applications, and case studies",
author="Yi Yang and Yueting Zhuang and Yunhe Pan",
journal="Frontiers of Information Technology \& Electronic Engineering",
volume="22",
number="12",
pages="1551-1558",
year="2021",
publisher="Zhejiang University Press \& Springer",
doi="10.1631/FITEE.2100463"
}
Abstract: In this paper, we present a multiple knowledge representation (MKR) framework and discuss its potential for developing big data artificial intelligence (AI) techniques, with possible broader impacts across different AI areas. Typically, canonical knowledge representations and modern representations each emphasize a particular aspect of transforming inputs into symbolic encodings or vectors. For example, knowledge graphs focus on depicting semantic connections among concepts, whereas deep neural networks (DNNs) are better suited to perceiving raw signal inputs. MKR is an advanced AI representation framework covering more complete intelligent functions, such as raw signal perception, feature extraction and vectorization, knowledge symbolization, and logical reasoning. MKR has two benefits: (1) it makes current AI techniques (dominated by deep learning) more explainable and generalizable, and (2) it extends current AI techniques by integrating multiple representations so that their complementary capacities, e.g., raw signal perception and symbolic encoding, can benefit each other. We expect that MKR research and its applications will drive the evolution of AI 2.0 and beyond.
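The abstract sketches MKR as a pipeline that couples a vector (DNN-style) representation with a symbolic (knowledge-graph-style) representation: raw signals are perceived and vectorized, vectors are grounded to symbols, and symbols support logical reasoning. The following minimal Python sketch illustrates that flow only; the perception stub, prototype vectors, concept names, and graph edges are hypothetical placeholders for illustration, not the authors' implementation.

import math

# --- Vector representation: stand-in for a DNN feature extractor ---
def perceive(signal):
    """Map a raw input to a feature vector (trivial hand-made stub)."""
    lookup = {
        "photo_of_cat.jpg": [0.9, 0.1, 0.0],
        "photo_of_dog.jpg": [0.1, 0.9, 0.0],
    }
    return lookup[signal]

# Prototype vectors anchoring each symbolic concept in the same feature space.
PROTOTYPES = {"cat": [1.0, 0.0, 0.0], "dog": [0.0, 1.0, 0.0]}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def symbolize(vec):
    """Ground a feature vector to the nearest symbolic concept (knowledge symbolization)."""
    return max(PROTOTYPES, key=lambda c: cosine(vec, PROTOTYPES[c]))

# --- Symbolic representation: a toy knowledge graph of (head, relation, tail) triples ---
KG = [("cat", "is_a", "mammal"), ("dog", "is_a", "mammal"), ("mammal", "is_a", "animal")]

def infer_is_a(concept):
    """Transitively collect 'is_a' ancestors of a concept (simple logical reasoning)."""
    ancestors, frontier = set(), {concept}
    while frontier:
        nxt = {t for h, r, t in KG if h in frontier and r == "is_a"}
        frontier = nxt - ancestors
        ancestors |= nxt
    return ancestors

if __name__ == "__main__":
    vec = perceive("photo_of_cat.jpg")   # perception -> vector
    concept = symbolize(vec)             # vector -> symbol
    print(concept, infer_is_a(concept))  # symbol -> reasoning: cat {'mammal', 'animal'}

A full MKR system would replace the perception stub with a trained DNN and the toy graph with a large-scale knowledge base; the division of labor between the vector and symbolic representations, however, is the same as in this sketch.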