
CLC number: TP391

On-line Access: 2021-03-08

Received: 2020-07-13

Revision Accepted: 2020-11-11

Crosschecked: 2021-01-08


 ORCID:

Yi Han

https://orcid.org/0000-0001-9176-8178

Linbo Qiao

https://orcid.org/0000-0002-8285-2738


Frontiers of Information Technology & Electronic Engineering 

Accepted manuscript available online (unedited version)


A survey of script learning


Author(s):  Yi Han, Linbo Qiao, Jianming Zheng, Hefeng Wu, Dongsheng Li, Xiangke Liao

Affiliation(s):  Science and Technology on Parallel and Distributed Processing Laboratory, National University of Defense Technology, Changsha 410000, China; Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410000, China; School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510006, China

Corresponding email(s):  hanyi12@nudt.edu.cn, qiao.linbo@nudt.edu.cn, zhengjianming12@nudt.edu.cn, wuhefeng@mail.sysu.edu.cn, dsli@nudt.edu.cn, xkliao@nudt.edu.cn

Key Words:  Script learning, Natural language processing, Commonsense knowledge modeling, Event reasoning



Yi Han, Linbo Qiao, Jianming Zheng, Hefeng Wu, Dongsheng Li, Xiangke Liao. A survey of script learning[J]. Frontiers of Information Technology & Electronic Engineering, in press. https://doi.org/10.1631/FITEE.2000347

@article{Han2021ScriptLearning,
title="A survey of script learning",
author="Yi Han and Linbo Qiao and Jianming Zheng and Hefeng Wu and Dongsheng Li and Xiangke Liao",
journal="Frontiers of Information Technology \& Electronic Engineering",
year="in press",
publisher="Zhejiang University Press \& Springer",
doi="10.1631/FITEE.2000347"
}

%0 Journal Article
%T A survey of script learning
%A Yi Han
%A Linbo Qiao
%A Jianming Zheng
%A Hefeng Wu
%A Dongsheng Li
%A Xiangke Liao
%J Frontiers of Information Technology & Electronic Engineering
%P 341-373
%@ 2095-9184
%D in press
%I Zhejiang University Press & Springer
doi="https://doi.org/10.1631/FITEE.2000347"

TY - JOUR
T1 - A survey of script learning
A1 - Yi Han
A1 - Linbo Qiao
A1 - Jianming Zheng
A1 - Hefeng Wu
A1 - Dongsheng Li
A1 - Xiangke Liao
JO - Frontiers of Information Technology & Electronic Engineering
SP - 341
EP - 373
SN - 2095-9184
Y1 - in press
PB - Zhejiang University Press & Springer
DO - https://doi.org/10.1631/FITEE.2000347
ER -


Abstract: 
A script is a structured knowledge representation of prototypical real-life event sequences. Learning the commonsense knowledge encoded in scripts can help machines understand natural language and draw commonsensible inferences. Script learning is an interesting and promising research direction, in which a trained script learning system processes narrative texts to capture script knowledge and draw inferences. However, there is currently no survey of script learning, so we provide this comprehensive survey to investigate the standard framework and the major research topics of the field. The field comprises three main topics: event representations, script learning models, and evaluation approaches. For each topic, we systematically summarize and categorize the existing script learning systems, and carefully analyze and compare the advantages and disadvantages of the representative systems. We also discuss the current state of the research and possible future directions.

A survey of script learning

Yi Han1, Linbo Qiao1, Jianming Zheng2, Hefeng Wu3, Dongsheng Li1, Xiangke Liao1
1 Science and Technology on Parallel and Distributed Processing Laboratory, National University of Defense Technology, Changsha 410000, China
2 Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410000, China
3 School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510006, China
Abstract: A script is a structured knowledge representation of everyday real-world events. Learning the rich commonsense knowledge contained in scripts can help machines understand natural language and make commonsense inferences. Script learning is a useful and promising research direction: a trained script learning system can process narrative texts, capture the script knowledge in them, and then draw inferences. However, no survey of script learning exists so far, so we wrote this article to investigate in depth the basic framework and major research directions of script learning. Script learning involves three key research topics: event representations, script learning models, and evaluation approaches. For each topic, we systematically summarize and categorize the existing script learning systems, and carefully analyze and compare the advantages and disadvantages of the representative systems. In addition, we investigate and discuss the current state and future research directions of script learning.

Keywords: Script learning; Natural language processing; Commonsense knowledge modeling; Event reasoning
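
As a rough, illustrative sketch only (not taken from the paper; the event tuples, toy corpus, and function names below are hypothetical), the following Python snippet shows one simple way the three topics above can fit together: events represented as (subject, verb, object) tuples, a count-based association model over event co-occurrence, and a narrative-cloze-style query in which a held-out event is ranked against a distractor.

```python
# Hypothetical sketch: tuple-based event representation, a count-based
# co-occurrence model, and a narrative-cloze-style ranking query.
from collections import Counter
from itertools import combinations

# A script-like chain of events, each as a (subject, verb, object) tuple.
chain = [
    ("customer", "enter", "restaurant"),
    ("customer", "order", "food"),
    ("waiter", "serve", "food"),
    ("customer", "eat", "food"),
]

def cooccurrence_counts(chains):
    # Count how often events occur, and co-occur within the same chain.
    pair_counts, event_counts = Counter(), Counter()
    for c in chains:
        event_counts.update(c)
        for e1, e2 in combinations(c, 2):
            pair_counts[(e1, e2)] += 1
            pair_counts[(e2, e1)] += 1
    return pair_counts, event_counts

def score_candidate(candidate, context, pair_counts, event_counts):
    # Higher when the candidate frequently co-occurs with the observed events.
    return sum(pair_counts[(e, candidate)] / max(event_counts[candidate], 1)
               for e in context)

# Narrative-cloze-style query: hide one event and rank candidates for the slot.
context = [e for e in chain if e != ("customer", "eat", "food")]
candidates = [("customer", "eat", "food"), ("customer", "fly", "plane")]
pairs, events = cooccurrence_counts([chain])
best = max(candidates, key=lambda c: score_candidate(c, context, pairs, events))
print(best)  # prefers the in-script event over the distractor
```

Real script learning systems replace the toy counts above with statistical or neural models trained on large narrative corpora, but the overall pipeline of representing events, modeling their associations, and evaluating by predicting held-out events is the same.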


