

On-line Access: 2024-03-25

Received: 2023-11-04

Revision Accepted: 2024-03-25

Crosschecked: 2023-11-22




Frontiers of Information Technology & Electronic Engineering  2024 Vol.25 No.3 P.333-341


Large language model and domain-specific model collaboration for smart education

Author(s):  Yawei LUO, Yi YANG

Affiliation(s):  School of Software Technology, Zhejiang University, Ningbo 315048, China

Corresponding email(s):   yaweiluo@zju.edu.cn, yangyics@zju.edu.cn


Yawei LUO, Yi YANG. Large language model and domain-specific model collaboration for smart education[J]. Frontiers of Information Technology & Electronic Engineering, 2024, 25(3): 333-341.

@article{Luo2024LDMC,
title="Large language model and domain-specific model collaboration for smart education",
author="Yawei LUO, Yi YANG",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="25",
number="3",
pages="333-341",
year="2024",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2300747"
}

%0 Journal Article
%T Large language model and domain-specific model collaboration for smart education
%A Yawei LUO
%A Yi YANG
%J Frontiers of Information Technology & Electronic Engineering
%V 25
%N 3
%P 333-341
%@ 2095-9184
%D 2024
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2300747

TY  - JOUR
T1  - Large language model and domain-specific model collaboration for smart education
A1  - Yawei LUO
A1  - Yi YANG
JO  - Frontiers of Information Technology & Electronic Engineering
VL  - 25
IS  - 3
SP  - 333
EP  - 341
SN  - 2095-9184
Y1  - 2024
PB  - Zhejiang University Press & Springer
DO  - 10.1631/FITEE.2300747
ER  -

In this paper, we introduce the large language model and domain-specific model collaboration (LDMC) framework, designed to enhance smart education. The LDMC framework leverages the comprehensive and versatile knowledge of large domain-general models, combines it with the specialized disciplinary knowledge of small domain-specific models (DSMs), and incorporates pedagogical knowledge from learning-theory models. This integration yields multiple knowledge representations, fostering personalized and adaptive educational experiences. We explore various applications of the LDMC framework in the context of smart education. LDMC represents an advanced and comprehensive educational assistance framework with intelligent capabilities. As artificial intelligence (AI) continues to advance, the framework holds significant promise for the field of smart education.
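The collaboration the abstract describes can be illustrated with a minimal, hypothetical sketch: a small domain-specific model supplies a disciplinary fact, a learning-theory component supplies a pedagogy hint matched to the learner, and a domain-general model composes the tutoring response from both. All names and components below (`domain_specific_model`, `pedagogy_hint`, `LearnerProfile`, the canned knowledge) are illustrative stand-ins, not the authors' actual system.

```python
from dataclasses import dataclass


@dataclass
class LearnerProfile:
    """Illustrative learner model, e.g., a visual vs. verbal learning style."""
    style: str


def domain_specific_model(query: str) -> str:
    """Stand-in DSM: returns a canned disciplinary fact for matching topics."""
    knowledge = {
        "photosynthesis": "Plants convert light energy into chemical energy.",
    }
    for topic, fact in knowledge.items():
        if topic in query.lower():
            return fact
    return "No domain knowledge found."


def pedagogy_hint(profile: LearnerProfile) -> str:
    """Stand-in learning-theory model: adapts presentation to the learner."""
    hints = {
        "visual": "Illustrate the answer with a diagram suggestion.",
        "verbal": "Explain the answer step by step in plain language.",
    }
    return hints.get(profile.style, "Present the answer plainly.")


def domain_general_model(fact: str, hint: str) -> str:
    """Stand-in LLM: composes the final response from both knowledge sources."""
    return f"{fact} Tip for the tutor: {hint}"


def ldmc_respond(query: str, profile: LearnerProfile) -> str:
    """Route one student query through the three collaborating components."""
    fact = domain_specific_model(query)
    hint = pedagogy_hint(profile)
    return domain_general_model(fact, hint)
```

For example, `ldmc_respond("Explain photosynthesis", LearnerProfile(style="visual"))` returns the disciplinary fact combined with a visual-style teaching hint; in a real system each stub would be replaced by a trained model.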








Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - 2024 Journal of Zhejiang University-SCIENCE