CLC number: TP39
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2023-10-29
Weining WANG, Jiahui LI, Yifan LI, Xiaofen XING. Style-conditioned music generation with Transformer-GANs[J]. Frontiers of Information Technology & Electronic Engineering, in press. https://doi.org/10.1631/FITEE.2300359
Style-conditioned music generation with Transformer-GANs

School of Electronic and Information Engineering, South China University of Technology, Guangzhou 510600, China

Abstract: In recent years, researchers have developed a variety of algorithms for generating pleasant-sounding music, yet style control is sometimes neglected during generation. Musical style refers to the representative characteristics a piece of music presents and is one of its most prominent attributes. This paper proposes a novel music generation algorithm that composes complete musical pieces from scratch in a specified style. The algorithm introduces a style-conditioned linear generator and a style discriminator. The style-conditioned generator models MIDI event sequences while emphasizing the role of style information. The style discriminator applies an adversarial learning mechanism and introduces two novel loss functions to strengthen the modeling of music sequences. In addition, this paper establishes, for the first time, a discriminative metric for evaluating the stylistic consistency between generated music and the training data. On existing public datasets, both objective and subjective evaluations show that the proposed algorithm outperforms state-of-the-art methods in music generation.

Key words:
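To make the architecture described in the abstract concrete, the sketch below outlines one plausible reading of a style-conditioned generator/discriminator pair in PyTorch. It is a minimal illustration, not the authors' implementation: the vocabulary size, style count, model dimensions, and the class names StyleConditionedGenerator and StyleDiscriminator are all assumptions, and a standard self-attention encoder stands in for the paper's linear generator.

```python
# Minimal sketch (NOT the authors' released code) of a style-conditioned
# Transformer generator paired with a style discriminator. VOCAB, STYLES,
# and all hyperparameters below are illustrative placeholders.
import torch
import torch.nn as nn

VOCAB = 512     # hypothetical size of the MIDI-event vocabulary
STYLES = 4      # hypothetical number of style labels
D_MODEL = 256
MAX_LEN = 2048

class StyleConditionedGenerator(nn.Module):
    """Autoregressive Transformer over MIDI events, conditioned on a style label."""
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, D_MODEL)
        self.sty = nn.Embedding(STYLES, D_MODEL)  # style information injected at every step
        self.pos = nn.Embedding(MAX_LEN, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, events, style):
        # events: (B, T) integer event ids; style: (B,) integer style ids
        T = events.size(1)
        pos = torch.arange(T, device=events.device)
        x = self.tok(events) + self.sty(style)[:, None, :] + self.pos(pos)
        causal = torch.triu(  # causal mask: each step attends only to its past
            torch.full((T, T), float("-inf"), device=events.device), diagonal=1)
        return self.head(self.body(x, mask=causal))  # (B, T, VOCAB) next-event logits

class StyleDiscriminator(nn.Module):
    """Scores sequences as real/fake and predicts their style label."""
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers=2)
        self.adv = nn.Linear(D_MODEL, 1)       # adversarial (real/fake) score
        self.cls = nn.Linear(D_MODEL, STYLES)  # style-classification head

    def forward(self, events):
        h = self.body(self.tok(events)).mean(dim=1)  # mean-pool sequence features
        return self.adv(h), self.cls(h)
```

In such a setup, the generator's maximum-likelihood loss would typically be combined with the discriminator's adversarial and style-classification signals during training; the exact form of the two novel loss functions mentioned in the abstract is defined in the paper itself.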