CLC number: TP181
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2023-04-09
https://orcid.org/0009-0001-0142-7933
Luolin XIONG, Yang TANG, Chensheng LIU, Shuai MAO, Ke MENG, Zhaoyang DONG, Feng QIAN. A home energy management approach using decoupling value and policy in reinforcement learning[J]. Frontiers of Information Technology & Electronic Engineering, in press. https://doi.org/10.1631/FITEE.2200667
A home energy management approach using decoupling value and policy in reinforcement learning

1 Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai 200237, China
2 School of Electrical Engineering, Nantong University, Nantong 226019, China
3 School of Electrical Engineering and Telecommunications, University of New South Wales, NSW 2052, Australia
4 School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Avenue, 639798, Singapore

Abstract: Given the popularity of electric vehicles (EVs) and the flexibility of household appliances, it is feasible to schedule energy in home energy systems under dynamic electricity prices, optimizing electricity cost while preserving residential comfort. This paper proposes a data-driven deep reinforcement learning approach for home energy management. First, to capture the multiple uncertain factors affecting EV charging behavior, an improved mathematical model integrating driver experience, unexpected events, and traffic conditions is introduced to describe the dynamic energy demand of EVs in home energy systems. Second, a decoupled advantage actor-critic (DA2C) algorithm is proposed to improve energy optimization performance by alleviating the overfitting caused by sharing one network between the policy and value functions. Moreover, the decoupled policy and value networks ensure the generalization of the proposed method to unseen scenarios. Finally, comprehensive experiments compare the proposed method with existing approaches. The results show that the method optimizes electricity cost while maintaining residential comfort in different scenarios.
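The decoupling idea in the abstract can be sketched minimally: the policy (actor) and the value function (critic) keep entirely separate parameters, and both are updated from the same advantage estimate. The linear models, toy task, and all names below are illustrative assumptions for exposition, not the paper's DA2C implementation, which uses deep networks.

```python
import math
import random

class DecoupledA2C:
    """Advantage actor-critic with fully separate actor and critic parameters.

    A minimal linear-softmax sketch: no weights are shared between the
    policy and value function, mirroring the decoupling idea.
    """

    def __init__(self, n_features, n_actions, lr=0.1, gamma=0.9):
        self.w_pi = [[0.0] * n_features for _ in range(n_actions)]  # actor
        self.w_v = [0.0] * n_features                               # critic
        self.lr, self.gamma = lr, gamma

    def value(self, s):
        return sum(w * x for w, x in zip(self.w_v, s))

    def policy(self, s):
        logits = [sum(w * x for w, x in zip(row, s)) for row in self.w_pi]
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        return [e / z for e in exps]

    def act(self, s, rng):
        probs = self.policy(s)
        r, acc = rng.random(), 0.0
        for a, p in enumerate(probs):
            acc += p
            if r <= acc:
                return a
        return len(probs) - 1

    def update(self, s, a, reward, s_next, done):
        # Advantage from the critic's TD error: A = r + gamma*V(s') - V(s).
        target = reward + (0.0 if done else self.gamma * self.value(s_next))
        adv = target - self.value(s)
        # Critic update: move V(s) toward the TD target.
        for i, x in enumerate(s):
            self.w_v[i] += self.lr * adv * x
        # Actor update: policy gradient, where for a linear-softmax policy
        # grad log pi(a|s) w.r.t. row a' is (1[a'=a] - pi(a'|s)) * s.
        probs = self.policy(s)
        for a2 in range(len(self.w_pi)):
            coef = (1.0 if a2 == a else 0.0) - probs[a2]
            for i, x in enumerate(s):
                self.w_pi[a2][i] += self.lr * adv * coef * x
```

On a toy one-step task where action 0 always pays reward 1 and action 1 pays 0, the agent's policy concentrates on action 0 while the critic's value estimate tracks the expected reward, with neither update interfering with the other's parameters.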