
CLC number: TP181

On-line Access: 2023-06-21

Received: 2022-12-27

Revision Accepted: 2023-09-21

Crosschecked: 2023-04-09


 ORCID:

Luolin XIONG

https://orcid.org/0009-0001-0142-7933

Yang TANG

https://orcid.org/0000-0002-2750-8029

Feng QIAN

https://orcid.org/0000-0003-2781-332X


Frontiers of Information Technology & Electronic Engineering  2023 Vol.24 No.9 P.1261-1272

https://doi.org/10.1631/FITEE.2200667


A home energy management approach using decoupling value and policy in reinforcement learning


Author(s):  Luolin XIONG, Yang TANG, Chensheng LIU, Shuai MAO, Ke MENG, Zhaoyang DONG, Feng QIAN

Affiliation(s):  Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai 200237, China

Corresponding email(s):   tangtany@gmail.com, fqian@ecust.edu.cn

Key Words:  Home energy system, Electric vehicle, Reinforcement learning, Generalization


Luolin XIONG, Yang TANG, Chensheng LIU, Shuai MAO, Ke MENG, Zhaoyang DONG, Feng QIAN. A home energy management approach using decoupling value and policy in reinforcement learning[J]. Frontiers of Information Technology & Electronic Engineering, 2023, 24(9): 1261-1272.

@article{xiong2023home,
title="A home energy management approach using decoupling value and policy in reinforcement learning",
author="Luolin XIONG, Yang TANG, Chensheng LIU, Shuai MAO, Ke MENG, Zhaoyang DONG, Feng QIAN",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="24",
number="9",
pages="1261-1272",
year="2023",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2200667"
}

%0 Journal Article
%T A home energy management approach using decoupling value and policy in reinforcement learning
%A Luolin XIONG
%A Yang TANG
%A Chensheng LIU
%A Shuai MAO
%A Ke MENG
%A Zhaoyang DONG
%A Feng QIAN
%J Frontiers of Information Technology & Electronic Engineering
%V 24
%N 9
%P 1261-1272
%@ 2095-9184
%D 2023
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2200667

TY - JOUR
T1 - A home energy management approach using decoupling value and policy in reinforcement learning
A1 - Luolin XIONG
A1 - Yang TANG
A1 - Chensheng LIU
A1 - Shuai MAO
A1 - Ke MENG
A1 - Zhaoyang DONG
A1 - Feng QIAN
JO - Frontiers of Information Technology & Electronic Engineering
VL - 24
IS - 9
SP - 1261
EP - 1272
SN - 2095-9184
Y1 - 2023
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2200667
ER -


Abstract: 
Given the popularity of electric vehicles and the flexibility of household appliances, it is feasible to dispatch energy in home energy systems under dynamic electricity prices to optimize the electricity cost while preserving residents' comfort. In this paper, a novel home energy management (HEM) approach is proposed based on a data-driven deep reinforcement learning method. First, to capture the multiple uncertain factors affecting the charging behavior of electric vehicles (EVs), an improved mathematical model integrating the driver's experience, unexpected events, and traffic conditions is introduced to describe the dynamic energy demand of EVs in home energy systems. Second, a decoupled advantage actor-critic (DA2C) algorithm is presented to enhance energy optimization performance by alleviating the overfitting problem caused by sharing the policy and value networks. Furthermore, the separate networks for the policy and value functions ensure the generalization of the proposed method to unseen scenarios. Finally, comprehensive experiments compare the proposed approach with existing methods, and the results show that it optimizes the electricity cost while accounting for residential comfort in different scenarios.
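The abstract's central mechanism, an advantage actor-critic whose policy and value functions are held in fully separate parameter sets, can be sketched compactly. The following is a minimal, generic illustration and not the authors' DA2C implementation: the "networks" are linear, the environment is a toy one-state task with a single rewarded action, and all names, sizes, and learning rates are illustrative assumptions.

```python
import math
import random

random.seed(0)

STATE_DIM, N_ACTIONS = 4, 3

# Decoupled parameters: the policy (actor) and value (critic) share no
# weights, mirroring the idea of separate networks to curb overfitting.
theta_pi = [[0.0] * N_ACTIONS for _ in range(STATE_DIM)]  # policy weights
theta_v = [0.0] * STATE_DIM                               # value weights

def policy(s):
    """Softmax policy over actions from a linear scoring of the state."""
    logits = [sum(s[i] * theta_pi[i][a] for i in range(STATE_DIM))
              for a in range(N_ACTIONS)]
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def value(s):
    """Linear state-value estimate V(s)."""
    return sum(s[i] * theta_v[i] for i in range(STATE_DIM))

def a2c_update(s, a, reward, lr=0.05):
    """One advantage actor-critic step with decoupled actor/critic weights."""
    adv = reward - value(s)          # advantage estimate A = r - V(s)
    probs = policy(s)
    for i in range(STATE_DIM):
        for j in range(N_ACTIONS):
            # d log pi(a|s) / d theta_pi[i][j] = s[i] * (1{j==a} - probs[j])
            grad = s[i] * ((1.0 if j == a else 0.0) - probs[j])
            theta_pi[i][j] += lr * adv * grad   # actor: ascend A * grad log pi
        theta_v[i] += lr * adv * s[i]           # critic: regress V toward r

# Toy environment: action 0 always yields reward 1, the others yield 0.
s = [1.0 / STATE_DIM] * STATE_DIM
for _ in range(500):
    a = random.choices(range(N_ACTIONS), weights=policy(s))[0]
    a2c_update(s, a, 1.0 if a == 0 else 0.0)

print(policy(s)[0] > 0.5)  # the rewarded action ends up preferred
```

Because the two gradient updates touch disjoint parameters, a poor value fit cannot distort the policy's features, which is the intuition behind decoupling for generalization.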

Author affiliations:
1. Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai 200237, China
2. School of Electrical Engineering, Nantong University, Nantong 226019, China
3. School of Electrical Engineering and Telecommunications, University of New South Wales, NSW 2052, Australia
4. School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798



