On-line Access: 2021-03-05

Received: 2020-08-31

Revision Accepted: 2021-01-10

Frontiers of Information Technology & Electronic Engineering

http://doi.org/10.1631/FITEE.2000446


Minimax Q-learning design for H∞ control of linear discrete-time systems


Author(s):  Xinxing LI, Lele XI, Wenzhong ZHA, Zhihong PENG

Affiliation(s):  Information Science Academy, China Electronics Technology Group Corporation, Beijing 100086, China; …

Corresponding email(s):   lixinxing_1006@163.com, xilele.bit@gmail.com, zhawenzhong@126.com, peng@bit.edu.cn

Key Words:  H∞ control, Zero-sum dynamic game, Reinforcement learning, Adaptive dynamic programming, Minimax Q-learning, Policy iteration


Xinxing LI, Lele XI, Wenzhong ZHA, Zhihong PENG. Minimax Q-learning design for H∞ control of linear discrete-time systems[J]. Frontiers of Information Technology & Electronic Engineering, 2021. https://doi.org/10.1631/FITEE.2000446

@article{FITEE2000446,
title="Minimax Q-learning design for H∞ control of linear discrete-time systems",
author="Xinxing LI, Lele XI, Wenzhong ZHA, Zhihong PENG",
journal="Frontiers of Information Technology & Electronic Engineering",
year="2021",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2000446"
}



Abstract: 
The H∞ control method is an effective approach for attenuating the effect of disturbances on practical systems, but it is difficult to obtain the H∞ controller due to the nonlinear Hamilton-Jacobi-Isaacs (HJI) equation, even for linear systems. This study deals with the design of an H∞ controller for linear discrete-time systems. To solve the related game algebraic Riccati equation (GARE), a novel model-free minimax Q-learning method is developed on the basis of an offline policy iteration (PI) algorithm, which is shown to be Newton's method for solving the GARE. The proposed minimax Q-learning method, which employs off-policy reinforcement learning (RL), learns the optimal control policies for the controller and the disturbance online, using only the state samples generated by the implemented behavior policies. Unlike existing Q-learning methods, a novel gradient-based policy improvement scheme is proposed. We prove that the minimax Q-learning method converges to the saddle solution under initially admissible control policies and an appropriate positive learning rate, provided that certain persistence of excitation (PE) conditions are satisfied. In addition, the PE conditions can be easily met by choosing appropriate behavior policies containing certain excitation noises, without causing any excitation noise bias. In the simulation study, we apply the proposed minimax Q-learning method to design an H∞ load-frequency controller for an electrical power system generator that suffers from load disturbance, and the simulation results indicate that the obtained H∞ load-frequency controller has strong disturbance rejection performance.
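
The formulation behind the abstract can be made concrete. The following is the standard zero-sum quadratic dynamic game from which the GARE arises; the symbols A, B, E, Q, R, and γ are generic assumptions here, not notation taken from the paper:

\begin{aligned}
  &x_{k+1} = A x_k + B u_k + E w_k, \qquad
   J = \sum_{k=0}^{\infty} \left( x_k^\top Q x_k + u_k^\top R u_k
       - \gamma^2 w_k^\top w_k \right),\\
  &Q(x_k, u_k, w_k) = z_k^\top H z_k, \qquad
   z_k = \begin{bmatrix} x_k^\top & u_k^\top & w_k^\top \end{bmatrix}^\top,\\
  &H = \begin{bmatrix}
        Q + A^\top P A & A^\top P B & A^\top P E\\
        B^\top P A & R + B^\top P B & B^\top P E\\
        E^\top P A & E^\top P B & E^\top P E - \gamma^2 I
       \end{bmatrix},
\end{aligned}

where the controller u minimizes J, the disturbance w maximizes it, and P is the quadratic value-function kernel. Imposing the saddle-point condition \min_u \max_w Q(x,u,w) gives linear policies u = -Kx and w = -Lx; substituting them back into the Bellman equation turns the fixed-point condition on P into the GARE.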

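To make the PI step concrete, here is a minimal model-based sketch of an offline policy iteration of the kind the abstract describes as the basis of the Q-learning method (and identifies as Newton's method for the GARE). This is not the authors' model-free algorithm: it assumes A, B, and E are known, and all function and variable names are ours.

# A minimal model-based sketch (our own, not the paper's algorithm) of
# offline policy iteration for the discrete-time zero-sum game GARE.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def gare_policy_iteration(A, B, E, Q, R, gamma, K0, L0, iters=50, tol=1e-10):
    """u = -K x is the controller, w = -L x the worst-case disturbance.
    (K0, L0) must be admissible: A - B K0 - E L0 Schur stable."""
    K, L = K0, L0
    P_prev = None
    for _ in range(iters):
        # Policy evaluation: the value kernel P of the current policy pair
        # solves the closed-loop Lyapunov equation
        #   P = Acl' P Acl + Q + K' R K - gamma^2 L' L.
        Acl = A - B @ K - E @ L
        Qcl = Q + K.T @ R @ K - gamma**2 * (L.T @ L)
        P = solve_discrete_lyapunov(Acl.T, Qcl)
        # Policy improvement: saddle point of the quadratic Q-function,
        # computed from the blocks of its kernel H.
        Huu = R + B.T @ P @ B
        Hww = E.T @ P @ E - gamma**2 * np.eye(E.shape[1])
        Huw = B.T @ P @ E
        Hux = B.T @ P @ A
        Hwx = E.T @ P @ A
        M = np.block([[Huu, Huw], [Huw.T, Hww]])  # must be invertible
        G = np.linalg.solve(M, np.vstack([Hux, Hwx]))
        m = B.shape[1]
        K, L = G[:m, :], G[m:, :]
        if P_prev is not None and np.linalg.norm(P - P_prev) < tol:
            break
        P_prev = P
    return P, K, L  # P approximates the GARE solution

The paper's minimax Q-learning method removes the model dependence: the kernel H is estimated online from state samples generated by behavior policies, and the exact gain update above is replaced by a gradient-based policy improvement step, so A, B, and E never enter the computation.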

