Transactions of China Electrotechnical Society, 2024, Vol. 39, Issue (5): 1300-1312    DOI: 10.19595/j.cnki.1000-6753.tces.222195
Power System and Integrated Energy
Single/Multi Agent Simplified Deep Reinforcement Learning Based Volt-Var Control of Power System
Ma Qing, Deng Changhong
School of Electrical Engineering and Automation, Wuhan University, Wuhan 430072, China
Abstract: To quickly suppress the reactive power and voltage fluctuations caused by distributed energy resources connected to the grid, machine learning methods represented by reinforcement learning and imitation learning have gradually been applied to volt-var control. Although existing methods can solve the problem extremely quickly online, they still suffer from drawbacks such as slow offline training and insufficient generality that hinder their practical application. This paper first proposes a single-agent simplified reinforcement learning method suitable for the centralized control of transmission networks. Based on the "Actor-Critic" architecture, the method simplifies and improves reinforcement learning, retaining its advantages of requiring no labeled data and strong generality while eliminating the computational waste caused by the agent's random search in the early training stage, which greatly increases the training speed. A multi-agent simplified reinforcement learning method suitable for the decentralized, zero-communication control of distribution networks is then proposed; it generalizes the simplified reinforcement learning idea into a multi-agent version and uses imitation learning for initialization to inject the global optimization idea into each agent in advance, improving the local coordinated control among the reactive power devices. Finally, simulation results on a modified IEEE 118-bus case verify the correctness and speed of the proposed methods.
Keywords: volt-var control; centralized control; single-agent simplified reinforcement learning; decentralized control; multi-agent simplified reinforcement learning
Abstract: To quickly suppress the reactive power and voltage fluctuations caused by the random output variations of distributed energy resources, machine learning (ML) methods represented by deep reinforcement learning (DRL) and imitation learning (IL) have recently been applied to volt-var control (VVC), replacing traditional methods that require a large number of iterations. Although the ML methods in the existing literature can achieve rapid online VVC optimization, shortcomings such as slow offline training and insufficient generality still hinder their practical application.
Firstly, this paper proposes a single-agent simplified DRL (SASDRL) method suitable for the centralized control of transmission networks. Based on the classic Actor-Critic architecture and the fact that whether the Actor network can generate good control strategies depends heavily on whether the Critic network can evaluate them accurately, this method simplifies and improves the offline training process of DRL-based VVC. Its core ideas are the simplification of Critic network training and a change in the update mode of the Actor and Critic networks. The sequential decision problem used in traditional DRL-based VVC is simplified to a single-point decision problem, and the output of the Critic network is changed from the sequential action value to the reward corresponding to the current control strategy. In addition, the Critic network is trained in advance to accelerate the convergence of the Actor network, which eliminates the computational waste caused by the agent's random search in the early training stage and greatly improves the offline training speed, while retaining DRL's advantages of requiring no massive labeled data and of strong generality.
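As a rough illustration of the two-stage training idea described above, the following sketch pre-trains a Critic to predict the reward of a single-point control strategy and then updates the Actor against the frozen Critic. It is not the authors' implementation: the network sizes, the (state, action, reward) sample format, and the reward source (e.g., a power-flow evaluation) are illustrative assumptions.

```python
# Minimal sketch of the SASDRL idea, not the paper's code.
# STATE_DIM/ACTION_DIM and the sample source are hypothetical.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 236, 30      # illustrative sizes for a 118-bus case

# Critic predicts the reward r(s, a) of a single-point control strategy,
# instead of a sequential action value.
critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 256), nn.ReLU(),
                       nn.Linear(256, 1))
# Actor maps the system state to continuous VVC set-points.
actor = nn.Sequential(nn.Linear(STATE_DIM, 256), nn.ReLU(),
                      nn.Linear(256, ACTION_DIM), nn.Tanh())

def pretrain_critic(batches, epochs=50):
    """Stage 1: supervised fit of the Critic on (state, action, reward) batches."""
    opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
    for _ in range(epochs):
        for s, a, r in batches:                 # r: reward from a power-flow evaluation
            loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=-1)), r)
            opt.zero_grad(); loss.backward(); opt.step()

def train_actor(state_batches, epochs=50):
    """Stage 2: update the Actor to maximise the pre-trained Critic's predicted reward."""
    for p in critic.parameters():               # keep the pre-trained Critic fixed
        p.requires_grad_(False)
    opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
    for _ in range(epochs):
        for s in state_batches:
            loss = -critic(torch.cat([s, actor(s)], dim=-1)).mean()
            opt.zero_grad(); loss.backward(); opt.step()
```

In such a scheme the costly random exploration of classical DRL is replaced by a supervised fit of the Critic, which is consistent with the result reported below that most of the offline time goes into generating expert samples rather than into network updates.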
Secondly, a multi-agent simplified DRL (MASDRL) method suitable for the decentralized, zero-communication control of active distribution networks is proposed. This method generalizes the core idea of SASDRL into a multi-agent version and, on the basis of training a unified Critic network in advance, further accelerates the convergence of each agent's Actor network. Each agent corresponds to a different VVC device in the system. During online application, each agent independently generates its control strategy through its own Actor network, using only the local information of the node to which its VVC device is connected. In addition, IL is adopted for initialization to inject the global optimization idea into each agent in advance, which improves the local collaborative control among the various VVC devices.
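The sketch below is again only an assumption-laden illustration rather than the paper's code. It shows how per-device Actors could be initialized by imitation learning on expert set-points and then fine-tuned jointly against a unified pre-trained Critic, while online each Actor uses only local measurements. The device count, observation sizes, and function names are hypothetical.

```python
# Hedged sketch of MASDRL with IL initialization, not the authors' implementation.
import torch
import torch.nn as nn

N_AGENTS, LOCAL_OBS_DIM, GLOBAL_STATE_DIM = 4, 6, 236   # illustrative sizes

# One decentralized Actor per VVC device, fed only with local node measurements.
actors = [nn.Sequential(nn.Linear(LOCAL_OBS_DIM, 64), nn.ReLU(),
                        nn.Linear(64, 1), nn.Tanh()) for _ in range(N_AGENTS)]
# A unified Critic scores the joint strategy against the global system state.
critic = nn.Sequential(nn.Linear(GLOBAL_STATE_DIM + N_AGENTS, 128), nn.ReLU(),
                       nn.Linear(128, 1))

def imitation_init(agent_id, local_obs, expert_setpoints, epochs=20):
    """Step 1 (IL): fit an Actor to expert set-points (e.g., from a centralized OPF)."""
    opt = torch.optim.Adam(actors[agent_id].parameters(), lr=1e-3)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(actors[agent_id](local_obs), expert_setpoints)
        opt.zero_grad(); loss.backward(); opt.step()

def finetune_actors(global_state, local_obs_list, epochs=20):
    """Step 2: jointly fine-tune all Actors against the pre-trained unified Critic."""
    for p in critic.parameters():               # Critic was trained in advance; keep it fixed
        p.requires_grad_(False)
    params = [p for a in actors for p in a.parameters()]
    opt = torch.optim.Adam(params, lr=1e-4)
    for _ in range(epochs):
        joint = torch.cat([actors[i](local_obs_list[i]) for i in range(N_AGENTS)], dim=-1)
        loss = -critic(torch.cat([global_state, joint], dim=-1)).mean()
        opt.zero_grad(); loss.backward(); opt.step()

# Online, zero-communication use: each device i applies actors[i](local_obs_i) independently.
```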
Simulation results on the improved IEEE 118-bus system show that SASDRL and MASDRL achieve the best VVC results among all compared methods. In terms of offline training speed, SASDRL consumes the least training time; it is 4.47 times faster than traditional DRL and 50.76 times faster than IL. 87.1% of SASDRL's training time is spent on generating the expert samples required for the supervised training of the Critic network, while only 12.9% is consumed by the training of the Actor and Critic networks. MASDRL reduces the offline training time by 82.77% compared with traditional multi-agent DRL (MADRL).
The following conclusions can be drawn from the simulation analysis: (1) Compared with traditional mathematical methods and existing ML methods, SASDRL obtains excellent control results similar to those of the mathematical methods while greatly accelerating the offline training of DRL-based VVC. (2) Compared with traditional MADRL, by inheriting the core ideas of SASDRL and introducing IL into the initialization of the Actor networks, the proposed MASDRL+IL significantly improves both the local collaborative control among the various VVC devices and the offline training speed.
Key words: Volt-var control; centralized control; single-agent simplified deep reinforcement learning; decentralized control; multi-agent simplified deep reinforcement learning
Received: 2022-11-22
PACS: TM76  
Funding: National Key Research and Development Program of China (2017YFB0903705)
Corresponding author: Deng Changhong, female, born in 1963, Professor and doctoral supervisor. Research interests: power system security and stability analysis, optimal control of renewable energy integration into the grid. E-mail: dengch@whu.edu.cn
About the author: Ma Qing, male, born in 1990, Ph.D. candidate. Research interest: volt-var control of power systems. E-mail: 747942466@qq.com
Cite this article:
Ma Qing, Deng Changhong. Single/Multi Agent Simplified Deep Reinforcement Learning Based Volt-Var Control of Power System. Transactions of China Electrotechnical Society, 2024, 39(5): 1300-1312.
Link to this article:
https://dgjsxb.ces-transaction.com/CN/10.19595/j.cnki.1000-6753.tces.222195          https://dgjsxb.ces-transaction.com/CN/Y2024/V39/I5/1300