Operation Scheduling of MGs Based on Deep Reinforcement Learning Algorithm
In this paper, a Deep Reinforcement Learning (DRL) based approach is proposed for the operation scheduling of Microgrids (MGs) that include Distributed Energy Resources (DERs) and Energy Storage Systems (ESSs). Due to the dynamic characteristics of the problem, it is first formulated as a Markov Decision Process (MDP). Next, the Deep Deterministic Policy Gradient (DDPG) algorithm is applied to minimize total operational costs by learning the optimal strategy for operation scheduling of MG systems. This model-free algorithm deploys an actor-critic architecture that not only models continuous state and action spaces properly but also overcomes the curse of dimensionality. To evaluate the efficiency of the proposed algorithm, its results are compared with an analytical method and a Q-learning-based algorithm, demonstrating the advantages of the DDPG method in terms of convergence, running time, and total costs.
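The MDP formulation described above can be illustrated with a minimal one-step microgrid environment sketch. All names, cost values, and efficiency parameters below are illustrative assumptions, not taken from the paper: the state includes the ESS state of charge, the continuous action is the battery charge/discharge power, and the reward is the negative operational cost that the DRL agent maximizes.

```python
class MicrogridMDP:
    """Hypothetical one-step microgrid MDP sketch (parameter names and
    values are assumptions for illustration, not the paper's model)."""

    def __init__(self, capacity_kwh=100.0, soc=0.5, eta=0.95):
        self.capacity = capacity_kwh  # ESS capacity (assumed)
        self.soc = soc                # state of charge, fraction of capacity
        self.eta = eta                # one-way charging efficiency (assumed)

    def step(self, load_kw, pv_kw, price, batt_kw):
        """Advance one (assumed) 1-hour time step.

        batt_kw > 0 charges the ESS, batt_kw < 0 discharges it.
        Returns (new state of charge, reward = -operational cost).
        """
        energy_kwh = batt_kw * 1.0  # 1-hour step assumption
        if energy_kwh >= 0:
            self.soc += self.eta * energy_kwh / self.capacity
        else:
            self.soc += energy_kwh / (self.eta * self.capacity)
        self.soc = min(max(self.soc, 0.0), 1.0)  # physical SOC limits

        grid_kw = load_kw - pv_kw + batt_kw  # net power imported from grid
        cost = max(grid_kw, 0.0) * price     # pay only for imported energy
        return self.soc, -cost               # DRL agent maximizes -cost
```

For example, discharging 10 kW while the load exceeds PV output reduces grid imports, so the cost (and hence the negative reward) shrinks; a DDPG actor would learn to emit such discharge actions when electricity prices are high.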