
Abstract

The uncertainty of distributed renewable energy poses significant challenges to the economic operation of microgrids. Conventional online optimization approaches require a forecast model, yet accurately forecasting renewable power generation remains a difficult task. To achieve online scheduling of a residential microgrid (RM) without a forecast model for the future PV/wind and load power sequences, this paper investigates a reinforcement learning (RL) approach to this challenge. Specifically, building on a recent development in model-based reinforcement learning, MuZero, we investigate its application to the RM scheduling problem. To accommodate the characteristics of the RM scheduling application, an optimization framework that combines the model-based RL agent with mathematical optimization techniques is designed, and long short-term memory (LSTM) units are adopted to extract features from the past renewable generation and load sequences. At each time step, the optimal decision is obtained by conducting Monte-Carlo tree search (MCTS) with a learned model and solving an optimal power flow sub-problem. In this way, the approach can sequentially make operational decisions online without relying on a forecast model. Numerical simulation results demonstrate the effectiveness of the proposed algorithm.
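
To make the per-step decision loop described in the abstract more concrete, below is a minimal, hypothetical Python sketch: an LSTM encodes the recent PV/wind/load history into a latent state, a small UCT-style tree search with a placeholder learned dynamics/value model scores a discrete battery command, and a toy "optimal power flow" step fills in the continuous dispatch. All component names, shapes, the action set, and the cost logic are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of one online scheduling step (not the authors' code).
import math, random
import torch
import torch.nn as nn

HISTORY_LEN, N_FEATURES, HIDDEN = 24, 3, 32   # past PV, wind, load samples (assumed sizes)
ACTIONS = [-1.0, 0.0, 1.0]                    # battery: discharge / idle / charge (kW, toy)

class Encoder(nn.Module):
    """LSTM feature extractor over the past renewable-generation/load sequence."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATURES, HIDDEN, batch_first=True)
    def forward(self, history):                   # history: (1, HISTORY_LEN, N_FEATURES)
        _, (h, _) = self.lstm(history)
        return h[-1]                               # latent state, shape (1, HIDDEN)

class LearnedModel(nn.Module):
    """Stand-in for MuZero-style dynamics + value heads (untrained placeholder)."""
    def __init__(self):
        super().__init__()
        self.dynamics = nn.Linear(HIDDEN + 1, HIDDEN)
        self.value = nn.Linear(HIDDEN, 1)
    def step(self, latent, action):
        a = torch.tensor([[action]], dtype=torch.float32)
        nxt = torch.tanh(self.dynamics(torch.cat([latent, a], dim=-1)))
        return nxt, self.value(nxt).item()

def mcts(model, root_latent, n_sim=50, c_uct=1.4, depth=3):
    """Tiny UCT search: pick the root action by UCB, then accumulate the model's
    value estimates along a short random rollout in latent space."""
    N = {a: 0 for a in ACTIONS}        # visit counts
    Q = {a: 0.0 for a in ACTIONS}      # running mean returns
    for _ in range(n_sim):
        total = sum(N.values()) + 1
        a0 = max(ACTIONS, key=lambda a: Q[a] + c_uct * math.sqrt(math.log(total) / (N[a] + 1)))
        latent, ret, action = root_latent, 0.0, a0
        for _ in range(depth):
            latent, v = model.step(latent, action)
            ret += v
            action = random.choice(ACTIONS)
        N[a0] += 1
        Q[a0] += (ret - Q[a0]) / N[a0]
    return max(ACTIONS, key=lambda a: Q[a])

def opf_subproblem(battery_kw, pv_kw, load_kw):
    """Toy continuous dispatch: grid import covers the residual demand."""
    return max(load_kw - pv_kw - battery_kw, 0.0)

# One online decision step (random data stands in for the measured history).
history = torch.randn(1, HISTORY_LEN, N_FEATURES)
encoder, model = Encoder(), LearnedModel()
with torch.no_grad():
    latent = encoder(history)
    battery_cmd = mcts(model, latent)
grid_import = opf_subproblem(battery_cmd, pv_kw=2.5, load_kw=4.0)
print(f"battery command: {battery_cmd:+.1f} kW, grid import: {grid_import:.1f} kW")
```

In the paper's framework the discrete decision comes from MCTS with the learned model while the continuous power flow is resolved by a mathematical optimization sub-problem; here that split is mirrored only schematically, with an untrained model and a single-line dispatch rule in place of a real OPF solver.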
