PhysQ: A Physics Informed Reinforcement Learning Framework for Building Control

arXiv:2211.11830
Published Nov 21, 2022 in eess.SY and cs.SY

Abstract

Large-scale integration of intermittent renewable energy sources calls for substantial demand-side flexibility. Given that the built environment accounts for approximately 40% of total energy consumption in the EU, unlocking its flexibility is a key step in the energy transition. This paper focuses on energy flexibility in residential buildings, leveraging their intrinsic thermal mass. Building on recent developments in data-driven control, we propose PhysQ, a physics-informed reinforcement learning framework for building control that forms a step toward bridging the gap between conventional model-based control and data-intensive control based on reinforcement learning. Our experiments show that PhysQ learns high-quality control policies that outperform both a business-as-usual controller and a rudimentary model predictive controller, with cost savings of about 9% over the business-as-usual controller. Further, we show that PhysQ efficiently leverages prior physics knowledge to learn such policies from fewer training samples than conventional reinforcement learning approaches, making it a scalable alternative for residential buildings. Additionally, the PhysQ control policy uses building state representations that are intuitive and grounded in conventional building models, which makes the learnt policy easier to interpret than those of other data-driven controllers.
