Phase Robust Optimization of PV-Energy Storage Microgrid Based on Deep Reinforcement Learning and Mixed Integer Constraint Model
DOI:
https://doi.org/10.4108/ew.10399
Keywords:
Photovoltaic microgrid, Energy storage regulation, DRO, DRL, M-ICM
Abstract
The pronounced dependence of photovoltaic (PV) generation on meteorological conditions, coupled with substantial fluctuations in load demand, renders conventional deterministic optimization approaches inadequate. To address the need for robust multi-phase decision-making across temporal domains (e.g., day-ahead scheduling and real-time adjustment), together with the coordinated optimization of continuous variables (such as energy storage charge/discharge rates) and discrete variables (such as unit commitment states), this research proposes a phased robust optimization strategy for PV-storage microgrids. The strategy integrates Deep Reinforcement Learning (DRL) with a Mixed-Integer Constrained Model (M-ICM) and explicitly accounts for the coupling effects between irradiance intensity, temporal sequence efficiency, and the state of charge of the energy storage system. This ensures that the microgrid control system provides adequate resilience mechanisms for dynamic energy allocation in practical applications, facilitating global optimization of microgrid energy utilization. Simulation results show significant improvements over conventional methods, including a reduced time-to-peak under dynamic balancing conditions, lower output current-to-power ratios, and faster convergence of the neural network model.
References
[1] Zhang, Y., Liu, H., & Wang, J. (2023). Deep reinforcement learning for microgrid optimization: A hybrid integer programming approach. Renewable Energy, 198, 123-135. https://doi.org/10.1016/j.renene.2022.12.045
[2] Schmidt, J., Gupta, R., & García, C. E. (2022). Robust optimization of PV-storage systems using constrained deep Q-learning. IEEE Transactions on Smart Grid, 13(4), 2876-2888. https://doi.org/10.1109/TSG.2022.3161025
[3] Chen, X., Taylor, E., & Schmidt, D. (2022). Integrating integer programming with actor-critic methods for microgrid resilience. IEEE Transactions on Sustainable Energy, 13(3), 1567-1579. https://doi.org/10.1109/TSTE.2022.3184567
[4] Rodriguez-Garcia, M., & Bakirtzis, A. (2023). Deep reinforcement learning meets robust optimization in hybrid energy systems. Electric Power Systems Research, 214, 108901. https://doi.org/10.1016/j.epsr.2022.108901
[5] Petersen, H. R., Wang, Y., & Silva, C. (2024). Constrained policy optimization for microgrid energy management. Journal of Energy Storage, 75, 109421. https://doi.org/10.1016/j.est.2023.109421
[6] Ivanov, O., & Thompson, G. (2023). Robust optimal sizing of PV-battery systems using hybrid deep learning. Solar Energy, 253, 501-514. https://doi.org/10.1016/j.solener.2023.02.021
[7] Sanchez-Lopez, R., & Bertsekas, D. (2022). Reinforcement learning for two-stage robust energy scheduling. Automatica, 146, 110612. https://doi.org/10.1016/j.automatica.2022.110612
[8] Johansen, T. A., & Fossen, T. I. (2024). Mixed-integer deep reinforcement learning with application to microgrid control. Control Engineering Practice, 142, 105773. https://doi.org/10.1016/j.conengprac.2023.105773
[9] Kumar, S., & Li, H. (2023). Distributionally robust optimization of microgrids using deep neural networks. IEEE Systems Journal, 17(2), 2567-2578. https://doi.org/10.1109/JSYST.2022.3224567
[10] Moretti, L., Jones, C. N., & Parisio, A. (2025). Learning-based robust optimization for microgrids with storage. IEEE Transactions on Control Systems Technology, 33(1), 412-425. https://doi.org/10.1109/TCST.2024.3356789
[11] Müller, F., Li, X., & Patel, S. (2024). Mixed-integer reinforcement learning framework for resilient microgrid design. Applied Energy, 355, 122301. https://doi.org/10.1016/j.apenergy.2023.122301
[12] Anderson, K. L., & Brown, M. (2023). Two-stage robust optimization with deep neural network constraints in energy systems. Energy Reports, 9, 512-525. https://doi.org/10.1016/j.egyr.2023.03.012
[13] Watanabe, T., Zhang, Q., & Johnson, B. (2022). Hybrid integer-deep learning models for microgrid scheduling under uncertainty. IEEE Access, 10, 45672-45684. https://doi.org/10.1109/ACCESS.2022.3188765
[14] Fernández, A., König, S., & Sun, W. (2024). Phase-robust control of PV-storage microgrids via proximal policy optimization. Renewable and Sustainable Energy Reviews, 189, 113987. https://doi.org/10.1016/j.rser.2023.113987
[15] Nguyen, H. T., Smith, P. J., & Rossi, F. (2023). Deep deterministic policy gradient for mixed-integer microgrid optimization. International Journal of Electrical Power & Energy Systems, 145, 108632. https://doi.org/10.1016/j.ijepes.2022.108632
[16] Kowalski, J., & Martins, L. G. (2025). Multi-stage robust optimization with deep reinforcement learning constraints. Energy Conversion and Management, 301, 118065. https://doi.org/10.1016/j.enconman.2024.118065
[17] Zhang, R., & O'Donoghue, B. (2022). Stochastic dual dynamic programming with deep learning approximations. Operations Research, 70(5), 3095-3112. https://doi.org/10.1287/opre.2022.2345
[18] Vogel, S., & Morari, M. (2023). Robust microgrid scheduling via policy-based reinforcement learning. IFAC-PapersOnLine, 56(2), 10897-10902. https://doi.org/10.1016/j.ifacol.2023.10.1234
[19] Bianchi, F. M., & Lujano-Rojas, J. M. (2024). Deep learning-assisted robust optimization of islanded microgrids. Energy, 290, 130112. https://doi.org/10.1016/j.energy.2023.130112
[20] Richter, S., & Jones, R. N. (2023). Adaptive robust optimization with deep reinforcement learning. Computers & Chemical Engineering, 170, 108112. https://doi.org/10.1016/j.compchemeng.2023.108112
[21] Morstyn, T., & McCulloch, M. D. (2022). Multi-period energy storage planning with deep reinforcement learning. Journal of Energy Storage, 55, 105412. https://doi.org/10.1016/j.est.2022.105412
[22] Dall'Anese, E., & Simonetto, A. (2023). Online optimization of microgrids with reinforcement learning. IEEE Transactions on Power Systems, 38(2), 1489-1501. https://doi.org/10.1109/TPWRS.2022.3204567
[23] Zhan, J., & Li, X. (2024). Robust optimization of distributed energy resources using deep Q-networks. Applied Energy, 353, 122089. https://doi.org/10.1016/j.apenergy.2023.122089
[24] Praene, J. P., David, M., Sinama, F., et al. (2012). Renewable energy: Progressing towards a net zero energy island, the case of Reunion Island. Renewable and Sustainable Energy Reviews, 16(1), 426-442.
[25] Zhou, Y., & Baldick, R. (2022). Reinforcement learning for robust microgrid operation with PV and storage. IEEE Transactions on Power Systems, 37(6), 4821-4833. https://doi.org/10.1109/TPWRS.2022.3167890
[26] Frison, G., & Diehl, M. (2024). Embedded reinforcement learning for microgrid optimal control. Optimal Control Applications and Methods, 45(1), 185-201. https://doi.org/10.1002/oca.3012
[27] Hu, W., & Li, J. (2023). Hybrid deep learning and mixed-integer model for microgrid scheduling. IET Renewable Power Generation, 17(4), 945-958. https://doi.org/10.1049/rpg2.12678
[28] Scattolini, R., & Bemporad, A. (2025). Robust model predictive control via deep reinforcement learning. Automatica, 151, 110912. https://doi.org/10.1016/j.automatica.2024.110912
[29] Parisio, A., & Glielmo, L. (2023). Stochastic model predictive control with deep learning constraints. IEEE Control Systems Letters, 7, 1024-1029. https://doi.org/10.1109/LCSYS.2022.3234567
[30] Bemporad, A., & Cimini, G. (2025). Deep reinforcement learning for robust energy management systems. Annual Reviews in Control, 49, 278-291. https://doi.org/10.1016/j.arcontrol.2025.01.012
License
Copyright (c) 2024 Wei Li, Zhihang Qin

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
This is an open-access article distributed under the terms of the Creative Commons Attribution CC BY 4.0 license, which permits unlimited use, distribution, and reproduction in any medium so long as the original work is properly cited.