Use of the Augmented Random Search Algorithm for Transmission Line Control in Smart Grids: A Comparative Study with Neural Network-Based Algorithms
Abstract
In response to climate change, countries are diversifying their energy sources to reduce carbon emissions and adopt cleaner alternatives. However, integrating these new energy sources into existing power grids introduces operational challenges, such as increased intermittency. Prior studies have shown that active control of the power grid's topology can address these issues. This research aims to demonstrate the effectiveness of the Augmented Random Search (ARS) algorithm as a faster alternative to neural network-based reinforcement learning algorithms. ARS achieves results comparable to those of neural networks in significantly less time, enabling a broader range of experiments and reducing the computational cost of training.
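To make the comparison concrete, the sketch below shows the basic ARS update loop (the V1 variant introduced by Mania et al., 2018): a linear policy is improved by evaluating pairs of random perturbations and taking a finite-difference step scaled by the reward standard deviation. This is a minimal illustration only; the toy environment, function names (`rollout`, `ars`, `toy_step`), and hyperparameters are assumptions for demonstration, not the grid simulator or settings used in this study.

```python
import numpy as np

def rollout(step, reset, M, horizon=50):
    """Return the total reward of one episode under the linear policy a = M @ s."""
    s, total = reset(), 0.0
    for _ in range(horizon):
        s, r, done = step(s, M @ s)
        total += r
        if done:
            break
    return total

def ars(step, reset, n_obs, n_act, iters=100, n_dirs=8, top_b=4,
        alpha=0.02, nu=0.03, seed=0):
    """Basic ARS loop (V1 variant, without observation normalization)."""
    rng = np.random.default_rng(seed)
    M = np.zeros((n_act, n_obs))  # weights of the linear policy
    for _ in range(iters):
        deltas = rng.standard_normal((n_dirs, n_act, n_obs))
        # Evaluate each random direction with +nu and -nu perturbations.
        scored = [(rollout(step, reset, M + nu * d),
                   rollout(step, reset, M - nu * d), d) for d in deltas]
        # Keep only the top_b directions by best-of-pair reward.
        scored.sort(key=lambda t: max(t[0], t[1]), reverse=True)
        used = scored[:top_b]
        sigma = np.std([r for rp, rm, _ in used for r in (rp, rm)]) + 1e-8
        # Finite-difference update, scaled by the reward standard deviation.
        M += alpha / (top_b * sigma) * sum((rp - rm) * d for rp, rm, d in used)
    return M

# Hypothetical stand-in task: regulate a 4-dimensional linear system toward
# zero with 2 controls; reward is the negative squared state norm.
A = 0.95 * np.eye(4)
B = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.0], [0.0, 0.5]])

def toy_step(s, a):
    s_next = A @ s + 0.1 * (B @ a)
    return s_next, -float(s_next @ s_next), False

policy = ars(toy_step, lambda: np.ones(4), n_obs=4, n_act=2)
print("trained policy:\n", policy)
```

Because the policy is a plain linear map trained by derivative-free perturbations, each update costs only a handful of episode rollouts and a matrix addition, which is the source of the training-time advantage over neural network-based reinforcement learning claimed above.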