Improving the PID Controllers of Roll-to-Roll Processes using Reinforcement Learning
Abstract
Approximately 90% of control loops in industrial systems use Proportional-Integral-Derivative (PID) controllers, which are essential for ensuring product quality in roll-to-roll (R2R) manufacturing processes. These processes, which involve the continuous handling of materials on rolls, require precise control, especially of substrate tension, since this variable is directly tied to the quality of the final product. Traditional tuning of PID parameters can be complex, as it requires a mathematical formulation of the full process dynamics, making the gains laborious to define by hand. Advances in computational techniques, however, have enabled automated tuning methods for PID controllers. This article investigates the application of reinforcement learning, an artificial intelligence technique, to optimize the tuning of PID controllers. The proposed methodology has two stages: first, we create a simulation environment to prevent damage to real R2R machines; then, we use the CARLA (Continuous Action Reinforcement Learning Automata) algorithm to adjust the PID parameters. The results indicate a 65.1% reduction in costs compared to traditional empirical tuning methods, demonstrating a significant improvement in process efficiency.
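To make the proposed pipeline concrete, the sketch below pairs a discrete PID loop with a simplified CARLA-style learning automaton in the spirit of Howell et al. (1997; 2000): a probability density over each gain is sampled, an episode is simulated, and the density is reinforced around gains that lowered the cost. It is an illustrative approximation only, not the authors' implementation; the first-order plant, gain ranges, IAE cost, and learning constants are assumptions chosen for brevity.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): a discrete PID loop on a
# toy first-order plant, tuned by a simplified CARLA-style learning automaton.
# Plant model, gain ranges, cost function, and learning constants are assumed.

def run_episode(kp, ki, kd, setpoint=1.0, dt=0.01, steps=500):
    """Simulate one closed-loop episode and return the IAE cost."""
    y, integral, prev_error, cost = 0.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative  # PID control law
        prev_error = error
        y += dt * (-y + u)                # toy plant: dy/dt = -y + u
        cost += abs(error) * dt           # integral of absolute error (IAE)
        if abs(y) > 1e3:                  # unstable gain sample: penalty cost
            return 1e3
    return cost

class CarlaAutomaton:
    """One automaton per gain: a discretised probability density over the
    gain's range, reinforced around actions that reduced the cost."""
    def __init__(self, low, high, bins=100):
        self.grid = np.linspace(low, high, bins)
        self.pdf = np.full(bins, 1.0 / bins)

    def sample(self):
        return float(np.random.choice(self.grid, p=self.pdf))

    def reinforce(self, action, beta, width=0.05, height=0.3):
        # CARLA-style update: add a Gaussian bump centred on the sampled
        # action, scaled by the reinforcement signal, then renormalise.
        sigma = width * (self.grid[-1] - self.grid[0])
        bump = np.exp(-0.5 * ((self.grid - action) / sigma) ** 2)
        self.pdf = self.pdf + beta * height * bump / len(self.grid)
        self.pdf /= self.pdf.sum()

# One automaton per PID gain; the search ranges below are placeholders.
automata = {"kp": CarlaAutomaton(0.0, 20.0),
            "ki": CarlaAutomaton(0.0, 10.0),
            "kd": CarlaAutomaton(0.0, 1.0)}

history = []
for episode in range(300):
    gains = {name: a.sample() for name, a in automata.items()}
    cost = run_episode(**gains)
    history.append(cost)
    # Reinforcement signal: reward episodes cheaper than the recent average.
    avg = float(np.mean(history[-20:]))
    beta = max(0.0, min(1.0, (avg - cost) / (avg + 1e-9)))
    for name, a in automata.items():
        a.reinforce(gains[name], beta)

# Take the mode of each learned density as the tuned gain.
best = {name: float(a.grid[np.argmax(a.pdf)]) for name, a in automata.items()}
print("Tuned gains:", best, "-> cost:", run_episode(**best))
```

In the paper's setting, the simulated plant would instead model the R2R substrate-tension dynamics described in the abstract, with the cost reflecting tension-tracking error.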
Keywords:
R2R, PID, CARLA, Reinforcement Learning
References
(2022). Predicting columns in a table - in depth.
Dehui, W., Chen, C., Xiumiao, Y., Xuesong, L., and Yimin, H. (2014). Optimization of taper winding tension in roll-to-roll web systems. Textile Research Journal, 84(20):2175–2183.
Dogru, O., Velswamy, K., Ibrahim, F., Wu, Y., Sundaramoorthy, A. S., Huang, B., Xu, S., Nixon, M., and Bell, N. (2022). Reinforcement learning approach to autonomous PID tuning. Computers & Chemical Engineering, 161:107760.
Gouda, M., Danaher, S., and Underwood, C. (2000). Fuzzy logic control versus conventional PID control for controlling indoor temperature of a building space. IFAC Proceedings Volumes, 33(24):249–254. 8th IFAC Symposium on Computer Aided Control Systems Design (CACSD 2000), Salford, UK, 11-13 September 2000.
Greener, J. (2018). Roll-to-Roll Manufacturing, chapter 1, pages 1–17. John Wiley & Sons, Ltd.
Howell, M. N. and Best, M. C. (2000). On-line PID tuning for engine idle-speed control using continuous action reinforcement learning automata. Control Engineering Practice, 8(2):147–154.
Howell, M. N., Frost, G. P., Gordon, T. J., and Wu, Q. H. (1997). Continuous action reinforcement learning applied to vehicle suspension control. Mechatronics, 7(3):263–276.
Knospe, C. (2006). PID control. IEEE Control Systems Magazine, 26(1):30–31.
Lawrence, N. P., Forbes, M. G., Loewen, P. D., McClement, D. G., Backström, J. U., and Gopaluni, R. B. (2022). Deep reinforcement learning with shallow controllers: An experimental application to PID tuning. Control Engineering Practice, 121:105046.
Lee, J., Byeon, J., and Lee, C. (2020). Theories and control technologies for web handling in the roll-to-roll manufacturing process. International Journal of Precision Engineering and Manufacturing-Green Technology, 7(2):525–544.
Lee, Y.-S. and Jang, D.-W. (2021). Optimization of neural network-based self-tuning PID controllers for second order mechanical systems. Applied Sciences, 11(17).
Ogata, K. (1999). Modern Control Engineering. Prentice Hall.
Ribeiro, J. M. S., Santos, M. F., Carmo, M. J., and Silva, M. F. (2017). Comparison of PID controller tuning methods: analytical/classical techniques versus optimization algorithms. In 2017 18th International Carpathian Control Conference (ICCC), pages 533–538.
Tao, Y., Yin, Y., and Ge, L. (1998). New PID control and application.
Published
17/11/2024
How to Cite
LIMA, Luís Eduardo Fernandes Costa; PFITSCHER, Ricardo José. Improving the PID Controllers of Roll-to-Roll Processes using Reinforcement Learning. In: ENCONTRO NACIONAL DE INTELIGÊNCIA ARTIFICIAL E COMPUTACIONAL (ENIAC), 21., 2024, Belém/PA. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 352-363. ISSN 2763-9061. DOI: https://doi.org/10.5753/eniac.2024.245136.