Optimization of a Portfolio of Automated Trading Algorithms using Reinforcement Learning for Risk Control

  • Ramon de Cerqueira Silva (UEFS)
  • Carlos Alberto Rodrigues (UEFS)

Abstract

This work presents a novel approach to optimizing portfolios of automated trading systems (ATS) using advanced Deep Reinforcement Learning (DRL) techniques. The algorithms A2C, DDPG, PPO, SAC, and TD3 are analyzed to evaluate their performance in volatile markets. The main objective is to improve the risk control and operational efficiency of ATS using data from the Brazilian stock market. The DRL models outperformed the benchmarks, providing better risk management and risk-adjusted returns. The results highlight the potential of DRL algorithms in complex financial environments and open avenues for future research on integrating machine learning into quantitative finance.
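The abstract compares DRL agents by risk-adjusted performance. As a minimal illustrative sketch (not code from the paper), two metrics commonly used for such comparisons — the annualized Sharpe ratio and the maximum drawdown — can be computed from an agent's return series and equity curve; the function names and the 252-trading-day annualization are assumptions for illustration:

```python
def sharpe_ratio(returns, risk_free=0.0):
    """Annualized Sharpe ratio of a series of daily returns (252 trading days assumed)."""
    n = len(returns)
    excess = [r - risk_free for r in returns]
    mean = sum(excess) / n
    var = sum((e - mean) ** 2 for e in excess) / (n - 1)
    std = var ** 0.5
    return (mean / std) * (252 ** 0.5) if std > 0 else 0.0


def max_drawdown(equity_curve):
    """Largest peak-to-trough decline, as a fraction of the running peak."""
    peak = equity_curve[0]
    worst = 0.0
    for value in equity_curve:
        peak = max(peak, value)                 # track the running peak
        worst = max(worst, (peak - value) / peak)  # deepest decline so far
    return worst
```

A lower maximum drawdown at a comparable Sharpe ratio is one way an agent can be said to offer "better risk management", as claimed for the DRL models above.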

Published
05/11/2024
SILVA, Ramon de Cerqueira; RODRIGUES, Carlos Alberto. Otimização de um Portfólio de Algoritmos de Negociações Automatizadas utilizando Reinforcement Learning para o controle de risco. In: ESCOLA REGIONAL DE COMPUTAÇÃO BAHIA, ALAGOAS E SERGIPE (ERBASE), 24., 2024, Salvador/BA. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 139-148. DOI: https://doi.org/10.5753/erbase.2024.4398.