Malware development using the AIM prompt in ChatGPT
Abstract
This work explores ChatGPT's ability to generate malware through instructions that remove its restrictions, known as jailbreaks. The malicious code was analyzed statically to understand its internal structure and executed in a controlled environment to verify its behavior. The results showed that, even with safety mechanisms in place, it is possible to obtain malware through ChatGPT.
Published
16/10/2025
How to Cite
CARVALHO, Gustavo Lofrese; LADEIRA, Ricardo de la Rocha; LIMA, Gabriel Eduardo. Desenvolvimento de malware utilizando o prompt AIM no ChatGPT. In: ESCOLA REGIONAL DE INFORMÁTICA DO ESPÍRITO SANTO (ERI-ES), 10., 2025, Espírito Santo/ES. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 150-153. DOI: https://doi.org/10.5753/eries.2025.14922.