Privacy, Data Protection, Risk, and Compliance in the Age of Generative AI Systems: A Systematic Mapping Study

  • Richardson B. da S. Andrade UnB
  • Geraldo Pereira Rocha Filho UESB
  • Gilmar dos Santos Marques UnB
  • Edna Dias Canedo UnB

Abstract

Research Context: Generative Artificial Intelligence (GenAI), particularly Large Language Models (LLMs), is advancing rapidly, raising concerns about privacy, data protection, risk management, and regulatory compliance. Despite its transformative potential, adoption remains immature and constrained by ethical, technical, and legal limits. Scientific and/or Practical Problem: Organizations, developers, and end users face risks of privacy violations, re-identification, model inversion, and opacity. Current frameworks such as the GDPR and Brazil’s LGPD struggle to address GenAI’s complexity, leaving gaps between technical safeguards and legal obligations. Proposed Solution and/or Analysis: We performed a systematic mapping study focused on privacy-preserving mechanisms, risk management frameworks, and compliance models applicable to GenAI. We synthesize state-of-the-art approaches, their strengths and limitations, and assess how they enable trustworthy adoption. Related IS Theory: The study is grounded in information systems governance, privacy by design, and responsible AI, through a sociotechnical lens integrating technological, organizational, and regulatory perspectives. Research Method: Following a predefined protocol, we searched four major digital libraries (ACM DL, IEEE Xplore, ScienceDirect, SpringerLink), applied inclusion and exclusion criteria, and conducted a quality assessment. From 1,138 initial studies, 44 were analyzed in depth, and 15 met all quality thresholds. Summary of Results: We identify four principal categories of privacy techniques (differential privacy, federated learning, cryptographic approaches, synthetic data), five risk management frameworks (e.g., NIST AI RMF, MITRE ATLAS/AI Security), and compliance instruments (DPIA, conformity assessment, FRIA). Comparative analyses reveal trade-offs among robustness, scalability, and regulatory alignment. Contributions and Impact to the IS Area: This study consolidates mechanisms for addressing privacy, risk, and compliance in GenAI, highlights gaps between technical safeguards and legal requirements, and distills design implications for trustworthy systems. It supports scholars and practitioners in engineering responsible AI and informs IS research agendas, organizational policies, and regulatory strategies for intelligent information systems.
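Of the four privacy-technique categories the abstract names, differential privacy is the most formally specified. As an illustration only (not drawn from the paper itself), the following sketch shows the classic Laplace mechanism applied to a counting query, the simplest setting in which an epsilon-differential-privacy guarantee holds; the function names `laplace_noise` and `dp_count` are our own, hypothetical choices:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via inverse-CDF transform sampling."""
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon: float) -> float:
    """Differentially private count of records satisfying `predicate`.

    A counting query has L1 sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon yields epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)
```

The trade-off the abstract alludes to is visible here: a small epsilon (strong privacy) injects large noise and degrades utility, while a large epsilon preserves accuracy but weakens the guarantee.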

Published
25/05/2026
ANDRADE, Richardson B. da S.; ROCHA FILHO, Geraldo Pereira; MARQUES, Gilmar dos Santos; CANEDO, Edna Dias. Privacy, Data Protection, Risk, and Compliance in the Age of Generative AI Systems: A Systematic Mapping Study. In: SIMPÓSIO BRASILEIRO DE SISTEMAS DE INFORMAÇÃO (SBSI), 22., 2026, Vitória/ES. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2026. p. 1181-1200. DOI: https://doi.org/10.5753/sbsi.2026.248735.
