Navigating Privacy Risks in Generative AI: Concerns, Challenges, and Potential Solutions
DOI:
https://doi.org/10.54097/bx1ne091
Keywords:
Generative AI, Privacy Risks, Membership Inference Attacks, Differential Privacy, Data Extraction Attacks
Abstract
The rapid advancement of Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) has revolutionized numerous applications across healthcare, finance, and customer service. However, these technological breakthroughs introduce significant privacy risks, as models may inadvertently memorize and expose sensitive information from their training data. This paper provides a comprehensive analysis of current privacy vulnerabilities in GenAI systems, including membership inference attacks, model inversion attacks, data extraction techniques, and data poisoning. We examine state-of-the-art mitigation strategies, including differential privacy (DP), cryptographic methods, anonymization techniques, and perturbation strategies. Through analysis of real-world case studies and empirical evidence, we demonstrate that current privacy-preserving techniques, while promising, face significant utility-privacy trade-offs. Our findings indicate that (ε, δ)-differential privacy with ε = 5 and δ = 10^-6 provides adequate protection for most practical applications, though stronger guarantees may be necessary for highly sensitive data. We conclude by presenting a comprehensive framework for user-centric privacy design and identifying critical areas for future research in privacy-preserving generative AI.
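To make the first of these attack classes concrete, the sketch below implements a simple loss-threshold membership inference attack in Python. It is a minimal illustration only: the dataset, target model, and AUC-based evaluation are assumptions chosen for brevity, not the experimental setup used in this paper.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Split data into "members" (training set) and "non-members" (held out).
X, y = load_breast_cancer(return_X_y=True)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# Target model; weak regularization (large C) encourages memorization.
target = LogisticRegression(C=1e4, max_iter=5000).fit(X_in, y_in)

def per_example_loss(model, X, y):
    # Cross-entropy loss of the target model on each individual example.
    probs = model.predict_proba(X)
    return -np.log(np.clip(probs[np.arange(len(y)), y], 1e-12, None))

loss_in = per_example_loss(target, X_in, y_in)     # members
loss_out = per_example_loss(target, X_out, y_out)  # non-members

# Attack score: lower loss means "more likely a training member".
scores = np.concatenate([-loss_in, -loss_out])
membership = np.concatenate([np.ones_like(loss_in), np.zeros_like(loss_out)])
print("Membership inference AUC:", roc_auc_score(membership, scores))

An AUC well above 0.5 indicates that per-example loss alone leaks membership. The abstract's (ε, δ) figures can be made similarly concrete: the following sketch shows how a DP-SGD training run could be configured to target ε = 5 and δ = 10^-6, assuming the Opacus PrivacyEngine API; the toy model, synthetic data, and hyperparameters are illustrative placeholders.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Synthetic stand-in data; a real application would use its own dataset.
X = torch.randn(1024, 20)
y = torch.randint(0, 2, (1024,))
loader = DataLoader(TensorDataset(X, y), batch_size=64)

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()

# Calibrate DP-SGD noise to the privacy budget discussed in the abstract.
engine = PrivacyEngine()
model, optimizer, loader = engine.make_private_with_epsilon(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    epochs=5,
    target_epsilon=5.0,   # epsilon from the abstract
    target_delta=1e-6,    # delta from the abstract
    max_grad_norm=1.0,    # per-example gradient clipping bound
)

for _ in range(5):
    for xb, yb in loader:
        optimizer.zero_grad()
        criterion(model(xb), yb).backward()
        optimizer.step()

print("Privacy budget spent:", engine.get_epsilon(delta=1e-6))

Training under such a budget would be expected to push the membership inference AUC from the first sketch back toward 0.5, at some cost in model utility, which is the utility-privacy trade-off the abstract highlights.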
License
Copyright (c) 2026 Journal of Computing and Electronic Information Management

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.