The "AI arms race" is forcing teams to move at breakneck speed, often leaving sensitive data protection as an afterthought. This creates a high-stakes environment where 91% of leaders believe sensitive data is necessary for AI training, yet a staggering 78% are highly concerned about the theft or breach of that same data.
Failing to build a clear strategy for sensitive data protection has serious consequences. Navigating blindly risks compliance gaps, costly privacy breaches, and irreversible damage to your reputation.
This mini-report cuts through the confusion with insights from 280 enterprise leaders. It is your guide to understanding where AI and data privacy collide, and how to automate compliance without slowing innovation.
What’s in the Report?
- Should sensitive data be allowed in AI model training and fine-tuning?
- How concerned are you about issues like theft of model training data, audits, and personal data re-identification in the context of AI environments?
- How familiar are you with where sensitive data is currently used in AI development?
- How soon do you plan to invest in AI data privacy?
Surprising Findings on AI and Data Privacy
The results of the survey — detailed in this report — surprised us.
- 91% say sensitive data should be allowed in AI training and testing.
- Yet, 78% are highly concerned about theft or breach of model training data.
- 86% of organizations plan to invest in AI data privacy over the next 1-2 years.
Explore the Latest Findings on AI and Data Privacy
Complete the form to get the 2025 report on AI data privacy compliance.