The 2025 State of AI and Data Privacy Report
Explore insights on the intersection of AI and data privacy in this exclusive report.
91% of enterprise leaders believe sensitive data should be allowed in AI training, yet 78% are highly concerned about the theft or breach of that same data.
This massive disconnect highlights a critical challenge for organizations: how to innovate with AI without exposing sensitive data.
This exclusive mini-report, excerpted from our 2025 State of Data Compliance and Security Report, dissects the widespread confusion surrounding AI and data privacy.
Gain the expert insights you need to navigate this complex landscape and build a secure foundation for your AI initiatives. Complete the form to get your copy.
What’s In the Report?
We partnered with a third-party research firm to survey 280 global enterprise leaders. This report explores their responses to questions like:
- Should sensitive data be allowed in AI model training and fine-tuning?
- How concerned are you about issues like theft of model training data, audits, and personal data re-identification in the context of AI environments?
- How familiar are you with where sensitive data is currently used in AI development?
- Do you plan to invest in AI data privacy immediately?
Surprising Findings on AI and Data Privacy
The results of the survey, detailed in this report, surprised us.
- 91% say sensitive data should be allowed in AI training and testing.
- Yet, 78% are highly concerned about theft or breach of model training data.
- 86% of organizations plan to invest in AI data privacy over the next 1-2 years.
Explore the Latest Findings on AI and Data Privacy
Complete the form now to download your copy of the 2025 State of AI and Data Privacy Report.