What do you choose: productivity or protection?
This is an issue most organizations are facing: AI can significantly boost productivity, but it should not come at the cost of data security. Often, protection wins over productivity to prevent data leaks. But do we really have to choose between the two?
Of course not! With Microsoft’s robust security framework, organizations can harness the power of AI-driven productivity without compromising protection. In this article, I’m going to explain what you can do to increase security and mitigate AI-related data leaks, so you don’t have to choose.
But how can we do that? The answer is Microsoft Purview Data Security Posture Management (DSPM) for AI. DSPM for AI provides easy-to-use graphical tools and reports to quickly gain insights into AI use in your organization. It gives you a central management location to help you quickly secure data for AI apps and proactively monitor AI use. These apps include Microsoft 365 Copilot, other copilots from Microsoft, and AI apps from third-party large language models (LLMs).
Data Security Posture Management for AI offers a set of capabilities so you can safely adopt AI without having to choose between productivity and protection:
- Insights and analytics into AI activity in your organization
- Ready-to-use policies to protect data and prevent data loss in AI prompts
- Data assessments to identify, remediate, and monitor potential oversharing of data
- Compliance controls to apply optimal data handling and storing policies

How to use Data Security Posture Management for AI
To get started with Data Security Posture Management for AI:
- Sign in to the Microsoft Purview portal > Solutions > DSPM for AI.
You need an account with appropriate permissions for compliance management, such as an account that’s a member of the Microsoft Entra Compliance Administrator role.
DSPM for AI -> OVERVIEW
From Overview, review the Get started section to learn more about Data Security Posture Management for AI, and the immediate actions you can take. Select each one to display the flyout pane to learn more, take actions, and verify the status.

Action | More information
--- | ---
Activate Microsoft Purview Audit | Auditing is on by default for new tenants, so this prerequisite might already be met. If so, and users are already assigned licenses for Copilot, you’ll start to see insights about Copilot activities in the Reports section further down the page.
Install Microsoft Purview browser extension | A prerequisite for third-party AI sites.
Onboard devices to Microsoft Purview | A prerequisite for third-party AI sites.
Extend your insights for data discovery | One-click policies for collecting information about users visiting third-party generative AI sites and sending sensitive information to them. The option is the same as the Extend your insights button in the AI data analytics section further down the page.
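If you prefer to check the audit prerequisite from the command line rather than the flyout pane, here is a minimal sketch using Exchange Online PowerShell (it assumes the ExchangeOnlineManagement module is installed and you have sufficient admin rights):

```powershell
# Connect to Exchange Online (interactive sign-in)
Connect-ExchangeOnline

# Check whether the unified audit log is enabled for the tenant
Get-AdminAuditLogConfig | Format-List UnifiedAuditLogIngestionEnabled

# If the value is False, turn unified auditing on
Set-AdminAuditLogConfig -UnifiedAuditLogIngestionEnabled $true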
DSPM for AI -> RECOMMENDATIONS
Review the Recommendations section and decide whether to implement any options that are relevant. Select the View all recommendations link, or Recommendations in the navigation pane, to see all the available recommendations for your tenant and their status.

These options include running a data assessment across SharePoint sites, creating sensitivity labels and policies to protect your data, and creating some default policies to immediately help you detect and protect sensitive data sent to generative AI sites.
Here are the default one-click policies that are created:
- DLP policy: DSPM for AI: Detect sensitive info added to AI sites – This policy discovers sensitive content pasted or uploaded in Microsoft Edge, Chrome, and Firefox to AI sites. This policy covers all users and groups in your org in audit mode only.
- Insider risk management policy: DSPM for AI – Detect when users visit AI sites – Detects when users use a browser to visit AI sites.
- Insider risk management policy: DSPM for AI – Detect risky AI usage – This policy helps calculate user risk by detecting risky prompts and responses in Microsoft 365 Copilot and other generative AI apps.
- Communication Compliance: DSPM for AI – Unethical behavior in Copilot – This policy detects sensitive information in prompts and responses in Microsoft 365 Copilot. This policy covers all users and groups in your organization.
- DLP policy: DSPM for AI – Block sensitive info from AI sites – This policy uses Adaptive Protection to show a block-with-override message to elevated-risk users attempting to paste or upload sensitive information to other AI apps in Edge, Chrome, and Firefox. This policy covers all users and groups in your org in test mode.
- Information Protection – This option creates default sensitivity labels and sensitivity label policies.
Note: If you’ve already configured sensitivity labels and their policies, this configuration is skipped.
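To confirm which of the one-click policies landed in your tenant, you can also look them up in Security & Compliance PowerShell. A sketch, assuming the policy names carry the "DSPM for AI" prefix shown above (names may differ slightly in your tenant):

```powershell
# Connect to Security & Compliance PowerShell (ExchangeOnlineManagement module)
Connect-IPPSSession

# List DLP policies created by DSPM for AI, with their mode (audit/test/enforce)
Get-DlpCompliancePolicy | Where-Object { $_.Name -like "*DSPM for AI*" } |
    Format-Table Name, Mode

# List the default sensitivity labels and label policies, if Information
# Protection created them for you
Get-Label | Format-Table DisplayName
Get-LabelPolicy | Format-Table Name, Enabled
```

Remember that, as noted above, editing these policies is done in their corresponding management solution, not in DSPM for AI itself.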
DSPM for AI -> REPORTS
Use the Reports section, or the Reports page from the navigation pane, to view the results of the default policies you created. Allow at least a day for the reports to be populated. Select the categories of Microsoft Copilot experiences and Enterprise AI apps to identify the specific generative AI app.
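The same Copilot activity that feeds these reports is recorded in the unified audit log, so you can pull it directly if you want raw events rather than charts. A sketch using Exchange Online PowerShell and the CopilotInteraction record type:

```powershell
# Connect to Exchange Online (interactive sign-in)
Connect-ExchangeOnline

# Pull up to 100 Copilot interaction events from the last 7 days
Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) `
    -RecordType CopilotInteraction -ResultSize 100 |
    Select-Object CreationDate, UserIds, Operations
```

As with the portal reports, expect a delay before events appear; audit records are not ingested instantly.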

DSPM for AI -> POLICIES
Use the Policies page to monitor the status of the default one-click policies and AI-related policies from other Microsoft Purview solutions. To edit the policies, use the corresponding management solution in the portal.

DSPM for AI -> ACTIVITY EXPLORER
Select Activity explorer to see details of the data collected from your policies. This more detailed information includes activity type and user, date and time, AI app category and app, the app the data was accessed in, any sensitive information types, files referenced, and sensitive files referenced.

DSPM for AI -> DATA ASSESSMENTS (preview)
Select Data assessments (preview) to identify and fix potential data oversharing risks in your organization. A default data assessment automatically runs weekly for all your SharePoint sites used by Copilot in your organization, and you might have already run a custom assessment as one of the recommendations.
After an assessment has run, allow at least 48 hours for the results to appear. The results don’t refresh on their own; you’ll need to run a new assessment to see any changes.
