Microsoft Strengthens AI Security with New Identity Protection and Data Governance Capabilities
Microsoft Corp. has unveiled significant enhancements to its artificial intelligence security and governance offerings, introducing new capabilities designed to secure the emerging ‘agentic workforce’ where AI agents and humans collaborate. Announced at the company’s annual Build developer conference, Microsoft is expanding its Entra, Defender, and Purview services by embedding them directly into Azure AI Foundry and Copilot Studio.
The expanded capabilities aim to address growing concerns in AI development, including prompt injection, data leakage, and identity sprawl, while ensuring regulatory compliance. A key announcement is the launch of Entra Agent ID, a centralized solution for managing AI agent identities built in Copilot Studio and Azure AI Foundry. Each agent is automatically assigned a secure identity in Microsoft Entra, providing security teams with visibility and governance over non-human actors within the enterprise.

Additional features include the integration of Microsoft Defender for Cloud security insights directly into Azure AI Foundry, providing developers with AI-specific threat alerts and posture recommendations. The alerts cover over 15 detection types, including jailbreaks, misconfigurations, and sensitive data leakage. This integration aims to reduce friction between development and security teams, enabling faster responses to evolving threats without slowing deployment.
Microsoft’s Purview data security platform is also receiving updates, including a new software development kit that allows developers to embed policy enforcement, auditing, and data loss prevention into AI systems. The SDK enables organizations to identify sensitive data risks in real-time and apply consistent protection from development through production.
Azure AI Foundry is being updated with a feature called ‘Spotlighting’ that can detect prompt injection attacks embedded in external content, along with real-time task adherence evaluation and continuous monitoring dashboards. These updates help developers confirm that agent behavior remains within scope and aligned with enterprise policy. The service now also supports compliance workflows through integration with Microsoft Purview Compliance Manager and third-party governance solutions.
With these enhancements, Microsoft aims to provide comprehensive security and governance for AI applications and agents across their development lifecycle, addressing critical challenges in the rapidly evolving AI landscape.