As AI advances, we all have a role to play in unlocking the positive impact of AI for organizations and communities around the world, which is why we are focused on helping customers use and build trustworthy AI, that is, AI that is secure, safe and private.
At Microsoft, we have made commitments to ensure trustworthy AI and are building industry-leading supporting technology. Our commitments and capabilities go hand in hand to make sure our customers and developers are protected at every layer.
Building on our commitments, today we are announcing new product capabilities to strengthen the security, safety and privacy of AI systems.
Security. Security is our top priority at Microsoft, and our expanded Secure Future Initiative (SFI) underscores the company-wide commitments, and the responsibility we feel, to make our customers more secure. This week we announced our first SFI Progress Report, highlighting updates spanning culture, governance, technology and operations. This delivers on our pledge to prioritize security above all else and is guided by three principles: secure by design, secure by default and secure operations. Beyond our first-party offerings, including Microsoft Defender and Purview, our AI services come with foundational security controls, such as built-in functions to help prevent prompt injections and copyright violations. Building on those, today we are announcing two new capabilities:
- Evaluations in Azure AI Studio to support proactive risk assessments.
- Microsoft 365 Copilot will provide transparency into web queries to help administrators and users better understand how web search improves Copilot responses.
Our security capabilities are already being used by customers. Cummins, a 105-year-old company known for its engine manufacturing and development of clean energy technologies, turned to Microsoft Purview to strengthen its data security and governance by automating the classification, tagging and labeling of data. EPAM Systems, a software engineering and business consulting firm, deployed Microsoft 365 Copilot to 300 users because of the data protection they get from Microsoft. J.T. Sodano, Senior Director of IT, shared that "we were a lot more confident with Copilot for Microsoft 365, compared to other LLMs (large language models), because we know that the same information and data protection policies that we've set up in Microsoft Purview apply to Copilot."
Safety. Safety, along with security and privacy, is part of Microsoft's broader Responsible AI principles, established in 2018, which continue to guide how we build and deploy AI safely across the company. In practice, this means properly building, testing and monitoring systems to avoid undesirable behaviors, such as harmful content, bias, misuse and other unintended risks. Over the years, we have made significant investments in building the governance structure, policies, tools and processes needed to uphold these principles and to build and deploy AI safely. At Microsoft, we are committed to sharing with our customers what we have learned on this journey of upholding our Responsible AI principles. We use our own best practices and learnings to provide people and organizations with capabilities and tools to build AI applications that share the same high standards we strive for.
Today, we are sharing new capabilities to help customers pursue the benefits of AI while mitigating the risks:
- A correction capability in the groundedness detection feature of Microsoft Azure AI Content Safety that helps fix hallucination issues in real time before users see them.
- Embedded Content Safety, which allows customers to embed Azure AI Content Safety on devices. This is important for on-device scenarios where cloud connectivity may be intermittent or unavailable.
- New evaluations in Azure AI Studio to help customers assess the quality and relevance of outputs, and how often their AI application generates protected material.
- Protected material detection for code, now in preview in Azure AI Content Safety, to help detect pre-existing content and code. This feature helps developers explore public source code in GitHub repositories, fostering collaboration and transparency while enabling more informed coding decisions.
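As an illustration of the groundedness detection capability above, the sketch below builds the request body and URL for a groundedness check with correction enabled. The endpoint path, API version and field names follow the public Azure AI Content Safety preview REST API as we understand it, but treat them as assumptions and verify against the current documentation before use (the snippet only constructs the request; it does not call the service).

```python
import json

# Hypothetical resource endpoint -- substitute your own Content Safety resource.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_VERSION = "2024-09-15-preview"  # preview API version (assumption; check the docs)

def build_groundedness_request(text, sources, query):
    """Build the JSON body for a groundedness detection call with the
    correction feature enabled. Field names follow the preview REST API
    and should be treated as assumptions."""
    return {
        "domain": "Generic",
        "task": "QnA",
        "qna": {"query": query},
        "text": text,                 # the model output to check for grounding
        "groundingSources": sources,  # source documents the output must be grounded in
        "correction": True,           # ask the service to rewrite ungrounded claims
    }

body = build_groundedness_request(
    text="The warranty lasts ten years.",
    sources=["The product warranty covers two years from purchase."],
    query="How long is the warranty?",
)
url = f"{ENDPOINT}/contentsafety/text:detectGroundedness?api-version={API_VERSION}"
print(url)
print(json.dumps(body, indent=2))
```

With `correction` enabled, the service response would include a corrected version of the ungrounded sentence alongside the detection result, which an application can surface instead of the original hallucinated text.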
It is amazing to see how customers across industries are already using Microsoft solutions to build safer and more trustworthy AI applications. Unity, a platform for 3D games, used Microsoft Azure OpenAI Service to build Muse Chat, an AI assistant that makes game development easier. Muse Chat uses content-filtering models in Azure AI Content Safety to ensure responsible use of the software. In addition, ASOS, a UK-based fashion retailer with nearly 900 brand partners, used the same built-in content filters in Azure AI Content Safety to support high-quality interactions through an AI app that helps customers find new looks.
We are also seeing the impact in education. New York City Public Schools partnered with Microsoft to develop a chat system that is safe and appropriate for the educational context, which they are now piloting in schools. The South Australia Department for Education likewise brought generative AI into the classroom with EdChat, relying on the same infrastructure to ensure safe use by students and teachers.
Privacy. Data is at the heart of AI, and Microsoft's priority is to help ensure customer data is protected and compliant through our longstanding privacy principles, which include user control, transparency, and legal and regulatory protections. To build on this, today we are announcing:
- Confidential inferencing in preview for our Azure OpenAI Service Whisper model, so customers can develop generative AI applications that support verifiable end-to-end privacy. Confidential inferencing ensures that sensitive customer data remains secure and private during the inferencing process, which is when a trained AI model makes predictions or decisions based on new data.
- The general availability of Azure Confidential VMs with NVIDIA H100 Tensor Core GPUs, which allow customers to protect data directly on the GPU. This builds on our confidential computing solutions, which ensure customer data stays encrypted and protected in a secure environment so that no one gains access to the information or system without permission.
- Azure OpenAI Data Zones for the European Union and the United States are coming soon and build on the existing data residency provided by Azure OpenAI Service, making it easier to manage the data processing and storage of generative AI applications. This new functionality gives customers the flexibility to scale generative AI applications across all Azure regions within one geography, while giving them control of data processing and storage within the EU or the US.
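As a sketch of how one of the confidential VMs described above might be provisioned, the Azure CLI fragment below assumes the NCCads H100 v5 confidential GPU VM size and illustrative resource names; the size, image alias and flag values are assumptions that should be verified against current Azure documentation for your region.

```shell
# Provisioning sketch (assumptions: resource names, VM size, image alias).
#   --security-type ConfidentialVM            -> hardware-based confidential VM
#   --os-disk-security-encryption-type ...    -> encrypts OS disk and VM guest state
az group create --name rg-conf-ai --location eastus2

az vm create \
  --resource-group rg-conf-ai \
  --name conf-h100-vm \
  --size Standard_NCC40ads_H100_v5 \
  --image Ubuntu2204 \
  --security-type ConfidentialVM \
  --os-disk-security-encryption-type DiskWithVMGuestState \
  --enable-vtpm true \
  --enable-secure-boot true \
  --admin-username azureuser \
  --generate-ssh-keys
```

Once the VM is running, workloads on the attached H100 GPU operate on data that stays encrypted outside the secure environment, in line with the confidential computing guarantees described above.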
We have seen growing customer interest in confidential computing and excitement about confidential GPUs, including from application security provider F5, which is using Azure Confidential VMs with NVIDIA H100 Tensor Core GPUs to build advanced AI-based security solutions while ensuring the confidentiality of the data its models analyze. And Royal Bank of Canada (RBC) has integrated Azure confidential computing into its platform to analyze encrypted data while preserving customer privacy. With the general availability of Azure Confidential VMs with NVIDIA H100 Tensor Core GPUs, RBC can now use these advanced AI tools to work more efficiently and develop more powerful AI models.
Achieve more with trustworthy AI
We all need and expect AI we can trust, and we have seen what is possible when people are empowered to use AI in trustworthy ways, from enriching employee experiences and reshaping business processes to reinventing customer engagement and reimagining our everyday lives. With new capabilities that improve security, safety and privacy, we continue to enable customers to use and build trustworthy AI solutions that help every person and organization on the planet achieve more. Ultimately, trustworthy AI encompasses all that we do at Microsoft, and it is essential to our mission as we work to expand opportunity, earn trust, protect fundamental rights and advance sustainability across everything we do.


