How We Would Have Approached Copilot Implementation for the City of Amsterdam

February 26, 2025

The City of Amsterdam’s decision to suspend its pilot deployment of Microsoft Copilot over privacy concerns highlights the importance of a well-structured, comprehensive AI implementation strategy. To fully leverage Copilot’s potential while ensuring regulatory compliance and safeguarding sensitive municipal data, a framework that encompasses robust security measures, stringent governance protocols, rigorous privacy safeguards, data minimisation principles, and finely tuned access controls is not just advisable but essential.

In this article, we will examine how Digital Bricks would navigate such a high-stakes implementation, ensuring that AI adoption is not only compliant but also strategically sound. By embedding clear safeguards, governance structures, and escalation processes from the outset, we mitigate risks before they escalate, enabling a secure and seamless deployment of Microsoft Copilot. Our approach prioritises transparency, operational resilience, and AI governance, transforming Copilot from a potential compliance challenge into a powerful, well-managed asset for public-sector innovation.

A Closer Look at the Strategy Stages of Implementation

Conduct a Thorough Data Protection Impact Assessment (DPIA)

Before deploying Copilot, conducting a comprehensive Data Protection Impact Assessment (DPIA) is not just a best practice—it is a necessity in highly regulated environments such as municipal governments. The DPIA is a systematic process that identifies, assesses, and mitigates risks associated with the processing of personal data. It ensures that AI-driven tools like Copilot comply with legal, ethical, and operational standards while safeguarding citizen and employee data.

A well-executed DPIA should not be a one-time exercise but a continuous process that evolves as Copilot’s usage expands. Below, we dive deeper into the core components of an effective DPIA:

Data Flow Mapping: Tracing the Lifecycle of Data

Understanding how data flows through Copilot is critical to preventing unintended exposure of sensitive information. Data flow mapping is a visual and analytical process that traces how data is collected, stored, processed, and shared across various systems.

Key considerations for effective data flow mapping:

Identify data entry points: Determine how Copilot interacts with municipal systems—e.g., through email integrations, document searches, or Microsoft 365 applications.
Classify data types: Establish a clear taxonomy of data categories, distinguishing between public, internal, confidential, and highly sensitive information.
Map data storage locations: Pinpoint where Copilot stores or references information—whether it resides on-premises, in SharePoint, in the cloud, or within Microsoft’s AI models.
Assess third-party interactions: Identify whether external services or vendors (e.g., Microsoft cloud storage, Azure AI services) play a role in processing data.
Monitor data retention policies: Ensure that Copilot adheres to data minimisation principles, meaning data is not retained longer than necessary and deletion policies are enforced.

By meticulously mapping the end-to-end data journey, Amsterdam could have anticipated risks and designed safeguards that align with legal obligations and internal security policies.
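To make this concrete, a data-flow inventory can be captured as structured records before deployment. The sketch below is illustrative only: the fields (entry point, classification, storage, retention) and the example flows are assumptions, not a prescribed schema or Amsterdam’s actual mapping.

```python
from dataclasses import dataclass, field


@dataclass
class DataFlow:
    """One edge in the Copilot data-flow map."""
    entry_point: str          # how data enters, e.g. email or document search
    classification: str       # public / internal / confidential / highly sensitive
    storage: str              # where the data rests
    third_parties: list = field(default_factory=list)  # external processors involved
    retention_days: int = 365  # maximum retention before enforced deletion

# Hypothetical flows for illustration
flows = [
    DataFlow("Outlook integration", "confidential", "Exchange Online",
             ["Microsoft 365"], 365),
    DataFlow("Document search", "highly sensitive", "SharePoint",
             ["Azure AI services"], 90),
]

# Surface the flows that demand the strictest safeguards first
high_risk = [f for f in flows if f.classification == "highly sensitive"]
```

Keeping the inventory as data rather than a diagram makes it easy to review in audits and to diff when a new integration point is added.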

Risk Assessment: Identifying and Quantifying Privacy Risks

A DPIA’s core objective is to quantify potential risks—not just from a technical perspective, but also in terms of legal, operational, and reputational impact. AI-driven systems introduce unique risks, including unintended data exposure, AI-generated inaccuracies, and compliance violations.

Key risk factors to assess:

Legal risks: Does Copilot’s data processing comply with GDPR, especially regarding data sovereignty, data subject rights, and lawful processing grounds?
Security vulnerabilities: Does Copilot introduce new attack surfaces, such as increased exposure to data breaches, insider threats, or AI hallucinations?
Access control risks: Can unauthorised personnel inadvertently access confidential municipal documents or citizen records through Copilot’s search functions?
Bias and fairness risks: Does Copilot have mechanisms to detect and prevent biased responses that could lead to discriminatory outcomes in decision-making?
Accountability and transparency gaps: Is there a clear audit trail showing who accessed what data, when, and for what purpose?

A comprehensive risk matrix should be developed, assigning each risk a likelihood score and impact level to prioritise mitigation strategies effectively.
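Such a matrix can be sketched very simply, scoring each risk on likelihood and impact and ranking by their product. The risks, scores, and scale below are illustrative placeholders, not an actual assessment.

```python
# Minimal risk-matrix sketch: likelihood and impact on a 1-5 scale (illustrative values)
risks = {
    "GDPR non-compliance":      {"likelihood": 3, "impact": 5},
    "Unauthorised data access": {"likelihood": 4, "impact": 4},
    "Biased AI output":         {"likelihood": 2, "impact": 4},
    "Missing audit trail":      {"likelihood": 3, "impact": 3},
}

def priority(risk):
    """Simple multiplicative score; higher scores are mitigated first."""
    return risk["likelihood"] * risk["impact"]

# Rank risks so mitigation effort targets the biggest exposures first
ranked = sorted(risks.items(), key=lambda kv: priority(kv[1]), reverse=True)
```

Even a basic score like this forces an explicit, reviewable ordering of mitigation work instead of an intuitive one.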

Schematising risks inherent to AI and their impact on business performance

Strengthening Security and Compliance

Once risks are identified, a structured mitigation plan must be put in place to reduce or eliminate potential threats. The success of Copilot’s implementation hinges on proactive security controls, privacy safeguards, and governance mechanisms.

Key mitigation strategies:

Encryption and Secure Data Handling
  • Implement end-to-end encryption for data in transit and at rest, ensuring unauthorised parties cannot intercept or decipher information.
  • Use Microsoft Purview for data classification and protection, restricting Copilot’s ability to process highly sensitive content.
Access Controls and Role-Based Permissions
  • Adopt strict Role-Based Access Controls (RBAC) to limit Copilot’s visibility based on user roles, departments, and security clearances.
  • Restrict external data-sharing capabilities, preventing unintentional exposure of government records.
  • Implement adaptive access policies that enforce multi-factor authentication (MFA) for high-risk interactions.
Data Retention and Deletion Policies
  • Enforce strict data minimisation, ensuring that Copilot does not retain or reuse information beyond predefined retention periods.
  • Implement automated data purging mechanisms for temporary AI-generated insights, preventing long-term storage of unnecessary data.
Transparency and Auditability
  • Enable detailed logging of all AI interactions, creating an audit trail for compliance and internal investigations.
  • Establish AI explainability requirements, ensuring that Copilot-generated recommendations are reviewable and justifiable.
AI Ethics and Bias Prevention
  • Deploy AI fairness checks that flag potentially biased responses in Copilot-generated outputs.
  • Require human-in-the-loop (HITL) validation, ensuring that critical AI-driven decisions undergo human review before execution.

By implementing multi-layered security protocols and governance frameworks, Amsterdam could have built an AI deployment that was not only efficient but also legally and ethically sound.

To ensure that Copilot’s implementation aligns with legal, ethical, and operational requirements, early and continuous engagement with key stakeholders is vital. AI adoption is not just an IT initiative—it requires input from legal, compliance, security, and governance teams. By fostering cross-functional collaboration, Amsterdam could have identified risks early, developed stronger governance policies, and ensured that AI adoption aligned with public-sector accountability standards.

Implement Robust Data Governance Frameworks

Effective data governance is the cornerstone of a secure and compliant Copilot implementation.

Managed governance with Microsoft Copilot

A robust framework should include:

Data Classification: Categorise data based on sensitivity levels (e.g., public, internal, confidential) to apply appropriate handling procedures.
Access Controls: Enforce role-based access controls (RBAC) to ensure that users only have access to data necessary for their roles.
Data Lifecycle Management: Establish policies for data retention and disposal, ensuring that data is not held longer than necessary.
Regular Audits: Conduct periodic reviews of data access logs and permissions to detect and rectify unauthorised access or anomalies.

By instituting these measures, we would ensure that Copilot operates within a structured and well-governed data environment, significantly reducing the risks associated with data mishandling.
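In practice, classification levels map to handling rules that downstream tooling can enforce. A minimal sketch follows; the labels and the cut-off level are assumptions for illustration, not a Purview configuration.

```python
from enum import IntEnum


class Sensitivity(IntEnum):
    """Ordered classification levels, least to most sensitive."""
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    HIGHLY_SENSITIVE = 3

# Illustrative policy: the highest level Copilot may process at all
COPILOT_MAX_LEVEL = Sensitivity.INTERNAL

def copilot_may_process(level: Sensitivity) -> bool:
    """Gate AI processing on document sensitivity."""
    return level <= COPILOT_MAX_LEVEL
```

Encoding the policy once, as data, keeps every integration point consistent with the same classification taxonomy.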

Prioritise Data Minimisation Principles

Adhering to the principle of data minimisation involves collecting and processing only the data that is strictly necessary for the intended purpose. To achieve this:

Clearly define the specific purposes for which Copilot will process data, ensuring alignment with objectives.
Maintain an inventory of data being processed, regularly reviewing and eliminating unnecessary data collection.
Where possible, implement techniques to anonymise or pseudonymise data, reducing the risk of identifying individuals.

By limiting data collection to what is necessary, the city can reduce potential exposure of sensitive information and enhance compliance with data protection regulations.
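Pseudonymisation can be as simple as replacing direct identifiers with keyed hashes before data reaches the AI layer. This is a sketch under stated assumptions, not a complete anonymisation scheme: keyed hashing alone does not defeat all re-identification attacks, and the key would live in a vault, not in code.

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me-regularly"  # illustrative; store in a key vault in practice

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. a citizen ID) with a keyed hash.
    The mapping is repeatable (useful for joins) but not reversible
    without the key."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"citizen_id": "AMS-123456", "district": "Centrum"}
safe_record = {**record, "citizen_id": pseudonymise(record["citizen_id"])}
```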

Data Access and Permissions Management

Before implementing Copilot, it is essential to establish a well-structured access control framework that defines who can access which data and under what conditions. Rather than making adjustments reactively as security concerns arise, access controls should be carefully designed before deployment to ensure that only authorised users and AI processes interact with sensitive data. Without a clear structure in place, there is a significant risk that Copilot could inadvertently process or surface information that should remain restricted, leading to compliance breaches, data exposure, or operational inefficiencies.

A strong data classification policy should be implemented at the outset, categorising information based on its sensitivity, confidentiality, and regulatory requirements. For example, internal documents that contain municipal decision-making data, financial records, or citizen information should be classified with appropriate protections, ensuring that Copilot does not have unrestricted access to process or summarise them. This classification system must be consistently applied across Microsoft Graph, OneDrive, Dataverse, SharePoint, and other connected data sources, so that every integration point aligns with the broader security strategy.

At the same time, access control models should follow the principle of least privilege (PoLP), granting users and AI processes only the minimum permissions necessary to perform their roles effectively. This means configuring role-based access controls (RBAC) to define specific data visibility rules for employees based on their department, seniority, and operational needs. Without this level of control, there is a risk that Copilot could retrieve highly sensitive government data in response to a prompt from a user who should not have access to that information. Proactively structuring these permissions ensures that AI-generated insights remain within the appropriate governance framework.
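The least-privilege rule can be expressed as an explicit allow-list per role, checked before Copilot retrieves a document on a user’s behalf. The roles and data categories here are hypothetical; a real deployment would source them from the identity platform rather than a dictionary.

```python
# Hypothetical role-to-data-category allow-list (deny by default)
ROLE_PERMISSIONS = {
    "legal":            {"public", "internal", "regulatory"},
    "customer_service": {"public", "service_datasets"},
    "finance":          {"public", "internal", "financial_records"},
}

def can_access(role: str, category: str) -> bool:
    """Principle of least privilege: grant access only if it is
    explicitly listed for the role; unknown roles get nothing."""
    return category in ROLE_PERMISSIONS.get(role, set())
```

The important design choice is the default: an unknown role or category resolves to "deny", so gaps in the allow-list fail safe.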

A tiered security approach should also be implemented, where different departments or teams have segmented access rules that prevent unauthorised data sharing. For instance, legal and compliance teams may require access to Copilot’s summarisation features for regulatory documentation, while customer service teams may need access only to structured datasets relevant to their workflows. By ensuring that different user groups interact only with the data that is necessary and appropriate for their roles, the organisation reduces data leakage risks and strengthens compliance oversight.

Beyond internal access controls, external sharing restrictions should be put in place to prevent unintended data exposure. A common oversight in AI implementations is leaving broad access groups like “Everyone Except External Users” enabled, which could allow widespread internal access to sensitive files. By refining sharing policies before Copilot is deployed, you can eliminate the risk of unrestricted access to confidential documents and ensure that external data sharing is tightly controlled. These measures are particularly important when Copilot is used in a municipal setting, where public sector data must be handled with an elevated level of scrutiny and compliance.

Finally, real-time monitoring and auditing should be integrated from the outset to track how Copilot interacts with your data. A structured logging system should be established to record AI interactions, flag anomalies, and alert administrators to potential misconfigurations or unauthorised access attempts. Without a proactive monitoring strategy, it becomes significantly harder to identify security gaps or address data governance violations before they escalate into compliance issues.
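A minimal structured-logging sketch: each Copilot interaction is recorded as an audit entry, and a simple rule flags denied attempts for administrator review. The fields and the flagging rule are illustrative assumptions; production logging would ship to a SIEM rather than an in-memory list.

```python
from datetime import datetime, timezone

AUDIT_LOG = []

def log_interaction(user: str, role: str, resource: str, allowed: bool) -> dict:
    """Append one structured audit record; flag denied attempts for review."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "resource": resource,
        "allowed": allowed,
        "flagged": not allowed,  # denied attempts warrant investigation
    }
    AUDIT_LOG.append(entry)
    return entry

# Hypothetical denied attempt: a service agent reaching for finance data
log_interaction("j.devries", "customer_service", "financial_records.xlsx",
                allowed=False)
flagged = [e for e in AUDIT_LOG if e["flagged"]]
```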

By defining clear, structured access controls before deployment, you can ensure that Copilot is implemented within a secure, transparent, and well-governed data environment. This approach not only minimises security risks and regulatory exposure but also enables Copilot to function as an effective AI tool without compromising the integrity of sensitive information.

Optimise Microsoft Copilot’s Customisation for Public Sector Use

Amsterdam could leverage Copilot’s customisation features to align it more effectively with government-specific privacy, security, and operational requirements.

Leverage Copilot Studio for Customised Controls

Disable unnecessary AI functionalities that pose a security risk (e.g., limiting Copilot’s ability to summarise highly sensitive documents).
Fine-tune prompts to align with municipal terminology, ensuring Copilot generates contextually relevant outputs.
Restrict AI-generated recommendations from external sources to prevent unauthorised data leakage.

Integrate Azure OpenAI and Compliance Tools

Use Azure OpenAI’s responsible AI features to monitor Copilot’s compliance with Amsterdam’s data policies.
Enable Microsoft Purview to track how Copilot interacts with sensitive government records.
Implement data loss prevention (DLP) rules to prevent Copilot from processing or sharing classified municipal data.
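Conceptually, a DLP rule matches sensitive patterns in content before it crosses a boundary. The sketch below uses a simplified nine-digit BSN-like pattern purely for illustration; it is not a Purview rule definition, and real BSN validation also applies a checksum.

```python
import re

# Simplified DLP check: flag content containing a nine-digit, BSN-like number
BSN_PATTERN = re.compile(r"\b\d{9}\b")

def dlp_blocks(content: str) -> bool:
    """Return True if the content matches a sensitive-data pattern
    and should be withheld from Copilot processing or sharing."""
    return bool(BSN_PATTERN.search(content))
```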

Control AI Model Training with Public Data Restrictions

Ensure Copilot does not learn from or retain sensitive municipal data in a way that could expose it to unintended users.
Configure Copilot’s data processing settings to ensure that Amsterdam’s proprietary documents remain within secured environments.
Regularly purge AI-generated data that is no longer needed, in line with data minimisation principles.
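Automated purging of AI-generated artefacts past their retention window can be sketched as a scheduled job. The retention period, in-memory store, and artefact ages below are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # illustrative retention window

now = datetime.now(timezone.utc)

# Hypothetical AI-generated artefacts with creation timestamps
artefacts = [
    {"id": "summary-001", "created": now - timedelta(days=45)},  # expired
    {"id": "summary-002", "created": now - timedelta(days=5)},   # still valid
]

def purge_expired(items, now):
    """Keep only artefacts still inside the retention window."""
    return [a for a in items if now - a["created"] <= RETENTION]

artefacts = purge_expired(artefacts, now)
```

Run on a schedule, a purge like this turns the retention policy from a written rule into an enforced one.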

No Two AI Implementations Are Alike

Every AI implementation presents unique challenges, and while the City of Amsterdam may have taken the steps it deemed necessary, the complexity of deploying AI in highly regulated environments means that unforeseen hurdles are inevitable. The key to success isn’t simply avoiding challenges—it’s having the right expertise in place to identify risks early, adapt to evolving requirements, and ensure that security, governance, and compliance remain at the forefront. A structured approach, combined with the right oversight and agile response mechanisms, can make the difference between an implementation that stalls and one that thrives.

We understand that AI adoption is a journey, not a one-time decision. Our role is to provide deep expertise, strategic guidance, and hands-on support to ensure that you can proactively navigate obstacles and fine-tune your AI deployments with confidence. We’re here to help municipalities and enterprises alike turn potential roadblocks into opportunities for smarter, more effective AI integration, without compromising on security or compliance. If you're looking for a trusted partner to help you implement AI the right way, let’s talk.