Sandbox Under Construction: Where Do We Build in the Meantime?

January 17, 2025

The EU Artificial Intelligence Act mandates that by August 2, 2026, each Member State must establish at least one AI regulatory sandbox: a controlled environment designed to foster the safe and innovative development, testing, and validation of AI systems. However, with Article 5 (Prohibited AI Practices) becoming enforceable as early as February 2, 2025, developers, providers, and deployers face a regulatory gap. How can they ensure that their systems are built and tested safely, without causing harm, before the required sandboxes are in place?

Europe currently has only a few operational AI sandboxes, notably in Norway and the United Kingdom, both of which sit outside the EU's legislative jurisdiction. Several EU Member States have made progress in establishing AI regulatory sandboxes, though they remain at varying stages of development. Spain is leading with an operational pilot sandbox focused on testing and compliance across multiple sectors, with plans for broader expansion. Denmark has partially operational sandboxes in healthcare and finance, though further sector inclusion and scalability are needed. France is advancing through pilot projects in healthcare and finance, with efforts underway to extend capacity to other industries.

The Netherlands has early-stage, partially operational sandboxes. Germany is running pilot sandboxes. Lastly, Finland is in the developmental phase. While these countries show promise, most still face challenges in fully aligning with the EU AI Act’s requirements and ensuring comprehensive sector coverage before the 2026 deadline.

EU AI Act, Article 57: Regulatory Sandboxes

What Is an AI Sandbox?

An AI sandbox is a controlled environment where businesses can safely develop, test, and refine innovative AI systems under the supervision of regulatory authorities. It minimises risks by allowing experimentation within a structured framework, ensuring technologies are tested responsibly without causing harm. While no universal definition exists, AI sandboxes serve two primary functions: enabling businesses to innovate and providing regulators with insights to shape effective policies.

Originally developed for the financial sector to test technologies like digital payments, sandboxes have expanded into industries such as healthcare, energy, and transportation. In these sectors, they facilitate the testing of complex systems like autonomous vehicles or predictive healthcare models, balancing innovation with compliance.

AI Sandbox Architecture from Norwegian Cognitive Centre

AI sandboxes are vital for aligning technological advancement with ethical and legal standards. Under the EU AI Act, these environments will play a critical role in ensuring AI systems meet regulatory requirements. However, their current availability in Europe is limited, raising a critical question: how can developers test systems that carry potential risks in the absence of these controlled environments?

The answer lies in the careful documentation of risk mitigation strategies during the development and testing phases. If an AI system, at its worst-case scenario, were deployed and deemed prohibited or borderline under the EU AI Act, regulators would likely scrutinise the steps taken during testing to minimise risks. Developers must demonstrate that they have acted responsibly, implementing processes to avoid harm and adhering to best practices to the greatest extent possible.

This challenge emphasises the importance of proactive risk management and ethical considerations in AI development, even in the absence of formal sandboxes. In the next section, we will explore strategies for navigating this complex landscape and ensuring compliance while fostering innovation.

Leveraging Existing Resources

In the absence of widespread AI regulatory sandboxes in Europe, developers, providers, and deployers must creatively leverage available tools and technologies to ensure their systems are tested responsibly and aligned with the principles of the EU AI Act. While formal infrastructure is still in development, there are several resources that can effectively support this interim period.

Simulation platforms are invaluable for creating controlled environments where AI systems can be rigorously tested. Tools like Simulink offer robust simulation and modeling capabilities, particularly for testing AI algorithms in robotics, autonomous systems, and control systems. Likewise, AnyLogic provides a versatile platform for simulating complex systems, making it ideal for testing AI models in logistics, manufacturing, and supply chains.
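To make the idea concrete, the sketch below shows the general shape of a simulation harness, independent of any particular platform: a toy environment model, a stand-in AI policy, and a loop that counts safety violations across repeated episodes. The environment, policy, and threshold are all hypothetical.

```python
import random

SAFE_MAX_TEMP = 80.0          # hypothetical safety threshold (degrees C)

def plant_model(temp, heater_power):
    """Toy environment: temperature responds to heater power plus noise."""
    return temp + 0.1 * heater_power - 0.5 + random.gauss(0, 0.2)

def ai_policy(temp):
    """Stand-in for the AI controller under test (here a naive rule)."""
    return 100.0 if temp < 70.0 else 0.0

def run_episode(steps=500, start_temp=20.0):
    """Run one simulated episode and count safety violations."""
    temp, violations = start_temp, 0
    for _ in range(steps):
        action = ai_policy(temp)
        temp = plant_model(temp, action)
        if temp > SAFE_MAX_TEMP:
            violations += 1
    return violations

if __name__ == "__main__":
    results = [run_episode() for _ in range(100)]
    print(f"episodes with safety violations: {sum(v > 0 for v in results)}/100")
```

The same pattern scales up in dedicated tools: the environment model becomes a validated physical or business-process simulation, and the logged violations feed directly into risk documentation.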

Synthetic data tools also play a crucial role in AI development, particularly when real-world data is unavailable or poses privacy concerns. Tonic AI allows developers to generate synthetic datasets that closely mimic real-world conditions, enabling safe and efficient model testing without compromising sensitive data. MOSTLY AI takes this further by specialising in privacy-preserving synthetic data, which is particularly beneficial in sectors like finance and healthcare where data security is paramount.

MOSTLY AI
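Neither vendor's API is shown here; as a minimal illustration of the underlying idea, the sketch below fits simple per-column distributions from a hypothetical sensitive table and samples fresh records from them. Production tools go much further, modelling joint structure and adding formal privacy guarantees.

```python
import numpy as np
import pandas as pd

def fit_and_sample(real: pd.DataFrame, n: int, seed: int = 0) -> pd.DataFrame:
    """Very naive synthesiser: numeric columns -> Gaussian fit,
    categorical columns -> empirical frequencies. Ignores correlations."""
    rng = np.random.default_rng(seed)
    synthetic = {}
    for col in real.columns:
        if pd.api.types.is_numeric_dtype(real[col]):
            synthetic[col] = rng.normal(real[col].mean(), real[col].std(), n)
        else:
            freqs = real[col].value_counts(normalize=True)
            synthetic[col] = rng.choice(freqs.index.to_numpy(), size=n,
                                        p=freqs.to_numpy())
    return pd.DataFrame(synthetic)

# Hypothetical 'patients' table standing in for sensitive production data.
real = pd.DataFrame({
    "age": [34, 51, 29, 62, 45],
    "department": ["cardiology", "oncology", "cardiology", "icu", "oncology"],
})
print(fit_and_sample(real, n=3))
```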

For model validation and performance tracking, developers can rely on tools that provide transparency and insight into AI systems. Weights & Biases is an excellent platform for tracking experiments, visualising performance metrics, and identifying potential issues during development. In addition, AI Fairness 360 (IBM) offers an open-source toolkit to detect and mitigate bias in AI models, ensuring ethical compliance. Complementing these is the Microsoft Responsible AI Toolbox, which provides comprehensive tools for fairness, transparency, and interpretability.
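As a rough sketch of how tracking and bias checks fit together, the example below computes a disparate-impact ratio by hand from hypothetical predictions and logs it to Weights & Biases alongside a placeholder accuracy figure. It assumes the wandb package is installed and an account is configured; the project name and numbers are made up, and the fairness calculation is plain NumPy rather than the AI Fairness 360 API.

```python
import numpy as np
import wandb

# Hypothetical predictions on a held-out set, plus a protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

# Disparate impact: selection rate of one group divided by the other's.
rate_a = y_pred[group == "a"].mean()
rate_b = y_pred[group == "b"].mean()
disparate_impact = rate_b / rate_a

# Record the run so the metric history is auditable later.
run = wandb.init(project="eu-ai-act-prep", config={"model": "baseline-v1"})
wandb.log({"accuracy": 0.87, "disparate_impact": disparate_impact})
run.finish()
```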

Data privacy and security remain critical concerns, particularly when working with sensitive workloads. Azure Confidential Computing safeguards AI workloads during processing, ensuring compliance with privacy standards.

For aligning AI systems with emerging regulations, compliance and governance tools are indispensable. 360°AI from Holistic AI evaluates systems for adherence to regulatory and ethical guidelines, providing actionable insights into performance and risks.

These tools don’t replicate the full functionality of regulatory sandboxes, which are more comprehensive ecosystems combining real-world testing, regulatory oversight, and iterative feedback between developers and regulators. That said, these tools do have utility. They help developers prepare for future sandbox participation by refining systems, identifying potential compliance gaps, and ensuring that innovations meet certain ethical and technical standards.

Best Practices for Compliance

In a landscape where the full regulatory framework for AI is still evolving, developers must adopt proactive measures to ensure compliance with the EU AI Act and build trust with regulators, users, and stakeholders. While formal sandboxes are not yet widespread, adhering to best practices can help developers navigate this uncertain terrain responsibly and effectively.

Conduct Comprehensive Risk Assessments

Understanding potential risks posed by AI systems is the foundation of compliance. Developers should evaluate the possible societal, ethical, and technical implications of their systems, considering worst-case scenarios and implementing safeguards to mitigate them. Conducting these assessments not only ensures safety but also aligns development processes with the principles of accountability and transparency outlined in the EU AI Act.
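One lightweight way to make such assessments tangible is a scored risk register that is reviewed on every release. The sketch below is illustrative only; the risks, scales, and review threshold are assumptions, not anything prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    severity: int     # 1 (negligible) .. 5 (critical)
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

register = [
    Risk("Model output used for credit decisions discriminates by age",
         likelihood=3, severity=5, mitigation="Fairness testing on each release"),
    Risk("Chatbot not recognisable as AI to end users",
         likelihood=2, severity=4, mitigation="Persistent AI disclosure banner"),
]

# Flag anything above an (illustrative) review threshold for sign-off.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    flag = "REVIEW" if risk.score >= 12 else "ok"
    print(f"[{flag:6}] {risk.score:2d}  {risk.description}")
```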

EU AI Act, Article 50: Transparency Obligations

Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use.

Implement Robust Documentation Practices

Transparency is key to compliance. Maintaining clear, detailed records of development processes, testing protocols, and risk mitigation strategies is essential. These records demonstrate a developer’s commitment to ethical standards and provide evidence of due diligence if questions about compliance arise. A well-documented development lifecycle is also a crucial resource for engaging with regulators and participating in future sandbox environments.

Setting up a documented AI auditing process
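In practice, this can start as something very simple, such as an append-only log of test runs. The sketch below writes one JSON record per run; the file name and fields are illustrative assumptions, not a prescribed schema.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_development_log.jsonl")   # hypothetical location

def record_test_run(system: str, version: str, dataset: str,
                    risks_checked: list[str], outcome: str) -> None:
    """Append one development/testing record as a JSON line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "version": version,
        "dataset": dataset,
        "risks_checked": risks_checked,
        "outcome": outcome,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_test_run(
    system="triage-assistant", version="0.4.2", dataset="synthetic-v3",
    risks_checked=["bias", "robustness", "privacy"], outcome="passed",
)
```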

Prioritise Explainability and Interpretability

AI systems must be explainable and interpretable to foster trust and accountability. Developers should integrate mechanisms that make their systems’ decision-making processes transparent and understandable to both technical and non-technical audiences. This ensures that stakeholders, including regulators, can evaluate AI behaviour and outcomes effectively. It is also important to consider that other legislation, such as Article 22 of the GDPR, gives individuals the right to demand an explanation of how an AI system reached a decision that affects them.

How Explainable AI affects end users
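As one concrete example of per-decision explanation, the open-source shap package can attribute an individual prediction to its input features; the model and data below are synthetic stand-ins, and the exact API may differ slightly across shap versions.

```python
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for a real decision model and its input data.
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# SHAP attributes each prediction to the input features.
explainer = shap.Explainer(model)
explanation = explainer(X[:10])

# Per-decision explanation for the first instance: which features drove it.
for i, contribution in enumerate(explanation.values[0]):
    print(f"feature_{i}: {contribution:+.2f}")
```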

Partner with Experts

Compliance with AI regulations requires expertise in both technology and governance. At Digital Bricks, we specialise in helping organisations adopt and accelerate AI responsibly, providing strategic guidance to ensure compliance with emerging regulations like the EU AI Act. Our tailored workshops, risk assessments, and ethical AI frameworks empower businesses to navigate complex requirements with confidence. By collaborating with experienced partners, developers can stay ahead of regulatory changes while focusing on innovation.

Incorporate Ethical AI Frameworks

Adopting ethical AI principles is no longer optional; it is a necessity for organisations aiming to remain competitive and compliant. Developers should integrate frameworks like the EU Ethics Guidelines for Trustworthy AI to design systems that prioritise fairness, transparency, and accountability from the ground up.

Integrating responsible practices in development, testing, and deployment.

Engage in Ongoing Learning and Adaptation

Compliance is not a one-time task but an ongoing process. Developers must stay informed about updates to the EU AI Act and evolving industry standards. We offer continuous education and support, ensuring that you and your team are equipped to adapt to new regulatory landscapes while maintaining momentum in AI innovation.

Design for Privacy and Security

Privacy and security are critical pillars of compliance. Developers should implement robust measures to protect sensitive data and ensure systems are designed with security at their core. Skyflow, for instance, last month introduced a solution that secures the AI agent lifecycle, protecting sensitive information through de-identification and anonymisation.

Skyflow Agentic AI Security & Privacy

It is a purpose-built AI Gateway that protects sensitive interactions with its two-way data rehydration capability. The solution includes authorisation and auditing tools that help ensure compliance with the EU AI Act, enabling companies to build and deploy agents confidently while meeting legal requirements. This is just one example of how to prioritise privacy and security.
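The sketch below is not Skyflow's API; it only illustrates the general de-identification and rehydration idea: obvious identifiers are replaced with tokens before a prompt reaches an AI agent, and a mapping allows authorised consumers to restore them afterwards. The patterns and token format are assumptions for illustration.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def deidentify(text: str) -> tuple[str, dict[str, str]]:
    """Replace obvious identifiers with placeholder tokens, keep a mapping."""
    mapping: dict[str, str] = {}

    def replace(match: re.Match, kind: str) -> str:
        token = f"<{kind}_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token

    text = EMAIL.sub(lambda m: replace(m, "EMAIL"), text)
    text = PHONE.sub(lambda m: replace(m, "PHONE"), text)
    return text, mapping

def rehydrate(text: str, mapping: dict[str, str]) -> str:
    """Restore original values for authorised consumers of the response."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

prompt = "Contact Jane at jane.doe@example.com or +31 20 123 4567 about her claim."
safe_prompt, mapping = deidentify(prompt)
print(safe_prompt)                     # identifiers replaced by tokens
print(rehydrate(safe_prompt, mapping)) # originals restored downstream
```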

By following these best practices, developers can demonstrate their commitment to responsible AI development, mitigate risks, and position themselves as leaders in ethical AI innovation.

Webinar Invitation

As the EU AI Act begins to shape the future of artificial intelligence, the onus is on developers, providers, and deployers to embrace compliance as a driver of innovation. While the absence of widespread sandboxes presents challenges, leveraging existing tools, adhering to best practices, and preparing for upcoming regulatory changes will position organisations to thrive.

To dive deeper into the EU AI Act and its implications, join us for our upcoming webinar on January 29, 2025. We’ll explore the key articles set to come into effect on February 2, 2025, discuss practical strategies for compliance, and provide insights to help you prepare for the future of AI.
