The EU has taken a major step toward regulating AI with the introduction of the EU AI Act, one of the first laws of its kind in the world. The EU’s implicit goal is for the AI Act to do for the ethical use of AI what the GDPR did for the protection of personal data: set the global rules through the “Brussels effect.” While the intention to create a legal framework for the safe and ethical use of AI is both fair and welcome, it is also clear that the Act’s impact on sectors like digital health will be significant and potentially challenging.
What Is the EU’s New AI Act and What Are Its Implications for Digital Health?
The EU’s AI Act is Europe’s attempt to set global standards for the ethical and safe use of AI. The Act officially came into force on August 1, 2024, and its provisions will be enforced in stages over the following months and years. It places strict obligations on AI providers and operators, especially those whose systems are considered high-risk. Under the AI Act, providers of high-risk AI systems must comply with rigorous standards, including comprehensive risk management protocols, extensive documentation detailing how their AI models function, and continuous human oversight throughout the AI system’s entire lifecycle.
The AI Act classifies AI applications into three risk categories. First, applications and systems that pose an unacceptable risk, such as government-run social scoring of the kind used to police citizens in China, are prohibited. Second, high-risk applications, such as AI-powered CV-scanning tools that rank job applicants, are subject to specific legal requirements. Third, applications that fall outside the first two categories are less regulated, but must usually still inform users when they are interacting with AI.
Digital health technologies deal directly or indirectly with patient health data, whether anonymized or not. As a result, digital health solutions are effectively swept into the “high-risk” category under the new regulation. This means that AI-based digital health solutions will be subject to the most stringent regulatory requirements, even when the risk they pose does not objectively warrant such close scrutiny.
The AI Act places a strong emphasis on transparency and accountability, forcing digital health providers to confront a key challenge in AI: the “black box” paradox. Many advanced AI systems, particularly those driven by machine learning, are known for their lack of “explainability.” While these systems can produce highly accurate results, as shown in peer-reviewed research, providers often cannot clearly explain how the results are derived. This is the essence of the “black box” paradox.
To comply with the AI Act, providers will need to supply detailed justifications for AI-driven decisions, which in many cases is simply not possible. Providers and operators will have to balance the power of AI against the demands of regulatory compliance. This requirement is likely to slow the adoption of AI in routine clinical care, especially for highly complex models that resist explanation, such as those used in medical imaging. It would be like asking a radiologist to explain the inner workings of their own brain when diagnosing a patient.
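To make the “black box” problem concrete, consider the following minimal sketch. It uses Python with scikit-learn, a random forest, and synthetic data as a hypothetical stand-in for clinical inputs; it is an illustration of the general issue, not a depiction of any real product. The model can be accurate and can output a risk score for an individual patient, yet post-hoc tools only approximate which inputs mattered overall:

```python
# Minimal sketch: an accurate model whose individual predictions
# lack an inherent, human-readable justification.
# (Synthetic data and model choice are illustrative assumptions.)
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular clinical data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")  # often high

# The model outputs a probability for one patient, but no per-case rationale.
print("Predicted risk:", model.predict_proba(X_test[:1])[0, 1])

# Post-hoc methods estimate which inputs mattered globally; they do not
# reconstruct the reasoning behind any single decision.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:3]
print("Most influential features (global, approximate):", top)
```

Post-hoc attribution of this kind can support transparency reporting, but it stops short of the per-decision justification that the Act appears to expect from high-risk systems.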
Trifecta of Regulatory Complexity
Adding to the complexity, the interaction between the AI Act and existing frameworks such as the Medical Device Regulation (MDR) and the GDPR remains unclear. Many of the new Act’s compliance requirements, relating to data protection, patient safety, and ethical standards, are already imposed on digital health providers by the existing regulations. For example, under the MDR, software medical devices are assigned to a risk class (I, IIa, IIb, or III) based on their intended use and risk profile. However, critics have pointed out the ambiguity in the classification process itself, as different notified bodies across the EU interpret the rules in their own way. In other words, the same software medical device could end up as Class I (low risk) or Class IIa (moderate risk), depending on which notified body you talk to.
The combined weight of the MDR, the GDPR, and now the AI Act creates a trifecta of regulatory complexity for digital health providers. Providers, especially startups and smaller innovators, may face longer and more costly pathways to market. Without clearer guidance from EU authorities, it will be difficult for them to prepare effective compliance strategies, potentially slowing innovation in areas where AI-driven healthcare solutions could deliver significant public health benefits.
The Bigger European Picture
This brings us to a larger concern: Europe’s declining competitiveness in new technologies, as outlined in Mario Draghi’s recent report. He and his team highlighted how Europe has fallen behind the US and China, particularly in emerging technologies such as AI. Since 2008, around 30% of European unicorns (startups valued at over $1 billion) have permanently relocated to the US, and the trend shows no sign of slowing. The AI Act could exacerbate the continent’s declining competitiveness even further.
For digital health startups, where product development is costly and complex and time-to-market is critical, more stringent regulation will most likely limit growth even further. Digital health startups already face a down market with limited funding, and they may find the additional regulatory hurdles too burdensome to overcome. In addition, the US tech ecosystem offers startups a more supportive financial environment, as VC funding in the health tech sector is significantly higher than in Europe. By relocating, European startups can tap into larger pools of capital, making it easier to fund compliance efforts or to bypass costly regulatory delays altogether.
An often overlooked problem is that large, predominantly US-based tech companies are far better positioned to navigate the complexities of the AI Act. These companies have deeper pockets and larger legal war chests, allowing them to comply with such regulations, or even find ways around them, much as they did with the GDPR. They can absorb the costs of compliance, adapt their technologies, and still thrive in the European market. European digital health startups and smaller companies simply don’t have the resources to do the same.
The AI Act, in its current form, leaves room for uncertainty about how specific provisions will be enforced and how penalties will be applied. For a digital health startup, this uncertainty can be crippling. The need to navigate unclear guidelines and comply with evolving standards could slow innovation and scare away investors, compounding the financial and operational challenges these startups already face.
Ultimately, the AI Act, while well-intentioned in its effort to create a safe and ethical AI landscape, risks inadvertently pushing Europe’s digital health innovators away. If Europe’s regulatory environment becomes too burdensome for smaller players, the region could lose its edge in one of the fastest-growing sectors of technology, with long-term effects for both its economy and global standing.
Path Forward
While the EU AI Act is a necessary step toward ensuring ethical and responsible AI use, its impact on digital health could be deeply restrictive. Here’s what I think needs to happen to mitigate the risks and ensure healthcare innovation can continue.
First, the EU needs to give AI-based digital health providers clearer guidance on how the AI Act will interact with existing legislation; in its current form, the Act leaves much to speculation. Streamlining these frameworks would reduce the compliance burden on startups, making it easier for them to bring new technologies to market.
In addition, startups and smaller companies in the early stages of developing AI-based digital health solutions should receive temporary exemptions from certain provisions of the law. Provisions such as the classification of high-risk AI and the strict documentation requirements, while essential for safety, can be overwhelming for smaller innovators, who already contend with a difficult market and complex regulatory demands. Flexibility in the early stages would allow them to focus on developing innovative products without carrying the full burden of regulatory compliance from day one. However, such exemptions must be balanced with mechanisms that maintain high ethical standards. For example, post-market surveillance could ensure that startups remain accountable for the safety and efficacy of their products as they grow.
The AI Act is an important step in the right direction for the development of ethical AI products, but it must be implemented in a way that encourages innovation rather than restricting it. By providing clearer guidance and allowing some regulatory flexibility, the EU can ensure that its new law promotes both technological progress and the highest ethical standards in healthcare.