Ensuring Compliance and Safety: Supporting AI-Enhanced Medical Devices Under the New AI Act

The rise of artificial intelligence (AI) brings exciting possibilities, but also serious ethical and transparency challenges that current regulations don’t fully address. As Europe prepares to roll out the new AI Act, companies around the world that want to bring their AI products and services to the EU will need to meet tough new standards.

Medical AI systems will be carefully examined to make sure they comply with the IVDR and MDR.

A modern illustration depicting AI in the medical field, showing a combination of AI technology and medical devices.
Picture produced with DALL·E AI technology

How to work with AI in the medical device industry under the new AI Act

The goal of the new AI Act is to address the difficulties and dangers associated with integrating AI into medical devices. It seeks to enhance previous rules by guaranteeing that AI technologies used in healthcare are not only state-of-the-art but also trustworthy, transparent, and safe. The AI Act lays out strict rules with the goal of fostering trust between patients and healthcare practitioners, thereby encouraging the responsible development and application of AI systems. This is particularly important in the medical field, where technical failures can have serious consequences.

Under the new requirements, AI systems intended for medical use must undergo a thorough evaluation to make sure they comply with the MDR and IVDR. As part of this evaluation, notified bodies conduct conformity assessments to check whether the AI systems meet the necessary criteria. Manufacturers are required to compile comprehensive technical documentation detailing every aspect of the system’s development, design, risk assessment, clinical evaluation, and performance. This documentation is essential for demonstrating the safety and effectiveness of the AI system before it is placed on the market.

AI systems under the MDR and IVDR are categorized based on their intended use and potential risks. Manufacturers have to implement robust risk management strategies to identify, assess, and mitigate potential risks throughout the product lifecycle. This ensures comprehensive risk management from development through deployment and use.

Many of the Act’s requirements are also found in the EU MDR and EU IVDR, for instance the need for technical documentation, instructions for use, and a quality management system. Some manufacturers who have already certified their products under the MDR may therefore need to update their technical documentation to align with the AI Act’s requirements.

These are the new requirements that are not in the MDR and IVDR:

  • Governance and data management requirements for training and testing data sets: Manufacturers have to put strong governance and data management procedures in place to guarantee the integrity and legality of datasets used for AI testing and training.
  • New record-keeping requirements, including the automatic recording of events (logs) over the system’s lifetime: To guarantee traceability and accountability, AI systems must automatically log events throughout their lifecycle, improving the system’s integrity and dependability (a minimal sketch of what such logging might look like follows this list).
  • Transparent design requirements so deployers can interpret the output and use it appropriately: AI systems must be designed transparently enough that users can understand how the system arrives at its results. This transparency allows users to trust the AI system and use it correctly.
  • Human oversight design requirements: AI systems must be designed with human oversight in mind, so that qualified people can step in and take control of the system when needed (see the second sketch after this list).
  • Accuracy and cybersecurity requirements: AI systems must achieve an appropriate level of accuracy and be equipped with robust cybersecurity protections to safeguard data integrity, resist cyberattacks, and prevent unauthorized access. Respecting data protection rules such as the GDPR is essential to protecting sensitive information.
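
To make the record-keeping requirement above more concrete, here is a minimal sketch, in Python, of how an AI-enabled device might automatically record events in an append-only, traceable log. The class, file name, and field names are purely illustrative assumptions on our part, not anything prescribed by the AI Act or any device standard.

```python
import hashlib
import json
from datetime import datetime, timezone


class LifecycleEventLog:
    """Append-only JSON-lines log of events over an AI system's lifetime."""

    def __init__(self, path: str, model_version: str):
        self.path = path
        self.model_version = model_version

    def record(self, event_type: str, payload: dict) -> None:
        """Write one timestamped, traceable event entry."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": self.model_version,
            "event_type": event_type,  # e.g. "inference", "model_update", "override"
            # Hash the payload so the event stays traceable without storing
            # sensitive patient data in the log itself.
            "payload_sha256": hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode("utf-8")
            ).hexdigest(),
        }
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")


# Example: record a single inference event.
log = LifecycleEventLog("device_events.jsonl", model_version="1.4.2")
log.record("inference", {"input_id": "sample-001", "output": "flagged for review"})
```

A real device would of course log far more context and protect the log against tampering; the point is simply that each event is timestamped, tied to a model version, and traceable without exposing sensitive data.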
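
Similarly, the human oversight requirement can be pictured as a simple gate that routes low-confidence outputs to a qualified clinician instead of acting on them automatically. The sketch below is an assumption-laden illustration only: the threshold, function name, and actions are hypothetical and not taken from the AI Act or any device standard.

```python
# Hypothetical confidence cut-off; a real device would justify this value
# through clinical validation rather than choosing it arbitrarily.
REVIEW_THRESHOLD = 0.85


def route_prediction(prediction: str, confidence: float) -> dict:
    """Decide whether an AI output can be reported or must be escalated to a clinician."""
    if confidence < REVIEW_THRESHOLD:
        return {
            "action": "refer_to_clinician",
            "prediction": prediction,
            "confidence": confidence,
            "reason": "confidence below review threshold",
        }
    return {
        "action": "report_with_clinician_sign_off",
        "prediction": prediction,
        "confidence": confidence,
    }


# Example: a borderline result is escalated rather than reported automatically.
print(route_prediction("suspected anomaly", 0.72))
```

In practice, the escalation path and threshold would be validated and documented as part of the manufacturer’s risk management.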

SeproTec’s alignment with the AI Act

As Europe prepares to enforce the new AI Act, integrating AI into medical devices in a compliant way has become a pressing challenge for the industry. Manufacturers must bring their AI systems into line with the Act’s new requirements. SeproTec plays a crucial role in this regulatory landscape by offering essential language and IP services that support medical device manufacturers at every stage of their workflow.

From the early development phase, we assist in translating research documents and regulatory requirements, ensuring clear communication and adherence to international standards. During regulatory approval, we provide accurate translation and localization of technical documentation, clinical evaluations, risk assessments, and performance data to meet the stringent requirements of the AI Act and other regulatory frameworks.

SeproTec’s ISO 13485 certification ensures that our services support the maintenance of quality management systems, helping medical device manufacturers comply with legal and quality standards. Additionally, SeproTec aids in the translation of training and testing datasets, ensuring data integrity and compliance with the AI Act’s governance requirements.
