Artificial Intelligence Policy

Transparency and Compliance with EU AI Act (EU Regulation 2024/1689)

Version: 07/10/2025
Address: Via degli Anziani 14, 11013 Courmayeur (AO)
Contact: info@emsy.io
EU AI Act: Art. 4, 5, 50

⚠️ Important Notice - Interaction with Artificial Intelligence

EMSy uses artificial intelligence systems to support emergency medical professionals. These systems DO NOT replace clinical judgment, official guidelines (ERC, ILCOR, SIAARTI) or medical supervision.

1. Purpose and Scope of Application

This policy describes how EMSy S.r.l. ("EMSy", "we", "our") uses artificial intelligence (AI) systems on the emsy.io platform in compliance with Regulation (EU) 2024/1689 (EU AI Act) and applicable Italian and European legislation.

1.1 What is EMSy

EMSy is an educational and decision support platform for emergency healthcare professionals (physicians, nurses, EMT/OTS responders). It uses conversational AI systems for:

  • AI Assistant: Virtual assistant for in-depth analysis of clinical cases, protocols, and pharmacology
  • AI Coach: Personalized mentor for performance analysis, study suggestions, and feedback
  • Emergency Simulations: Interactive clinical scenarios for emergency training

1.2 Limitations of AI Systems

⚠️ EMSy's AI systems are educational support tools and DO NOT:

  • Replace medical diagnosis, prescription or treatment
  • Provide binding medical consultations or certifications
  • Eliminate the need for supervision by qualified personnel
  • Guarantee the absence of errors or 100% accuracy
  • Constitute medical devices under Regulation (EU) 2017/745 (MDR)

User responsibility: Clinical decisions must always be based on the professional judgment of the physician/operator, official guidelines, patient informed consent and the specific clinical context.

2. Prohibited AI Practices (Art. 5 EU AI Act)

In compliance with Article 5 of the EU AI Act, EMSy DOES NOT use the following prohibited practices:

❌ Subliminal Techniques and Manipulation

No AI systems that use subliminal techniques or dark patterns to materially distort user behavior in ways that cause significant harm.

❌ Emotion Recognition

No emotion recognition systems based on facial or voice analysis in workplace or educational contexts.

❌ Social Scoring

No social evaluation or user classification systems based on behavior/personality for discriminatory decisions.

❌ Indiscriminate Biometric Scraping

No untargeted collection of biometric data (faces, voice prints) from the internet or CCTV for recognition databases.

❌ Predictive Crime Profiling

No systems that predict the risk of criminal behavior based solely on profiling or on personal traits and characteristics.

✅ Art. 5 Compliance: EMSy exclusively uses text-based conversational AI systems for educational support, without biometric recognition, behavioral manipulation or social scoring components.

3. Transparency and Disclosure (Art. 50 EU AI Act)

3.1 AI Interaction Notice

In compliance with Article 50(1) of the EU AI Act, EMSy ensures that:

  • All conversational AI systems (AI Assistant, AI Coach, Emergency Simulations) display a disclosure banner informing the user they are interacting with an AI system
  • The disclosure is clear, visible and distinguishable, positioned above the chat interface
  • The information is provided before the first interaction or at the time of exposure
  • The notice includes links to this AI Policy and to the problem reporting form (an illustrative banner sketch follows this list)
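
The sketch below is illustrative only and assumes a plain browser (DOM) chat page; the element identifiers, banner wording and URLs are invented for this example and are not EMSy's production markup.

```typescript
// Illustrative sketch: mounts an AI disclosure banner immediately above a chat container.
// IDs, wording and URLs are assumptions for this example, not EMSy's actual implementation.
function mountAiDisclosureBanner(chatContainer: HTMLElement): void {
  const banner = document.createElement("div");
  banner.setAttribute("role", "note");
  banner.className = "ai-disclosure-banner";
  banner.innerHTML =
    "You are interacting with an artificial intelligence system. " +
    "Responses are educational support only and do not replace clinical judgment. " +
    '<a href="/ai-policy">AI Policy</a> | <a href="/ai-report">Report a problem</a>';
  // The notice is inserted before the chat so it is visible prior to the first interaction.
  chatContainer.parentElement?.insertBefore(banner, chatContainer);
}
```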

3.2 AI-Generated Content

Text content generated by EMSy's AI systems (chat responses, feedback, suggestions) is:

  • Identifiable as AI-generated: Every response comes from an interface clearly labeled as "AI"
  • Subject to human verification: Users are encouraged to verify information with official sources
  • Not used for disinformation: EMSy does not generate deepfake content, synthetic audio/video or fake news articles

Note: EMSy currently generates only text content. If we generate synthetic images/audio/video in the future, we will apply watermarks and machine-readable metadata compliant with C2PA/Content Credentials standards (Art. 50(2) EU AI Act).

3.3 No Emotion Recognition or Biometric Categorization

EMSy DOES NOT use emotion recognition or biometric categorization systems. Therefore, the disclosure obligations under Art. 50(3) do not apply.

4. Third-Party AI Providers

EMSy DOES NOT develop proprietary AI models. We use advanced artificial intelligence models provided by industry-leading certified providers.

4.1 Certified AI Providers

Our AI systems are based on technologies provided by globally recognized companies, including:

  • Anthropic - Leader in AI safety and advanced language models
  • Google DeepMind - AI research and performance-optimized models
  • OpenAI - Pioneer in large language models
  • OpenRouter - Certified platform for AI API management

4.2 Automatic Model Selection

EMSy uses an intelligent routing system that automatically selects the most appropriate AI model based on request complexity (a simplified sketch of this logic follows the list below), ensuring:

  • Optimal medical accuracy for complex clinical cases
  • Response speed for emergency scenarios
  • Service reliability and availability (99.9% uptime)
  • Cost optimization for service sustainability
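
As a simplified, non-normative illustration of this routing idea, the heuristic below picks a model tier from a rough complexity estimate of the request. The tier names, thresholds and complexity signals are invented for this sketch and do not describe EMSy's actual selection logic.

```typescript
// Illustrative routing heuristic. Tier names, thresholds and signals are assumptions.
type ModelTier = "fast" | "balanced" | "high-accuracy";

interface AiRequest {
  prompt: string;
  feature: "assistant" | "coach" | "simulation";
}

// Rough proxy for complexity: longer prompts and clinical-analysis features score higher.
function estimateComplexity(req: AiRequest): number {
  const lengthScore = Math.min(req.prompt.length / 2000, 1);
  const featureScore = req.feature === "assistant" ? 0.5 : 0.2;
  return lengthScore + featureScore; // range 0..1.5
}

function selectModelTier(req: AiRequest): ModelTier {
  const score = estimateComplexity(req);
  if (score > 1.0) return "high-accuracy"; // complex clinical cases
  if (score > 0.4) return "balanced";
  return "fast"; // short, time-critical queries
}
```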

4.3 Zero Data Retention Policy

✅ EMSy user data is NOT used for AI model training

  • Conversations with AI Assistant and AI Coach are private and encrypted
  • We use "zero data retention" mode with providers
  • No protected health information (PHI) or personally identifiable information (PII) is shared
  • Users are instructed to NOT enter sensitive data from real patients
  • GDPR Art. 25 compliance (Data Protection by Design)

5. Security and Data Protection Measures

5.1 Filters and Content Moderation

EMSy implements technical measures to ensure safe and appropriate AI outputs:

  • Harmful content filters: Automatic blocking of outputs suggesting dangerous practices, incorrect dosages or non-evidence-based treatments
  • Medical validation: RAG (Retrieval-Augmented Generation) system that checks AI responses against official guideline databases (ERC, ILCOR, WHO)
  • Rate limiting: Request limits to prevent abuse and ensure response quality (a simplified example follows this list)
  • Logging and audit: Recording of AI interactions for quality assurance and incident response
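
As one concrete illustration of the rate-limiting item above, the sketch below shows a minimal fixed-window limiter keyed by session. The window length and request quota are placeholder values, not EMSy's actual limits.

```typescript
// Minimal fixed-window rate limiter, for illustration only.
const WINDOW_MS = 60_000;  // 1-minute window (assumed)
const MAX_REQUESTS = 20;   // maximum AI requests per window (assumed)

const windows = new Map<string, { start: number; count: number }>();

function allowRequest(sessionId: string, now: number = Date.now()): boolean {
  const w = windows.get(sessionId);
  if (!w || now - w.start >= WINDOW_MS) {
    windows.set(sessionId, { start: now, count: 1 }); // open a new window
    return true;
  }
  if (w.count >= MAX_REQUESTS) {
    return false; // quota exhausted: reject until the window resets
  }
  w.count += 1;
  return true;
}
```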

5.2 Personal Data Protection (GDPR)

The use of AI systems on EMSy is subject to our Privacy Policy and complies with GDPR:

  • Data minimization: Only strictly necessary data is processed (user prompt text, profession/level profile for contextualization)
  • Pseudonymization: Conversations are pseudonymized before reaching the AI provider (session ID only, no username); a sketch of this principle follows the list
  • GDPR rights: Access, deletion and portability of AI data are available from the GDPR section of your profile
  • Legal basis: Contract performance (Art. 6(1)(b) GDPR) for AI services included in the subscription
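
The sketch below illustrates the data-minimization and pseudonymization principle in this list: the payload forwarded to a third-party AI provider carries only a pseudonymous session identifier and coarse professional context, never account identity data. The field names are assumptions for this example, not a real provider schema.

```typescript
// Illustrative only: field names and types are assumptions, not an actual API schema.
interface UserAccount {
  userId: string;
  email: string;
  fullName: string;
  profession: "physician" | "nurse" | "emt";
  level: string;
}

interface ProviderPayload {
  sessionId: string;   // random per-conversation identifier (pseudonym)
  profession: string;  // used only to contextualize the answer
  level: string;
  prompt: string;
}

function buildProviderPayload(account: UserAccount, sessionId: string, prompt: string): ProviderPayload {
  // Deliberately omits userId, email and full name (data minimization, GDPR Art. 25).
  return { sessionId, profession: account.profession, level: account.level, prompt };
}
```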

⚠️ Important - Sensitive Data: NEVER enter the following in AI chats:

  • Identifying data of real patients (names, dates of birth, record numbers)
  • Information protected by professional secrecy without consent
  • Biometric or genetic data of identifiable persons

6. AI Literacy and Training (Art. 4 EU AI Act)

In compliance with Article 4 of the EU AI Act, EMSy ensures an adequate level of AI literacy for:

  • Internal staff: Periodic training on AI systems operation, limitations and risks
  • Professional users: Documentation, FAQ, guides on responsible use of AI tools
  • Stakeholders: Transparency on system capabilities and limitations through this policy

6.1 Internal AI Literacy Program

EMSy maintains a continuous training program that includes:

  • Basic principles of machine learning and large language models (LLM)
  • AI system limitations: hallucinations, bias, temporal drift
  • EU AI Act prohibited practices and red flag identification
  • Escalation procedures for incident response (dangerous outputs, malfunctions)
  • Sensitive data management and privacy protection in AI context
  • Training records (dates, participants, materials) kept for audit

6.2 User Resources

EMSy users have access to:

  • AI FAQ: Frequently asked questions about how AI Assistant and AI Coach work
  • In-app tutorials: Contextual guides on best practices for effective prompts and response verification
  • Correct/incorrect use examples: Case studies on appropriate and inappropriate AI situations
  • Reporting channel: Form for feedback and problem reporting

7. Problem Reporting and Incident Response

EMSy encourages users and stakeholders to report:

  • Dangerous, incorrect or inappropriate AI outputs
  • Suspected system malfunctions
  • Bias or discrimination in responses
  • Privacy or data security violations
  • Improper use of the platform

7.1 Reporting Form

Have you encountered a problem with AI systems?

Fill out the reporting form. All reports are evaluated by the AI security team within 48 hours.


7.2 Incident Management Process

Reports follow this workflow:

  1. Receipt and triage (within 24h): Severity classification (critical/high/medium/low); an illustrative model of this step follows the list
  2. Investigation: Log analysis, problem reproduction, root cause identification
  3. Corrective action: Model patch, filter update, system prompt modification, escalation to AI provider
  4. User notification: Feedback to reporter with outcome and actions taken
  5. Post-mortem and prevention: Incident documentation, training update, preventive improvements
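
For illustration, the triage step could be modeled along these lines. The severity labels match the workflow above and the 24-hour window matches step 1; everything else (types, field names, classification rules) is an assumption for this sketch.

```typescript
// Illustrative triage model; not EMSy's actual incident-management code.
type Severity = "critical" | "high" | "medium" | "low";

interface AiIncidentReport {
  id: string;
  receivedAt: Date;
  description: string;
  patientSafetyRisk: boolean; // reporter flags a potential risk to patient safety
}

function classifySeverity(report: AiIncidentReport): Severity {
  if (report.patientSafetyRisk) return "critical"; // escalate immediately
  // A fuller classification would weigh reproducibility, scope and data impact.
  return "medium";
}

function triageDeadline(report: AiIncidentReport): Date {
  // Every report is triaged within 24 hours of receipt (step 1 above).
  return new Date(report.receivedAt.getTime() + 24 * 60 * 60 * 1000);
}
```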

Urgent contact: For critical issues that put patient safety at risk, immediately contact info@emsy.io with subject "URGENT - AI Safety Incident".

8. AI System Classification

Under the EU AI Act, EMSy evaluates its classification as follows:

8.1 Not a High-Risk AI System (Currently)

EMSy is not currently classified as a High-Risk AI System (Annex III EU AI Act) because:

  • NOT a medical device: Does not provide direct diagnosis, prognosis or therapeutic decisions (does not fall under MDR/IVDR Art. 2)
  • Is an educational tool: Supports professional training and updates, does not replace clinical decisions
  • Mandatory human supervision: All AI outputs require verification by the professional

⚠️ Important note: If in the future EMSy develops features that:

  • Constitute a software medical device (SaMD) under MDR
  • Are used for direct clinical decisions without human supervision
  • Fall under High-Risk categories in Annex III (e.g., critical patient management)

...then EMSy will become a High-Risk AI System with enhanced compliance obligations (QMS, technical documentation, EU Database notification, conformity assessment). Users will be informed in advance of such changes.

8.2 Not a GPAI Model Provider

EMSy IS NOT a General-Purpose AI (GPAI) model provider under Art. 53 EU AI Act. We exclusively use third-party models via API. GPAI obligations (training data documentation, copyright measures, systemic risk assessment) apply to upstream providers (Anthropic, Google, OpenAI).

9. Updates and Version Control

This AI Policy may be updated to reflect:

  • Changes to AI models used or providers
  • New AI features added to the platform
  • Regulatory updates (EU AI Act, AI Office guidelines, EDPB)
  • Security and transparency measure improvements

Users will be informed of substantial changes via:

  • In-app banner with link to new policy version
  • Email to registered address (if communications consent is active)
  • Version date update at the top of this page

Version history:

v1.0 - 07/10/2025: First AI policy publication, EU AI Act Art. 4, 5, 50 compliance

10. Contacts and Questions

For any questions about the use of artificial intelligence on EMSy, your rights or for reports:

Email: info@emsy.io
Subject: "AI Policy - [your request]"

Address: EMSy S.r.l., Via degli Anziani 14, 11013 Courmayeur (AO), Italy

Data Protection Manager (DPM): Simon Grosjean
For privacy and AI matters: info@emsy.io

11. Regulatory References

This AI Policy is drafted in compliance with:

  • Regulation (EU) 2024/1689 (EU AI Act) - Artificial Intelligence Regulation; applicable articles: Art. 4 (AI Literacy), Art. 5 (Prohibited practices), Art. 50 (Transparency)
  • Regulation (EU) 2016/679 (GDPR) - Personal data protection
  • D.Lgs. 196/2003 (Italian Privacy Code), as amended by D.Lgs. 101/2018
  • Regulation (EU) 2017/745 (MDR) - Medical devices (for classification assessment)
  • EDPB Guidelines - Guidelines 05/2020 on consent; Guidelines 4/2019 on Data Protection by Design and by Default (Art. 25)

EU AI Act Timeline:

  • February 2, 2025: Art. 4 (AI Literacy) and Art. 5 (Prohibited practices) become applicable
  • August 2, 2025: GPAI provider obligations apply (not applicable to EMSy)
  • August 2, 2026: Art. 50 (Transparency obligations - chatbot disclosure) becomes applicable
  • August 2, 2027: Full application of the obligations for high-risk AI systems
