EU AI Act · Article 27

Fundamental Rights Impact Assessment

Self-assessment of Qonera's impact on fundamental rights, prepared in accordance with Article 27 of Regulation (EU) 2024/1689.

Last reviewed: April 16, 2026

What Is a FRIA and When Is It Required?

Article 27 of the EU AI Act requires certain deployers of high-risk AI systems (principally bodies governed by public law, private entities providing public services, and deployers of high-risk systems used for creditworthiness assessment or life and health insurance risk pricing) to carry out a fundamental rights impact assessment before putting the system into use. The assessment must evaluate the potential impact on the individuals and groups affected by the system's output.

While Qonera operates as a professional research tool rather than a standalone high-risk AI system, we publish this assessment proactively as part of our commitment to transparency and to support our customers' own compliance processes.

AI System Description

Qonera is a multi-model AI research platform for professional teams. It runs user queries through multiple large language models (GPT-4.1, Claude Sonnet 4.5, Gemini 3.1 Pro) in parallel, then synthesises and stress-tests the results through a judge model (GPT-5.4). The platform provides source auditing, conflict analysis, evidence verification, and named human sign-off before outputs are shared externally.
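The platform's internals are not public; purely as an illustration, the parallel fan-out and judge step described above could be sketched as follows (all function and model names here are hypothetical stand-ins, not Qonera's actual API):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a provider API call; real calls would go to
# the respective model providers' endpoints.
def ask_model(model_name: str, query: str) -> dict:
    return {"model": model_name, "answer": f"{model_name} answer to: {query}"}

def judge(candidates: list[dict]) -> dict:
    # A real judge model would synthesise and stress-test the candidate
    # answers; this placeholder simply bundles them together.
    return {
        "synthesis": " / ".join(c["answer"] for c in candidates),
        "candidates": candidates,
    }

def run_query(query: str, models: list[str]) -> dict:
    # Fan the query out to all models in parallel, then pass every
    # candidate answer to the judge step for synthesis.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        candidates = list(pool.map(lambda m: ask_model(m, query), models))
    return judge(candidates)

result = run_query("What changed in Article 27?", ["model-a", "model-b", "model-c"])
```

The design point the sketch captures is that the judge sees every candidate answer, so no single provider's output reaches the user unexamined.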

The system is hosted in the EU (Frankfurt). AI model providers operate under contractual commitments not to train on customer data, with transfers protected by Standard Contractual Clauses.

Intended Purpose and Scope

Qonera is intended for professional research, analysis, and advisory workflows where AI-assisted outputs must be verifiable, reviewable, and defensible. Typical use cases include:

  • Legal research and due diligence support
  • Investment research and financial analysis
  • Compliance and regulatory review
  • Consulting and advisory report preparation
  • Source integrity auditing

Qonera is not intended for autonomous decision-making. All outputs require human review before external use.

Affected Groups and Fundamental Rights

Direct users: Employees and contractors of subscribing organisations who interact with the platform. Their rights to data protection (Art. 8 EU Charter) and fair working conditions (Art. 31) are addressed through GDPR-compliant data handling, transparent processing, and the requirement for human review of all outputs.

Indirect subjects: Individuals referenced in documents uploaded to or analysed by the platform. Their right to data protection is addressed through workspace-level retention policies, access controls, and the prohibition on using customer data for model training.

End recipients: Clients and stakeholders who receive outputs that were AI-assisted. Their right to information (Art. 11 EU Charter) is addressed through mandatory AI-generated labels on shared outputs and transparency disclosures.

Risk Identification and Mitigation

Risk: Inaccurate or fabricated AI output

Mitigated by multi-model stress testing, source integrity auditing, conflict detection, and mandatory human review before external use. Automated risk screening detects fabricated citations in real time.

Risk: Disclosure of personal data in AI output

Mitigated by automated PII detection in the runtime risk monitoring system. High-risk detections trigger incident reports and administrator notifications.
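As a minimal sketch of the detect-then-notify pattern described above (the patterns and function names are illustrative; a production PII detector would combine a trained model with locale-aware rules, not two regexes):

```python
import re

# Illustrative patterns only.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def screen_output(text: str) -> list[dict]:
    """Return one finding per PII match; an empty list means no detection."""
    findings = []
    for kind, pattern in PII_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append({"type": kind, "span": match.span()})
    return findings

def handle(text: str, notify_admin) -> bool:
    # Any detection opens an incident and notifies administrators;
    # the output is held until a human reviews the findings.
    findings = screen_output(text)
    if findings:
        notify_admin({"severity": "high", "findings": findings})
    return bool(findings)
```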

Risk: Inappropriate medical or legal advice

Mitigated by heuristic detection of prescriptive medical and legal language. Flagged responses are escalated to a secondary classifier. The platform includes disclaimers that it does not provide professional advice.
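The two-stage pattern above (cheap heuristic first, model-based classifier only for flagged responses) could be sketched like this; the marker phrases are invented for illustration and do not reflect the platform's actual heuristics:

```python
# Hypothetical phrase list; the real heuristics are not public.
PRESCRIPTIVE_MARKERS = (
    "you should take", "the recommended dose", "stop taking",
    "you are legally required", "you must file",
)

def needs_escalation(response: str) -> bool:
    """Cheap first-pass check; any match is sent to the secondary classifier."""
    lowered = response.lower()
    return any(marker in lowered for marker in PRESCRIPTIVE_MARKERS)

def review(response: str, secondary_classifier) -> str:
    if needs_escalation(response):
        # Only flagged responses pay the cost of a second model call.
        return secondary_classifier(response)
    return "pass"
```

The point of the split is cost: the heuristic runs on every response, while the slower classifier runs only on the small fraction that match.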

Risk: Bias in AI-generated output

Mitigated by running queries through multiple independent models from different providers, then surfacing disagreements and conflicts. This does not eliminate bias but makes it visible for human review.
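A toy version of disagreement surfacing, assuming answers have already been normalised to comparable strings (real conflict analysis would compare claims semantically, not by exact match):

```python
from collections import Counter

def surface_disagreements(answers: dict[str, str]) -> dict:
    """Group answers across models and flag any split for human review.

    `answers` maps model name -> normalised answer string.
    """
    counts = Counter(answers.values())
    consensus = counts.most_common(1)[0][0] if counts else None
    dissenting = {m: a for m, a in answers.items() if a != consensus}
    return {
        "consensus": consensus,
        "dissenting": dissenting,
        "unanimous": len(counts) <= 1,
    }
```

A non-empty `dissenting` set is exactly the signal the document describes: the disagreement is not resolved automatically, it is surfaced to the reviewer.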

Monitoring and Review

This assessment is reviewed at least annually, and updated when material changes are made to the platform's AI capabilities, data processing scope, or risk profile.

Ongoing monitoring is supported by the compliance dashboard (available to superadministrators), which tracks inference volumes, cost attribution, risk detection rates, and incident reports across all organisations.

Scope and Limitations

This document supports governance processes and internal controls. It does not constitute legal advice and does not guarantee compliance with the EU AI Act or any other regulation. Deployers should conduct their own impact assessment based on their specific use of the platform.