EU AI Act · Article 43

Conformity Assessment

Provider self-assessment of Qonera's conformity with the requirements of Regulation (EU) 2024/1689.

Last reviewed: April 16, 2026

Classification Under the AI Act

Qonera is a multi-model AI research platform that assists professional teams with research, analysis, and review. It is designed as a tool for deployers: it orchestrates calls to third-party AI models (OpenAI, Anthropic, Google) and adds verification, oversight, and governance layers on top.

Qonera does not develop or train its own foundation models. Under Article 6 and Annex III, the platform is not independently classified as a high-risk AI system. However, because it is used in professional contexts where outputs may inform consequential decisions, we apply high-risk governance standards voluntarily as a matter of responsible practice.

Quality Management System

Qonera maintains internal quality controls covering:

  • Version-controlled codebase with code review requirements
  • Automated testing and type checking before deployment
  • Infrastructure hosted in the EU with defined sub-processor agreements
  • Role-based access controls with organisation and workspace scoping
  • Incident response procedures with defined severity levels

Technical Documentation

The platform uses a multi-agent architecture. User queries are processed by up to three independent large language models in parallel. Results are synthesised by a judge model that identifies conflicts, assesses evidence quality, and produces a confidence score. Source integrity auditing and document verification provide additional layers of fact-checking.
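The judge step above can be sketched as follows. This is a minimal illustration under assumed names, not Qonera's implementation: the ModelAnswer structure, the model identifiers, and the agreement-based confidence heuristic are all introduced for the example.

```python
from dataclasses import dataclass

@dataclass
class ModelAnswer:
    model: str    # provider/model identifier (illustrative)
    answer: str

def synthesise(answers: list[ModelAnswer]) -> dict:
    """Judge step: compare independent model answers, flag models that
    conflict with the majority, and derive an agreement-based confidence."""
    texts = [a.answer.strip().lower() for a in answers]
    majority = max(set(texts), key=texts.count)
    conflicts = [a.model for a in answers if a.answer.strip().lower() != majority]
    return {
        "answer": majority,
        "conflicts": conflicts,
        "confidence": round(texts.count(majority) / len(texts), 2),
    }
```

A real judge would weigh evidence quality rather than string agreement, but the shape of the verdict — synthesised answer, conflict list, confidence score — mirrors the description above.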

Technical documentation, including sub-processor details, data processing agreements, and security architecture, is published at /sub-processors, /dpa, and /security.

Risk Management (Art. 9)

Qonera operates a runtime risk management system that screens every AI response for potential harm. A heuristic layer detects PII disclosure, prescriptive medical and legal advice, fabricated citations, and security-sensitive content. When signals are detected, a secondary AI classifier produces a structured risk verdict. High-risk outputs automatically generate incident reports and notify administrators.
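The two-stage screening described above — cheap heuristics first, a classifier only for flagged responses — can be sketched like this. The patterns, risk labels, and classifier stub are illustrative assumptions, not the production detectors.

```python
import re

# Illustrative heuristic patterns; real detectors would be far broader.
HEURISTICS = {
    "pii": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # e.g. a US SSN shape
    "medical_advice": re.compile(r"\byou should take \d+\s?mg\b", re.IGNORECASE),
}

def classifier_verdict(text: str, signals: list[str]) -> str:
    """Stand-in for the secondary AI classifier: escalate PII hits."""
    return "high" if "pii" in signals else "medium"

def screen(response: str) -> dict:
    """Heuristic layer first; only flagged responses reach the classifier.
    High-risk verdicts trigger an incident report."""
    signals = [name for name, rx in HEURISTICS.items() if rx.search(response)]
    if not signals:
        return {"risk": "none", "signals": [], "incident": False}
    risk = classifier_verdict(response, signals)
    return {"risk": risk, "signals": signals, "incident": risk == "high"}
```

The design choice worth noting is cost: the regex layer runs on every response, while the (expensive) classifier runs only when a signal fires.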

Workspace-level approval policies allow organisations to gate high-risk responses for supervisor review before delivery.

Record-Keeping (Art. 12)

Every AI inference is logged with the model identifier, token counts, cost, provider, region, system prompt hash, and risk assessment result. These records are stored in a tamper-evident audit trail with hash-chain integrity verification.

Compliance administrators can review, filter, and export the complete audit trail as CSV or PDF. The audit trail is not affected by workspace retention policies, so regulatory records persist even when chat data is deleted.
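The hash-chain integrity verification described above works by linking each audit record to the hash of its predecessor, so altering any historical record breaks every subsequent link. A minimal sketch, assuming SHA-256 and JSON-serialised records (the field names are illustrative):

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first record

def append_record(trail: list, record: dict) -> None:
    """Chain each entry to its predecessor so tampering is detectable."""
    prev = trail[-1]["hash"] if trail else GENESIS
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    trail.append({"record": record, "prev": prev, "hash": digest})

def verify(trail: list) -> bool:
    """Recompute every link; False means a record or hash was altered."""
    prev = GENESIS
    for entry in trail:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

This is what makes the trail tamper-evident rather than tamper-proof: modifications are not prevented, but any modification is detectable on verification.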

Human Oversight (Art. 14)

Every AI-assisted output in Qonera passes through a structured review process before it can be shared externally. The review sequence includes source auditing, multi-model stress testing, conflict analysis, and named sign-off.

Workspaces can enforce approval policies that automatically queue outputs for supervisor review based on configurable criteria: all answers, deep-research answers only, or high-risk answers only.
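The gating logic for the three policy criteria above reduces to a small decision function. The policy and answer-type values here are assumed labels for illustration, not Qonera's actual configuration schema.

```python
def needs_approval(policy: str, answer_type: str, risk: str) -> bool:
    """Decide whether an answer is queued for supervisor review.

    policy: "all" | "deep_research" | "high_risk" (illustrative values)
    """
    if policy == "all":
        return True
    if policy == "deep_research":
        return answer_type == "deep_research"
    if policy == "high_risk":
        return risk == "high"
    return False  # no policy configured: deliver without queueing
```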

Transparency (Art. 50)

Qonera discloses AI involvement at multiple levels: a first-session AI literacy modal, persistent model identification on every assistant message, AI-generated labels on shared outputs, and transparency disclosures in the share modal and recipient view.

Users are never left uncertain about whether content was AI-assisted or which models were involved.

Data Governance

All infrastructure is hosted in the EU (Frankfurt). AI model providers operate under contractual commitments not to train on customer data. International data transfers are protected by Standard Contractual Clauses.

Workspace-level retention policies allow organisations to set data lifecycle rules. The platform supports GDPR rights including data export, erasure, and portability.

Post-Market Monitoring

Qonera monitors system performance and safety through the compliance dashboard, which tracks inference volumes, cost attribution, risk detection rates, and incident reports across all organisations. Automated risk monitoring provides continuous post-deployment surveillance.

This conformity assessment is reviewed at least annually and updated when material changes are made to the platform.

Disclaimer

This self-assessment is published for transparency purposes. It does not constitute legal advice and does not guarantee conformity with the EU AI Act. Organisations should seek independent legal counsel for their own compliance obligations.