EU AI Act · Articles 16 & 26
Provider and Deployer Responsibility Matrix
How EU AI Act obligations are shared between Qonera (provider) and your organisation (deployer).
Last reviewed: May 5, 2026
The EU AI Act assigns distinct obligations to providers (who develop or place AI systems on the market) and deployers (who use AI systems in their professional activities). This matrix maps each relevant obligation to the responsible party, clarifying what Qonera handles and what falls to your organisation.
| Article | Obligation | Provider (Qonera) | Deployer (Your Organisation) |
|---|---|---|---|
| Art. 4 | AI Literacy | Builds literacy into the daily workflow: first-session AI literacy modal, visible model disagreement via Conflict Heatmap, mandatory engagement with source quality and evidence gaps before sign-off. Teams using Qonera develop practical understanding of AI capabilities and limitations through the review process itself. | Ensures staff using AI systems understand capabilities, limitations, and risks as required since February 2025. Uses the structured review workflow as the operational literacy mechanism. Responsible for ensuring personnel have sufficient AI literacy proportionate to their role. |
| Art. 9 | Risk Management | Operates a runtime risk management system with heuristic and AI-based screening on every response. Maintains risk assessment records on all AI outputs. | Reviews flagged incidents in the admin triage workflow. Configures workspace approval policies (e.g. high-risk gating) appropriate to the use case. |
| Art. 12 | Record-Keeping | Logs every AI inference with model, tokens, cost, provider, system prompt hash, and risk verdict. Maintains hash-chain integrity on the audit trail (a minimal sketch of this mechanism appears below the matrix). Provides CSV/PDF export. | Retains exported records in accordance with internal retention policies and any regulatory requirements. Ensures audit exports are available for supervisory authorities if requested. |
| Art. 13 | Transparency to Deployers | Publishes technical documentation, sub-processor lists, data processing agreements, and this responsibility matrix. Discloses AI model identifiers on every output. | Reviews provider documentation and ensures it meets internal compliance standards. Communicates relevant transparency information to affected individuals. |
| Art. 14 | Human Oversight | Provides structured review workflows: multi-model stress testing, conflict analysis, source auditing, and named sign-off. Offers configurable approval policies (none, all, deep-research, high-risk); a gating sketch follows the matrix. | Ensures qualified staff review AI-assisted outputs before external use. Selects and enforces appropriate approval policies per workspace. Maintains oversight procedures proportionate to the risk. |
| Art. 15 | Accuracy and Robustness | Runs every query through three independent AI models in parallel via the Multi-Model Stress Test. A judge model synthesizes results and surfaces disagreements through the Conflict Heatmap. Where models diverge, the discrepancy is made visible to the reviewer rather than hidden in a single averaged output (a simplified conflict-scoring sketch follows the matrix). | Reviews Conflict Heatmap signals before approving outputs. Does not approve outputs with unresolved high-conflict indicators without investigation. Applies human judgment to model disagreements proportionate to the risk of the use case. |
| Art. 26 | Deployer Obligations | Provides the tools, documentation, and audit trails deployers need to fulfil their obligations. Does not make decisions on behalf of deployers. | Uses the AI system in accordance with the instructions for use. Monitors the system during operation. Suspends use if risks emerge that the provider has not addressed. Informs the provider of serious incidents. |
| Art. 27 | Fundamental Rights Impact Assessment | Publishes a proactive FRIA covering affected groups, identified risks, and mitigation measures. Updates annually or when material changes occur. | Conducts own FRIA based on specific use case, affected populations, and context. May use the provider’s FRIA as a starting point but must adapt it to their own deployment. |
| Art. 50 | AI-Generated Content Disclosure | Labels all AI-assisted outputs with model identifiers. Displays first-session AI literacy modal. Marks shared outputs with AI-generated labels visible to recipients. | Ensures end recipients are aware that content was AI-assisted. Does not remove or obscure AI-generated labels when forwarding or publishing outputs. |
| Art. 73 | Serious Incident Reporting | Provides manual and automated incident reporting; automated risk monitoring raises incidents for high-risk outputs. Offers an admin triage workflow with severity classification. Maintains incident records for disclosure. | Reports serious incidents to the relevant national competent authority without undue delay. Cooperates with the provider and authorities during investigations. Uses the incident reporting interface to document issues. |
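Illustrative sketches (non-normative)

Qonera does not publish the implementation behind the Art. 12 audit trail. The sketch below shows one common way hash-chain integrity over an inference log can work, assuming each entry stores the SHA-256 hash of its predecessor; every field and function name here is illustrative, not Qonera's API.

```python
import hashlib
import json

def _record_hash(record: dict, prev_hash: str) -> str:
    """Hash the canonical JSON of a record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def append_record(chain: list[dict], record: dict) -> None:
    """Append an inference record, linking it to the entry before it."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64  # genesis sentinel
    chain.append({"record": record, "prev_hash": prev_hash,
                  "hash": _record_hash(record, prev_hash)})

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every link; a tampered, deleted, or reordered entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        if entry["prev_hash"] != prev_hash:
            return False
        if entry["hash"] != _record_hash(entry["record"], prev_hash):
            return False
        prev_hash = entry["hash"]
    return True

# Log two hypothetical inferences, then confirm integrity end to end.
log: list[dict] = []
append_record(log, {"model": "model-a", "tokens": 412, "risk_verdict": "pass"})
append_record(log, {"model": "model-b", "tokens": 907, "risk_verdict": "flag"})
assert verify_chain(log)
```

Because each hash covers the one before it, an exported record set can be re-verified later: altering any historical entry invalidates every subsequent link.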
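The matrix names four Art. 14 approval policies (none, all, deep-research, high-risk) without specifying how gating is evaluated. A minimal sketch of one plausible gating rule, with hypothetical names throughout, might look like this:

```python
from enum import Enum

class ApprovalPolicy(Enum):
    NONE = "none"                    # no sign-off gate
    ALL = "all"                      # every output held for sign-off
    DEEP_RESEARCH = "deep-research"  # only deep-research outputs gated
    HIGH_RISK = "high-risk"          # only risk-flagged outputs gated

def requires_signoff(policy: ApprovalPolicy,
                     is_deep_research: bool,
                     risk_flagged: bool) -> bool:
    """Decide whether an output must wait for named human sign-off."""
    if policy is ApprovalPolicy.ALL:
        return True
    if policy is ApprovalPolicy.DEEP_RESEARCH:
        return is_deep_research
    if policy is ApprovalPolicy.HIGH_RISK:
        return risk_flagged
    return False  # ApprovalPolicy.NONE

# A workspace gating only high-risk outputs:
print(requires_signoff(ApprovalPolicy.HIGH_RISK, is_deep_research=True, risk_flagged=False))  # False
print(requires_signoff(ApprovalPolicy.HIGH_RISK, is_deep_research=False, risk_flagged=True))  # True
```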
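The Conflict Heatmap's Art. 15 scoring is likewise not documented here. As a deliberately crude stand-in for semantic comparison, the sketch below scores disagreement between model answers as Jaccard distance over word sets; a real system would compare meaning rather than vocabulary, and every name shown is hypothetical.

```python
import re
from itertools import combinations

def _tokens(text: str) -> set[str]:
    """Lowercased word set; punctuation is ignored."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def conflict_scores(answers: dict[str, str]) -> dict[tuple[str, str], float]:
    """Pairwise disagreement as Jaccard distance over word sets:
    0.0 for identical vocabulary, 1.0 for no overlap at all."""
    scores = {}
    for a, b in combinations(sorted(answers), 2):
        ta, tb = _tokens(answers[a]), _tokens(answers[b])
        scores[(a, b)] = 1.0 - len(ta & tb) / len(ta | tb)
    return scores

def needs_review(scores: dict[tuple[str, str], float], threshold: float = 0.5) -> bool:
    """Escalate to a human reviewer when any pair of models diverges strongly."""
    return any(score > threshold for score in scores.values())

# Three hypothetical model answers to the same legal query.
answers = {
    "model_a": "The limitation period is five years from the breach.",
    "model_b": "The limitation period is five years from the breach.",
    "model_c": "No limitation applies; claims may be brought at any time.",
}
print(needs_review(conflict_scores(answers)))  # True: model_c contradicts the other two
```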
Important Notes
This matrix is a simplified reference guide. The full text of the EU AI Act should be consulted for the authoritative statement of obligations. Some obligations may shift depending on the specific deployment context and risk classification.
Deployers who substantially modify the intended purpose of the AI system or integrate it into a broader high-risk system may assume additional provider-level obligations under Article 25.
This matrix is published for transparency purposes. It does not constitute legal advice. Organisations should seek independent legal counsel to determine their specific obligations under the EU AI Act.