Regulation

Three Months to August 2026: What the EU AI Act Actually Requires from Your Team

Jozef Juchniewicz, Qonera · 8 May 2026 · 4 min read
The August 2026 obligations are not primarily about the AI tools professional teams use. They are about how those teams use them, what records they keep, and how they demonstrate that humans are genuinely in control of AI-assisted output.

The August 2, 2026 enforcement date is roughly three months away. Most of the compliance chatter circulating right now focuses on whether the AI tools themselves are “compliant”: whether providers have filed technical documentation, trained models on appropriate data, or registered systems with the right authorities.

That framing is understandable, but it misses something. For agencies, consultancies, investment research teams, and professional services firms, the obligations that land in August are not primarily about the AI tools you use. They are about how your organisation uses them, what records you keep, and how you demonstrate that humans are genuinely in control of AI-assisted output.

Deployer obligations under Articles 12 and 14 are operational, not theoretical. And for most teams, the gap between current practice and what the regulation pushes toward is wider than it looks.

What actually becomes enforceable in August

The August 2 milestone activates the main body of the AI Act’s requirements, including the full enforcement powers of national market surveillance authorities. Until now, regulators could observe and prepare. From August, they can inspect and sanction.

For deployers of AI systems used in professional work, the most operationally relevant articles are 12 (record-keeping), 14 (human oversight), and 50 (transparency obligations around AI-assisted content). Strictly, Articles 12 and 14 set requirements for how high-risk systems are built; the deployer-side duties that mirror them, keeping the automatically generated logs and assigning oversight to people with the necessary competence and authority, sit in Article 26. But it is Articles 12 and 14 that define what those logs and that oversight have to look like.

Article 12 requires that AI systems automatically log events in a way that enables traceability and post-market monitoring. Logs must be tamper-resistant and retained. Article 14 requires that human oversight be more than nominal: the person reviewing AI output must have the competence and authority to understand the system’s capabilities and limitations, detect issues, and decide not to use the output.

Article 50 requires clear disclosure of AI involvement where it could materially mislead a recipient.

None of these are abstract obligations. They require specific decisions about your AI workflow.

The record-keeping gap most teams haven’t addressed

Most professional teams using AI tools today have some kind of informal log. Someone knows which AI tool was used on a given project. There is probably a version history somewhere. A few teams have started keeping notes.

But that is not what Article 12 requires. The regulation calls for automatic, tamper-resistant logs that capture sufficient information to trace what happened: which model was used, on what evidence, at what time, producing what output. The logs need to support post-incident review, and they need to be retained. “Tamper-resistant” is also a meaningful standard. Logs stored in an editable document or a system where records can be deleted without trace will not meet it.

The practical implication is that a screenshot, a copied response, or a project file with an AI-assisted section is not a compliant audit trail. What you need is a record that proves the AI was used, shows what evidence it drew on, and demonstrates that a named person reviewed and approved the output before it went out.
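To make that concrete, here is a rough sketch (in Python, purely illustrative, not a legal template or any particular product’s schema) of the kind of record described above: one entry per AI-assisted output, capturing the model, the evidence, the time, a hash of the delivered text, and the named reviewer, with each entry chained to the previous one by a hash so that a retroactive edit or deletion is detectable. The field names and helper functions are assumptions made for illustration.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEntry:
    """One AI-assisted output, recorded when a named reviewer approves it."""
    timestamp: str         # when the output was approved (ISO 8601, UTC)
    model: str             # which model produced the output
    evidence: list[str]    # source documents or references the output drew on
    output_sha256: str     # hash of the text that was actually delivered
    reviewer: str          # named person who reviewed and approved the output
    prev_hash: str         # hash of the previous entry: the chain link

def entry_hash(entry: AuditEntry) -> str:
    payload = json.dumps(asdict(entry), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def record_approval(trail: list[AuditEntry], model: str, evidence: list[str],
                    output_text: str, reviewer: str) -> AuditEntry:
    entry = AuditEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model=model,
        evidence=evidence,
        output_sha256=hashlib.sha256(output_text.encode()).hexdigest(),
        reviewer=reviewer,
        # Each entry commits to the one before it, so quietly editing or
        # deleting an earlier record breaks every hash that follows.
        prev_hash=entry_hash(trail[-1]) if trail else "0" * 64,
    )
    trail.append(entry)
    return entry

def verify(trail: list[AuditEntry]) -> bool:
    expected = "0" * 64
    for entry in trail:
        if entry.prev_hash != expected:
            return False
        expected = entry_hash(entry)
    return True
```

The hash chain is what separates this from a log kept in an editable document: an entry that is altered or removed after the fact breaks verification for everything recorded after it.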

For professional services teams delivering client work, this is not a future capability to plan for. August is three months away, and building audit infrastructure from scratch takes longer than most teams expect.

“Human oversight” means something you can demonstrate

Article 14 has generated significant discussion about what “meaningful” human oversight actually requires. The short version: it is not enough for a human to have technically seen the AI output. The regulation requires that the reviewer has the competence to understand what the AI produced, can identify when it is wrong or unreliable, and has the practical ability to intervene or reject it.

For professional teams, this plays out in a specific operational question: when your team member reviews an AI-assisted output before it goes to a client, are they actually equipped to challenge it? Do they know which models contributed? Do they know where the models disagreed? Can they trace a specific claim back to the source document it came from?

In practice, most AI tool interfaces do not expose any of this. The reviewer sees a finished answer. They may sense something is off, but without visibility into the model’s evidence and reasoning, they have little to work with. The real problem is not that teams lack willingness to review carefully. It is that most AI tools make genuine review structurally difficult. The human is nominally in the loop, but the loop is not designed to catch errors.
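One way to picture what a loop designed to catch errors would expose is the shape of the data a review screen would need: not a finished answer, but each claim tied to the passage it came from, with a note of which models asserted it and which disagreed. The sketch below is an illustrative assumption of such a claim-level record, in Python, not any particular tool’s format.

```python
from dataclasses import dataclass

@dataclass
class ReviewableClaim:
    """A single claim in an AI-assisted draft, exposed for human review."""
    text: str                      # the claim as it appears in the draft
    source_document: str           # the document the claim is drawn from
    source_excerpt: str            # the passage said to support the claim
    supporting_models: list[str]   # models that made or endorsed the claim
    dissenting_models: list[str]   # models that contradicted or omitted it

    def needs_scrutiny(self) -> bool:
        # A claim with no traceable supporting passage, or one the models
        # disagree on, is exactly the output a reviewer must be able to challenge.
        return not self.source_excerpt or bool(self.dissenting_models)
```

With records at this level of granularity, the reviewer’s questions above have concrete answers: which models contributed, where they disagreed, and which source passage a given claim traces back to.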

The question worth asking now

If a regulator or a client asked your team to demonstrate that AI-assisted work went through proper human oversight before delivery, what would you show them?

If the answer is “we would explain our process” rather than “we would show them the records,” that is the gap worth closing before August.

The EU AI Act does not require perfection. It pushes toward a standard of documentation and review that makes AI use accountable: records that exist, oversight that is structured, and a clear trail from AI input to human-approved output.

Building compliance-ready processes takes time. Three months is workable, but only if teams start with the operational question of what they actually need to document and how, rather than the vendor question of which tools have the right certification logos.

How Qonera helps

Qonera is built for teams that use AI in professional work and need more than a generated answer. It creates a structured review and approval layer around AI-assisted outputs: tamper-evident audit trails with full inference metadata, per-claim citations that make it possible to challenge AI output, and named reviewer sign-off before work is delivered. You can see how Qonera maps to EU AI Act requirements in detail.

If your team is working through what Article 12 and 14 compliance looks like in practice, the published Conformity Assessment is also worth reading alongside the regulation itself.

Qonera is designed to support stronger AI governance workflows. It does not provide legal advice and does not guarantee compliance with the EU AI Act or any other regulation. Organisations should consult qualified legal counsel for compliance guidance.

See how Qonera works in practice

Multi-model stress testing, Conflict Heatmap, tamper-evident audit trail, and structured sign-off, built for teams who need defensible AI output.