Workflow

From upload to signed delivery in eight steps

Every question goes through a structured review sequence: source auditing, multi-model stress testing, conflict analysis, and named human sign-off. Every step is logged in a tamper-evident audit trail.

01

Client / Project Profile

Before anything runs, you configure the workspace for the client or project. Set the voice and define what's off-limits: words to avoid, legally sensitive topics, brand rules, competitor references to flag. Every client or project can have its own ruleset, and Qonera applies those rules as a filter across the entire workflow.

02

Upload your files

Drag in the documents, data exports, or research that the analysis should be grounded in. Every file you upload is indexed into your Evidence Base, the verified source set that all models work from.

This is your Evidence Base

Every model in the workflow runs its analysis against this verified source set. Not the internet. Not training data. Your files. Keep it current and every answer that follows is grounded in what you actually know.

03

Data Integrity: Source Audit

Before any model runs, Qonera audits your documents. File versions are checked, timestamps are compared, and internal conflicts between documents are surfaced. Stale files are flagged. Contradictions are identified. The source set has to be clean before any analysis begins.
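To make the idea concrete, a minimal sketch of such an audit pass in Python is shown below. The field names, the staleness window, and the version field are illustrative assumptions, not Qonera's actual schema:

```python
from datetime import datetime, timedelta, timezone

def audit_sources(files: list[dict], max_age_days: int = 180) -> list[str]:
    """Flag stale files and conflicting versions before any analysis runs.

    Each file dict is assumed to carry "name", "modified" (aware datetime),
    and "version" fields -- illustrative, not a real Qonera schema.
    """
    findings = []
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    seen: dict[str, int] = {}
    for f in files:
        if f["modified"] < cutoff:
            findings.append(f"stale: {f['name']}")
        if f["name"] in seen and seen[f["name"]] != f["version"]:
            findings.append(f"version conflict: {f['name']}")
        seen[f["name"]] = f["version"]
    return findings
```

A run over two versions of the same file would surface both a staleness flag and a version conflict, which is the kind of finding a reviewer resolves before step 4.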

04

Instructions / Question

Give your instructions or pose your question. You can ask analytically, request a draft, or set a specific task. Qonera routes everything to your selected model ensemble with the client profile rules and clean source set already applied.

05

Stress Test the Logic: Parallel Model Run

Multiple AI models independently analyze the question against your documents, with no cross-contamination between them. Each model forms its own view. The logic is tested from every angle.
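The isolation property can be sketched in a few lines: each model receives an identical copy of the inputs and never sees another model's output. The callables here are stand-ins for real model clients; this is an illustration of the pattern, not Qonera's implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def run_ensemble(models: dict, question: str, evidence: str) -> dict:
    """Run every model on the same question and evidence, in parallel.

    `models` maps a name to a callable (a stand-in for a real model
    client). No shared state is passed between calls, so no model's
    answer can contaminate another's.
    """
    def run_one(name):
        return name, models[name](question, evidence)

    with ThreadPoolExecutor() as pool:
        return dict(pool.map(run_one, models))
```

Because the calls share nothing but the inputs, agreement between outputs in the next step is evidence about the question, not an artifact of one model copying another.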

06

Conflict heatmap generated

Every claim is tagged Green, Orange, Red, or Outlier based on how the models agree or diverge, with per-claim citations.
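One way to picture the tagging is as a function of how many models support a claim. The thresholds below are illustrative cut-offs, not Qonera's actual scoring:

```python
def tag_claim(votes: list[bool]) -> str:
    """Tag a claim from per-model support votes.

    Illustrative rule: unanimous support is Green, a lone supporting
    model is an Outlier, a majority is Orange, a minority is Red.
    These cut-offs are assumptions for the sketch.
    """
    supporters = sum(votes)
    if supporters == len(votes):
        return "Green"      # all models agree
    if supporters == 1:
        return "Outlier"    # a single model stands alone
    if supporters >= len(votes) / 2:
        return "Orange"     # majority agrees, some diverge
    return "Red"            # most models dispute the claim
```

With five models, four supporting votes would land a claim in Orange; one supporting vote would mark it an Outlier for the reviewer's attention.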

07

Answer generated with citations

Qonera synthesizes one reviewed answer from the model outputs. Every finding is linked back to your Evidence Base, citing the specific file and section. Every claim is traceable. No unsupported conclusions reach the reviewer.

08

Reviewer sign-off & signed output delivered

The named supervisor reviews flagged claims, then approves, annotates, or sends them back for revision. Once approved, the signed answer is delivered with the complete audit trail. Nothing leaves without a named sign-off.

Audit Trail

Every decision recorded. Every record tamper-evident.

The audit trail isn't a log file. It's a hash-chain-verified, append-only record of every AI inference, every risk-screening result, every source check, and every human sign-off.

What gets recorded

  • Model identifier and provider
  • Token counts and cost
  • System prompt hash
  • Risk screening verdict
  • Reviewer identity and timestamp

How it's protected

  • Hash chain integrity verification
  • Append only (no edits, no deletions)
  • Database-level triggers that block updates and deletes
  • SHA-256 row hashing
  • Previous-hash linking for tamper detection
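The combination of SHA-256 row hashing and previous-hash linking is the standard hash-chain construction. A minimal sketch of how such a chain works (not Qonera's actual implementation, which lives at the database level) might look like this:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first row

def row_hash(record: dict, prev_hash: str) -> str:
    """SHA-256 over the serialized record plus the previous row's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def append(chain: list, record: dict) -> None:
    """Append-only: each row stores its hash, linked to the previous one."""
    prev = chain[-1]["hash"] if chain else GENESIS
    chain.append({"record": record, "prev_hash": prev,
                  "hash": row_hash(record, prev)})

def verify(chain: list) -> bool:
    """Recompute every hash; an edited or deleted row breaks the links."""
    prev = GENESIS
    for row in chain:
        if row["prev_hash"] != prev or row["hash"] != row_hash(row["record"], prev):
            return False
        prev = row["hash"]
    return True
```

Because every row's hash covers the previous row's hash, changing any record anywhere in the chain invalidates every hash after it, which is what makes tampering detectable rather than merely forbidden.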

How you use it

  • Export as CSV for data analysis
  • Export as PDF for compliance review
  • Filter by date, organisation, or model
  • Compliance dashboard for administrators
  • Ready for regulator disclosure

EU AI Act alignment

How each workflow step maps to what the EU AI Act pushes toward

Qonera doesn't guarantee compliance. It builds the practical review controls that align with the governance principles the regulation is driving toward.

Steps 1-3

Source audit & workspace setup

Art. 9 — Risk Management

Structured source integrity checks before any AI analysis runs. Workspace-level rules catch sensitive topics and brand risks. Risk is managed before output is generated, not after.

Step 5

Parallel model run with inference logging

Art. 12 — Record Keeping

Every AI inference is logged with model identifier, token counts, cost, provider, region, and system prompt hash. These records form the foundation of the tamper-evident audit trail.

Step 6

Conflict heatmap with per-claim citations

Art. 13 — Transparency

Every claim is traceable to its source. Model agreement and disagreement are visible per finding. The reviewer sees exactly what evidence supports each conclusion.

Step 8

Named sign-off before delivery

Art. 14 — Human Oversight

A named human reviewer approves every output before it reaches a client. The sign-off is recorded with identity and timestamp. No AI output bypasses human review.

Continuous

Automated risk screening

Art. 9 — Risk Management

Every AI response passes through heuristic and AI-based risk screening for PII disclosure, fabricated citations, prescriptive advice, and security-sensitive content. High-risk detections trigger incident reports automatically.
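The heuristic side of such screening can be sketched with pattern matching. The two patterns below are illustrative only; a real screen would combine many more signals plus the AI-based classifier the paragraph mentions:

```python
import re

# Illustrative heuristic patterns -- assumptions for this sketch,
# not Qonera's actual screening rules.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def screen(text: str) -> list[str]:
    """Return the categories of potential PII found in a model response."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]
```

A non-empty result from a screen like this is the kind of detection that would feed the automatic incident reports described under Art. 73 below.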

Continuous

Incident reporting

Art. 73 — Incident Reporting

Manual and automated incident reporting with admin triage, severity classification, and full audit trail. All records available for regulator disclosure.

Qonera supports governance processes and internal controls. It does not provide legal advice and does not guarantee compliance with the EU AI Act or any other regulation.

From confidently wrong
to verifiably right

See Qonera running on your own documents.