EU AI Act Article 12 enforcement begins August 2, 2026

Compliance

Krapheno and the EU AI Act

Governance infrastructure for the accountability gap.

Our Position

Krapheno is not a high-risk AI system. It is the governance layer over AI systems — the audit infrastructure that Article 12 of the EU AI Act assumes exists but does not mandate how to build.

Where the EU AI Act requires that high-risk AI systems maintain sufficient logs for traceability, Krapheno provides the substrate: an append-only, hash-chained, cryptographically verifiable record of every AI decision — at the moment it occurs.

What Article 12 Requires

"High-risk AI systems shall be technically able to ensure traceability of the functioning of the AI system throughout its lifetime."

— EU AI Act, Article 12

Krapheno provides exactly this: every AI decision is recorded with its policy context, its verdict, the specific MIC constraints evaluated, and its SHA-256 hash. The record cannot be modified after inscription. The chain cannot be altered without invalidating every subsequent hash.
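
As a rough sketch of what that inscription can look like, the snippet below appends a decision record to a hash-chained ledger, where each record's SHA-256 hash covers the previous record's hash. The field names and serialisation scheme are illustrative assumptions, not Krapheno's actual schema.

```python
import hashlib
import json
import time

# Illustrative sketch only: the field names (policy_version, verdict,
# constraints_evaluated, payload) and the chaining scheme are assumptions,
# not Krapheno's actual schema.

def inscribe(ledger: list[dict], decision: dict) -> dict:
    """Append a decision record whose hash covers the previous record's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    record = {
        "decision_id": decision["decision_id"],
        "timestamp": time.time(),
        "policy_version": decision["policy_version"],
        "payload": decision["payload"],
        "verdict": decision["verdict"],
        "constraints_evaluated": decision["constraints_evaluated"],
        "prev_hash": prev_hash,
    }
    # Canonical serialisation so the same record always hashes to the same value.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(record)
    return record
```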

Checklist

Log every automated decision made by a high-risk AI system
Record the inputs that led to the decision
Maintain logs for a minimum of 6 months
Ensure logs are available to national supervisory authorities on request
Provide a human oversight mechanism (ability to override)

How Krapheno satisfies Article 12

Automated decision logging: SmritiTree append-only ledger
Input recording: decision payload stored with each inscription
Retention: hash-chained ledger, permanent and tamper-evident
Authority access: public trace endpoint per decision
Human override: ESCALATE verdict with approve/reject workflow

What This Means for Your AI Tools

If you are deploying AI tools that affect campaign budgets, targeting parameters, or optimisation decisions, those tools may be subject to Article 12 logging requirements. Krapheno is the logging substrate. Our governance-as-code approach makes those obligations operational before a model can act.

Every decision is recorded

At the moment it is proposed — not reconstructed after the fact. The timestamp, payload, verdict, and policy version are all part of the immutable record.

Every verdict is explainable

MIC constraints make the reason for each ALLOW, ESCALATE, or BLOCK auditable. The specific constraint that fired — and the values that triggered it — are stored in the ledger record.
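
The sketch below illustrates the idea with a hypothetical budget constraint: the verdict, the constraint that fired, and the observed and limit values are returned together so they can be written into the ledger record. The threshold-style constraint shape is an assumption for illustration, not the actual MIC constraint model.

```python
# Hypothetical constraint shape: the real MIC constraint model is not shown here,
# so this assumes simple numeric bounds with an escalate band below a hard block.

def evaluate(payload: dict, constraints: list[dict]) -> dict:
    """Return the verdict plus the specific constraint (and values) that fired."""
    for c in constraints:
        value = payload.get(c["field"])
        if value is None:
            continue
        if value > c["block_above"]:
            return {"verdict": "BLOCK", "fired": c["name"],
                    "observed": value, "limit": c["block_above"]}
        if value > c["escalate_above"]:
            return {"verdict": "ESCALATE", "fired": c["name"],
                    "observed": value, "limit": c["escalate_above"]}
    return {"verdict": "ALLOW", "fired": None}

result = evaluate(
    {"daily_budget_eur": 12_000},
    [{"name": "budget_cap", "field": "daily_budget_eur",
      "escalate_above": 10_000, "block_above": 50_000}],
)
# -> {'verdict': 'ESCALATE', 'fired': 'budget_cap', 'observed': 12000, 'limit': 10000}
```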

The chain is tamper-evident

SHA-256 hash chaining means any alteration to a historical record invalidates every record that follows. The audit trail is cryptographically verifiable, not just logged.
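
A verifier needs only the ledger itself to check this. The sketch below, which matches the illustrative record shape used earlier, recomputes each record's hash and confirms the prev_hash linkage; any edit to a historical record makes the walk fail from that point onward.

```python
import hashlib
import json

# Verification sketch matching the illustrative record shape above: recompute
# each record's hash and check that its prev_hash matches its predecessor.

def verify_chain(ledger: list[dict]) -> bool:
    prev_hash = "0" * 64
    for record in ledger:
        if record["prev_hash"] != prev_hash:
            return False  # broken linkage: an earlier record was altered or removed
        body = {k: v for k, v in record.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != record["hash"]:
            return False  # this record's contents no longer match its stored hash
        prev_hash = record["hash"]
    return True
```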

ESCALATE and Human Oversight

The EU AI Act emphasises meaningful human oversight of high-risk AI decisions. Krapheno's ESCALATE verdict routes any decision outside policy bounds to a human reviewer before execution.

The human resolution — approve or reject — is recorded in the same immutable chain. The complete decision lifecycle (proposal → escalation → human review → resolution) is traceable from a single decision_id.
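
As an illustration, the snippet below reassembles that lifecycle from a ledger by filtering on the shared decision_id. The event names and the simplified records (hash fields omitted for brevity) are assumptions made for this example only.

```python
# Illustrative only: event names and the idea that each lifecycle stage is a
# separate ledger record sharing one decision_id are assumptions about the schema.

LEDGER = [
    {"decision_id": "dec_0042", "event": "PROPOSED", "verdict": "ESCALATE"},
    {"decision_id": "dec_0042", "event": "ESCALATED", "assigned_to": "reviewer@acme.example"},
    {"decision_id": "dec_0042", "event": "RESOLVED", "resolution": "APPROVE",
     "resolved_by": "reviewer@acme.example"},
]

def lifecycle(ledger: list[dict], decision_id: str) -> list[dict]:
    """Reassemble the proposal -> escalation -> resolution trail for one decision."""
    return [r for r in ledger if r["decision_id"] == decision_id]

for event in lifecycle(LEDGER, "dec_0042"):
    print(event["event"], event)
```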

This is not a workaround for automation. It is the architectural pattern the EU AI Act's human oversight requirement calls for — implemented before execution, not audited after.

See the governance chain in action

Every decision in the demo portal is governed by a real hash chain with real MIC constraint evaluation. You can also see a live governed decision and inspect how policy, payload, verdict, and oversight resolution line up.

Open Demo
What is Governance-as-Code?