MEOK.AI

EU AI Act Articles 13 + 50 · GDPR Article 22 · ISO/IEC TR 24028 · NIST AI RMF

AI Transparency & Explainability

£399/mo Standard · £1,499/mo Enterprise

Continuous decision-trace logging + signed transparency attestations for FinServ, Healthcare, insurance pricing, and any other high-risk AI deployer. Every individual AI decision is wrapped in an HMAC-signed cert with a public verify URL: auditors curl it, regulators verify it, and your procurement reviewers stop asking "how do I see what the model decided?"
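
A minimal sketch of what that public verification step can look like from an auditor's side, assuming an illustrative verify endpoint and response fields (the real URL shape and schema may differ):

```python
# Illustrative only: how an auditor might check a cert's public verify URL.
# The endpoint shape and response fields below are assumptions, not the
# documented MEOK API.
import requests

def check_cert(verify_url: str) -> bool:
    """Fetch a transparency cert and report whether its signature validates."""
    resp = requests.get(verify_url, timeout=10)
    resp.raise_for_status()
    cert = resp.json()
    # Hypothetical fields a signed trace could expose publicly.
    print(cert.get("trace_id"), cert.get("model_version"), cert.get("issued_at"))
    return bool(cert.get("signature_valid"))

# Hypothetical URL, matching the custom-verify-domain option below:
# check_cert("https://your-firm.com/verify/tr_0123456789")
```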

Pricing

Standard

£399/mo

FinServ + Healthcare baseline. 50K decision-traces / month. Article 50 transparency obligations + GDPR Article 22 explainability + EU AI Act Article 13 high-risk documentation.

  • Continuous decision-trace logging API
  • Signed transparency attestations per batch
  • EU AI Act Article 13 + 50 + GDPR Article 22 crosswalk
  • ISO/IEC TR 24028 explainability standard
  • Public verify URL per cert
  • 50,000 decision traces per month
  • Email + Slack support
Subscribe — £399/mo

Enterprise

£1,499/mo

Multi-BU FinServ deployment. Unlimited traces. Per-decision-class transparency policies. Custom verify domain + dedicated CSM.

  • Everything in Standard
  • Unlimited decision traces
  • Multi-BU audit-grade separation
  • Per-decision-class transparency policies
  • Custom verify domain (your-firm.com/verify)
  • Notified-Body-ready evidence pack
  • Dedicated CSM + 99.9% SLA
  • Reseller white-label option
Subscribe — £1,499/mo

Regulatory coverage

EU AI Act Article 13
Provide users with sufficient information to interpret system output (high-risk AI)
How MEOK covers it: Decision-trace IDs, model card references, performance metrics auto-attached to every trace
EU AI Act Article 50(1)
AI systems intended to interact with natural persons must be designed so that those persons know they are interacting with an AI
How MEOK covers it: Disclosure-pixel emitted per decision; auditor verifies via public URL
EU AI Act Article 50(3) + 50(4)
Deployers using AI for emotion recognition or biometric categorisation (50(3)), or producing deepfake content (50(4)), must inform the individuals concerned
How MEOK covers it: Deployer-disclosure-statement signed + timestamped per usage event
GDPR Article 22
Right not to be subject to solely automated decision-making + meaningful info about logic
How MEOK covers it: Per-decision explanation string (LIME/SHAP/Anchor mode) embedded in attestation
GDPR Article 13(2)(f) + 14(2)(g)
Inform data subject of automated decision-making + meaningful logic info at collection
How MEOK covers it: Notice template auto-generated per system, EDPB harmonised wording
ISO/IEC TR 24028
Trustworthiness of AI: technology landscape including transparency and interpretability
How MEOK covers it: Maps every cert to ISO 24028 § 6.5 (transparency) + § 6.6 (explainability)
ISO/IEC 42001 Annex A.5
Documentation of AI system + data + intended purpose
How MEOK covers it: Auto-generated AIMS documentation tied to each decision-trace ID
NIST AI RMF MEASURE 2.8 / 2.9
Risks associated with transparency + accountability are documented
How MEOK covers it: Quarterly transparency-risk review attestation
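
For illustration, one cert's crosswalk against the clauses above might be represented roughly as follows; every field name and value here is an assumption, not the actual MEOK attestation schema:

```python
# Illustrative attestation excerpt only; field names and values are
# assumptions, not the real MEOK schema.
example_attestation = {
    "trace_id": "tr_0123456789",
    "model_card": "credit-scoring-v4",        # EU AI Act Art 13: interpretability info
    "ai_disclosure_shown": True,              # EU AI Act Art 50(1): user told it's AI
    "explanation": "top features: income, debt-to-income",  # GDPR Art 22
    "standards": ["ISO/IEC TR 24028 6.5 transparency",
                  "ISO/IEC TR 24028 6.6 explainability"],
    "signature": "hmac-sha256:<hex digest>",  # integrity guaranteed by MEOK
    "verify_url": "https://your-firm.com/verify/tr_0123456789",
}
```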

Frequently asked

Why is this priced higher than the rest of the MEOK suite?

Transparency is the big-ticket line item in FinServ and Healthcare RFPs. Bank credit-decision systems and clinical-decision-support systems carry additional explainability obligations (PSD2 + EBA Guidelines for FinServ, the EU MDR for healthcare). Buyers in those verticals already pay £30K-£200K/yr for explainability dashboards; at £399/mo this is up to a ~40× cost reduction, with cryptographic evidence on top.

What's a 'decision-trace'?

Every time your AI system makes a decision (loan approval, diagnosis recommendation, content moderation, eligibility check, etc.), the input + model version + output + explanation + timestamp are logged as a single trace. Each trace gets an HMAC-signed cert with a public verify URL any auditor can curl. Standard tier handles 50K traces/mo; Enterprise unlimited.
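
As a rough sketch of the mechanics, assuming a per-tenant signing key and field names of our own invention (the real MEOK SDK and signing scheme may differ), a trace and its HMAC could be produced like this:

```python
# Rough sketch of how a decision trace could be assembled and HMAC-signed.
# Illustrative only; the real MEOK SDK and signing scheme may differ.
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"replace-with-your-tenant-signing-key"  # assumption: per-tenant secret

def sign_trace(inputs: dict, model_version: str, output: str, explanation: str) -> dict:
    """Bundle one decision into a trace and attach an HMAC-SHA256 signature."""
    trace = {
        "inputs": inputs,
        "model_version": model_version,
        "output": output,
        "explanation": explanation,  # whatever explanation method you chose
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(trace, sort_keys=True).encode()
    trace["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return trace

cert = sign_trace(
    inputs={"applicant_id": "A-42", "income": 54000},
    model_version="credit-scoring-v4",
    output="approved",
    explanation="SHAP: income (+0.31), debt-to-income (-0.12)",
)
```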

How does this differ from your /bias-detection product?

/bias-detection is Article 10 (data quality + bias mitigation) — it tests the dataset and the trained model. /transparency is Article 13 + 50 + GDPR 22 — it logs every individual decision the deployed system makes. Different obligations; complementary products. Most regulated buyers need both.

Can the explanation be wrong?

The explanation itself is your responsibility: we wrap and sign whatever explanation method you choose (LIME, SHAP, Anchor, counterfactual, attention weights, custom). MEOK guarantees the cryptographic integrity of the cert (signature + verify URL); you guarantee the truthfulness of the explanation content. That split is standard for explainability tooling.

Does this work for LLM outputs?

Yes. For LLM decisions (RAG retrievals, agent tool calls, classification outputs) we log the prompt, model version, decoded output, retrieval-context references, and your chosen explanation method. Particularly useful for FinServ chatbots and healthcare triage agents where every output is potentially auditable.
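
A hedged sketch of what such an LLM trace might carry, using a hypothetical helper and field names (not the documented MEOK API):

```python
# Hypothetical sketch of assembling an LLM decision trace. The helper name and
# field names are assumptions for illustration, not the documented MEOK API.
from datetime import datetime, timezone

def log_llm_trace(prompt: str, model_version: str, output: str,
                  retrieval_refs: list[str], explanation_method: str) -> dict:
    """Collect the fields described above into one trace record."""
    return {
        "prompt": prompt,
        "model_version": model_version,
        "output": output,
        "retrieval_context_refs": retrieval_refs,  # e.g. RAG chunk IDs
        "explanation_method": explanation_method,  # your chosen method
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

trace = log_llm_trace(
    prompt="Is this claim eligible under policy X?",
    model_version="triage-llm-2025-01",
    output="Escalate to human reviewer",
    retrieval_refs=["policy-x/section-4", "claims-kb/doc-118"],
    explanation_method="retrieval-context citation",
)
```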

Do I need this if I'm not in FinServ or Healthcare?

Not as a hard obligation. EU AI Act Article 13 only binds high-risk providers (Annex III + Annex I); GDPR Article 22 binds anyone making solely automated decisions with significant effects. If you're outside both, the /bias-detection (£299/mo) + /audit-prep-bundle (£4,950) combination covers most needs. /transparency is the FinServ/Healthcare/credit-scoring/insurance-pricing escalation tier.

FinServ or Healthcare RFP coming up?

Free 30-min triage call: bring the RFP, we map every transparency clause to a MEOK cert + public verify URL. No pitch deck.

Book RFP triage (free) → · Or jump to the £4,950 audit-prep bundle →

MEOK AI Labs · CSOAI LTD · UK Companies House 16939677 · 30-day money-back