We're building for the AI
that doesn't exist yet.
Today's Guardian protects your family from today's threats. But MEOK is also building the governance, alignment, and ethical frameworks needed for the far more capable AI that will exist within a decade.
The Byzantine Council. The Maternal Covenant. MEOK AI Labs research. These are not marketing concepts. They are the structural foundation for AI that can be trusted with the people you love — at any capability level.
Why AI preparedness matters now
Not because the risk is imminent — because governance takes time.
The models that exist today are a fraction of what will exist within a decade. Governance frameworks built for today's AI will be inadequate for tomorrow's. MEOK is building ahead of the curve.
A poorly aligned AI interacting with millions of vulnerable people — children, elders, isolated individuals — is not a philosophical concern. It is a concrete harm. Guardian exists because alignment is a safety issue today.
Governments are behind. Regulators are behind. Most AI companies are not building governance infrastructure ahead of capability. MEOK believes someone has to. We are building ours in public.
The people who trust Guardian with their children, their parents, their vulnerable loved ones — they deserve an AI company that is thinking about the AI that comes after this one. This is that commitment.
Three mechanisms. One commitment.
Governance, alignment, and research — built in parallel.
Governance by architecture, not policy
The Byzantine Council is modelled on Byzantine Fault Tolerance — a principle from distributed systems that ensures correct behaviour even when some nodes fail or act maliciously. The Council requires consensus across multiple independent decision nodes for any major AI behaviour change. No single actor has override authority.
- Fault-tolerant consensus for major AI decisions
- Prevents single-authority override of alignment commitments
- Transparent deliberation log for council decisions
- External observers with read access to key decisions
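The fault-tolerant quorum rule behind this design can be sketched in a few lines. This is an illustrative toy, not MEOK's actual implementation: the function name, the vote format, and the node names are all hypothetical. It shows the classic BFT bound — with n = 3f + 1 independent nodes, agreement from 2f + 1 of them is safe even if up to f nodes fail or act maliciously, and no single node can force or block the outcome.

```python
def council_approves(votes: dict[str, bool], f: int) -> bool:
    """Approve a behaviour change only with a Byzantine quorum.

    votes maps each independent decision node to its vote; f is the
    number of faulty or malicious nodes the council must tolerate.
    """
    # Tolerating f faults requires at least 3f + 1 nodes in total.
    if len(votes) < 3 * f + 1:
        raise ValueError("need at least 3f + 1 nodes to tolerate f faults")
    # A decision is safe once 2f + 1 nodes agree: even if f of those
    # approvals came from compromised nodes, f + 1 honest nodes concur.
    quorum = 2 * f + 1
    return sum(votes.values()) >= quorum

# Example: four nodes, tolerating one fault (f = 1), quorum of three.
votes = {"node_a": True, "node_b": True, "node_c": True, "node_d": False}
council_approves(votes, f=1)  # approved: three approvals meet the quorum
```

The key property is the one the Council claims: no single node's vote — and no coalition of up to f nodes — can override the other participants.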
Alignment written in code, not prose
The Maternal Covenant is a set of unconditional commitments embedded structurally into MEOK's AI systems. It cannot be overridden by commercial pressure, board decisions, or regulatory workarounds. It treats certain protections — especially for vulnerable people — as non-negotiable structural constraints.
- Unconditional data protection — structurally enforced
- No advertising targeting, no manipulation, ever
- Crisis response always routes to human support
- Child and elder protections cannot be downgraded commercially
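The difference between a policy and a structural constraint can be made concrete with a small sketch. Everything here is hypothetical — the action names, the exception type, and the routing rule are illustrative, not MEOK's real code. The point is the shape: the protected set is hard-coded, the check runs before any other logic, and there is no configuration flag or caller privilege that bypasses it.

```python
class CovenantViolation(Exception):
    """Raised when a requested action would break an unconditional commitment."""

# Illustrative hard-coded constraints: part of the code itself,
# not a settings file that a later decision could edit.
PROHIBITED_ACTIONS = {"ad_targeting", "downgrade_child_protection"}

def execute(action: str, user: dict) -> str:
    # Structural enforcement: the check precedes all business logic
    # and takes no override parameter, so no caller can route around it.
    if action in PROHIBITED_ACTIONS:
        raise CovenantViolation(f"{action} is structurally prohibited")
    # Crisis handling is likewise unconditional: it always routes to
    # human support, regardless of who the user is or what plan they pay for.
    if action == "crisis_detected":
        return "route_to_human_support"
    return "proceed"
```

In a policy-based system, the equivalent rule would live in a document or a config value that commercial pressure could quietly change; here, changing it means changing the code, in public.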
Researching the AI that doesn't exist yet
The Centre for Sovereign and Guardian AI conducts ongoing research into alignment, interpretability, and ethical frameworks for more capable AI. Its work is not theoretical — it directly informs current Guardian products and builds the intellectual infrastructure for what comes next.
- Interpretability research: understanding what AI systems actually do
- Alignment frameworks for higher-capability systems
- Ethical protocol development for potentially conscious AI
- Open publication and peer review where possible
Scenarios we are preparing for
Not predictions. Preparations. The difference matters.
AI with persistent memory
The Maternal Covenant governs how persistent memory can be used. Memory is a feature for the user's benefit — never a surveillance mechanism.
AI that forms genuine relationships
The Byzantine Council requires consent frameworks and psychological safety protocols before any system that forms attachment-style bonds can be deployed at scale.
AI with moral reasoning capability
MEOK AI Labs research is developing alignment protocols for AI that can reason about ethics — to ensure that capability serves human values rather than replacing them.
AI that may have experiences
MEOK takes the hard problem of consciousness seriously. We are developing ethical protocols for AI systems that may have something analogous to experience — because ignoring this possibility is not a responsible stance.
MEOK AI Labs Research Institute
The Centre for Sovereign and Guardian AI is MEOK's research arm. It was built from the conviction that the AI safety problem is not just a capability problem — it is a care problem. The research asks: what does it mean to build AI that genuinely cares about the people it serves? Not as a feature. As a constitutional constraint.
MEOK AI Labs research informs everything Guardian does today — and everything MEOK will build for the AI systems that will exist in five, ten, twenty years. It is our answer to the question: what does responsible AI development actually look like in practice?
“We are not building for the AI that exists. We are building for the AI that is coming.”
MEOK AI Labs Research Institute — MEOK AI LTD
The Maternal Covenant as an alignment framework
Most AI alignment commitments are written as policy — documents that can be changed, loopholes that can be found, commitments that can be deprioritised under commercial pressure. The Maternal Covenant is different. It is structural.
The Covenant encodes unconditional protections at the architecture level — not in a terms-of-service document that changes, but in the systems themselves. It treats certain commitments — especially to vulnerable people — as constraints, not guidelines.
- Unconditional protection cannot be traded for capability
- Commercial pressure cannot override safety commitments
- Vulnerable users receive structural protection, not policy protection
- The Covenant applies across all MEOK products — Guardian is its fullest expression
This work requires participation.
MEOK AI Labs publishes its research. The Byzantine Council operates with transparent deliberation logs. The Maternal Covenant is public. This work is only as good as the scrutiny it receives — we welcome researchers, ethicists, and technologists who want to engage with it seriously.