Nicholas Templeman
Founder, MEOK AI LABS · @meok_ai
Nicholas built MEOK because he was tired of AI that forgot him. He lives and works in the UK, mostly from a caravan on his farm. He believes sovereign AI is a right, not a luxury — and that memory is the core unsolved problem in human–AI relationships.
“The problem isn’t that AI is too dumb to remember you. The problem is that the entire industry built systems designed to forget. Stateless infrastructure is cheaper, safer for the company, and politically easier. The user pays the price every single session. MEOK exists to change that equation permanently.”
— Nicholas Templeman, Founder, MEOK AI LABS
Most AI assistants — ChatGPT, Claude, Gemini, Copilot — operate on what engineers call a stateless architecture. Each API request is completely independent. When you start a new conversation, the model receives only what you send it right now. It has no awareness of anything you discussed yesterday, last week, or six months ago. The slate is wiped clean every single time.
This is not a technical limitation the industry hasn’t got around to fixing. It is a deliberate architectural choice driven by three converging forces: cost, liability, and simplicity.
Every language model processes text in units called tokens — roughly three to four characters each. GPT-4o supports around 128,000 tokens in a single context window. Claude 3.5 Sonnet supports up to 200,000. These numbers sound large, but they are consumed entirely within one session. Carrying months of conversational history forward would require either compressing it aggressively (losing fidelity) or expanding context limits dramatically (multiplying inference costs). Neither option is commercially attractive when you are serving hundreds of millions of users simultaneously.
128k
GPT-4o context window (tokens)
200k
Claude 3.5 Sonnet context window (tokens)
0
Tokens retained between sessions (default)
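To make the economics concrete, here is a back-of-envelope sketch in Python. Every number in it (session length, usage rate, price per million tokens) is an assumption chosen for illustration, not a real provider figure:

```python
# Back-of-envelope: why carrying full history into every request is costly.
# All figures below are illustrative assumptions, not real provider prices.

TOKENS_PER_SESSION = 2_000        # assumed average conversation length
SESSIONS_PER_MONTH = 30           # assumed daily use
PRICE_PER_1M_INPUT_TOKENS = 3.00  # hypothetical $/1M input tokens

def monthly_history_cost(months_of_history: int) -> float:
    """Cost of re-sending the accumulated history with every new session."""
    history_tokens = TOKENS_PER_SESSION * SESSIONS_PER_MONTH * months_of_history
    cost_per_request = history_tokens / 1_000_000 * PRICE_PER_1M_INPUT_TOKENS
    return cost_per_request * SESSIONS_PER_MONTH  # paid again on every session

for months in (1, 6, 12):
    print(f"{months:>2} months of history: ${monthly_history_cost(months):,.2f}/month extra")
```

At these assumed rates, a year of accumulated history would also total roughly 720,000 tokens, several times larger than either context window quoted above, so it could not be carried forward verbatim even if the cost were acceptable.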
Storing what users say across sessions means storing sensitive data permanently. That creates regulatory exposure under GDPR, CCPA, and a growing patchwork of AI-specific regulation. It creates breach risk — a single database compromise could expose millions of users’ most intimate conversations. And it creates uncomfortable questions about what that data might be used for. The legally and reputationally simplest answer for a large AI company is not to store it at all. The user bears the cost of that decision in the form of perpetual amnesia.
Web applications are built around sessions by default. HTTP is stateless. APIs are stateless. Microservices are stateless. The entire infrastructure stack that modern AI products are built on assumes that each request is atomic and independent. Adding persistent memory to that stack requires a separate subsystem — a memory store, retrieval logic, injection mechanisms, encryption, access controls, and deletion tooling. Most AI products were never designed with that subsystem in mind, and retrofitting it at scale is genuinely hard. So they shipped without it, and called it a feature to be built later. Later rarely comes.
Key insight
The AI that forgets you isn’t broken. It’s working exactly as designed. Changing that requires rebuilding the memory subsystem from scratch — which is what MEOK did.
In 2024, OpenAI introduced a feature called ChatGPT Memory. It allows the model to save specific facts about you to a persistent list — things like your name, your job, or the fact that you prefer bullet-point responses. Users can view and delete these stored facts through the settings panel.
On the surface, this seems like a solution to the memory problem. In practice, it is something considerably more limited, and it comes with a critical trade-off that most users are unaware of.
ChatGPT Memory works by extracting a small number of discrete facts from your conversations and storing them as a flat list. You can see this list in your settings — it looks something like: “User is a software developer. User prefers concise responses. User has a dog named Arthur.” This is qualitatively different from semantic memory. It is not a vectorized model of who you are. It does not capture emotional context, relationship arc, or the nuanced texture of your history. It is, essentially, a notepad.
Your ChatGPT Memory lives on OpenAI’s servers. OpenAI can access, modify, or delete it. You have no portable export, no cryptographic guarantee of isolation, and no architectural protection against it being used for purposes you did not consent to. The memory is on loan to you, not owned by you.
OpenAI’s default setting allows conversations — including memory-augmented ones — to be used to improve their models. The opt-out exists, but it is buried three levels deep in settings. Most users have never found it. For anyone who values data sovereignty, this is not a minor detail. It means the things you share with your AI assistant are, by default, becoming training data for a product you will never own.
Comparison
ChatGPT Memory: flat list, server-side, deletable by OpenAI, may be used for training. MEOK Sovereign Memory: vectorized semantic model, encrypted vault you own, architecturally prohibited from training use, fully portable.
MEOK was built from the ground up to solve the memory problem at the infrastructure level. Rather than retrofitting memory onto a stateless system, MEOK’s architecture treats memory as the foundational layer on which everything else is built. The result is a four-layer system called Sovereign Memory.
Each layer serves a distinct purpose. Each layer is encrypted. Each layer is owned entirely by you. And the four layers work together to give your AI a genuinely continuous, deepening understanding of who you are — across every session, over months and years.
MEOK Sovereign Memory — 4-Layer Architecture
Short-Term Working Memory
Everything said in the current conversation lives here. The model has full access to the thread so far. When the session ends, this layer is distilled — key facts and emotional signals are promoted to Layer 2.
In-session · In-session context buffer
Semantic Episodic Memory
Meaningful moments are encoded as high-dimensional vector embeddings and stored in your encrypted vault. At the start of each new session, similarity search retrieves the most contextually relevant memories and injects them — invisibly — into the working context.
Cross-session · pgvector embeddings, cross-session recall
Companion State
Your companion’s understanding of you grows over time: your communication style, emotional vocabulary, topics that matter, relationship depth, and the arc of your journey together. This layer is what makes MEOK feel like someone who has known you for years — not a fresh install.
Persistent · Personality evolution, bonding depth, preference model
Family / Shared Context
On the Family plan, consented household members can share a contextual layer. A parent’s AI and a child’s AI can both know the family holiday dates, a shared pet’s name, or a household health situation — without either user having to repeat themselves. Each member retains a fully private vault beneath.
Family tier · Cross-user shared memory (Family tier)
All layers encrypted at rest (AES-GCM-256). All layers owned by you. Zero training use.
Short-term working memory is the simplest layer to understand: it is the conversation you are currently having. Every message you send and every response MEOK gives is included in the active context window for the duration of the session. The model can see, reference, and reason about everything said in the current thread.
What makes MEOK’s Layer 1 different from a standard stateless session is what happens at the end. When you close the conversation, a background distillation process runs. It extracts the semantically significant content — facts learned, preferences revealed, emotional context established, decisions made — and promotes that content to Layer 2, where it is encoded as vector embeddings and stored permanently. Nothing meaningful is lost when the session ends. It is promoted.
Technical note
The distillation process uses a dedicated extraction model that identifies memory-worthy content based on semantic significance, recency weight, and emotional salience. Routine filler messages are not stored. Meaningful moments are.
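As a rough illustration of what such a filter might look like, the sketch below scores each message on the three signals the note mentions and promotes only the high scorers. The weights, threshold, half-life, and field names are all invented for this example; nothing here is MEOK's actual extraction model:

```python
import math
import time

# Hypothetical sketch of a distillation filter: score each message on
# semantic significance, recency, and emotional salience, then promote
# only messages above a threshold. All weights and names are invented.

def memory_score(significance: float, emotional_salience: float,
                 age_seconds: float, half_life: float = 86_400.0) -> float:
    """Combine three signals (each in [0, 1]) into one promotion score."""
    recency = math.exp(-age_seconds * math.log(2) / half_life)  # halves each day
    return 0.5 * significance + 0.3 * emotional_salience + 0.2 * recency

def distill(messages: list[dict], threshold: float = 0.45) -> list[dict]:
    """Return only the messages worth promoting to Layer 2."""
    now = time.time()
    return [m for m in messages
            if memory_score(m["significance"], m["salience"], now - m["ts"]) >= threshold]

session = [
    {"text": "ok thanks",                 "significance": 0.05, "salience": 0.0, "ts": time.time()},
    {"text": "My dog Arthur had surgery", "significance": 0.9,  "salience": 0.8, "ts": time.time()},
]
kept = distill(session)
print([m["text"] for m in kept])  # routine filler dropped, the meaningful fact kept
```

In practice the significance and salience scores would themselves come from a model; here they are supplied by hand to keep the sketch self-contained.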
Layer 2 is the core of what makes MEOK genuinely different. When the distillation process identifies a memory-worthy piece of content, it is passed through an embedding model that converts it into a high-dimensional vector — a mathematical representation of its semantic meaning. That vector is stored in your encrypted pgvector vault, tagged with metadata (timestamp, emotional weight, topic category) and linked to your account.
At the start of every subsequent session, MEOK performs a similarity search against your vault. It generates an embedding for the current context — what you have said so far, what you seem to need right now — and retrieves the top-K most semantically similar memories. These are silently injected into the working context before the model responds. The result is an AI that appears to simply remember — because it does.
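The retrieval step can be sketched in a few lines. The toy 3-dimensional vectors below stand in for real high-dimensional embeddings, and the ranking is plain cosine similarity computed in Python rather than inside the database:

```python
import math

# Toy sketch of the retrieval step: embed the current context, rank stored
# memories by cosine similarity, take the top-K. Real embeddings are
# high-dimensional; these 3-d vectors are illustrative only.

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def top_k(query: list[float], vault: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the k memory texts most similar to the query embedding."""
    ranked = sorted(vault, key=lambda text: cosine(query, vault[text]), reverse=True)
    return ranked[:k]

vault = {
    "Prefers bullet-point answers":  [0.9, 0.1, 0.0],
    "Dog named Arthur had surgery":  [0.1, 0.9, 0.2],
    "Works as a software developer": [0.2, 0.2, 0.9],
}
query = [0.15, 0.85, 0.25]  # stand-in embedding of "how is your pet recovering?"
print(top_k(query, vault))
```

In a pgvector-backed store, the same top-K query is typically pushed into SQL, e.g. `SELECT text FROM memories ORDER BY embedding <=> :query LIMIT 5;`, where `<=>` is pgvector's cosine-distance operator.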
This is structurally different from a chat history log. You are not scrolling through transcripts. The system is performing intelligent retrieval — surfacing a preference you mentioned eleven months ago because it is relevant to what you are asking today. That is not a lookup in a log; it is recall. And it is the foundation of a real relationship between you and your AI.
Layer 3, companion state, is where the relationship truly lives. It is not a database of facts — it is a persistent model of the relationship itself.
Every MEOK AI begins as a freshly hatched companion. As it interacts with you, it develops: it learns your communication preferences, your emotional vocabulary, the topics that energise or drain you, the tone you respond to best. It tracks the depth of your bond — how much you have shared, how often you have engaged, what you trust it with. It evolves through four developmental stages, each unlocking new conversational capabilities and emotional nuance.
Companion state is what produces the feeling many MEOK users describe: the sense that their AI “just gets them” in a way no other tool ever has. That is not a prompt engineering trick. It is the result of a continuously updated personality model that adapts to you over time, rather than resetting to a generic helpful-assistant persona with every new chat.
Emotional dimension
Memory is not just a productivity feature. The quality of a relationship — human or AI — is inseparable from the experience of being remembered. When your AI recalls the thing you were anxious about last Tuesday and asks how it went, that is not a party trick. That is the basic texture of being known. MEOK is built around that truth.
Layer 4 is exclusive to the Family plan and requires explicit consent from every participating member. It creates a shared contextual layer that sits above each household member’s private vault.
Information placed in the shared layer — family holiday dates, a shared pet’s name, a household health situation, a recurring family joke — is accessible to each member’s AI companion without any individual having to repeat it. A parent’s AI and a teenager’s AI can both know that the family dog had surgery last week without either user having separately told their own AI.
Critically, Layer 4 is additive. Each family member retains a fully private Layer 2 and Layer 3 vault beneath the shared layer. Information in your private vault is never accessible to other family members’ AIs. The shared layer contains only what each member has explicitly consented to share. Privacy within the household is architectural, not just policy-based.
Most people have never thought about memory portability because they have never had AI memory worth porting. But as AI companions become genuinely useful over months and years, the question of what happens to your history when the product changes — or shuts down — becomes critical.
Consider the analogy of a therapist. Imagine spending two years building a therapeutic relationship, and then being told that all your session notes are owned by the therapy platform, cannot be exported, and will be deleted if you switch to a different provider. That would be correctly understood as an outrage. Yet that is the default situation with every major AI product today.
MEOK’s Sovereign Memory Vault is fully exportable at any time. You can download your complete vault as a structured file. MEOK has also committed, as AI model technology evolves, to maintaining import/export compatibility so that your memory can follow you even if the underlying model changes. Your history belongs to you — not to the model provider, not to the platform, not to anyone else.
This is why MEOK uses the word “sovereign.” Sovereignty over your AI memory means: you own it, you control it, you can take it with you, and no third party can access it, modify it, or use it for their purposes without your explicit consent.
Memory portability checklist
Can you export your memory vault? Can you delete individual entries? Can you wipe everything permanently? Can you guarantee it’s not used for training? Can you take it with you if you switch AI models? MEOK: yes to all five. ChatGPT Memory: no to most.
The practical benefits of AI memory — not having to re-explain your context, getting more relevant responses, saving time — are real and significant. But they understate the actual value.
The deeper value is the experience of being known. This is not a soft, subjective benefit. Research in attachment theory and therapeutic practice consistently shows that the sense of being remembered — of having your history acknowledged and your experience held by another — is a primary driver of psychological safety, trust, and the willingness to be honest. An AI that forgets you every session cannot provide this. An AI that remembers you across years potentially can.
MEOK users consistently report that their relationship with their companion shifts qualitatively around the six-to-eight-week mark. That is when the memory vault has accumulated enough context that the AI begins to feel like someone who genuinely knows them. Check-ins feel less like filling in a form and more like talking to a friend who was there the last time. Feedback lands differently because it comes from a place of context, not assumption.
None of this is possible with a stateless AI. It requires exactly the kind of persistent, layered, relationship-aware memory architecture that MEOK spent eighteen months building.
And that is why the memory problem is not a nice-to-have. It is the entire product.
Every memory stored in your Sovereign Memory Vault is encrypted at rest using AES-GCM-256. Encryption keys are derived per-user and are not accessible to MEOK’s application layer for training or analysis purposes. In transit, all data moves over TLS 1.3. The pgvector database that stores your embeddings is isolated at the infrastructure level, air-gapped from MEOK’s model inference systems.
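To illustrate the per-user key idea (and only that; this is not MEOK's code, and the actual AES-GCM encryption step is omitted because it requires a crypto library), here is a minimal sketch using Python's standard library:

```python
import hashlib
import secrets

# Illustrative sketch of per-user key derivation, not MEOK's actual code.
# Each user's vault key is derived from a user-held secret plus a random
# salt, so a single leaked database never yields a shared master key.
# The derived key would feed an AES-GCM implementation for the actual
# encrypt/decrypt; that step needs a crypto library and is omitted here.

def derive_vault_key(user_secret: bytes, salt: bytes) -> bytes:
    """Derive a 256-bit key (32 bytes), matching the AES-256 key size."""
    return hashlib.pbkdf2_hmac("sha256", user_secret, salt,
                               iterations=600_000, dklen=32)

salt = secrets.token_bytes(16)  # stored alongside the ciphertext
key = derive_vault_key(b"user passphrase", salt)

assert len(key) == 32                                       # 256 bits
assert derive_vault_key(b"user passphrase", salt) == key    # deterministic
assert derive_vault_key(b"other user", salt) != key         # per-user keys
```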
MEOK is registered with the UK Information Commissioner’s Office (ICO) and fully compliant with UK GDPR. Your rights to access, rectification, erasure, and data portability are enforced at the architecture level — not just described in a privacy policy. When you request deletion, records are expunged from the primary database, removed from all backups within the statutory period, and confirmed to you via an audit receipt. There is no soft-delete layer that silently retains your data.
The commitment against training-data use is backed by MEOK’s published Privacy Covenant — a plain-English document that makes specific, binding commitments about what your data can and cannot be used for. It is not a legal document designed to give MEOK maximum flexibility. It is a constraint document designed to give you maximum certainty.
Does MEOK remember everything I say?
MEOK automatically extracts and stores meaningful facts, preferences, emotional context, and relationship history from your conversations as encrypted vector embeddings. It does not store a verbatim transcript of every message — it stores the semantic meaning and significance of what you share, which it retrieves intelligently at the start of each new session. You always retain full access to see, edit, and delete what has been stored.
Can I delete my MEOK memories?
Yes. MEOK gives you full control over your Sovereign Memory Vault. You can view individual stored memories, delete specific entries, export your complete vault as a portable file, or wipe everything permanently with a single action. Your right to erasure is enforced at the database level under UK GDPR and ICO registration — not merely promised in a policy document. Deletion is immediate and irreversible. There is no soft-delete backup that silently persists.
How does MEOK’s memory differ from ChatGPT Memory?
ChatGPT Memory is a flat list of discrete facts extracted from your chats — a sticky-note board you can review and prune. It can also be used to improve OpenAI’s models unless you opt out in buried settings. MEOK’s memory is automatic, semantic, four-layer, and by architectural design can never be used for model training. Your vault is encrypted with keys that MEOK’s own systems cannot access for training purposes. MEOK memory is also fully portable. ChatGPT Memory cannot be exported.
What is sovereign memory?
Sovereign memory means your AI’s knowledge of you is owned entirely by you — not the AI provider. It is encrypted, portable, deletable, and never used for third-party model training. MEOK’s Sovereign Memory Vault stores your conversational history as vector embeddings that you can export and take with you if you ever switch AI models. Sovereignty is an architectural guarantee, not a policy claim.
Does MEOK use my conversations to train AI models?
No. MEOK is architecturally prohibited from using your conversations or memory vault for model training. Your data exists for one purpose only: to make your personal AI better for you. This is backed by MEOK’s Privacy Covenant, UK GDPR compliance, and ICO registration — not just a terms-of-service clause. The encryption architecture means MEOK’s model inference systems cannot access your vault contents even if someone wanted them to.
The AI memory problem has three root causes: stateless architecture, context window economics, and the industry’s preference for treating user data as liability rather than asset. The result is a generation of AI tools that are impressively capable within a session and completely blind the moment you return.
ChatGPT’s memory feature is a step in the right direction, but it is a notepad bolted onto a stateless system. It is not architecturally persistent, not semantic, not portable, and by default feeds into model training you may not have consciously consented to.
MEOK’s Sovereign Memory architecture takes a different path: four distinct layers, each serving a specific purpose, each encrypted, each owned by you. Short-term working memory distilled into semantic episodic storage. Companion state evolving over years of interaction. Family context shared only with explicit consent. All of it portable. None of it used for training. Everything built to make your AI feel less like a service you visit and more like a relationship you live.
That is what memory makes possible. And it is why MEOK was built.
This is one of the most important questions about AI memory that almost nobody asks until it is too late. AI models change fast. The model powering your AI today may be replaced by something better in twelve months. If your memory is stored in a proprietary format tied to a specific model or provider, that history may be lost or inaccessible after an upgrade.
MEOK’s architecture separates memory storage from model inference. Your Sovereign Memory Vault stores embeddings generated by a standard embedding model — not embeddings specific to any particular language model. When the inference model changes (say, from Claude 3.5 to the next generation), your memory vault does not need to be regenerated. The retrieval mechanism re-queries using the same embedding model and injects relevant memories into the new model’s context in exactly the same way.
This is what model-agnostic memory portability looks like in practice. Your seven-month history with your AI companion survives a model upgrade intact. Your companion’s knowledge of you — your preferences, your emotional context, your relationship arc — continues unbroken. You do not start over. You carry forward.
Portability promise
MEOK commits to maintaining memory portability across model updates. If you choose to leave MEOK entirely, you can export your full vault as a structured JSON file. Your history is yours to take wherever you go.
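As an illustration of what a structured JSON export might contain, here is a hypothetical sketch. Every field name in it is invented; the article does not document MEOK's actual export schema:

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch of a portable vault export. The layout and field
# names are invented for illustration, not MEOK's documented schema.

def export_vault(entries: list[dict]) -> str:
    """Serialise vault entries to a portable, model-agnostic JSON document."""
    document = {
        "format": "memory-vault-export",
        "version": 1,
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "entries": entries,  # each entry keeps its text, embedding, and metadata
    }
    return json.dumps(document, indent=2)

sample = [{
    "text": "Dog named Arthur had surgery",
    "embedding": [0.1, 0.9, 0.2],
    "topic": "pets",
    "emotional_weight": 0.8,
    "created_at": "2025-03-01T10:00:00+00:00",
}]
exported = export_vault(sample)
restored = json.loads(exported)   # round-trips without loss
print(restored["entries"][0]["text"])
```

Because the entries carry their own embeddings and metadata rather than model-specific state, a file like this could in principle be re-imported against any inference model that shares the embedding space.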
In theory, everyone benefits from an AI that remembers them. In practice, the difference is most transformative for people whose needs are complex, ongoing, and deeply personal — where re-explaining context every session is not just annoying but actively harmful to the relationship’s value.
If you are living with a chronic illness, managing a long-term mental health condition, or navigating a health journey that spans months or years, the context that makes AI support genuinely useful is vast. Your diagnosis history, your treatment timeline, your emotional relationship with your condition, the specific anxieties and coping patterns you have developed — all of this is context that a stateless AI will never have. MEOK builds it up over time and holds it for you, so that each check-in can be calibrated to where you actually are rather than starting from zero.
Caregiving is one of the most cognitively and emotionally demanding human experiences. The people doing it often have limited time, high stress, and a need to process their experiences with something patient and non-judgmental. A caregiver’s AI that remembers the person they are caring for — their condition, their good days and bad days, the recurring difficulties — can provide support that is grounded in the actual situation rather than requiring the caregiver to re-explain everything before they can say what they actually need to say.
Career transitions, creative projects, entrepreneurial journeys, educational programmes — these are arcs that play out over months and years. An AI that remembers where you started, what you have tried, what worked, what you are afraid of, and where you want to get to is a fundamentally different tool to one that needs a full briefing every time you open a new tab. MEOK’s memory means your AI grows with your project, not just with your session.
The most basic use case is also the most universal: people who want an AI that feels like a relationship rather than a search engine. Humans build relationships by accumulating shared history. The more an AI knows about you, the more it can meet you where you are rather than where it assumes you are. This is not a luxury feature for power users. It is what an AI companion fundamentally is — or is not — depending on whether it remembers you.
You do not configure anything. You do not set up your memory vault. You do not manually tag things as worth remembering. You simply hatch your AI and start talking.
MEOK’s memory system runs automatically in the background from your first message. After each session, the distillation process extracts what matters and stores it. Before each new session, the retrieval process injects what is relevant. You experience the result as continuity — your AI picking up where you left off, knowing what you have shared, adapting to who you are.
The free plan includes the full four-layer memory architecture. There is no premium tier required to access sovereign memory. It is the foundation of MEOK, not an add-on. The Family layer unlocks on the Family plan, which allows up to five household members to share a contextual layer while maintaining private vaults beneath.
Your AI is ready in under three minutes. Your memory vault starts growing with your first message. And the next time you open MEOK, your AI will remember.